If you've been anywhere near tech Twitter or LinkedIn this past week, you've seen it. A post by Matt Shumer, CEO of HyperWrite AI, went massively viral and sent half the internet into an existential spiral.
The post is called "Something Big Is Happening," and depending on who you ask, it's either the most important thing you'll read this year or complete AI hype. Either way, people can't stop talking about it.
So let's break it down: the post, the criticisms, and how to make sense of where we actually are today.
What Matt Shumer Said
Shumer’s core argument is that AI has already crossed a major threshold inside tech, and that shockwave is about to hit nearly all white-collar work far faster than most people realize.
He points to the latest model releases (GPT-5.3 Codex, Claude Opus 4.6) and says these are wildly meaningful updates. The time spent going back and forth with AI and meticulously crafting prompts has dropped dramatically, because the models are just getting so much better (we’ve seen this in our work, too).
Shumer walks through examples where AI is already competitive with junior professionals across law, finance, writing, software, medicine, and customer service. His blunt claim: if your job mostly happens on a computer (reading, writing, analyzing, deciding, communicating) AI is coming for a large share of it.
Importantly, he notes that this year may be the most crucial of many people's careers: those who show AI-enabled productivity will become disproportionately valuable while everyone else watches from the sidelines.
The Case Against the Hype
We can't take every viral post at face value. Gary Marcus, a cognitive scientist, NYU professor emeritus, and one of the most credible AI skeptics in the public conversation, had a lot to say (unsurprisingly). He's not anti-AI (he founded an AI company himself that was ultimately acquired by Uber), but he calls Shumer's post "weaponized hype."
Marcus's biggest issue is reliability. Shumer claims the latest AI can write whole complex apps without making errors, but provides no hard evidence for that. He references a well-known AI benchmark to show capability is doubling rapidly, but leaves out that the benchmark measures 50% correctness on coding tasks specifically, not 100% accuracy across general work.
Marcus also argues that Shumer omits points that don't fit the narrative: a study showing coders who thought AI made them more productive had actually lost productivity, and pro-AI users reporting that the tools are sometimes brilliant and other times maddening.
Marcus concedes the models have genuinely improved. But that, he argues, is what makes them more dangerous: because outputs look right more often, people relax their skepticism and over-trust results that still fail in unpredictable ways.
These are all fair points.
Marc Andreessen: Tasks vs. Jobs
Meanwhile, Marc Andreessen has been making the rounds talking about AI and work, and his take on jobs is the most grounded thing we've heard in this whole conversation.
Jobs don't disappear all at once, he argues. Tasks do. The title on your door (lawyer, designer, project manager) stays the same, but the daily work underneath it changes. You lose some tasks to AI, and you pick up new, higher-leverage ones. Historically, every single wave of automation has increased demand for good people, not decreased it. He expects the same here: humans who can use AI become far more valuable, because they're effectively commanding more capability per head.
This extends to how companies are built too. The default question is shifting from "what org chart do I need to hire?" to "what do I automate, and where are humans irreplaceable?"
Our Take: Embrace, Don't Avoid
We believe the truth is somewhere between Shumer's alarm and Marcus's skepticism, and closer to the framework Andreessen is laying out. Here’s what we think:
It never hurts to be early. It always hurts to not know what's happening.
The jump in capability over the last six months is real.
AI isn’t hype, but it's also not magic. The people who understand both of those things are the ones who will get the most out of the tools.
So what should you do?
Start experimenting
Pay for a top-tier model (the free versions don't represent what's actually possible)
Push it beyond search by giving AI real tasks, not just questions
And most importantly, invest in what makes you hard to replace: relationships, judgment, taste, domain expertise, the ability to connect dots across disciplines. Those are the things that compound with AI, and what make you more valuable.
Stay curious,
Julia & Russell