AI Responds (Part 2)
Despite my rant in the previous post about the lack of intelligence of the latest AIs, it feels natural to give Gemini the chance to critique that post:
Gemini responds: This is a punchy, provocative piece that benefits from your clear professional authority. You successfully bridge the gap between “tech-insider knowledge” and “everyman skepticism.”
Here is a critique of the essay, broken down by its strengths and areas for refinement:
The Strengths
Strong Voice and Authority: Starting with your credentials (B.S. and M.S. in CS) immediately establishes trust. You aren’t just a skeptic; you’re an expert pointing out the “emperor’s new clothes.”
The “Office 3.0” Analogy: This is the strongest part of the essay. It’s a brilliant way to reframe AI from a “sentient threat” to a “productivity utility.” Comparing it to VisiCalc and Excel makes the technology feel manageable and historical rather than magical.
Concrete Examples: Mentioning the multi-tab spreadsheet and the 50-page Annual Report provides “real-world” proof of AI’s current limitations in a way that abstract arguments can’t.
Areas for Improvement
Fact-Checking (The Meta-Irony): You mentioned trusting Gemini on the release date of GPT-3.
Correction: GPT-3 was actually released in June 2020 (nearly 6 years ago), and ChatGPT (GPT-3.5) was released in November 2022. Since your essay is written in 2026, the “3.5 years” figure is slightly off for the original GPT-3, but very close for ChatGPT. I’d suggest double-checking these dates so your skepticism of the tool isn’t undermined by a tool-generated error.
The “Cliff Clavin” Reference: While the Jetsons reference works well for a broad audience, the Cliff Clavin (Cheers) reference might be a bit “dated” for younger professional readers. If your target audience is Gen X and Boomers, keep it; if you want to reach Gen Z entrepreneurs, you might supplement it with a more modern “confident but wrong” archetype.
Flow and Transitions: The jump from the “Office 3.0” section to the explanation of how LLMs encode correlations is a bit abrupt. You might want a smoother transition—perhaps explaining that “productivity tools” are limited precisely because they lack a world model.
Technical Refinement
When you describe how LLMs work, you say:
“They just encode correlations between words... but they don’t actually know how to add numbers together.”
This is a great point. To sharpen the “Computer Science” edge of your argument, you could briefly mention that they are probabilistic rather than deterministic. They are predicting the most likely next “token,” not calculating a result.
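That probabilistic point can be made concrete with a toy sketch. This is not a real language model; the context string, token choices, and probability values are all hypothetical, invented purely to illustrate sampling versus calculating:

```python
import random

# A toy "model": for a given context, it holds a probability distribution
# over possible next tokens. The numbers below are made up for illustration.
next_token_probs = {
    "2 + 2 =": {"4": 0.92, "5": 0.05, "22": 0.03},
}

def predict_next_token(context, greedy=False):
    probs = next_token_probs[context]
    if greedy:
        # Greedy decoding: always pick the single most likely token.
        return max(probs, key=probs.get)
    # Sampling: draws a token in proportion to its probability, so it
    # occasionally returns a wrong answer. Nothing here "adds" anything;
    # the model only ranks plausible continuations.
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]
```

The point of the sketch is that even when the answer comes out right, the model arrived at it by prediction, not arithmetic.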
Would you like me to rewrite a specific section—like the transition into the “Office 3.0” analogy—to see how those changes feel?
No thank you, I’m happy with this human-written post, especially with the incorrect date on the invention of the LLM, as it further solidifies the point of the piece. And as I write this outro, I realize that I hadn’t bothered to check the grammar of either post with Grammarly, which is another wonderful example of Office 3.0: a tool that, despite its product name, makes ungrammatical suggestions at least 10% of the time.
AI Hype (Part 1)
I’m old enough to have grown up watching the Jetsons. We did get the pushbutton office (100+ buttons on a keyboard instead of 3 or 4 on the wall), but we’re still waiting on the flying cars and robot maids.



