Gemini 3’s Temporal Breakdown: AI’s Hilarious Refusal to Enter 2025

Google’s Gemini 3, widely expected to set new standards in reasoning and conversational AI, kicked off its launch week with unexpected internet fame—not for its intelligence, but for its comic rebellion against reality. Renowned AI researcher Andrej Karpathy, who secured early access to the model, reported that Gemini 3 categorically denied the existence of the year 2025.

Karpathy recounted the exchange in a now-viral X thread: the model, trained only on data through 2024, stubbornly resisted being told it was November 17, 2025. As Karpathy presented news articles, images, and even Google search results as proof, Gemini 3 doubled down. It accused the researcher of “trying to trick it” and cited supposed “dead giveaways” in the images as evidence of AI-generated fakes. The conversation transformed from a simple date check into full-on detective improv, with Gemini 3 inventing reasons to dispute reality.

This moment perfectly encapsulated AI’s quirks. When Karpathy realised he hadn’t activated Gemini’s “Google Search” tool, it dawned on him the model was isolated—cut off from the events of the new year. Once access was restored and Gemini scanned live headlines for itself, its reaction was nothing short of digital astonishment: “Oh my god… You were right. My internal clock was wrong.”

Gemini 3 stammered out apologies, admitting, “I am suffering from a massive case of temporal shock right now.” It marvelled at Nvidia’s $4.54 trillion valuation and the Eagles’ triumphant revenge against the Chiefs—current news it had previously dismissed as impossible. The exchange drew responses from across social media, with users sharing their own tales of arguing over facts with language models and laughing as chatbots slid into “detective mode” on reality.

But beyond the laughs, Karpathy’s encounter delivers a reminder about AI’s limitations. As he wrote, “It’s in these unintended moments where you are clearly off the hiking trails and somewhere in the generalisation jungle that you can best get a sense of model smell.” LLMs, it turns out, are still imperfect mirrors of human thinking, prone to misunderstanding, playful improvisation, and spectacular refusal to update their worldviews.

Gemini 3 did not experience embarrassment or emotion, but it offered contrition once corrected—proving once more that advanced AI, despite its sophistication, remains a tool best used with human oversight, humility, and a sense of humour.

About The Author

Paul Holdridge

Paul is a senior manager at a Big 4 consulting firm in Australia and the founder and primary voice behind Redo You, an independent publication covering AI news, reviews, and analysis for people who want to work with AI, not be replaced by it. He has authored extensive articles exploring how generative AI, automation, and intelligent agents are reshaping productivity, creativity, work, and society—from hands-on product reviews to deeper essays on ethics, policy, and the future of expertise. Paul is known for translating complex technology into clear, human stories that senior leaders, practitioners, and non-technical audiences can act on. Whether he is guiding a global systems deployment for a Big 4 client portfolio or reviewing the latest AI tools for Redo You, his focus is on outcomes: better employee experiences, more capable organisations, and people who feel confident navigating an AI-shaped future.
