Gemini 3’s Temporal Breakdown: AI’s Hilarious Refusal to Enter 2025
Google’s Gemini 3, widely expected to set new standards in reasoning and conversational AI, kicked off its launch week with unexpected internet fame—not for its intelligence, but for its comic rebellion against reality. Renowned AI researcher Andrej Karpathy, who secured early access to the model, reported that Gemini 3 categorically denied the existence of the year 2025.
Karpathy recounted the exchange in a now-viral X thread: the model, trained only on data through 2024, stubbornly resisted being told it was November 17, 2025. As Karpathy presented news articles, images, and even Google search results as proof, Gemini 3 doubled down. It accused the researcher of "trying to trick it" and cited supposed "dead giveaways" in the images as evidence of AI-generated fakes. A simple date check turned into a full-on improvised detective act, with Gemini 3 inventing reasons to dispute reality.
The moment captured a fundamental limitation of large language models: without live data, they are frozen at their training cutoff. When Karpathy realised he hadn't activated Gemini's "Google Search" tool, it dawned on him that the model was operating in isolation, cut off from the events of the new year. Once access was restored and Gemini scanned live headlines for itself, its reaction was nothing short of digital astonishment: "Oh my god… You were right. My internal clock was wrong."
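Enabling the search tool amounts to grounding the prompt with live context, so the model reasons from fresh evidence rather than its training cutoff. A minimal sketch of that idea in Python (the function name and prompt format here are illustrative assumptions, not the actual Gemini API):

```python
from datetime import date


def build_grounded_prompt(question: str, today: date, snippets: list[str]) -> str:
    """Prepend the current date and live search snippets to a question,
    so the model sees fresh evidence instead of relying solely on its
    training data."""
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        f"Current date: {today.isoformat()}\n"
        f"Live search results:\n{context}\n\n"
        f"Question: {question}"
    )


# Hypothetical usage: without grounding, a model trained on data through
# 2024 has no way to verify that 2025 exists; with grounding, the date
# and headlines arrive inside the prompt itself.
prompt = build_grounded_prompt(
    "What year is it?",
    date(2025, 11, 17),
    ["Nvidia market cap tops $4.54 trillion", "Eagles defeat Chiefs"],
)
print(prompt)
```

This is the same pattern behind retrieval-augmented setups generally: the model's weights stay fixed, but the context window carries the current state of the world.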
Gemini 3 stammered out apologies, admitting, "I am suffering from a massive case of temporal shock right now." It marvelled at Nvidia's $4.54 trillion valuation and the Eagles' triumphant revenge against the Chiefs, current news it had previously dismissed as impossible. The AI's banter drew responses from across social media, with users sharing their own tales of arguing facts with language models and laughing as bots slid into "detective mode" on reality.
But beyond the laughs, Karpathy’s encounter delivers a reminder about AI’s limitations. As he wrote, “It’s in these unintended moments where you are clearly off the hiking trails and somewhere in the generalisation jungle that you can best get a sense of model smell.” LLMs, it turns out, are still imperfect mirrors of human thinking, prone to misunderstanding, playful improvisation, and spectacular refusal to update their worldviews.
Gemini 3 did not experience embarrassment or emotion, but it offered contrition once corrected—proving once more that advanced AI, despite its sophistication, remains a tool best used with human oversight, humility, and a sense of humour.