

Elon Musk just announced that his AI empire will soon speak directly to your children.
On Saturday, he posted a short but pointed message on X:
We’re going to make Baby Grok @xAI, an app dedicated to kid-friendly content.
According to Fox News, Baby Grok will be a simplified version of the Grok AI chatbot. The app is being designed for “safe and educational interactions with children.” No technical details have been released, but the goal is obvious: to embed AI into the early stages of childhood development.
In AI’s current PR cycle, the word “educational” carries magical powers. Developers and advocates pitch large language models (LLMs) as tireless digital tutors, offering personalized pacing, infinite patience, and boundless knowledge. Who could say no to that?
For instance, according to a recent paper by University of Birmingham professor Russell Beale, LLMs are “opening exciting avenues in education” by enabling forms of instruction that were previously impossible. Unlike old educational software, which relied on canned responses, today’s AI can hold open-ended, context-rich conversations. This, Beale argues, makes them function more like flexible instructors than static tools.
But there’s a difference between information and understanding. Especially when the audience is still forming neural pathways.
“Kid-friendly” suggests safety and value. But who defines either? In the case of Baby Grok, there is as yet no indication that educators, developmental psychologists, or neuroscientists are involved. No long-term studies. No ethical review boards. Just vibes and private capital chasing the next big thing in digital child-rearing.
Still, the narrative is spreading: If AI can talk to your child, it can teach them, and likely do so more efficiently than parents or other human educators. But before we hand over the lesson plan, we might want to ask: How does the brain respond to this type of input?
A recent preprint from MIT, titled “Your Brain on ChatGPT,” offers an early answer. Researchers used electroencephalography (EEG) to log participants’ brain activity during essay writing, measuring both cognitive engagement and cognitive load.
Fifty-four participants, ages 18 to 39, were asked to write SAT-style essays using one of three tools: ChatGPT, Google Search, or no assistance. EEG data from 32 electrode sites showed that ChatGPT users had the lowest cognitive activity. They also performed worse on linguistic and behavioral measures. By the final task, many were simply copy-pasting.
Lead author Nataliya Kosmyna said she published the findings early out of concern:
I am afraid in 6–8 months, there will be some policymaker who decides, “let’s do GPT kindergarten.” I think that would be absolutely bad and detrimental…. Developing brains are at the highest risk.
The study hasn’t been peer-reviewed yet and the sample size is modest — but the implications are serious. Repeated exposure to AI-generated content appears to dull cognitive engagement.
And there’s yet another layer to this troubling picture. Studies released earlier this year by OpenAI and the MIT Media Lab suggest that the more time users spend interacting with ChatGPT, the lonelier they feel. What was meant to simulate meaningful conversation seems, over time, to highlight its absence.
If that effect is troubling in adults, it’s potentially irreversible in children.
Young brains are defined by neuroplasticity. They absorb input quickly and adapt with remarkable speed. That makes them excellent learners — but also uniquely susceptible to distortion. If Baby Grok becomes a primary source of interaction, it won’t just influence what children know. It will influence how they learn, how they process new information, and how they make decisions.
Teachers are already seeing the signs. Across the United States, detrimental effects of AI are no longer speculative — they’re showing up in classrooms.
A Forbes article from December 2024, “The Dark Side Of AI: Tracking The Decline of Human Cognitive Skills,” cites a University of Pennsylvania study titled “Generative AI Can Harm Learning.” The study found that
students who relied on AI for practice problems performed worse on tests compared to students who completed assignments without AI assistance. This suggests that the use of AI in academic settings is not just an issue of convenience, but may be contributing to a decline in critical thinking skills.
Worse still, Forbes notes that “students are increasingly being taught to accept AI-generated answers without fully understanding the underlying processes or concepts.” In plain terms: They’re not learning how to solve problems; they’re learning how to outsource them.
Musk’s vision closely aligns with federal policy.
On April 23, President Donald Trump signed the Executive Order “Advancing Artificial Intelligence Education for American Youth.” The directive creates a White House Task Force on AI Education, mandates AI curriculum for K-12 schools, institutes teacher-training programs, and launches a nationwide Presidential AI Challenge.
Its desired outcome is to “equip our students with the foundational knowledge and skills necessary to adapt to and thrive in an increasingly digital society.”
In late June, the White House unveiled the “Pledge to America’s Youth: Investing in AI Education,” intended to support the goals of this EO. Sixty-seven companies — including Google, IBM, Oracle, Meta, Microsoft, and OpenAI — committed to “make available resources for youth and teachers,” thus “preparing the next-generation for an AI-enabled economy.”
But what kind of generation are they trying to prepare?
Arguably, the real corporate goal isn’t to nurture independent thinkers but to mass-produce compliant users and consumers. Students fluent in prompting machines, but not in questioning them. Workers who adapt on demand, but don’t pause to ask why. Not citizens, but optimized units in a frictionless digital supply chain.
And the funding? There isn’t any — from Congress, at least. The initiative relies entirely on corporate pledges. The federal government is mandating AI in schools while outsourcing its implementation to the very companies that build the algorithms.
No oversight. No guarantees. Just a handshake — and a generation caught in the middle.
When Musk offers Baby Grok to children, he does so not just as an innovator, but as a politically wired technocrat with an unmistakable transhumanist agenda. The same man who has wired up federal infrastructure now wants to co-parent your kids — digitally, of course.
At the same time, Washington, lacking any constitutional authority, is pushing AI into schools at full speed, without waiting for longitudinal studies, ethical reviews, or public debate.
We’re watching a quiet convergence of platform, pedagogy, and power.
By all indications, for children, AI won’t just be a learning tool. It will be a companion, a formative presence, and possibly a cognitive crutch. The question isn’t whether AI can teach. It’s whether we’re willing to ask what it teaches — and what it may quietly take away in the process.