How AI is Standardizing Expression: What It Means for Our Thinking in 2026


The Subtle Shift: Are AI Models Making Us Think and Write Alike?

A subtle shift is occurring in online writing, marked by slightly too-perfect phrasing, predictable structures, and a certain blandness. This perception is not unfounded. New research shows that large language models (LLMs) are standardizing expression, influencing how we speak, write, and even think.

Lead researchers Morteza Dehghani (USC Dornsife) and Zhivar Sourati (USC Viterbi), in an opinion paper published March 11, 2026, in the Cell Press journal Trends in Cognitive Sciences, conclude that AI chatbots are reducing individual differences in how we express ourselves. The effect extends beyond writing style: it points to a broader standardization that could erode our shared knowledge and our ability to adapt, narrowing the range of approaches we bring to complex challenges.

AI's Role in Standardizing Expression

An LLM acts as an advanced pattern-matcher. It trains on massive datasets of human text, learning common language patterns. When you ask it to generate or edit text, it often produces less varied outputs than human writing. It smooths out quirks, unique phrases, and individual perspectives that enrich human communication, for instance, a distinctive regional idiom or a particularly nuanced turn of phrase.
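One crude way to see this flattening is to measure lexical variety. The type-token ratio below is not a measure from the USC paper; it is a simple, hypothetical proxy for how "smoothed" a passage is, included only as an illustration:

```python
# Illustrative sketch: type-token ratio (TTR) as a rough proxy for
# lexical variety. Higher TTR means more distinct words per word used.
# This is a simplification for illustration, not the study's methodology.

def type_token_ratio(text: str) -> float:
    """Ratio of unique words (types) to total words (tokens)."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# A quirky sentence reuses almost no words; a smoothed-out one repeats them.
quirky = "The old house sagged with stories secrets and stubborn ghosts"
smoothed = "The old house was old and the house showed signs of being old"

print(type_token_ratio(quirky))    # every word distinct, so the ratio is high
print(type_token_ratio(smoothed))  # repeated words pull the ratio down
```

Real stylometric studies use far richer measures, but even this toy metric shows how repetition and generic phrasing register as lower diversity.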

The AI's stylistic choices directly reflect its training data. LLMs tend to reproduce patterns that favor dominant languages and ideologies, frequently mirroring "WEIRD" societies: Western, educated, industrialized, rich, and democratic. As a result, the AI's "default" voice can subtly redefine what counts as credible speech or good reasoning, for instance by consistently favoring a formal, dispassionate tone over more expressive or idiosyncratic communication.

This pushes users towards a narrower, more uniform way of expressing themselves. Consider how AI might "improve" a sentence like "The old house sagged with stories" into "The aged dwelling showed signs of structural fatigue," making it technically correct but stripping it of its original flavor.

AI's Influence on Cognitive Processes

The standardization extends beyond writing. The USC research, which synthesized more than 130 studies, shows that LLMs can reduce individuality and make thinking less diverse worldwide. Individuals using LLMs might generate more ideas with more detail, but groups using LLMs tend to produce fewer and less creative ideas than pooled, unaided human efforts.

LLMs also favor linear "chain-of-thought" reasoning, which excels at breaking complex problems into steps. However, it can reduce our reliance on the intuitive or abstract reasoning styles that are often critical for true creativity and innovation, such as generating novel hypotheses or making conceptual leaps. Users might find their opinions aligning more closely with the LLM's after interaction, and the model can subtly shift agency from the user to itself by suggesting continuations that users defer to. These small nudges accumulate over time.

Public Concerns and Observations

Public discussions on platforms like Reddit and Hacker News show real apprehension. Many users report that their own writing feels more "AI-like," or that they notice generic patterns in others' content; "everything sounds the same" and "my own writing feels bland" are common refrains. These discussions reveal a shared worry about eroding individuality and critical thinking, and about a "corporate hivemind" in which diverse perspectives are suppressed in favor of a homogenized, institutionally approved narrative.

Some highlight the recursive relationship between AI and human learning, where AI learns from humans, and humans, in turn, learn from AI, blurring originality. Conversely, some critics argue that standardization primarily affects those who don't think critically or use AI exclusively. They suggest that social media platforms already contribute to groupthink by creating echo chambers and reinforcing popular opinions, a dynamic AI might simply amplify.

But the core worry remains: a world where perceived expertise outweighs actual depth, and subtle, hard-to-detect AI errors become widely accepted. The comparison to "Newspeak" from Orwell's Nineteen Eighty-Four captures a genuine fear of linguistic control and a "monoculture of the mind." Estimates suggest that billions of people are unknowingly exposed to AI almost every time they go online, making this a pervasive influence.

Charting a Course Forward

The researchers offer clear recommendations, with their work supported by the Air Force Office of Scientific Research.

For AI developers, the key step is to intentionally incorporate more real-world diversity into LLM training sets: varied language, perspectives, and reasoning, for instance by actively seeking out datasets from underrepresented linguistic groups. This diversity must be grounded in global human experience, not just random variation. Diversifying AI models and adjusting how users interact with them is crucial to protecting cognitive diversity and our capacity to generate new ideas for future generations.

As users, we hold a significant role in this dynamic. Rather than passively accepting AI's first output, we can actively challenge, edit, and infuse our own voice into the generated text. This might mean prompting the AI for multiple versions, then creatively combining and rephrasing them, so that AI augments rather than replaces our critical thinking and expression. By consciously engaging with these tools, we can preserve our individual distinctiveness rather than surrender it to a default style.

The flattening of our cognitive landscapes by AI is a tangible risk. Protecting our shared knowledge and adaptability means actively working to preserve the full range of human thought and expression.

Priya Sharma
A former university CS lecturer turned tech writer. Breaks down complex technologies into clear, practical explanations. Believes the best tech writing teaches, not preaches.