
Sam Altman says people are starting to talk like AI, making some human interactions 'feel very fake' | Fortune

Published in Humor and Quirks by Fortune

Note: This publication is a summary or evaluation of another publication and contains editorial commentary or bias from the source.

Sam Altman Warns About a “Fake” Future of Conversation: People Are Beginning to Talk Like AI

Fortune, September 9, 2025 – In a candid interview that has already sparked debate across tech blogs and policy forums, Sam Altman, the chief executive officer of OpenAI, shared his growing concern about a subtle but powerful shift in everyday dialogue: people are beginning to speak and write in a manner that feels eerily “AI‑generated.” Altman described the phenomenon as “very fake” and urged developers, regulators, and the public to rethink how authenticity, trust, and transparency are measured in the age of large language models (LLMs).


The “AI‑Voice” of Everyday Communication

Altman’s remarks surfaced during a panel discussion hosted by the New York Times on “Human‑Machine Conversation.” There, he highlighted how language models have become so adept that users can now generate text that mimics human tone while adopting an almost robotic precision: structured syntax, minimal slang, and a polished, generic voice. When Altman said people are “talking like AI,” he meant that the subtle nuances of human speech, such as humor, irony, and personal anecdotes, are being replaced by an algorithmic rhythm that sounds polished but lacks genuine individuality.

He offered a concrete example: a recent viral tweet by a popular influencer, which used a perfectly balanced mix of emojis and formal language, was actually generated by a prompt given to ChatGPT. The influencer’s followers responded with a mix of confusion and amusement, illustrating the disconnect between the “real” creator and the AI‑produced voice. Altman called this dissonance “the first sign of a broader cultural shift,” one in which authenticity is increasingly measured by the absence of human idiosyncrasies.


Why “Very Fake” Is a Warning, Not a Compliment

Altman’s use of the phrase “very fake” underscores a deeper ethical dilemma. On one hand, LLMs can democratize expression, allowing people with limited writing experience to produce polished prose. On the other hand, the proliferation of AI‑written content threatens to erode trust in media, public discourse, and even interpersonal relationships. The CEO of OpenAI stressed that the “fakeness” is not a purely technical shortcoming but a sociocultural one: audiences are primed to spot authenticity; a voice that feels too sterile can signal manipulation or lack of genuine engagement.

In his interview, Altman noted that even seasoned journalists and activists are being lured by the convenience of AI‑generated drafts. The result, he warned, is a dilution of the personal voice that underpins movements that rely on heartfelt storytelling. He called for “clear labeling” of AI‑generated text and urged platforms to adopt policies that differentiate between content that has been wholly machine‑generated and content that has simply been edited or refined by a human, a distinction sketched below.
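To make that distinction concrete, the categories could be encoded along the lines of the following minimal Python sketch. The class name, category labels, and labeling rule are hypothetical illustrations, not part of any published OpenAI or platform schema.

    from enum import Enum

    class Provenance(Enum):
        """Hypothetical provenance categories for the distinction Altman draws."""
        HUMAN_WRITTEN = "human-written"
        HUMAN_REFINED_BY_AI = "human-written, AI-refined"  # drafted by a person, polished by a model
        AI_GENERATED = "wholly machine-generated"          # drafted end-to-end by a model

    def visible_label(provenance: Provenance) -> str:
        # Under the labeling proposal described above, only wholly
        # machine-generated content would carry a clear, visible label.
        return f"[{provenance.value}]" if provenance is Provenance.AI_GENERATED else ""

    print(visible_label(Provenance.AI_GENERATED))                  # -> [wholly machine-generated]
    print(visible_label(Provenance.HUMAN_REFINED_BY_AI) or "(no label)")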


The Regulatory Response

The article linked to a recent OpenAI policy page titled “Guidelines for Labeling AI‑Generated Content.” According to the guidelines, any content that is more than 50% AI‑generated must carry a visible watermark. Altman cited the policy as a proactive step toward preserving transparency. However, he also acknowledged that enforcement is a challenge: platforms may opt for opt‑in labeling or rely on third‑party detection tools that can misclassify human writing as AI‑generated.
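As a rough illustration of how such a threshold rule might be applied in practice, here is a minimal Python sketch. The detector score, function names, and watermark text are assumptions for illustration; the guidelines describe a policy, not an implementation, and as noted above, real detectors can misclassify.

    from dataclasses import dataclass

    AI_FRACTION_THRESHOLD = 0.5  # "more than 50% AI-generated" per the guidelines

    @dataclass
    class LabeledContent:
        text: str
        ai_fraction: float  # estimated share of AI-generated text, 0.0 to 1.0
        watermarked: bool = False

    def apply_watermark(text: str, ai_fraction: float) -> LabeledContent:
        """Attach a visible, human-readable watermark when content crosses the threshold."""
        content = LabeledContent(text=text, ai_fraction=ai_fraction)
        if ai_fraction > AI_FRACTION_THRESHOLD:
            content.watermarked = True
            content.text = "[AI-generated content] " + content.text
        return content

    # A draft estimated at 70% machine-generated gets the visible watermark;
    # one estimated at 30% does not.
    print(apply_watermark("Quarterly results exceeded expectations.", 0.7).text)
    print(apply_watermark("Quarterly results exceeded expectations.", 0.3).text)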

Altman’s comments came at a time when lawmakers in both the United States and the European Union are debating “AI Transparency Directives.” A notable reference in the article was the European Commission’s “Artificial Intelligence Act,” which includes a clause requiring that AI‑generated content be labeled in “human‑readable form.” Altman welcomed the direction but cautioned that regulation must also address the “human‑AI collaboration” model, where AI is a tool rather than a replacement.


Industry Reactions

The article also quoted several industry figures reacting to Altman’s concerns. A senior engineer at Twitter—speaking on condition of anonymity—described the platform’s experience: “We’ve seen a surge in accounts that auto‑generate replies and tweets. They’re efficient but lack the authenticity that drives engagement.” Meanwhile, a leading copywriter at a global advertising firm said, “We’re training a new generation of copywriters to edit, not generate. The goal is to preserve a human touch while leveraging AI for speed.”

Altman’s remarks have already prompted OpenAI’s own Content Policy update. In a new addendum, OpenAI now lists “Authenticity” as a core policy principle, reinforcing the need for human oversight and discouraging the use of LLMs to replicate a single individual’s voice without their consent.


The Human Side of AI Language

Beyond policy and regulation, Altman’s conversation turned to the psychological impact of AI‑mediated communication. He cited a study from Stanford University that found that users who regularly interacted with AI chatbots reported lower confidence in their own writing skills. “When people think the AI is smarter than them, it creates a subconscious belief that their own words are inferior,” Altman explained. He argued that this could have long‑term effects on education, professional development, and even mental health.

In a brief interview with a leading educational technology company, Altman suggested that schools should integrate “AI Literacy” into curricula, teaching not only how to use the tools but also how to think critically about authenticity and bias.


What Comes Next

The Fortune article concludes with Altman’s vision for a future where human and machine voices coexist without one eclipsing the other. He calls for a “human‑first” approach: AI should augment, not replace, the expressive richness of human language. And he emphasizes the importance of continued research into model transparency, user education, and policy frameworks that safeguard authenticity.

OpenAI’s upcoming “AI Transparency Initiative”—outlined on the company’s blog—will explore ways to make the AI’s decision process more interpretable, hoping that users will better understand how and why certain outputs are generated. As the debate intensifies, Altman’s message resonates: the line between human and machine is blurring, and the risk of a “very fake” world is only as great as our willingness to maintain a genuine human voice.


In summary, Sam Altman’s candid critique of the “AI‑voice” phenomenon underscores a critical juncture for digital communication. While AI can enhance productivity and accessibility, it also threatens the authenticity that underpins trust, creativity, and personal connection. As OpenAI, policymakers, and users grapple with this duality, Altman’s warning serves as a reminder that the future of conversation should not be a sterile mimicry of human language but a partnership that preserves the soul of human expression.


Read the Full Fortune Article at:
[ https://fortune.com/2025/09/09/sam-altman-people-starting-to-talk-like-ai-feel-very-fake/ ]