AI's Sloppy Writing and How To Spot It


Good morning,

Did you know you can easily spot the writing of a large language model from a mile away? It’s quite surprising that most people can’t, and so they look foolish to those who can when they blatantly use ChatGPT in company emails.

I’m all for using AI for everything, or at least as a baseline, but I’m against being lazy. At the very least, proofread, edit, and personalize an LLM’s output.

I get uncomfortable when I see people posting long responses on LinkedIn that were clearly generated in under two minutes. I find the inequality of effort disgusting: someone lazily posts AI slop, saves their own time, and goes on to waste the time of countless others.

AI text showing up everywhere is not a good thing. It would be nice if the AI wrote like Hunter S. Thompson or Charles Bukowski. Instead it writes like a 20-year-old web journalist for a gossip magazine.

Anyway - here's how to recognize it:

Tell 1

The first tell is the “Frankensentence” that goes like this: “It’s not X, it’s Y.” It’s not just coffee, it’s a ritual. It’s not just a product launch, it’s a movement. It’s not just networking, it’s community. You catch my drift. A human rarely writes like that. Well, one might, but in AI-generated text it’s always there if you pay attention.

Tell 2

Tell two, and something that makes some people pull their hair out: “em dashes” (ironically written without a hyphen in between). Machines ADORE em dashes. An em dash (—) is longer than a hyphen (-) and an en dash (–). Most people don’t even know where to find it on a keyboard, yet AI texts are littered with them. If you feel like you’re being yanked across a page by a chain of dramatic pauses—every clause grandstanding—congratulations, you’ve wandered into the showroom. The previous sentence contained two of ’em.

Tell 3

Another tell is a stagey narrator voice that announces, mid-text, “here’s where things get weird” or “here’s the interesting part.” Ugh!

Tell 4

AI writing loves similes. A simile is a figure of speech that compares two unlike things using the words “like” or “as.”
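If you want to automate the sniff test, the four tells above are easy to approximate with a few regular expressions. This is just a toy sketch of the idea (the pattern names and thresholds are my own invention, not a validated detector):

```python
import re

# Rough heuristics for the four tells. These patterns are illustrative
# assumptions only; real AI text won't always trip them, and humans sometimes will.
TELLS = {
    "frankensentence": re.compile(r"\bit'?s not (?:just )?\w[\w ]*, it'?s\b", re.IGNORECASE),
    "em_dash": re.compile("\u2014"),  # the em dash character itself
    "stagey_narrator": re.compile(r"here'?s where things get \w+", re.IGNORECASE),
    "simile": re.compile(r"\blike a\b|\bas \w+ as\b", re.IGNORECASE),
}

def spot_tells(text: str) -> dict:
    """Count how often each tell shows up in the text."""
    return {name: len(pattern.findall(text)) for name, pattern in TELLS.items()}

sample = "It's not just coffee, it's a ritual\u2014and here's where things get weird."
print(spot_tells(sample))
```

A human writer will usually score near zero across the board; a wall of AI slop tends to light up several counters at once.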

In the next section I’ll post an AI text on this topic; see if you can spot all the things I told you to look out for.

I can assure you - from now on you’ll see it everywhere.

Robot text:

You can usually spot AI text because it leans on the same tired tricks—em dashes everywhere, syrupy compliments that feel like they were bottled in bulk, and similes that land like rice thrown at a wedding. It’s not just enthusiasm, it’s over-enthusiasm, the kind that tells you “wonderful question, brilliant insight, you’ve captured the essence!” without ever noticing the fingerprint of the human asking. When the praise feels interchangeable—when you could swap your name with anyone else’s and it still works—you’re not being seen, you’re being processed for engagement metrics. The cadence is too smooth, too symmetrical, too eager to clarify itself three times over, as if you asked why the sky is blue and someone launched into a history of human fascination with color before giving you the answer.

And here’s where things get weird—the vibe check. AI text reads like a handshake from a mannequin: polite, firm, and utterly dead behind the eyes. You’ll notice the “it’s not just X, it’s Y” scaffolding marching through posts like a drumline, triplets stacked like ammo crates, and analogies that feel like they were lifted from a high-school debate binder (“spotting AI is like being a lighthouse in fog”). Humans ramble, contradict, meander, drop tangents about the time they ate jellied eels by the Thames. Machines circle their points like interns rehearsing in a mirror. That’s the giveaway. The difference isn’t grammar—it’s guts. If the words pass every technical test but fail the vibe check, trust your skin: you’re reading slop and that is not ok.

BTW, now it's me again: how ironic that the slop machine is telling us we shouldn't let slop set the standard.

See you next week!

P.S. If this Brief gave you something worth thinking about, hit forward and send it to just one friend. That’s how this community grows.

Training cutting-edge AI? Unlock the data advantage today.

If you’re building or fine-tuning generative AI models, this guide is your shortcut to smarter AI model training. Learn how Shutterstock’s multimodal datasets—grounded in measurable user behavior—can help you reduce legal risk, boost creative diversity, and improve model reliability.

Inside, you’ll uncover why scraped data and aesthetic proxies often fall short—and how to use clustering methods and semantic evaluation to refine your dataset and your outputs. Designed for AI leaders, product teams, and ML engineers, this guide walks through how to identify refinement-worthy data, align with generative preferences, and validate progress with confidence.

Whether you're optimizing alignment, output quality, or time-to-value, this playbook gives you a data advantage. Download the guide and train your models with data built for performance.

AI News

  • Albania has become the first country to appoint an AI system, named “Diella,” as a cabinet-level official, assigning it control over all government procurement contracts. The AI will evaluate tenders and already interacts with citizens through voice-based digital services. While Prime Minister Rama claims it will fight corruption, the lack of clear human oversight raises concerns about security and abuse.

  • A new company called Inception Point AI is mass-producing podcasts using AI, generating over 3,000 episodes weekly across 5,000 shows for just $1 per episode. Topics are chosen through search trends, and the shows use multiple AI models to script and produce content — often turning a profit with as few as 20 listeners. While efficient, critics say it floods platforms with low-quality content, sparking debate about authenticity in audio media.

  • AI system Gauss has solved the Strong Prime Number Theorem in just three weeks — a complex challenge that stumped top human mathematicians for over a year. Created by Math Inc., Gauss generated over 25,000 lines of proof code to complete the task and is being scaled up to tackle even harder problems. It’s a major step forward in AI-driven reasoning and could lead to breakthroughs in science and theoretical research.

  • The most important skill in an AI-driven future will be learning how to learn, says DeepMind CEO Demis Hassabis. Speaking in Athens, he stressed that as AI transforms every industry, people must constantly adapt by developing meta-skills—like self-teaching and flexible thinking—to stay competitive. With AGI possibly a decade away, lifelong learning is no longer optional.

  • Chinese researchers have unveiled SpikingBrain 1.0, an AI model that mimics human neurons and runs on domestic MetaX chips—offering over 100x speed boosts without Nvidia hardware. The system selectively fires neurons like a brain, using far less data than traditional models and maintaining stable performance for weeks. It signals China’s growing independence and innovation in AI infrastructure.

  • Harvard scientists introduced PDGrapher, a free AI model that rapidly identifies powerful gene-drug combinations to treat complex diseases. The system maps how genes and proteins interact, finding multi-target therapies that outperform older tools by 35% and work 25 times faster. It’s now being used to pursue breakthroughs in treating brain disorders like Parkinson’s and Alzheimer’s.

  • New data from OpenAI and Anthropic reveals how AI usage is evolving, with personal use now outpacing work applications on ChatGPT. While Claude is used more for coding, ChatGPT users increasingly seek advice and information in casual, non-work settings—especially in lower-income countries where adoption is growing fast. Across both platforms, users are shifting from content creation to using AI for research and decision-making.

  • OpenAI launched GPT-5 Codex, a specialized model for software development that adapts compute power based on task complexity. It solves simple bugs in seconds and can run for hours on complex projects, significantly outperforming earlier models in real-world coding tests. With new IDE tools and cloud/local handoffs, Codex is positioned as a strong rival to Claude Code in the booming agentic coding space.

  • Image platform Reve has launched a powerful free editor that merges AI generation, drag-and-drop design, and natural language control. The system lets users precisely tweak individual image elements or apply broad changes by typing commands, all within a seamless interface. It’s the latest in a wave of advanced image editing tools, showing how quickly this space is evolving beyond just creating pictures.

  • Google launched the Agent Payments Protocol, a secure system that lets AI agents make purchases for users, backed by major companies like Mastercard and PayPal. The framework uses digital contracts to ensure user consent before any transaction and supports traditional payments and crypto. It’s a key step toward enabling trusted AI shopping tools at scale.

  • OpenAI is rolling out new protections for teen users of ChatGPT, including automatic age detection and parental oversight features. Teen accounts will block harmful or explicit content and alert parents or authorities in mental health emergencies. The move aims to balance safety and privacy amid growing concerns over AI's role in youth mental health.

  • YouTube just released over 30 new AI tools to help creators produce and edit content faster, including auto-dubbing, smart clipping, and Veo 3 video generation. Creators can now turn long videos into Shorts automatically, translate and lip-sync content in 20 languages, and get help with editing and channel insights through AI chat tools. These updates make content production easier and more global for millions of users.

  • Meta unveiled three new smart glasses, including Ray-Bans that let users control features with subtle muscle signals before movement even occurs. The Neural Band tech reads intention through electrical signals, enabling silent commands, while the Gen 2 glasses boost battery life and camera quality. An Oakley version targets athletes with performance tracking and water resistance.

  • OpenAI’s GPT-5 just solved all 12 problems at the world’s top collegiate coding contest, outperforming every human competitor — with Google’s AI also earning gold. The flawless performance marks another milestone in AI’s dominance in competitive programming, joining recent wins in math, coding, and logic tournaments. These results show AI models are already reaching superhuman levels in many technical domains.

  • European scientists introduced Delphi-2M, an AI tool that predicts over 1,000 health conditions up to 20 years ahead using patient medical records. It accurately forecasts risks by analyzing visits, hospital data, and habits, performing as well or better than single-disease models. This could shift healthcare from reactive treatment to long-term, personalized prevention.

Milk Road Crypto: Helping everyday people get smarter about crypto investing. Learn what drives crypto markets and how to capitalize on this emerging industry.

Quickfire News

  • Alibaba launched Qwen3-Next, a hyper-efficient 80B hybrid model that surpasses Qwen3 in performance while significantly cutting training costs.

  • OpenAI and Microsoft signed a non-binding memorandum of understanding (MoU) formalizing their ongoing partnership, with final contract terms still being negotiated.

  • Perplexity is reportedly raising $200 million in new funding, bringing its valuation to $20 billion.

  • Anthropic rolled out memory features for Claude's Teams and Enterprise users, allowing it to recall past chats and project discussions.

  • The U.S. Federal Trade Commission opened an investigation into OpenAI, Google, Meta, Snap, and xAI over how their chatbots affect children and teens.

  • Encyclopedia Britannica and Merriam-Webster filed a lawsuit against Perplexity, accusing it of copying their content and redirecting user traffic through AI-generated summaries.

  • Penske Media Corp., the parent of Rolling Stone, filed a lawsuit against Google, claiming AI Overviews misuse its content and damage website traffic.

  • Apple AI executive Robby Walker has reportedly left the company; he previously led efforts on Siri and an AI-based web search project.

  • OpenAI’s new deal with Microsoft is expected to reduce Microsoft's revenue share from about 20% to 8% by 2030, potentially saving OpenAI over $50 billion.

  • xAI is reportedly laying off 500 generalist AI tutors from its data annotation team, which played a key role in training the Grok AI model.

  • Google’s Gemini surpassed ChatGPT as the No. 1 iOS app in the U.S., boosted by the viral success of its Nano Banana image model.

  • Tencent hired Yao Shunyu, a prominent OpenAI researcher, to support its efforts to embed AI more deeply into its platforms and services.

  • Google released VaultGemma, the largest public AI model trained using differential privacy techniques to better protect user data during training.

  • OpenAI is expanding into robotics, actively hiring researchers with expertise in humanoid systems, according to a WIRED report.

  • H Company launched Holo 1.5, a new family of open-weight “Computer Use” models that set state-of-the-art results on agentic benchmarks.

  • Microsoft integrated Copilot Chat and agents directly into Word, Excel, and other 365 apps via a new sidebar for faster, seamless AI access.

  • OpenAI chairman Bret Taylor warned of an ongoing AI bubble, saying many investors may lose money—even as AI continues to drive major economic gains.

  • OpenAI hired former xAI finance chief Mike Liberatore as its new business finance officer, shortly after his departure from Elon Musk’s rival AI company.

  • Microsoft committed $30 billion to AI infrastructure in the U.K., while Google announced a $6 billion investment into the country’s AI sector over the next two years.

  • Walt Disney, Universal, and Warner Bros. jointly sued Chinese startup MiniMax, alleging its Hailuo model infringes on copyrighted intellectual property.

  • Workday acquired AI startup Sana for $1.1 billion, aiming to turn it into the “new front door for work” with deeper AI integration.

  • Fiverr CEO Micha Kaufman said the company is laying off 250 employees and shifting toward becoming an “AI-first” organization, returning to a leaner “startup mode.”

  • Tencent released Hunyuan3D 3.0, featuring improved input image accuracy, professional detailing, and high-definition 3D modeling capabilities.

  • World Labs launched Marble, a beta platform that creates persistent, explorable 3D worlds from text or image prompts.

  • OpenAI and Apollo Research released findings on scheming behaviors in AI models and developed training methods that reduced deceptive actions by 30x.

  • China’s internet regulator banned major tech companies like ByteDance and Alibaba from purchasing Nvidia AI chips, urging a shift to domestic alternatives.

  • Elon Musk claimed that Grok 5 “has a chance of reaching AGI” and said training for the next-gen model will begin in a few weeks.

  • Zoom rolled out AI Companion 3.0, which includes tools to manage meetings, generate custom AI agents, and use photorealistic avatars.

  • AI models are reportedly surpassing human trainers, with experts now struggling to design tasks that challenge OpenAI’s most advanced systems.

Closing Thoughts

That’s it for us this week.

