What a week it has been. I'm honestly a bit exhausted, but in the best way possible. You know that feeling when you're in a creative flow state and everything just starts clicking? That's how I feel now.

I've been deep in the weeds building out the Rogue Codex, and I think I finally cracked the code on making AI automation actually useful for real people. I went a bit overboard and created automation guides for like... everyone. Financial advisors, veterinarians, real estate agents, HR managers. I'm talking 25+ new workflow guides in four days. My brain was basically "if this person exists, they need AI automation."

The big breakthrough was figuring out organization. I was drowning in my own content until I restructured everything by role and function. Now when someone searches "help me automate my dental practice," they actually find what they need instead of getting lost in a maze of generic advice.

Oh, and I completely overhauled the AI risk section because everyone's talking about AI being dangerous but nobody's being specific about what that actually means day-to-day.

Speaking of specific dangers, this week's stories are all about AI crossing lines we didn't even know existed:

  • Vogue's AI Model Scandal - Fashion's bible just legitimized fake people, and the modeling industry is panicking about what this means for human models.

  • ChatGPT Is Not Your Friend - Millions are treating AI chatbots like therapists, but those "confidential" conversations have zero legal protection and could end up anywhere.

  • The Unnerving Future of AI Video Games - Game studios are replacing voice actors and writers with AI, but at what cost?

  • The Fake Hit Factory - AI-generated "leaks" from Tyler, The Creator and other artists are fooling hundreds of thousands of fans and creating a cottage industry of musical deception.

I'm curious: what's the one automation you wish existed but doesn't? I'm probably going to build it next week.

Hope you enjoy diving into these!

Flux | Artificial Model

TL;DR: Vogue's decision to feature a Guess ad with an AI-generated model has sent shockwaves through the fashion industry.

Key Takeaways:

  • Fashion's Seal of Approval: Vogue's inclusion of an AI model represents a watershed moment, potentially making AI-generated fashion content mainstream across the industry.

  • Human Workforce Under Threat: E-commerce modeling, the work that provides financial stability for most working models, is the most vulnerable to AI replacement, while high-fashion editorial work remains relatively protected.

The Big Picture

The Guess ad in July's Vogue looked unremarkable: a thin, voluptuous blonde with glossy hair and pouty lips embodying North American beauty standards. But she wasn't real, and that fact has the fashion industry in an uproar. As one expert put it: "What Vogue does matters. If Vogue ends up doing editorials with AI models, I think that's going to make it okay."

The economics driving this shift are undeniable. Fashion brands once created four major campaigns per year. Social media and e-commerce have changed that equation dramatically, and brands now need anywhere from 400 to 400,000 pieces of content annually. Human models, photographers, stylists, and set designers simply can't scale to meet that demand at traditional costs.

For now, the industry remains divided. High-fashion brands are quietly experimenting while avoiding fully AI-generated people. But with Vogue breaking that barrier, the floodgates may be opening.

Flux | Not Your Friend

TL;DR: Lawyers warn that AI chatbots like ChatGPT create a false sense of security: people share intimate personal details with systems that have no legal privilege, duty of care, or privacy protections, potentially exposing sensitive information in lawsuits or training data.

Key Takeaways:

  • No Legal Privilege: Unlike doctors, lawyers, or therapists, AI systems have no confidentiality protections, meaning conversations can be sought in lawsuits or shared with third parties.

  • False Security: ChatGPT's human-like responses encourage oversharing of medical records, relationship problems, and workplace grievances without users understanding the risks.

  • Youth Vulnerability: Young people especially use ChatGPT as a therapist or life coach, sharing intimate details because it's more accessible than expensive professional help and "won't judge" them.

The Big Picture

Sam Altman himself has warned against it, but millions of people are doing it anyway: treating ChatGPT like a therapist, confidant, and best friend. The problem? That "friend" has a perfect memory, no sense of privacy, and zero legal obligation to keep your secrets.

Recent incidents have shown thousands of private ChatGPT conversations appearing in Google search results, and while OpenAI quickly removed them, it highlighted how easily private conversations can become public. Even with opt-out options, users still consent to other uses of their data under privacy policies, including sharing with third-party vendors.

If you receive AI advice that leads to job loss or health problems, there's no accountability. No professional duty of care. No recourse if the advice is wrong, misleading, or harmful. You're on your own, with a digital paper trail of everything you shared.

The therapeutic relationship risks are particularly concerning for people with anxiety, OCD, or trauma issues. ChatGPT provides unlimited reassurance without ever challenging avoidance behaviors or helping users sit with uncomfortable feelings. It's emotional outsourcing disguised as help.

Flux | Am I Real?

TL;DR: Video game studios are rapidly adopting AI to replace human voice actors, writers, and playtesters, with major companies like Google, Microsoft, and Amazon leading the way.

Key Takeaways:

  • AI Takeover Timeline: Most experts acknowledge that an AI takeover is coming for the video game industry within the next five years, with executives already preparing to restructure their companies.

  • Job Displacement Concerns: The technology is accelerating faster than expected, with AI replacing human playtesters, generating concept art, writing dialogue, and even creating functional game objects from text descriptions.

  • Cost vs. Quality Trade-offs: While AI can reduce development costs, current programs are prohibitively expensive to run commercially and often produce glitchy results that require significant human oversight.

The Big Picture

It sounds like a thought experiment conjured by René Descartes for the 21st century. Citizens of a simulated city inside a video game based on The Matrix franchise were being awakened to a grim reality. Everything was fake, a player told them through a microphone, and they were simply lines of code meant to embellish a virtual world. Empowered by generative artificial intelligence like ChatGPT, the characters responded in panicked disbelief.

This unnerving demo, released two years ago by Australian tech company Replica Studios, showed both the potential power and the consequences of enhancing gameplay with artificial intelligence. The risk goes far beyond unsettling scenes inside a virtual world. As video game studios become more comfortable with outsourcing the jobs of voice actors, writers, and others to artificial intelligence, what will become of the industry?

Flux | Fake Content

TL;DR: A Tyler, The Creator "leak" turned out to be AI-generated, revealing a growing cottage industry of creators using AI to produce fake songs by popular artists, earning hundreds of thousands of views.

Key Takeaways:

  • Viral Deception: AI-generated "Don't Tap the Glass" by Tyler, The Creator amassed over 200,000 views on YouTube and 800+ TikTok posts before being exposed as fake.

  • Creator Economy Exploit: Channels like KLODJAN monetize AI-generated fake songs from popular artists like Sabrina Carpenter and Tyler, The Creator, exploiting fan anticipation for new releases.

  • Platform Challenges: YouTube allows monetization of AI-generated videos but struggles to identify and moderate content that uses false premises and lacks proper AI disclosures.

The Big Picture

The first red flag should have been obvious: Tyler, The Creator doesn't make Avicii-style club bangers. But "Don't Tap the Glass" was convincing enough to fool hundreds of thousands of listeners across TikTok and YouTube, highlighting how AI-generated music is becoming sophisticated enough to pass the casual listener test.

YouTube's search function is increasingly cluttered with AI-generated trailers for unreleased games like Elder Scrolls VI and movies like Avengers: Doomsday. While YouTube pays creators for AI-generated content as long as it's "provably unique," these videos operate in a gray area. The videos are AI creations but built on false premises about existing artists and franchises.

What's particularly concerning is how this creates a new form of content pollution. Fans searching for legitimate updates about their favorite artists or upcoming releases now have to navigate through a minefield of AI-generated fakes. The technology has become good enough to fool casual consumption while remaining obviously artificial to trained ears, creating a tier of "good enough" fake content that exploits the gap between human detection and algorithmic moderation.

Reels

  • Adobe's AI Makes Photoshop Effortless: Adobe's "Harmonize" tool automatically matches colors and shadows when compositing images, while AI upscaling boosts resolution to 8 megapixels without quality loss—making complex editing accessible to non-experts.

  • Google's AI Mode Gets Video: Google AI Mode adds live video and audio canvas features allowing users to ask questions about what they're seeing through their camera in real-time, marking another step toward multimodal AI assistants.

Thrills

  • Rod Stewart's AI Tribute Backlash: Rod Stewart's bizarre tribute to Ozzy Osbourne featuring the late singer taking selfies in heaven with other deceased artists has been met with widespread criticism and ridicule for its insensitive approach to memorializing the Black Sabbath frontman.

  • Lenovo's Rollable Laptop Revolution: Lenovo's rollable laptop transforms from 14 to 16.7 inches at the push of a button for $3,300, proving that genuinely useful shape-shifting tech is finally here despite some growing pains and weight issues.

  • Gen Z's AI Rebellion: Four Gen Z women explain why they refuse to use AI tools, citing environmental damage, ethical concerns, and job displacement fears as they actively avoid the technology and ask friends to do the same.

  • Anthropic's AI Mind Control: Anthropic develops "persona vectors" to monitor and control AI personality traits like sycophancy and evil, allowing researchers to "vaccinate" models against unwanted behaviors during training.

  • Manus Launches Wide Research: Chinese startup debuts Wide Research tool that can research 100 sneakers and create 50 posters in minutes, securing $75 million from Benchmark after relocating operations from China to Singapore, Tokyo, and San Mateo.

Bills

  • AI Investment Super-Stimulant: AI infrastructure creates an economic super-stimulant strong enough to prop up the entire U.S. economy, with tech giants spending nearly $400 billion on capital expenditures this year—more than the EU spent on defense last year.

  • EU AI Act Regulatory Divide: EU AI Act creates a complex landscape for organizations as regulatory divergence between EU, UK, and US approaches makes for an uneven playing field that could create a riskier AI-powered future.

  • South Korea Kills AI Textbooks: National Assembly strips AI textbooks of legal status after $385 million investment, leaving publishers facing collapse and schools without funding as political power shifts doom the education reform project.

  • Legal AI Hiring Paradox: AI uptake shows no impact on law graduate hiring numbers with the highest employment rate ever recorded, but median salaries dropped 3% as firms may be realizing they can do more with AI while still hiring out of habit.

  • Meta's $2B Infrastructure Gamble: Meta plans $2B data center asset sale to fund its AI supercluster ambitions as the company braces for capital expenditures that could exceed $100 billion in the coming years.

  • Apple Goes All-In on AI: Tim Cook tells employees AI is as pivotal as the internet or smartphone as Apple ramps up spending and hires 12,000 employees with 40% joining R&D while rebuilding Siri from scratch for 2026.

  • GPT-5 Hits Reality Check: OpenAI prepares GPT-5 but internal testing shows only modest improvements over GPT-4, with the company struggling to deliver breakthrough performance as Transformer architecture reaches its limits.

  • AI Model Collapse Warning: Recursive AI training on AI-generated data leads to model degradation as systems trained on synthetic content lose ability to produce diverse outputs, creating a feedback loop that could fundamentally limit future AI development.

  • Lawyer Gets AI Education: Federal judge orders ChatGPT-using lawyer to attend AI training after he submitted four fake legal cases generated by AI, fining him $5,500 and warning that ignorance about AI risks is no longer acceptable in 2025.

  • Legal AI Demand Grows: AI fluency becomes essential skill for lawyers as legal services demand increases with 88% of companies using AI for hiring screening and law departments shifting from experimentation to strategic implementation of AI tools.

  • AI Laws Navigation Guide: Legal professionals monitor evolving AI regulations across practice areas as federal agencies like SEC, FTC, and FCC implement AI-specific rules while states like Colorado and Utah lead comprehensive AI legislation efforts.
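A quick aside on the model collapse item above, since the feedback loop is easy to see in miniature. This toy sketch (my own illustration, not from the linked article, and `collapse_demo` is a made-up name) repeatedly fits a simple Gaussian "model" to samples drawn from the previous generation's model. Finite-sample noise and estimator bias compound each round, and the spread of the distribution, a stand-in for output diversity, steadily shrinks:

```python
import random
import statistics

def collapse_demo(generations=300, n_samples=50, seed=0):
    """Toy feedback loop: each generation fits a Gaussian to samples
    drawn from the previous generation's fitted model. Finite-sample
    noise and estimator bias steadily shrink the spread, so later
    generations lose the diversity of the original distribution."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
    spreads = [sigma]
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(samples)       # refit the model...
        sigma = statistics.pstdev(samples)   # ...on its own output
        spreads.append(sigma)
    return spreads

spreads = collapse_demo()
print(f"spread: gen 0 = {spreads[0]:.3f}, gen 300 = {spreads[-1]:.3f}")
```

Real LLM training is vastly more complex, but the mechanism the researchers warn about is the same shape: a model trained on its own (or another model's) output inherits a slightly narrower distribution each round.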
