Flux | Not in Kansas

TL;DR: Google employed advanced AI techniques to enhance the 1939 film The Wizard of Oz for the Las Vegas Sphere’s massive screen, generating new pixels for higher resolution and creating additional character performances and backgrounds.

Key Points:

  • Google’s AI enhanced the 86-year-old film’s resolution by generating new pixels, overcoming the limitations of the original 35mm footage.

  • New AI methods, dubbed “performance generation” and “outpainting,” added characters and scenery not in the original shots, such as Uncle Henry appearing in a scene with an expanded background.

Why It Matters: Google’s AI-driven re-creation of The Wizard of Oz for the Las Vegas Sphere could redefine how legacy content is adapted for modern venues. By generating new pixels to enhance resolution and crafting additional performances—such as Uncle Henry’s visible presence in a previously off-camera moment—the project demonstrates AI’s capacity to expand the boundaries of archival footage.

This approach enables studios to repurpose classics for immersive environments like the Sphere, where traditional upscaling falls short on a 160,000-square-foot display. The collaboration with filmmakers like Jane Rosenthal underscores an effort to balance technological innovation with artistic integrity, addressing concerns from an industry wary of AI’s encroachment. The reimagined film will debut on August 28.


Flux | Cost Cutting

TL;DR: Director James Cameron, speaking on Meta’s “Boz to the Future” podcast, advocated for using generative AI to halve the cost of effects-heavy blockbuster films.

Key Points:

  • Cameron joined Stability AI’s board in 2024 to deepen his understanding of AI technologies like Stable Diffusion.

  • He proposed AI could double the speed of visual effects (VFX) workflows, reducing costs for CG-heavy films without cutting jobs.

Why It Matters: Hollywood’s blockbuster model is under strain from declining studio budgets and a post-pandemic box office that struggles to rebound. By advocating for AI to streamline VFX workflows, Cameron proposes a practical fix: cutting production costs by accelerating shot completion, not slashing jobs. For instance, a typical VFX-heavy film can spend tens of millions on post-production alone; halving this through AI could free up budgets for riskier projects or wider theatrical runs.

His involvement with Stability AI, a leader in generative models, lends weight to his vision of purpose-built AI tools tailored for film pipelines. Cameron’s argument that artists are themselves “models” shaped by inputs that can’t be regulated challenges the industry to rethink copyright in an AI era. If adopted, his approach could accelerate production cycles, enabling studios to produce more content with less financial risk.

Flux | New Movie Experience

TL;DR: Meta’s partnership with Blumhouse for the sci-fi horror film M3gan encourages moviegoers to use their phones during screenings to interact with a chatbot, raising concerns among UK cinema executives about disrupted audience experiences and piracy risks.

Key Points:

  • Meta has teamed up with Blumhouse to re-release the 2022 sci-fi horror film M3gan in a 40-city, three-week US film festival called Halfway to Halloween, starting at the end of April.

  • During screenings, audiences can message a special Instagram account to interact with a M3gan chatbot, receiving trivia and behind-the-scenes content as a “second screen” experience.

Why It Matters: By inviting audiences to message a chatbot tied to the film’s AI doll during screenings, Meta aims to deepen fan immersion with exclusive content like trivia and behind-the-scenes insights, effectively turning phones into a sanctioned part of the viewing experience.

Some fear that legitimizing phone use could erode decades of etiquette prioritizing collective focus on the screen, potentially alienating audiences who value cinema’s escapist allure. Studios also worry that piracy, estimated to cost $29 billion annually, could worsen if phone use becomes normalized, as covert recordings grow harder to police.

Flux | Robo Darkroom

TL;DR: Synthesia, a $2 billion British AI startup, has partnered with Shutterstock to license corporate video footage, aiming to enhance the realism of its AI-generated avatars.

Key Points:

  • Synthesia will use Shutterstock’s corporate video library to train its AI model, focusing on improving avatars’ expressions, vocal tones, and body language.

  • The startup licenses actors’ likenesses for three years, compensating them with cash and, recently, company stock for popular avatars.

Why It Matters: By paying for access to Shutterstock’s vast library of corporate footage, Synthesia sidesteps the contentious practice of scraping copyrighted material without permission—a flashpoint in debates over AI training data. The deal underscores the growing demand for lifelike digital avatars in professional settings, where nuanced body language and vocal delivery can enhance communication.

Synthesia’s decision to compensate actors with stock options reflects an acknowledgment of their value, yet the three-year licensing model raises questions about long-term rights in an era where digital likenesses can persist indefinitely. As companies like the United Nations adopt these avatars for global outreach, the technology’s scalability is clear—but so is the need for robust frameworks to protect intellectual property and ensure fair compensation.

