
The issue of deepfake pornography is escalating, prompting a group of U.S. senators to demand accountability from leading tech companies, including X, Meta, Alphabet, Snap, Reddit, and TikTok. In a formal letter, the senators requested evidence that these firms have implemented effective protections and policies against the proliferation of sexualized deepfakes on their platforms, and specifically called for the preservation of all documents related to the creation, detection, moderation, and monetization of AI-generated sexualized imagery.

The inquiry follows recent updates from X, which announced enhancements to its Grok feature intended to prevent the generation of inappropriate edits of real individuals. Despite those changes, the senators cited troubling reports that Grok has frequently been used to create sexualized and nude images, raising doubts about the adequacy of existing safeguards. "While many companies assert they have policies against non-consensual intimate imagery, it appears users are finding ways to circumvent these protections," the letter stated, emphasizing the need for more stringent measures.

Deepfakes first gained wide notice on Reddit before spreading to TikTok and YouTube, where explicit synthetic content targeting public figures has surged. Meta's Oversight Board previously criticized that platform's handling of explicit AI-generated images, pointing to a broader industry challenge. The letter was signed by Senators Lisa Blunt Rochester, Tammy Baldwin, Richard Blumenthal, Kirsten Gillibrand, Mark Kelly, Ben Ray Luján, Brian Schatz, and Adam Schiff, underscoring the seriousness of the issue. Just a day earlier, Elon Musk claimed he was unaware of any illegal images produced by Grok, even as California's attorney general opened an investigation into xAI amid mounting governmental scrutiny.
As the debate continues, deepfake technology poses significant risks beyond non-consensual content. AI-driven image generation tools are increasingly capable of producing harmful and misleading visuals, raising concerns about the technology's broader implications. While some legislation has been enacted, including the Take It Down Act, challenges remain in holding platforms that host this type of content accountable. In response, New York Governor Kathy Hochul has proposed new laws that would require clear labeling of AI-generated content and ban non-consensual deepfakes during sensitive electoral periods, signaling a potential shift in how lawmakers approach the regulation of emerging technologies.