Google is tapping into its extensive library of YouTube videos to develop its artificial intelligence models, including the Gemini platform and the Veo 3 video and audio generator. According to sources familiar with the matter, the tech giant is utilizing a portion of its vast collection of 20 billion YouTube videos for this purpose. The company has confirmed its reliance on YouTube content for AI training but emphasizes that it uses only a specific subset of videos while adhering to agreements with content creators and media firms. A spokesperson for YouTube stated, "We have always utilized YouTube content to enhance our products, and this practice continues in the era of AI. We are also committed to establishing safeguards that allow creators to protect their identity and likeness."

However, experts warn that this strategy could trigger an intellectual property crisis for content creators and media organizations. While YouTube asserts that it has previously communicated this practice, many creators remain unaware that their content may be used for AI training. Although the platform has not disclosed which videos are being utilized, even training on just 1% of its catalog could equate to an astonishing 2.3 billion minutes of content, significantly more than what competing AI models typically use. In a blog post from September, Google stated that YouTube content could be leveraged to "enhance the product experience, including through machine learning and AI applications."

Creators who upload videos to the platform cannot opt out of allowing Google to use their content for training purposes, raising concerns among many regarding the implications of their work being used to develop competing AI systems. Luke Arrigoni, CEO of Loti, highlighted the potential unfairness of creators unknowingly contributing to AI models that could replicate their work. He noted, "It's plausible that they're taking data from a lot of creators who have invested significant time and thought into their videos. This could result in the creation of synthetic versions of these creators, which does not seem equitable." Conversations with prominent creators and intellectual property professionals revealed that none were informed by YouTube about the potential use of their videos for AI training.

This revelation follows Google's recent announcement of Veo 3, a highly advanced AI video generator capable of producing cinematic-quality sequences entirely generated by AI. YouTube sees an average of 20 million videos uploaded daily, and many creators now express concern about inadvertently assisting in the development of tools that may compete with or replace their own creative efforts. Arrigoni commented on the lack of transparency from Google regarding the specifics of video training, suggesting that such information could affect the company's relationship with creators. Even if the output of Veo 3 does not directly copy existing works, the generated content can still support commercial endeavors that compete with the original creators, often without any form of acknowledgment or compensation. YouTube's terms of service grant it broad rights to use uploaded content, which has led to growing unease among creators about the potential misuse of their likenesses. Dan Neely, CEO of Vermillio, noted that the rise of AI tools like Veo 3 could exacerbate the trend of creators encountering unauthorized versions of themselves online.
Neely's company has developed a tool called Trace ID to evaluate the similarities between AI-generated content and human-created videos, with high similarity scores indicating potential infringement.

Some creators have expressed a more accepting view of AI tools like Veo 3, seeing them as a form of competition rather than a threat. Sam Beres, a YouTube creator with a substantial following, remarked, "I try to treat it as friendly competition rather than adversarial. It's an exciting inevitability." Google has included an indemnification clause for its generative AI products, indicating that it will assume legal responsibility in copyright disputes related to AI-generated content. YouTube has also rolled out a partnership with the Creative Artists Agency to help top talent manage AI-generated content featuring their likeness. While creators can request the removal of videos that misuse their likeness, concerns persist regarding the reliability of this process. Moreover, although creators can decline training by select third-party AI companies, there is no option to opt out of Google's own AI training initiatives.

Recent legal actions, such as the lawsuit filed by The Walt Disney Company and Universal against the AI image generator Midjourney, highlight the growing tensions surrounding AI and copyright in the entertainment industry. As discussions continue, there is a clear need for stronger rights and protections for creators in an increasingly AI-driven landscape. As Senator Josh Hawley pointed out during a Senate hearing, "We need to empower individuals with enforceable rights over their images and property, or this issue will persist indefinitely."