
Riverside, the popular online podcast recording platform, has introduced a year-end review tool reminiscent of Spotify's "Wrapped." Dubbed "Rewind," the feature generates three personalized videos for podcasters, offering a twist on the typical statistics like total recording time or episode count. Instead of focusing solely on numbers, Riverside presents a fifteen-second montage of laughter, showcasing moments where co-hosts share genuine joy. Another video humorously compiles the most frequent "umm" utterances. The platform also analyzes AI-generated transcripts to identify the word spoken most frequently, excluding common filler words such as "and" or "the."

In an amusing twist, my co-host and I found that we said "book" more than anything else, likely influenced by our subscriber-only book club discussions and my co-host's upcoming publication. In our podcast network's Slack channel, we shared our Rewind videos, finding humor in the endless repetitions of "umm."

These videos also underscore a growing concern: the increasing saturation of AI features in creative tools, many of which may not be necessary. While Riverside's Rewind is entertaining, it raises questions about its true value; what purpose does a video of us repeating "book" serve in the long run?

Despite the enjoyment I found in Riverside's recap, it arrives at a time when many in the industry feel the impact of AI encroaching on creativity and production opportunities. Tools that automate tasks like removing filler words can be helpful, yet podcasting is an inherently nuanced art form. AI may efficiently generate transcripts for accessibility, significantly reducing the time previously spent on such tasks, but it lacks the ability to make editorial judgments that enhance storytelling. Unlike human editors, AI struggles to discern when a light-hearted tangent is engaging or when it should be omitted for pacing.
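Under the hood, a "most frequent word" analysis like Rewind's is essentially word counting over a transcript with a stop-word filter. Riverside has not published its implementation, so the following is only a minimal sketch of the general technique; the function name and stop-word list are illustrative assumptions:

```python
import re
from collections import Counter

# Illustrative stop-word list; a real product would use a much
# larger set (and likely language-specific lists).
STOP_WORDS = {"and", "the", "a", "to", "of", "i", "we", "umm", "uh", "like"}

def most_frequent_word(transcript: str) -> str:
    """Return the most common non-stop-word in a transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    word, _count = counts.most_common(1)[0]
    return word

print(most_frequent_word(
    "We read the book, umm, and the book club loved the book."
))  # book
```

The interesting design work in a real feature is not this counting step but the curation around it: choosing the stop-word list, handling inflections ("book" vs. "books"), and deciding which results are actually fun to surface.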
Recent developments in AI-generated content highlight the technology's limitations. The Washington Post's attempt to launch AI-generated daily podcasts resulted in troubling inaccuracies, with up to 84% of these podcasts failing to meet the publication's standards due to fabricated quotes and factual errors. This reflects a fundamental misunderstanding of how large language models (LLMs) operate: they are designed to produce statistically probable output, not to ensure factual accuracy, especially in the context of breaking news. While Riverside has crafted an enjoyable year-end product, it serves as a reminder of the complexities AI brings to every industry, podcasting included. As we navigate this "AI boom," it is essential to differentiate between beneficial applications of AI and those that lead to subpar content.