A recent incident involving Lovable, a Swedish AI coding startup, has raised significant concerns about the security implications of vibe coding. On Monday, a user on X known as "Impulsive" claimed that Lovable had experienced a data breach affecting all projects created before November 2025, allowing access to other users' code, AI chat histories, and customer data through a free account. The user said that employees from major companies such as Nvidia, Microsoft, Uber, and Spotify were among those affected, and noted that despite having reported the issue 48 days earlier, Lovable had categorized it as a duplicate and left it unresolved.

In response, Lovable disputed the breach claim, asserting that code on public projects was visible by design to let users explore ongoing projects. However, following backlash over the clarity of its messaging and its handling of user data, Lovable issued a second statement explaining that since December, all subscription tiers had defaulted to private visibility. The company also admitted to a security lapse, acknowledging that a backend update had inadvertently made chats on public projects accessible again; upon discovering this, it reverted the change to restore those chats to private.

The incident has drawn mixed reactions. Some users appreciated Lovable's transparency, while others expressed frustration, comparing the initial response to gaslighting. Tom Van de Wiele, founder of the security firm Hacker Minded, characterized the event as a stark reminder of the need for secure defaults in the age of automation and AI, cautioning that relying on users to distinguish public from private information often leads to security oversights. Jake Moore, a global cybersecurity advisor at ESET, argued that while the incident might not fit the traditional definition of a data breach, it nonetheless exposes critical vulnerabilities.
He noted that a focus on semantics rather than on impact suggests a lack of foundational security measures from the outset. Professional developers generally discourage excessive reliance on AI because of its tendency to generate untested and potentially insecure code, which further complicates information security.

The Lovable incident is part of a worrying trend, following two significant data leaks from AI companies in recent weeks. Anthropic recently reported a leak involving nearly 2,000 files and 500,000 lines of code, while Vercel disclosed unauthorized access to internal systems resulting from a third-party tool compromise. These incidents underscore the urgent need for robust security practices in an era when AI technologies are increasingly deployed in coding environments.