AI Coding Surge Overwhelms Security Teams, Creates New Risk

When a financial services company swapped traditional development practices for the AI coding assistant Cursor, its code output exploded. Monthly lines of code surged from roughly 25,000 to 250,000, a tenfold increase that initially sounded like a triumph. Within weeks the firm faced a backlog of about one million lines of code that had never been reviewed for bugs or security flaws.

"The sheer amount of code being delivered, and the increase in vulnerabilities, is something they can’t keep up with," said Joni Klippert, CEO of StackHawk, a security startup that assists the firm. The rapid rise in unvetted code has turned into a systemic risk across Silicon Valley, where many organizations are now generating more software than their security staff can examine.

Application security engineers—professionals tasked with catching errors in AI‑generated code—are in short supply. "There are not enough application security engineers on the planet to satisfy what American companies need," warned Joe Sullivan, an adviser to Costanoa Ventures. The talent gap means that even as code volumes climb, the workforce capable of safeguarding that code is not keeping pace.

Beyond staffing, the way AI tools are deployed creates additional hazards. Developers often run the models on personal laptops rather than on secure corporate servers, pulling entire codebases onto devices that can be lost or stolen. A single missing laptop could expose sensitive data along with the newly generated code.

Recognizing the looming threat, several AI firms have begun to embed code‑review capabilities directly into their platforms. Anthropic, OpenAI and Cursor are each working on automated security checks. Cursor recently acquired a startup specializing in code review to weave those functions into its product suite. "The software development factory kind of broke. We’re trying to rearrange the parts," said Cursor’s head of engineering.

Nevertheless, experts caution that AI‑driven reviewers are not a panacea. Human oversight remains critical before any code reaches production. The stakes were underscored when an AI‑written script caused an Amazon outage, resulting in more than 100,000 lost orders and 1.6 million errors. No company wants a repeat of that scenario, and the industry is still grappling with how to balance speed and security.

Source: Digital Trends
