Google’s AI tool ‘Big Sleep’ flags 20 security flaws in open-source software without human input

Google has announced that its artificial intelligence-based tool, named Big Sleep, has discovered 20 security vulnerabilities in popular open-source software. The company shared this development as part of its broader initiative to use AI in cybersecurity research. Heather Adkins, Google’s Vice President of Security, revealed the update through a post on X, formerly known as Twitter.

Google’s AI tool Big Sleep has detected 20 security flaws in widely used open-source software. (REUTERS)

Google AI Tool Flags Issues in Popular Tools

Big Sleep, developed jointly by Google DeepMind and Project Zero, analysed a range of open-source tools and detected flaws in libraries such as FFmpeg and ImageMagick, which are widely used for multimedia processing, including handling audio, video, and image files. Although Google has not released specific technical details of the identified vulnerabilities, it confirmed that the AI system located and reproduced the flaws independently. A human security analyst then reviewed and verified the findings before the company reported them.

How Big Sleep Operates

The AI system works by simulating the behaviour of a malicious attacker, scanning software code and network services for weak points. Big Sleep not only probes systems for vulnerabilities but also adapts its methods over time, learning new ways to uncover complex issues. The 20 flaws reported so far were found across several widely used open-source projects.
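Google has not published how Big Sleep reasons about code, but the workflow the article describes, generating and executing many test inputs against a target such as ImageMagick and flagging crashes for human triage, resembles classic automated fuzzing. The sketch below is purely illustrative of that general technique, not Big Sleep's actual method; the seed.png file, the run count, and the use of ImageMagick 7's magick identify command are assumptions made for the example.

# Illustrative sketch only: Big Sleep's internals are not public. This is a
# generic mutation-fuzzing loop against ImageMagick, showing the kind of
# automated test-case execution the article describes. seed.png and the
# "magick identify" invocation (ImageMagick 7) are assumptions.
import random
import subprocess
import tempfile
from pathlib import Path

SEED = Path("seed.png")   # assumed: any small, valid image to mutate
RUNS = 1000               # number of mutated test cases to execute

def mutate(data: bytes) -> bytes:
    """Flip a few random bytes of the seed input."""
    buf = bytearray(data)
    for _ in range(random.randint(1, 8)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def run_case(payload: bytes) -> int:
    """Feed one mutated file to ImageMagick and return its exit code."""
    with tempfile.NamedTemporaryFile(suffix=".png") as tmp:
        tmp.write(payload)
        tmp.flush()
        try:
            proc = subprocess.run(["magick", "identify", tmp.name],
                                  capture_output=True, timeout=10)
        except subprocess.TimeoutExpired:
            return 0  # treat hangs as uninteresting for this sketch
        return proc.returncode

if __name__ == "__main__":
    seed = SEED.read_bytes()
    for i in range(RUNS):
        rc = run_case(mutate(seed))
        if rc < 0:  # on Unix, a negative code means the process died on a signal
            print(f"case {i}: crash (signal {-rc}) -- candidate for human triage")

Big Sleep is reported to reason about the target's code rather than blindly mutating bytes like the loop above, which is one reason its findings still go to a human analyst before they are filed.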

According to Google, the purpose of deploying AI in this capacity is not to replace human security researchers but to assist them. The company highlighted that its AI agent can execute thousands of test cases much faster than humans, which will allow cybersecurity teams to focus on strategic decision-making while the AI manages routine testing.

Big Sleep is not the only AI tool involved in such work. Other systems, such as RunSybil and XBOW, have also shown promise in bug hunting; XBOW recently topped a US leaderboard on the HackerOne bug bounty platform.

While AI tools have started to reshape how vulnerabilities are discovered, concerns remain. Some developers have voiced frustration over a rise in inaccurate, AI-generated bug reports, often referred to as “AI slop,” which can describe errors or vulnerabilities that do not actually exist. Nonetheless, some experts in the field, including Vlad Ionescu, co-founder of RunSybil, have endorsed Big Sleep as a serious project backed by the right expertise and infrastructure.
