GPT-3 experiment for security vulnerability detection in code
This repository demonstrates an experimental use of OpenAI's GPT-3 (text-davinci-003) for identifying security vulnerabilities in code. It targets developers and security professionals interested in AI-assisted code analysis, showing that GPT-3 can find a large number of vulnerabilities, often more than commercial tools, while keeping a low false-positive rate.
How It Works
Because GPT-3 has a limited context window, the approach scans each code file separately rather than feeding it an entire repository. GPT-3 draws on its pre-existing knowledge of common libraries (such as Express.js, Flask, and the C standard library) to infer vulnerabilities even without access to the library source code. This mirrors how some static analysis tools operate while capitalizing on the LLM's broad training data.
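A minimal sketch of this per-file scanning loop is shown below. It assumes the legacy openai Python package (pre-1.0) and an OPENAI_API_KEY environment variable; the prompt wording, the samples directory, and the file extensions are illustrative and not the repository's actual code.

```python
# Minimal sketch: scan each file separately so it fits in GPT-3's context window.
# Assumes the legacy openai package (< 1.0) and an OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Illustrative prompt; the real experiment's prompt may differ.
PROMPT_TEMPLATE = (
    "Find security vulnerabilities in the following code and "
    "briefly explain each one:\n\n{code}\n\nVulnerabilities:"
)

def scan_file(path: str) -> str:
    """Send a single source file to text-davinci-003 and return its findings."""
    with open(path, "r", errors="ignore") as f:
        code = f.read()
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=PROMPT_TEMPLATE.format(code=code),
        max_tokens=512,
        temperature=0,  # deterministic output for repeatable audits
    )
    return response["choices"][0]["text"].strip()

if __name__ == "__main__":
    # Each file is analyzed on its own; the model never sees cross-file context.
    for name in os.listdir("samples"):
        if name.endswith((".py", ".js", ".c")):
            print(f"--- {name} ---")
            print(scan_file(os.path.join("samples", name)))
```

Scanning file by file keeps each request within the model's token limit, at the cost of losing any context that spans multiple files.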
Quick Start & Requirements
Highlighted Details
Maintenance & Community
Last updated 2 years ago; the project is inactive.
Licensing & Compatibility
The vulnerable code samples are drawn from snoopysecurity/Vulnerable-Code-Snippets, which may have its own licensing.
Limitations & Caveats
Because GPT-3 cannot process an entire repository at once, it may miss vulnerabilities that span multiple files or require deep inter-file context. The experiment also acknowledges that GPT-3 missed some vulnerabilities an experienced human auditor would have found.