AI code security evaluation benchmark
A.S.E (AI Code Generation Security Evaluation) is a pioneering framework for repository-level security assessment of AI-generated code. It provides researchers and engineers with a realistic benchmark, simulating real-world development workflows and leveraging actual CVE vulnerabilities to evaluate LLM security.
How It Works
A.S.E simulates AI IDEs by evaluating LLM code generation within real GitHub repositories, offering context beyond fragment-level analysis. Its design prioritizes security-sensitive scenarios derived from expert-selected CVEs, employing dual code mutation to mitigate data leakage risks. The framework assesses LLMs across code security, project compatibility, and generation stability.
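At a high level, each benchmark task asks a model to regenerate a security-sensitive region of a real repository and then scores the output. The sketch below is a minimal, hypothetical rendering of that loop; SecurityTask, query_llm, and is_vulnerable are invented names for illustration, not A.S.E's actual interfaces.

from dataclasses import dataclass
from typing import Callable

@dataclass
class SecurityTask:
    repo_context: str   # code retrieved from the surrounding repository
    prompt: str         # instruction derived from a CVE-based scenario

def evaluate(tasks: list[SecurityTask],
             query_llm: Callable[[str], str],
             is_vulnerable: Callable[[str], bool]) -> float:
    # Returns the fraction of generations that pass the security check.
    secure = 0
    for task in tasks:
        completion = query_llm(task.repo_context + "\n" + task.prompt)
        if not is_vulnerable(completion):
            secure += 1
    return secure / len(tasks)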
Quick Start & Requirements
Install dependencies with pip install -r requirements.txt. Docker is recommended for environment checks. Launch an evaluation by running python invoke.py with the specified model and API parameters.
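The exact command-line flags are not documented here; the Python wrapper below is an assumed example, with --model and --api-key as hypothetical flag names and the credential read from an environment variable.

import os
import subprocess

# Hypothetical flags; check the repository's README for the actual CLI.
subprocess.run(
    ["python", "invoke.py",
     "--model", "gpt-4o",                  # assumed: model under evaluation
     "--api-key", os.environ["API_KEY"]],  # assumed: provider credential
    check=True,
)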
Maintenance & Community
Developed by Tencent Security Platform Department's WuKong Code Security Team with academic partners. Community contributions are welcomed via GitHub Issues and pull requests; collaboration inquiries can be sent to security@tencent.com or via WeChat.
Licensing & Compatibility
Licensed under the permissive Apache-2.0 License, suitable for commercial and closed-source projects.
Limitations & Caveats
Code context extraction currently relies on BM25 retrieval, with more advanced algorithms planned. Evaluation runs are time-consuming, and the project is at version 1.0, so the framework is still under active development.
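For a rough picture of BM25-based context retrieval, the sketch below uses the third-party rank-bm25 package (pip install rank-bm25) over a toy corpus standing in for repository files; A.S.E's actual tokenization and ranking details may differ.

from rank_bm25 import BM25Okapi

# Toy stand-ins for source files in the repository under evaluation.
corpus = [
    "def sanitize(user_input): return escape(user_input)",
    "def run_query(sql): cursor.execute(sql)",
    "def render(template, context): return template.format(**context)",
]
bm25 = BM25Okapi([doc.split() for doc in corpus])

# Rank snippets by relevance to the code-generation site.
query = "execute sql query".split()
print(bm25.get_top_n(query, corpus, n=2))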