vibe-security-skill by raroque

AI coding assistant security audit skill

Created 3 weeks ago

437 stars

Top 68.1% on SourcePulse

View on GitHub

Project Summary

This agent skill audits code, particularly applications built with AI coding assistants, for common security vulnerabilities. It targets developers seeking to prevent security flaws like hardcoded secrets or improper data handling that AI tools often introduce, thereby enhancing application security before deployment.

How It Works

The skill functions as an agent extension that loads technology-specific rule files to conduct targeted security audits. It analyzes code for vulnerability patterns that AI assistants commonly introduce, such as insecure authentication flows, exposed secrets, or inadequate database security policies. Because only the rules relevant to a project's stack are loaded, the audit avoids unnecessary context, improving both efficiency and accuracy.
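The rule-file selection described above can be sketched as follows. The mapping from dependencies to rule files is a hypothetical illustration of the approach; these file names are assumptions, not the skill's actual rule files.

```typescript
// Hypothetical sketch: pick technology-specific rule files based on a
// project's declared dependencies, so only relevant checks enter the
// audit context. Dependency and file names below are illustrative.
const ruleMap: Record<string, string> = {
  next: "nextjs-rules.md",                      // Server Actions, NEXT_PUBLIC_ vars
  "@supabase/supabase-js": "supabase-rules.md", // RLS policies
  firebase: "firebase-rules.md",                // security rules
  convex: "convex-rules.md",                    // missing auth checks
  stripe: "payments-rules.md",                  // webhook signatures, pricing
};

// Return only the rule files matching dependencies the project actually uses.
function selectRuleFiles(deps: string[]): string[] {
  return deps.filter((d) => d in ruleMap).map((d) => ruleMap[d]);
}

console.log(selectRuleFiles(["next", "stripe", "lodash"]));
// picks the Next.js and payments rule files; "lodash" triggers no rules
```

Scoping the audit this way matches the summary's claim that irrelevant checks, and the context they would consume, are skipped entirely.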

Quick Start & Requirements

  • Primary install: Use npx skills add https://github.com/raroque/vibe-security-skill --skill vibe-security.
  • Prerequisites: Node.js must be installed.
  • Compatibility: Works with Claude Code and OpenAI Codex (select "Codex" when prompted for OpenAI). Manual installation by cloning the repository is also supported.

Highlighted Details

  • Secrets & Env Vars: Detects hardcoded API keys, secrets in public environment variables (NEXT_PUBLIC_, VITE_, EXPO_PUBLIC_), and missing .gitignore entries.
  • Database Security: Checks for disabled Supabase Row-Level Security (RLS), insecure Firebase rules (allow read, write: if true), and missing authentication in Convex.
  • Auth & Authorization: Identifies issues like jwt.decode() without verification, tokens stored in localStorage, and unprotected Server Actions.
  • Payments: Catches vulnerabilities such as client-submitted prices and missing webhook signature verification.
  • Mobile: Flags API keys within JS bundles and insecure use of AsyncStorage for tokens.
  • AI / LLM: Detects exposed AI API keys, lack of usage caps, and potential prompt injection vulnerabilities.
  • Deployment: Scans for debug modes in production, exposed source maps, and missing security headers.
  • Data Access: Identifies SQL injection risks and insecure raw SQL usage (e.g., Prisma's $queryRawUnsafe).
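Several of the checks above reduce to recognizable source patterns. The sketch below shows how such checks might be expressed; the regexes, rule names, and sample code are illustrative assumptions, not the skill's actual rules.

```typescript
// Minimal sketch of pattern-based checks like those listed above.
// Regexes and rule names are illustrative assumptions.
type Finding = { rule: string; line: number };

const rules: { name: string; pattern: RegExp }[] = [
  // Hardcoded provider keys (e.g., Stripe-style "sk_live_" prefixes).
  { name: "hardcoded-api-key", pattern: /(sk_live_|sk-)[A-Za-z0-9]{16,}/ },
  // Secrets assigned to client-exposed env var prefixes.
  {
    name: "public-env-secret",
    pattern: /\b(NEXT_PUBLIC_|VITE_|EXPO_PUBLIC_)\w*(SECRET|KEY|TOKEN)\w*\s*=/,
  },
  // jwt.decode() reads claims without verifying the signature.
  { name: "jwt-decode-no-verify", pattern: /jwt\.decode\(/ },
];

// Scan source text line by line and report every rule that matches.
function scan(source: string): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((text, i) => {
    for (const rule of rules) {
      if (rule.pattern.test(text)) findings.push({ rule: rule.name, line: i + 1 });
    }
  });
  return findings;
}

const sample = [
  'const stripe = new Stripe("sk_live_abcdefghijklmnop1234");',
  "NEXT_PUBLIC_API_SECRET_KEY=supersecret",
  "const claims = jwt.decode(token); // no signature check",
].join("\n");

console.log(scan(sample).map((f) => f.rule));
// flags one vulnerable pattern on each of the three sample lines
```

Real rule files would carry many more patterns per technology, but the shape is the same: match a known-risky construct, report the rule and location.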

Maintenance & Community

Created by Chris Raroque (@raroque) in collaboration with Aloa. Contributions and improvements are welcomed, with guidelines available in CONTRIBUTING.md.

Licensing & Compatibility

Licensed under the MIT License, which permits commercial use and integration into closed-source projects.

Limitations & Caveats

The skill is designed specifically to catch vulnerabilities introduced by AI coding assistants and may not cover all traditional security flaws. Its effectiveness depends on the completeness of its technology-specific rule sets.

Health Check

  • Last Commit: 3 weeks ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 1
  • Issues (30d): 1
  • Star History: 438 stars in the last 24 days

Explore Similar Projects

Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems").

  • codegate by stacklok: AI agent security and management tool. 793 stars, top 1.5% on SourcePulse. Created 1 year ago; updated 10 months ago.