sashiko: Automated agent for Linux kernel code review
Top 54.3% on SourcePulse
Sashiko is an agentic system designed to automate the review of Linux kernel code changes, aiming to reinforce kernel quality by identifying bugs that may bypass human review. It targets kernel developers and maintainers seeking to enhance the robustness and security of kernel code through intelligent, automated analysis. The primary benefit is potentially higher bug detection rates and a more consistent review process.
How It Works
Sashiko employs a sophisticated, multi-stage review protocol comprising nine distinct stages, each mimicking a specialized reviewer. This approach systematically analyzes patches from high-level architectural correctness and commit message alignment to detailed execution flow, resource management, concurrency, security vulnerabilities, and hardware-specific interactions. It is a self-contained system, capable of ingesting patches from mailing lists or local git repositories and integrating with various LLM providers like Gemini and Claude, offering a flexible and comprehensive automated review pipeline.
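The staged-reviewer idea can be sketched in Rust (the project's implementation language). All names below are illustrative, not Sashiko's actual API: each stage acts as one specialized reviewer, and a pipeline runs every stage over the same patch and collects findings.

```rust
// Hypothetical sketch of a multi-stage review pipeline. Names and logic are
// illustrative assumptions, not Sashiko's real types.

struct Finding {
    stage: &'static str,
    message: String,
}

trait ReviewStage {
    fn name(&self) -> &'static str;
    fn review(&self, patch: &str) -> Vec<Finding>;
}

// Example stage: a crude stand-in for a concurrency reviewer that flags
// patches touching spinlocks without discussing locking in the message.
struct ConcurrencyStage;

impl ReviewStage for ConcurrencyStage {
    fn name(&self) -> &'static str { "concurrency" }
    fn review(&self, patch: &str) -> Vec<Finding> {
        if patch.contains("spin_lock") && !patch.contains("locking") {
            vec![Finding {
                stage: self.name(),
                message: "lock usage changed but locking is not discussed".into(),
            }]
        } else {
            vec![]
        }
    }
}

// Run every stage over the patch and collect all findings.
fn run_pipeline(stages: &[Box<dyn ReviewStage>], patch: &str) -> Vec<Finding> {
    stages.iter().flat_map(|s| s.review(patch)).collect()
}

fn main() {
    let stages: Vec<Box<dyn ReviewStage>> = vec![Box::new(ConcurrencyStage)];
    let patch = "spin_lock(&dev->lock); /* new critical section */";
    for f in run_pipeline(&stages, patch) {
        println!("[{}] {}", f.stage, f.message);
    }
}
```

A real system would add stages for architecture, resource management, security, and hardware interactions, and back each stage with an LLM call rather than a string match; the pipeline shape stays the same.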
Quick Start & Requirements
Clone the repository with submodules (git clone --recursive), navigate into it, and build with cargo build --release.
The daemon runs with cargo run, and the CLI with cargo run --bin sashiko-cli -- [COMMAND].
An LLM_API_KEY environment variable or provider-specific keys are required.
Edit Settings.toml to configure the AI provider, model, and other parameters.
Repository: https://github.com/rgushchin/sashiko
Highlighted Details
Maintenance & Community
Copyright is held by The Linux Foundation and its contributors. No specific community channels (such as Discord or Slack) or detailed contributor information are provided in the README.
Licensing & Compatibility
Licensed under the Apache License, Version 2.0. This license is permissive and generally compatible with commercial use and linking in closed-source projects.
Limitations & Caveats
Sashiko transmits patch data and kernel history to LLM providers, so users must ensure they are authorized and comfortable with sharing that data with third parties; the authors disclaim liability for data privacy or IP issues. Running the system incurs computational costs and potential LLM API expenses, which users are solely responsible for monitoring. LLM outputs are probabilistic, so bug detection is not guaranteed; the false positive rate is estimated at roughly 20%, often involving ambiguous findings.