Local LLM alternative to GitHub Copilot
This project provides a local, cross-platform alternative to GitHub Copilot for Visual Studio Code users by enabling the use of Code Llama models. It targets developers seeking to leverage powerful LLMs for code generation and assistance directly within their IDE without relying on external APIs or paid services.
How It Works
The project acts as a mock API for llama.cpp, bridging the gap between Code Llama models and the Continue VSCode extension. It achieves this by running a Flask server that mimics the expected API endpoints, allowing Continue to interact with a locally hosted Code Llama instance. This approach offers a unified, self-contained solution for local LLM integration.
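To make the idea concrete, the server side reduces to a single Flask route. The sketch below is illustrative rather than the project's actual code: it assumes Continue talks to a llama.cpp-style /completion endpoint, and generate() is a hypothetical stand-in for the Code Llama inference call.

```python
# Minimal sketch of the mock-API idea: a Flask server that imitates the
# llama.cpp server's /completion endpoint so Continue can reach a local model.
from flask import Flask, jsonify, request

app = Flask(__name__)

def generate(prompt: str, max_tokens: int) -> str:
    # Hypothetical placeholder: the real script would invoke the locally
    # loaded Code Llama model here instead of returning a canned string.
    return "<model output>"

@app.route("/completion", methods=["POST"])
def completion():
    body = request.get_json(force=True)
    text = generate(body.get("prompt", ""), body.get("n_predict", 128))
    # llama.cpp's server returns the generated text under the "content" key.
    return jsonify({"content": text})

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8080)
```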
Quick Start & Requirements
1. Copy llamacpp_mock_api.py to your Code Llama directory.
2. Install Flask: pip install flask.
3. Start the server: torchrun --nproc_per_node 1 llamacpp_mock_api.py --ckpt_dir <path_to_model> --tokenizer_path <path_to_tokenizer> --max_seq_len 512 --max_batch_size 4.
4. Point config.json in the Continue extension at the local server (see the sketch after this list).
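The exact config.json schema varies across Continue versions, but an entry along these lines points the extension's llama.cpp provider at the local mock server; the model title and port here are assumptions, not values from the project.

```json
{
  "models": [
    {
      "title": "Code Llama (local)",
      "provider": "llama.cpp",
      "model": "codellama-7b",
      "apiBase": "http://localhost:8080"
    }
  ]
}
```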
Highlighted Details
Maintenance & Community
The repository was last updated about a year ago and is currently inactive.
Licensing & Compatibility
Limitations & Caveats
This project requires manual setup and configuration, including downloading models and modifying configuration files. It relies on the llama.cpp project's compatibility with Code Llama models.