code-llama-for-vscode by xNul

Local LLM alternative to GitHub Copilot

Created 2 years ago
569 stars

Top 56.6% on SourcePulse

Project Summary

This project provides a local, cross-platform alternative to GitHub Copilot for Visual Studio Code users by enabling the use of Code Llama models. It targets developers seeking to leverage powerful LLMs for code generation and assistance directly within their IDE without relying on external APIs or paid services.

How It Works

The project mimics the llama.cpp server API that the Continue VSCode extension already knows how to talk to, bridging the gap between Meta's Code Llama reference code and Continue. It does this by running a Flask server that exposes the endpoints Continue expects, allowing the extension to interact with a locally hosted Code Llama instance. The result is a self-contained, fully local setup with no external services involved.
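
A minimal sketch of that idea, assuming a llama.cpp-style /completion endpoint; the route name, payload fields, and the generate helper are illustrative assumptions, not the project's actual code:

```python
# Hypothetical mock of a llama.cpp-style completion endpoint.
from flask import Flask, jsonify, request

app = Flask(__name__)

def generate(prompt: str, max_tokens: int) -> str:
    # Placeholder: the real script forwards the prompt to the locally
    # loaded Code Llama model (launched via torchrun).
    raise NotImplementedError

@app.route("/completion", methods=["POST"])
def completion():
    body = request.get_json()
    # "prompt", "n_predict", and "content" follow llama.cpp server
    # conventions; treat the exact field names as assumptions.
    text = generate(body["prompt"], body.get("n_predict", 128))
    return jsonify({"content": text})

if __name__ == "__main__":
    app.run(port=8000)  # port assumed; match it in Continue's config
```

Because Continue already speaks this protocol, nothing changes on the extension side; only the server behind the endpoint is swapped out.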

Quick Start & Requirements

  • Install:
    1. Download a Code Llama Instruct model.
    2. Install the Continue VSCode extension.
    3. Move llamacpp_mock_api.py to your Code Llama directory.
    4. Install Flask: pip install flask.
    5. Run the mock API: torchrun --nproc_per_node 1 llamacpp_mock_api.py --ckpt_dir <path_to_model> --tokenizer_path <path_to_tokenizer> --max_seq_len 512 --max_batch_size 4.
    6. Configure config.json in the Continue extension (a sample entry follows this list).
  • Prerequisites: Code Llama Instruct model, Continue VSCode extension, Python environment with Flask.
  • Setup Time: Estimated 15-30 minutes, depending on model download size.
  • Docs: Continue Extension
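
For step 6, a sample config.json entry sketched against Continue's llama.cpp provider format; the key names, model identifier, and port are assumptions that may differ across Continue versions:

```json
{
  "models": [
    {
      "title": "Code Llama (local mock API)",
      "provider": "llama.cpp",
      "model": "codellama-7b-instruct",
      "apiBase": "http://localhost:8000"
    }
  ]
}
```

Point apiBase at whatever host and port the mock API reports when it starts.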

Highlighted Details

  • Enables local, offline use of Code Llama within VSCode.
  • Cross-platform compatibility (Windows, Linux, macOS).
  • Serves as a direct alternative to cloud-based AI coding assistants.

Maintenance & Community

  • Maintained by xNul.
  • Community support likely through the Continue extension's channels.

Licensing & Compatibility

  • The repository itself appears to be MIT licensed.
  • Use of the Code Llama models themselves is subject to Meta's Code Llama license terms.

Limitations & Caveats

This project requires manual setup and configuration, including downloading models, installing dependencies, and editing configuration files. Because it mimics the llama.cpp server API rather than running llama.cpp itself, it also depends on the Continue extension's llama.cpp integration remaining compatible with that interface.

Health Check

  • Last Commit: 1 year ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 1 star in the last 30 days
