codeshell-vscode by WisdomShell

VS Code extension for intelligent coding assistance using CodeShell

created 1 year ago
576 stars

Top 56.9% on sourcepulse

Project Summary

CodeShell VSCode Extension provides an intelligent coding assistant for Visual Studio Code, supporting multiple programming languages, including Python, Java, C++, JavaScript, and Go. It aims to boost developer productivity with code completion, explanation, optimization, comment generation, and conversational Q&A, all powered by the CodeShell large language model.

How It Works

The extension integrates with a self-hosted CodeShell model service. It supports two primary deployment methods for the model: llama_cpp_for_codeshell for quantized models (e.g., codeshell-chat-q4_0.gguf) which can leverage CPU or Metal (on Apple Silicon) for inference, and text-generation-inference (TGI) for larger models (e.g., CodeShell-7B, CodeShell-7B-Chat) requiring NVIDIA GPUs. This modular approach allows users to choose the backend that best suits their hardware capabilities.
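As a sketch of the two deployment paths, launching either backend looks roughly like the commands below. The model filename and model IDs come from the text above; the server binary name, flags, ports, and Docker image tag are assumptions based on upstream llama.cpp and TGI conventions, so check each project's own instructions.

```shell
# Option 1: CPU/Metal inference via llama_cpp_for_codeshell with a
# quantized model. Binary name and flags follow llama.cpp conventions
# and may differ between releases.
./server -m ./codeshell-chat-q4_0.gguf --host 127.0.0.1 --port 8080

# Option 2: NVIDIA GPU inference via text-generation-inference (TGI).
# Image tag and port mapping are illustrative assumptions.
docker run --gpus all -p 8080:80 \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id WisdomShell/CodeShell-7B-Chat
```

Only one backend needs to be running at a time; the extension is then pointed at whichever service is up.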

Quick Start & Requirements

  • Install: Package the extension from source using npm exec vsce package to generate a .vsix file, then install via VS Code's "Install from VSIX..." command.
  • Prerequisites: Node.js v18+, VS Code v1.68.1+, and a running CodeShell model service.
  • Model Service: Requires compiling llama_cpp_for_codeshell or deploying with text-generation-inference. Model weights must be downloaded separately from Hugging Face.
  • Setup: Model service deployment and configuration can take significant time depending on compilation and model download.
  • Docs: CodeShell VSCode Extension
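The install steps above can be sketched as follows. The packaging command is taken from the README; the repository URL is inferred from the project name and should be verified.

```shell
# Build the .vsix package from source (requires Node.js v18+).
git clone https://github.com/WisdomShell/codeshell-vscode.git
cd codeshell-vscode
npm install
npm exec vsce package   # emits codeshell-vscode-<version>.vsix
```

Then, in VS Code, run the "Extensions: Install from VSIX..." command and select the generated file.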

Highlighted Details

  • Supports code completion (auto-trigger and hotkey), code explanation, optimization, comment generation, and unit test creation.
  • Offers intelligent Q&A with multi-turn conversation, history, editable questions, and code block insertion.
  • Quantized models (-int4) can run on CPU with Metal support for Apple Silicon.
  • NVIDIA GPU acceleration is available via TGI for larger models.

Maintenance & Community

  • The project is hosted on GitHub. No specific community channels (Discord/Slack) or roadmap links are provided in the README.

Licensing & Compatibility

  • License: Apache 2.0.
  • Compatibility: Permissive license allows for commercial use and integration with closed-source projects.

Limitations & Caveats

The model configuration selected in the VS Code extension settings must match the deployed backend (e.g., codeshell-chat-q4_0.gguf with llama.cpp, or the larger models with TGI) for the extension to work. Deploying the model service itself requires technical expertise and potentially significant hardware resources.
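As an illustration of that configuration, a minimal settings.json fragment is sketched below. The setting keys are hypothetical placeholders, not confirmed identifiers; the exact settings contributed by the extension should be checked in VS Code's Settings UI.

```json
{
  "codeshell.serverAddress": "http://127.0.0.1:8080",
  "codeshell.runModelWith": "CPU with llama.cpp"
}
```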

Health Check

  • Last commit: 1 year ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 3 stars in the last 90 days
