gpt_academic  by binary-husky

LLM tool for paper reading/polishing/writing, optimized UI

created 2 years ago
69,027 stars

Top 0.2% on sourcepulse

View on GitHub
1 Expert Loves This Project
Project Summary

This project provides a practical, modular interface for large language models (LLMs) such as GPT and GLM, optimized for academic tasks: reading, polishing, and writing papers. It targets researchers, students, and developers who need LLMs for complex document processing and code analysis, and stands out for its extensive customization options and multi-model support.

How It Works

The core of the project is a Python-based application that acts as a frontend for various LLMs. It employs a modular design, allowing users to integrate custom functions and shortcut buttons. Key features include advanced PDF and LaTeX processing, code analysis, and the ability to query multiple LLMs concurrently. This approach enables a highly personalized and efficient workflow for academic and technical tasks.
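
One of the features described above is querying multiple LLMs concurrently. A minimal sketch of that idea in plain Python, using a thread pool; `ask` is a hypothetical stand-in for a per-model API call, not the project's actual adapter interface:

```python
from concurrent.futures import ThreadPoolExecutor

def ask(model: str, prompt: str) -> str:
    """Placeholder for a per-model API call (hypothetical; the real
    project routes requests through its own model adapters)."""
    return f"[{model}] echo: {prompt}"

def ask_many(models: list[str], prompt: str) -> dict[str, str]:
    """Send the same prompt to several models in parallel and collect
    the answers keyed by model name."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m: pool.submit(ask, m, prompt) for m in models}
        return {m: f.result() for m, f in futures.items()}

answers = ask_many(["gpt-4", "glm-4"], "Summarize section 2.")
```

Fanning the same prompt out to several models lets a user compare answers side by side, which is the workflow the project's multi-model support enables.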

Quick Start & Requirements

  • Installation: pip install -r requirements.txt (Python 3.9-3.11 recommended). Docker installation is also supported.
  • Prerequisites: Python, pip/conda. Optional dependencies for specific models (e.g., bitsandbytes for quantization, modelscope for ChatGLM4). CUDA is required for GPU acceleration with certain models.
  • Setup: Basic setup involves cloning the repository, configuring API keys in config.py (or config_private.py), and installing dependencies.
  • Documentation: Wiki
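
The Setup step boils down to a few values in `config_private.py`, which overrides `config.py` and keeps secrets out of version control. A minimal sketch; the key names below follow the project's `config.py`, but verify them against your copy:

```python
# config_private.py -- values here override config.py
# (key names assumed from the project's config.py; verify locally)
API_KEY = "sk-..."      # your OpenAI (or compatible) API key
LLM_MODEL = "gpt-4"     # default model selected in the UI
USE_PROXY = False       # set True and configure proxies if needed
```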

Highlighted Details

  • Advanced PDF/LaTeX translation and summarization, including Arxiv paper processing.
  • Real-time voice input with automatic sentence segmentation and response timing.
  • Support for numerous LLMs, including OpenAI, GLM, Qwen, DeepseekCoder, and local models via Hugging Face.
  • Modular plugin system with hot-reloading and a "Void Terminal" for natural language command execution.

Maintenance & Community

The project is actively maintained, with frequent updates noted in the README. A QQ group (610599535) is provided for community interaction.

Licensing & Compatibility

The project is licensed under the MIT License, permitting commercial use and linking with closed-source projects.

Limitations & Caveats

Some browser translation plugins may interfere with the frontend. The project is not compatible with all official Gradio releases, so install dependencies via requirements.txt rather than upgrading Gradio independently. Certain advanced local models require significant GPU VRAM (e.g., 24 GB for GLM4-9B).

Health Check
Last commit

2 days ago

Responsiveness

1 day

Pull Requests (30d)
3
Issues (30d)
6
Star History
1,062 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), Tim J. Baek (Founder of Open WebUI), and 2 more.

llmware by llmware-ai

0.2%
14k
Framework for enterprise RAG pipelines using small, specialized models
created 1 year ago
updated 1 week ago
Starred by Andrej Karpathy (Founder of Eureka Labs; formerly at Tesla, OpenAI; author of CS 231n), Nat Friedman (Former CEO of GitHub), and 32 more.

llama.cpp by ggml-org

0.4%
84k
C/C++ library for local LLM inference
created 2 years ago
updated 13 hours ago