LLM tool for paper reading/polishing/writing, optimized UI
Top 0.2% on sourcepulse
This project provides a practical, modular interface for large language models (LLMs) such as GPT and GLM, optimized for academic tasks like reading, polishing, and writing papers. It targets researchers, students, and developers who need LLMs for complex document processing and code analysis, and serves them through extensive customization options and multi-model support.
How It Works
The core of the project is a Python-based application that acts as a frontend for various LLMs. It employs a modular design, allowing users to integrate custom functions and shortcut buttons. Key features include advanced PDF and LaTeX processing, code analysis, and the ability to query multiple LLMs concurrently. This approach enables a highly personalized and efficient workflow for academic and technical tasks.
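The concurrent multi-model querying described above can be sketched with a thread pool that fans one prompt out to several backends. This is a minimal illustration, not the project's actual code: the backend functions below are placeholders standing in for real GPT/GLM API calls.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder backends -- in the real tool these would call the GPT / GLM APIs.
def query_gpt(prompt: str) -> str:
    return f"gpt: answer to {prompt!r}"

def query_glm(prompt: str) -> str:
    return f"glm: answer to {prompt!r}"

def query_all(prompt: str, backends: dict) -> dict:
    """Send the same prompt to every backend concurrently and
    collect the answers keyed by backend name."""
    with ThreadPoolExecutor(max_workers=len(backends)) as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in backends.items()}
        return {name: fut.result() for name, fut in futures.items()}

answers = query_all("Summarize section 2", {"gpt": query_gpt, "glm": query_glm})
print(answers["gpt"])
```

Threads suit this pattern because each query is I/O-bound (waiting on a remote API), so the responses arrive in roughly the time of the slowest backend rather than the sum of all of them.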
Quick Start & Requirements
pip install -r requirements.txt
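After installing dependencies, setup centers on a private config override. As a sketch only (the variable names below are illustrative assumptions; mirror the actual entries defined in the project's config.py):

```python
# config_private.py -- overrides config.py; keep it out of version control.
# NOTE: the key names here are hypothetical; copy the real ones from config.py.
API_KEY = "sk-..."            # your provider API key (placeholder value)
LLM_MODEL = "gpt-3.5-turbo"   # default model to query
USE_PROXY = False             # enable if the API endpoint requires a proxy
```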
(Python 3.9-3.11 recommended). Docker installation is also supported. Some local models need optional dependencies (e.g., bitsandbytes for quantization, modelscope for ChatGLM4), and CUDA is required for GPU acceleration with certain models. Setup involves configuring API keys in config.py (or config_private.py) and installing dependencies.
Highlighted Details
Maintenance & Community
The project is actively maintained, with frequent updates noted in the README. A QQ group (610599535) is provided for community interaction.
Licensing & Compatibility
The project is licensed under the MIT License, permitting commercial use and linking with closed-source projects.
Limitations & Caveats
Some browser translation plugins may interfere with the frontend. Known Gradio compatibility issues mean Gradio should be installed via requirements.txt rather than upgraded independently. Certain advanced local models require significant GPU VRAM (e.g., 24 GB for GLM4-9B).
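The 24 GB figure is plausible from a back-of-the-envelope estimate (illustrative arithmetic, not from the project): fp16 weights alone for a 9B-parameter model take roughly 17 GB, before activations and KV cache.

```python
def estimate_weight_vram_gb(n_params_billion: float, bytes_per_param: int = 2) -> float:
    """Rough VRAM needed just for model weights (fp16 = 2 bytes/param).
    Activations and the KV cache add further overhead on top of this."""
    return n_params_billion * 1e9 * bytes_per_param / 1024**3

print(round(estimate_weight_vram_gb(9), 1))  # prints 16.8
```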