OmkarPathak: Local AI-powered resume parser for privacy-conscious data extraction
Top 86.3% on SourcePulse
A privacy-focused resume parser that leverages local Large Language Models (LLMs) to extract structured data and generate insights from resumes. It targets engineers and researchers needing efficient, cost-effective, and secure resume analysis, offering automated professional summaries and key strength identification without external API dependencies.
How It Works
This project utilizes the Qwen2.5-1.5B-Instruct LLM, quantized to q4_k_m, for local inference via the llama-cpp-python library. This approach prioritizes speed and accuracy for structured data extraction, offering a lightweight (~1GB model) solution. Running entirely locally eliminates API costs and enhances data privacy by keeping sensitive resume information on the user's machine.
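The local extraction flow described above can be sketched roughly as follows. This is an illustrative pattern only, not the project's actual API: the field list and the names build_prompt, parse_reply, and extract are assumptions. In the real app, the complete callable would wrap a llama-cpp-python Llama instance loaded with the quantized Qwen model.

```python
import json

# Hypothetical field set; the project's actual extraction schema may differ.
FIELDS = ["name", "email", "skills"]

def build_prompt(resume_text: str) -> str:
    """Build an instruction prompt asking the model for strict JSON output."""
    return (
        "Extract the following fields from the resume below and reply "
        f"with JSON only ({', '.join(FIELDS)}):\n\n{resume_text}\n\nJSON:"
    )

def parse_reply(reply: str) -> dict:
    """Parse the model's JSON reply, tolerating any surrounding chatter."""
    start, end = reply.find("{"), reply.rfind("}") + 1
    return json.loads(reply[start:end])

def extract(resume_text: str, complete) -> dict:
    """Run one extraction pass. `complete` is any prompt -> text callable,
    e.g. a wrapped llama-cpp-python model running entirely on-device."""
    return parse_reply(complete(build_prompt(resume_text)))
```

With llama-cpp-python, complete might be backed by something like `llm = Llama(model_path="qwen2.5-1.5b-instruct-q4_k_m.gguf", n_ctx=2048)` and `lambda p: llm(p, max_tokens=512)["choices"][0]["text"]`; since everything runs locally, no resume text leaves the machine.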
Quick Start & Requirements
Install the dependencies (pip install -r requirements.txt), download the AI model (python download_model.py qwen), and run the Django development server (python resume_parser/manage.py runserver). The GUI is then accessible at http://127.0.0.1:8000/. Alternatively, use docker-compose up --build for an integrated setup, or ./build_mac.sh to create a standalone executable.
Highlighted Details
Maintenance & Community
No specific details regarding contributors, sponsorships, community channels (like Discord/Slack), or roadmaps were provided in the README.
Licensing & Compatibility
Limitations & Caveats
The LLM's context window is limited to 2048 tokens, requiring input text to be truncated to approximately 1500 characters. GPU acceleration (Metal/CUDA) is optional and may require explicit configuration, with the system defaulting to CPU execution for stability.
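The truncation constraint above can be sketched as a small helper. This is a minimal illustration based on the stated ~1500-character limit; truncate_for_context is a hypothetical name, and the project's actual preprocessing may differ.

```python
def truncate_for_context(text: str, max_chars: int = 1500) -> str:
    """Trim input text so it fits the model's 2048-token context window.

    max_chars reflects the README's stated ~1500-character budget; the cut
    falls on a word boundary to avoid feeding the model a broken token.
    """
    if len(text) <= max_chars:
        return text
    cut = text[:max_chars]
    # Drop a trailing partial word, if any; keep the cut as-is when the
    # text contains no spaces at all.
    return cut.rsplit(" ", 1)[0] if " " in cut else cut
```

A tighter approach would count actual tokens with the model's tokenizer rather than characters, at the cost of loading the tokenizer during preprocessing.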