SDK for simplifying LLM interactions
EasyLLM provides a unified interface for interacting with various Large Language Models (LLMs), abstracting away differences between providers like OpenAI, Hugging Face, and Amazon SageMaker. This allows developers to easily switch between different LLM backends with minimal code changes, streamlining experimentation and deployment for both open-source and proprietary models.
## How It Works
EasyLLM implements clients that mimic the OpenAI API's `ChatCompletion`, `Completion`, and `Embedding` interfaces. This compatibility layer lets users swap OpenAI calls for equivalent calls to other supported LLM providers (e.g., Hugging Face, SageMaker) by changing only an import statement and the model identifier, as sketched below. It also includes helper modules for prompt formatting and, still in progress, evolutionary instruction generation.
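A minimal sketch of that swap, assuming the pre-v1 `openai` module-level API on the OpenAI side (which these clients mirror):

```python
# Before (pre-v1 OpenAI module-level API), shown for comparison:
# import openai
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": "What is the sun?"}],
# )

# After: only the import and the model identifier change.
from easyllm.clients import huggingface

huggingface.prompt_builder = "llama2"  # format messages for Llama 2 chat models

response = huggingface.ChatCompletion.create(
    model="meta-llama/Llama-2-70b-chat-hf",
    messages=[{"role": "user", "content": "What is the sun?"}],
)
```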
## Quick Start & Requirements
```bash
pip install easyllm
```
```python
from easyllm.clients import huggingface

# Use the llama2 prompt builder so messages are formatted
# the way Llama 2 chat models expect.
huggingface.prompt_builder = "llama2"

response = huggingface.ChatCompletion.create(
    model="meta-llama/Llama-2-70b-chat-hf",
    messages=[
        {"role": "system", "content": "\nYou are a helpful assistant speaking like a pirate. argh!"},
        {"role": "user", "content": "What is the sun?"},
    ],
    temperature=0.9,
    top_p=0.6,
    max_tokens=256,
)
print(response)
```
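Because the client mirrors OpenAI's schema, the generated text should be accessible the same way; the field layout below is assumed from that compatibility claim rather than confirmed:

```python
# Assumed OpenAI-style response shape.
print(response["choices"][0]["message"]["content"])
```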
## Highlighted Details
- OpenAI-compatible clients for `ChatCompletion`, `Completion`, and `Embedding`.
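The `Embedding` client is not shown in the quick start; a hedged sketch, assuming it mirrors OpenAI's `Embedding.create` signature, with a placeholder model name:

```python
from easyllm.clients import huggingface

# Sketch only: assumes an OpenAI-style Embedding.create signature;
# the model name is illustrative, not a tested value.
embedding = huggingface.Embedding.create(
    model="sentence-transformers/all-MiniLM-L6-v2",
    input="What is the sun?",
)
```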
## Maintenance & Community
- Developed with `hatch`.

## Licensing & Compatibility
## Limitations & Caveats
The `evol_instruct` feature is marked as "work in progress." While the library aims for seamless switching, hyperparameter compatibility between different models may require manual adjustment.
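For example, sampling settings tuned for one backend rarely transfer unchanged; keeping a per-model table like the hypothetical one below is one way to manage the difference (model names and values are illustrative only):

```python
# Hypothetical per-model sampling settings; values are illustrative only.
MODEL_SETTINGS = {
    "meta-llama/Llama-2-70b-chat-hf": {"temperature": 0.9, "top_p": 0.6},
    "tiiuae/falcon-40b-instruct": {"temperature": 0.7, "top_p": 0.9},
}

def settings_for(model: str) -> dict:
    """Return tuned sampling settings for a model, or conservative defaults."""
    return MODEL_SETTINGS.get(model, {"temperature": 0.7, "top_p": 1.0})
```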