SDK for LLM token cost estimation
This repository provides a Python library for estimating the token count and associated costs of using various Large Language Models (LLMs). It aims to help developers building AI applications and agents accurately track and manage their LLM API expenses.
How It Works
The library leverages tiktoken, OpenAI's official tokenizer, for accurate tokenization of text and message formats. For Anthropic models (Claude 3.5 and newer), it uses the Anthropic beta token counting API; for older Claude models, it falls back to approximating token counts with tiktoken's cl100k_base encoding. Cost calculations are based on a comprehensive, regularly updated list of model prices stored in model_prices.json.
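For illustration, the fallback approximation described above can be reproduced directly with tiktoken. This is a minimal sketch of that behavior, not the library's internal code:

```python
import tiktoken

# Sketch of the documented fallback: approximate token counts for older
# Claude models using tiktoken's cl100k_base encoding. This mirrors the
# behavior described above; it is not tokencost's implementation.
def approximate_claude_tokens(text: str) -> int:
    encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(text))

print(approximate_claude_tokens("Hello, Claude!"))  # prints a small token count
```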
Quick Start & Requirements
pip install tokencost
Then import the calculate_prompt_cost, calculate_completion_cost, count_message_tokens, or count_string_tokens functions, as in the sketch below.
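A minimal usage sketch follows. The function names come from the list above; the argument order (text or messages first, then the model name) and the example model string are assumptions based on typical usage, so check the project README for exact signatures.

```python
from tokencost import (
    calculate_prompt_cost,
    calculate_completion_cost,
    count_message_tokens,
    count_string_tokens,
)

model = "gpt-4o"  # assumed to be listed in model_prices.json
messages = [{"role": "user", "content": "Summarize the plot of Hamlet."}]
completion = "Hamlet is a tragedy about a Danish prince seeking revenge."

# Estimate the cost of the prompt and of the model's completion.
prompt_cost = calculate_prompt_cost(messages, model)
completion_cost = calculate_completion_cost(completion, model)
print(f"Estimated total cost: ${prompt_cost + completion_cost}")

# Count tokens in chat messages or in a raw string.
print(count_message_tokens(messages, model))
print(count_string_tokens("Hello, world!", model))
```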
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
Cost estimates are only as accurate as the pricing data in model_prices.json, which must be kept in sync with provider price changes.