Improve your LLM calls by dynamically selecting the best model for a given task. Our APIs help you classify tasks and optimize prompts – giving you the tools you need to build robust AI applications and agents at scale.
Automodels detects the intent of your prompts and maps them to the best model for the given task, based on your preferences for cost and quality.
Use our tools to restructure your prompts auto-magically, improving the performance and quality of your LLM calls, tailored to the model you're using.
Our library of tools is easy to integrate into your existing workflows. Call our API directly, or start with our npm or pip packages:
from llm_automodels import AutoModels

client = AutoModels("your-api-key")

prompt = "Explain the key aspects of special relativity."

# Pick the best model for this prompt, optimizing for cost.
response = client.get_best_model(
    prompt,
    profile="cost",
)
print("Identified task type:", response.task_type)
print("Best model:", response.best_model)

# Restructure the prompt for the selected model.
response = client.optimize_prompt(
    prompt,
    model=response.best_model,
)
print("Optimized prompt:", response.optimized_prompt)
Our API can compress your prompts using advanced semantic analysis, reducing token usage by up to 70% while maintaining context and quality.
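As a rough sketch of what a compression result could look like, here is a local stand-in for the response object. The method name `compress_prompt`, the field names, and the sample numbers are illustrative assumptions, not confirmed API surface:

```python
# Hypothetical sketch: in real use the result would come from something
# like client.compress_prompt(prompt); here we construct a stand-in
# object to show the shape of the data. All names are assumptions.
from dataclasses import dataclass


@dataclass
class CompressionResult:
    compressed_prompt: str
    original_tokens: int
    compressed_tokens: int

    @property
    def savings(self) -> float:
        # Fraction of tokens eliminated by compression.
        return 1 - self.compressed_tokens / self.original_tokens


result = CompressionResult(
    compressed_prompt="Explain special relativity's key aspects.",
    original_tokens=120,
    compressed_tokens=40,
)
print(f"Saved {result.savings:.0%} of tokens")  # here: 67%
```

The `savings` property shows how a "up to 70%" figure is computed: tokens removed divided by tokens originally sent.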
Get detailed insights into your token usage, compression rates, and optimization opportunities with our comprehensive analytics dashboard or via API.
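A minimal sketch of the kind of aggregate the analytics API might return, and how a compression rate falls out of it. The field names and numbers below are assumptions for illustration, not a documented response format:

```python
# Hypothetical analytics sketch: a usage query (e.g. something like
# client.get_usage_stats(period="30d")) might return aggregate token
# counts. The field names and figures here are assumptions.
usage = {"prompt_tokens": 150_000, "compressed_tokens": 60_000}

# Compression rate: fraction of tokens eliminated across all calls.
rate = 1 - usage["compressed_tokens"] / usage["prompt_tokens"]
print(f"Average compression rate: {rate:.0%}")
```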
Use Automodels to improve the quality of your LLM calls, save on tokens, and reduce time spent on prompt engineering.
Shape the future of AI optimization
Direct engineering team access
Join our community of early adopters
We recently released our public API documentation. Check it out to get started with our API! View docs →
How can I get support? → Beta users can open a GitHub issue (in our private beta repo) or use our invite-only support channel on Discord.
Does Automodels have an on-prem version? → Join the waitlist
What does Automodels cost? → Our API is currently free to use.
Are you hiring? → We're not actively hiring, but we're always interested in talented people. Get in touch