Anthropic's fastest model
The company that provides the model
The number of tokens you can send in a prompt
The maximum number of tokens a model can generate in one request
The cost of prompt tokens sent to the model
The cost of output tokens generated by the model
When the model's knowledge ends
When the model was launched
Capability for the model to use external tools
Ability to process and analyze visual inputs, like images
Support for multiple languages
Whether the model supports fine-tuning on custom datasets
Claude 3.5 Haiku is Anthropic's fastest model, designed for high performance across a variety of tasks.
Claude 3.5 Haiku supports a context window of up to 200,000 tokens.
The model can generate outputs of up to 8,192 tokens in a single response.
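The context window and output limits above map directly onto request parameters. Below is a minimal sketch of a Messages API request payload for Claude 3.5 Haiku; the model ID string and prompt are illustrative assumptions, so check the official documentation for the current identifier.

```python
# Minimal sketch of an Anthropic Messages API request payload for
# Claude 3.5 Haiku. The model ID and message content are assumptions.
MAX_OUTPUT_TOKENS = 8192  # per-response output cap stated above

request = {
    "model": "claude-3-5-haiku-20241022",  # assumed model ID; verify in the docs
    "max_tokens": MAX_OUTPUT_TOKENS,
    "messages": [
        {"role": "user", "content": "Summarize this document."},
    ],
}

# With the official anthropic Python SDK, this payload would be sent as:
#   client.messages.create(**request)
```

The prompt itself can be up to roughly 200,000 tokens, while `max_tokens` bounds only the generated response.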
Claude 3.5 Haiku was released on November 5, 2024.
The knowledge cut-off date for this model is July 1, 2024.
No, Claude 3.5 Haiku does not support vision capabilities.
Yes, Claude 3.5 Haiku supports function calling.
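Function calling (tool use) works by passing tool definitions with JSON schemas alongside the request. The sketch below shows a hypothetical tool definition; the tool name and schema are illustrative assumptions, not part of the Anthropic API itself.

```python
# Hypothetical tool definition for use with Claude 3.5 Haiku's
# function calling (tool use). Name and schema are illustrative.
get_weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

# The definition is passed via the `tools` parameter of a Messages
# API request, e.g.:
#   client.messages.create(model=..., tools=[get_weather_tool], ...)
```

The model then decides when to emit a tool-use request matching this schema, and your application runs the tool and returns the result in a follow-up message.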
Yes, Claude 3.5 Haiku supports multiple languages.
No, Claude 3.5 Haiku does not offer fine-tuning capabilities.
Yes, Claude 3.5 Haiku has a higher output cost compared to Sonnet 3.5.
Yes, Claude 3.5 Haiku is four times more expensive than Claude 3 Haiku.
Is Claude 3.5 Haiku more expensive than GPT-4o?
Yes, Claude 3.5 Haiku's input and output costs are higher than those of GPT-4o.
You can find the official documentation for Claude 3.5 Haiku here.
PromptHub is a better way to test, manage, and deploy prompts for your AI products