Small, cost-efficient reasoning model that’s optimized for coding, math, and science
The company that provides the model
The number of tokens you can send in a prompt
The maximum number of tokens a model can generate in one request
The cost of prompt tokens sent to the model
The cost of output tokens generated by the model
When the model's knowledge ends
When the model was launched
Capability for the model to use external tools
Ability to process and analyze visual inputs, like images
Support for multiple languages
Whether the model supports fine-tuning on custom datasets
o3-mini has a cost structure of $1.10 per million input tokens and $4.40 per million output tokens (reasoning tokens are priced identically to output tokens).
The API cost for o3-mini is $1.10 per million input tokens and $4.40 per million output tokens.
For o3-mini, the price is $0.0011 per 1,000 input tokens and $0.0044 per 1,000 output tokens.
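The per-request cost follows directly from these rates. A minimal sketch, using the published o3-mini prices (the function name and example token counts are illustrative):

```python
# Estimate o3-mini API cost from the published rates:
# $1.10 per 1M input tokens, $4.40 per 1M output tokens.
INPUT_PER_MILLION = 1.10
OUTPUT_PER_MILLION = 4.40

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    return (input_tokens * INPUT_PER_MILLION
            + output_tokens * OUTPUT_PER_MILLION) / 1_000_000

# Example: a 10,000-token prompt with a 2,000-token response
print(round(estimate_cost(10_000, 2_000), 4))  # 0.0198
```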
o3-mini supports a context window of up to 200,000 tokens.
o3-mini can generate up to 100,000 tokens in a single output.
o3-mini was released on January 31, 2025.
The knowledge cut-off date for o3-mini is June 1, 2024.
No, o3-mini does not support vision capabilities.
Yes, o3-mini supports tool calling (functions).
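A tool is declared to the model as a JSON schema. A minimal sketch in the OpenAI function-calling format; the `get_weather` function and its parameters are hypothetical examples, not part of the o3-mini API itself:

```python
# A hypothetical tool (function) definition in the OpenAI
# function-calling format. `get_weather` is an illustrative
# example, not a built-in o3-mini capability.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]
# This list would be passed as the `tools` parameter of a chat
# completion request with model="o3-mini".
print(tools[0]["function"]["name"])  # get_weather
```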
Yes, o3-mini supports multiple languages for both input and output.
No, o3-mini does not support fine-tuning.
You can find the official documentation for o3-mini here.
Yes, o3-mini is much cheaper than o1. The input cost for o3-mini is $1.10 per million tokens, compared to o1’s $15.00 per million tokens. The output cost for o3-mini is $4.40 per million tokens, while o1 costs $60.00 per million tokens.
o3-mini is roughly 93% cheaper than o1 for both input and output tokens: $1.10 vs. $15.00 per million input tokens, and $4.40 vs. $60.00 per million output tokens.
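The savings percentage above can be checked with a one-line calculation from the published rates (the helper function is illustrative):

```python
# Verify the ~93% savings of o3-mini over o1 from the published rates.
def savings(o3_mini_price: float, o1_price: float) -> float:
    """Percent saved by using o3-mini instead of o1."""
    return 100 * (1 - o3_mini_price / o1_price)

print(round(savings(1.10, 15.00), 1))  # input tokens: 92.7
print(round(savings(4.40, 60.00), 1))  # output tokens: 92.7
```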
PromptHub is a better way to test, manage, and deploy prompts for your AI products