GPT-4.1 nano is the smallest, fastest, and most cost-effective model in the GPT-4.1 family. It is ideal for high-volume workloads such as autocomplete, classification, and extracting details from lengthy documents, while still maintaining a strong performance profile.
Provider: The company that provides the model (OpenAI)
Context window: The number of tokens you can send in a prompt (1,000,000 tokens)
Maximum output tokens: The maximum number of tokens the model can generate in one request (32,000 tokens)
Input token cost: The cost of prompt tokens sent to the model ($0.10 per million tokens)
Output token cost: The cost of output tokens generated by the model ($0.40 per million tokens)
Knowledge cut-off: When the model's knowledge ends (June 1, 2024)
Release date: When the model was launched (April 14, 2025)
Tool calling: Whether the model can use external tools (supported)
Vision: Whether the model can process and analyze visual inputs, like images (supported)
Multilingual: Whether the model supports multiple languages (supported)
Fine-tuning: Whether the model supports fine-tuning on custom datasets (supported)
GPT-4.1 nano is priced at $0.10 per million input tokens and $0.40 per million output tokens.
The input token cost for GPT-4.1 nano is $0.10 per million input tokens.
The output token cost for GPT-4.1 nano is $0.40 per million output tokens.
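As a quick sanity check on these rates, the sketch below estimates the cost of a single request from its token counts; the example counts are hypothetical values chosen for illustration.

```python
# Published per-million-token rates for GPT-4.1 nano
INPUT_COST_PER_MILLION = 0.10   # USD per 1M input tokens
OUTPUT_COST_PER_MILLION = 0.40  # USD per 1M output tokens


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from its token counts."""
    return (input_tokens / 1_000_000) * INPUT_COST_PER_MILLION + (
        output_tokens / 1_000_000
    ) * OUTPUT_COST_PER_MILLION


# Example: summarizing a long document (hypothetical token counts)
print(f"${estimate_cost(200_000, 5_000):.4f}")  # -> $0.0220
```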
GPT-4.1 nano supports a context window of up to 1,000,000 tokens.
GPT-4.1 nano can generate up to 32,000 tokens in a single output.
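To put those limits in context, here is a minimal sketch of capping the output length of a single request. It assumes the OpenAI Python SDK, an `OPENAI_API_KEY` in the environment, and the public model identifier `gpt-4.1-nano`.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1-nano",
    messages=[{"role": "user", "content": "Summarize this document: ..."}],
    max_tokens=1_000,  # cap generation well under the 32,000-token output ceiling
)
print(response.choices[0].message.content)
```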
GPT-4.1 nano was released on April 14, 2025.
The knowledge cut-off date for GPT-4.1 nano is June 1, 2024.
Yes, GPT-4.1 nano supports vision capabilities, allowing it to process and analyze visual inputs like images.
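A minimal sketch of sending an image alongside text, assuming the OpenAI Python SDK's Chat Completions interface and a placeholder image URL:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4.1-nano",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is shown in this image."},
            # Hypothetical, publicly reachable image URL
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```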
Yes, GPT-4.1 nano supports tool calling (functions).
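As an illustration, the sketch below declares one hypothetical function (`get_weather`) and inspects any tool call the model returns; it assumes the OpenAI Python SDK's Chat Completions interface.

```python
from openai import OpenAI

client = OpenAI()

# A single hypothetical tool the model may choose to call
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4.1-nano",
    messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
    tools=tools,
)

# If the model decided to call the tool, its name and JSON arguments are returned
tool_calls = response.choices[0].message.tool_calls or []
for call in tool_calls:
    print(call.function.name, call.function.arguments)
```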
Yes, GPT-4.1 nano supports multiple languages, allowing it to handle input and output in various languages.
Yes, GPT-4.1 nano supports fine-tuning on custom datasets.
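A rough sketch of starting a fine-tuning job, assuming the OpenAI Python SDK, a prepared JSONL file of chat-formatted examples, and a dated snapshot identifier (the exact snapshot name is an assumption and may differ):

```python
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of chat-formatted training examples (hypothetical path)
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job; "gpt-4.1-nano-2025-04-14" is an assumed snapshot name
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4.1-nano-2025-04-14",
)
print(job.id, job.status)
```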
You can find the official documentation for GPT-4.1 nano here: GPT-4.1 nano Documentation.