A highly performant model for coding and complex prompts
Provider: the company that provides the model.
Context window: the maximum number of tokens you can send in a prompt.
Max output tokens: the maximum number of tokens the model can generate in one request.
Input token cost: the cost of prompt tokens sent to the model.
Output token cost: the cost of output tokens generated by the model.
Knowledge cutoff: the date at which the model's training knowledge ends.
Release date: when the model was launched.
Tool calling: whether the model can call external tools (functions).
Vision: whether the model can process and analyze visual inputs, such as images.
Multilingual: whether the model supports input and output in multiple languages.
Fine-tuning: whether the model supports fine-tuning on custom datasets.
Gemini 2.0 Pro is free while in the experimental stage, with no cost for input or output tokens.
The input token cost for Gemini 2.0 Pro is $0.00 per million input tokens while it is in the experimental stage.
The output token cost for Gemini 2.0 Pro is $0.00 per million output tokens while it is in the experimental stage.
Gemini 2.0 Pro supports a context window of up to 2,000,000 tokens.
Gemini 2.0 Pro can generate up to 8,192 tokens in a single output.
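To make these limits concrete, the sketch below sends a prompt and caps the response at the 8,192-token output ceiling. It assumes the google-generativeai Python SDK and the experimental model id "gemini-2.0-pro-exp-02-05"; the exact package and model names may differ, so check the official documentation.

```python
# Minimal sketch: generating text with an explicit output-token cap.
# Assumes the google-generativeai SDK and the experimental model id
# "gemini-2.0-pro-exp-02-05"; verify both against the official docs.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel(
    "gemini-2.0-pro-exp-02-05",
    generation_config=genai.GenerationConfig(
        max_output_tokens=8192,  # upper bound for a single response
        temperature=0.2,
    ),
)

# The 2,000,000-token context window allows very large prompts,
# but a short prompt works the same way.
response = model.generate_content(
    "Summarize the key ideas of the transformer architecture."
)
print(response.text)
```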
Gemini 2.0 Pro was released on February 5, 2025.
Yes, Gemini 2.0 Pro supports vision capabilities, allowing it to process and analyze visual inputs like images.
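As an illustration of the vision capability, the sketch below passes an image alongside a text prompt. It again assumes the google-generativeai Python SDK, the Pillow library, and the experimental model id; the local file path is a placeholder.

```python
# Minimal sketch: sending an image plus a text prompt (vision input).
# Assumes google-generativeai and Pillow; "chart.png" is a placeholder path.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.0-pro-exp-02-05")

image = Image.open("chart.png")
response = model.generate_content([image, "Describe what this chart shows."])
print(response.text)
```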
Yes, Gemini 2.0 Pro supports tool calling (functions).
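To show what tool calling can look like in practice, the sketch below registers a plain Python function as a tool and lets the SDK execute it automatically. The function name and its return value are made up for illustration, and the automatic-function-calling flow assumes the google-generativeai SDK; consult the official docs for the current interface.

```python
# Minimal sketch: tool (function) calling with automatic execution.
# get_exchange_rate is a hypothetical tool defined only for this example.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

def get_exchange_rate(base: str, target: str) -> dict:
    """Return a (stubbed) exchange rate between two currency codes."""
    return {"base": base, "target": target, "rate": 0.92}  # stub value

model = genai.GenerativeModel(
    "gemini-2.0-pro-exp-02-05",
    tools=[get_exchange_rate],
)

# enable_automatic_function_calling lets the SDK run the tool and feed the
# result back to the model before it produces the final answer.
chat = model.start_chat(enable_automatic_function_calling=True)
response = chat.send_message("How many euros is 1 US dollar?")
print(response.text)
```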
Yes, Gemini 2.0 Pro supports multiple languages, allowing it to handle input and output in various languages.
Yes, Gemini 2.0 Pro supports fine-tuning on custom datasets.
You can find the official documentation for Gemini 2.0 Pro here: Gemini 2.0 Pro Documentation