A model from Google that leverages chain-of-thought reasoning before generating an output.
The company that provides the model
The number of tokens you can send in a prompt
The maximum number of tokens a model can generate in one request
The cost of prompt tokens sent to the model
The cost of output tokens generated by the model
When the model's knowledge ends
When the model was launched
Capability for the model to use external tools
Ability to process and analyze visual inputs, like images
Support for multiple languages
Whether the model supports fine-tuning on custom datasets
Gemini 2.0 Flash Thinking Mode is free of charge during its experimental stage.
The API cost for Gemini 2.0 Flash Thinking Mode is $0.00 per million input tokens and $0.00 per million output tokens during the experimental phase.
For Gemini 2.0 Flash Thinking Mode, the price is $0.00 per 1,000 input tokens and $0.00 per 1,000 output tokens while in the experimental stage.
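The per-token pricing above can be turned into a per-request cost with simple arithmetic. Below is a minimal sketch; the rate constants reflect the experimental-phase prices stated above ($0.00 for both input and output), and the helper function itself is illustrative, not part of any official SDK.

```python
# Experimental-phase rates for Gemini 2.0 Flash Thinking Mode,
# expressed per million tokens (both are currently $0.00).
INPUT_RATE_PER_MILLION = 0.00
OUTPUT_RATE_PER_MILLION = 0.00

def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float = INPUT_RATE_PER_MILLION,
                 output_rate: float = OUTPUT_RATE_PER_MILLION) -> float:
    """Return the dollar cost of a single request.

    Cost = (tokens / 1,000,000) * rate, summed for input and output.
    """
    return (input_tokens / 1_000_000) * input_rate + \
           (output_tokens / 1_000_000) * output_rate

# A maximum-size request costs nothing during the experimental phase.
print(request_cost(32_000, 8_000))  # 0.0
```

The same helper works for paid models by passing their published per-million-token rates as `input_rate` and `output_rate`.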
Gemini 2.0 Flash Thinking Mode supports a context window of up to 32,000 tokens.
Gemini 2.0 Flash Thinking Mode can generate up to 8,000 tokens in a single output.
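Given the two limits above (a 32,000-token context window and an 8,000-token output cap), a client can validate a request before sending it. This is a minimal sketch: the limit values come from the model card above, while the helper function and its name are illustrative.

```python
# Documented limits for Gemini 2.0 Flash Thinking Mode.
CONTEXT_WINDOW = 32_000   # maximum prompt tokens
MAX_OUTPUT_TOKENS = 8_000  # maximum tokens per generation

def fits_limits(prompt_tokens: int, requested_output_tokens: int) -> bool:
    """Return True if a request stays within both documented limits."""
    return (prompt_tokens <= CONTEXT_WINDOW
            and requested_output_tokens <= MAX_OUTPUT_TOKENS)

print(fits_limits(30_000, 4_000))  # True
print(fits_limits(40_000, 4_000))  # False: prompt exceeds the context window
```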
Gemini 2.0 Flash Thinking Mode was released on December 18, 2024.
The knowledge cut-off date for Gemini 2.0 Flash Thinking Mode is August 1, 2024.
Yes, Gemini 2.0 Flash Thinking Mode supports vision capabilities and can process and analyze visual inputs, such as images.
No, Gemini 2.0 Flash Thinking Mode does not support tool calling (functions).
Yes, Gemini 2.0 Flash Thinking Mode supports multiple languages, allowing it to handle input and output in several languages.
Yes, Gemini 2.0 Flash Thinking Mode supports fine-tuning on custom datasets.
You can find the official documentation for Gemini 2.0 Flash Thinking Mode in Google's Gemini API documentation.
PromptHub is a better way to test, manage, and deploy prompts for your AI products.