Gemini 2.0 Flash overview

  • Provider: Google (the company that provides the model)
  • Context window: 1,048,576 tokens (the number of tokens you can send in a prompt)
  • Maximum output: 8,192 tokens (the maximum number of tokens the model can generate in one request)
  • Input token cost: $0.00 / 1M input tokens while in the experimental stage (the cost of prompt tokens sent to the model)
  • Output token cost: $0.00 / 1M output tokens while in the experimental stage (the cost of output tokens generated by the model)
  • Knowledge cut-off date: August 1, 2024 (when the model's knowledge ends)
  • Release date: December 11, 2024 (when the model was launched)

Gemini 2.0 Flash functionality

  • Function (tool calling) support: Yes (capability for the model to use external tools)
  • Vision support: Yes (ability to process and analyze visual inputs, like images)
  • Multilingual: Yes (support for multiple languages)
  • Fine-tuning: Yes (the model supports fine-tuning on custom datasets)

Common questions about Gemini 2.0 Flash

What is Gemini 2.0 Flash?

Gemini 2.0 Flash is a multimodal language model from Google. It accepts both text and visual inputs, works across multiple languages, and can call external tools (functions) as part of a response.
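
For reference, here is a minimal sketch of calling the model from Python. It assumes the `google-generativeai` SDK (`pip install google-generativeai`) and the experimental model ID `gemini-2.0-flash-exp`; the SDK choice and model ID are illustrative assumptions, not something stated on this page.

```python
# Minimal text generation call. SDK and model ID are assumptions for illustration;
# use whatever model ID and API key apply to your account.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash-exp")

response = model.generate_content("Summarize the benefits of long context windows.")
print(response.text)
```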

What is the context window for Gemini 2.0 Flash?

Gemini 2.0 Flash supports a context window of up to 1,048,576 tokens, enabling it to handle large and complex inputs effectively.
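
Because the limit applies to tokens rather than characters, it can help to count tokens before sending a very large prompt. A small sketch under the same assumed SDK and model ID (the file name is made up):

```python
# Count tokens in a large prompt before sending it, to stay under the
# 1,048,576-token context window. File name, SDK, and model ID are illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash-exp")

with open("large_report.txt", encoding="utf-8") as f:
    long_document = f.read()

token_count = model.count_tokens(long_document)
print(token_count.total_tokens)  # should stay below 1,048,576 before generating
```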

What is the maximum output length for Gemini 2.0 Flash?

Gemini 2.0 Flash can generate up to 8,192 tokens in a single output, making it suitable for detailed responses and complex tasks.
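
If you want to use (or cap) that output budget explicitly, the generation config is where the limit is set. A hedged sketch under the same SDK assumption:

```python
# Request up to the 8,192-token output ceiling via the generation config.
# SDK and model ID are assumptions for illustration.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel(
    "gemini-2.0-flash-exp",
    generation_config=genai.GenerationConfig(max_output_tokens=8192),
)

response = model.generate_content("Write a detailed, sectioned report on prompt engineering.")
print(response.text)
```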

When was Gemini 2.0 Flash released?

Gemini 2.0 Flash was released on December 11, 2024.

What is the knowledge cut-off date for Gemini 2.0 Flash?

The knowledge cut-off date for Gemini 2.0 Flash is August 1, 2024.

What are the input and output costs for Gemini 2.0 Flash?

  • Input Cost: $0.00 per million tokens (free while in the experimental stage)
  • Output Cost: $0.00 per million tokens (free while in the experimental stage)

Does Gemini 2.0 Flash support tool calling or functions?

Yes, Gemini 2.0 Flash supports tool calling, allowing it to use external tools as part of its operations.
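
As one illustration of what that looks like in practice, here is a hedged sketch of automatic function calling with the assumed `google-generativeai` SDK; the weather function is a made-up stub, not part of any real API.

```python
# Automatic function calling: the model may decide to call get_weather and
# use its return value in the final answer. The tool below is a stub invented
# for this example; SDK and model ID are assumptions.
import google.generativeai as genai


def get_weather(city: str) -> str:
    """Return a short (hard-coded) weather summary for a city."""
    return f"Sunny, 22 degrees Celsius in {city}"


genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash-exp", tools=[get_weather])

chat = model.start_chat(enable_automatic_function_calling=True)
response = chat.send_message("What's the weather like in Lisbon right now?")
print(response.text)
```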

Does Gemini 2.0 Flash support vision capabilities?

Yes, Gemini 2.0 Flash supports vision capabilities, allowing it to process and analyze visual inputs like images.
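
A sketch of a multimodal request under the same SDK assumption, passing a local image alongside a text instruction (the file name is illustrative):

```python
# Send an image plus a text instruction in a single request.
# Image path, SDK, and model ID are illustrative assumptions.
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash-exp")

image = PIL.Image.open("sales_chart.png")
response = model.generate_content([image, "Describe the main trend in this chart."])
print(response.text)
```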

Is Gemini 2.0 Flash a multilingual model?

Yes, Gemini 2.0 Flash supports multiple languages, making it suitable for global applications.
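
Multilingual use needs no special configuration: the prompt and the requested output language can differ within a single request. A brief sketch under the same assumptions:

```python
# Prompt in Portuguese, ask for the answer in Japanese; no extra settings needed.
# SDK and model ID are assumptions for illustration.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash-exp")

response = model.generate_content(
    "Explique o que é uma janela de contexto. Responda em japonês."
)
print(response.text)
```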

Does Gemini 2.0 Flash support fine-tuning?

Yes, Gemini 2.0 Flash can be fine-tuned on custom datasets to improve its performance for specific tasks.
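
The exact tuning workflow depends on your platform (Google AI Studio vs. Vertex AI) and is not described on this page. The sketch below only gestures at the shape of a supervised tuning job using the `google-generativeai` SDK's tuned-model API; the dataset, IDs, and hyperparameters are invented, and whether a given 2.0 Flash model ID accepts tuning should be checked against Google's tuning documentation.

```python
# Rough shape of a supervised tuning job; dataset, IDs, and hyperparameters are
# invented for illustration. Confirm which model IDs actually accept tuning.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

operation = genai.create_tuned_model(
    source_model="models/gemini-2.0-flash-exp",  # assumption: a tunable model ID
    training_data=[
        {"text_input": "Classify: 'Great battery life'", "output": "positive"},
        {"text_input": "Classify: 'Screen cracked in a week'", "output": "negative"},
    ],
    id="sentiment-tuned-demo",
    epoch_count=5,
)
tuned_model = operation.result()  # blocks until the tuning job finishes
print(tuned_model.name)
```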

Where can I find the official documentation for Gemini 2.0 Flash?

You can find the official documentation for Gemini 2.0 Flash in Google's Gemini API documentation (https://ai.google.dev/gemini-api/docs).

Better LLM outputs are a click away

PromptHub is a better way to test, manage, and deploy prompts for your AI products.