Gemini 2.0 Flash Lite overview

Provider (the company that provides the model): Google
Context window (the number of tokens you can send in a prompt): 1,000,000 tokens
Maximum output (the maximum number of tokens the model can generate in one request): 8,192 tokens
Input token cost (the cost of prompt tokens sent to the model): $0.075 / 1M input tokens
Output token cost (the cost of output tokens generated by the model): $0.30 / 1M output tokens
Knowledge cut-off date (when the model's knowledge ends): Unknown
Release date (when the model was launched): February 5, 2025

Gemini 2.0 Flash Lite functionality

Function (tool calling) support (capability for the model to use external tools): Yes
Vision support (ability to process and analyze visual inputs, like images): Yes
Multilingual (support for multiple languages): Yes
Fine-tuning (whether the model supports fine-tuning on custom datasets): Yes

Common questions about Gemini 2.0 Flash Lite

How much does Gemini 2.0 Flash Lite cost?

Gemini 2.0 Flash Lite costs $0.075 per million input tokens and $0.30 per million output tokens.
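As a quick illustration, here is a small Python sketch that estimates the cost of a single request from these published rates; the token counts in the example are made-up values, not measurements:

```python
# Published rates for Gemini 2.0 Flash Lite (USD per 1M tokens).
INPUT_COST_PER_M = 0.075
OUTPUT_COST_PER_M = 0.30

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the published rates."""
    return (input_tokens / 1_000_000) * INPUT_COST_PER_M + (
        output_tokens / 1_000_000
    ) * OUTPUT_COST_PER_M

# Example: a 200,000-token prompt with a 5,000-token response.
print(f"${estimate_cost(200_000, 5_000):.4f}")  # -> $0.0165
```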

What is the input token cost for Gemini 2.0 Flash Lite?

The input token cost for Gemini 2.0 Flash Lite is $0.075 per million input tokens.

What is the output token cost for Gemini 2.0 Flash Lite?

The output token cost for Gemini 2.0 Flash Lite is $0.30 per million output tokens.

What is the context window for Gemini 2.0 Flash Lite?

Gemini 2.0 Flash Lite supports a context window of up to 1,000,000 tokens.
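To check how much of that window a prompt will use, the google-genai Python SDK exposes a token-counting call. A minimal sketch, assuming the SDK is installed and `YOUR_API_KEY` is a valid Gemini API key:

```python
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder; use your own key

# Count how many tokens a prompt would consume before sending it,
# to stay within the 1,000,000-token context window.
prompt = "Paste or load your long prompt here."
token_info = client.models.count_tokens(
    model="gemini-2.0-flash-lite",
    contents=prompt,
)
print(token_info.total_tokens)
```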

What is the maximum output length for Gemini 2.0 Flash Lite?

Gemini 2.0 Flash Lite can generate up to 8,192 tokens in a single output.
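If you want responses capped below that limit, you can set a maximum in the generation config. A minimal sketch using the google-genai SDK (the API key is a placeholder):

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder; use your own key

response = client.models.generate_content(
    model="gemini-2.0-flash-lite",
    contents="Summarize the history of the internet in three paragraphs.",
    # Cap the response length; 8,192 tokens is the model's upper limit.
    config=types.GenerateContentConfig(max_output_tokens=2048),
)
print(response.text)
```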

When was Gemini 2.0 Flash Lite released?

Gemini 2.0 Flash Lite was released on February 5, 2025.

Does Gemini 2.0 Flash Lite support vision capabilities?

Yes, Gemini 2.0 Flash Lite supports vision capabilities, allowing it to process and analyze visual inputs like images.
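A short sketch of passing an image alongside a text prompt with the google-genai SDK; the file path, MIME type, and API key are placeholder assumptions:

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder; use your own key

# Read a local image and send it together with a text question.
with open("photo.jpg", "rb") as f:  # hypothetical file path
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.0-flash-lite",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Describe what is in this image.",
    ],
)
print(response.text)
```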

Can Gemini 2.0 Flash Lite perform tool calling or functions?

Yes, Gemini 2.0 Flash Lite supports function (tool) calling, allowing it to invoke external tools and functions that you define.
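One way to use this with the google-genai SDK is automatic function calling, where a plain Python function is passed as a tool. The `get_weather` helper below is a made-up stub for illustration, and the API key is a placeholder:

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder; use your own key

def get_weather(city: str) -> str:
    """Return the current weather for a city (hypothetical stub)."""
    return f"It is sunny and 22°C in {city}."

response = client.models.generate_content(
    model="gemini-2.0-flash-lite",
    contents="What's the weather like in Paris right now?",
    # The SDK turns the Python function into a tool declaration and
    # runs it automatically when the model requests a call.
    config=types.GenerateContentConfig(tools=[get_weather]),
)
print(response.text)
```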

Is Gemini 2.0 Flash Lite a multilingual model?

Yes, Gemini 2.0 Flash Lite supports multiple languages, allowing it to handle input and output in various languages.

Does Gemini 2.0 Flash Lite support fine-tuning?

Yes, Gemini 2.0 Flash Lite supports fine-tuning on custom datasets.
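Fine-tuning of Gemini models is typically done through Vertex AI supervised tuning. The sketch below assumes the Vertex AI Python SDK, a Google Cloud project, a JSONL training dataset in Cloud Storage, and that the `gemini-2.0-flash-lite-001` model ID is available for tuning in your region:

```python
import vertexai
from vertexai.tuning import sft

# Hypothetical project, region, and dataset locations.
vertexai.init(project="your-project-id", location="us-central1")

tuning_job = sft.train(
    source_model="gemini-2.0-flash-lite-001",      # assumed tunable model ID
    train_dataset="gs://your-bucket/train.jsonl",  # prompt/response pairs in JSONL
)
print(tuning_job.resource_name)
```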

Where can I find the official documentation for Gemini 2.0 Flash Lite?

You can find the official documentation for Gemini 2.0 Flash Lite here:
Gemini 2.0 Flash Lite Documentation
