Description: A cost-efficient model.
Provider: The company that provides the model.
Context window: The number of tokens you can send in a prompt.
Maximum output tokens: The maximum number of tokens the model can generate in one request.
Input token cost: The cost of prompt tokens sent to the model.
Output token cost: The cost of output tokens generated by the model.
Knowledge cutoff: When the model's knowledge ends.
Release date: When the model was launched.
Tool calling: Whether the model can use external tools.
Vision: The ability to process and analyze visual inputs, like images.
Multilingual: Support for multiple languages.
Fine-tuning: Whether the model supports fine-tuning on custom datasets.
Gemini 2.0 Flash Lite is priced at $0.075 per million input tokens and $0.30 per million output tokens.
The input token cost for Gemini 2.0 Flash Lite is $0.075 per million input tokens.
The output token cost for Gemini 2.0 Flash Lite is $0.30 per million output tokens.
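To make the pricing concrete, here is a minimal Python sketch that estimates the cost of a single request from token counts. The per-million-token rates come from the figures above; the token counts in the example are illustrative placeholders.

```python
# Illustrative cost estimate for Gemini 2.0 Flash Lite (rates from the figures above).
INPUT_COST_PER_MILLION = 0.075   # USD per 1M input tokens
OUTPUT_COST_PER_MILLION = 0.30   # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens / 1_000_000) * INPUT_COST_PER_MILLION + \
           (output_tokens / 1_000_000) * OUTPUT_COST_PER_MILLION

# Example: a 10,000-token prompt with a 1,000-token response
print(f"${estimate_cost(10_000, 1_000):.6f}")  # ≈ $0.001050
```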
Gemini 2.0 Flash Lite supports a context window of up to 1,000,000 tokens.
Gemini 2.0 Flash Lite can generate up to 8,192 tokens in a single output.
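For reference, the sketch below shows one way to cap a response at that output limit using the google-genai Python SDK. The client setup (API key read from the environment) and the prompt are assumptions for illustration; the 8,192 value mirrors the maximum output figure noted above.

```python
from google import genai
from google.genai import types

# Assumes a Gemini API key is available in the environment; prompt is a placeholder.
client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.0-flash-lite",
    contents="Summarize the trade-offs between long prompts and response latency.",
    config=types.GenerateContentConfig(
        max_output_tokens=8192,  # cap the response at the model's stated maximum
    ),
)
print(response.text)
```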
Gemini 2.0 Flash Lite was released on February 5, 2025.
Yes, Gemini 2.0 Flash Lite supports vision capabilities, allowing it to process and analyze visual inputs like images.
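As an illustration of the vision support, this sketch sends a local image alongside a text prompt via the google-genai Python SDK. The file path and prompt are placeholder values, and the inline-bytes approach via types.Part.from_bytes is one of several ways to attach image data.

```python
from google import genai
from google.genai import types

client = genai.Client()  # assumes an API key in the environment

# Placeholder path; any JPEG or PNG read as bytes works the same way.
with open("chart.jpg", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.0-flash-lite",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Describe what this image shows.",
    ],
)
print(response.text)
```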
No, Gemini 2.0 Flash Lite does not support tool calling or function calling.
Yes, Gemini 2.0 Flash Lite supports multiple languages, allowing it to handle input and output in various languages.
Yes, Gemini 2.0 Flash Lite supports fine-tuning on custom datasets.
You can find the official documentation for Gemini 2.0 Flash Lite here: Gemini 2.0 Flash Lite Documentation