Google's fastest multimodal model with great performance for diverse, repetitive tasks and a 1 million token context window
Provider: The company that provides the model
Context window: The number of tokens you can send in a prompt
Maximum output tokens: The maximum number of tokens the model can generate in one request
Input token cost: The cost of prompt tokens sent to the model
Output token cost: The cost of output tokens generated by the model
Knowledge cutoff: When the model's knowledge ends
Release date: When the model was launched
Function calling: Capability for the model to use external tools
Vision: Ability to process and analyze visual inputs, like images
Multilingual: Support for multiple languages
Fine-tuning: Whether the model supports fine-tuning on custom datasets
Gemini 1.5 Flash costs $0.075 per million input tokens and $0.30 per million output tokens for prompts up to 128k tokens. Expressed per thousand tokens, that is $0.000075 per 1,000 input tokens and $0.0003 per 1,000 output tokens.
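As a quick illustration of that arithmetic, the sketch below estimates the cost of a single request from its token counts using the ≤128k-token rates quoted above; the token counts are example values, not measured usage.

```python
# Gemini 1.5 Flash rates for prompts up to 128k tokens (USD).
INPUT_COST_PER_MILLION = 0.075
OUTPUT_COST_PER_MILLION = 0.30

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the approximate USD cost of one request at the <=128k tier."""
    return (
        input_tokens / 1_000_000 * INPUT_COST_PER_MILLION
        + output_tokens / 1_000_000 * OUTPUT_COST_PER_MILLION
    )

# Example: a 50,000-token prompt with a 2,000-token response stays well under one cent.
print(f"${estimate_cost(50_000, 2_000):.6f}")
```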
Gemini 1.5 Flash supports a context window of up to 1,048,576 tokens, making it ideal for processing extensive input.
Gemini 1.5 Flash can generate up to 8,192 tokens in a single output.
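For reference, here is a minimal sketch of how those two limits come into play with the google-generativeai Python SDK, assuming an API key is configured; the prompt text is made up, and the calls shown (GenerativeModel, count_tokens, GenerationConfig) follow the commonly documented usage and should be checked against the current docs.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-1.5-flash")

prompt = "Summarize the attached meeting notes in five bullet points."

# Check how much of the ~1,048,576-token context window the prompt uses.
print(model.count_tokens(prompt).total_tokens)

# Cap the response at the model's 8,192-token output limit.
response = model.generate_content(
    prompt,
    generation_config=genai.GenerationConfig(max_output_tokens=8192),
)
print(response.text)
```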
Gemini 1.5 Flash was released on May 14, 2024.
Yes, Gemini 1.5 Flash supports vision capabilities.
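To show what that looks like in practice, here is a brief sketch of passing an image alongside text with the same Python SDK; the file name is hypothetical, and the list-of-parts call follows the pattern in Google's multimodal examples.

```python
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-1.5-flash")

# Hypothetical local image; any PIL-readable file works the same way.
image = PIL.Image.open("receipt.png")

# Text and image are passed together as a list of parts.
response = model.generate_content([image, "List every line item and its price."])
print(response.text)
```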
Yes, Gemini 1.5 Flash supports tool calling (functions).
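A rough sketch of tool calling with the Python SDK's automatic function calling follows; get_weather is a made-up stub, and the enable_automatic_function_calling flag reflects the SDK's documented chat interface rather than anything specific to this page.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

def get_weather(city: str) -> str:
    """Hypothetical tool: return a canned forecast for a city."""
    return f"It is 21°C and sunny in {city}."

# Plain Python functions can be registered as tools; the SDK builds the schema.
model = genai.GenerativeModel("gemini-1.5-flash", tools=[get_weather])

chat = model.start_chat(enable_automatic_function_calling=True)
response = chat.send_message("Should I bring an umbrella to Lisbon today?")
print(response.text)
```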
Yes, Gemini 1.5 Flash supports multiple languages, allowing it to handle input and output in several languages.
Yes, Gemini 1.5 Flash supports fine-tuning. The model version gemini-1.5-flash-002 can be fine-tuned.
You can find the official documentation for Gemini 1.5 Flash on Google's developer site, under "Gemini 1.5 Flash Documentation".
PromptHub is a better way to test, manage, and deploy prompts for your AI products.