Ideal for enterprise-level applications, research and development, synthetic data generation, and model distillation.
The company that provides the model
The number of tokens you can send in a prompt
The maximum number of tokens a model can generate in one request
The cost of prompt tokens sent to the model
The cost of output tokens generated by the model
When the model's knowledge ends
When the model was launched
Capability for the model to use external tools
Ability to process and analyze visual inputs, like images
Support for multiple languages
Whether the model supports fine-tuning on custom datasets
When hosted on Azure, Llama 3.1 405B Instruct costs $5.33 per million input tokens and $16.00 per million output tokens (equivalently, $0.00533 per 1,000 input tokens and $0.016 per 1,000 output tokens).
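The per-token rates above can be turned into a quick cost estimate. The sketch below is illustrative: the function and variable names are our own, and the rates are simply the Azure figures quoted above.

```python
# Rates for Llama 3.1 405B Instruct on Azure, per the pricing above.
INPUT_RATE_PER_M = 5.33    # USD per 1M input tokens
OUTPUT_RATE_PER_M = 16.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# Example: a 2,000-token prompt with a 500-token completion
print(round(estimate_cost(2_000, 500), 6))  # 0.01866
```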
Llama 3.1 405B Instruct supports a context window of up to 128,000 tokens.
Llama 3.1 405B Instruct can generate up to 2,048 tokens in a single output.
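A simple pre-flight check can combine the two limits above. This sketch assumes the completion counts against the context window, which is how most hosted APIs behave; the function name and the exact accounting are assumptions, not part of Meta's or Azure's documentation.

```python
# Limits for Llama 3.1 405B Instruct, per the figures above.
CONTEXT_WINDOW = 128_000    # total tokens (prompt + completion), assumed
MAX_OUTPUT_TOKENS = 2_048   # maximum tokens per generated completion

def fits_in_context(prompt_tokens: int, requested_output: int) -> bool:
    """True if the prompt plus the requested completion fits within both limits."""
    if requested_output > MAX_OUTPUT_TOKENS:
        return False
    return prompt_tokens + requested_output <= CONTEXT_WINDOW

print(fits_in_context(120_000, 2_048))  # True
print(fits_in_context(127_000, 2_048))  # False: exceeds the context window
```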
Llama 3.1 405B Instruct was released on July 23, 2024.
The knowledge cut-off date for Llama 3.1 405B Instruct is December 1, 2023.
No, Llama 3.1 405B Instruct is a text-only model and does not support vision capabilities.
Yes, Llama 3.1 405B Instruct supports tool calling (functions).
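Tool definitions are typically passed to hosted Llama 3.1 endpoints as JSON schemas in the OpenAI-compatible format; the `get_weather` function below and its parameters are purely illustrative, not a documented API.

```python
import json

# An illustrative tool (function) definition in the OpenAI-compatible schema
# that many Llama 3.1 hosts accept. Name and parameters are hypothetical.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# The schema is serialized as JSON and sent in the request's `tools` array.
print(json.dumps(get_weather_tool, indent=2))
```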
Yes, Llama 3.1 405B Instruct is multilingual and can handle both input and output in several languages.
Yes, Llama 3.1 405B Instruct supports fine-tuning.
You can find the official documentation for Llama 3.1 405B Instruct on Meta’s GitHub page: Llama 3.1 405B Instruct Documentation
PromptHub is a better way to test, manage, and deploy prompts for your AI products.