A multilingual, text-only, instruction-tuned model.

- Provider: the company that provides the model
- Context window: the number of tokens you can send in a prompt
- Maximum output: the maximum number of tokens a model can generate in one request
- Input cost: the cost of prompt tokens sent to the model
- Output cost: the cost of output tokens generated by the model
- Knowledge cutoff: when the model's knowledge ends
- Release date: when the model was launched
- Function calling: capability for the model to use external tools
- Vision: ability to process and analyze visual inputs, such as images
- Multilingual: support for multiple languages
- Fine-tuning: whether the model supports fine-tuning on custom datasets
Llama 3.3 70B is a multilingual, text-only, instruction-tuned model designed for efficient language understanding and task execution across various languages.
Llama 3.3 70B supports a context window of up to 128,000 tokens, allowing it to process a large amount of input data.
Llama 3.3 70B can generate up to 2,048 tokens in a single output, making it suitable for generating detailed responses.
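The two limits above can be combined into a simple pre-flight budget check. This is a minimal sketch: the 4-characters-per-token ratio is a rough rule of thumb for English text, not the model's real tokenizer, so use an actual tokenizer for precise counts.

```python
# Rough token-budget check before sending a prompt to Llama 3.3 70B.
# Assumption: ~4 characters per token, a crude heuristic, not the
# model's real tokenizer.

CONTEXT_WINDOW = 128_000   # Llama 3.3 70B context window (tokens)
MAX_OUTPUT = 2_048         # maximum tokens per generation

def estimate_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, reserved_output: int = MAX_OUTPUT) -> bool:
    """True if the prompt plus room for the output fits the window."""
    return estimate_tokens(prompt) + reserved_output <= CONTEXT_WINDOW

print(fits_in_context("Summarize this quarterly report."))  # short prompt fits
```

Reserving the full 2,048-token output budget up front avoids requests that would be truncated mid-generation.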
Llama 3.3 70B was released on December 5, 2024.
The knowledge cut-off date for Llama 3.3 70B is December 1, 2023.
No, Llama 3.3 70B is a text-only model and does not support vision capabilities.
Yes, Llama 3.3 70B supports function calling and tool use.
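Many providers serve Llama 3.3 70B behind an OpenAI-compatible chat API, where tools are declared as JSON schemas in the request body. The sketch below builds such a request payload; the model identifier string and the `get_weather` tool are illustrative assumptions, so check your provider's documentation for the exact names.

```python
import json

# Sketch of a tool-calling request body in the OpenAI-compatible chat
# format many Llama 3.3 70B providers expose. The model name below is
# a hypothetical identifier, and get_weather is an illustrative tool.

def build_tool_request(user_message: str) -> dict:
    return {
        "model": "llama-3.3-70b-instruct",  # assumption: provider-specific name
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Get the current weather for a city",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }

payload = build_tool_request("What's the weather in Paris?")
print(json.dumps(payload, indent=2))
```

If the model decides a tool is needed, the response contains a tool call with the function name and JSON arguments rather than plain text; your code runs the tool and sends the result back in a follow-up message.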
Yes, Llama 3.3 70B supports multiple languages, allowing it to handle input and output in a variety of languages.
Yes, Llama 3.3 70B can be fine-tuned for specific tasks and applications.
You can find the official documentation for Llama 3.3 70B on Meta's Llama documentation site.
PromptHub is a better way to test, manage, and deploy prompts for your AI products.