A variant of the Llama 2 models optimized for dialogue use cases. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture.
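Llama 2 Chat models expect prompts wrapped in Meta's instruction format, with `[INST] ... [/INST]` around the user turn and an optional `<<SYS>> ... <</SYS>>` block for the system prompt. A minimal single-turn sketch (the helper name is illustrative; the BOS token `<s>` is omitted because tokenizers typically add it):

```python
def build_llama2_chat_prompt(user_message: str, system_prompt: str = "") -> str:
    """Wrap a single-turn message in the Llama 2 chat template.

    Template: [INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]
    """
    if system_prompt:
        return f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"
    return f"[INST] {user_message} [/INST]"

print(build_llama2_chat_prompt("What is the capital of France?",
                               "You are a helpful assistant."))
```

For multi-turn conversations, each prior user/assistant exchange is wrapped in its own `[INST] ... [/INST]` pair followed by the assistant's reply; see Meta's llama repository for the full format.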
Provider: The company that provides the model
Context window: The number of tokens you can send in a prompt
Maximum output tokens: The maximum number of tokens the model can generate in one request
Input token cost: The cost of prompt tokens sent to the model
Output token cost: The cost of output tokens generated by the model
Knowledge cutoff: The date at which the model's training data ends
Release date: When the model was launched
Tool calling: Whether the model can use external tools
Vision: Whether the model can process and analyze visual inputs, such as images
Multilingual: Whether the model supports multiple languages
Fine-tuning: Whether the model supports fine-tuning on custom datasets
When hosted on Azure, Llama 2 Chat 13B costs $0.52 per million input tokens and $0.67 per million output tokens (equivalently, $0.00052 per 1,000 input tokens and $0.00067 per 1,000 output tokens).
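Using the Azure rates above, the cost of a request is a simple per-token calculation. A small sketch (the function name is illustrative):

```python
def llama2_13b_azure_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request to Llama 2 Chat 13B on Azure.

    Rates from the pricing above: $0.52 / 1M input tokens,
    $0.67 / 1M output tokens.
    """
    INPUT_RATE = 0.52 / 1_000_000   # USD per input token
    OUTPUT_RATE = 0.67 / 1_000_000  # USD per output token
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 3,000-token prompt with a 500-token completion
print(f"${llama2_13b_azure_cost(3000, 500):.6f}")
```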
Llama 2 Chat 13B supports a context window of up to 4,096 tokens.
Llama 2 Chat 13B can generate up to 2,048 tokens in a single output.
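Since generated tokens typically share the 4,096-token context window with the prompt, a prompt must leave room for the completion. A sketch of that budgeting, assuming prompt and output together must fit in the window (the function name is illustrative):

```python
def max_prompt_tokens(desired_output_tokens: int,
                      context_window: int = 4096,
                      max_output: int = 2048) -> int:
    """Return the largest prompt size that still leaves room for the
    desired completion, given Llama 2 Chat 13B's limits."""
    # The model cannot generate more than max_output tokens per request.
    reserved = min(desired_output_tokens, max_output)
    return context_window - reserved

print(max_prompt_tokens(512))   # room left after reserving 512 output tokens
```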
Llama 2 Chat 13B was released on July 18, 2023.
The knowledge cut-off date for Llama 2 Chat 13B is September 1, 2022.
No, Llama 2 Chat 13B is a text-only model and does not support vision capabilities.
No, Llama 2 Chat 13B does not support tool calling or functions.
No, Llama 2 Chat 13B does not officially support multiple languages; Meta states it is intended for use in English.
Yes, Llama 2 Chat 13B supports fine-tuning.
You can find the official documentation for Llama 2 Chat 13B on Meta’s GitHub page: Llama 2 Chat 13B Documentation