Llama 3.1 405B Instruct overview

Provider

The company that provides the model

Meta

Context window

The number of tokens you can send in a prompt

128k tokens

Maximum output

The maximum number of tokens a model can generate in one request

2,048 tokens

Input token cost

The cost of prompt tokens sent to the model

$5.33 / 1M input tokens (when hosted on Azure)

Output token cost

The cost of output tokens generated by the model

$16.00 / 1M output tokens (when hosted on Azure)

Knowledge cut-off date

When the model's knowledge ends

December 1, 2023

Release date

When the model was launched

July 23, 2024

Llama 3.1 405B Instruct functionality

Function (tool calling) support

Capability for the model to use external tools

Yes

Vision support

Ability to process and analyze visual inputs, like images

No

Multilingual

Support for multiple languages

Yes

Fine-tuning

Whether the model supports fine-tuning on custom datasets

Yes

Common questions about Llama 3.1 405B Instruct

How much does Llama 3.1 405B Instruct cost?

Llama 3.1 405B Instruct costs $5.33 per million input tokens and $16.00 per million output tokens when hosted on Azure.

What is the API cost for Llama 3.1 405B Instruct?

The API cost for Llama 3.1 405B Instruct is $5.33 per million input tokens and $16.00 per million output tokens when hosted on Azure.

What is the price per token for Llama 3.1 405B Instruct?

For Llama 3.1 405B Instruct, the price is $0.00533 per 1,000 input tokens and $0.016 per 1,000 output tokens when hosted on Azure.
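A minimal sketch of estimating request cost from these Azure-hosted rates. The function name and the example token counts are illustrative; in practice, token counts come from your provider's usage metadata.

```python
# Rough cost estimate at the Azure-hosted rates listed above.
INPUT_COST_PER_MILLION = 5.33    # USD per 1M input tokens
OUTPUT_COST_PER_MILLION = 16.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (
        input_tokens * INPUT_COST_PER_MILLION / 1_000_000
        + output_tokens * OUTPUT_COST_PER_MILLION / 1_000_000
    )

# Example: a 3,000-token prompt with a 500-token completion.
print(f"${estimate_cost(3_000, 500):.4f}")  # ≈ $0.0240
```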

What is the context window for Llama 3.1 405B Instruct?

Llama 3.1 405B Instruct supports a context window of up to 128,000 tokens.
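For a quick pre-flight check against the 128,000-token window, a sketch like the one below can help. It uses a rough 4-characters-per-token heuristic rather than the actual Llama tokenizer, so treat the result as an estimate only; exact counts require the model's own tokenizer.

```python
# Approximate check that a prompt (plus a reserved completion budget) fits
# inside the 128k-token context window. The 4-chars-per-token heuristic is
# an assumption, not the real Llama tokenizer.
CONTEXT_WINDOW = 128_000

def estimated_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English text)."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, reserved_for_output: int = 2_048) -> bool:
    """True if the prompt plus the reserved output budget stays under the window."""
    return estimated_tokens(prompt) + reserved_for_output <= CONTEXT_WINDOW

prompt = "Summarize the following document..."  # placeholder prompt
print(fits_in_context(prompt))  # True
```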

What is the maximum output length for Llama 3.1 405B Instruct?

Llama 3.1 405B Instruct can generate up to 2,048 tokens in a single output.
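A minimal sketch of capping generation at that 2,048-token limit through an OpenAI-compatible chat completions endpoint. The base URL, API key variable, and model identifier are placeholders; they vary by hosting provider (Azure AI, a gateway, etc.).

```python
# Cap generation at the model's 2,048-token output limit.
import os
from openai import OpenAI  # any OpenAI-compatible client works here

client = OpenAI(
    base_url="https://your-endpoint.example.com/v1",  # placeholder endpoint
    api_key=os.environ["LLAMA_API_KEY"],              # placeholder env var
)

response = client.chat.completions.create(
    model="llama-3.1-405b-instruct",  # model identifier varies by provider
    messages=[{"role": "user", "content": "Write a short product announcement."}],
    max_tokens=2048,                  # cannot exceed the 2,048-token output cap
)
print(response.choices[0].message.content)
```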

When was Llama 3.1 405B Instruct released?

Llama 3.1 405B Instruct was released on July 23, 2024.

How recent is the training data for Llama 3.1 405B Instruct?

The knowledge cut-off date for Llama 3.1 405B Instruct is December 1, 2023.

Does Llama 3.1 405B Instruct support vision capabilities?

No, Llama 3.1 405B Instruct is a text-only model and does not support vision capabilities.

Can Llama 3.1 405B Instruct perform tool calling or functions?

Yes, Llama 3.1 405B Instruct supports tool calling (functions).
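A minimal tool-calling sketch through an OpenAI-compatible endpoint. The endpoint, model name, and the get_weather tool are illustrative assumptions, and the exact tool-calling wire format can differ between hosting providers.

```python
# Tool-calling sketch: the model may respond with a structured tool call
# (name + JSON arguments) instead of plain text.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://your-endpoint.example.com/v1",  # placeholder endpoint
    api_key=os.environ["LLAMA_API_KEY"],              # placeholder env var
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="llama-3.1-405b-instruct",  # model identifier varies by provider
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chose to call the tool, the call appears here.
print(response.choices[0].message.tool_calls)
```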

Is Llama 3.1 405B Instruct a multilingual model?

Yes, Llama 3.1 405B Instruct supports multiple languages, allowing it to handle input and output in several languages.

Does Llama 3.1 405B Instruct support fine-tuning?

Yes, Llama 3.1 405B Instruct supports fine-tuning.

Where can I find the official documentation for Llama 3.1 405B Instruct?

You can find the official documentation for Llama 3.1 405B Instruct on Meta’s GitHub page: Llama 3.1 405B Instruct Documentation

Better LLM outputs are a click away

PromptHub is a better way to test, manage, and deploy prompts for your AI products.