Dataset schema (one record per model listing):

id: string (9–61 chars)
canonical_slug: string (11–50 chars)
hugging_face_id: string (0–56 chars)
name: string (8–54 chars)
created: int64 Unix timestamp (range ≈1.69B–1.78B)
description: string (67–330 chars)
context_length: int64 (range 2.82k–2M)
architecture: dict
pricing: dict
top_provider: dict
per_request_limits: null
supported_parameters: list (0–22 items)
default_parameters: dict
knowledge_cutoff: string (26 distinct values)
expiration_date: string (4 distinct values)
links: dict
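The flattened rows below follow this schema. As a minimal sketch — assuming each row deserializes into a plain dict keyed by the field names above — a record can be inspected like so; only a subset of fields is shown, and the `is_free` helper is a hypothetical convenience, not part of the dataset:

```python
from datetime import datetime, timezone

# One row of the dataset, assumed to arrive as a plain dict.
# "created" is a Unix timestamp in seconds; pricing values are
# decimal strings giving USD per single token.
record = {
    "id": "meta-llama/llama-3.2-3b-instruct:free",
    "created": 1727222400,
    "context_length": 131072,
    "pricing": {"prompt": "0", "completion": "0"},
}

def is_free(rec: dict) -> bool:
    """A listing is free when both prompt and completion prices are zero."""
    p = rec["pricing"]
    return float(p["prompt"]) == 0.0 and float(p["completion"]) == 0.0

release = datetime.fromtimestamp(record["created"], tz=timezone.utc).date()
print(release, is_free(record))  # 2024-09-25 True
```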
id: meta-llama/llama-3.2-3b-instruct:free
canonical_slug: meta-llama/llama-3.2-3b-instruct
hugging_face_id: meta-llama/Llama-3.2-3B-Instruct
name: Meta: Llama 3.2 3B Instruct (free)
created: 1727222400
description: Llama 3.2 3B is a 3-billion-parameter multilingual large language model, optimized for advanced natural language processing tasks like dialogue generation, reasoning, and summarization. Designed with the latest transformer architecture, it...
context_length: 131072
architecture: { "input_modalities": [ "text" ], "instruct_type": "llama3", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Llama3" }
pricing: { "audio": null, "completion": "0", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0", "web_search": null }
top_provider: { "context_length": 131072, "is_moderated": false, "max_completion_tokens": null }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "max_tokens", "presence_penalty", "stop", "temperature", "top_k", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-12-31
expiration_date: null
links: { "details": "/api/v1/models/meta-llama/llama-3.2-3b-instruct/endpoints" }

id: meta-llama/llama-3.2-3b-instruct
canonical_slug: meta-llama/llama-3.2-3b-instruct
hugging_face_id: meta-llama/Llama-3.2-3B-Instruct
name: Meta: Llama 3.2 3B Instruct
created: 1727222400
description: Llama 3.2 3B is a 3-billion-parameter multilingual large language model, optimized for advanced natural language processing tasks like dialogue generation, reasoning, and summarization. Designed with the latest transformer architecture, it...
context_length: 80000
architecture: { "input_modalities": [ "text" ], "instruct_type": "llama3", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Llama3" }
pricing: { "audio": null, "completion": "0.00000034", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.000000051", "web_search": null }
top_provider: { "context_length": 80000, "is_moderated": false, "max_completion_tokens": null }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "max_tokens", "presence_penalty", "repetition_penalty", "seed", "temperature", "top_k", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-12-31
expiration_date: null
links: { "details": "/api/v1/models/meta-llama/llama-3.2-3b-instruct/endpoints" }
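If the pricing strings are USD per single token — an assumption consistent with the magnitudes in these records — per-million-token costs fall out of a single multiplication. The paid Llama 3.2 3B listing prices out like this:

```python
def usd_per_million(price_per_token: str) -> float:
    # pricing fields arrive as decimal strings giving USD per single token
    return float(price_per_token) * 1_000_000

# Values from the paid meta-llama/llama-3.2-3b-instruct record above
prompt_cost = usd_per_million("0.000000051")     # ≈ $0.051 per 1M prompt tokens
completion_cost = usd_per_million("0.00000034")  # ≈ $0.34 per 1M completion tokens
```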

id: qwen/qwen-2.5-72b-instruct
canonical_slug: qwen/qwen-2.5-72b-instruct
hugging_face_id: Qwen/Qwen2.5-72B-Instruct
name: Qwen2.5 72B Instruct
created: 1726704000
description: Qwen2.5 72B is the latest series of Qwen large language models. Qwen2.5 brings the following improvements upon Qwen2: - Significantly more knowledge and has greatly improved capabilities in coding and...
context_length: 32768
architecture: { "input_modalities": [ "text" ], "instruct_type": "chatml", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Qwen" }
pricing: { "audio": null, "completion": "0.00000039", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000012", "web_search": null }
top_provider: { "context_length": 32768, "is_moderated": false, "max_completion_tokens": 16384 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "max_tokens", "min_p", "presence_penalty", "repetition_penalty", "response_format", "seed", "stop", "temperature", "tool_choice", "tools", "top_k", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2024-06-30
expiration_date: null
links: { "details": "/api/v1/models/qwen/qwen-2.5-72b-instruct/endpoints" }

id: cohere/command-r-plus-08-2024
canonical_slug: cohere/command-r-plus-08-2024
hugging_face_id: null
name: Cohere: Command R+ (08-2024)
created: 1724976000
description: command-r-plus-08-2024 is an update of the [Command R+](/models/cohere/command-r-plus) with roughly 50% higher throughput and 25% lower latencies as compared to the previous Command R+ version, while keeping the hardware footprint...
context_length: 128000
architecture: { "input_modalities": [ "text" ], "instruct_type": null, "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Cohere" }
pricing: { "audio": null, "completion": "0.00001", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.0000025", "web_search": null }
top_provider: { "context_length": 128000, "is_moderated": true, "max_completion_tokens": 4000 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "max_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_k", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2024-03-31
expiration_date: null
links: { "details": "/api/v1/models/cohere/command-r-plus-08-2024/endpoints" }
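The `links.details` values are relative API paths. A sketch of resolving one into a full URL — the `https://openrouter.ai` host is an assumption, since the dump records only the path:

```python
API_HOST = "https://openrouter.ai"  # assumed host; the dataset stores only relative paths

def details_url(links: dict) -> str:
    """Join the assumed API host with a record's relative details path."""
    return API_HOST + links["details"]

url = details_url({"details": "/api/v1/models/cohere/command-r-plus-08-2024/endpoints"})
```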

id: cohere/command-r-08-2024
canonical_slug: cohere/command-r-08-2024
hugging_face_id: null
name: Cohere: Command R (08-2024)
created: 1724976000
description: command-r-08-2024 is an update of the [Command R](/models/cohere/command-r) with improved performance for multilingual retrieval-augmented generation (RAG) and tool use. More broadly, it is better at math, code and reasoning and...
context_length: 128000
architecture: { "input_modalities": [ "text" ], "instruct_type": null, "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Cohere" }
pricing: { "audio": null, "completion": "0.0000006", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000015", "web_search": null }
top_provider: { "context_length": 128000, "is_moderated": true, "max_completion_tokens": 4000 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "max_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_k", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2024-03-31
expiration_date: null
links: { "details": "/api/v1/models/cohere/command-r-08-2024/endpoints" }

id: sao10k/l3.1-euryale-70b
canonical_slug: sao10k/l3.1-euryale-70b
hugging_face_id: Sao10K/L3.1-70B-Euryale-v2.2
name: Sao10K: Llama 3.1 Euryale 70B v2.2
created: 1724803200
description: Euryale L3.1 70B v2.2 is a model focused on creative roleplay from [Sao10k](https://ko-fi.com/sao10k). It is the successor of [Euryale L3 70B v2.1](/models/sao10k/l3-euryale-70b).
context_length: 131072
architecture: { "input_modalities": [ "text" ], "instruct_type": "llama3", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Llama3" }
pricing: { "audio": null, "completion": "0.00000085", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000085", "web_search": null }
top_provider: { "context_length": 131072, "is_moderated": false, "max_completion_tokens": 16384 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "max_tokens", "min_p", "presence_penalty", "repetition_penalty", "response_format", "seed", "stop", "temperature", "tool_choice", "tools", "top_k", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-12-31
expiration_date: null
links: { "details": "/api/v1/models/sao10k/l3.1-euryale-70b/endpoints" }

id: nousresearch/hermes-3-llama-3.1-70b
canonical_slug: nousresearch/hermes-3-llama-3.1-70b
hugging_face_id: NousResearch/Hermes-3-Llama-3.1-70B
name: Nous: Hermes 3 70B Instruct
created: 1723939200
description: Hermes 3 is a generalist language model with many improvements over [Hermes 2](/models/nousresearch/nous-hermes-2-mistral-7b-dpo), including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the...
context_length: 131072
architecture: { "input_modalities": [ "text" ], "instruct_type": "chatml", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Llama3" }
pricing: { "audio": null, "completion": "0.0000003", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.0000003", "web_search": null }
top_provider: { "context_length": 131072, "is_moderated": false, "max_completion_tokens": null }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "max_tokens", "min_p", "presence_penalty", "repetition_penalty", "response_format", "seed", "stop", "temperature", "top_k", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-12-31
expiration_date: null
links: { "details": "/api/v1/models/nousresearch/hermes-3-llama-3.1-70b/endpoints" }

id: nousresearch/hermes-3-llama-3.1-405b:free
canonical_slug: nousresearch/hermes-3-llama-3.1-405b
hugging_face_id: NousResearch/Hermes-3-Llama-3.1-405B
name: Nous: Hermes 3 405B Instruct (free)
created: 1723766400
description: Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the...
context_length: 131072
architecture: { "input_modalities": [ "text" ], "instruct_type": "chatml", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Llama3" }
pricing: { "audio": null, "completion": "0", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0", "web_search": null }
top_provider: { "context_length": 131072, "is_moderated": false, "max_completion_tokens": null }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "max_tokens", "presence_penalty", "stop", "temperature", "top_k", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-12-31
expiration_date: null
links: { "details": "/api/v1/models/nousresearch/hermes-3-llama-3.1-405b/endpoints" }

id: nousresearch/hermes-3-llama-3.1-405b
canonical_slug: nousresearch/hermes-3-llama-3.1-405b
hugging_face_id: NousResearch/Hermes-3-Llama-3.1-405B
name: Nous: Hermes 3 405B Instruct
created: 1723766400
description: Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the...
context_length: 131072
architecture: { "input_modalities": [ "text" ], "instruct_type": "chatml", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Llama3" }
pricing: { "audio": null, "completion": "0.000001", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.000001", "web_search": null }
top_provider: { "context_length": 131072, "is_moderated": false, "max_completion_tokens": 16384 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "max_tokens", "min_p", "presence_penalty", "repetition_penalty", "response_format", "seed", "stop", "temperature", "top_k", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-12-31
expiration_date: null
links: { "details": "/api/v1/models/nousresearch/hermes-3-llama-3.1-405b/endpoints" }

id: sao10k/l3-lunaris-8b
canonical_slug: sao10k/l3-lunaris-8b
hugging_face_id: Sao10K/L3-8B-Lunaris-v1
name: Sao10K: Llama 3 8B Lunaris
created: 1723507200
description: Lunaris 8B is a versatile generalist and roleplaying model based on Llama 3. It's a strategic merge of multiple models, designed to balance creativity with improved logic and general knowledge....
context_length: 8192
architecture: { "input_modalities": [ "text" ], "instruct_type": "llama3", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Llama3" }
pricing: { "audio": null, "completion": "0.00000005", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000004", "web_search": null }
top_provider: { "context_length": 8192, "is_moderated": false, "max_completion_tokens": null }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "max_tokens", "min_p", "presence_penalty", "repetition_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "top_k", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-12-31
expiration_date: null
links: { "details": "/api/v1/models/sao10k/l3-lunaris-8b/endpoints" }

id: openai/gpt-4o-2024-08-06
canonical_slug: openai/gpt-4o-2024-08-06
hugging_face_id: null
name: OpenAI: GPT-4o (2024-08-06)
created: 1722902400
description: The 2024-08-06 version of GPT-4o offers improved performance in structured outputs, with the ability to supply a JSON schema in the response_format. Read more [here](https://openai.com/index/introducing-structured-outputs-in-the-api/). GPT-4o ("o" for "omni") is...
context_length: 128000
architecture: { "input_modalities": [ "text", "image", "file" ], "instruct_type": null, "modality": "text+image+file->text", "output_modalities": [ "text" ], "tokenizer": "GPT" }
pricing: { "audio": null, "completion": "0.00001", "image": null, "input_cache_read": "0.00000125", "input_cache_write": null, "internal_reasoning": null, "prompt": "0.0000025", "web_search": null }
top_provider: { "context_length": 128000, "is_moderated": false, "max_completion_tokens": 16384 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "logit_bias", "logprobs", "max_completion_tokens", "max_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_logprobs", "top_p", "web_search_options" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-10-31
expiration_date: null
links: { "details": "/api/v1/models/openai/gpt-4o-2024-08-06/endpoints" }

id: meta-llama/llama-3.1-8b-instruct
canonical_slug: meta-llama/llama-3.1-8b-instruct
hugging_face_id: meta-llama/Meta-Llama-3.1-8B-Instruct
name: Meta: Llama 3.1 8B Instruct
created: 1721692800
description: Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 8B instruct-tuned version is fast and efficient. It has demonstrated strong performance compared to...
context_length: 16384
architecture: { "input_modalities": [ "text" ], "instruct_type": "llama3", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Llama3" }
pricing: { "audio": null, "completion": "0.00000005", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000002", "web_search": null }
top_provider: { "context_length": 16384, "is_moderated": false, "max_completion_tokens": 16384 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "logprobs", "max_tokens", "min_p", "presence_penalty", "repetition_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_k", "top_logprobs", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-12-31
expiration_date: null
links: { "details": "/api/v1/models/meta-llama/llama-3.1-8b-instruct/endpoints" }

id: meta-llama/llama-3.1-70b-instruct
canonical_slug: meta-llama/llama-3.1-70b-instruct
hugging_face_id: meta-llama/Meta-Llama-3.1-70B-Instruct
name: Meta: Llama 3.1 70B Instruct
created: 1721692800
description: Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 70B instruct-tuned version is optimized for high quality dialogue use cases. It has demonstrated strong...
context_length: 131072
architecture: { "input_modalities": [ "text" ], "instruct_type": "llama3", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Llama3" }
pricing: { "audio": null, "completion": "0.0000004", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.0000004", "web_search": null }
top_provider: { "context_length": 131072, "is_moderated": false, "max_completion_tokens": null }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "max_tokens", "min_p", "presence_penalty", "repetition_penalty", "response_format", "seed", "stop", "temperature", "tool_choice", "tools", "top_k", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-12-31
expiration_date: null
links: { "details": "/api/v1/models/meta-llama/llama-3.1-70b-instruct/endpoints" }

id: mistralai/mistral-nemo
canonical_slug: mistralai/mistral-nemo
hugging_face_id: mistralai/Mistral-Nemo-Instruct-2407
name: Mistral: Mistral Nemo
created: 1721347200
description: A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese,...
context_length: 131072
architecture: { "input_modalities": [ "text" ], "instruct_type": "mistral", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Mistral" }
pricing: { "audio": null, "completion": "0.00000004", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000002", "web_search": null }
top_provider: { "context_length": 131072, "is_moderated": false, "max_completion_tokens": 16384 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "max_tokens", "min_p", "presence_penalty", "repetition_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_k", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": 0.3, "top_k": null, "top_p": null }
knowledge_cutoff: 2024-04-30
expiration_date: null
links: { "details": "/api/v1/models/mistralai/mistral-nemo/endpoints" }

id: openai/gpt-4o-mini
canonical_slug: openai/gpt-4o-mini
hugging_face_id: null
name: OpenAI: GPT-4o-mini
created: 1721260800
description: GPT-4o mini is OpenAI's newest model after [GPT-4 Omni](/models/openai/gpt-4o), supporting both text and image inputs with text outputs. As their most advanced small model, it is many multiples more affordable...
context_length: 128000
architecture: { "input_modalities": [ "text", "image", "file" ], "instruct_type": null, "modality": "text+image+file->text", "output_modalities": [ "text" ], "tokenizer": "GPT" }
pricing: { "audio": null, "completion": "0.0000006", "image": null, "input_cache_read": "0.000000075", "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000015", "web_search": null }
top_provider: { "context_length": 128000, "is_moderated": false, "max_completion_tokens": 16384 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "logit_bias", "logprobs", "max_completion_tokens", "max_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_logprobs", "top_p", "web_search_options" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-10-31
expiration_date: null
links: { "details": "/api/v1/models/openai/gpt-4o-mini/endpoints" }

id: openai/gpt-4o-mini-2024-07-18
canonical_slug: openai/gpt-4o-mini-2024-07-18
hugging_face_id: null
name: OpenAI: GPT-4o-mini (2024-07-18)
created: 1721260800
description: GPT-4o mini is OpenAI's newest model after [GPT-4 Omni](/models/openai/gpt-4o), supporting both text and image inputs with text outputs. As their most advanced small model, it is many multiples more affordable...
context_length: 128000
architecture: { "input_modalities": [ "text", "image", "file" ], "instruct_type": null, "modality": "text+image+file->text", "output_modalities": [ "text" ], "tokenizer": "GPT" }
pricing: { "audio": null, "completion": "0.0000006", "image": null, "input_cache_read": "0.000000075", "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000015", "web_search": null }
top_provider: { "context_length": 128000, "is_moderated": true, "max_completion_tokens": 16384 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "logit_bias", "logprobs", "max_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_logprobs", "top_p", "web_search_options" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-10-31
expiration_date: null
links: { "details": "/api/v1/models/openai/gpt-4o-mini-2024-07-18/endpoints" }

id: google/gemma-2-27b-it
canonical_slug: google/gemma-2-27b-it
hugging_face_id: google/gemma-2-27b-it
name: Google: Gemma 2 27B
created: 1720828800
description: Gemma 2 27B by Google is an open model built from the same research and technology used to create the [Gemini models](/models?q=gemini). Gemma models are well-suited for a variety of...
context_length: 8192
architecture: { "input_modalities": [ "text" ], "instruct_type": "gemma", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Gemini" }
pricing: { "audio": null, "completion": "0.00000065", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000065", "web_search": null }
top_provider: { "context_length": 8192, "is_moderated": false, "max_completion_tokens": 2048 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "max_tokens", "presence_penalty", "response_format", "stop", "structured_outputs", "temperature", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2024-06-30
expiration_date: null
links: { "details": "/api/v1/models/google/gemma-2-27b-it/endpoints" }

id: sao10k/l3-euryale-70b
canonical_slug: sao10k/l3-euryale-70b
hugging_face_id: Sao10K/L3-70B-Euryale-v2.1
name: Sao10k: Llama 3 Euryale 70B v2.1
created: 1718668800
description: Euryale 70B v2.1 is a model focused on creative roleplay from [Sao10k](https://ko-fi.com/sao10k). - Better prompt adherence. - Better anatomy / spatial awareness. - Adapts much better to unique and custom...
context_length: 8192
architecture: { "input_modalities": [ "text" ], "instruct_type": "llama3", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Llama3" }
pricing: { "audio": null, "completion": "0.00000148", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000148", "web_search": null }
top_provider: { "context_length": 8192, "is_moderated": false, "max_completion_tokens": 8192 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "max_tokens", "presence_penalty", "repetition_penalty", "seed", "stop", "temperature", "tool_choice", "tools", "top_k", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-12-31
expiration_date: null
links: { "details": "/api/v1/models/sao10k/l3-euryale-70b/endpoints" }

id: nousresearch/hermes-2-pro-llama-3-8b
canonical_slug: nousresearch/hermes-2-pro-llama-3-8b
hugging_face_id: NousResearch/Hermes-2-Pro-Llama-3-8B
name: NousResearch: Hermes 2 Pro - Llama-3 8B
created: 1716768000
description: Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced...
context_length: 8192
architecture: { "input_modalities": [ "text" ], "instruct_type": "chatml", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Llama3" }
pricing: { "audio": null, "completion": "0.00000014", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000014", "web_search": null }
top_provider: { "context_length": 8192, "is_moderated": false, "max_completion_tokens": 8192 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "max_tokens", "presence_penalty", "repetition_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "top_k", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-12-31
expiration_date: null
links: { "details": "/api/v1/models/nousresearch/hermes-2-pro-llama-3-8b/endpoints" }

id: openai/gpt-4o
canonical_slug: openai/gpt-4o
hugging_face_id: null
name: OpenAI: GPT-4o
created: 1715558400
description: GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while being twice as...
context_length: 128000
architecture: { "input_modalities": [ "text", "image", "file" ], "instruct_type": null, "modality": "text+image+file->text", "output_modalities": [ "text" ], "tokenizer": "GPT" }
pricing: { "audio": null, "completion": "0.00001", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.0000025", "web_search": null }
top_provider: { "context_length": 128000, "is_moderated": false, "max_completion_tokens": 16384 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "logit_bias", "logprobs", "max_completion_tokens", "max_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_logprobs", "top_p", "web_search_options" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-10-31
expiration_date: null
links: { "details": "/api/v1/models/openai/gpt-4o/endpoints" }
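The supported_parameters list doubles as a capability flag set: tool calling, for instance, is only usable where "tools" appears. A small filtering sketch over hypothetical in-memory records (the ids are taken from rows in this dump, but the dicts here are abbreviated for illustration):

```python
def supports_tools(rec: dict) -> bool:
    # capability check: tool calling requires "tools" in supported_parameters
    return "tools" in rec.get("supported_parameters", [])

models = [
    {"id": "openai/gpt-4o",
     "supported_parameters": ["tools", "tool_choice", "response_format"]},
    {"id": "nousresearch/hermes-3-llama-3.1-405b:free",
     "supported_parameters": ["max_tokens", "temperature", "top_p"]},
]
tool_capable = [m["id"] for m in models if supports_tools(m)]  # ["openai/gpt-4o"]
```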

id: openai/gpt-4o:extended
canonical_slug: openai/gpt-4o
hugging_face_id: null
name: OpenAI: GPT-4o (extended)
created: 1715558400
description: GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while being twice as...
context_length: 128000
architecture: { "input_modalities": [ "text", "image", "file" ], "instruct_type": null, "modality": "text+image+file->text", "output_modalities": [ "text" ], "tokenizer": "GPT" }
pricing: { "audio": null, "completion": "0.000018", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.000006", "web_search": null }
top_provider: { "context_length": 128000, "is_moderated": true, "max_completion_tokens": 64000 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "logit_bias", "logprobs", "max_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_logprobs", "top_p", "web_search_options" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-10-31
expiration_date: null
links: { "details": "/api/v1/models/openai/gpt-4o/endpoints" }

id: openai/gpt-4o-2024-05-13
canonical_slug: openai/gpt-4o-2024-05-13
hugging_face_id: null
name: OpenAI: GPT-4o (2024-05-13)
created: 1715558400
description: GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/openai/gpt-4-turbo) while being twice as...
context_length: 128000
architecture: { "input_modalities": [ "text", "image", "file" ], "instruct_type": null, "modality": "text+image+file->text", "output_modalities": [ "text" ], "tokenizer": "GPT" }
pricing: { "audio": null, "completion": "0.000015", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.000005", "web_search": null }
top_provider: { "context_length": 128000, "is_moderated": false, "max_completion_tokens": 4096 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "logit_bias", "logprobs", "max_completion_tokens", "max_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_logprobs", "top_p", "web_search_options" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-10-31
expiration_date: null
links: { "details": "/api/v1/models/openai/gpt-4o-2024-05-13/endpoints" }

id: meta-llama/llama-3-8b-instruct
canonical_slug: meta-llama/llama-3-8b-instruct
hugging_face_id: meta-llama/Meta-Llama-3-8B-Instruct
name: Meta: Llama 3 8B Instruct
created: 1713398400
description: Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 8B instruct-tuned version was optimized for high quality dialogue use cases. It has demonstrated strong...
context_length: 8192
architecture: { "input_modalities": [ "text" ], "instruct_type": "llama3", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Llama3" }
pricing: { "audio": null, "completion": "0.00000004", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000003", "web_search": null }
top_provider: { "context_length": 8192, "is_moderated": false, "max_completion_tokens": 16384 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "logit_bias", "max_tokens", "min_p", "presence_penalty", "repetition_penalty", "response_format", "seed", "stop", "temperature", "tool_choice", "tools", "top_k", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-12-31
expiration_date: null
links: { "details": "/api/v1/models/meta-llama/llama-3-8b-instruct/endpoints" }

id: meta-llama/llama-3-70b-instruct
canonical_slug: meta-llama/llama-3-70b-instruct
hugging_face_id: meta-llama/Meta-Llama-3-70B-Instruct
name: Meta: Llama 3 70B Instruct
created: 1713398400
description: Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 70B instruct-tuned version was optimized for high quality dialogue use cases. It has demonstrated strong...
context_length: 8192
architecture: { "input_modalities": [ "text" ], "instruct_type": "llama3", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Llama3" }
pricing: { "audio": null, "completion": "0.00000074", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000051", "web_search": null }
top_provider: { "context_length": 8192, "is_moderated": false, "max_completion_tokens": 8000 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "max_tokens", "presence_penalty", "repetition_penalty", "seed", "stop", "temperature", "top_k", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2023-12-31
expiration_date: null
links: { "details": "/api/v1/models/meta-llama/llama-3-70b-instruct/endpoints" }

id: mistralai/mixtral-8x22b-instruct
canonical_slug: mistralai/mixtral-8x22b-instruct
hugging_face_id: mistralai/Mixtral-8x22B-Instruct-v0.1
name: Mistral: Mixtral 8x22B Instruct
created: 1713312000
description: Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b). It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Its strengths include: - strong math, coding,...
context_length: 65536
architecture: { "input_modalities": [ "text" ], "instruct_type": "mistral", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Mistral" }
pricing: { "audio": null, "completion": "0.000006", "image": null, "input_cache_read": "0.0000002", "input_cache_write": null, "internal_reasoning": null, "prompt": "0.000002", "web_search": null }
top_provider: { "context_length": 65536, "is_moderated": false, "max_completion_tokens": null }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "max_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": 0.3, "top_k": null, "top_p": null }
knowledge_cutoff: 2024-01-31
expiration_date: null
links: { "details": "/api/v1/models/mistralai/mixtral-8x22b-instruct/endpoints" }

id: microsoft/wizardlm-2-8x22b
canonical_slug: microsoft/wizardlm-2-8x22b
hugging_face_id: microsoft/WizardLM-2-8x22B
name: WizardLM-2 8x22B
created: 1713225600
description: WizardLM-2 8x22B is Microsoft AI's most advanced Wizard model. It demonstrates highly competitive performance compared to leading proprietary models, and it consistently outperforms all existing state-of-the-art open-source models. It is...
context_length: 65535
architecture: { "input_modalities": [ "text" ], "instruct_type": "vicuna", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Mistral" }
pricing: { "audio": null, "completion": "0.00000062", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000062", "web_search": null }
top_provider: { "context_length": 65535, "is_moderated": false, "max_completion_tokens": 8000 }
per_request_limits: null
supported_parameters: [ "frequency_penalty", "max_tokens", "presence_penalty", "repetition_penalty", "seed", "stop", "temperature", "top_k", "top_p" ]
default_parameters: { "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
knowledge_cutoff: 2024-04-30
expiration_date: null
links: { "details": "/api/v1/models/microsoft/wizardlm-2-8x22b/endpoints" }
openai/gpt-4-turbo
openai/gpt-4-turbo
null
OpenAI: GPT-4 Turbo
1,712,620,800
The latest GPT-4 Turbo model with vision capabilities. Vision requests can now use JSON mode and function calling. Training data: up to December 2023.
128,000
{ "input_modalities": [ "text", "image" ], "instruct_type": null, "modality": "text+image->text", "output_modalities": [ "text" ], "tokenizer": "GPT" }
{ "audio": null, "completion": "0.00003", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00001", "web_search": null }
{ "context_length": 128000, "is_moderated": true, "max_completion_tokens": 4096 }
null
[ "frequency_penalty", "logit_bias", "logprobs", "max_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_logprobs", "top_p" ]
{ "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
2023-12-31
null
{ "details": "/api/v1/models/openai/gpt-4-turbo/endpoints" }
anthropic/claude-3-haiku
anthropic/claude-3-haiku
null
Anthropic: Claude 3 Haiku
1,710,288,000
Claude 3 Haiku is Anthropic's fastest and most compact model for near-instant responsiveness. Quick and accurate targeted performance. See the launch announcement and benchmark results [here](https://www.anthropic.com/news/claude-3-haiku) #multimodal
200,000
{ "input_modalities": [ "text", "image" ], "instruct_type": null, "modality": "text+image->text", "output_modalities": [ "text" ], "tokenizer": "Claude" }
{ "audio": null, "completion": "0.00000125", "image": null, "input_cache_read": "0.00000003", "input_cache_write": "0.0000003", "internal_reasoning": null, "prompt": "0.00000025", "web_search": null }
{ "context_length": 200000, "is_moderated": true, "max_completion_tokens": 4096 }
null
[ "max_tokens", "stop", "temperature", "tool_choice", "tools", "top_k", "top_p" ]
{ "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
2023-08-31
null
{ "details": "/api/v1/models/anthropic/claude-3-haiku/endpoints" }
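Claude 3 Haiku's record also carries cache pricing: `input_cache_read` (0.00000003) is 12% of the regular prompt price, while `input_cache_write` (0.0000003) is a one-time premium. A hedged sketch (helper name is illustrative) of the prompt-side cost when part of the prompt hits the provider cache:

```python
def cached_prompt_cost(pricing: dict, cached_tokens: int, fresh_tokens: int) -> float:
    """Prompt-side USD cost when `cached_tokens` are served from cache.

    Uses the string-valued "prompt" and "input_cache_read" fields as they
    appear in the listing; cache-write costs are not modeled here.
    """
    return (cached_tokens * float(pricing["input_cache_read"])
            + fresh_tokens * float(pricing["prompt"]))

# Claude 3 Haiku's pricing record, copied from the listing:
haiku = {"prompt": "0.00000025", "input_cache_read": "0.00000003"}

# 10,000 cached prompt tokens plus 2,000 fresh ones:
cost = cached_prompt_cost(haiku, 10_000, 2_000)
# 10000 * 0.00000003 + 2000 * 0.00000025 = 0.0003 + 0.0005 = 0.0008 USD
```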
mistralai/mistral-large
mistralai/mistral-large
null
Mistral Large
1,708,905,600
This is Mistral AI's flagship model, Mistral Large 2 (version `mistral-large-2407`). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch announcement [here](https://mistral.ai/news/mistral-large-2407/)....
128,000
{ "input_modalities": [ "text" ], "instruct_type": null, "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Mistral" }
{ "audio": null, "completion": "0.000006", "image": null, "input_cache_read": "0.0000002", "input_cache_write": null, "internal_reasoning": null, "prompt": "0.000002", "web_search": null }
{ "context_length": 128000, "is_moderated": false, "max_completion_tokens": null }
null
[ "frequency_penalty", "max_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_p" ]
{ "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": 0.3, "top_k": null, "top_p": null }
2024-11-30
null
{ "details": "/api/v1/models/mistralai/mistral-large/endpoints" }
openai/gpt-3.5-turbo-0613
openai/gpt-3.5-turbo-0613
null
OpenAI: GPT-3.5 Turbo (older v0613)
1,706,140,800
GPT-3.5 Turbo is OpenAI's fastest model. It can understand and generate natural language or code, and is optimized for chat and traditional completion tasks. Training data up to Sep 2021.
4,095
{ "input_modalities": [ "text" ], "instruct_type": null, "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "GPT" }
{ "audio": null, "completion": "0.000002", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.000001", "web_search": null }
{ "context_length": 4095, "is_moderated": false, "max_completion_tokens": 4096 }
null
[ "frequency_penalty", "logit_bias", "logprobs", "max_completion_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_logprobs", "top_p" ]
{ "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
2021-09-30
null
{ "details": "/api/v1/models/openai/gpt-3.5-turbo-0613/endpoints" }
openai/gpt-4-turbo-preview
openai/gpt-4-turbo-preview
null
OpenAI: GPT-4 Turbo Preview
1,706,140,800
The preview GPT-4 model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Training data: up to Dec 2023. **Note:** heavily rate limited by OpenAI while...
128,000
{ "input_modalities": [ "text" ], "instruct_type": null, "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "GPT" }
{ "audio": null, "completion": "0.00003", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00001", "web_search": null }
{ "context_length": 128000, "is_moderated": true, "max_completion_tokens": 4096 }
null
[ "frequency_penalty", "logit_bias", "logprobs", "max_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_logprobs", "top_p" ]
{ "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
2023-12-31
null
{ "details": "/api/v1/models/openai/gpt-4-turbo-preview/endpoints" }
mistralai/mixtral-8x7b-instruct
mistralai/mixtral-8x7b-instruct
mistralai/Mixtral-8x7B-Instruct-v0.1
Mistral: Mixtral 8x7B Instruct
1,702,166,400
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts, by Mistral AI, for chat and instruction use. Incorporates 8 experts (feed-forward networks) for a total of 47 billion...
32,768
{ "input_modalities": [ "text" ], "instruct_type": "mistral", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Mistral" }
{ "audio": null, "completion": "0.00000054", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000054", "web_search": null }
{ "context_length": 32768, "is_moderated": false, "max_completion_tokens": 16384 }
null
[ "frequency_penalty", "max_tokens", "min_p", "presence_penalty", "repetition_penalty", "response_format", "seed", "stop", "temperature", "tool_choice", "tools", "top_k", "top_p" ]
{ "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": 0.3, "top_k": null, "top_p": null }
2023-12-31
null
{ "details": "/api/v1/models/mistralai/mixtral-8x7b-instruct/endpoints" }
alpindale/goliath-120b
alpindale/goliath-120b
alpindale/goliath-120b
Goliath 120B
1,699,574,400
A large LLM created by combining two fine-tuned Llama 70B models into one 120B model. Combines Xwin and Euryale. Credits to - [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge...
6,144
{ "input_modalities": [ "text" ], "instruct_type": "airoboros", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Llama2" }
{ "audio": null, "completion": "0.0000075", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000375", "web_search": null }
{ "context_length": 6144, "is_moderated": false, "max_completion_tokens": 1024 }
null
[ "frequency_penalty", "logit_bias", "logprobs", "max_tokens", "min_p", "presence_penalty", "repetition_penalty", "response_format", "seed", "stop", "temperature", "top_a", "top_k", "top_logprobs", "top_p" ]
{ "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
2023-12-31
null
{ "details": "/api/v1/models/alpindale/goliath-120b/endpoints" }
openrouter/auto
openrouter/auto
null
Auto Router
1,699,401,600
Your prompt will be processed by a meta-model and routed to one of dozens of models (see below), optimizing for the best possible output. To see which model was used,...
2,000,000
{ "input_modalities": [ "text", "image", "audio", "file", "video" ], "instruct_type": null, "modality": "text+image+file+audio+video->text+image", "output_modalities": [ "text", "image" ], "tokenizer": "Router" }
{ "audio": null, "completion": "-1", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "-1", "web_search": null }
{ "context_length": null, "is_moderated": false, "max_completion_tokens": null }
null
[ "frequency_penalty", "include_reasoning", "logit_bias", "logprobs", "max_completion_tokens", "max_tokens", "min_p", "presence_penalty", "reasoning", "reasoning_effort", "repetition_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "too...
{ "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
null
null
{ "details": "/api/v1/models/openrouter/auto/endpoints" }
openai/gpt-4-1106-preview
openai/gpt-4-1106-preview
null
OpenAI: GPT-4 Turbo (older v1106)
1,699,228,800
The latest GPT-4 Turbo model with vision capabilities. Vision requests can now use JSON mode and function calling. Training data: up to April 2023.
128,000
{ "input_modalities": [ "text" ], "instruct_type": null, "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "GPT" }
{ "audio": null, "completion": "0.00003", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00001", "web_search": null }
{ "context_length": 128000, "is_moderated": true, "max_completion_tokens": 4096 }
null
[ "frequency_penalty", "logit_bias", "logprobs", "max_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_logprobs", "top_p" ]
{ "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
2023-04-30
null
{ "details": "/api/v1/models/openai/gpt-4-1106-preview/endpoints" }
openai/gpt-3.5-turbo-instruct
openai/gpt-3.5-turbo-instruct
null
OpenAI: GPT-3.5 Turbo Instruct
1,695,859,200
This model is a variant of GPT-3.5 Turbo tuned for instructional prompts, omitting chat-related optimizations. Training data: up to Sep 2021.
4,095
{ "input_modalities": [ "text" ], "instruct_type": "chatml", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "GPT" }
{ "audio": null, "completion": "0.000002", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.0000015", "web_search": null }
{ "context_length": 4095, "is_moderated": true, "max_completion_tokens": 4096 }
null
[ "frequency_penalty", "logit_bias", "logprobs", "max_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "top_logprobs", "top_p" ]
{ "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
2021-09-30
null
{ "details": "/api/v1/models/openai/gpt-3.5-turbo-instruct/endpoints" }
mistralai/mistral-7b-instruct-v0.1
mistralai/mistral-7b-instruct-v0.1
mistralai/Mistral-7B-Instruct-v0.1
Mistral: Mistral 7B Instruct v0.1
1,695,859,200
A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations for speed and context length.
2,824
{ "input_modalities": [ "text" ], "instruct_type": "mistral", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Mistral" }
{ "audio": null, "completion": "0.00000019", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000011", "web_search": null }
{ "context_length": 2824, "is_moderated": false, "max_completion_tokens": null }
null
[ "frequency_penalty", "max_tokens", "presence_penalty", "repetition_penalty", "seed", "temperature", "top_k", "top_p" ]
{ "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": 0.3, "top_k": null, "top_p": null }
2023-09-30
null
{ "details": "/api/v1/models/mistralai/mistral-7b-instruct-v0.1/endpoints" }
openai/gpt-3.5-turbo-16k
openai/gpt-3.5-turbo-16k
null
OpenAI: GPT-3.5 Turbo 16k
1,693,180,800
This model offers four times the context length of gpt-3.5-turbo, allowing it to support approximately 20 pages of text in a single request at a higher cost. Training data: up...
16,385
{ "input_modalities": [ "text" ], "instruct_type": null, "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "GPT" }
{ "audio": null, "completion": "0.000004", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.000003", "web_search": null }
{ "context_length": 16385, "is_moderated": true, "max_completion_tokens": 4096 }
null
[ "frequency_penalty", "logit_bias", "logprobs", "max_completion_tokens", "max_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_logprobs", "top_p" ]
{ "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
2021-09-30
null
{ "details": "/api/v1/models/openai/gpt-3.5-turbo-16k/endpoints" }
mancer/weaver
mancer/weaver
null
Mancer: Weaver (alpha)
1,690,934,400
An attempt to recreate Claude-style verbosity, but don't expect the same level of coherence or memory. Meant for use in roleplay/narrative situations.
8,000
{ "input_modalities": [ "text" ], "instruct_type": "alpaca", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Llama2" }
{ "audio": null, "completion": "0.000001", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000075", "web_search": null }
{ "context_length": 8000, "is_moderated": false, "max_completion_tokens": 2000 }
null
[ "frequency_penalty", "logit_bias", "logprobs", "max_tokens", "min_p", "presence_penalty", "repetition_penalty", "response_format", "seed", "stop", "temperature", "top_a", "top_k", "top_logprobs", "top_p" ]
{ "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
2023-06-30
null
{ "details": "/api/v1/models/mancer/weaver/endpoints" }
undi95/remm-slerp-l2-13b
undi95/remm-slerp-l2-13b
Undi95/ReMM-SLERP-L2-13B
ReMM SLERP 13B
1,689,984,000
A recreation trial of the original MythoMax-L2-13B but with updated models. #merge
6,144
{ "input_modalities": [ "text" ], "instruct_type": "alpaca", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Llama2" }
{ "audio": null, "completion": "0.00000065", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000045", "web_search": null }
{ "context_length": 6144, "is_moderated": false, "max_completion_tokens": 4096 }
null
[ "frequency_penalty", "logit_bias", "logprobs", "max_tokens", "min_p", "presence_penalty", "repetition_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "top_a", "top_k", "top_logprobs", "top_p" ]
{ "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
2023-06-30
null
{ "details": "/api/v1/models/undi95/remm-slerp-l2-13b/endpoints" }
gryphe/mythomax-l2-13b
gryphe/mythomax-l2-13b
Gryphe/MythoMax-L2-13b
MythoMax 13B
1,688,256,000
One of the highest performing and most popular fine-tunes of Llama 2 13B, with rich descriptions and roleplay. #merge
4,096
{ "input_modalities": [ "text" ], "instruct_type": "alpaca", "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "Llama2" }
{ "audio": null, "completion": "0.00000006", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00000006", "web_search": null }
{ "context_length": 4096, "is_moderated": false, "max_completion_tokens": 4096 }
null
[ "frequency_penalty", "logit_bias", "logprobs", "max_tokens", "min_p", "presence_penalty", "repetition_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "top_a", "top_k", "top_logprobs", "top_p" ]
{ "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
2023-06-30
null
{ "details": "/api/v1/models/gryphe/mythomax-l2-13b/endpoints" }
openai/gpt-4
openai/gpt-4
null
OpenAI: GPT-4
1,685,232,000
OpenAI's flagship model, GPT-4 is a large-scale multimodal language model capable of solving difficult problems with greater accuracy than previous models due to its broader general knowledge and advanced reasoning...
8,191
{ "input_modalities": [ "text" ], "instruct_type": null, "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "GPT" }
{ "audio": null, "completion": "0.00006", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00003", "web_search": null }
{ "context_length": 8191, "is_moderated": true, "max_completion_tokens": 4096 }
null
[ "frequency_penalty", "logit_bias", "logprobs", "max_completion_tokens", "max_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_logprobs", "top_p" ]
{ "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
2021-09-30
null
{ "details": "/api/v1/models/openai/gpt-4/endpoints" }
openai/gpt-4-0314
openai/gpt-4-0314
null
OpenAI: GPT-4 (older v0314)
1,685,232,000
GPT-4-0314 is the first version of GPT-4 released, with a context length of 8,192 tokens, and was supported until June 14. Training data: up to Sep 2021.
8,191
{ "input_modalities": [ "text" ], "instruct_type": null, "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "GPT" }
{ "audio": null, "completion": "0.00006", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.00003", "web_search": null }
{ "context_length": 8191, "is_moderated": true, "max_completion_tokens": 4096 }
null
[ "frequency_penalty", "logit_bias", "logprobs", "max_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_logprobs", "top_p" ]
{ "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
2021-09-30
null
{ "details": "/api/v1/models/openai/gpt-4-0314/endpoints" }
openai/gpt-3.5-turbo
openai/gpt-3.5-turbo
null
OpenAI: GPT-3.5 Turbo
1,685,232,000
GPT-3.5 Turbo is OpenAI's fastest model. It can understand and generate natural language or code, and is optimized for chat and traditional completion tasks. Training data up to Sep 2021.
16,385
{ "input_modalities": [ "text" ], "instruct_type": null, "modality": "text->text", "output_modalities": [ "text" ], "tokenizer": "GPT" }
{ "audio": null, "completion": "0.0000015", "image": null, "input_cache_read": null, "input_cache_write": null, "internal_reasoning": null, "prompt": "0.0000005", "web_search": null }
{ "context_length": 16385, "is_moderated": true, "max_completion_tokens": 4096 }
null
[ "frequency_penalty", "logit_bias", "logprobs", "max_tokens", "presence_penalty", "response_format", "seed", "stop", "structured_outputs", "temperature", "tool_choice", "tools", "top_logprobs", "top_p" ]
{ "frequency_penalty": null, "presence_penalty": null, "repetition_penalty": null, "temperature": null, "top_k": null, "top_p": null }
2021-09-30
null
{ "details": "/api/v1/models/openai/gpt-3.5-turbo/endpoints" }
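Records like these can be filtered programmatically — for example, keeping only tool-capable models (those whose `supported_parameters` include `tools`) and ranking them by prompt price. A minimal sketch over a hand-copied subset of the listing above:

```python
# A few records reduced to the fields this sketch needs,
# values copied from the listing above:
models = [
    {"id": "openai/gpt-4-turbo", "prompt": 0.00001,
     "supported_parameters": ["tools", "tool_choice", "temperature", "top_p"]},
    {"id": "anthropic/claude-3-haiku", "prompt": 0.00000025,
     "supported_parameters": ["tools", "tool_choice", "top_k", "top_p"]},
    {"id": "gryphe/mythomax-l2-13b", "prompt": 0.00000006,
     "supported_parameters": ["temperature", "top_k", "top_p"]},
]

# Tool-capable models, cheapest prompt price first:
tool_models = sorted(
    (m for m in models if "tools" in m["supported_parameters"]),
    key=lambda m: m["prompt"],
)
# MythoMax is filtered out (no "tools"); Haiku sorts ahead of GPT-4 Turbo.
```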