
Synthetic Dataset: High Context, Low Generation

Dataset Description

This is a synthetic benchmark dataset designed to test LLM inference performance in high-context, low-generation scenarios. The dataset consists of 2,000 samples of randomly generated tokens whose length profile simulates real-world workloads where models process long input contexts but generate relatively short responses.

Use Cases

This dataset is ideal for benchmarking:

  • Document analysis with short answers
  • Long-context Q&A systems
  • Information extraction from large documents
  • Prompt processing efficiency and TTFT (Time-to-First-Token) optimization

Dataset Characteristics

  • Number of Samples: 2,000
  • Prompt Length Distribution: Normal distribution
    • Mean: 32,000 tokens
    • Standard deviation: 10,000 tokens
  • Response Length Distribution: Normal distribution
    • Mean: 200 tokens
    • Standard deviation: 50 tokens
  • Tokenizer: meta-llama/Llama-3.1-8B-Instruct
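The length sampling above can be sketched as follows. The card does not state how out-of-range draws are rounded or clamped, so the `round`/`max` handling here is an assumption:

```python
import random

def sample_length(mean: float, std: float, minimum: int = 1) -> int:
    """Draw a token count from a normal distribution, rounded to an int
    and clamped to at least `minimum` (assumed; the card does not state
    how non-positive draws are handled)."""
    return max(minimum, round(random.gauss(mean, std)))

prompt_length = sample_length(32_000, 10_000)   # prompt distribution
response_length = sample_length(200, 50)        # response distribution
```

With std = 10,000 around a mean of 32,000, a small fraction of prompt draws will be far from the mean, which is why some form of clamping is needed.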

Dataset Structure

Each sample contains:

  • prompt: A sequence of randomly generated tokens (high context)
  • prompt_length: Number of tokens in the prompt
  • response_length: Number of tokens in the response
{
    'prompt': str,
    'prompt_length': int,
    'response_length': int
}

Token Generation

  • Tokens are randomly sampled from the vocabulary of the Llama-3.1-8B-Instruct tokenizer
  • Each sample is independently generated with lengths drawn from the specified distributions
  • The token sequences are not meaningful text, but their lengths follow the controlled distributions above, which is what matters for inference benchmarking
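A minimal sketch of this sampling step, assuming uniform draws over the vocabulary (the card does not specify the sampling scheme, and the vocabulary size below is the approximate Llama-3.1 value):

```python
import random

VOCAB_SIZE = 128_256  # approximate Llama-3.1 vocabulary size (assumption)

def sample_token_ids(num_tokens: int, vocab_size: int = VOCAB_SIZE) -> list[int]:
    """Uniformly sample token ids. Decoding them with the
    meta-llama/Llama-3.1-8B-Instruct tokenizer (tokenizer.decode)
    would yield a prompt string of the requested token length."""
    return random.choices(range(vocab_size), k=num_tokens)
```

Decoding and re-encoding random ids can shift the token count slightly, so a real generation script would likely re-tokenize and trim to hit the target length exactly.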

Related Datasets

This dataset is part of a suite of three synthetic benchmark datasets, each designed for different workload patterns:

  1. 🔷 synthetic_dataset_high-low (this dataset)

    • High context (32k tokens), low generation (200 tokens)
    • Focus: Prompt processing efficiency, TTFT optimization
  2. synthetic_dataset_mid-mid

    • Medium context (1k tokens), medium generation (1k tokens)
    • Focus: Balanced workload, realistic API scenarios
  3. synthetic_dataset_low-mid

    • Low context (10-120 tokens), medium generation (1.5k tokens)
    • Focus: Generation throughput, creative writing scenarios

Benchmarking with vLLM

This dataset is designed for use with the vLLM inference framework. The vLLM engine supports a min_tokens parameter, so you can pass min_tokens = max_tokens = response_length for each prompt; each response then contains exactly response_length tokens, and the output lengths follow the defined distribution.

Setup

First, install vLLM and start the server:

pip install vllm

# Start the vLLM server
vllm serve meta-llama/Llama-3.1-8B-Instruct

Usage Example

from datasets import load_dataset
from openai import OpenAI

# Load the dataset
dataset = load_dataset("jonasluehrs-jaai/synthetic_dataset_high-low")

# Initialize vLLM client (OpenAI-compatible API)
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

# Use a sample from the dataset
sample = dataset['train'][0]

# Make a completion request with controlled response length
completion = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": sample['prompt']}],
    max_tokens=sample['response_length'],
    extra_body={"min_tokens": sample['response_length']},
)

print(f"Generated {len(completion.choices[0].message.content)} characters")

For more information, see the vLLM OpenAI-compatible server documentation.
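Since this workload targets TTFT, a streaming request is the natural way to measure it. The helper below times the arrival of the first chunk from any iterator of text chunks; with the OpenAI client you would pass stream=True and feed it the chunk.choices[0].delta.content values. This is a measurement sketch, not part of the dataset tooling:

```python
import time

def measure_ttft(chunks):
    """Return (ttft_seconds, full_text) for an iterable of text chunks,
    e.g. the delta contents of a streaming chat completion."""
    start = time.perf_counter()
    ttft = None
    parts = []
    for chunk in chunks:
        if ttft is None:
            ttft = time.perf_counter() - start  # time to first chunk
        if chunk:
            parts.append(chunk)
    return ttft, "".join(parts)
```

Note that the first streamed chunk may contain more than one token, so this measures time to first chunk, a close proxy for TTFT.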

License

This dataset is released under the MIT License.
