{ "cells": [ { "cell_type": "markdown", "id": "e930b0c5f0cffbce", "metadata": {}, "source": [ "# Multimodal Lhotse Dataloading\n", "\n", "This tutorial explains how NeMo uses Lhotse for multimodal dataloading.\n", "The modalities supported as of the time of writing are audio and text.\n", "This tutorial is intended for NeMo developers and people who build or modify NeMo models.\n", "After finishing this tutorial, you should understand how to use various Lhotse building blocks in NeMo for designing the kind of model you want.\n", "\n", "We cover the following topics:\n", "* What are data types?\n", "* What data types are available in NeMo?\n", "* How do we read them from files?\n", "* How to apply prompt formatting to various data types?\n", "* How to create tensors for training with these examples?\n", "* How to optimize training by stratifying data sampling on sequence lengths, and how these lengths are measured for different examples and models.\n", "* How to train on multiple data types together?" ] }, { "cell_type": "markdown", "id": "72bd180c65992eba", "metadata": {}, "source": [ "## Data types\n", "\n", "A data type represents examples of your training data: speech recordings, text sentences, text sentence pairs, conversations, etc.\n", "\n", "A data type consists of:\n", "* a class that represents a single sample\n", " * includes properties allowing sequence length measurement for sampling purposes\n", "* a parser class that's initialized with a config (e.g. paths to data) and acts as an iterator of examples\n", "* extension functions that define how to apply prompt formatting to a given data type\n", "\n", "NeMo uses Lhotse Cuts as a basic data type for audio, and defines several data types for text. 
We'll go over them below.\n", "\n", "External references:\n", "* [Lhotse documentation](https://lhotse.readthedocs.io/en/latest/getting-started.html)\n", "* [Lhotse in NeMo documentation](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemotoolkit/asr/datasets.html#lhotse-dataloading)" ] }, { "cell_type": "markdown", "id": "cf32bf3ea5a9cb17", "metadata": {}, "source": [ "### Audio examples (Lhotse cuts)" ] }, { "cell_type": "code", "execution_count": 1, "id": "d2d747f6b32d5942", "metadata": { "jupyter": { "is_executing": true } }, "outputs": [], "source": [ "from lhotse import MonoCut, Recording, SupervisionSegment, AudioSource\n", "from lhotse.testing.dummies import dummy_cut\n", "\n", "\n", "# A basic audio example: recording with transcription\n", "cut = MonoCut(\n", " id=\"utt-0\",\n", " start=0.0,\n", " duration=10.0,\n", " channel=0,\n", " supervisions=[SupervisionSegment(id=\"utt-0\", recording_id=\"rec-0\", start=0.0, duration=10.0, text=\"Welcome to Lhotse!\")],\n", " recording=Recording(\n", " id=\"rec-0\",\n", " sources=[AudioSource(type=\"file\", channels=[0], source=\"/path/to/recording.wav\")],\n", " sampling_rate=16000,\n", " duration=10.0,\n", " num_samples=160000,\n", " ),\n", ")" ] }, { "cell_type": "markdown", "id": "9b121afd920bdab2", "metadata": {}, "source": [ "## Single text examples " ] }, { "cell_type": "code", "execution_count": 2, "id": "41b0c148e0d7ac1c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "TextExample(text='This is a single sentence, which may be used in language modeling.', language='en', tokens=None, custom=None)\n" ] } ], "source": [ "from nemo.collections.common.data.lhotse.text_adapters import TextExample\n", "\n", "# A basic text example: single line of text.\n", "text = TextExample(\n", " text=\"This is a single sentence, which may be used in language modeling.\",\n", " language=\"en\"\n", ")\n", "print(text)" ] }, { "cell_type": "markdown", "id": "2abb821b69f71a91", 
"metadata": {}, "source": [ "## Pairs of text examples" ] }, { "cell_type": "code", "execution_count": 3, "id": "282560cc3df9174a", "metadata": {}, "outputs": [], "source": [ "from nemo.collections.common.data.lhotse.text_adapters import SourceTargetTextExample\n", "\n", "# A pair of text examples, usable e.g. in machine translation.\n", "text_pair = SourceTargetTextExample(\n", " source=TextExample(\n", " text=\"Some machine translation example.\",\n", " language=\"en\",\n", " ),\n", " target=TextExample(\n", " text=\"Algunos ejemplos de traducción automática.\",\n", " language=\"es\",\n", " ),\n", ")" ] }, { "cell_type": "markdown", "id": "858d6cb6abb1ccd6", "metadata": {}, "source": [ "## Conversations: text, audio, and multimodal" ] }, { "cell_type": "code", "execution_count": 4, "id": "e5bd8caee40100b1", "metadata": {}, "outputs": [], "source": [ "from nemo.collections.common.data.lhotse.text_adapters import NeMoMultimodalConversation, TextTurn, AudioTurn\n", "\n", "# A text-only conversation, useful for chat LLM training.\n", "text_conversation = NeMoMultimodalConversation(\n", " id=\"convo-text-0\",\n", " turns=[\n", " TextTurn(value=\"Is this a text-only conversation?\", role=\"user\"),\n", " TextTurn(value=\"Yes, but we can do more than that.\", role=\"assistant\"),\n", " TextTurn(value=\"Tell me more.\", role=\"user\"),\n", " TextTurn(value=\"Of course! 
Let's move on to the next example.\", role=\"assistant\"),\n", " ]\n", ")\n", "\n", "# An audio-only conversation, useful for chat speech LLM training.\n", "# We'll explain [audio] tag and token_equivalent_duration later in this tutorial.\n", "audio_conversation = NeMoMultimodalConversation(\n", " id=\"convo-audio-0\",\n", " turns=[\n", " AudioTurn(cut=dummy_cut(0, duration=7.18, with_data=True), role=\"user\", audio_locator_tag=\"[audio]\"),\n", " AudioTurn(cut=dummy_cut(0, duration=21.64, with_data=True), role=\"assistant\", audio_locator_tag=\"[audio]\"),\n", " ],\n", " token_equivalent_duration=0.08,\n", ")\n", "\n", "# A multimodal conversation.\n", "multimodal_conversation = NeMoMultimodalConversation(\n", " id=\"convo-multimodal-0\",\n", " turns=[\n", " TextTurn(value=\"Is this a text-only conversation?\", role=\"user\"),\n", " TextTurn(value=\"No, feel free to speak to me.\", role=\"assistant\"),\n", " AudioTurn(cut=dummy_cut(0, duration=5.87, with_data=True), role=\"user\", audio_locator_tag=\"[audio]\"),\n", " TextTurn(value=\"Should I respond in voice too?\", role=\"assistant\"),\n", " TextTurn(value=\"Yes\", role=\"user\"),\n", " TextTurn(value=\"Certainly!\", role=\"assistant\"),\n", " AudioTurn(cut=dummy_cut(0, duration=14.62, with_data=True), role=\"assistant\", audio_locator_tag=\"[audio]\"),\n", " ],\n", " token_equivalent_duration=0.08,\n", ")" ] }, { "cell_type": "markdown", "id": "b21e0e5e84904d89", "metadata": {}, "source": [ "As you can see, these data structures serve as a complete description of training examples of different types, \n", "as they contain both the data (audio) and various metadata." 
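As a quick illustration of how self-describing these examples are, here is a minimal pure-Python sketch that walks a conversation's turns and totals the audio duration. It uses simplified stand-in classes (not NeMo's `TextTurn`/`AudioTurn`, which also carry Lhotse cuts and locator tags), so it runs without any dependencies:

```python
from dataclasses import dataclass


# Simplified stand-ins for NeMo's turn classes, for illustration only.
@dataclass
class SimpleTextTurn:
    value: str
    role: str


@dataclass
class SimpleAudioTurn:
    duration: float  # the real AudioTurn holds a Lhotse cut; we keep only its duration
    role: str


turns = [
    SimpleTextTurn("Is this a text-only conversation?", "user"),
    SimpleAudioTurn(5.87, "user"),
    SimpleTextTurn("Certainly!", "assistant"),
    SimpleAudioTurn(14.62, "assistant"),
]

# Because each turn carries its own data and metadata, downstream code can
# dispatch on the turn type alone, with no side-channel information.
total_audio_seconds = sum(t.duration for t in turns if isinstance(t, SimpleAudioTurn))
num_text_turns = sum(1 for t in turns if isinstance(t, SimpleTextTurn))
print(total_audio_seconds, num_text_turns)
```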
] }, { "cell_type": "markdown", "id": "9198210580be10bf", "metadata": {}, "source": [ "## Parsing data types from files\n", "\n", "Related: for an overview of the NeMo data configuration format, please see these docs: \n", "* [Extended multi-dataset configuration format](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemotoolkit/asr/datasets.html#extended-multi-dataset-configuration-format)\n", "* [Configuring multi-modal dataloading](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemotoolkit/asr/datasets.html#configuring-multi-modal-dataloading)\n", "\n", "The goal of a data type parser is to read a configuration specifying where the data is located / how to read it,\n", "create an iterable over the corresponding data type, and wrap it into a Lhotse CutSet.\n", "\n", "Adding a new data type parser requires two components:\n", "* An adapter/iterator class dedicated to your data type.\n", "* A function that instantiates this adapter/iterator, registered with a `@data_type_parser(\"name\")` decorator to make it auto-detectable by NeMo.\n", "\n", "We'll take a deeper look at how parsing of source-target text example pairs is implemented by writing a custom parser for `SourceTargetTextExample` that reads them from JSON files." 
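Concretely, our parser will assume a JSON Lines schema where each line is an object with required `source`/`target` keys and optional `source_lang`/`target_lang`/`prompt`/`prompt_lang` keys (this schema is our own choice for the tutorial, not a NeMo standard):

```python
import json

# One line of the tutorial's JSONL schema; only "source" and "target" are required.
line = '{"source": "Some machine translation example.", "target": "Algunos ejemplos.", "source_lang": "en", "target_lang": "es"}'
item = json.loads(line)

# Optional fields are read with .get(), so a missing key simply becomes None.
source_lang = item.get("source_lang")
prompt_text = item.get("prompt")  # absent in this line, hence None
print(source_lang, prompt_text)
```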
] }, { "cell_type": "code", "execution_count": 5, "id": "f0e35b53c7ac77b4", "metadata": {}, "outputs": [], "source": [ "from lhotse.serialization import load_jsonl\n", "import random\n", "from typing import Literal, Iterator\n", "from dataclasses import dataclass\n", "\n", "from lhotse import CutSet\n", "from lhotse.dataset.dataloading import resolve_seed\n", "from omegaconf import DictConfig\n", "from nemo.collections.common.data.lhotse.nemo_adapters import expand_sharded_filepaths\n", "from nemo.collections.common.data.lhotse.cutset import data_type_parser\n", "\n", "\n", "@dataclass\n", "class LhotseTextPairAdapterFromJsonl:\n", " manifest_path: str | list[str]\n", " shuffle_shards: bool = False\n", " shard_seed: int | Literal[\"trng\", \"randomized\"] = \"trng\"\n", "\n", " def __post_init__(self):\n", " self.manifest_path = expand_sharded_filepaths(self.manifest_path)\n", "\n", " def __iter__(self) -> Iterator[SourceTargetTextExample]:\n", " seed = resolve_seed(self.shard_seed)\n", " rng = random.Random(seed)\n", " paths = self.manifest_path\n", " if self.shuffle_shards:\n", " rng.shuffle(paths)\n", " for p in paths:\n", " for item in load_jsonl(p):\n", " yield SourceTargetTextExample(\n", " source=TextExample(item[\"source\"], item.get(\"source_lang\")),\n", " target=TextExample(item[\"target\"], item.get(\"target_lang\")),\n", " question=(\n", " TextExample(item[\"prompt\"], language=item.get(\"prompt_lang\"))\n", " if \"prompt\" in item\n", " else None\n", " ),\n", " )\n", "\n", "\n", "@data_type_parser(\"txt_pair_jsonl\")\n", "def read_txt_pair_paths(config: DictConfig) -> tuple[CutSet, bool]:\n", " cuts = CutSet(\n", " LhotseTextPairAdapterFromJsonl(\n", " manifest_path=config.manifest_path,\n", " shuffle_shards=config.get(\"shuffle\", False),\n", " shard_seed=config.get(\"shard_seed\", \"trng\"),\n", " )\n", " )\n", " if not config.get(\"force_finite\", False):\n", " cuts = cuts.repeat()\n", " return cuts, True" ] }, { "cell_type": "markdown", "id": "64367e6596754ee6", "metadata": {}, 
"source": [ "Note that there is a bit of boilerplate (`expand_sharded_filepaths`, `force_finite`, `shuffle_shards`, `shard_seed`) - we might reduce the amount of necessary boilerplate in the future, but for now it is required.\n", "\n", "Let's test that it works. We'll first create two JSONL files (shards) with one entry each, and later use NeMo's path expansion mechanism to provide them as the input configuration.\n", "\n", "Then, we'll read it using the high-level API `read_cutset_from_config` that's actually used by NeMo+Lhotse dataloader to show that the auto-registration mechanism works as expected." ] }, { "cell_type": "code", "execution_count": 6, "id": "7987fce8db39b008", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "[NeMo W 2024-10-18 14:12:16 nemo_logging:349] /Users/pzelasko/miniforge3/envs/nemo/lib/python3.10/site-packages/pydub/utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work\n", " warn(\"Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work\", RuntimeWarning)\n", " \n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "SourceTargetTextExample(source=TextExample(text='A', language=None, tokens=None, custom=None), target=TextExample(text='B', language=None, tokens=None, custom=None), question=None, custom=None)\n" ] } ], "source": [ "!echo '{\"source\": \"A\", \"target\": \"B\"}' >> _tutorial_nmt_0.jsonl\n", "!echo '{\"source\": \"C\", \"target\": \"D\"}' >> _tutorial_nmt_1.jsonl\n", "\n", "from nemo.collections.common.data.lhotse.cutset import read_cutset_from_config\n", "\n", "data, use_iterable_dataset = read_cutset_from_config(\n", " {\n", " \"input_cfg\": [\n", " {\n", " \"type\": \"txt_pair_jsonl\", \n", " \"manifest_path\": \"_tutorial_nmt__OP_0..1_CL_.jsonl\", \n", " }\n", " ]\n", " }\n", ")\n", "\n", "example = next(iter(data))\n", "assert isinstance(example, SourceTargetTextExample)\n", "assert example.source.text == 
\"A\"\n", "assert example.target.text == \"B\"\n", "print(example)" ] }, { "cell_type": "markdown", "id": "be48872625d1a2e0", "metadata": {}, "source": [ "## Prompt formatting and conversion of data types to tensors\n", "\n", "Since we now understand how data types are read, let's see how to convert them to actual training examples.\n", "Because this tutorial is focused on multimodal LLM / speech LLM training, we'll be using prompt templates appropriate for various LLMs to prepare the training data. In this example, we'll use the Llama2 prompt template to format each data type.\n", "\n", "We'll need to initialize a prompt formatter and a tokenizer; we'll just train a dummy BPE tokenizer for the purpose of the tutorial." ] }, { "cell_type": "code", "execution_count": 7, "id": "6e1d296be0d363d", "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[NeMo I 2024-10-18 14:12:19 sentencepiece_tokenizer:333] tokenizer model _tutorial_spt/tokenizer.model already exists\n" ] } ], "source": [ "import string\n", "import shlex\n", "from nemo.collections.common.tokenizers.sentencepiece_tokenizer import SentencePieceTokenizer, create_spt_model\n", "from nemo.collections.common.prompts.formatter import PromptFormatter\n", "\n", "!echo {shlex.quote(' '.join(string.printable))} > _tutorial_train_text.txt\n", "\n", "tok_path, vocab_path = create_spt_model(\n", " data_file=\"_tutorial_train_text.txt\", \n", " output_dir=\"_tutorial_spt\",\n", " vocab_size=512, \n", " sample_size=-1, \n", " do_lower_case=False, \n", " bos=True, \n", " eos=True, \n", " pad=True, \n", " user_defined_symbols=[\"[INST]\", \"[/INST]\", \"<<SYS>>\", \"<</SYS>>\", \"[audio]\"]\n", ")\n", "\n", "tokenizer = SentencePieceTokenizer(tok_path)\n", "prompt = PromptFormatter.resolve(\"llama2\")(tokenizer)" ] }, { "cell_type": "markdown", "id": "6988777c9dc1653b", "metadata": {}, "source": [ "Now, we'll convert the data types to a training/inference-friendly format. 
Specifically, we want to have four tensors:\n", "* `context_ids`: token IDs that serve as the input for the LLM (e.g. user query, conversation history, etc.)\n", "* `answer_ids`: token IDs that serve as the LLM's answer (the assistant response)\n", "* `input_ids`: concatenated `context_ids` and `answer_ids`\n", "* `mask`: loss mask that's set to `True` only for tokens belonging to the assistant's turns. Same length as `input_ids`.\n", "\n", "Let's first go through Cut, SourceTargetTextExample, and NeMoMultimodalConversation to see how each of them is formatted." ] }, { "cell_type": "code", "execution_count": 8, "id": "5f8c0a54189e443d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Cut:\n", "\t* input_ids [INST] Repeat after me: [/INST] Welcome to Lhotse!\n", "\t* context_ids [INST] Repeat after me: [/INST]\n", "\t* answer_ids Welcome to Lhotse!\n", "loss mask tensor([False, False, False, False, False, False, False, False, False, False,\n", " False, False, False, False, False, False, False, False, False, False,\n", " False, False, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True])\n", "\n", "SourceTargetTextExample:\n", "\t* input_ids [INST] Some machine translation example. [/INST] Algunos ejemplos de traducci ⁇ n autom ⁇ tica.\n", "\t* context_ids [INST] Some machine translation example. 
[/INST]\n", "\t* answer_ids Algunos ejemplos de traducci ⁇ n autom ⁇ tica.\n", "loss mask tensor([False, False, False, False, False, False, False, False, False, False,\n", " False, False, False, False, False, False, False, False, False, False,\n", " False, False, False, False, False, False, False, False, False, False,\n", " False, False, False, False, False, False, False, False, False, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True])\n", "\n", "NeMoMultimodalConversation:\n", "\t* input_ids [INST] Is this a text-only conversation? [/INST] No, feel free to speak to me. [INST] [audio] [/INST] Should I respond in voice too? [INST] Yes [/INST] Certainly! [audio]\n", "\t* context_ids [INST] Is this a text-only conversation? [/INST] No, feel free to speak to me. [INST] [audio] [/INST] Should I respond in voice too? [INST] Yes [/INST]\n", "\t* answer_ids Certainly! 
[audio]\n", "loss mask tensor([False, False, False, False, False, False, False, False, False, False,\n", " False, False, False, False, False, False, False, False, False, False,\n", " False, False, False, False, False, False, False, False, False, False,\n", " False, False, False, False, False, False, False, False, False, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " False, False, False, False, False, False, False, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, False,\n", " False, False, False, False, False, False, False, False, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True])\n", "\n" ] } ], "source": [ "from nemo.collections.common.data.prompt_fn import apply_prompt_format_fn\n", "\n", "cut.context = \"Repeat after me:\"\n", "print(\"Cut:\")\n", "formatted = apply_prompt_format_fn(cut, prompt)\n", "for name in [\"input_ids\", \"context_ids\", \"answer_ids\"]:\n", " print(\"\\t*\", name, tokenizer.ids_to_text(formatted[name]))\n", "print(\"loss mask\", formatted[\"mask\"])\n", "print()\n", "\n", "print(\"SourceTargetTextExample:\")\n", "formatted = apply_prompt_format_fn(text_pair, prompt)\n", "for name in [\"input_ids\", \"context_ids\", \"answer_ids\"]:\n", " print(\"\\t*\", name, tokenizer.ids_to_text(formatted[name]))\n", "print(\"loss mask\", formatted[\"mask\"])\n", "print()\n", "\n", "print(\"NeMoMultimodalConversation:\")\n", "formatted = apply_prompt_format_fn(multimodal_conversation, prompt)\n", "for name in [\"input_ids\", \"context_ids\", \"answer_ids\"]:\n", " print(\"\\t*\", name, tokenizer.ids_to_text(formatted[name]))\n", "print(\"loss mask\", formatted[\"mask\"])\n", "print()" ] }, 
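The relationship between the four tensors can be sketched in plain Python with made-up token IDs (a schematic illustration, not NeMo's actual implementation):

```python
# Hypothetical token IDs standing in for a tokenized user turn and assistant turn.
context_ids = [1, 9, 4, 9, 5]  # e.g. "<s> [INST] ... [/INST]"
answer_ids = [9, 44, 2]        # e.g. "answer </s>"

# input_ids is simply the concatenation of context and answer.
input_ids = context_ids + answer_ids

# The loss mask is False over the context and True over the answer, so the loss
# is computed only on the assistant's tokens. In a multi-turn conversation the
# mask toggles back to True for every assistant turn.
mask = [False] * len(context_ids) + [True] * len(answer_ids)
assert len(mask) == len(input_ids)
```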
{ "cell_type": "markdown", "id": "e1b50937e5f75d10", "metadata": {}, "source": [ "Note how each example got converted into the same prompt format. \n", "\n", "For multimodal conversations we have a special mechanism that replaces audio turns with an `audio_locator_tag`. \n", "We expect that the tokenizer contains this tag as a special token.\n", "The user will later replace these special tokens with audio representations (tokenized, or not) in the training step of the model. \n", "\n", "If you create a new prompt format or a new data type, or want to specialize how a given data type is formatted with a given prompt, you can easily customize it by defining a single function with the `@registered_prompt_format_fn(DataType, PromptFormatterType)` decorator. For example, here we create a new data type called `TextTriplet` and register a default prompt format function plus another one specialized for Llama2:" ] }, { "cell_type": "code", "execution_count": 9, "id": "108b3593a5f16444", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "input_ids tensor([ 1, 9, 4, 9, 6, 9, 42, 9, 7, 9, 43, 9, 5, 9, 44, 2])\n", "context_ids tensor([ 1, 9, 4, 9, 6, 9, 42, 9, 7, 9, 43, 9, 5])\n", "answer_ids tensor([ 9, 44, 2])\n", "mask tensor([False, False, False, False, False, False, False, False, False, False,\n", " False, False, False, True, True, True])\n" ] } ], "source": [ "from nemo.collections.common.prompts import Llama2PromptFormatter\n", "from nemo.collections.common.data.prompt_fn import registered_prompt_format_fn\n", "from nemo.collections.common.data.lhotse.text_adapters import Formattable, CustomFieldMixin\n", "\n", "\n", "@dataclass\n", "class TextTriplet(Formattable, CustomFieldMixin):\n", " # Note: we will explain Formattable and CustomFieldMixin in the next sections.\n", " text1: str\n", " text2: str\n", " text3: str\n", "\n", "\n", "@registered_prompt_format_fn(TextTriplet)\n", "def text_triplets_generic(example: TextTriplet, prompt: 
PromptFormatter):\n", " return prompt.encode_dialog(turns=[\n", " {\"role\": \"user\", \"slots\": {\"message\": f\"{example.text1} {example.text2}\"}},\n", " {\"role\": \"assistant\", \"slots\": {\"message\": f\"{example.text3}\"}},\n", " ])\n", "\n", " \n", "@registered_prompt_format_fn(TextTriplet, Llama2PromptFormatter)\n", "def text_triplets_llama2(example: TextTriplet, prompt: Llama2PromptFormatter):\n", " return prompt.encode_dialog(turns=[\n", " {\"role\": \"system_and_user\", \"slots\": {\"system\": example.text1, \"message\": example.text2}},\n", " {\"role\": \"assistant\", \"slots\": {\"message\": example.text3}},\n", " ])\n", "\n", "\n", "formatted = apply_prompt_format_fn(TextTriplet(\"A\", \"B\", \"C\"), prompt)\n", "for k, v in formatted.items():\n", " print(k, v)" ] }, { "cell_type": "markdown", "id": "9565bef14a863465", "metadata": {}, "source": [ "If we also created a data type parser for `TextTriplet` like we did for `SourceTargetTextExample` in the previous section, we would have complete dataloading support for the new data type." ] }, { "cell_type": "markdown", "id": "6ac39c8fcbcf5860", "metadata": {}, "source": [ "## Support for sequence length stratification / dynamic bucketing\n", "\n", "References: \n", "* [EMMeTT: Efficient Multimodal Machine Translation Training](https://arxiv.org/abs/2409.13523) \n", "\n", "We found that dynamic bucketing with [OOMptimizer](https://github.com/NVIDIA/NeMo/blob/main/docs/source/asr/datasets.rst#pushing-gpu-utilization-to-the-limits-with-bucketing-and-oomptimizer) can significantly accelerate multimodal LLM training. 
\n", "In order to ensure that all data types can benefit from this acceleration, we introduced the `Formattable` concept.\n", "It indicates that a given data type supports prompt formatting and provides properties to measure input and output sequence length.\n", "\n", "Let's see this in action with the previously formatted data types:" ] }, { "cell_type": "code", "execution_count": 10, "id": "f5ca38ea137f8210", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "SourceTargetTextExample:\n", "\t* input_length 39\n", "\t* output_length 44\n", "\t* total_length 83\n", "\t* len(context_ids) 39\n", "\t* len(answer_ids) 44\n", "\t* len(input_ids) 83\n", "NeMoMultimodalConversation\n", "\t* input_length 191\n", "\t* output_length 196\n", "\t* total_length 387\n", "\t* len(context_ids) 118\n", "\t* len(answer_ids) 14\n", "\t* len(input_ids) 132\n" ] } ], "source": [ "print(\"SourceTargetTextExample:\")\n", "text_pair = text_pair.apply_prompt_format(prompt)\n", "print(\"\\t*\", \"input_length\", text_pair.input_length)\n", "print(\"\\t*\", \"output_length\", text_pair.output_length)\n", "print(\"\\t*\", \"total_length\", text_pair.total_length)\n", "print(\"\\t*\", \"len(context_ids)\", len(text_pair.context_ids))\n", "print(\"\\t*\", \"len(answer_ids)\", len(text_pair.answer_ids))\n", "print(\"\\t*\", \"len(input_ids)\", len(text_pair.input_ids))\n", "\n", "print(\"NeMoMultimodalConversation\")\n", "multimodal_conversation = multimodal_conversation.apply_prompt_format(prompt)\n", "print(\"\\t*\", \"input_length\", multimodal_conversation.input_length)\n", "print(\"\\t*\", \"output_length\", multimodal_conversation.output_length)\n", "print(\"\\t*\", \"total_length\", multimodal_conversation.total_length)\n", "print(\"\\t*\", \"len(context_ids)\", len(multimodal_conversation.context_ids))\n", "print(\"\\t*\", \"len(answer_ids)\", len(multimodal_conversation.answer_ids))\n", "print(\"\\t*\", \"len(input_ids)\", len(multimodal_conversation.input_ids))\n" ] }, { 
"cell_type": "markdown", "id": "ecca372c2a0cad6e", "metadata": {}, "source": [ "Note that for `NeMoMultimodalConversation` the length is much greater than the number of text tokens. \n", "This is where `token_equivalent_duration` comes in: we want to factor the audio turns into the sequence lengths.\n", "Since we know the duration of the audio, we only need to decide how much duration each audio \"token\" or \"frame\" covers.\n", "A typical setup would be with NeMo FastConformer as an audio encoder, which uses 10ms frames at the input and subsamples them by a factor of 8 in the output. \n", "The resulting `token_equivalent_duration` is therefore `0.08`, i.e., a single token created from audio is worth 80ms of duration. \n", "For length computation, we sum the number of text tokens and the equivalent number of audio tokens.\n", "\n", "We can see that Lhotse's `DynamicBucketingSampler` is able to process this data using NeMo multimodal sampling strategies:" ] }, { "cell_type": "code", "execution_count": 11, "id": "6e295cfbfe8ff69b", "metadata": {}, "outputs": [], "source": [ "from lhotse.dataset import DynamicBucketingSampler\n", "from nemo.collections.common.data.lhotse.sampling import MultimodalFixedBucketBatchSizeConstraint2D\n", "\n", "cuts = CutSet([multimodal_conversation]).repeat() # repeat makes iterable infinite\n", "sampler = DynamicBucketingSampler(\n", " cuts, \n", " constraint=MultimodalFixedBucketBatchSizeConstraint2D(\n", " max_seq_len_buckets=[32, 64, 128, 256, 512, 1024, 1536, 2048],\n", " batch_sizes=[8, 7, 6, 5, 4, 3, 2, 1],\n", " token_equivalent_duration=0.08, \n", " measure_total_length=True,\n", " ),\n", " buffer_size=10,\n", ")\n", "\n", "batch = next(iter(sampler))\n", "assert len(batch) == 4 \n", "# Our conversation example fell into bucket number 4 (min: 256, max: 512) with an assigned batch size of 4" ] }, { "cell_type": "markdown", "id": "4ff5baae-0771-4ac9-aa68-c3faee5aa261", "metadata": {}, "source": [ "## Putting it all 
together to configure joint audio, text, and conversation dataloading\n", "\n", "We'll showcase some higher level APIs here. First, we'll create data examples on disk for three distinct types: audio to text, text to text, and multimodal conversations." ] }, { "cell_type": "code", "execution_count": 12, "id": "5a0e5433-3e63-4ab2-9290-001159a9b8e0", "metadata": {}, "outputs": [], "source": [ "from pathlib import Path\n", "from lhotse.serialization import save_to_jsonl\n", "from lhotse.testing.dummies import dummy_recording\n", "\n", "# Prepare dummy ASR data\n", "d = Path(\"_tutorial_data\")\n", "!mkdir -p {d}/asr_shar\n", "cut = dummy_recording(0, duration=17.11, with_data=True).to_cut()\n", "cut.supervisions = [SupervisionSegment(id=cut.id, recording_id=cut.id, start=0.0, duration=cut.duration, text=\"Welcome to Lhotse!\")]\n", "cut.context = \"Repeat after me\"\n", "CutSet([cut.save_audio(d / \"rec.flac\")]).to_shar(d / \"asr_shar\", fields={\"recording\": \"flac\"})\n", "\n", "# Prepare dummy translation data\n", "(d / \"src.txt\").write_text(\"A\")\n", "(d / \"tgt.txt\").write_text(\"B\")\n", "\n", "# Prepare dummy multimodal conversation\n", "save_to_jsonl(\n", " [\n", " {\n", " \"id\": \"convo-1\",\n", " \"conversations\": [\n", " {\"from\": \"user\", \"value\": \"tell me what you hear\", \"type\": \"text\"},\n", " {\"from\": \"user\", \"value\": str(d / \"rec.flac\"), \"duration\": cut.duration, \"type\": \"audio\"},\n", " {\"from\": \"assistant\", \"value\": \"somebody just welcomed me to a himalayan mountain\", \"type\": \"text\"},\n", " ]\n", " }\n", " ],\n", " d / \"conv.jsonl\"\n", ")" ] }, { "cell_type": "markdown", "id": "3a4d669b-f816-4522-a491-ba31bfbf689c", "metadata": {}, "source": [ "Now we'll configure a Lhotse dataloader to yield mini-batches with different data types in a round-robin fashion." 
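Before diving into the configuration, the round-robin behaviour itself can be pictured with a tiny generator (a conceptual sketch, not NeMo's sampler-fusion code):

```python
from itertools import cycle


def round_robin(*samplers):
    # Yield one mini-batch from each sampler in turn, cycling over the groups.
    for sampler in cycle(samplers):
        yield next(sampler)


# Hypothetical per-group batch streams standing in for real Lhotse samplers.
asr_batches = iter(["asr-0", "asr-1"])
nmt_batches = iter(["nmt-0", "nmt-1"])
chat_batches = iter(["chat-0", "chat-1"])

fused = round_robin(asr_batches, nmt_batches, chat_batches)
first_six = [next(fused) for _ in range(6)]
print(first_six)
```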
] }, { "cell_type": "code", "execution_count": 13, "id": "c4a7364e-c00f-4f60-9d72-9e7d228121cb", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[NeMo I 2024-10-18 14:12:19 dataloader:481] Creating a Lhotse DynamicBucketingSampler (max_batch_duration=None max_batch_size=None)\n", "[NeMo I 2024-10-18 14:12:19 dataloader:481] Creating a Lhotse DynamicBucketingSampler (max_batch_duration=None max_batch_size=None)\n", "[NeMo I 2024-10-18 14:12:19 dataloader:481] Creating a Lhotse DynamicBucketingSampler (max_batch_duration=None max_batch_size=None)\n" ] } ], "source": [ "import torch\n", "from omegaconf import OmegaConf\n", "from nemo.collections.common.data.lhotse.dataloader import get_lhotse_dataloader_from_config\n", "\n", "# This configuration is typically present in NeMo training configs under `model.train_ds` key.\n", "cfg = OmegaConf.create({\n", " # Note that we have several sampler groups under keys: \"asr\", \"nmt\", and \"chat\".\n", " # Each group has its own data source and sampling settings, i.e., you can define\n", " # completely different batch sizes, sequence length filters, etc. 
for each type of data.\n", " # To enable this behaviour, set multi_config to True.\n", " \"multi_config\": True,\n", " \n", " # The following fields are shared by all groups.\n", " # sampler_fusion key determines how to yield batches from different samplers:\n", " # * \"round_robin\" will just yield one type at a time\n", " # * \"zip\" will sample a batch for each type and concatenate them, yielding a larger multimodal batch\n", " # * \"randomized_round_robin\" expects an extra \"sampler_weights\" option which will define sampling probabilities for each group:\n", " \"sampler_fusion\": \"round_robin\",\n", " \"shuffle\": True,\n", " \"num_workers\": 0,\n", " \"seed\": 0,\n", " \"shard_seed\": \"trng\",\n", " \n", " \"asr\": {\n", " \"input_cfg\": [\n", " {\n", " \"type\": \"lhotse_shar\", \n", " \"shar_path\": d / \"asr_shar\"\n", " }\n", " ],\n", " \"min_duration\": 0.5,\n", " \"max_duration\": 40,\n", " \"use_bucketing\": True,\n", " \"bucket_duration_bins\": [5, 10, 20, 40],\n", " \"bucket_batch_size\": [4, 3, 2, 1],\n", " \"prompt_format\": \"llama2\",\n", "\n", " # Simplified settings for quick tutorial running (don't use these in real applications).\n", " \"concurrent_bucketing\": False,\n", " \"bucket_buffer_size\": 50,\n", " \"shuffle_buffer_size\": 50,\n", " },\n", "\n", " \"nmt\": {\n", " \"input_cfg\": [\n", " {\n", " \"type\": \"txt_pair\", \n", " \"source_paths\": d / \"src.txt\", \n", " \"target_paths\": d / \"tgt.txt\"\n", " }\n", " ],\n", " \"use_multimodal_sampling\": True, # will count tokens instead of seconds\n", " \"min_tokens\": 1,\n", " \"max_tokens\": 32,\n", " \"measure_total_length\": False, # filters by input length instead of total length\n", " \"use_bucketing\": True,\n", " \"bucket_duration_bins\": [[16, 16], [16, 32], [32, 16], [32, 32]], # 2D buckets\n", " \"bucket_batch_size\": [4, 3, 2, 1],\n", " \"prompt_format\": \"llama2\",\n", " \n", " # Simplified settings for quick tutorial running (don't use these in real applications).\n", " 
\"concurrent_bucketing\": False,\n", " \"bucket_buffer_size\": 50,\n", " \"shuffle_buffer_size\": 50,\n", " },\n", "\n", " \"chat\": {\n", " \"input_cfg\": [\n", " {\n", " \"type\": \"multimodal_conversation\", \n", " \"manifest_filepath\": d / \"conv.jsonl\", \n", " \"audio_locator_tag\": \"[audio]\"\n", " }\n", " ],\n", " \"use_multimodal_sampling\": True, # will count tokens instead of seconds\n", " \"min_tokens\": 1,\n", " \"max_tokens\": 1024,\n", " \"measure_total_length\": True,\n", " \"token_equivalent_duration\": 0.08,\n", " \"use_bucketing\": True,\n", " \"bucket_duration_bins\": [128, 256, 512, 1024],\n", " \"bucket_batch_size\": [4, 3, 2, 1],\n", " \"prompt_format\": \"llama2\",\n", "\n", " # Simplified settings for quick tutorial running (don't use these in real applications).\n", " \"concurrent_bucketing\": False,\n", " \"bucket_buffer_size\": 50,\n", " \"shuffle_buffer_size\": 50,\n", " },\n", "})\n", "\n", "\n", "# A no-op PyTorch Dataset class that will just return the data structures.\n", "# In a real training setup, you'll want to implement conversion of a list of examples to a tensor mini-batch\n", "# that is appropriate for your model. \n", "# Note that you can handle multiple types of examples to create an appropriate mini-batch schema for each.\n", "class Identity(torch.utils.data.Dataset):\n", " def __getitem__(self, examples: CutSet):\n", " return examples\n", "\n", "dloader = get_lhotse_dataloader_from_config(cfg, global_rank=0, world_size=1, dataset=Identity(), tokenizer=tokenizer)" ] }, { "cell_type": "code", "execution_count": 14, "id": "e8768e28-663b-4d69-bb31-fbd6b80c0389", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Step 0. 
Examples:\n", "\t* MonoCut(id='dummy-recording-0000_repeat10', start=0, duration=17.11, channel=0, supervisions=[SupervisionSegment(id='dummy-recording-0000', recording_id='dummy-recording-0000', start=0.0, duration=17.11, channel=0, text='Welcome to Lhotse!', language=None, speaker=None, gender=None, custom=None, alignment=None)], features=None, recording=Recording(id='rec', sources=[AudioSource(type='memory', channels=[0], source='')], sampling_rate=16000, num_samples=273760, duration=17.11, channel_ids=[0], transforms=None), custom={'context': 'Repeat after me', 'shard_origin': PosixPath('_tutorial_data/asr_shar/cuts.000000.jsonl.gz'), 'shar_epoch': 10, 'input_ids': tensor([ 1, 9, 4, 9, 59, 78, 89, 78, 74, 93, 9, 74, 79, 93, 78, 91, 9, 86,\n", " 78, 9, 5, 9, 64, 78, 85, 76, 88, 86, 78, 9, 93, 88, 9, 53, 81, 88,\n", " 93, 92, 78, 10, 2]), 'context_ids': tensor([ 1, 9, 4, 9, 59, 78, 89, 78, 74, 93, 9, 74, 79, 93, 78, 91, 9, 86,\n", " 78, 9, 5]), 'answer_ids': tensor([ 9, 64, 78, 85, 76, 88, 86, 78, 9, 93, 88, 9, 53, 81, 88, 93, 92, 78,\n", " 10, 2]), 'mask': tensor([False, False, False, False, False, False, False, False, False, False,\n", " False, False, False, False, False, False, False, False, False, False,\n", " False, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True]), 'dataloading_info': {'rank': 0, 'world_size': 1, 'worker_id': None}})\n", "\t* MonoCut(id='dummy-recording-0000_repeat41', start=0, duration=17.11, channel=0, supervisions=[SupervisionSegment(id='dummy-recording-0000', recording_id='dummy-recording-0000', start=0.0, duration=17.11, channel=0, text='Welcome to Lhotse!', language=None, speaker=None, gender=None, custom=None, alignment=None)], features=None, recording=Recording(id='rec', sources=[AudioSource(type='memory', channels=[0], source='')], sampling_rate=16000, num_samples=273760, duration=17.11, channel_ids=[0], transforms=None), custom={'context': 'Repeat 
after me', 'shard_origin': PosixPath('_tutorial_data/asr_shar/cuts.000000.jsonl.gz'), 'shar_epoch': 41, 'input_ids': tensor([ 1, 9, 4, 9, 59, 78, 89, 78, 74, 93, 9, 74, 79, 93, 78, 91, 9, 86,\n", " 78, 9, 5, 9, 64, 78, 85, 76, 88, 86, 78, 9, 93, 88, 9, 53, 81, 88,\n", " 93, 92, 78, 10, 2]), 'context_ids': tensor([ 1, 9, 4, 9, 59, 78, 89, 78, 74, 93, 9, 74, 79, 93, 78, 91, 9, 86,\n", " 78, 9, 5]), 'answer_ids': tensor([ 9, 64, 78, 85, 76, 88, 86, 78, 9, 93, 88, 9, 53, 81, 88, 93, 92, 78,\n", " 10, 2]), 'mask': tensor([False, False, False, False, False, False, False, False, False, False,\n", " False, False, False, False, False, False, False, False, False, False,\n", " False, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True]), 'dataloading_info': {'rank': 0, 'world_size': 1, 'worker_id': None}})\n", "\n", "Step 1. Examples:\n", "\t* SourceTargetTextExample(source=TextExample(text='A', language=None, tokens=None, custom=None), target=TextExample(text='B', language=None, tokens=None, custom=None), question=None, custom={'input_ids': tensor([ 1, 9, 4, 9, 42, 9, 5, 9, 43, 2]), 'context_ids': tensor([ 1, 9, 4, 9, 42, 9, 5]), 'answer_ids': tensor([ 9, 43, 2]), 'mask': tensor([False, False, False, False, False, False, False, True, True, True]), 'dataloading_info': {'rank': 0, 'world_size': 1, 'worker_id': None}})\n", "\t* SourceTargetTextExample(source=TextExample(text='A', language=None, tokens=None, custom=None), target=TextExample(text='B', language=None, tokens=None, custom=None), question=None, custom={'input_ids': tensor([ 1, 9, 4, 9, 42, 9, 5, 9, 43, 2]), 'context_ids': tensor([ 1, 9, 4, 9, 42, 9, 5]), 'answer_ids': tensor([ 9, 43, 2]), 'mask': tensor([False, False, False, False, False, False, False, True, True, True]), 'dataloading_info': {'rank': 0, 'world_size': 1, 'worker_id': None}})\n", "\t* SourceTargetTextExample(source=TextExample(text='A', language=None, tokens=None, 
custom=None), target=TextExample(text='B', language=None, tokens=None, custom=None), question=None, custom={'input_ids': tensor([ 1, 9, 4, 9, 42, 9, 5, 9, 43, 2]), 'context_ids': tensor([ 1, 9, 4, 9, 42, 9, 5]), 'answer_ids': tensor([ 9, 43, 2]), 'mask': tensor([False, False, False, False, False, False, False, True, True, True]), 'dataloading_info': {'rank': 0, 'world_size': 1, 'worker_id': None}})\n", "\t* SourceTargetTextExample(source=TextExample(text='A', language=None, tokens=None, custom=None), target=TextExample(text='B', language=None, tokens=None, custom=None), question=None, custom={'input_ids': tensor([ 1, 9, 4, 9, 42, 9, 5, 9, 43, 2]), 'context_ids': tensor([ 1, 9, 4, 9, 42, 9, 5]), 'answer_ids': tensor([ 9, 43, 2]), 'mask': tensor([False, False, False, False, False, False, False, True, True, True]), 'dataloading_info': {'rank': 0, 'world_size': 1, 'worker_id': None}})\n", "\n", "Step 2. Examples:\n", "\t* NeMoMultimodalConversation(id='convo-1_repeat0', turns=[TextTurn(value='tell me what you hear', role='user'), AudioTurn(cut=MonoCut(id='rec', start=0.0, duration=17.11, channel=0, supervisions=[], features=None, recording=Recording(id='rec', sources=[AudioSource(type='file', channels=[0], source='_tutorial_data/rec.flac')], sampling_rate=16000, num_samples=273760, duration=17.11, channel_ids=[0], transforms=None), custom=None), role='user', audio_locator_tag='[audio]'), TextTurn(value='somebody just welcomed me to a himalayan mountain', role='assistant')], token_equivalent_duration=0.08, custom={'input_ids': tensor([ 1, 9, 4, 9, 93, 78, 85, 85, 9, 86, 78, 9, 96, 81, 74, 93, 9, 98,\n", " 88, 94, 9, 81, 78, 74, 91, 9, 8, 9, 5, 9, 92, 88, 86, 78, 75, 88,\n", " 77, 98, 9, 83, 94, 92, 93, 9, 96, 78, 85, 76, 88, 86, 78, 77, 9, 86,\n", " 78, 9, 93, 88, 9, 74, 9, 81, 82, 86, 74, 85, 74, 98, 74, 87, 9, 86,\n", " 88, 94, 87, 93, 74, 82, 87, 2]), 'context_ids': tensor([ 1, 9, 4, 9, 93, 78, 85, 85, 9, 86, 78, 9, 96, 81, 74, 93, 9, 98,\n", " 88, 94, 9, 81, 78, 74, 
91, 9, 8, 9, 5]), 'answer_ids': tensor([ 9, 92, 88, 86, 78, 75, 88, 77, 98, 9, 83, 94, 92, 93, 9, 96, 78, 85,\n", " 76, 88, 86, 78, 77, 9, 86, 78, 9, 93, 88, 9, 74, 9, 81, 82, 86, 74,\n", " 85, 74, 98, 74, 87, 9, 86, 88, 94, 87, 93, 74, 82, 87, 2]), 'mask': tensor([False, False, False, False, False, False, False, False, False, False,\n", " False, False, False, False, False, False, False, False, False, False,\n", " False, False, False, False, False, False, False, False, False, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True]), 'dataloading_info': {'rank': 0, 'world_size': 1, 'worker_id': None}})\n", "\t* NeMoMultimodalConversation(id='convo-1_repeat1', turns=[TextTurn(value='tell me what you hear', role='user'), AudioTurn(cut=MonoCut(id='rec', start=0.0, duration=17.11, channel=0, supervisions=[], features=None, recording=Recording(id='rec', sources=[AudioSource(type='file', channels=[0], source='_tutorial_data/rec.flac')], sampling_rate=16000, num_samples=273760, duration=17.11, channel_ids=[0], transforms=None), custom=None), role='user', audio_locator_tag='[audio]'), TextTurn(value='somebody just welcomed me to a himalayan mountain', role='assistant')], token_equivalent_duration=0.08, custom={'input_ids': tensor([ 1, 9, 4, 9, 93, 78, 85, 85, 9, 86, 78, 9, 96, 81, 74, 93, 9, 98,\n", " 88, 94, 9, 81, 78, 74, 91, 9, 8, 9, 5, 9, 92, 88, 86, 78, 75, 88,\n", " 77, 98, 9, 83, 94, 92, 93, 9, 96, 78, 85, 76, 88, 86, 78, 77, 9, 86,\n", " 78, 9, 93, 88, 9, 74, 9, 81, 82, 86, 74, 85, 74, 98, 74, 87, 9, 86,\n", " 88, 94, 87, 93, 74, 82, 87, 2]), 'context_ids': tensor([ 1, 9, 4, 9, 93, 78, 85, 85, 9, 86, 78, 9, 96, 81, 74, 93, 9, 98,\n", " 88, 94, 9, 81, 78, 74, 91, 9, 8, 9, 5]), 'answer_ids': 
tensor([ 9, 92, 88, 86, 78, 75, 88, 77, 98, 9, 83, 94, 92, 93, 9, 96, 78, 85,\n", " 76, 88, 86, 78, 77, 9, 86, 78, 9, 93, 88, 9, 74, 9, 81, 82, 86, 74,\n", " 85, 74, 98, 74, 87, 9, 86, 88, 94, 87, 93, 74, 82, 87, 2]), 'mask': tensor([False, False, False, False, False, False, False, False, False, False,\n", " False, False, False, False, False, False, False, False, False, False,\n", " False, False, False, False, False, False, False, False, False, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True]), 'dataloading_info': {'rank': 0, 'world_size': 1, 'worker_id': None}})\n", "\t* NeMoMultimodalConversation(id='convo-1_repeat2', turns=[TextTurn(value='tell me what you hear', role='user'), AudioTurn(cut=MonoCut(id='rec', start=0.0, duration=17.11, channel=0, supervisions=[], features=None, recording=Recording(id='rec', sources=[AudioSource(type='file', channels=[0], source='_tutorial_data/rec.flac')], sampling_rate=16000, num_samples=273760, duration=17.11, channel_ids=[0], transforms=None), custom=None), role='user', audio_locator_tag='[audio]'), TextTurn(value='somebody just welcomed me to a himalayan mountain', role='assistant')], token_equivalent_duration=0.08, custom={'input_ids': tensor([ 1, 9, 4, 9, 93, 78, 85, 85, 9, 86, 78, 9, 96, 81, 74, 93, 9, 98,\n", " 88, 94, 9, 81, 78, 74, 91, 9, 8, 9, 5, 9, 92, 88, 86, 78, 75, 88,\n", " 77, 98, 9, 83, 94, 92, 93, 9, 96, 78, 85, 76, 88, 86, 78, 77, 9, 86,\n", " 78, 9, 93, 88, 9, 74, 9, 81, 82, 86, 74, 85, 74, 98, 74, 87, 9, 86,\n", " 88, 94, 87, 93, 74, 82, 87, 2]), 'context_ids': tensor([ 1, 9, 4, 9, 93, 78, 85, 85, 9, 86, 78, 9, 96, 81, 74, 93, 9, 98,\n", " 88, 94, 9, 81, 78, 74, 91, 9, 8, 9, 5]), 'answer_ids': tensor([ 9, 92, 88, 86, 78, 75, 
88, 77, 98, 9, 83, 94, 92, 93, 9, 96, 78, 85,\n", " 76, 88, 86, 78, 77, 9, 86, 78, 9, 93, 88, 9, 74, 9, 81, 82, 86, 74,\n", " 85, 74, 98, 74, 87, 9, 86, 88, 94, 87, 93, 74, 82, 87, 2]), 'mask': tensor([False, False, False, False, False, False, False, False, False, False,\n", " False, False, False, False, False, False, False, False, False, False,\n", " False, False, False, False, False, False, False, False, False, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True]), 'dataloading_info': {'rank': 0, 'world_size': 1, 'worker_id': None}})\n", "\n", "Step 3. Examples:\n", "\t* MonoCut(id='dummy-recording-0000_repeat67', start=0, duration=17.11, channel=0, supervisions=[SupervisionSegment(id='dummy-recording-0000', recording_id='dummy-recording-0000', start=0.0, duration=17.11, channel=0, text='Welcome to Lhotse!', language=None, speaker=None, gender=None, custom=None, alignment=None)], features=None, recording=Recording(id='rec', sources=[AudioSource(type='memory', channels=[0], source='')], sampling_rate=16000, num_samples=273760, duration=17.11, channel_ids=[0], transforms=None), custom={'context': 'Repeat after me', 'shard_origin': PosixPath('_tutorial_data/asr_shar/cuts.000000.jsonl.gz'), 'shar_epoch': 67, 'input_ids': tensor([ 1, 9, 4, 9, 59, 78, 89, 78, 74, 93, 9, 74, 79, 93, 78, 91, 9, 86,\n", " 78, 9, 5, 9, 64, 78, 85, 76, 88, 86, 78, 9, 93, 88, 9, 53, 81, 88,\n", " 93, 92, 78, 10, 2]), 'context_ids': tensor([ 1, 9, 4, 9, 59, 78, 89, 78, 74, 93, 9, 74, 79, 93, 78, 91, 9, 86,\n", " 78, 9, 5]), 'answer_ids': tensor([ 9, 64, 78, 85, 76, 88, 86, 78, 9, 93, 88, 9, 53, 81, 88, 93, 92, 78,\n", " 10, 2]), 'mask': tensor([False, False, False, False, False, False, False, False, False, 
False,\n", " False, False, False, False, False, False, False, False, False, False,\n", " False, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True]), 'dataloading_info': {'rank': 0, 'world_size': 1, 'worker_id': None}})\n", "\t* MonoCut(id='dummy-recording-0000_repeat16', start=0, duration=17.11, channel=0, supervisions=[SupervisionSegment(id='dummy-recording-0000', recording_id='dummy-recording-0000', start=0.0, duration=17.11, channel=0, text='Welcome to Lhotse!', language=None, speaker=None, gender=None, custom=None, alignment=None)], features=None, recording=Recording(id='rec', sources=[AudioSource(type='memory', channels=[0], source='')], sampling_rate=16000, num_samples=273760, duration=17.11, channel_ids=[0], transforms=None), custom={'context': 'Repeat after me', 'shard_origin': PosixPath('_tutorial_data/asr_shar/cuts.000000.jsonl.gz'), 'shar_epoch': 16, 'input_ids': tensor([ 1, 9, 4, 9, 59, 78, 89, 78, 74, 93, 9, 74, 79, 93, 78, 91, 9, 86,\n", " 78, 9, 5, 9, 64, 78, 85, 76, 88, 86, 78, 9, 93, 88, 9, 53, 81, 88,\n", " 93, 92, 78, 10, 2]), 'context_ids': tensor([ 1, 9, 4, 9, 59, 78, 89, 78, 74, 93, 9, 74, 79, 93, 78, 91, 9, 86,\n", " 78, 9, 5]), 'answer_ids': tensor([ 9, 64, 78, 85, 76, 88, 86, 78, 9, 93, 88, 9, 53, 81, 88, 93, 92, 78,\n", " 10, 2]), 'mask': tensor([False, False, False, False, False, False, False, False, False, False,\n", " False, False, False, False, False, False, False, False, False, False,\n", " False, True, True, True, True, True, True, True, True, True,\n", " True, True, True, True, True, True, True, True, True, True,\n", " True]), 'dataloading_info': {'rank': 0, 'world_size': 1, 'worker_id': None}})\n", "\n", "Step 4. 
Examples:\n", "\t* SourceTargetTextExample(source=TextExample(text='A', language=None, tokens=None, custom=None), target=TextExample(text='B', language=None, tokens=None, custom=None), question=None, custom={'input_ids': tensor([ 1, 9, 4, 9, 42, 9, 5, 9, 43, 2]), 'context_ids': tensor([ 1, 9, 4, 9, 42, 9, 5]), 'answer_ids': tensor([ 9, 43, 2]), 'mask': tensor([False, False, False, False, False, False, False, True, True, True]), 'dataloading_info': {'rank': 0, 'world_size': 1, 'worker_id': None}})\n", "\t* SourceTargetTextExample(source=TextExample(text='A', language=None, tokens=None, custom=None), target=TextExample(text='B', language=None, tokens=None, custom=None), question=None, custom={'input_ids': tensor([ 1, 9, 4, 9, 42, 9, 5, 9, 43, 2]), 'context_ids': tensor([ 1, 9, 4, 9, 42, 9, 5]), 'answer_ids': tensor([ 9, 43, 2]), 'mask': tensor([False, False, False, False, False, False, False, True, True, True]), 'dataloading_info': {'rank': 0, 'world_size': 1, 'worker_id': None}})\n", "\t* SourceTargetTextExample(source=TextExample(text='A', language=None, tokens=None, custom=None), target=TextExample(text='B', language=None, tokens=None, custom=None), question=None, custom={'input_ids': tensor([ 1, 9, 4, 9, 42, 9, 5, 9, 43, 2]), 'context_ids': tensor([ 1, 9, 4, 9, 42, 9, 5]), 'answer_ids': tensor([ 9, 43, 2]), 'mask': tensor([False, False, False, False, False, False, False, True, True, True]), 'dataloading_info': {'rank': 0, 'world_size': 1, 'worker_id': None}})\n", "\t* SourceTargetTextExample(source=TextExample(text='A', language=None, tokens=None, custom=None), target=TextExample(text='B', language=None, tokens=None, custom=None), question=None, custom={'input_ids': tensor([ 1, 9, 4, 9, 42, 9, 5, 9, 43, 2]), 'context_ids': tensor([ 1, 9, 4, 9, 42, 9, 5]), 'answer_ids': tensor([ 9, 43, 2]), 'mask': tensor([False, False, False, False, False, False, False, True, True, True]), 'dataloading_info': {'rank': 0, 'world_size': 1, 'worker_id': None}})\n", "\n" ] } ], 
"source": [ "for idx, batch in enumerate(dloader):\n", "    if idx == 5:\n", "        break\n", "    print(f\"Step {idx}. Examples:\")\n", "    for item in batch:\n", "        print(\"\\t*\", item)\n", "    print()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.13" } }, "nbformat": 4, "nbformat_minor": 5 }