{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "lJz6FDU1lRzc" }, "outputs": [], "source": [ "\"\"\"\n", "You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.\n", "\n", "Instructions for setting up Colab are as follows:\n", "1. Open a new Python 3 notebook.\n", "2. Import this notebook from GitHub (File -> Upload Notebook -> \"GITHUB\" tab -> copy/paste GitHub URL)\n", "3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select \"GPU\" for hardware accelerator)\n", "4. Run this cell to set up dependencies.\n", "\"\"\"\n", "# If you're using Google Colab and not running locally, run this cell.\n", "\n", "## Install dependencies\n", "!pip install wget\n", "!apt-get install sox libsndfile1 ffmpeg\n", "!pip install text-unidecode\n", "!pip install ipython\n", "\n", "# ## Install NeMo\n", "BRANCH = 'main'\n", "!python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[asr]\n", "\n", "## Install TorchAudio\n", "!pip install torchaudio -f https://download.pytorch.org/whl/torch_stable.html" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Streaming Multitalker ASR" ] }, { "cell_type": "markdown", "metadata": { "id": "v1Jk9etFlRzf" }, "source": [ "## Streaming Multitalker ASR with Self-Speaker Adaptation\n", "\n", "This tutorial shows you how to use NeMo's streaming multitalker ASR system based on the approach described in [(Wang et al., 2025)](https://arxiv.org/abs/2506.22646). This system transcribes each speaker separately in multispeaker audio using speaker activity information from a streaming diarization model.\n", "\n", "### How This Approach Works\n", "\n", "The streaming multitalker Parakeet model uses **self-speaker adaptation**, which means:\n", "\n", "1. **No Speaker Enrollment Required**: You only need speaker activity predictions from a diarization model (like Streaming Sortformer)\n", "2. **Speaker Kernel Injection**: The model injects speaker-specific kernels into encoder layers to focus on each target speaker\n", "3. **Multi-Instance Architecture**: You run one model instance per speaker, and each instance processes the same audio\n", "4. 
**Handles Overlapping Speech**: Each instance focuses on one speaker, so it can transcribe overlapped speech segments\n", "\n", "### Cache-Aware Streaming\n", "\n", "The model uses stateful cache-based inference [(Noroozi et al., 2023)](https://arxiv.org/abs/2312.17279) for streaming:\n", "- Left and right contexts in the encoder are constrained for low latency\n", "- An activation caching mechanism enables the encoder to operate autoregressively during inference\n", "- The model maintains consistent behavior between training and inference" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Multi-Instance Architecture Overview\n", "\n", "The streaming multitalker Parakeet model employs a **multi-instance approach** where one model instance is deployed per speaker:\n", "\n", "*(Figure: Multi-instance architecture, one streaming ASR instance per speaker)*\n", "\n", "Each model instance:\n", "- Receives the same mixed audio input\n", "- Injects **speaker-specific kernels** generated from diarization-based speaker activity\n", "- Produces transcription output specific to its target speaker\n", "- Operates independently and can run in parallel with other instances\n", "\n", "### Speaker Kernel Injection Mechanism\n", "\n", "Learnable speaker kernels are injected into selected layers of the Fast-Conformer encoder:\n", "\n", "*(Figure: Speaker kernel injection into selected Fast-Conformer encoder layers)*\n", "\n", "The speaker kernels are generated through speaker supervision activations that detect speech activity for each target speaker from the streaming diarization output. This enables the encoder states to become more responsive to the targeted speaker's speech characteristics." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup and Data Preparation\n", "\n", "In this tutorial, we will demonstrate streaming multitalker ASR using:\n", "1. **Streaming Sortformer** for real-time speaker diarization\n", "2. **Streaming Multitalker Parakeet** for speaker-wise ASR\n", "\n", "Let's start by downloading a toy example audio file with multiple speakers. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import wget\n", "ROOT = os.getcwd()\n", "data_dir = os.path.join(ROOT,'data')\n", "os.makedirs(data_dir, exist_ok=True)\n", "an4_audio = os.path.join(data_dir,'an4_diarize_test.wav')\n", "an4_rttm = os.path.join(data_dir,'an4_diarize_test.rttm')\n", "if not os.path.exists(an4_audio):\n", "    an4_audio_url = \"https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.wav\"\n", "    an4_audio = wget.download(an4_audio_url, data_dir)\n", "if not os.path.exists(an4_rttm):\n", "    an4_rttm_url = \"https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.rttm\"\n", "    an4_rttm = wget.download(an4_rttm_url, data_dir)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's visualize the waveform and listen to the audio. You'll notice that there are two speakers in this audio clip."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import IPython\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import librosa\n", "\n", "sr = 16000\n", "signal, sr = librosa.load(an4_audio, sr=sr) \n", "\n", "fig, ax = plt.subplots(1, 1)\n", "fig.set_figwidth(20)\n", "fig.set_figheight(2)\n", "plt.plot(np.arange(len(signal)), signal, 'gray')\n", "fig.suptitle('Multispeaker Audio Waveform', fontsize=16)\n", "plt.xlabel('time (secs)', fontsize=18)\n", "ax.margins(x=0)\n", "plt.ylabel('signal strength', fontsize=16)\n", "a, _ = plt.xticks()\n", "plt.xticks(a, a/sr)\n", "\n", "IPython.display.Audio(signal, rate=sr)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 1: Streaming Speaker Diarization\n", "\n", "Now that we have a multispeaker audio file, the first step is to perform streaming speaker diarization using **Streaming Sortformer**. This will provide us with speaker activity information needed for self-speaker adaptation.\n", "\n", "### Download Streaming Sortformer Diarization Model\n", "\n", "To download the streaming Sortformer diarizer from [HuggingFace](https://huggingface.co/nvidia), you need a [HuggingFace Access Token](https://huggingface.co/docs/hub/en/security-tokens). \n", "\n", "Alternatively, you can download the `.nemo` file from [Streaming Sortformer HuggingFace model card](https://huggingface.co/nvidia/diar_streaming_sortformer_4spk-v2) and specify the file path." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from nemo.collections.asr.models import SortformerEncLabelModel\n", "from huggingface_hub import get_token as get_hf_token\n", "import torch\n", "\n", "if get_hf_token() is not None and get_hf_token().startswith(\"hf_\"):\n", " # If you have logged into HuggingFace hub and have access token \n", " diar_model = SortformerEncLabelModel.from_pretrained(\"nvidia/diar_streaming_sortformer_4spk-v2\")\n", "else:\n", " # You can download \".nemo\" file from https://huggingface.co/nvidia/diar_streaming_sortformer_4spk-v2 and specify the path.\n", " diar_model = SortformerEncLabelModel.restore_from(restore_path=\"/path/to/diar_streaming_sortformer_4spk-v2.nemo\", map_location=torch.device('cuda'), strict=False)\n", "diar_model.eval()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Configure Streaming Parameters for Sortformer\n", "\n", "Set streaming parameters (all measured in 80ms frames):\n", "- `chunk_len`: Number of frames in a processing chunk\n", "- `chunk_right_context`: Right context length (determines latency with `chunk_len`)\n", "- `fifo_len`: Number of previous frames from FIFO queue\n", "- `spkcache_update_period`: Frames extracted from FIFO for speaker cache update\n", "- `spkcache_len`: Total frames in speaker cache\n", "\n", "The input buffer latency is determined by `chunk_len` + `chunk_right_context`." 
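, "\n", "\n", "For example, the 1.04s latency setup used in the next cell has `chunk_len = 6` and `chunk_right_context = 7`, i.e., (6 + 7) frames * 80ms = 1.04s of input buffering."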
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import time\n", "import math\n", "import torch\n", "import torch.amp\n", "from tqdm import tqdm \n", "\n", "# If cuda is available, assign the model to cuda\n", "if torch.cuda.is_available():\n", " diar_model.to(torch.device(\"cuda\"))\n", "\n", "global autocast\n", "autocast = torch.amp.autocast(diar_model.device.type, enabled=True)\n", "\n", "# Set the streaming parameters corresponding to 1.04s latency setup\n", "diar_model.sortformer_modules.chunk_len = 6\n", "diar_model.sortformer_modules.spkcache_len = 188\n", "diar_model.sortformer_modules.chunk_right_context = 7\n", "diar_model.sortformer_modules.fifo_len = 188\n", "diar_model.sortformer_modules.spkcache_update_period = 144\n", "diar_model.sortformer_modules.log = False\n", "\n", "# Validate that the streaming parameters are set correctly\n", "diar_model.sortformer_modules._check_streaming_parameters()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Feature Extraction and Streaming Diarization\n", "\n", "Extract log-mel features from the audio signal and prepare for streaming diarization:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "audio_signal = torch.tensor(signal).unsqueeze(0).to(diar_model.device)\n", "audio_signal_length = torch.tensor([audio_signal.shape[1]]).to(diar_model.device)\n", "processed_signal, processed_signal_length = diar_model.preprocessor(input_signal=audio_signal, length=audio_signal_length)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Run Streaming Diarization Loop\n", "\n", "Initialize the streaming state and run the streaming diarization to get speaker activity predictions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "batch_size = 1\n", "processed_signal_offset = torch.zeros((batch_size,), dtype=torch.long, device=diar_model.device)\n", "\n", "streaming_state = diar_model.sortformer_modules.init_streaming_state(\n", " batch_size=batch_size,\n", " async_streaming=True,\n", " device=diar_model.device\n", " )\n", "total_preds = torch.zeros((batch_size, 0, diar_model.sortformer_modules.n_spk), device=diar_model.device)\n", "\n", "streaming_loader = diar_model.sortformer_modules.streaming_feat_loader(\n", " feat_seq=processed_signal,\n", " feat_seq_length=processed_signal_length,\n", " feat_seq_offset=processed_signal_offset,\n", ")\n", "\n", "num_chunks = math.ceil(\n", " processed_signal.shape[2] / (diar_model.sortformer_modules.chunk_len * diar_model.sortformer_modules.subsampling_factor)\n", ")\n", "\n", "print(f\"Processing {num_chunks} chunks for diarization...\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Run streaming diarization\n", "for i, chunk_feat_seq_t, feat_lengths, left_offset, right_offset in tqdm(\n", " streaming_loader,\n", " total=num_chunks,\n", " desc=\"Streaming Diarization\",\n", " disable=False,\n", "):\n", " with torch.inference_mode():\n", " with autocast:\n", " streaming_state, total_preds = diar_model.forward_streaming_step(\n", " processed_signal=chunk_feat_seq_t,\n", " processed_signal_length=feat_lengths,\n", " streaming_state=streaming_state,\n", " total_preds=total_preds,\n", " left_offset=left_offset,\n", " right_offset=right_offset,\n", " )\n", "\n", "print(f\"Diarization complete! 
Total predictions shape: {total_preds.shape}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualize Diarization Results\n", "\n", "Let's visualize the speaker diarization predictions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def plot_diarout(preds):\n", "    \"\"\"Visualize diarization predictions\"\"\"\n", "    preds_mat = preds.cpu().numpy().transpose()\n", "    cmap_str, grid_color_p = 'viridis', 'gray'\n", "    LW, FS = 0.4, 36\n", "\n", "    yticklabels = [\"spk0\", \"spk1\", \"spk2\", \"spk3\"]\n", "    yticks = np.arange(len(yticklabels))\n", "    fig, axs = plt.subplots(1, 1, figsize=(30, 3))\n", "\n", "    axs.imshow(preds_mat, cmap=cmap_str, interpolation='nearest')\n", "    axs.set_title('Diarization Predictions (Speaker Activity)', fontsize=FS)\n", "    axs.set_xticks(np.arange(-.5, preds_mat.shape[1], 1), minor=True)\n", "    axs.set_yticks(yticks)\n", "    axs.set_yticklabels(yticklabels)\n", "    axs.set_xlabel(\"80 ms Frames\", fontsize=FS)\n", "    axs.grid(which='minor', color=grid_color_p, linestyle='-', linewidth=LW)\n", "    plt.show()\n", "\n", "plot_diarout(total_preds[0,:])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 2: Streaming Multitalker ASR with Self-Speaker Adaptation\n", "\n", "Now that we have speaker activity information from diarization, we can load the streaming multitalker Parakeet model and perform speaker-wise ASR.\n", "\n", "### Download Streaming Multitalker Parakeet Model" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from nemo.collections.asr.models import ASRModel\n", "import torch\n", "\n", "if get_hf_token() is not None and get_hf_token().startswith(\"hf_\"):\n", "    # If you have logged into HuggingFace hub and have access token\n", "    asr_model = ASRModel.from_pretrained(\"nvidia/multitalker-parakeet-streaming-0.6b-v1\")\n", "else:\n", "    # You can download \".nemo\" file from https://huggingface.co/nvidia/multitalker-parakeet-streaming-0.6b-v1 and specify the path.\n", "    asr_model = ASRModel.restore_from(restore_path=\"/path/to/multitalker-parakeet-streaming-0.6b-v1.nemo\", map_location=torch.device('cuda'))\n", "\n", "asr_model.eval()\n", "if torch.cuda.is_available():\n", "    asr_model.to(torch.device(\"cuda\"))\n", "\n", "print(\"ASR Model loaded successfully!\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Configure Cache-Aware Streaming Parameters\n", "\n",
"Set the streaming attention context size for the ASR model. The latency is determined by the attention context configuration (measured in 80ms frames):\n", "\n", "- `[70, 0]`: Chunk size = 1 (1 * 80ms = 0.08s)\n", "- `[70, 1]`: Chunk size = 2 (2 * 80ms = 0.16s)\n", "- `[70, 6]`: Chunk size = 7 (7 * 80ms = 0.56s)\n", "- `[70, 13]`: Chunk size = 14 (14 * 80ms = 1.12s)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Set streaming parameters for 1.12s ASR latency (close to the 1.04s diarization latency)\n", "att_context_size = [70, 13]  # [left_context, right_context] in frames\n", "\n", "print(f\"ASR streaming configured with attention context: {att_context_size}\")\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Determine Number of Active Speakers\n", "\n", "The number of active speakers is detected automatically from the streaming diarization output. A quick way to inspect the detected speaker slots from the diarization predictions is shown just before the streaming ASR run below." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 3: Multi-Instance Streaming ASR\n", "\n", "Now we'll run streaming multitalker ASR using the multi-instance architecture. We'll create one model instance per detected speaker.\n", "\n", "### Step 3-1: Prepare the configurations\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Streaming speech processing involves a fair amount of cache handling, so we first set up a config dataclass that aggregates all the parameters in one place. You can find this class in the example multitalker streaming ASR script: [speech_to_text_multitalker_streaming_infer.py](https://raw.githubusercontent.com/NVIDIA-NeMo/NeMo/main/examples/asr/asr_cache_aware_streaming/speech_to_text_multitalker_streaming_infer.py)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from dataclasses import dataclass, field\n", "from typing import List, Optional\n", "\n", "@dataclass\n", "class MultitalkerTranscriptionConfig:\n", "    \"\"\"\n", "    Configuration for Multi-talker transcription with an ASR model and a diarization model.\n", "    \"\"\"\n", "    # Required configs\n", "    diar_model: Optional[str] = None\n", "    diar_pretrained_name: Optional[str] = None\n", "    max_num_of_spks: Optional[int] = 4\n", "    parallel_speaker_strategy: bool = True\n", "    masked_asr: bool = True\n", "    mask_preencode: bool = False\n", "    cache_gating: bool = True\n", "    cache_gating_buffer_size: int = 2\n", "    single_speaker_mode: bool = False\n", "    feat_len_sec: float = 0.01\n", "\n", "    # General configs\n", "    session_len_sec: float = -1\n", "    num_workers: int = 8\n", "    random_seed: Optional[int] = None\n", "    log: bool = True\n", "\n", "    # Streaming diarization configs\n", "    streaming_mode: bool = True\n", "    spkcache_len: int = 188\n", "    spkcache_refresh_rate: int = 0\n", "    fifo_len: int = 188\n", "    chunk_len: int = 0\n", "    chunk_left_context: int = 0\n", "    chunk_right_context: int = 0\n", "\n", "    # If `cuda` is a negative number, inference will be on CPU only.\n", "    cuda: Optional[int] = None\n", "    allow_mps: bool = False\n", "    matmul_precision: str = \"highest\"  # Literal[\"highest\", \"high\", \"medium\"]\n", "\n", "    # ASR Configs\n", "    asr_model: Optional[str] = None\n", "    device: str = 'cuda'\n", "    audio_file: Optional[str] = None\n", "    manifest_file: Optional[str] = None\n", "    att_context_size: Optional[List[int]] = field(default_factory=lambda: [70, 13])\n", "    use_amp: bool = True\n", "    debug_mode: bool = False\n", "    deploy_mode: bool = False\n", "    batch_size: int = 32\n", "    chunk_size: int = -1\n", "    shift_size: int = -1\n", "    left_chunks: int = 2\n",
"    online_normalization: bool = False\n", "    output_path: Optional[str] = None\n", "    pad_and_drop_preencoded: bool = False\n", "    set_decoder: Optional[str] = None  # [\"ctc\", \"rnnt\"]\n", "    generate_realtime_scripts: bool = False\n", "    spk_supervision: str = \"diar\"  # [\"diar\", \"rttm\"]\n", "    binary_diar_preds: bool = False\n", "\n", "    # Multitalker transcription configs\n", "    verbose: bool = False\n", "    word_window: int = 50\n", "    sent_break_sec: float = 30.0\n", "    fix_prev_words_count: int = 5\n", "    update_prev_words_sentence: int = 5\n", "    left_frame_shift: int = -1\n", "    right_frame_shift: int = 0\n", "    min_sigmoid_val: float = 1e-2\n", "    discarded_frames: int = 8\n", "    print_time: bool = True\n", "    print_sample_indices: List[int] = field(default_factory=lambda: [0])\n", "    colored_text: bool = True\n", "    real_time_mode: bool = False\n", "    print_path: Optional[str] = None\n", "    ignored_initial_frame_steps: int = 5\n", "    finetune_realtime_ratio: float = 0.01" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using this configuration dataclass, assign the designated streaming speaker diarization parameters to the diarization model. This ensures that the processing window size and cache sizes are synchronized." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from omegaconf import OmegaConf\n", "# Create configuration object for multitalker transcription\n", "cfg = MultitalkerTranscriptionConfig()\n", "# Convert dataclass to OmegaConf DictConfig so it has .get() method\n", "cfg = OmegaConf.structured(cfg)\n", "\n", "cfg.att_context_size = [70, 13]\n", "cfg.output_path = \"/path/to/output.json\"\n", "cfg.audio_file = an4_audio\n", "print(f\"ASR streaming configured with attention context: {cfg.att_context_size}\")\n", "\n", "# Replace any 'None' strings with actual None values\n", "for key in cfg:\n", "    cfg[key] = None if cfg[key] == 'None' else cfg[key]\n", "\n", "# Set streaming diarization parameters on the diar_model (matching the streaming diarization setup used earlier)\n", "diar_model.streaming_mode = cfg.streaming_mode\n", "diar_model.sortformer_modules.chunk_len = cfg.chunk_len if cfg.chunk_len > 0 else 6\n", "diar_model.sortformer_modules.spkcache_len = cfg.spkcache_len\n", "diar_model.sortformer_modules.chunk_left_context = cfg.chunk_left_context\n", "diar_model.sortformer_modules.chunk_right_context = cfg.chunk_right_context if cfg.chunk_right_context > 0 else 7\n", "diar_model.sortformer_modules.fifo_len = cfg.fifo_len\n", "diar_model.sortformer_modules.log = cfg.log\n", "diar_model.sortformer_modules.spkcache_refresh_rate = cfg.spkcache_refresh_rate\n", "\n", "# Set online normalization flag\n", "online_normalization = cfg.online_normalization\n", "\n", "# Set pad_and_drop_preencoded flag\n", "pad_and_drop_preencoded = cfg.pad_and_drop_preencoded\n", "\n", "print(\"Configuration setup complete!\")\n", "print(f\"Audio file: {cfg.audio_file}\")\n", "print(f\"Streaming mode: {diar_model.streaming_mode}\")\n", "print(f\"Diar model chunk_len: {diar_model.sortformer_modules.chunk_len}\")\n", "print(f\"Diar model chunk_right_context: {diar_model.sortformer_modules.chunk_right_context}\")\n", "print(f\"Online normalization: {online_normalization}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Run Multi-Instance Streaming ASR\n", "\n", "For each active speaker, we'll run a separate ASR model instance with speaker-specific kernel injection. In practice, these instances can run in parallel."
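] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before starting the streaming run, here is a quick sanity check on the diarization output (a minimal sketch, not part of the NeMo pipeline): it thresholds the streaming diarization predictions `total_preds` of shape `[batch, frames, num_speakers]` with an assumed activity threshold of 0.5 to see which speaker slots were active in this clip." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Minimal sketch: count the speaker slots that the streaming diarizer marked as active\n", "# anywhere in the clip. The 0.5 activity threshold is an assumption for illustration only.\n", "activity_threshold = 0.5\n", "active_frames_per_spk = (total_preds[0] > activity_threshold).sum(dim=0)  # active frames per speaker slot\n", "active_speaker_ids = torch.nonzero(active_frames_per_spk > 0).squeeze(-1).tolist()\n", "print(f\"Active frames per speaker slot: {active_frames_per_spk.tolist()}\")\n", "print(f\"Detected {len(active_speaker_ids)} active speaker(s): {active_speaker_ids}\")"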
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For testing purposes, first set up a single-sample batch and feed it into the streaming buffer. The streaming buffer simulates an input audio stream so that we can run the multitalker ASR model in a streaming manner." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from nemo.collections.asr.parts.utils.streaming_utils import CacheAwareStreamingAudioBuffer\n", "\n", "samples = [\n", "    {\n", "        'audio_filepath': cfg.audio_file,\n", "    }\n", "]\n", "streaming_buffer = CacheAwareStreamingAudioBuffer(\n", "    model=asr_model,\n", "    online_normalization=online_normalization,\n", "    pad_and_drop_preencoded=cfg.pad_and_drop_preencoded,\n", ")\n", "streaming_buffer.append_audio_file(audio_filepath=cfg.audio_file, stream_id=-1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 3-2: Run Multitalker ASR with the prepared configurations and streaming buffer\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from nemo.collections.asr.parts.utils.multispk_transcribe_utils import SpeakerTaggedASR\n", "from pprint import pprint\n", "\n", "streaming_buffer_iter = iter(streaming_buffer)\n", "multispk_asr_streamer = SpeakerTaggedASR(cfg, asr_model, diar_model)\n", "print(f\"Streaming buffer length: {len(streaming_buffer)}\")\n", "\n", "for step_num, (chunk_audio, chunk_lengths) in enumerate(streaming_buffer_iter):\n", "    drop_extra_pre_encoded = (\n", "        0\n", "        if step_num == 0 and not pad_and_drop_preencoded\n", "        else asr_model.encoder.streaming_cfg.drop_extra_pre_encoded\n", "    )\n", "    with torch.inference_mode():\n", "        with autocast:\n", "            multispk_asr_streamer.perform_parallel_streaming_stt_spk(\n", "                step_num=step_num,\n", "                chunk_audio=chunk_audio,\n", "                chunk_lengths=chunk_lengths,\n", "                is_buffer_empty=streaming_buffer.is_buffer_empty(),\n", "                drop_extra_pre_encoded=drop_extra_pre_encoded,\n", "            )\n", "    # Print the intermediate speaker-wise segments accumulated so far\n", "    pprint(multispk_asr_streamer.instance_manager.batch_asr_states[0].seglsts)\n", "\n", "seglst_dict_list = multispk_asr_streamer.generate_seglst_dicts_from_parallel_streaming(samples=samples)\n", "\n", "print(f\"SegLST style multispeaker transcription\\n {seglst_dict_list}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Results and Analysis\n", "\n", "### Display Speaker-Wise Transcriptions\n", "\n", "Let's display the final transcriptions for each speaker: " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"\\n\" + \"=\"*80)\n", "print(\"Final transcriptions with speaker tagging and timestamps\")\n", "print(\"=\"*80 + \"\\n\")\n", "\n", "# Display SegLST dict list with timestamps and speaker-tagged transcriptions\n", "# Format: {'speaker': 'speaker_0', 'start_time': 2.64, 'end_time': 5.6, 'words': 'eleven twenty seven fifty seven', 'session_id': 'an4_diarize_test'}\n", "if seglst_dict_list:\n", " for idx, seglst in enumerate(seglst_dict_list):\n", " speaker = seglst.get('speaker', 'Unknown')\n", " start_time = seglst.get('start_time', 0.0)\n", " end_time = seglst.get('end_time', 0.0)\n", " words = seglst.get('words', '')\n", " session_id = seglst.get('session_id', '')\n", " \n", " print(f\"[{idx+1}] {speaker} ({start_time:.2f}s - {end_time:.2f}s): {words}\")\n", " \n", " 
print(f\"\\n{'-'*80}\")\n", " print(f\"Total segments: {len(seglst_dict_list)}\")\n", "else:\n", " print(\"No transcriptions available in seglst_dict_list.\")\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This tutorial demonstrated streaming multitalker ASR using self-speaker adaptation with NeMo:\n", "\n", "### Key Components\n", "\n", "1. **Streaming Sortformer Diarization**: Provides real-time speaker activity predictions using arrival-order speaker cache (AOSC)\n", "2. **Cache-Aware Streaming ASR**: FastConformer-based model with stateful cache-based inference for low-latency transcription\n", "3. **Self-Speaker Adaptation**: Speaker kernels injected into encoder layers based on diarization, enabling speaker-focused recognition without enrollment\n", "4. **Multi-Instance Architecture**: One model instance per speaker, enabling parallel processing and handling of severe speech overlap\n", "\n", "### Advantages\n", "\n", "- **No enrollment required**: Only needs diarization output, no pre-recorded speaker audio\n", "- **Handles overlap**: Each instance focuses on one speaker, even during fully overlapped speech\n", "- **Streaming capable**: Real-time processing with configurable latency (0.08s to 1.12s+)\n", "- **State-of-the-art performance**: Achieves strong results on challenging multitalker benchmarks\n", "\n", "### Configuration Summary\n", "\n", "In this tutorial, we used:\n", "- **Diarization latency**: 1.04s (chunk_len=6, chunk_right_context=7)\n", "- **ASR latency**: 1.12s (att_context_size=[70, 13])\n", "- **Number of speakers**: Automatically detected from diarization output" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## References\n", "\n", "[1] [Speaker Targeting via Self-Speaker Adaptation for Multi-talker ASR](https://arxiv.org/abs/2506.22646) \n", "\n", "\n", "[2] [Stateful Conformer with Cache-based Inference for Streaming Automatic Speech Recognition](https://arxiv.org/abs/2312.17279) \n", "\n", "[3] [Streaming Sortformer: Speaker Cache-Based Online Speaker Diarization with Arrival-Time Ordering](https://arxiv.org/abs/2507.18446)\n", "\n", "[4] [NEST: Self-supervised Fast Conformer as All-purpose Seasoning to Speech Processing Tasks](https://arxiv.org/abs/2408.13106)\n", "\n", "[5] [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://arxiv.org/abs/2305.05084)" ] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [], "name": "ASR_with_NeMo.ipynb", "provenance": [], "toc_visible": true }, "kernelspec": { "display_name": "nemo093025", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" }, "pycharm": { "stem_cell": { "cell_type": "raw", "metadata": { "collapsed": false }, "source": [] } } }, "nbformat": 4, "nbformat_minor": 4 }