Dataset columns (all string fields): en_url, en_title, en_content, zh_url, zh_title, zh_content.
https://developer.nvidia.com/blog/ai-for-climate-energy-and-ecosystem-resilience-at-nvidia-gtc-2025/
AI for Climate, Energy, and Ecosystem Resilience at NVIDIA GTC 2025
From mitigating climate change to improving disaster response and environmental monitoring, AI is reshaping how we tackle critical global challenges. Advancements in fast, high-resolution climate forecasting, real-time monitoring, and digital twins are equipping scientists, policy-makers, and industry leaders with data-driven tools to understand, plan for, and respond to a warming planet. At NVIDIA GTC 2025 , running March 17–21, thought leaders, scientists, developers, and innovators will highlight how AI is helping shape a more sustainable and resilient future. The following sessions showcase AI’s role in climate forecasting, disaster mitigation, and conservation efforts, helping communities adapt to an increasingly unpredictable world. Strengthening extreme weather predictions and disaster response As global warming intensifies, and extreme weather events become more severe and frequent, communities need faster and more precise natural disaster predictions and response strategies. AI is improving flood, wildfire, and hurricane modeling, enabling earlier warnings and more effective mitigation efforts. Using large-scale weather simulations, satellite data analysis, and real-time predictive insights, AI is helping emergency responders and decision-makers minimize damage, improve disaster resilience, and keep communities safe. Sessions Transform Natural Catastrophe Risk Simulations with Advanced Computational Tools AXA, AWS, and NVIDIA use Earth-2 simulations to model extreme weather events at unprecedented scale and precision. These tools help insurers, urban planners, and governments prepare for disasters by improving risk estimation and resilience planning, reducing the financial and societal impact of natural disasters. Boosting Earth System Model Outputs Using Exascale Climate Emulators Researchers at KAUST and Saint Louis University use exascale climate emulators powered by NVIDIA GPUs to accelerate and refine earth system model outputs. Achieving ultra-high spatial resolution (3.5 km), these models enable more accurate storm and climate simulations, improving extreme weather predictions, and helping emergency responders, insurers, and policymakers improve disaster response planning and climate resilience efforts. Harnessing AI for Advanced Flood Risk Modelling and Mitigation Strategies JBA Risk Management uses AI-driven weather models, including Spherical Fourier Neural Operators, to simulate storm seasons with greater accuracy. Using NVIDIA NIM, these models improve flood risk assessment, disaster response planning, and infrastructure investment decisions, all while reducing computational costs. Near-Real-Time Fire Detection Leveraging Edge AI in Space: Transforming Thermal Earth Observation with NVIDIA Wildfires require rapid response to minimize devastation. OroraTech’s use of NVIDIA Jetson technology onboard CubeSats delivers fire detection within 60 seconds, providing first responders with critical real-time data to deploy resources more effectively. Autonomous Systems and Remote Sensing for Better Earth Data Autonomous aircraft equipped with AI are revolutionizing environmental monitoring by collecting high-resolution data at scale. MIT researchers showcase how these low-cost, long-range systems gather critical data for precision agriculture, disaster response, and environmental assessments, providing actionable insights at scale. 
Boosting climate forecasting for energy and infrastructure planning Accurate, long-term climate forecasting is essential for guiding infrastructure investments, optimizing energy grids, and enhancing sustainability planning. AI-powered models make climate projections faster and more precise, guiding solar energy integration, climate-resilient infrastructure development, and sustainability strategies. These high-resolution, data-driven insights help city planners and decision-makers anticipate evolving conditions for a more resilient future. Sessions The Role of AI and Accelerated Computing in Understanding and Mitigating Urban Climate Change Researchers are using AI, digital twins, and accelerated computing to model rising temperatures, air pollution, and extreme weather in cities. This session explores how generative AI and machine learning analyze satellite data, IoT sensors, and social insights to create high-resolution simulations of urban heat islands and pollution patterns. Experts will discuss how these insights help guide climate-resilient infrastructure, energy efficiency, and targeted adaptation strategies while addressing challenges like computational efficiency and equitable access to AI-driven solutions. Enhancing Photovoltaic Power Predictions with High-Resolution Weather Forecasting from NVIDIA Earth-2 NVIDIA Earth-2 is revolutionizing solar energy forecasting with ultra-precise, AI-driven weather predictions. In collaboration with GCL and Peking University, researchers show how time series models and high-resolution weather data improve photovoltaic power forecasts, enhancing grid stability, and improving renewable energy planning for power providers and policymakers. Applying AI Weather Models with NVIDIA Earth-2 NVIDIA Earth-2 AI-powered forecasting models generate high-resolution weather predictions at a fraction of the cost and time of traditional numerical models. This training lab explores how AI-driven downscaling techniques improve forecasting accuracy for industries such as energy and agriculture, providing more accurate forecasting and better decision-making across critical sectors. Advancing AI-driven environmental monitoring and conservation AI is transforming environmental monitoring, conservation, and ecosystem management with advanced digital twin technology and autonomous systems. From high-resolution coral reef modeling to large-scale ecosystem assessments, these innovations provide scientists and conservationists with critical insights to guide conservation strategies and protect biodiversity. Session Exploring Earth’s Oceans: Using Digital Twins to Drive Digital Ocean Collaboration Oceans regulate climate and support biodiversity, but their complexity makes them challenging to study. MITRE uses NVIDIA Omniverse to create digital twins of ocean systems, enabling real-time simulations and predictive modeling. These tools foster collaboration among scientists, policymakers, and educators to improve marine resource management, drive conservation efforts, and bolster climate resilience. In-person posters Photo-Realistic 3D Digital Twin to Enhance Understanding of the Great Barrier Reef ​​AI-powered 3D digital twins are advancing how researchers model and monitor coral reef ecosystems. Using Reef-NeRF and Reef-3DGS, scientists can create highly detailed reconstructions to track coral health, measure structural changes, and assess the impacts of climate change. 
These tools provide conservationists and policymakers with critical data to inform reef recovery strategies and improve long-term conservation efforts Mangrove Simulation Predicts Carbon Sequestration Solutions Mangrove forests are a key solution to carbon capture and climate mitigation, but effective restoration requires precise monitoring and management. ID Water Co., Ltd. is using AI-powered irrigation automation and GPU-driven carbon sink modeling to improve mangrove reforestation efforts. These models improve survival rates, optimize carbon sequestration, and address verification challenges, making large-scale restoration more feasible and impactful. Revolutionizing Antarctic Flora Monitoring with AI and Drones AI-powered drones and hyperspectral imaging are enabling high-precision mapping of Antarctic vegetation. Using NVIDIA GPUs, researchers can detect moss and lichen with over 99% accuracy, providing key insights into climate-driven ecosystem changes while reducing the need for invasive field surveys in this fragile ecosystem. Join our global community of developers, scientists, business leaders, and innovators at NVIDIA GTC 2025 to discover how AI drives solutions to our most complex challenges. From NVIDIA CEO Jensen Huang’s must-see keynote to over 900 sessions, 300+ exhibits, hands-on technical training, and exclusive networking events, GTC offers a firsthand look at AI’s real-world impact. The session catalog is open—start building your agenda today.
https://developer.nvidia.com/zh-cn/blog/ai-for-climate-energy-and-ecosystem-resilience-at-nvidia-gtc-2025/
NVIDIA GTC 2025 上的人工智能促进气候、能源和生态系统复原力
从减缓气候变化到改进灾害响应和环境监测,AI 正在重塑我们应对重大全球挑战的方式。快速、高分辨率的气候预报、实时监控和数字孪生技术的进步为科学家、政策制定者和行业领导者提供了数据驱动的工具,帮助他们了解、规划和应对一个变暖的星球。 在 3 月 17 日至 21 日举行的 NVIDIA GTC 2025 大会上,思想领袖、科学家、开发者和创新者将重点介绍 AI 如何帮助塑造更具可持续性和韧性的未来。以下会议展示了 AI 在气候预测、灾难缓解和保护工作中发挥的作用,帮助社区适应日益不可预测的世界。 加强极端天气预测和灾害响应 随着全球变暖加剧,极端天气事件变得更加严重和频繁,社区需要更快、更精确的自然灾害预测和响应策略。AI 正在改进洪水、野火和飓风建模,从而实现更早的警报和更有效的缓解措施。借助大规模天气模拟、卫星数据分析和实时预测性见解,AI 正在帮助应急响应人员和决策者尽可能减少损失、提高抗灾能力,并确保社区安全。 会议 借助高级计算工具转变自然灾害风险模拟 AXA、AWS 和 NVIDIA 使用 Earth-2 模拟以前所未有的规模和精度对极端天气事件进行建模。这些工具通过改进风险估计和恢复能力规划,减少自然灾害的金融和社会影响,帮助保险公司、城市规划人员和政府做好灾害准备。 使用百亿亿级 (Exascale) 气候模拟器提升地球系统模型的输出 KAUST 和圣路易斯大学的研究人员使用由 NVIDIA GPUs 提供支持的百亿亿级 (Exascale) 气候模拟器来加速和优化地球系统模型的输出。这些模型可实现超高的空间分辨率 (3.5 公里),从而能够更准确地模拟风暴和气候,改进极端天气预测,并帮助应急响应人员、保险公司和政策制定者改进灾害应对规划和气候弹性工作。 将 AI 用于高级洪水风险建模和缓解策略 JBA Risk Management 使用 AI 驱动的天气模型 (包括 Spherical Fourier Neural Operators) 更准确地模拟风暴季。借助 NVIDIA NIM,这些模型可改善洪水风险评估、灾害应对规划和基础设施投资决策,同时降低计算成本。 在太空中利用边缘 AI 进行近乎实时的火灾检测:借助 NVIDIA 改变热地球观测方式 野火需要快速响应,以尽可能减少破坏。OroraTech 在 CubeSats 上使用 NVIDIA Jetson 技术,可在 60 秒内完成火灾检测,从而为急救人员提供关键的实时数据,以便更有效地部署资源。 利用自主系统和遥感获取更好的地球数据 配备 AI 的自主飞机正在大规模收集高分辨率数据,从而彻底改变环境监测。麻省理工学院的研究人员展示了这些低成本的远程系统如何为精准农业、灾害响应和环境评估收集关键数据,并大规模提供可行的见解。 提升气候预测能力以加强能源和基础设施规划 准确的长期气候预测对于指导基础设施投资、优化电网和增强可持续发展规划至关重要。AI 驱动的模型能够更快、更精确地进行气候预测,为太阳能集成、气候弹性基础设施开发和可持续发展策略提供指导。这些由数据驱动的高分辨率见解可帮助城市规划人员和决策者预测不断变化的条件,打造更具弹性的未来。 会议 AI 和加速计算在了解和减缓城市气候变化方面的作用 研究人员正在利用 AI、数字孪生和加速计算对城市中的气温升高、空气污染和极端天气进行建模。此会议将探讨生成式 AI 和机器学习如何分析卫星数据、物联网传感器和社会见解,以创建城市热岛和污染模式的高分辨率模拟。专家们将讨论这些见解如何帮助指导适应气候变化的基础设施、能效和有针对性的适应战略,同时应对计算效率和公平获取 AI 驱动的解决方案等挑战。 借助 NVIDIA Earth-2 的高分辨率天气预报增强太阳能发电预测 NVIDIA Earth-2 通过 AI 驱动的超精确天气预测,正在彻底改变太阳能预测。研究人员与 GCL 和北京大学合作,展示了时间序列模型和高分辨率天气数据如何改善太阳能发电预测、增强电网稳定性,以及如何改善电力供应商和政策制定者的可再生能源规划。 将 AI 天气模型与 NVIDIA Earth-2AI 驱动的预测模型结合使用 ,生成高分辨率天气预测,所需的成本和时间远低于传统数值模型。此训练实验室将探讨 AI 驱动的降比例技术如何提高能源和农业等行业的预测准确性,从而为关键领域提供更准确的预测和更好的决策。 推进 AI 驱动的环境监测和保护 AI 正在利用先进的数字孪生技术和自主系统,改变环境监测、保护和生态系统管理。从高分辨率珊瑚礁建模到大规模生态系统评估,这些创新为科学家和自然保护主义者提供了重要见解,以指导保护策略和保护生物多样性。 会议 探索地球的海洋:使用数字孪生推动数字海洋协作海洋调节气候并支持生物多样性 ,但其复杂性使研究这些海洋具有挑战性。MITRE 使用 NVIDIA Omniverse 创建海洋系统的数字孪生,实现实时模拟和预测建模。这些工具促进了科学家、政策制定者和教育工作者之间的协作,以改善海洋资源管理、推动保护工作,并增强气候恢复能力。 线下海报 逼真的 3D 数字孪生增强对大堡礁的理解 AI 驱动的 3D 数字孪生正在推进研究人员建模和监测珊瑚礁生态系统的方式。借助 Reef-NeRF 和 Reef-3DGS,科学家可以创建高度精细的重建模型,以追踪珊瑚健康状况、测量结构变化并评估气候变化的影响。这些工具为环保人士和政策制定者提供关键数据,以便制定珊瑚礁恢复策略并改进长期保护工作 Mangrove Simulation 预测碳封存解决方案 红树林是碳捕获和气候减缓的关键解决方案,但有效的恢复需要精确的监控和管理。ID Water Co.,Ltd.正在使用由 AI 提供动力支持的喷洒自动化和 GPU 驱动的碳汇建模来改进红树林再造工作。这些模型可提高存活率、优化碳封存并解决验证难题,从而提高大规模修复的可行性和成效。 借助 AI 和无人机革新南极植物监测 AI 赋能的无人机和高光谱成像技术可实现对南极植被的高精度绘图。借助 NVIDIA GPU,研究人员可以以超过 99%的准确率检测和,从而对气候驱动的生态系统变化提供关键见解,同时减少在这个脆弱的生态系统中进行侵入性实地调查的需求。 在 NVIDIA GTC 2025 大会上,加入由开发者、科学家、业务领袖和创新者组成的全球社区,了解 AI 如何为我们面临的复杂挑战提供解决方案。 从 NVIDIA 首席执行官 Jensen Huang 不容错过的主题演讲 ,到 900 多场会议、300 多场展览、实操技术培训和独家交流活动,GTC 让您亲身体验 AI 对现实世界的影响。 会议目录 现已开放,请立即开始构建您的议程。
https://developer.nvidia.com/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/
Automating GPU Kernel Generation with DeepSeek-R1 and Inference Time Scaling
As AI models extend their capabilities to solve more sophisticated challenges, a new scaling law known as test-time scaling or inference-time scaling is emerging. Also known as AI reasoning or long thinking, this technique improves model performance by allocating additional computational resources during inference to evaluate multiple possible outcomes and then select the best one. This enables AI to strategize and systematically solve complex problems in a similar fashion to how humans dissect complex problems and solve them individually to arrive at a final solution. In this post, we talk about an experiment done by NVIDIA engineers who used one of the newest open-source models, the DeepSeek-R1 model, together with additional computing power during inference to solve a complex problem. The experiment was to automatically generate GPU attention kernels that were numerically correct and optimized for different flavors of attention without any explicit programming. In some cases, the results turned out to be better than optimized kernels developed by skilled engineers. The need for optimized attention kernels and associated challenges Attention is a key concept that revolutionized the development of large language models (LLMs). It’s a powerful mechanism that enables AI models to focus selectively on the most relevant parts of the input when performing tasks. By focusing on important information, the attention operation helps models make better predictions and find hidden patterns in the data. The computational complexity of the attention operation grows quadratically with the input sequence length. This motivates the need for an optimized lower-level implementation (that is, a GPU kernel), both to prevent runtime errors that arise from naive implementations (for example, out-of-memory errors) and for computational efficiency. There are multiple variants of attention (causal, relative positional embeddings, ALiBi, and so on), and engineers often must use a combination of these variants for a given task. Multimodal models (for example, vision transformers) introduce an additional layer of challenges, as they require specialized attention mechanisms (such as Spatial Neighborhood Attention) to maintain the spatio-temporal information often encountered in computer vision, video generation models, and so on. Figure 1. Neighborhood attention on 2D inputs Creating an optimized GPU kernel for attention takes a lot of skill and time, even for experienced software engineers. Recent LLMs like DeepSeek-R1 have shown a lot of promise in code generation tasks, but they still face challenges when creating optimized code on the first try. This makes it necessary to use other strategies at inference time to generate optimized code. The following prompt is sample user input for a relative positional embeddings attention kernel. Please write a GPU attention kernel to support relative position encodings. Implement the relative positional encoding on the fly within the kernel. The complete code should be returned, including the necessary modifications. Use the following function to compute the relative positional encoding: def relative_positional(score, b, h, q_idx, kv_idx):     return score + (q_idx - kv_idx) When implementing the kernel, keep in mind that a constant scaling factor 1.44269504 should be applied to the relative positional encoding due to qk_scale = sm_scale * 1.44269504.
The PyTorch reference does not need to scale the relative positional encoding, but in the GPU kernel, use: qk = qk * qk_scale + rel_pos * 1.44269504 Please provide the complete updated kernel code that incorporates these changes, ensuring that the relative positional encoding is applied efficiently within the kernel operations. LLMs can occasionally produce hallucinated code or mix syntax from different languages or frameworks, causing immediate code errors or inefficiencies. Computing the optimal GPU thread mapping is also a non-trivial and challenging task, often requiring iterative refinement to achieve a correct and efficient kernel. Inference-time scaling for generating optimized GPU kernels To get the best results with optimized attention kernels, NVIDIA engineers created a new workflow that includes a special verifier along with the DeepSeek-R1 model during inference in a closed-loop fashion for a predetermined duration. Figure 2. Inference-time scaling with DeepSeek-R1 on the NVIDIA Hopper platform The workflow is first initialized by a manual prompt, and the DeepSeek-R1 model generates the GPU code (that is, the kernel) in the first pass. The verifier runs on an NVIDIA H100 GPU. It analyzes the generated kernel and creates new prompts that are provided as input to the DeepSeek-R1 model. This closed-loop approach improves the code generation process by guiding it in a different way each time. The team found that letting this process continue for 15 minutes resulted in an improved attention kernel. Figure 3. Performance of automatically generated optimized attention kernels with flex attention This workflow produced numerically correct kernels for 100% of Level-1 problems and 96% of Level-2 problems, as tested by Stanford’s KernelBench benchmark. The Level-1 solving rate in KernelBench refers to the numerical-correctness metric used to evaluate the ability of LLMs to generate efficient GPU kernels for specific computational tasks. This test is part of a series of challenges to test the latest LLMs’ abilities in GPU programming. Figure 4 shows how the inference-time budget affects the agent’s solving rate. Allocating more than 10 minutes per problem in the Level-1 category enables the workflow to produce numerically correct code for most of the 100 problems. Figure 4. Inference-time scaling results in optimized GPU kernels Optimized GPU kernels on DeepSeek-R1 These results show how you can use the latest DeepSeek-R1 model to generate better GPU kernels by using more computing power during inference. This is still a new research area with early results on a promising approach that automatically generates effective attention kernels. While we are off to a good start, more work is needed to generate better results consistently for a wider variety of problems. We’re excited about the recent developments in DeepSeek-R1 and its potential. For more information or to get started, see the DeepSeek-R1 NIM microservice, now available on build.nvidia.com.
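The closed-loop workflow is easy to picture as a small driver loop. The sketch below is a minimal illustration rather than NVIDIA's implementation: generate_kernel, verify_kernel, and build_feedback_prompt are hypothetical callables standing in for the DeepSeek-R1 endpoint and the verifier running on the H100 GPU.

import time

TIME_BUDGET_S = 15 * 60  # run the generate-verify loop for 15 minutes

def inference_time_scaling(initial_prompt, generate_kernel, verify_kernel, build_feedback_prompt):
    """Closed-loop kernel generation: keep the best verified kernel found
    within the time budget. All three callables are hypothetical stand-ins
    for the DeepSeek-R1 model and the GPU-side verifier."""
    prompt = initial_prompt
    best_kernel, best_score = None, float("-inf")
    deadline = time.monotonic() + TIME_BUDGET_S
    while time.monotonic() < deadline:
        kernel_src = generate_kernel(prompt)   # DeepSeek-R1 pass
        report = verify_kernel(kernel_src)     # compile, run, check numerics, time it
        if report.is_correct and report.score > best_score:
            best_kernel, best_score = kernel_src, report.score
        # Feed the verifier's findings back into the next prompt
        prompt = build_feedback_prompt(initial_prompt, kernel_src, report)
    return best_kernel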
https://developer.nvidia.com/zh-cn/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/
使用 DeepSeek-R1 和推理时间缩放实现自动化 GPU 内核生成
随着 AI 模型扩展其功能以解决更复杂的挑战,一种称为“ 测试时扩展 ”或“ 推理时扩展 ”的新扩展法则正在出现。该技术也称为 AI 推理 或 长时思考 技术,通过在推理过程中分配额外的计算资源来评估多种可能的结果,然后选择最佳结果 (neural network),从而提高模型性能。这使得 AI 能够以类似于人类剖析复杂问题并单独解决这些问题以达成最终解决方案的方式,制定战略并系统化地解决复杂问题。 在本文中,我们将介绍 NVIDIA 工程师完成的一项实验,他们在推理过程中使用最新的开源模型之一 DeepSeek-R1 模型以及额外的计算能力来解决复杂的问题。该实验旨在自动生成 GPU 注意力内核,这些内核在数值上是正确的,并针对不同的注意力类型进行了优化,而无需任何显式编程。 事实证明,在某些情况下,最终结果优于由技术精湛的工程师开发的优化内核。 优化注意力内核的需求和相关挑战 注意力是一个关键概念,彻底改变了大语言模型(LLM)的发展。这是一种功能强大的机制,可让 AI 模型在执行任务时,有选择地专注于与输入内容最相关的部分。通过聚焦于重要信息,注意力运算可帮助模型做出更好的预测,并找到数据中隐藏的模式。 相对于输入序列长度,注意力运算的计算复杂性呈正交增长。这促使我们需要开发经过优化的低级实现 (即 GPU 内核),以防止简单实现产生的运行时错误 (例如内存不足的错误),并提高计算效率。 注意力有多种变体 (因果关系、相对位置嵌入、不在场证明等),工程师通常必须将这些变体的组合用于给定任务。 多模态模型 (例如,vision transformers) 带来了额外的一层挑战,因为它们需要专门的注意力机制 (Spatial Neighborhood Attention) 来维护计算机视觉、视频生成模型等领域中经常遇到的时空信息。 图 1. 邻域注意力在 2D 输入上的应用 创建经优化的 GPU 内核以供关注需要大量技能和时间,即使对于经验丰富的软件工程师而言也是如此。 最近的 LLMs(如 DeepSeek-R1)在代码生成任务方面表现出了很大的前景,但在第一次尝试创建优化代码时,它们仍然面临挑战。这使得有必要在推理时使用其他策略来生成优化的代码。 以下提示是用户输入相对位置嵌入注意力核函数的示例。 Please write a GPU attention kernel to support relative position encodings. Implement the relative positional encoding on the fly within the kernel. The complete code should be returned, including the necessary modifications. Use the following function to compute the relative positional encoding: def relative_positional(score, b, h, q_idx, kv_idx): return score + (q_idx - kv_idx) When implementing the kernel, keep in mind that a constant scaling factor 1.44269504 should be applied to the relative positional encoding due to qk_scale = sm_scale * 1.44269504. The PyTorch reference does not need to scale the relative positional encoding, but in the GPU kernel, use: qk = qk * qk_scale + rel_pos * 1.44269504 Please provide the complete updated kernel code that incorporates these changes, ensuring that the relative positional encoding is applied efficiently within the kernel operations. LLM 偶尔会产生来自不同语言或框架的幻影代码或混合语法,从而立即导致代码错误或效率低下。计算最佳 GPU 线程映射也并非易事,也是一项具有挑战性的任务,通常需要进行迭代优化才能实现正确高效的内核。 用于生成经过优化的 GPU 内核的推理时间扩展 为了通过优化的注意力内核获得最佳结果,NVIDIA 工程师创建了一个新的工作流程,其中包括一个特殊的验证器以及 DeepSeek-R1 模型,在预先设定的时间内以闭环方式进行推理。 图 2、在 NVIDIA Hopper 平台上使用 DeepSeek-R1 实现推理时间扩展 该工作流程首先通过手动提示进行初始化,然后 DeepSeek-R1 模型会在首次通道中生成 GPU 代码(即核函数)。该验证器在 NVIDIA H100 GPU 上运行。它会分析生成的核函数并创建新的提示,这些提示以 input 的形式提供给 DeepSeek-R1 模型。 这种闭环方法每次都以不同的方式指导代码生成过程,从而改进代码生成过程。该团队发现,让此过程持续 15 分钟可以改进注意力核函数。 图 3、具有 Flex Attention 的自动生成优化注意力内核的性能 此工作流程为 100%的 1 级问题和 96%的 2 级问题生成了数值正确的内核,测试对象为 斯坦福大学的 KernelBench 基准测试。* KernelBench 中的 1 级求解率是指用于评估 LLM 为特定计算任务生成高效 GPU 内核的能力的数字正确指标。本次测试属于一系列挑战,旨在测试最新 LLM 在 GPU 编程中的能力。 图 4 显示了推理时间预算如何影响智能体的求解率。在 Level-1 类别中为每个问题分配超过 10 分钟的时间,使工作流程能够为 100 个问题中的大多数生成正确的数字代码。 图 4、在优化的 GPU 内核中实现推理 – 时间扩展 DeepSeek-R1 上经过优化的 GPU 内核 这些结果展示了如何使用最新的 DeepSeek-R1 模型,通过在推理期间使用更强大的计算能力来提供更好的 GPU 内核。这仍然是一个新的研究领域,在自动生成有效注意力内核的前景良好的方法方面取得了早期成果。 虽然我们已经有了一个良好的开端,但我们需要做更多的工作,以便为更广泛的问题持续提供更好的结果。我们对 DeepSeek-R1 的最新进展及其潜力感到兴奋。 如需了解更多信息或入门,请参阅 DeepSeek-R1 NIM 微服务 (现已在 build.nvidia.com 上提供)。
https://developer.nvidia.com/blog/ai-foundation-model-enhances-cancer-diagnosis-and-tailors-treatment/
AI Foundation Model Enhances Cancer Diagnosis and Tailors Treatment
A new study and AI model from researchers at Stanford University are streamlining cancer diagnostics, treatment planning, and prognosis prediction. Named MUSK (Multimodal transformer with Unified maSKed modeling), the research aims to advance precision oncology, tailoring treatment plans to each patient based on their unique medical data. “Multimodal foundation models are a new frontier in medical AI research,” said Ruijiang Li, an associate professor of radiation oncology and study senior author. “Recently, vision–language foundation models have been developed for medicine, particularly in the field of pathology. However, existing studies use off-the-shelf foundation models that require paired image–text data for pretraining. Despite extensive efforts that led to the curation of 1M pathology image–text pairs, it’s still insufficient to fully capture the diversity of the entire disease spectrum.” Oncologists rely on many data sources when considering a patient’s condition and planning optimal treatments. However, integrating and interpreting complex medical data remains difficult for doctors and AI models. The study, recently published in Nature, highlights how MUSK could help doctors make more accurate and informed decisions while also solving this long-standing challenge in medical AI. Using deep learning, MUSK processes clinical text data (such as doctors’ notes) and pathology images (like histology slides) to identify patterns that may not be immediately obvious to doctors, leading to better clinical insights. To do so, it uses a two-step multimodal transformer model. First, it learns from large amounts of unpaired data, pulling useful features from the text and images. Then it finetunes its understanding of the data by linking paired image-text data, which helps it recognize different types of cancer, predict biomarkers, and suggest effective treatment options. The researchers pretrained the AI model on one of the biggest datasets in the field, using 50M pathology images from 11,577 patients with 33 tumor types and 1B pathology-related text data. According to Jinxi Xiang, study lead author and postdoctoral scholar in radiation physics, the pretraining was conducted over 10 days using 64 NVIDIA V100 Tensor Core GPUs across eight nodes, enabling MUSK to process vast amounts of pathology images and clinical text efficiently. A secondary pretraining phase and ablation studies used NVIDIA A100 80GB Tensor Core GPUs. The researchers also used NVIDIA RTX A6000 GPUs for evaluating downstream tasks. The framework was accelerated with NVIDIA CUDA and NVIDIA cuDNN libraries for optimized performance. When tested on 23 pathology benchmarks, MUSK outperformed existing AI models in several key areas. It excelled at matching pathology images with corresponding medical text, making it more effective in gathering relevant patient information. It also interpreted pathology-related questions, such as identifying a cancerous area or predicting biomarker presence, with 73% accuracy. Figure 1. An example of the visual question-answering MUSK can perform It improved detection and classification for cancer subtypes including breast, lung, and colorectal cancer by up to 10%, which could help with early diagnosis and treatment planning. It also detected breast cancer biomarkers with an AUC (a measure of model accuracy) of 83%.
Additionally, MUSK reliably predicted cancer survival outcomes 75% of the time, and predicted which lung and gastro-esophageal cancers would respond to immunotherapy with 77% accuracy. This is a significant improvement over standard clinical biomarkers, which have an accuracy of only 60-65%. “One striking finding is that AI models that integrate multi-modal data consistently outperform those based on imaging or text data alone, highlighting the power of a multimodal approach,” Li said. “The true value of MUSK lies in its ability to leverage large-scale unpaired image and text data for pretraining, which is a substantial increase over existing models that require paired data.” A core strength of the model is that it can adapt across different clinical settings with little additional training. This could improve efficiency in oncology workflows and help doctors diagnose cancer faster while tailoring treatments for better patient outcomes. The team’s future work will focus on validating the model in multi-institution cohorts of patients from diverse populations and for high-stakes applications such as treatment decision-making. The researchers note that prospective validation in clinical trials will be required for regulatory approval. “We are also working on an extension of the MUSK approach from digital pathology to other types of data such as radiology images and genomic data,” said Li. The researchers’ work, including installation instructions, model weights, evaluation code, and sample data, is available on GitHub.
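The AUC figures quoted above measure how well a model ranks positive cases above negative ones. Below is a minimal illustration of the metric using scikit-learn; the labels and scores are made-up placeholders, not MUSK outputs.

from sklearn.metrics import roc_auc_score

# 1 = biomarker present, 0 = absent (toy data for illustration only)
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
# model-predicted probability of the biomarker being present
y_score = [0.1, 0.6, 0.8, 0.65, 0.3, 0.9, 0.2, 0.55]

# AUC is the probability that a randomly chosen positive case is ranked
# above a randomly chosen negative case; here it is about 0.94.
print(roc_auc_score(y_true, y_score))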
https://developer.nvidia.com/zh-cn/blog/ai-foundation-model-enhances-cancer-diagnosis-and-tailors-treatment/
AI 基础模型增强癌症诊断并实现个性化治疗
斯坦福大学研究人员的一项新研究和 AI 模型正在简化癌症诊断、治疗规划和预后预测。这项名为 MUSK (Multimodal transformer with Unified maSKed modeling) 的研究旨在提高精准肿瘤学,根据每位患者独特的医疗数据为其定制治疗计划。 “多模态基础模型是医学 AI 研究的新领域,”放射肿瘤学副教授兼研究高级作者 Ruijiang LI 说。“最近,我们为医学领域开发了视觉语言基础模型,尤其是在病理学领域。但是,现有研究使用的现有基础模型需要配对的图像 – 文本数据进行预训练。尽管我们付出了大量努力,最终打造出 1M 病理图像文本对,但它仍然不足以完全捕捉整个疾病谱系的多样性。” 在考虑患者状况和规划最佳治疗方案时,肿瘤科医生依靠多种数据源。然而,医生和 AI 模型仍然难以集成和解释复杂的医疗数据。该研究最近发表在 Nature 杂志上,重点介绍了 MUSK 如何帮助医生做出更准确、更明智的决定,同时解决医学 AI 领域的长期挑战。 借助深度学习,MUSK 处理临床文本数据(如医生的笔记)和病理学图像(如组织学幻灯片),以识别医生可能无法立即发现的模式,从而获得更好的临床见解。 为此,它使用了两步多模态 transformer 模型。首先,它从大量未配对的数据中学习,从有用的文本和图像中提取特征。然后,它通过关联配对的图像-文本数据来微调对数据的理解,这有助于识别不同类型的癌症、预测生物标志物,并提出有效的治疗方案。 研究人员基于该领域最大的数据集之一预训练了 AI 模型,使用了来自 11,577 名患者的 50M 病理学图像,其中有 33 种肿瘤类型和 1B 病理学相关文本数据。 据辐射物理学研究主要作者兼博士后学者 Jinxi Xiang 称,预训练在 8 个节点上使用 64 个 NVIDIA V100 Tensor Core GPUs 进行了 10 天以上,使 MUSK 能够高效处理大量病理学图像和临床文本。二级预训练阶段和消融研究使用 NVIDIA A100 80GB Tensor Core GPUs 。研究人员还使用 NVIDIA RTX A6000 GPUs 评估下游任务。该框架通过 NVIDIA CUDA 和 NVIDIA cuDNN 库进行加速,以优化性能。 在 23 项病理学基准测试中,MUSK 在多个关键领域的表现优于现有 AI 模型。它擅长将病理学图像与相关的医学文本进行匹配,从而更有效地收集相关的患者信息。它还能解读与病理学相关的问题,例如识别癌变区域或预测生物标志物的存在,准确率高达 73%。 图 1. 例如,视觉问答 MUSK 可以执行 它将乳腺癌、肺癌和结直肠癌等癌症亚型的检测和分类能力提高了 10%,这有助于早期诊断和治疗规划。它还检测到乳腺癌生物标志物,AUC(用于衡量模型准确性的指标)为 83%。 此外,MUSK 有 75%的时间能够可靠预测癌症生存期结果,以及哪些肺癌和胃食道癌会对免疫治疗做出反应,准确率为 77%。与准确率仅为 60-65%的标准临床生物标志物相比,这是一个显著的改进。 “一个惊人的发现是,集成多模态数据的 AI 模型的性能始终优于仅基于图像或文本数据的 AI 模型,这凸显了多模态方法的强大功能,”Li 说。“MUSK 的真正价值在于它能够利用大规模的未配对图像和文本数据进行预训练,与需要配对数据的现有模型相比,这是一个巨大的提升。” 这项研究的一个核心优势是,它可以在几乎没有训练的情况下适应不同的临床环境。这可以提高肿瘤学工作流程的效率,并帮助医生更快地诊断癌症,同时定制治疗方案以改善患者的治疗效果。 他们未来的工作重点将是在来自不同人群的多机构患者群体中验证该模型,以及用于治疗决策等高风险应用。研究人员指出,临床试验中的前瞻性验证需要获得监管机构的批准。 “我们还致力于将 MUSK 方法扩展到数字病理学,包括放射学图像和基因组数据等其他类型的数据,”Li 说。 研究人员的工作(包括安装说明、模型权重、评估代码和样本数据) 可在 GitHub 上获取。
https://developer.nvidia.com/blog/cuda-toolkit-12-8-delivers-nvidia-blackwell-support/
CUDA Toolkit Now Available for NVIDIA Blackwell
The latest release of the CUDA Toolkit , version 12.8, continues to push accelerated computing performance in data sciences, AI, scientific computing, and computer graphics and simulation, using the latest NVIDIA CPUs and GPUs. This post highlights some of the new features and enhancements included with this release: NVIDIA Blackwell architecture support CUDA Graphs conditional nodes enhancements Blackwell CUTLASS kernels for large language models (LLMs) NVIDIA Nsight Developer Tools updates Math libraries updates cudaStreamGetDevice Compiler updates Accelerated Python updates Feature-complete architectures NVIDIA Blackwell architecture support CUDA Toolkit 12.8 is the first version of the Toolkit to support the NVIDIA Blackwell architecture across the entire suite of Developer Tools including performance tools and profilers, libraries, and compilers. Built with 208 billion transistors—more than 2.5x the number of transistors in NVIDIA Hopper GPUs—Blackwell is the largest GPU ever built. Key Blackwell capabilities supported include: Second-generation Transformer Engine through custom Tensor Core technology: Accelerates inference and training for LLMs and mixture-of-experts (MoE) models. Decompression: Accelerates performance on data analytics and data science pipelines using the latest compression formats such as LZ4, Snappy, and Deflate. Network interconnect: NVLink and NVLink Switches accelerate inter-GPU communications performance for trillion-parameter and multitrillion-parameter AI models. To learn more about the leading innovations in Blackwell, see the NVIDIA Blackwell Architecture Technical Brief . 2x faster CUDA Graphs with runtime kernel selection for lower latency inference With Blackwell, CUDA Graphs APIs continue to be the most efficient way to launch repeated invocations of sequences of GPU operations. CUDA Toolkit 12.8 introduces more enhancements to CUDA Graphs, including additional conditional node types. In many applications, having dynamic control over the execution of work in CUDA Graphs can increase performance and flexibility of graph launches. For example, an algorithm that involves iterating over a series of operations many times until the result converges below a certain threshold can now run wholly on the GPU without needing CPU control management, reducing overhead by as much as 2x. CUDA Toolkit 12.8 improves APIs for runtime control of conditional graph nodes. Conditional nodes contain segments of a graph that can execute, or be skipped, based on a condition to evaluate as the graph is running. Such segments can be evaluated once (an IF node), or repeatedly in a loop (a WHILE node). CUDA 12.8 adds support for two new types of conditional graph nodes: IF/ELSE combined nodes and SWITCH nodes. With the Blackwell architecture, we’ve improved LLM performance to benefit all reasoning models, including DeepSeek-R1. CUDA Graphs enhanced SWITCH and IF/ELSE support delivers 2x more performance for runtime kernel selection versus going back to the CPU for launch decision-making. Training : By reducing CPU dependency for kernel selection, training workloads sustain even more GPU Tensor Core throughput, resulting in higher Model FLOPs Utilization (MFU). This improves performance using the same GPU infrastructure, reducing time and cost to train. Inference : For next-generation reasoning models that make use of test-time compute, a high token generation rate is critical as each inference request can generate a vast number of tokens per query. 
CUDA 12.8 new stream API enables fewer calls back to the host CPU, reducing the time between one kernel finishing and the next one starting, increasing token generation rate. This results in more tokens generated in fixed time budget, helping models reason more and increasing intelligence. To learn more, see Dynamic Control Flow in CUDA Graphs with Conditional Nodes . Blackwell CUTLASS kernels for LLMs CUTLASS , since its 2017 debut, has been instrumental for researchers and developers implementing high-performance CUDA kernels on NVIDIA GPUs. By providing developers with comprehensive tools to design custom operations, such as GEMMs and Convolutions, CUTLASS has been critical for the development of hardware-aware algorithms, powering breakthroughs like FlashAttention that helped spark modern AI. With the release of CUTLASS 3.8—which supports CUDA 12.8—NVIDIA is extending support to the Blackwell architecture, enabling developers to harness next-generation Tensor Cores with support for all new data types. This includes new narrow precision MX formats and the NVIDIA-developed FP4 format, which increase compute throughput. Figure 1 shows CUTLASS can achieve up to 98% relative peak performance for Tensor Core operations. Figure 1. Blackwell CUTLASS GEMM performance relative to expected peak, delivering up to 98% of Blackwell peak performance For DeepSeek-V3 and DeepSeek-R1, grouped GEMMs make up a large portion of the MoE compute required during inference. These operations enable different matrix sizes, scaling factors, and fusions to be grouped and parallelized in a single persistent-kernel launch. With CUTLASS, on Blackwell with FP4, Grouped GEMM kernel performance increases by up to 5x over H200 with FP16. Figure 2. CUTLASS Grouped GEMM performance for MoE inference used in DeepSeek delivers up to 5x more performance on Blackwell compared to Hopper at various precisions NVIDIA Nsight Developer Tools NVIDIA Nsight Compute 2025.1 is the first official release with support for the Blackwell architecture. Updates include visualization of Blackwell Hardware Tensor Memory in the memory chart as well as Tensor Core performance data. Figure 3. Tensor Memory traffic in the Nsight Compute memory chart It also comes with several improvements to the increasingly popular range profiling feature. Users can now collect source-level metrics, including Instructions Executed and memory access information, inside profiled ranges. This update also enables Guided Analysis rules evaluation for ranges. This built-in expertise for identifying performance issues is a key component of NVIDIA Nsight Compute. This release reports kernel stack sizes and adds custom tooltips to help users understand their workload performance. This release of Compute Sanitizer, an automatic correctness checking tool, adds support for Python call stacks to accurately locate kernel correctness issues when kernels are launched through Python applications. Additionally, new Tensor Core MMA guardrails for Blackwell can report errors related to Tensor Core programming. These are enabled by adding the PTXAS flag -g-tmem-access-check when compiling programs. Examples of common errors include access to unallocated tensor memory, invalid addresses, and invalid allocator usage. Math libraries updates With CUDA Toolkit 12.8, we have several new library enhancements that leverage the new Blackwell architecture and help accelerate applications in AI, data sciences, graphics and simulation, and high-performance scientific computing. 
New features cuBLAS APIs were extended to support microscaled 4-bit and 8-bit floating point mixed-precision tensor core accelerated matrix multiplication for compute capability 10.0 (Blackwell) and higher. Introduced initial support for CUDA in Graphics (CIG) on Windows x64 for NVIDIA Ampere GPU architecture and Blackwell GeForce-class GPUs. CIG contexts are now autodetected, and cuBLAS selects kernels that comply with CIG shared memory usage limits. cuSOLVER now supports zsytrf/zsytrs, a complex symmetric direct solver without pivoting. nvJPEG now provides support for the Tegra architecture. NPP now provides support for the DRIVE Thor architecture. cudaStreamGetDevice Applications often use CUDA streams to provide ordered access to GPU resources. An instance of a CUDA stream is associated with a fixed CUDA device. In applications that address multiple devices, there are scenarios where getting a handle to the underlying device for a given stream is useful to tailor the application to device characteristics. Previously, the CUDA API did not provide a mechanism for retrieving the device associated with a CUDA stream; developers had to track this themselves. The addition of the cudaStreamGetDevice CUDA API to retrieve the device associated with a CUDA stream can simplify applications. Compiler updates New compiler updates include the following: The CUDA Toolkit 12.8 release introduces support for GCC 14 as a host-side compiler. The default high-level optimizer is now based on LLVM 18 for the Blackwell architecture. nvdisasm now supports emitting JSON formatted SASS disassembly. Accelerated Python updates The following two beta releases are now available for Python users: CUDA Python has released an early prototype of a new idiomatic object model called cuda.core and moved the CUDA binding to a submodule, cuda.bindings . For more information, see the documentation in the NVIDIA/cuda-python GitHub repo. CUDA Core Compute Libraries (CCCL) has released early prototypes of Python for parallel and cooperative algorithms, enabling you to use thread-level parallelism with user-defined types and functions from pure Python code. Learn more about CCCL . Additionally, the CuPy team is releasing a new version with Blackwell patches validated for general availability. Feature-complete architectures With the CUDA Toolkit 12.8 release, we now consider the Maxwell, Pascal, and Volta architectures to be feature-complete and support for them will be frozen in an upcoming release. This means that, in future releases, no new features will be added to the driver to enable new CUDA Toolkit functionality supporting Maxwell, Pascal, and Volta architectures. End users will be able to run existing software stacks and applications on Maxwell, Pascal, and Volta architectures using the supported upcoming LTS driver branch through its lifecycle. Starting with release 12.8, developers running offline compilers targeting these architectures will output a warning message when using nvcc , nvrtc , and nvjitlink . In the next major CUDA Toolkit release, offline compilation support for the Maxwell, Pascal, and Volta architectures will be removed from the compilers. The upcoming LTS driver for production application execution and JIT compilation of Maxwell, Pascal, and Volta applications will be supported for the normal 3-year LTS support window. For more details, read the CUDA Toolkit 12.8 Release Notes . Summary The CUDA Toolkit 12.8 release provides full feature support for the NVIDIA Blackwell architecture. 
This release continues to provide enhanced support for the newest NVIDIA GPUs, accelerated libraries, compilers, and Developer Tools, whether you’re developing applications in C++ or Python. Want more information? Check out the CUDA documentation , browse the latest NVIDIA Deep Learning Institute (DLI) offerings, and visit the NGC catalog . Ask questions and join the conversation in the CUDA Developer Forums. Acknowledgments Thanks to the following NVIDIA contributors: Stephen Jones, Jackson Marusarz, Becca Zandstein, Andy Terrel, Ashraf Eassa, Matt Nicely, and Mridula Prakash.
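As a sketch of how the new stream-to-device query could be used from Python, the snippet below goes through the cuda-python runtime bindings mentioned in the Accelerated Python section. Treat it as an assumption-laden illustration: the binding names mirror the C runtime API, and whether the installed cuda.bindings version exposes cudaStreamGetDevice depends on it being generated against CUDA 12.8.

# Minimal sketch, assuming the CUDA 12.8 cuda-python bindings expose
# cudaStreamGetDevice; verify against your installed cuda.bindings version.
from cuda.bindings import runtime as cudart

def device_of(stream) -> int:
    # Low-level bindings return a tuple: (error code, result)
    err, device = cudart.cudaStreamGetDevice(stream)
    if err != cudart.cudaError_t.cudaSuccess:
        raise RuntimeError(f"cudaStreamGetDevice failed: {err}")
    return device

err, stream = cudart.cudaStreamCreate()
print("stream lives on device", device_of(stream))
cudart.cudaStreamDestroy(stream)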
https://developer.nvidia.com/zh-cn/blog/cuda-toolkit-12-8-delivers-nvidia-blackwell-support/
CUDA 工具包现已支持 NVIDIA Blackwell 架构
CUDA 工具包 的最新版本 (版本 12.8) 使用最新的 NVIDIA CPU 和 GPU,持续提升数据科学、AI、科学计算以及计算机图形和模拟领域的加速计算性能。本文重点介绍了此版本包含的一些新功能和增强功能: NVIDIA Blackwell 架构支持 CUDA 图形处理条件节点增强功能 用于大语言模型(LLMs)的 Blackwell CUTLASS 内核 NVIDIA Nsight 开发者工具更新 数学库更新 cudaStreamGetDevice 编译器更新 加速 Python 更新 功能齐全的架构 NVIDIA Blackwell 架构支持 CUDA 工具包 12.8 是该工具包的第一个版本,在整个开发者工具套件 (包括性能工具和分析器、库和编译器) 中支持 NVIDIA Blackwell 架构。Blackwell 由 208 亿个晶体管构建而成,是 NVIDIA Hopper GPU 中晶体管数量的 2.5 倍以上,是迄今为止最大的 GPU。 Blackwell 支持的主要功能包括:Key Blackwell 采用自定义 Tensor Core 技术的第二代 Transformer 引擎:加速 LLM 和 mixture-of-experts (MoE) 模型的推理和训练。 解压缩: 使用 LZ4、Snappy 和 Deflate 等最新压缩格式,加速数据分析和数据科学工作流的性能。 网络互连:NVLink 和 NVLink Switches 加速万亿参数和数万亿参数 AI 模型的 GPU 间通信性能。 如需详细了解 NVIDIA Blackwell 的领先创新,请参阅 NVIDIA Blackwell 架构技术概览。 使用运行时核选择将 CUDA Graphs 速度提升 2 倍,从而降低延迟推理 借助 Blackwell,CUDA Graphs APIs 仍然是启动 GPU 操作序列重复调用的最高效方式。CUDA Toolkit 12.8 为 CUDA Graphs 引入了更多增强功能,包括其他 条件节点类型 。 在许多应用程序中,对 CUDA Graphs 中工作的执行进行动态控制可以提高图形启动的性能和灵活性。例如,一种算法需要多次迭代一系列运算,直到结果收到某个值以下,现在这种算法无需进行 CPU 控制管理即可完全在 GPU 上运行,从而将开销降低高达 2 倍。CUDA Toolkit 12.8 改进了用于条件图形节点运行时控制的 API。 条件节点包含图形的片段,这些片段可以在图形运行时根据要评估的条件执行或跳过。此类片段可以评估一次 (IF 节点),也可以在循环中重复评估 (WHILE 节点)。CUDA 12.8 增加了对两种新型条件图形节点的支持:IF/ELSE 组合节点和 SWITCH 节点。 借助 Blackwell 架构,我们改进了 LLM 性能,使包括 DeepSeek-R1 在内的所有推理模型受益。与返回 CPU 进行启动决策相比,CUDA Graphs 增强的 SWITCH 和 IF/ELSE 支持可将运行时内核选择的性能提高 2 倍。 训练:通过减少内核选择对 CPU 的依赖,训练工作负载可维持更多的 GPU Tensor Core 吞吐量,从而提高模型 FLOPS 利用率(MFU)。这提高了使用相同的 GPU 基础架构的性能,减少了训练时间和成本。 推理:对于使用测试时计算的新一代推理模型 ,高令牌生成速率至关重要,因为每个推理请求都可以在每个查询中生成大量令牌。CUDA 12.8 新流 API 可减少对主机 CPU 的调用,从而缩短一次内核处理与下一次启动之间的时间,从而提高令牌生成率。这会在固定时间预算内生成更多 token,帮助模型推理更多并提高智能。 如需了解详情, 请参阅使用条件节点的 CUDA 图形中的动态控制流。 适用于 LLMs 的 Blackwell CUTLASS 内核 自 2017 年首次推出以来, CUTLASS 一直在推动研究人员和开发者在 NVIDIA GPUs 上实施高性能 CUDA 核函数。通过为开发者提供全面的工具来设计自定义操作 (例如 GEMMs 和 Convolutions),CUTLASS 在开发硬件感知算法方面发挥了至关重要的作用,推动了 FlashAttention 等帮助激发现代 AI 的突破。 随着支持 CUDA 12.8 的 CUTLASS 3.8 的发布,NVIDIA 将扩展对 Blackwell 架构的支持,使开发者能够利用新一代 Tensor Core 来支持所有新的数据类型。这包括新的窄精度 MX 格式和 NVIDIA 开发的 FP4 格式,可提高计算吞吐量。图 1 显示,对于 Tensor Core 运算,CUTLASS 可实现高达 98% 的相对峰值性能。 图 1. 
Blackwell CUTLASS GEMM 性能相对于预期峰值,可提供高达 98% 的 Blackwell 峰值性能 对于 DeepSeek-V3 和 DeepSeek-R1,分组的 GEMM 在推理期间所需的 MoE 计算中占很大比例。这些运算支持在单个持久性核函数启动中对不同的矩阵大小、缩放系数和融合进行分组和并行化。借助 CUTLASS,在 Blackwell 以 FP4,Grouped GEMM 内核性能增加高达 5 倍,相比使用 FP16 的 H200。 图 2、与 Hopper 相比,DeepSeek 中使用的用于 MoE 推理的 CUTLASS 分组 GEMM 性能在 Blackwell 上在各种精度下的性能提升高达 5 倍 NVIDIA Nsight 开发者工具 NVIDIA Nsight Compute 2025.1 是首个支持 Blackwell 架构的官方版本。更新包括显存图表中 Blackwell 硬件 Tensor 内存的可视化,以及 Tensor Core 性能数据。 图 3、Nsight Compute 内存图中的 Tensor 内存流量 它还对日益流行的范围分析功能进行了多项改进。用户现在可以在已分析的范围内收集源级指标,包括已执行指令和内存访问信息。此更新还启用了针对范围的引导分析规则评估。这种用于识别性能问题的内置专业知识是 NVIDIA Nsight Compute 的关键组件。此版本报告了内核堆栈大小,并添加了自定义工具提示,以帮助用户了解其工作负载性能。 此版本的 Compute Sanitizer 是一款自动正确性检查工具,增加了对 Python 调用堆栈的支持,可在通过 Python 应用启动内核时准确定位内核正确性问题。此外,用于 Blackwell 的新 Tensor Core MMA 护栏可以报告与 Tensor Core 编程相关的错误。在编译程序时,可以通过添加 PTXAS 标志 -g-tmem-access-check 来启用这些功能。常见错误的示例包括访问未分配的 tensor 内存、无效的地址以及使用无效的分配器。 数学库更新 借助 CUDA 工具包 12.8,我们获得了一些新的增强功能库,这些增强功能利用了新的 Blackwell 架构,并有助于加速 AI、数据科学、图形和仿真以及高性能科学计算领域的应用程序。 新功能 cuBLAS API 经过扩展,支持微缩 4 位和 8 位浮点混合精度张量核心加速矩阵乘法,可实现 10.0(Blackwell)及更高版本的计算能力。 为 Windows x64 上的 NVIDIA Ampere GPU 架构和 Blackwell GeForce 级 GPU 引入了对 CUDA in Graphics (CIG) 的初步支持。现在,系统会自动检测 CIG 上下文,并且 cuBLAS 会选择符合 CIG 共享内存使用限制的内核。 cuSOLVER 现在支持 zsytrf/zsytrs,这是一款无需旋转的复杂对称直接求解器。 nvJPEG 现在支持 Tegra 架构。 NPP 现在为 DRIVE Thor 架构提供支持。 cudaStreamGetDevice 应用程序通常使用 CUDA 流提供对 GPU 资源的有序访问。CUDA 流实例与固定的 CUDA 设备相关联。在用于处理多台设备的应用中,在某些情况下,为给定流获取底层设备的句柄有助于根据设备特性定制应用。 以前,CUDA API 没有提供检索与 CUDA 流关联的设备的机制;开发者必须自行追踪。添加 cudaStreamGetDevice CUDA API 以检索与 CUDA 流关联的设备,可以简化应用。 编译器更新 新的编译器更新包括以下内容: CUDA 工具包 12.8 版本引入了对作为主机端编译器的 GCC 14 的支持。 现在,Blackwell 架构的默认高级优化器基于 LLVM 18 nvdisasm 现在支持发射 JSON 格式的 SASS 反汇编。 加速 Python 更新 以下两个测试版现已面向 Python 用户提供: CUDA Python 已发布名为 cuda.core 的新惯用对象模型的早期原型,并将 CUDA 绑定移至子模块 cuda.bindings 。有关更多信息,请参阅 NVIDIA/cuda-python GitHub 存储库中的文档。 CUDA 核心计算库 ( CCCL ) 已发布用于并行和协作算法的早期 Python 原型,使您能够使用线程级并行性以及来自纯 Python 代码的用户定义类型和函数。详细了解 CCCL。 此外,CuPy 团队还将发布新版本,其中的 Blackwell 补丁经过验证,现已全面推出。 功能齐全的架构 在 CUDA 工具包 12.8 版本中,我们现在认为 Maxwell、Pascal 和 Volta 架构功能齐全,并且即将发布的版本将冻结对这些架构的支持。 这意味着,在未来的版本中,不会向驱动添加任何新功能来启用支持 Maxwell、Pascal 和 Volta 架构的新 CUDA 工具包功能。最终用户将能够在其生命周期中使用受支持的即将推出的 LTS 驱动分支,在 Maxwell、Pascal 和 Volta 架构上运行现有的软件堆栈和应用。 从版本 12.8 开始,开发者在运行针对这些架构的离线编译器时,将在使用 nvcc 、 nvrtc 和 nvjitlink 时输出警告消息。 在下一个主要 CUDA 工具包版本中,将从编译器中删除对 Maxwell、Pascal 和 Volta 架构的离线编译支持。即将推出的用于生产应用程序执行的 LTS 驱动以及 Maxwell、Pascal 和 Volta 应用程序的 JIT 编译将在正常的 3 年期 LTS 支持窗口期内获得支持。 如需了解更多详情,请参阅 CUDA Toolkit 12.8 版本说明 。 总结 CUDA 工具包 12.8 版本为 NVIDIA Blackwell 架构提供完整的功能支持。无论您是使用 C++ 还是 Python 开发应用程序,此版本都将继续为最新的 NVIDIA GPU、加速库、编译器和开发者工具提供增强支持。 想要了解更多信息?查看 CUDA 文档 ,浏览最新的 NVIDIA Deep Learning Institute (DLI) 产品 ,并访问 NGC 目录 。在 CUDA Developer Forums 中提出问题并加入对话。 致谢 感谢以下 NVIDIA 贡献者:Stephen Jones、Jackson Marusarz、Becca Zandstein、Andy Terrel、Ashraf Eassa、Matt Nicely 和 Mridula Prakash。
https://developer.nvidia.com/blog/high-performance-remote-io-with-nvidia-kvikio/
High-Performance Remote IO With NVIDIA KvikIO
Workloads processing large amounts of data, especially those running on the cloud, will often use an object storage service (S3, Google Cloud Storage, Azure Blob Storage, etc.) as the data source. Object storage services can store and serve massive amounts of data, but getting the best performance can require tailoring your workload to how remote object stores behave. This post is for RAPIDS users who want to read or write data to object storage as quickly as possible so that IO doesn’t bottleneck your workload. Some of your knowledge about how local file systems behave translates to remote object stores, but they are fundamentally different. Probably the biggest difference between the two, at least for data analysis workloads, is that read and write operations on object storage have higher and more variable latency . Every storage service has their own set of best practices and performance guidelines ( AWS , Azure ). Here, we’ll give some general guidelines that are focused on data analysis workloads. Location Placing your compute nodes near the storage service (ideally, in the same cloud region) will give you the fastest and most reliable network between the machines running your workload and the machines serving the data. And, at the end of the day, the transfer will be limited by the speed of light so minimizing the physical distance doesn’t hurt. File format “Cloud-native” file formats have been developed to work well with object storage. These file formats typically provide fast, easy access to metadata (which includes both high-level information like the column names or data types, and lower-level information like where in the file specific data subsets are located). Apache Parquet , Zarr , and Cloud Optimized GeoTIFF are some examples of cloud-native file formats for various types of data. Because object storage services typically support range requests , clients (like cuDF ) can read the metadata and then download just the data you actually need. For example, cuDF can read just a few columns out of a Parquet file with many columns, or a Zarr client can read a single chunk out of a large n-dimensional array. These reads are done in just a few HTTP requests, and without needing to download a bunch of extraneous data that just gets filtered out. File size Because every read operation requires (at least) one HTTP request, we’d prefer to amortize the overhead from each HTTP request over a reasonably large number of bytes. If you control the data-writing process, you’ll want to ensure that the files are large enough for your downstream processing tasks to get good performance. The optimal value depends on your workload, but somewhere in the dozens to low-hundreds of MBs is common for parquet files (see below for some specific examples). That said, you’ll need to be careful with how file size interacts with the next tool in our kit: concurrency. Concurrency Using concurrency to download multiple blobs (or multiple pieces of a single blob) at the same time is essential to getting good performance out of a remote storage service. Since it’s a remote service, your process is going to spend some time (perhaps a lot of time) waiting around doing nothing. This waiting spans the time between when the HTTP request is sent and the response received. During this time, we wait for the network to carry the request, the storage service to process it and send the response, and the network to carry the (possibly large) response. 
While parts of that request/response cycle scale with the amount of data involved, other parts are just fixed overhead. Object storage services are designed to handle many concurrent requests. We can combine that with the fact that each request spends some time waiting around doing nothing, and make many concurrent requests to raise our overall throughput. In Python, this would typically be done using a thread pool: pool = concurrent.futures.ThreadPoolExecutor() futures = pool.map(request_chunk, chunks) Or with asyncio: tasks = [request_chunk_async(chunk) for chunk in chunks] await asyncio.gather(*tasks) We’re able to have a lot of reads waiting around doing nothing at the same time, which improves our throughput. Because each thread/task is mostly doing nothing, it’s ok to have more threads/tasks than your machine has cores. Given enough concurrent requests, you will eventually saturate your storage service, which has some requests-per-second and bandwidth targets it tries to meet. But those targets are high; you’ll typically need many machines to saturate the storage service and should achieve very high throughput. Libraries Everything above applies to essentially any library doing remote IO from an object storage service. In the RAPIDS context, NVIDIA KvikIO is notable because it automatically chunks large requests into multiple smaller ones and makes those requests concurrently, it can read efficiently into host or device memory (especially if GPU Direct Storage is enabled), and it’s fast. As mentioned in the RAPIDS 24.12 release announcement, KvikIO can achieve impressive throughput when reading from S3. Let’s take a look at some benchmarks to see how it does. Benchmarks When you read a file, KvikIO splits that read into smaller reads of kvikio.defaults.task_size bytes. It makes those read requests in parallel using a thread pool with kvikio.defaults.num_threads workers. These can be controlled using the environment variables KVIKIO_TASK_SIZE and KVIKIO_NTHREADS, or through Python with: with kvikio.defaults.set_num_threads(num_threads), kvikio.defaults.set_task_size(size): ... See Runtime Settings for more. This chart shows the throughput, in megabits per second, of reading a 1 GB blob from S3 to a g4dn EC2 instance in the same region for various sizes of the thread pool (higher is better). Figure 1. From a benchmark reading a 1 GB file from S3 to a g4dn.xlarge EC2 instance, which has a published bandwidth of up to 25 Gbps. This is the throughput of kvikio.RemoteFile.read for various values of kvikio.defaults.num_threads and a task size of 16 MiB. Throughput increases as we add more threads and parallelize the reads, up to a point. Fewer threads (fewer than four) achieve lower throughput and take longer to read the file. More threads (64, 128, 256) achieve higher throughput by parallelizing the requests to the storage service, which serves them in parallel. There are diminishing and even negative returns as we hit the limits of the storage service, network, or other bottlenecks in our system. With remote IO, each thread spends a relatively long time idle waiting for the response, so a higher number of threads (relative to your number of cores) might be appropriate for your workload. We see that the throughput is highest between 64 and 128 threads in this case. As shown in the next figure, the task size also affects the maximum throughput. Figure 2. From a benchmark reading a 1 GB file from S3 to a g4dn.xlarge EC2 instance, which has a published bandwidth of up to 25 Gbps.
This shows a heatmap of the throughput of kvikio.RemoteFile.read. The horizontal axis shows various task sizes, while the vertical axis shows various thread counts. As long as the task size isn’t too small (around or below 4 MiB) or too large (around or above 128 MiB), we get around 10 Gbps of throughput. With too small a task size, the overhead of making many HTTP requests reduces throughput. With too large a task size, we don’t get enough concurrency to maximize throughput. KvikIO achieves higher throughput on this workload when compared with boto3, the AWS SDK for Python, even when boto3 is used in a thread pool to execute requests concurrently. Figure 3. From a benchmark reading a 1 GB file from S3 to a g4dn.xlarge EC2 instance, which has a published bandwidth of up to 25 Gbps. The KvikIO benchmark used 64 threads and a 16 MiB task size. The boto3 benchmark used a ThreadPool to read many 4 MB chunks in parallel, which a parameter search showed to be the fastest chunk size for boto3. As a slightly more realistic workload, though still one focused solely on IO, we compare the performance of reading a batch of 360 Parquet files, each about 128 MB. This was run on an AWS g4dn.12xlarge instance, which has 4 NVIDIA T4 GPUs and 48 vCPUs. Figure 4. From a benchmark reading a Parquet dataset from S3 to a g4dn.12xlarge EC2 instance, which has a published bandwidth of up to 50 Gbps. The dataset had 360 Apache Parquet files of about 128 MB each, for a total of about 46 GB. The Dask cluster had 4 workers. These results use cuDF 25.04, which will include an optimization to read Parquet footers in parallel. With KvikIO enabled, the four Dask worker processes are able to collectively achieve almost 20 Gbps of throughput from S3 to this single node. Conclusion As RAPIDS accelerates other parts of your workload, IO can become a bottleneck. If you’re using object storage and are tired of waiting around for your data to load, try out some of the recommendations from this post. Let us know how things work with KvikIO on GitHub. You can also join over 3,500 members on the RAPIDS Slack community to talk about GPU-accelerated data processing.
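Putting the knobs from these benchmarks together, a read might look like the sketch below. The kvikio.defaults context managers and RemoteFile.read come straight from the text above; the open_s3_url constructor name and the assumption that the object size is known up front are illustrative guesses, so check the KvikIO documentation for the exact remote-file API in your version.

# Sketch only: open_s3_url is an assumed constructor name, and the object
# size is assumed to be known (1 GiB, matching the benchmark above).
import cupy as cp
import kvikio

url = "s3://my-bucket/data/blob.bin"  # hypothetical object
nbytes = 1 << 30                      # assumed known object size

# Match the best-performing settings from the benchmark: 64 threads, 16 MiB tasks
with kvikio.defaults.set_num_threads(64), kvikio.defaults.set_task_size(16 * 1024 * 1024):
    remote_file = kvikio.RemoteFile.open_s3_url(url)  # assumed API, see KvikIO docs
    buf = cp.empty(nbytes, dtype=cp.uint8)            # read straight into GPU memory
    nbytes_read = remote_file.read(buf)               # chunked and parallel under the hood

print(f"read {nbytes_read} bytes into device memory")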
https://developer.nvidia.com/zh-cn/blog/high-performance-remote-io-with-nvidia-kvikio/
借助 NVIDIA KvikIO 实现高性能远程 IO
处理大量数据的工作负载 (尤其是在云端运行的工作负载) 通常会使用对象存储服务 (S3、Google Cloud Storage、Azure Blob Storage 等) 作为数据源。对象存储服务可以存储和提供海量数据,但要想获得最佳性能,可能需要根据远程对象存储的行为方式调整工作负载。本文适用于希望尽快将数据读或写到对象存储,以便 IO 不会限制工作负载的 RAPIDS 用户。 您对本地文件系统行为方式的一些了解可转换为远程对象存储,但它们本质上是不同的。这两者之间的最大区别 (至少对于数据分析工作负载而言) 可能在于,对象存储上的读取和写入操作具有越来越高的可变延迟。每个存储服务 (AWS、Azure) 都有自己的一套最佳实践和性能指南。在这里,我们将提供一些专注于数据分析工作负载的一般指南。 地址 将计算节点放置在存储服务附近 (理想情况下,应位于同一云区域),可在运行工作负载的计算机和为数据提供服务的计算机之间提供速度最快、最可靠的网络。在一天结束时,传输将受到光速的限制,因此最大限度地减少物理距离不会造成伤害。 文件格式 “云原生”文件格式的开发能够很好地与对象存储配合使用。这些文件格式通常可让用户快速轻松地访问元数据 (元数据包括列名称或数据类型等高级信息,以及文件特定数据子集所在位置等低级信息)。 Apache Parquet 、 Zarr 和 Cloud Optimized GeoTIFF 是适用于各种类型数据的云原生文件格式的一些示例。 由于对象存储服务通常支持范围请求,因此客户端 (如 cuDF ) 可以读取元数据,然后只下载您实际需要的数据。例如,cuDF 只能从包含多列的 Parquet 文件中读取几列数据,或者 Zarr 客户端可以从大型 n 维数组中读取单个 chunk。这些读取只需通过几次 HTTP 请求即可完成,而且无需下载一堆刚刚被过滤掉的不相干数据。 文件大小 由于每个读取操作都需要 (至少) 一个 HTTP 请求,因此我们倾向于在合理数量的字节数上分担每个 HTTP 请求的用度。如果您控制数据写入过程,则需要确保文件足够大,以便下游处理任务获得良好性能。最佳值取决于您的工作负载,但 parquet 文件的大小通常介于数十 MB 到数百 MB 之间 (请参阅下文,了解一些特定示例)。 也就是说,您需要注意文件大小与 Kit 中的下一个工具:并发的交互方式。 并发 使用并发同时下载多个 blobs (或单个 blob 的多个部分) 对于从远程存储服务中获得良好性能至关重要。由于这是一项远程服务,您的流程将花费一些时间 (可能会花费大量时间) 四处等待,不执行任何操作。此等待时间为 HTTP 请求被发送到响应被接收之间的时间。在此期间,我们会等待网络执行请求,等待存储服务处理并发送响应,等待网络执行响应 (可能较大)。虽然该请求/响应周期的一部分会随所涉及的数据量而扩展,但其他部分只是固定的开销。 对象存储服务旨在处理许多并发请求。我们可以将这一点与每个请求都涉及一些时间来等待不执行任何操作的事实相结合,以发出许多并发请求来提高整体吞吐量。在 Python 中,这通常使用线程池完成: pool = concurrent.futures.ThreadPoolExecutor() futures = pool.map(request_chunk, chunks) 或使用 异步 : tasks = [request_chunk_async(chunk) for chunk in chunks] await asyncio.gather(*tasks) 我们能够让大量读取 同时 不执行任何操作,从而提高吞吐量。由于每个线程/任务通常不执行任何任务,因此拥有比计算机核心数更多的线程/任务也是可以的。如果并发请求数量足够多,您最终会使存储服务饱和,而存储服务试图满足一些每秒请求数和带宽目标数。但这些目标很高;您通常需要多台机器使存储服务饱和,并且应该实现非常高的吞吐量。 库 上述内容基本上适用于从对象存储服务执行远程 IO 的任何库。在 RAPIDS 环境中, NVIDIA KvikIO 值得注意,因为 它会自动将大型请求分块为多个较小的请求,并并发发出这些请求。 它可以高效读取主机或设备内存,尤其是启用 GPU Direct Storage 时。 速度很快。 正如 RADIDS 24.12 发布公告中提到的那样,从 S3 读取数据时,KvikIO 可以实现惊人的吞吐量。我们来看看一些基准测试,看看效果如何。 基准测试 当您读取文件时,KvikIO 会将读取的文件拆分成较小的 kvikio.defaults.task_size 字节读取。它使用具有 kvikio.defaults.num_threads 工作线程的线程池并行执行这些读取请求。可以使用环境变量 KVIKIO_TASK_SIZE 和 KVIKIO_NTHREADS 控制这些内容,也可以通过 Python 使用: with kvikio.defaults.set_num_threads(num_threads), kvikio.defaults.set_task_size(size): ... 
详情请参阅 Runtime Settings 。 此图表显示了在同一区域内,针对不同大小的线程池,从 S3 到 g4dn EC2 实例读取 1 GB Blob 的吞吐量 (以 Mbps 为单位) (越高越好)。 图 1、从 S3 读取 1 GB 文件的基准测试,到具有高达 25 Gbps 已发布带宽的 g4dn.xlarge EC2 实例。这是 kvikio.RemoteFile.read 的吞吐量,适用于各种值的 kvikio.defaults.num _threads 和 16 MiB 的任务。随着我们添加更多线程并对读取进行并行化,吞吐量会增加到一定程度。 线程越少 (少于 4 个),吞吐量越低,读取文件的时间越长。更多线程 (64、128、256) 通过将请求并行化到以并行方式提供服务的存储服务,实现更高的吞吐量。当我们遇到系统中存储服务、网络或其他瓶颈的限制时,会出现递减甚至负回报的情况。 借助远程 IO,每个线程都会在相对较长的时间内等待响应,因此对于您的工作负载,可能适合使用更多线程 (相对于核心数量而言)。我们看到,在本例中,吞吐量最高,介于 64 到 128 个线程之间。 如下图所示,任务大小也会影响最大吞吐量。 图 2、从 S3 读取 1 GB 文件的基准测试,到具有高达 25 Gbps 已发布带宽的 g4dn.xlarge EC2 实例 。这显示了 kvikio.RemoteFile.read 吞吐量的热图。水平轴显示各种任务大小的吞吐量,而垂直轴显示各种线程数量。 只要任务大小不是太小(大约或低于 4 MiB)或太大(大约或超过 128 MiB),吞吐量就会达到 10 Gbps 左右。由于任务规模过小,发出许多 HTTP 请求会降低吞吐量。由于任务规模过大,我们无法获得足够的并发能力来最大限度地提高吞吐量。 与 boto3 (适用于 Python 的 AWS SDK) 相比,即使在线程池中使用 boto3 并发执行请求,KvikIO 也能实现更高的吞吐量。 图 3、从从 S3 读取 1 GB 的基准测试,到具有高达 25 Gbps 已发布带宽的 g4dn.xlarge EC2 实例。KvikIO 基准测试使用 64 个线程和 16 MiB 任务大小。Boto3 基准测试使用 ThreadPool 并行读取许多 4 MB 字节的块,而参数搜索表明,对于 Boto3 而言,这是最快的块大小。 对于略为逼真的工作负载 (尽管仍然仅有一个工作负载专注于 IO),我们比较了读取一批 360 个 parquet 文件 (每个文件约 128 MB) 的性能。这在 AWS g4dn.12xlarge 实例上运行,该实例包含 4 个 NVIDIA T4 GPU 和 48 个 vCPUs。 图 4、从读取 S3 中的 Parquet 数据集的基准测试,到具有高达 50 Gbps 已发布带宽的 g4dn.12xlarge EC2 实例。该数据集包含 360 个 Apache Parquet 文件,每个文件约 128 MB,总计约 46 GB。Dask 集群有 4 个工作者。这些结果使用 cuDF 25.04,其中包括并行读取 Parquet 文件页脚的优化。 启用 KvikIO 后,四个 Dask 工作进程能够共同实现从 S3 到此单个节点的近 20 Gbps 吞吐量。 结束语 随着 RAPIDS 加速工作负载的其他部分,IO 可能会成为瓶颈。如果您使用的是对象存储,并且已经疲于等待数据加载,请尝试本博文中的一些建议。让我们了解如何在 Github 上使用 KvikIO。您还可以与 RAPIDS Slack 社区的 3,500 多名成员一起讨论 GPU 加速的数据处理。
https://developer.nvidia.com/blog/latest-multimodal-addition-to-microsoft-phi-slms-trained-on-nvidia-gpus/
Latest Multimodal Addition to Microsoft Phi SLMs Trained on NVIDIA GPUs
Large language models (LLMs) have permeated every industry and changed the potential of technology. However, due to their massive size, they are not practical for the resource constraints that many companies currently have. The rise of small language models (SLMs) bridges quality and cost by creating models with a smaller resource footprint. SLMs are a subset of language models that tend to focus on specific domains and are built with simpler neural architectures. As models evolve to mimic how humans perceive the world around them, they must also accept multiple forms of multimodal data. Microsoft announces a new generation of open SLMs in the Phi family with two new additions: Phi-4-mini Phi-4-multimodal Phi-4-multimodal is the first multimodal model to join the family, accepting text, audio, and image data inputs. These models are small enough for on-device deployment. This release builds on top of the December 2024 research-only release of the Phi-4 14B parameter SLM and enables commercial use for the two new smaller models. The new models are available on the Azure AI Foundry, Microsoft's cloud AI platform for designing, customizing, and managing AI applications and agents. You can test out each member of the Phi family through the NVIDIA API Catalog, which is the first sandbox environment to support every modality and tool-calling for Phi-4-multimodal. Use the preview NIM microservice to integrate the model into your applications today. Why invest in SLMs? SLMs enable generative AI capabilities in memory- and compute-constrained environments. For example, SLMs can be deployed directly on smartphones and other consumer-grade devices. On-device deployment can facilitate privacy and compliance for use cases that must adhere to regulatory requirements. Other benefits of SLMs include lower latency due to inherently faster inference compared to an LLM of similar quality. SLMs do tend to perform better on specialized tasks correlated to their training data. However, to supplement generalization and adaptability to different tasks, you can use retrieval-augmented generation (RAG) or native function calling to build performant agentic systems. Phi-4-multimodal Phi-4-multimodal has 5.6B parameters and accepts audio, image, and text inputs for reasoning. This enables it to support use cases such as automated speech recognition (ASR), multimodal summarization, translation, OCR, and visual reasoning. This model was trained on 512 NVIDIA A100-80GB GPUs over 21 days. Figure 1 shows how you can preview your image data and ask Phi-4-multimodal visual QA questions in the NVIDIA API Catalog. You can also see how to adjust parameters such as token limits, temperature, and sampling values. You can generate sample code in Python, JavaScript, and Bash to help you integrate the model more easily into your applications. Figure 1. Visual QA demo in NVIDIA API Catalog You can also demo tool calling with a set of prebuilt agents. Figure 2 shows a tool that retrieves live weather data. Figure 2. Tool-calling demo in NVIDIA API Catalog Phi-4-mini Phi-4-mini is a text-only, dense, decoder-only Transformer model with 3.8B parameters that is optimized for chat. It includes a long context window of 128K tokens. This model was trained on 1024 NVIDIA A100 80GB GPUs over 14 days. For both models, the training data is intentionally focused on high-quality educational data and code, which results in a textbook-like quality to the models. Text, speech, and vision benchmark data can be found in the model cards.
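As a rough sketch of what integration through the preview NIM microservice can look like, the example below calls the hosted endpoint with an OpenAI-compatible client. The base URL follows the API Catalog's usual convention and the model identifier is an assumption; check the Phi-4-multimodal page on build.nvidia.com for the exact model name and for the multimodal request schema needed for image and audio inputs.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # API Catalog's OpenAI-compatible endpoint (assumed convention)
    api_key=os.environ["NVIDIA_API_KEY"],
)

completion = client.chat.completions.create(
    model="microsoft/phi-4-multimodal-instruct",  # assumed model ID; confirm on build.nvidia.com
    messages=[{"role": "user", "content": "Summarize the benefits of small language models in three bullet points."}],
    temperature=0.2,
    max_tokens=256,
)
print(completion.choices[0].message.content)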
Advancing community models NVIDIA is an active contributor to the open-source ecosystem and has released several hundred projects under open-source licenses. NVIDIA is committed to optimizing community software and open models such as Phi which promotes AI transparency and lets users broadly share work in AI safety and resilience. Using the NVIDIA NeMo platform , these open models can be customized on proprietary data to be highly tuned and efficient for diverse AI workflows across any industry. NVIDIA and Microsoft have a long standing partnership which includes several collaborations driving innovation on GPUs on Azure, integrations and optimizations for PC developers using NVIDIA RTX GPUs, and many more, including research spanning generative AI to healthcare and life sciences. Get started today Bring your data and try out Phi-4 on the NVIDIA-accelerated platform at build.nvidia.com/microsoft . On the first multi-modal sandbox for Phi-4-multimodal, you can try out text, image, and audio as well as sample tool calling to see how this model will work for you in production.
https://developer.nvidia.com/zh-cn/blog/latest-multimodal-addition-to-microsoft-phi-slms-trained-on-nvidia-gpus/
在 NVIDIA GPU 上训练的 Microsoft Phi SLM 的多模态最新进展
大语言模型(LLMs)已渗透到各行各业,并改变了技术潜力。但是,由于规模庞大,它们对于许多公司目前面临的资源限制来说并不切实际。 小语言模型 (SLMs)的兴起通过创建资源占用更小的模型,将质量和成本联系起来。SLMs 是语言模型的一个子集,这些模型倾向于专注于特定领域,并使用更简单的神经架构构建。随着模型的发展模仿人类感知周围环境的方式,模型必须接受多种形式的多模态数据。 Microsoft 宣布在 Phi 系列中 推出新一代开放式 SLM ,并新增两项功能: Phi-4-mini Phi-4-multimodal Phi-4-multimodal 是第一个加入该系列的多模态模型,接受文本、音频和图像数据输入。 这些模型足够小,可以在设备上部署。此版本基于 2024 年 12 月发布的 Phi-4 14B 参数 SLM 的研究版本构建而成,可用于两个新的较小模型的商业用途。 这些新模型可在 Microsoft 的云 AI 平台 Azure AI Foundry 上使用,用于设计、定制和管理 AI 应用和代理。 您可以通过 NVIDIA API Catalog 测试 Phi 系列的每个成员,这是第一个支持 Phi-4 多模态 的每种模式和工具调用的沙盒环境。立即使用预览 NIM 微服务将模型集成到您的应用中。 为何投资 SLM? SLMs 可在内存和计算受限环境中实现生成式 AI 功能。例如,SLMs 可以直接部署在智能手机和多台消费级设备上。对于必须遵守监管要求的用例,设备端部署可以促进隐私和合规性。 SLM 的其他优势包括降低延迟,因为与质量相似的 LLM 相比,其本身的推理速度更快。SLM 在处理与其训练数据相关的专业任务时往往表现得更好。但是,为了补充对不同任务的泛化和适应性,您可以使用检索增强生成(RAG)或原生函数调用来构建高性能代理系统。 Phi-4-multimodal Phi-4-multimodal 具有 5.6B 个参数,接受音频、图像和文本推理。这使其能够支持自动语音识别 (ASR)、多模态摘要、翻译、OCR 和视觉推理等用例。该模型在 512 个 NVIDIA A100-80GB GPUs 上进行了为期 21 天的训练。 事实证明,该模型在 ASR 方面表现出色,因为它在 Huggingface OpenASR 排行榜上排名第一 ,单词错误率为 6.14%。 词错误率 (WER) 是量化语音识别性能的常用计算方法。WER 计算不正确转录的单词 (替换、插入和删除) 与正确文本相比所占的百分比。 图 1 展示了如何在 NVIDIA API Catalog 中预览图像数据并询问 Phi-4 多模态视觉问答。您还可以了解如何调整参数,例如令牌限制、温度和采样值。您可以使用 Python、JavaScript 和 Bash 生成示例代码,以帮助您更轻松地将模型集成到应用中。 图 1、NVIDIA API Catalog 中的可视化问答演示 您还可以使用一组预构建代理演示工具调用。图 2 显示了用于检索实时天气数据的工具。 图 2、NVIDIA API Catalog 中的工具调用演示 Phi-4-mini Phi-4-mini 是一个仅文本、密集、仅解码器的 Transformer 模型,具有 3.8B 个参数,并针对聊天进行了优化。它包含一个包含 128K 个令牌的长形式上下文窗口。该模型在 1024 个 NVIDIA A100 80GB GPUs 上进行了为期 14 天的训练。 对于这两个模型,训练数据有意地集中在高质量的教育数据和代码上,从而使模型获得类似于教科书的质量。您可以在模型卡中找到文本、语音和视觉基准测试数据。 推进社区模式 NVIDIA 是开源生态系统的积极贡献者,已根据开源许可发布了数百个项目。NVIDIA 致力于优化社区软件和 open-source licenses 中的项目,如 Phi,它促进了 AI 透明度,并让用户广泛分享在 AI 安全性和弹性方面的工作。 借助 NVIDIA NeMo 平台,这些开放模型可以根据专有数据进行定制,以便针对各行各业的各种 AI 工作流进行高度调整并提高效率。 NVIDIA 和 Microsoft 有着长期的合作伙伴关系,其中包括推动 Azure 上 GPU 创新的多项合作、为使用 NVIDIA RTX GPU 的 PC 开发者提供的集成和优化,等等,包括从生成式 AI 到医疗健康和生命科学的研究。 立即开始使用 请访问 build.nvidia.com/microsoft ,带上您的数据并在 NVIDIA 加速平台上试用 Phi-4。 在 Phi-4 多模态的第一个多模态沙盒中,您可以尝试使用文本、图像、音频以及示例工具调用,以了解此模型在生产环境中的工作原理。
https://developer.nvidia.com/blog/building-a-simple-vlm-based-multimodal-information-retrieval-system-with-nvidia-nim/
Building a Simple VLM-Based Multimodal Information Retrieval System with NVIDIA NIM
In today’s data-driven world, the ability to retrieve accurate information from even modest amounts of data is vital for developers seeking streamlined, effective solutions for quick deployments, prototyping, or experimentation. One of the key challenges in information retrieval is managing the diverse modalities in unstructured datasets, including text, PDFs, images, tables, audio, video, and so on. Multimodal AI models address this challenge by simultaneously processing multiple data modalities, generating cohesive and comprehensive output in different forms. NVIDIA NIM microservices simplify the secure and reliable deployment of AI foundation models for language, computer vision , speech, biology, and more. NIM microservices can be deployed on NVIDIA-accelerated infrastructure anywhere and expose industry-standard APIs for fast integration with applications and popular AI development frameworks, including LangChain and LlamaIndex. This post helps you get started with building a vision language model (VLM) based, multimodal, information retrieval system capable of answering complex queries involving text, images, and tables. We walk you through deploying an application using LangGraph, the state-of-the-art llama-3.2-90b-vision-instruct VLM, the optimized mistral-small-24B-instruct large language model (LLM), and NVIDIA NIM for deployment. This method of building simple information retrieval systems offers several advantages over traditional ones. The latest VLM NIM microservice enables enhanced contextual understanding by processing lengthy, complex visual documents without sacrificing coherence. The integration of LangChain’s tool calling enables the system to create tools, dynamically select and use external tools, and improve the precision of data extraction and interpretation from various sources. This system is good for enterprise applications because it generates structured outputs, ensuring consistency and reliability in responses. For more information about the implementation steps of this system, see the /NVIDIA/GenerativeAIExamples GitHub repo. A simple HTML multimodal retrieval pipeline The system consists of the following pipelines: Document ingestion and preprocessing: Runs a VLM on the images and translates them into text. Question-answering: Enables the user to ask questions of the system. Both pipelines integrate NVIDIA NIM and LangGraph to process and understand text, images, complex visualizations, and tables effectively. Data ingestion and preprocessing pipeline This stage parses documents to process text, images, and tables separately. Tables are first converted into images, and images are processed by the NVIDIA-hosted NIM microservice API endpoint for the llama-3.2-90b-vision-instruct VLM to generate descriptive text. Next, in the document reconstruction step, the descriptive text is merged with the original text of the document, then summarized by an LLM with long context modeling capability. In this implementation, llama-3.2-90b-vision-instruct is also used as the LLM, although other LLMs such as mistral-small-24b-instruct can also be deployed. Finally, the complete text, summaries, images, and their descriptions are stored in a NoSQL database, along with unique document identifiers. Figure 1. 
Data ingestion and preprocessing pipeline LLMs with long context modeling can process entire documents without fragmentation, enhancing comprehension of the document in a single pass, and capturing relationships and nuances across longer spans of text, leading to more accurate information retrieval. In contrast, traditional models may handle inputs of up to a few thousand tokens, requiring lengthy documents to be split into smaller chunks to fit within the model’s context window. This chunking process can disrupt coherence and context, making it more difficult to accurately retrieve and rank relevant information. However, long context modeling presents challenges related to scalability and cost, which must be considered when trading off with higher accuracy. QA pipeline All document summaries and their identifiers are compiled into a large prompt. When a query is sent, a LLM with long context modeling (mistral-small-24b-instruct in this case) processes the question, evaluates the relevance of each summary to the query, and returns the identifiers of the most relevant documents. Figure 2. Question-answering pipeline Next, the most relevant documents are fed into an LLM with long context (mistral-small-24b-instruct). The model generates an answer to the query based on the textual content. If the model identifies that an image may contain pertinent information based on its descriptive text, an additional step is triggered: the original image and the user’s question are sent to the VLM (llama-3.2-90b-vision-instruct), which can provide an answer based on the actual visual content. Finally, the system combines both textual and visual insights to deliver a comprehensive answer. Structured outputs ensure that the data returned by the model conforms to a predefined format, making it easier to extract specific information and perform subsequent operations. In contrast, unstructured or variable outputs can introduce ambiguities and difficulties in parsing the model’s responses, hindering automation and integration with other systems. Generating structured data from models typically requires carefully designed prompts to guide the model into responding in a particular format, such as JSON. However, ensuring consistent adherence to this structure can be challenging due to the models’ natural tendency to generate free-form text. NVIDIA NIM now natively supports capabilities for generating structured outputs . This means that you can rely on built-in functionalities to ensure that the model’s responses are consistently formatted, reducing the need for complex prompt engineering. Integrating NVIDIA NIM with LangGraph NVIDIA NIM offers seamless compatibility with popular frameworks and the latest AI models for your applications. The implementation of the pipeline integrates NVIDIA NIM with LangGraph , a framework to build agentic applications to determine the control flow, which has been widely adopted by the developer community. To orchestrate the workflow of this pipeline, the graph mainly consists of two nodes: Assistant node: Serves as an agent responsible for managing the logic and decision-making process. It interacts with the user’s inputs and invokes the necessary tools. Tools node: A collection of tools that perform specific tasks required by the assistant. Figure 3. Use LangGraph to build an agent for the pipeline Assistant node The assistant node is a primary agent that operates according to the workflow outlined in Figure 3. 
The code of the main agent can be found in the /NVIDIA/GenerativeAIExamples GitHub repo. Here are the agent inputs: Collection_name : The set of documents on which to search. Question : The user’s question. document_id : (Optional) If provided, the agent skips the document ranking phase. This is the agent process: Document selection : If document_id is not provided, the agent invokes the find_best_document_id tool, which identifies the most relevant document for the user’s question within the specified collection. Question answering : With document_id , the agent uses the query_document tool. This tool attempts to answer the question using the LLM (mistral-small-24b-instruct) based on the text and image descriptions within the document. Image analysis (if necessary): If the query_document tool indicates that the answer might be in an image (by returning an image_hash value), the agent invokes the query_image tool. This tool retrieves the actual image and uses a VLM to analyze the image and find the answer. Tools node We implemented three key tools for the agent to perform its tasks. Find_best_document_id : Identify the most relevant document for the user’s question when document_id is not provided. For more information, see the /NVIDIA/GenerativeAIExamples GitHub repo. query_document : Search for an answer within the specified document. If the answer may be in an image, it provides details to query the image. For more information, see the /NVIDIA/GenerativeAIExamples GitHub repo. query_image : Analyze the actual image using a VLM when the answer might be within the image content. For more information, see the /NVIDIA/GenerativeAIExamples . Binding external tools with models Tool calling is a feature that enables language models to integrate and interact with external tools or functions based on the prompts that they receive. This mechanism enables a model to decide which tools to use and how to use them to accomplish specific tasks. Tool binding empowers models to extend their capabilities dynamically, selecting appropriate tools during execution to provide more accurate, context-aware responses. Binding external tools is particularly crucial in agentic frameworks, where agents must choose the appropriate tools and provide the necessary arguments to perform tasks effectively. The benefits of binding external tools include the following: Extended capabilities : Models can perform complex operations such as calculations, data retrieval, or API calls, which go beyond mere text generation. Dynamic tool selection : The model can assess in real time which tools are most suitable for the task, improving efficiency and relevance. Seamless integration : NVIDIA NIM supports the integration of external tools, such as LangChain and LangGraph, with open community models such as Llama 3.3. You can adopt these advanced features without making significant changes to your existing systems. In this implementation, use LangChain’s @tool decorator to create three tools, then use the .bind_tools method to bind the tools with models. Defining structured outputs with Pydantic By defining the output schema with Pydantic and guiding an LLM NIM microservice such as mistral-small-24b-instruct through precise prompts, you ensure that the responses are consistent, reliable, and easily consumable by other components within the system. This approach is essential when integrating the LLM into automated workflows and agent-based frameworks such as LangGraph. 
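Before turning to the output schema, here is a minimal sketch of the tool-binding step described above. The tool signatures mirror the three tools named earlier, but the bodies are placeholder stubs rather than the repository's actual implementations, and the ChatNVIDIA import path may vary with your LangChain version (the snippets later in this post import it from langchain.chat_models).
from langchain_core.tools import tool
from langchain_nvidia_ai_endpoints import ChatNVIDIA

@tool
def find_best_document_id(collection_name: str, question: str) -> str:
    """Return the ID of the document most relevant to the question."""
    ...  # placeholder: rank the collection's summaries with the LLM

@tool
def query_document(document_id: str, question: str) -> str:
    """Answer the question from the document text; may return an image_hash."""
    ...  # placeholder

@tool
def query_image(image_hash: str, question: str) -> str:
    """Answer the question by sending the referenced image to the VLM."""
    ...  # placeholder

# Bind the tools so the model can decide when to call each one
llm = ChatNVIDIA(model="mistralai/mistral-small-24b-instruct", temperature=0)
llm_with_tools = llm.bind_tools([find_best_document_id, query_document, query_image])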
Define the structure The process begins by defining the structure of the output that you expect from the LLM using Pydantic. This guarantees that the data returned by the model is consistent and can be easily parsed for downstream processing . from typing import List, Optional from pydantic import BaseModel, Field class Document(BaseModel): """ Represents a document with an identifier and its summary. """ id: str = Field(..., description="Hash identifier of the document") summary: str = Field(..., description="The summary of the document as is") class BestDocuments(BaseModel): """ Contains a list of the best documents to answer the question and their summaries. """ documents: List[Document] = Field(..., description="List of best documents") class Answer(BaseModel): """ Represents the answer to the user's question. """ answer: str = Field(..., description="Answer to the question posed by the user") Next, instruct the LLM to generate outputs that align with the defined Pydantic structures. This is achieved by incorporating specific instructions within the prompt and using LangChain’s with_structured_output method. Define the prompt The prompt_document_expert contains detailed instructions for the LLM, specifying the expected input format (Markdown with document summaries) and the required output format (JSON matching the BestDocuments schema). from langchain.chat_models import ChatNVIDIA from langchain.prompts import ChatPromptTemplate # Initialize the LLM with desired parameters llm = ChatNVIDIA(model="mistralai/mistral-small-24b-instruct ", temperature=0, max_tokens=3000) # Define the prompt template for the document expert prompt_document_expert = ChatPromptTemplate.from_messages( [ ( "system", f""" # Extract Best Document Identifier from list of summaries, based on a question coming from the user. You are an expert in getting insights of a document, based on its summaries and you are able to figure the best matches to the question in terms of the summary of the document. Provide no more than 3 of these documents. ## Format of the Input - The input is a markdown file containing second level headers (##) with the chapter index in the form ## Document <document_id> where document_id is an integer pointing to the index of the document. After the document heading there is the summary of the document which is relevant to understand the content of the document. ## Format of the output - The output is going to be the list of the best documents indices and a few of the corresponding summaries that help to answer the question coming from the user. ## Content - Here is the input you can work on: {{documents_context}} """, ), ( "human", "Can you tell me what are the most relevant document ids for this question: {question}" ), ("human", "Tip: Make sure to answer in the correct format"), ] ) Prepare context The get_context function prepares the input data by retrieving document summaries and formatting them appropriately. def get_context(input_data: dict) -> dict: collection_name = input_data.get("collection_name") question = input_data.get("question") documents_context = get_document_summaries_markdown(collection_name) # print(context) return {"documents_context": documents_context, "collection_name": collection_name, "question": question} Bind the structured output The llm.with_structured_output(BestDocuments) method instructs the LLM to produce output conforming to the BestDocuments Pydantic model. 
This method internally handles the parsing and validation of the LLM’s response, ensuring that the output matches the expected structure. LangChain’s with_structured_output method simplifies the process of binding the model to produce structured outputs. It abstracts the complexity of parsing and validating the LLM’s responses, enabling you to focus on defining the desired output structure and the prompt instructions. Finally, create a chain to process the input and generate the structured output: chain_document_expert = ( RunnableLambda(get_context) | prompt_document_expert | llm.with_structured_output(BestDocuments) | (lambda x: x.dict()) ) End-to-end tool in action To get started with the multimodal retrieval system, clone the /NVIDIA/GenerativeAIExamples GitHub repo and follow the Quick Start guide to set up the service. When it’s up and running, open your web browser and navigate to http://localhost:7860 to access the system through the Gradio user interface. For example, explore how the system processes queries on the NVIDIA Technical Blog. Ask a question about a bar chart showing the NVIDIA H100 GPU performance from one of the posts. The Select Question field is for evaluation purposes, with the Ground Truth Answer field value provided by a human. Figure 4. Agent multi-document evaluation This system generates an accurate answer based on the bar chart and also displays the relevant image for reference, such as the chart showing RetinaNet achieving 54%. This ensures precise answers while enabling users to visually verify the referenced data. Figure 5. Agent result with source graph for verification Video 1. How to Insert HTML Documents into a Multimodal Retriever Collection Using NVIDIA NIM Video 2. How to Search Text and Images Within a Multimodal Retriever Collection Using NVIDIA NIM Challenges and solutions As data volumes increase, so does the complexity of processing and retrieving relevant information. Handling large datasets efficiently is essential to maintaining performance and ensuring user satisfaction. In this information retrieval system, the sheer amount of document summaries can exceed the context window of even long-context models, making it challenging to process all summaries in a single prompt. Processing large volumes of data also demands considerable computational resources, which can result in higher costs and increased latency. Optimizing resource utilization is crucial to delivering fast and accurate responses while minimizing unnecessary expenses. Hierarchical document reranking solution To address scalability challenges, we implemented a hierarchical approach in the initial document reranking phase. Instead of processing all document summaries simultaneously, we divided them into manageable batches that fit within the model’s context window. The process involves multiple stages: Batch processing : Summaries are grouped into batches that the model can handle without exceeding the prompt size limitations. Intermediate reranking : The model evaluates each batch separately, ranking the documents within each group. Selection of top candidates : The most relevant documents from each batch are selected to proceed to the next stage. Final reranking : The top candidates from all batches are combined and re-evaluated to identify the most relevant document. Considering both scalability and cost concerns, this hierarchical approach ensures that all documents are considered without exceeding the model’s capacity. 
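A minimal sketch of that batched loop is shown below. The rank_batch callable is a hypothetical stand-in for the document-expert chain shown earlier (it should return document entries ordered by relevance); the batch size and the number of candidates kept per batch are illustrative choices, not values from the reference implementation.
def hierarchical_rerank(summaries, question, rank_batch, batch_size=50, keep_per_batch=3):
    """Rank documents in batches that fit the context window, then rerank the finalists."""
    finalists = []
    # Stages 1-3: rank each batch independently and keep its top candidates
    for start in range(0, len(summaries), batch_size):
        batch = summaries[start:start + batch_size]
        finalists.extend(rank_batch(batch, question)[:keep_per_batch])
    # Stage 4: final reranking over the combined finalists
    return rank_batch(finalists, question)[:keep_per_batch]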
It not only improves scalability, but also boosts efficiency by narrowing down the candidate documents systematically until the most relevant one is identified. Future prospects with smaller models Using language models, especially those with long-context capabilities, involves processing a large number of tokens, which can incur significant costs. Each token processed adds to the overall expense, making cost management a critical consideration when deploying these systems at scale. The concern about cost is indeed valid. However, the landscape of language models is rapidly evolving, with smaller models becoming increasingly capable and efficient. As these advancements continue, these smaller models may offer similar performance at a fraction of the cost. Conclusion This post discussed the implementation of a simple multimodal information retrieval pipeline that uses NVIDIA NIM and LangGraph. The pipeline offers several advantages over existing information retrieval methods: Enhanced comprehension of documents A multimodal model to extract information from images, tables, and text Seamless integration of external tools Generation of consistent and structured output Using NVIDIA NIM and LangGraph, you can build on this work and customize it to suit specific needs. To get started, you can find source code in the /NVIDIA/GenerativeAIExamples GitHub repo. NVIDIA NIM also offers access to more models optimized for NVIDIA GPUs. You can explore NVIDIA NeMo , a scalable generative AI framework designed for researchers and PyTorch developers working on LLMs, multimodal models, and more. If you are working with a large corpora of enterprise data and are looking to develop enterprise-ready, real-time multilingual and cross-lingual information retrieval systems to generate context-aware responses, learn more about NVIDIA NeMo Retriever .
https://developer.nvidia.com/zh-cn/blog/building-a-simple-vlm-based-multimodal-information-retrieval-system-with-nvidia-nim/
使用 NVIDIA NIM 构建基于 VLM 的简单多模态信息检索系统
在当今数据驱动的世界中,即使是从少量数据中检索准确信息的能力,对于寻求精简、有效的快速部署、原型设计或实验解决方案的开发者来说也至关重要。信息检索领域的主要挑战之一是管理非结构化数据集中的各种模式,包括文本、PDF、图像、表格、音频、视频等。 多模态 AI 模型通过同时处理多个数据模式来应对这一挑战,以不同的形式生成连贯一致的全面输出。 NVIDIA NIM 微服务可简化 AI 基础模型 在语言、 计算机视觉 、语音、生物学等领域的安全可靠部署。 NIM 微服务可随时随地部署在 NVIDIA 加速基础设施上,并提供行业标准 API,以快速集成应用和热门 AI 开发框架 (包括 LangChain 和 LlamaIndex)。 本文将帮助您着手构建基于 视觉语言模型 (VLM)的多模态信息检索系统,该系统能够回答涉及文本、图像和表格的复杂查询。我们将引导您使用 LangGraph 部署应用程序、先进的 llama-3.2-90b-vision-instruct VLM、经过优化的 mistral-small-24B-instruct 大语言模型(LLM),以及用于部署的 NVIDIA NIM。 与传统方法相比,这种构建简单信息检索系统的方法具有许多优势。最新的 VLM NIM 微服务可在不牺牲一致性的情况下处理冗长而复杂的视觉文档,从而增强上下文理解。通过集成 LangChain 的工具调用 ,系统能够创建工具,动态选择和使用外部工具,并提高从各种来源提取和解释数据的精度。 此系统适用于企业应用,因为它生成结构化输出,确保响应的一致性和可靠性。有关此系统的实施步骤的更多信息,请参阅 /NVIDIA/GenerativeAIExamples Github 仓库。 简单的 HTML 多模态检索工作流 该系统由以下管道组成: 文档提取和预处理:在图像上运行 VLM 并将其转换为文本。 问答:允许用户提出系统问题。 这两个工作流均集成了 NVIDIA NIM 和 LangGraph,可有效处理和理解文本、图像、复杂的可视化效果和表格。 数据提取和预处理 pipeline 此阶段会解析文档,分别处理文本、图像和表格。首先将表格转换为图像,然后由 NVIDIA 托管的 NIM 微服务 API 端点为 llama-3.2-90b-vision-instruct VLM 处理图像,以生成描述性文本。 接下来,在文档重建步骤中,描述性文本将与文档的原始文本合并,然后由具有长上下文建模功能的 LLM 进行汇总。在此实施中,还可将 Llama-3.2-90b-vision-instruct 用作 LLM,不过也可部署其他 LLM(例如 mistral-small-24b-instruct)。 最后,完整的文本、摘要、图像及其说明将存储在 NoSQL 数据库中,以及唯一的文档标识符。 图 1. 数据提取和预处理管道 采用长上下文建模的 LLMs 可以处理整个文档,而不会出现碎片,从而在单个通道中增强对文档的理解,并捕获更长的文本跨度中的关系和细微差别,从而实现更准确的信息检索。 相比之下,传统模型可能会处理多达数千个 tokens 的输入,因此需要将冗长的文档拆分成较小的块,以适应模型的上下文窗口。这种分块过程会破坏一致性和上下文,使准确检索相关信息并对其进行排名变得更加困难。 但是,长上下文建模会带来与可扩展性和成本相关的挑战,在以更高的准确性进行权衡时必须考虑这些挑战。 QA 管道 所有文档摘要及其标识符都会编译成一个大型提示。发送查询时,使用长上下文建模(本例中为 mistral-small-24b-instruct)的 LLM 会处理问题,评估每个摘要与查询的相关性,并返回最相关文档的标识符。 图 2、问题回答管道 接下来,将最相关的文档输入到具有长上下文 (mistral-small-24b-instruct) 的 LLM 中。模型会根据文本内容生成查询答案。如果模型根据描述性文本识别出图像可能包含相关信息,则会触发另一个步骤:将原始图像和用户的问题发送至 VLM (llama-3.2-90b-vision-instruct),VLM 可以根据实际视觉内容提供答案。 最后,该系统将文本和视觉见解相结合,以提供全面的答案。 结构化输出可确保模型返回的数据符合预定义的格式,从而更轻松地提取特定信息并执行后续运算。相比之下,非结构化或可变输出会在解析模型的响应时引入模糊性和困难,从而阻碍自动化以及与其他系统的集成。 从模型生成结构化数据通常需要精心设计的提示,以指导模型以特定格式(例如 JSON)做出响应。但是,由于模型自然倾向于生成自由格式文本,因此确保一致性遵循此结构可能具有挑战性。 NVIDIA NIM 现在原生支持生成结构化输出的功能。这意味着,您可以依靠内置功能来确保模型的响应格式保持一致,从而减少对复杂提示工程的需求。 将 NVIDIA NIM 与 LangChain 集成 NVIDIA NIM 为您的应用提供与热门框架和最新 AI 模型的无缝兼容性。该流程的实施将 NVIDIA NIM 与 LangChain 相集成。LangChain 是一个用于构建代理应用以确定控制流的框架,已被开发者社区广泛采用。为编排此工作流的工作流,该图形主要由两个节点组成: 助理节点:充当负责管理逻辑和决策过程的代理。它与用户的输入进行交互,并调用必要的工具。 工具节点:用于执行助手所需特定任务的工具集合。 图 3、使用 LangGraph 为管道构建代理 助理节点 助手节点是根据图 3 中概述的工作流程运行的主代理。主代理的代码可在 /NVIDIA/GenerativeAIExamples GitHub repo 中找到。 智能体输入如下: Collection_name :要在其上搜索的文档集。 Question :用户的问题。 document_id :(可选) 如果提供,代理会跳过文档排名阶段。 这是智能体流程: 文档选择 :如果未提供 document_id ,代理会调用 find_best_document_id 工具,该工具可在指定集合中识别与用户问题最相关的文档。 问题回答:对于 document_id ,代理使用 query_document 工具。此工具会尝试使用 LLM (mistral-small-24b-instruct) 根据文档中的文本和图像描述来回答此问题。 图像分析 (如有必要):如果 query_document 工具表明答案可能在图像中 (通过返回 image_hash 值),代理会调用 query_image 工具。此工具会检索实际图像,并使用 VLM 分析图像并寻找答案。 工具节点 我们为智能体实施了三个关键工具来执行任务。 Find_best_document_id :在未提供 document_id 时,确定与用户问题最相关的文档。有关更多信息,请参阅 /NVIDIA/GenerativeAIExamples Github 存储库。 query_document :在指定文档中搜索答案。如果答案可能在图像中,则会提供查询图像所需的详细信息。有关更多信息,请参阅 /NVIDIA/GenerativeAIExamples GitHub 存储库。 query_image :当答案可能在图像内容中时,使用 VLM 分析实际图像。有关更多信息,请参阅/ NVIDIA/GenerativeAIExamples 。 将外部工具与模型绑定 工具调用是一项功能,可让语言模型根据收到的提示集成外部工具或函数并与之交互。此机制使模型能够决定使用哪些工具以及如何使用这些工具来完成特定任务。 工具绑定使模型能够动态扩展其功能,在执行期间选择合适的工具,以提供更准确的上下文感知响应。 绑定外部工具在代理框架中尤为重要,在这种框架中,代理必须选择合适的工具并提供有效执行任务所需的参数。绑定外部工具的优势包括: 扩展功能 :模型可以执行计算、数据检索或 API 调用等复杂操作,而不仅仅是文本生成。 动态工具选择 :模型可以实时评估哪些工具最适合任务,从而提高效率和相关性。 无缝集成:NVIDIA NIM 支持将 LangChain 和 LangGraph 等外部工具与 Llama 3.3 等开放式社区模型集成。您可以采用这些高级功能,而无需对现有系统进行重大更改。 在此实现中,使用 LangChain 的 @tool 装饰器创建三个工具,然后使用 .bind_tools 
方法将这些工具与模型绑定。 使用 PyTorch 定义结构化输出 通过使用 Pydantic 定义输出模式,并通过精确的提示引导 LLM NIM 微服务 (例如 mistral-small-24b-instruct) ,您可以确保响应一致、可靠,并且易于被系统中的其他组件使用。当将 LLM 集成到自动化工作流和基于代理的框架 (例如 LangChain) 时,这种方法至关重要。 定义结构 首先,使用 Pydantic 定义 LLM 的预期输出结构。这可确保模型返回的数据保持一致,并可轻松解析以进行下游处理。 from typing import List, Optional from pydantic import BaseModel, Field class Document(BaseModel): """ Represents a document with an identifier and its summary. """ id: str = Field(..., description="Hash identifier of the document") summary: str = Field(..., description="The summary of the document as is") class BestDocuments(BaseModel): """ Contains a list of the best documents to answer the question and their summaries. """ documents: List[Document] = Field(..., description="List of best documents") class Answer(BaseModel): """ Represents the answer to the user's question. """ answer: str = Field(..., description="Answer to the question posed by the user") 接下来,指示 LLM 生成与定义的 Pydantic 结构保持一致的输出。这是通过在提示符中加入特定指令并使用 LangChain 的 with_structured_output 方法实现的。 定义提示 prompt_document_expert 包含 LLM 的详细说明,可指定预期的输入格式 (带有文档摘要的 Markdown) 和所需的输出格式 (与 BestDocuments 架构匹配的 JSON)。 from langchain.chat_models import ChatNVIDIA from langchain.prompts import ChatPromptTemplate # Initialize the LLM with desired parameters llm = ChatNVIDIA(model="mistralai/mistral-small-24b-instruct ", temperature=0, max_tokens=3000) # Define the prompt template for the document expert prompt_document_expert = ChatPromptTemplate.from_messages( [ ( "system", f""" # Extract Best Document Identifier from list of summaries, based on a question coming from the user. You are an expert in getting insights of a document, based on its summaries and you are able to figure the best matches to the question in terms of the summary of the document. Provide no more than 3 of these documents. ## Format of the Input - The input is a markdown file containing second level headers (##) with the chapter index in the form ## Document <document_id> where document_id is an integer pointing to the index of the document. After the document heading there is the summary of the document which is relevant to understand the content of the document. ## Format of the output - The output is going to be the list of the best documents indices and a few of the corresponding summaries that help to answer the question coming from the user. 
## Content - Here is the input you can work on: {{documents_context}} """, ), ( "human", "Can you tell me what are the most relevant document ids for this question: {question}" ), ("human", "Tip: Make sure to answer in the correct format"), ] ) 准备上下文 get_context 函数通过检索文档摘要并对其进行适当格式化来准备输入数据。 def get_context(input_data: dict) -> dict: collection_name = input_data.get("collection_name") question = input_data.get("question") documents_context = get_document_summaries_markdown(collection_name) # print(context) return {"documents_context": documents_context, "collection_name": collection_name, "question": question} 绑定结构化输出 llm.with_structured_output(BestDocuments) 方法指示 LLM 生成符合 BestDocuments Pydantic 模型的输出。此方法在内部处理 LLM 响应的解析和验证,确保输出与预期结构相匹配。 LangChain 的 with_structured_output 方法简化了绑定模型以生成结构化输出的过程。它抽象化了解析和验证 LLM 响应的复杂性,使您能够专注于定义所需的输出结构和提示指令。 最后,创建一个链来处理输入并生成结构化输出: chain_document_expert = ( RunnableLambda(get_context) | prompt_document_expert | llm.with_structured_output(BestDocuments) | (lambda x: x.dict()) ) 端到端工具的实际应用 要开始使用多模态检索系统,请克隆 /NVIDIA/GenerativeAIExamples GitHub 存储库,然后按照快速入门指南设置服务。在服务启动并运行时,打开 Web 浏览器并导航至 http://localhost:7860 ,通过 Gradio 用户界面访问系统。 例如,在 NVIDIA 技术博客上探索系统如何处理查询。在其中一篇博文中,您可以询问有关显示 NVIDIA H100 GPU 性能的条形图的问题。“ Select Question ” 字段用于评估,真值答案字段值由人类提供。 图 4、Agent 多文档评估 该系统会根据条形图生成准确的答案,并显示相关图像以供参考,例如图表显示 RetinaNet 达到了 54%。这可确保准确的答案,同时使用户能够以直观方式验证引用数据。 图 5、Agent 结果与用于验证的源图形 视频1. 如何使用 NVIDIA NIM 将 HTML 文档插入多模态检索器集合 视频2. 如何使用 NVIDIA NIM 在多模态检索器集合中搜索文本和图像 挑战和解决方案 随着数据量的增加,处理和检索相关信息的复杂性也随之增加。高效处理大型数据集对于保持性能和确保用户满意度至关重要。在此信息检索系统中,文档摘要的数量甚至可能超过长上下文模型的上下文窗口,这使得在单个提示中处理所有摘要具有挑战性。 处理大量数据还需要大量计算资源,这可能会导致成本增加和延迟增加。优化资源利用率对于提供快速准确的响应,同时最大限度地减少不必要的支出至关重要。 分层文档重新排序解决方案 为应对可扩展性挑战,我们在初始文档重新排序阶段实施了分层方法。我们不会同时处理所有文档摘要,而是将其分为可管理的批量,以适应模型的上下文窗口。此过程涉及多个阶段: 批量处理 :将摘要分组为模型可以处理的批量,且不会超过提示大小限制。 中级重新排序 :模型分别评估每个批次,对每个组中的文档进行排序。 选择最优秀的候选文档 :从每个批次中选择最相关的文档,以进入下一阶段。 最终重新排名 :系统会对所有批次中排名靠前的候选文档进行合并和重新评估,以确定相关性最高的文档。 考虑到可扩展性和成本问题,这种分层方法可确保在不超出模型容量的情况下考虑所有文档。它不仅提高了可扩展性,而且还通过系统缩小候选文档的范围来提高效率,直到识别出最相关的文档。 小型模型的未来前景 使用语言模型,尤其是具有长上下文功能的语言模型,涉及处理大量 token,而这可能会产生巨大的成本。处理的每个 token 都会增加总支出,因此在大规模部署这些系统时,成本管理是一个重要考虑因素。 对成本的担心确实是站得住脚的。然而,语言模型的格局正在迅速演变,小型模型的功能和效率也在不断提升。随着这些进步的继续,这些较小的模型可能以远低于成本提供相似的性能。 结束语 本文讨论了如何使用 NVIDIA NIM 和 LangChain 实现简单的多模态信息检索工作流。与现有的信息检索方法相比,Pipeline 具有以下优势: 增强对文档的理解 用于从图像、表格和文本中提取信息的多模态模型 无缝集成外部工具 生成一致的结构化输出 借助 NVIDIA NIM 和 LangGraph,您可以在此基础上进行构建并对其进行定制,以满足特定需求。首先,您可以在 /NVIDIA/GenerativeAIExamples GitHub repo 中找到源代码。 NVIDIA NIM 还支持访问更多针对 NVIDIA GPU 优化的模型。您可以探索 NVIDIA NeMo ,这是一个可扩展的生成式 AI 框架,专为研究 LLM、多模态模型等的研究人员和 PyTorch 开发者而设计。 如果您正在处理大型企业数据语料库,并希望开发企业就绪的实时多语种和跨语言信息检索系统来生成上下文感知响应,请详细了解 NVIDIA NeMo Retriever 。
https://developer.nvidia.com/blog/optimizing-qwen2-5-coder-throughput-with-nvidia-tensorrt-llm-lookahead-decoding/
Optimizing Qwen2.5-Coder Throughput with NVIDIA TensorRT-LLM Lookahead Decoding
Large language models (LLMs) that specialize in coding have been steadily adopted into developer workflows. From pair programming to self-improving AI agents , these models assist developers with various tasks, including enhancing code, fixing bugs, generating tests, and writing documentation. To promote the development of open-source LLMs, the Qwen team recently released Qwen2.5-Coder, a family of advanced LLMs for code generation, reasoning, and fixing across popular programming languages. This post explores the benefits of inference optimizations for Qwen2.5-Coder models supported in NVIDIA TensorRT-LLM , and the ease of deployment with NVIDIA NIM for transformative potential and coding efficiency. Qwen2.5-Coder models The Qwen2.5-Coder models have achieved state-of-the-art performance across popular academic benchmarks. NVIDIA TensorRT-LLM has optimized three popular models from the Qwen2.5-Coder family—the 1.5B, 7B, and 32B versions—for high throughput and low latency. TensorRT-LLM is a library for fast, efficient LLM inference and includes optimizations such as dynamic inflight batching , KV caching , KV cache reuse , and several speculative decoding techniques, among others. These optimizations help deliver performance improvements for the Qwen2.5-Coder models on popular programming languages such as Python, C++, Java, Bash, Javascript, TypeScript, and Go, reaching a wider range of developers. This post explores the lookahead decoding optimization and the performance boost it helps achieve. Without any additional training or need for additional draft models, developers can leverage the TensorRT-LLM high-level API to speed up Qwen2.5-Coder inference to generate multiline autocode completion. Lookahead decoding Lookahead decoding is a speculative decoding technique that addresses the slow autoregressive nature of LLMs. Each autoregressive decoding step only generates one token at a time, not leveraging the massive parallel processing power of NVIDIA GPUs, leading to low GPU utilization and lower throughput. We’ve previously discussed the throughput boost achievable with draft target speculative decoding , and here we discuss the benefits of leveraging TensorRT-LLM lookahead decoding implementation using the Qwen2.5-Coder models as an example. Unlike the single-token generation in autoregressive decoding, lookahead decoding generates multiple tokens simultaneously, adequately utilizing the parallel processing capabilities of the GPU, leveraging computation (FLOPs) for latency reduction. Moreover, lookahead decoding doesn’t require a separate draft model that’s needed for draft target speculative decoding. Each decoding step is divided into two parallel branches, the lookahead branch and the verification branch. Using the Jacobi iteration method , a classic nonlinear systems solver, the lookhead branch performs parallel decoding for future tokens by generating n-grams. The verification branch selects and verifies the promising n-gram candidates generated by the lookahead branch. The lookahead algorithm is configured using three key parameters: window size (W), n-gram size (N), and verification set size (G). Window size (W): Represents the lookahead window size, which determines how many future tokens the algorithm attempts to predict in each step. Larger window size enables the model to look further, helping generate more tokens in a single pass. This effectively improves throughput performance while utilizing GPU computation FLOPs efficiently. 
N-gram size (N): Represents the size of the n-grams used in the lookahead process. For example, a 5-gram is a contiguous sequence of 5 future tokens. Together with the window size, it creates a fixed-sized, 2D window for the lookahead branch to generate n-grams from the Jacobi iteration trajectory. Verification set size (G): Represents the maximum number of speculations or candidate n-grams that the algorithm considers in each step for verification. It balances the trade-off between computation efficiency and exploring more possibilities. Figure 1. Lookahead decoding workflow with (W, N, G) = (5, 3, 2). Image credit: Break the Sequential Dependency of LLM Inference Using Lookahead Decoding Lookahead performance greatly depends on the base model, hardware, batch size, sequence length, and the dataset. It is recommended to profile various configurations to find the best (W, N, G) configuration given the setup. Optimal (W, N, G) tuple configuration enables lookahead decoding to deliver improved throughput performance without the need for any additional training, fine-tuning or draft models. Through our experiments on (W, N, G) configuration values sweep, we achieve 3.6x and 1.6x throughput speedups for Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models, respectively. These speedups are measured in throughput (tokens/second) compared to baseline (no lookahead speculative decoding) on NVIDIA H100 Tensor Core GPUs , as shown in Figure 2. Figure 2. Qwen2.5-Coder models throughput boost on NVIDIA DGX H100 with TensorRT-LLM lookahead decoding Data measured on 01/30/2025. Inference throughput (output tokens/second) speedups of Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models. DGX H100, TP=1 | (W, N, G) = (8, 8, 8) | Qwen2.5-Coder 7B Instruct, TP=2 | (W, N, G) = (15, 15, 15) | Qwen2.5-Coder-32B-Instruct, batch size=1, TensorRT-LLM version 0.15.0​. Similar throughput speedups are achieved on NVIDIA H200 Tensor Core GPUs . With their higher memory bandwidth, they also help raise the baseline throughput performance leading to slightly lower speedups as compared to H100 GPUs (Figure 3). Figure 3. Qwen2.5-Coder models throughput boost on NVIDIA DGX H200 with TensorRT-LLM lookahead decoding Data measured on 01/30/2025. Inference throughput (output tokens/second) speedups of Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models. DGX H200, TP=1 | (W, N, G) = (8, 8, 8) | Qwen2.5-Coder 7B Instruct, TP=2 | (W, N, G) = (15, 15, 15) | Qwen2.5-Coder 32B Instruct, batch size=1, TensorRT-LLM version 0.15.0​. Steps to run lookahead decoding with TensorRT-LLM To reproduce these performance gains using lookahead speculative decoding within TensorRT-LLM, follow the steps below. # Install TensorRT-LLM. (Commands below are for Linux. Refer to TensorRT-LLM docs for Windows) sudo apt-get -y install libopenmpi-dev && pip3 install --upgrade setuptools && pip3 install tensorrt_llm --extra-index-url https://pypi.nvidia.com Then run lookahead decoding in TensorRT-LLM using the high-level API. # Command for Qwen2.5-Coder-7B-Instruct from tensorrt_llm import LLM, SamplingParams from tensorrt_llm.llmapi import (LLM, BuildConfig, KvCacheConfig, LookaheadDecodingConfig, SamplingParams) def main(): """The end user can customize the build configuration with the build_config class. # Max draft length is based on (W,N,G) values and calculated as: (W + G -1) * (N-1) + ( N<=1 ? 
0: N-2)""" build_config = BuildConfig(max_batch_size = 128, max_input_len = 2048, max_seq_len = 4096, max_num_tokens = 16384, max_draft_len = 111) build_config.plugin_config.reduce_fusion = True build_config.plugin_config.use_paged_context_fmha = True build_config.plugin_config.multiple_profiles = True # The configuration for lookahead decoding lookahead_config = LookaheadDecodingConfig(max_window_size=8, max_ngram_size=8, max_verification_set_size=8) kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.4) llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct", kv_cache_config=kv_cache_config, build_config=build_config, speculative_config=lookahead_config) prompt = """Write a C++ program to find the nth Fibonacci number using recursion. Now we define a sequence of numbers in which each number is the sum of the three preceding ones. The first three numbers are 0, -1, -1. Write a program to find the nth number.""" sampling_params = SamplingParams(lookahead_config=lookahead_config) output = llm.generate(prompt, sampling_params=sampling_params) print(output) if __name__ == '__main__': main() Summary Lookahead speculative decoding enables throughput boost on LLMs without any additional training, fine-tuning, or draft models. We presented benchmarked performance improvements on Qwen2.5-Coder models. Visit build.nvidia.com to try the Qwen2.5-Coder models optimized with NVIDIA TensorRT-LLM for free. Qwen2.5-Coder models optimized with TensorRT-LLM have also been packaged as downloadable NVIDIA NIM microservices for ease of deployment. Acknowledgments We would like to thank Liwei Ma, Fanrong Li, Nikita Korobov, and Martin Marciniszyn Mehringer  for their efforts in supporting this post.
https://developer.nvidia.com/zh-cn/blog/optimizing-qwen2-5-coder-throughput-with-nvidia-tensorrt-llm-lookahead-decoding/
使用 NVIDIA TensorRT-LLM 前瞻性解码优化 Qwen2.5-Coder 吞吐量
专注于编码的 大语言模型(LLMs) 已稳步应用于开发者工作流程。从配对编程到自我改进的 AI 智能体 ,这些模型可帮助开发者完成各种任务,包括增强代码、修复错误、生成测试和编写文档。 为促进开源 LLM 的开发,Qwen 团队最近发布了 Qwen2.5-Coder,这是一系列先进的 LLM,用于跨热门编程语言的代码生成、推理和修复。本文将探讨针对 NVIDIA TensorRT-LLM 支持 的 Qwen2.5-Coder 模型进行推理优化的优势,以及借助 NVIDIA NIM 轻松部署以提升变革潜力和编码效率的好处。 Qwen2.5-Coder 模型 Qwen2.5-Coder 模型在热门的学术基准测试中取得了出色的性能。 NVIDIA TensorRT-LLM 已对 Qwen2.5-Coder 系列的三种热门模型 (1.5B、7B 和 32B 版本) 进行优化,以实现高吞吐量和低延迟。TensorRT-LLM 是一个用于快速、高效 LLM 推理的库,包含动态机上 批处理 、 KV 缓存 、 KV 缓存重复使用 和几种预测性解码技术等优化功能。 这些优化有助于提高 Qwen2.5-Coder 模型在 Python、C++、Java、Bash、Javascript、TypeScript 和 Go 等热门编程语言中的性能,从而使更多开发者受益。本文将探讨 lookahead decoding 优化的前瞻性及其有助于实现的性能提升。开发者无需进行任何额外训练,也无需额外的草图模型,即可利用 TensorRT-LLM 高级 API 加速 Qwen2.5-Coder 推理,以生成多行自动代码完成。 解码前景展望 解码前瞻是一种预测性解码技术,可解决 LLMs 缓慢自回归的问题。每个自回归解码步骤一次仅生成一个 token,无法利用 NVIDIA GPUs 强大的并行处理能力,导致 GPU 利用率低、吞吐量低。我们之前讨论过通过草稿目标预测解码可以实现的吞吐量提升,在这里,我们讨论了以 Qwen2.5-Coder 模型为例,利用 TensorRT-LLM lookahead decoding 实现的优势。 与自回归解码中的单令牌生成不同,前瞻性解码可同时生成多个令牌,充分利用 GPU 的并行处理能力,利用计算(FLOPs)降低延迟。此外,对于草稿目标预测性解码,前瞻性解码不需要使用单独的草稿模型。 每个解码步骤分为两个并行分支,即 lookahead 分支和验证分支。通过使用经典的非线性系统求解器 Jacobi 迭代法 ,lookahead 分支通过生成 n-grams 来对未来的 tokens 执行并行解码。验证分支选择并验证由 lookahead 分支生成的有前景的 n-gram 候选项。 前瞻性算法使用三个关键参数进行配置:窗口大小(W),n-gram 大小(N)和验证集大小(G)。 窗口大小 (W):表示前瞻性窗口大小,它决定了算法在每个步骤中尝试预测的未来令牌数量。窗口大小越大,模型的视野越广,一次传递就能生成更多 token。这可有效提高吞吐量性能,同时高效利用 GPU 计算 FLOPs。 N-gram size (N):表示前瞻性流程中使用的 N – gram 的大小。例如,5-gram 是由 5 个未来令牌组成的连续序列。它与窗口大小一起为前瞻性分支创建了一个大小固定的 2D 窗口,以便从 Jacobi 迭代轨迹生成 n-gram。 验证集大小 (G):表示算法在每个验证步骤中考虑的推测或候选 n-gram 的最大数量。它平衡了计算效率与探索更多可能性之间的权衡。 图 1、使用 (W,N,G) = (5,3,2) 展望解码工作流程。图片来源: Break the Sequential Dependency of LLM Inference Using Lookahead Decoding 未来的性能很大程度上取决于基础模型、硬件、批量大小、序列长度和数据集。建议分析各种配置,以找到给定设置的最佳 (W,N,G) 配置。最佳 (W,N,G) 元组配置支持 lookahead 解码前瞻性,无需任何其他训练、fine-tuning 或 draft 模型,即可提供更高的吞吐量性能。 通过对 (W,N,G) 配置值扫描的实验,我们分别为 Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型实现了 3.6 倍和 1.6 倍的吞吐量加速。这些加速是通过 NVIDIA H100 Tensor Core GPUs 上的吞吐量 (tokens/second) 与基线 (无 lookahead speculative decoding) 的比较进行测量的,如 Figure 2 所示。 图 2、借助 TensorRT-LLM 超前解码,Qwen2.5-Coder 模型可提升 NVIDIA DGX H100 上的吞吐量 数据测量日期:2025 年 1 月 30 日。Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型的推理吞吐量(输出令牌/秒)加速。DGX H100,TP=1 | (W,N,G)= (8,8,8)| Qwen2.5-Coder 7B Instruct,TP=2 | (W,N,G)= (15,15,15)| Qwen2.5-Coder-32B-Instruct,批量大小=1,TensorRT-LLM 版本 0.15.0。 NVIDIA H200 Tensor Core GPU 也实现了类似的吞吐量加速。凭借更高的显存带宽,它们还有助于提高基准吞吐量性能,从而使速度略低于 H100 GPU (图 3)。 图 3、Qwen2.5-Coder 模型在 NVIDIA DGX H200 上通过 TensorRT-LLM 超前解码实现吞吐量提升 数据测量日期:2025 年 1 月 30 日。Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型的推理吞吐量(输出令牌/秒)加速。DGX H200,TP=1 | (W,N,G)= (8,8,8)| Qwen2.5-Coder 7B Instruct,TP=2 | (W,N,G)= (15,15,15)| Qwen2.5-Coder 32B Instruct,批量大小=1,TensorRT-LLM 版本 0.15.0。 使用 TensorRT-LLM 进行解码的前瞻性运行步骤 要在 TensorRT-LLM 中使用预测性解码重现这些性能提升,请执行以下步骤。 # Install TensorRT-LLM. (Commands below are for Linux. Refer to TensorRT-LLM docs for Windows) sudo apt-get -y install libopenmpi-dev && pip3 install --upgrade setuptools && pip3 install tensorrt_llm --extra-index-url https://pypi.nvidia.com 然后,使用高级 API 在 TensorRT-LLM 中运行 lookahead decoding。 # Command for Qwen2.5-Coder-7B-Instruct from tensorrt_llm import LLM, SamplingParams from tensorrt_llm.llmapi import (LLM, BuildConfig, KvCacheConfig, LookaheadDecodingConfig, SamplingParams) def main(): """The end user can customize the build configuration with the build_config class. # Max draft length is based on (W,N,G) values and calculated as: (W + G -1) * (N-1) + ( N<=1 ? 
0: N-2)""" build_config = BuildConfig(max_batch_size = 128, max_input_len = 2048, max_seq_len = 4096, max_num_tokens = 16384, max_draft_len = 111) build_config.plugin_config.reduce_fusion = True build_config.plugin_config.use_paged_context_fmha = True build_config.plugin_config.multiple_profiles = True # The configuration for lookahead decoding lookahead_config = LookaheadDecodingConfig(max_window_size=8, max_ngram_size=8, max_verification_set_size=8) kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.4) llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct", kv_cache_config=kv_cache_config, build_config=build_config, speculative_config=lookahead_config) prompt = """Write a C++ program to find the nth Fibonacci number using recursion. Now we define a sequence of numbers in which each number is the sum of the three preceding ones. The first three numbers are 0, -1, -1. Write a program to find the nth number.""" sampling_params = SamplingParams(lookahead_config=lookahead_config) output = llm.generate(prompt, sampling_params=sampling_params) print(output) if __name__ == '__main__': main() 总结 前瞻性预测解码可提高 LLMs 的吞吐量,而无需任何其他训练、微调或草稿模型。我们展示了 Qwen2.5-Coder 模型的基准性能改进。 访问 build.nvidia.com,免费试用通过 NVIDIA TensorRT-LLM 优化的 Qwen2.5-Coder 模型。 为便于部署, 我们还将通过 TensorRT-LLM 优化的 Qwen2.5-Coder 模型打包为可下载的 NVIDIA NIM 微服务。 致谢 在此, 我们要感谢马立伟、李凡融、Nikita Korobov 和 Martin Marciniszyn Mehringer 为支持这篇博文所付出的努力 。
https://developer.nvidia.com/blog/optimize-ai-inference-performance-with-nvidia-full-stack-solutions/
Optimize AI Inference Performance with NVIDIA Full-Stack Solutions
The explosion of AI-driven applications has placed unprecedented demands on both developers, who must balance delivering cutting-edge performance with managing operational complexity and cost, and AI infrastructure. NVIDIA is empowering developers with full-stack innovations—spanning chips, systems, and software—that redefine what’s possible in AI inference , making it faster, more efficient, and more scalable than ever before. Easily deploy high-throughput, low-latency inference Six years ago, NVIDIA set out to create an AI inference server specifically designed for developers building high-throughput, latency-critical production applications. At the time, many developers were grappling with custom, framework-specific servers that increased complexity, drove up operational costs, and struggled to meet stringent service-level agreements for latency and throughput. To address this, NVIDIA developed the NVIDIA Triton Inference Server , an open-source platform capable of serving models from any AI framework. By consolidating framework-specific inference servers, Triton streamlined AI inference deployment and increased AI prediction capacity. This approach has made Triton one of the most widely adopted NVIDIA open-source projects , now used by hundreds of leading organizations to deploy production AI models efficiently. In addition to Triton, NVIDIA offers a broad ecosystem of AI inference solutions. For developers seeking powerful, customizable tools, NVIDIA TensorRT provides a high-performance deep learning inference library with APIs that enable fine-grained optimizations. NVIDIA NIM microservices provide a flexible framework for deploying AI models across the cloud, data centers, or workstations. Optimizations for AI inference workloads Inference is a full-stack problem today, requiring high-performance infrastructure and efficient software to make effective use of that infrastructure. In addition, inference workloads continue to become more challenging, as model sizes continue to grow and latency constraints tighten, all while the number of users leveraging these AI services also continues to increase. And with the introduction of inference time scaling, a new paradigm for scaling model intelligence, more compute is being applied during inference to enhance model performance. These trends mean that it’s important to continue advancing delivered inference performance, even on the same underlying hardware platform. By combining established methods like model parallelism, mixed-precision training, pruning, quantization, and data preprocessing optimization with cutting-edge advancements in inference technologies, developers can achieve remarkable gains in speed, scalability, and cost-effectiveness. The TensorRT-LLM library incorporates many state-of-the-art features that accelerate inference performance for large language models (LLMs) , which are outlined below. Prefill and KV cache optimizations Key-value (KV) cache early reuse : By reusing system prompts across users, the KV Cache Early Reuse feature accelerates time-to-first-token (TTFT) by up to 5x. Flexible KV block sizing and efficient eviction protocols ensure seamless memory management, enabling faster response times even in multi-user environments. Chunked prefill : For smarter deployment, chunked prefill divides the prefill phase into smaller tasks, enhancing GPU utilization and reducing latency. This innovation simplifies deployment and ensures consistent performance, even with fluctuating user demands. 
Supercharging multiturn interactions : The NVIDIA GH200 Superchip architecture enables efficient KV cache offloading, improving TTFT by up to 2x in multiturn interactions with Llama models while maintaining high throughput. Decoding optimization Multiblock attention for long sequences : Addressing the challenge of long input sequences, TensorRT-LLM multiblock attention maximizes GPU utilization by distributing tasks across streaming multiprocessors (SMs). This technique improves system throughput by more than 3x, enabling support for larger context lengths without additional hardware costs. Speculative decoding for accelerated throughput : Leveraging a smaller draft model alongside a larger target model, speculative decoding enables up to a 3.6x improvement in inference throughput. This approach ensures high-speed, high-accuracy generation of model outputs, streamlining workflows for large-scale AI applications. Speculative decoding with Medusa: The Medusa speculative decoding algorithm is available as part of TensorRT-LLM optimizations. By predicting multiple subsequent tokens simultaneously, Medusa boosts throughput for Llama 3.1 models by up to 1.9x on the NVIDIA HGX H200 platform. This innovation enables faster responses for applications that rely on LLMs, such as customer support and content creation. Multi-GPU inference MultiShot communication protocol : Traditional Ring AllReduce operations can become a bottleneck in multi-GPU scenarios. TensorRT-LLM MultiShot, powered by NVSwitch , reduces communication steps to just two, irrespective of GPU count. This innovation boosts AllReduce speeds by up to 3x, making low-latency inference scalable and efficient. Pipeline parallelism for high-concurrency efficiency : Parallelism techniques require that GPUs be able to transfer data quickly and efficiently, necessitating a robust GPU-to-GPU interconnect fabric for maximum performance. Pipeline parallelism on NVIDIA H200 Tensor Core GPUs achieved a 1.5x throughput increase for Llama 3.1 405B and demonstrated their versatility with a 1.2x speedup for Llama 2 70B in MLPerf Inference benchmarks. MLPerf Inference is a suite of industry-standard inference performance benchmarks developed by the MLCommons consortium. Large NVLink domains: The NVIDIA GH200 NVL32 system, powered by 32 NVIDIA GH200 Grace Hopper Superchips connected using the NVLink Switch system, and with TensorRT-LLM improvements, delivers up to 3x faster TTFT for Llama models. With up to 127 petaflops of AI compute, this next-generation architecture sets the stage for unprecedented real-time responsiveness in AI applications. Quantization and lower-precision compute NVIDIA TensorRT Model Optimizer for precision and performance: The NVIDIA custom FP8 quantization recipe in the NVIDIA TensorRT Model Optimizer delivers up to 1.44x higher throughput without sacrificing accuracy. These optimizations enable more cost-effective deployment by reducing latency and hardware requirements for demanding workloads. End-to-end full-stack optimization: NVIDIA TensorRT libraries and FP8 Tensor Core innovations ensure high performance across a wide range of devices, from data center GPUs to edge systems. NVIDIA has optimized the Llama 3.2 collection of models for great performance, demonstrating how full-stack software can adaptively unlock efficiency across diverse AI deployment environments. 
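As one small illustration of how these optimizations surface to developers, the sketch below enables KV cache block reuse through the TensorRT-LLM high-level LLM API (the same API shown elsewhere in this document for lookahead decoding), so that a shared system prompt is not recomputed for every request. The model checkpoint is an arbitrary example, and parameter names reflect recent TensorRT-LLM releases and may differ in your version; treat this as a sketch rather than a definitive recipe.
from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.llmapi import KvCacheConfig

# Enable KV cache block reuse so a shared prefix (for example, a common system
# prompt) can be served from cache instead of being recomputed per request.
kv_cache_config = KvCacheConfig(enable_block_reuse=True, free_gpu_memory_fraction=0.9)

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", kv_cache_config=kv_cache_config)

system_prompt = "You are a concise assistant for a customer-support workflow.\n"
questions = ["How do I reset my password?", "How do I update my billing address?"]

for question in questions:
    output = llm.generate(system_prompt + question, sampling_params=SamplingParams(max_tokens=128))
    print(output)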
With these features, as well as many others within Triton and TensorRT-LLM, developers can now deploy LLMs that are not only faster and more efficient but also capable of handling a wider range of tasks and user demands. This opens new opportunities for businesses to enhance customer service, automate complex processes, and gain deeper insights from their data. Evaluating inference performance Delivering world-class inference performance takes a full technology stack—chips, systems, and software—all contributing to boosting throughput, reducing energy consumption per token, and minimizing costs. One key measure of inference performance is MLPerf Inference. The benchmark measures inference throughput under standardized conditions, with results subject to extensive peer review. The benchmark is regularly updated to reflect new advances in AI, ensuring that organizations can rely on these results to evaluate platform performance. In the latest round of MLPerf Inference, NVIDIA Blackwell made its debut, delivering up to 4x more performance than the NVIDIA H100 Tensor Core GPU on the Llama 2 70B benchmark. This achievement was the result of the many architectural innovations at the heart of the Blackwell GPU, including the second-generation Transformer Engine with FP4 Tensor Cores and ultrafast HBM3e GPU memory that delivers 8 TB/s of memory bandwidth per GPU. In addition, many aspects of the NVIDIA software stack, including NVIDIA TensorRT-LLM, were re-engineered to make use of new capabilities in Blackwell, such as support for FP4 precision, while continuing to meet the rigorous accuracy target of the benchmark. The NVIDIA H200 Tensor Core GPU, available now from server makers and cloud service providers, also achieved outstanding results on every benchmark in the data center category. This includes the newly added Mixtral 8x7B mixture-of-experts (MoE) LLM, as well as the Llama 2 70B LLM and Stable Diffusion XL text-to-image tests. As a result of continued software improvements, the Hopper architecture delivered up to 27% more inference performance compared to the prior round. NVIDIA Triton Inference Server, running on a system with eight H200 GPUs, achieved virtually identical performance to the NVIDIA bare-metal submission on the Llama 2 70B benchmark in MLPerf Inference v4.1. This shows that enterprises no longer need to choose between a feature-rich, production-grade AI inference server and peak throughput performance—both can be achieved simultaneously with NVIDIA Triton. The future of AI inference: Emerging trends and technologies The landscape of AI inference is rapidly evolving, driven by a series of groundbreaking advancements and emerging technologies. Models continue to get smarter, as increases in compute at data center scale enable pretraining larger models. The introduction of sparse mixture-of-experts model architectures, such as GPT-MoE 1.8T, will also help boost model intelligence while improving compute efficiency. These larger models, whether dense or sparse, will require that GPUs individually become much more capable. The NVIDIA Blackwell architecture is set to fuel next-generation generative AI inference. Each Blackwell GPU features a second-generation Transformer Engine and fifth-generation Tensor Cores utilizing FP4. Lower-precision data formats help to increase computational throughput and reduce memory requirements.
To ensure these lower-precision formats can deliver significant performance benefits while maintaining high accuracy, an incredible amount of software craftsmanship is needed. At the same time, to serve the most demanding models at brisk, real-time rates, many of the most capable GPUs will need to work in concert to generate responses. The NVIDIA GB200 NVL72 rack-scale solution creates a 72-GPU NVLink domain that acts as a single massive GPU. For GPT-MoE 1.8T real-time inference, it provides up to a 30x improvement in throughput compared to the prior generation Hopper GPU. In addition, the emergence of a new scaling law—test-time compute—is providing yet another way to improve response quality and accuracy for even more complex tasks. This new paradigm, first introduced with the OpenAI o1 model, enables models to “reason” by generating many intermediate tokens before outputting the final result. Reasoning models are particularly helpful in domains such as complex mathematics and generating computer code. This new paradigm is set to fuel a new wave of breakthroughs requiring more computational performance during inference time. The path to artificial general intelligence will rely on continued breakthroughs in data center compute performance. Pretraining, post-training, and test-time scaling all depend on state-of-the-art infrastructure running expertly crafted software. The NVIDIA platform is evolving rapidly, with a brisk one-year innovation rhythm, to enable the ecosystem to continue pushing the frontiers of AI. Get started Check out How to Get Started with AI Inference, learn more about the NVIDIA AI Inference platform, and stay informed about the latest AI inference performance updates. Watch a demo on how to quickly deploy NVIDIA NIM microservices or read A Simple Guide to Deploying Generative AI with NVIDIA NIM. Optimizations from TensorRT, TensorRT-LLM, and TensorRT Model Optimizer libraries are combined and available through production-ready deployments using NVIDIA NIM microservices.
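For readers following the links above, the sketch below shows one common way to call a deployed NIM microservice: NIM LLM endpoints expose an OpenAI-compatible HTTP API, so a standard OpenAI client can be pointed at the local server. The base URL, port, API key handling, and model name are placeholders that depend on your deployment.

from openai import OpenAI

# A NIM LLM microservice exposes an OpenAI-compatible API; the URL, port,
# and model identifier below are illustrative and depend on how the
# microservice was launched in your environment.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # example model identifier
    messages=[{"role": "user",
               "content": "Summarize speculative decoding in two sentences."}],
    max_tokens=128,
)
print(response.choices[0].message.content)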
https://developer.nvidia.com/zh-cn/blog/optimize-ai-inference-performance-with-nvidia-full-stack-solutions/
借助 NVIDIA 全栈解决方案提升 AI 推理性能
AI 驱动的应用的爆炸式发展对开发者提出了前所未有的要求,他们必须在提供先进的性能与管理运营复杂性和成本以及 AI 基础设施之间取得平衡。 NVIDIA 正在为开发者提供涵盖芯片、系统和软件的全栈创新,重新定义 AI 推理 的可能性,使其比以往更快、更高效、更具可扩展性。 轻松部署高吞吐量、低延迟推理 六年前,NVIDIA 着手打造 AI 推理服务器,专为构建高吞吐量、延迟关键型生产应用的开发者而设计。当时,许多开发者都在努力使用定制的、特定于框架的服务器,这些服务器增加了复杂性,增加了运营成本,并且难以满足严格的服务水平协议(service-level agreements)关于延迟和吞吐量的要求。 为解决这一问题,NVIDIA 开发了 NVIDIA Triton Inference Server ,这是一个开源平台,能够为来自任何 AI 框架的模型提供服务。通过整合特定于框架的推理服务器,Triton 简化了 AI 推理部署,并提高了 AI 预测能力。这种方法使 Triton 成为广泛采用的 NVIDIA 开源项目之一,现已被数百家领先的组织用于高效部署生产级 AI 模型。 除 Triton 外,NVIDIA 还提供广泛的 AI 推理解决方案生态系统。对于寻求功能强大的可定制工具的开发者, NVIDIA TensorRT 提供了一个高性能深度学习推理库,其 API 可实现细粒度优化。 NVIDIA NIM 微服务提供了一个灵活的框架,用于在云端、数据中心或工作站中部署 AI 模型。 针对 AI 推理工作负载进行优化 推理是当今的全栈问题,需要高性能基础架构和高效软件来有效利用该基础架构。此外,随着模型大小不断增长和延迟限制日益严格,推理工作负载的挑战性也越来越高,同时利用这些 AI 服务的用户数量也在不断增加。随着推理时间扩展(一种扩展模型智能的新范式)的引入,推理过程中应用了更多的计算来增强模型性能。 这些趋势意味着,即使在相同的底层硬件平台上,继续提高交付的推理性能也很重要。通过将模型并行、混合精度训练、剪枝、量化和数据预处理优化等成熟方法与推理技术的前沿进步相结合,开发者可以在速度、可扩展性和成本效益方面实现显著提升。 TensorRT-LLM 库包含许多先进功能,可加速 大语言模型(LLMs) 的推理性能,如下所述。 预填充和 KV 缓存优化 键值 (KV) 缓存提早复用 :通过在不同用户中重复使用系统提示,KV 缓存提早复用功能可将首个令牌 (TTFT) 的时间缩短高达 5 倍。灵活的 KV 块大小和高效的驱逐协议可确保无缝管理内存,即使在多用户环境中也能缩短响应时间。 分块预填充 :为实现更智能的部署,分块预填充可将预填充阶段划分为较小的任务,从而提高 GPU 利用率并降低延迟。这项创新可简化部署,并确保一致的性能,即使在用户需求波动的情况下也是如此。 强效助力多圈交互 :NVIDIA GH200 超级芯片架构可实现高效的 KV 缓存卸载,在与 Llama 模型进行多圈交互时,将 TTFT 性能提升高达 2 倍,同时保持高吞吐量。 解码优化 长序列的 多块注意力 :TensorRT-LLM 多块注意力通过在流多处理器 (SM) 中分配任务,更大限度地提高 GPU 利用率,从而解决长输入序列的挑战。此技术可将系统吞吐量提高 3 倍以上,从而在不增加硬件成本的情况下支持更大的上下文长度。 用于加速吞吐量的推理吞吐量:通过利用较小的草稿模型和较大的目标模型,推理吞吐量可将推理吞吐量提升高达 3.6 倍。这种方法可确保高速、高精度地生成模型输出,简化大规模 AI 应用的工作流。 使用 Medusa 进行推理解码 :Medusa 推理解码算法可作为 TensorRT-LLM 优化的一部分提供。通过同时预测多个后续令牌,Medusa 在 NVIDIA HGX H200 平台上将 Llama 3.1 模型的吞吐量提高了 1.9 倍。这项创新可加快客户支持和内容创建等依赖 LLM 的应用的响应速度。 多 GPU 推理 MultiShot 通信协议 :传统的 Ring AllReduce 操作可能会成为多 GPU 场景中的瓶颈。TensorRT-LLM MultiShot 由 NVSwitch 提供支持,无论 GPU 数量如何,都可以将通信步骤减少到两个。这项创新将 AllReduce 速度提升高达 3 倍,使低延迟推理具有可扩展性并十分高效。 实现高并发效率的工作流并行:并行技术要求 GPU 能够快速高效地传输数据,因此需要强大的 GPU 到 GPU 互连结构来实现出色性能。 NVIDIA H200 Tensor Core GPU 上的工作流并行将 Llama 3.1 405B 的吞吐量提高了 1.5 倍,并在 MLPerf Inference 基准测试中证明了其通用性,将 Llama 2 70B 的速度提高了 1.2 倍。MLPerf Inference 是一套行业标准推理性能基准测试,由 MLCommons 联盟开发。 大型 NVLink 域 :NVIDIA GH200 NVL32 系统由通过 NVLink Switch 系统连接的 32 个 NVIDIA GH200 Grace Hopper 超级芯片提供支持,并进行了 TensorRT-LLM 改进,可为 Llama 模型提供高达 3 倍的 TTFT 速度。凭借高达 127 Petaflops 的 AI 计算能力,此新一代架构为 AI 应用实现出色的实时响应速度奠定了基础。 量化和低精度计算 用于提高精度和性能的 NVIDIA TensorRT 模型优化器 :NVIDIA TensorRT 模型优化器中的 NVIDIA 定制 FP8 量化方法可在不牺牲准确性的情况下将吞吐量提高 1.44 倍。这些优化可降低高要求工作负载的延迟和硬件需求,从而实现更具成本效益的部署。 端到端全栈优化 :NVIDIA TensorRT 库和 FP8 Tensor Core 创新技术可确保从数据中心 GPU 到边缘系统等各种设备实现高性能。NVIDIA 优化了 Llama 3.2 模型集合,以实现出色性能,展示了全栈软件如何在不同的 AI 部署环境中灵活释放效率。 借助这些功能以及 Triton 和 TensorRT-LLM 中的许多其他功能,开发者现在可以部署更快速、更高效的 LLM,并且能够处理更广泛的任务和用户需求。这为企业增强客户服务、实现复杂流程自动化以及从数据中获得更深入见解带来了新机遇。 评估推理性能 实现出色的推理性能需要完整的技术堆栈(芯片、系统和软件),所有这些都有助于提高吞吐量、降低每个令牌的能耗并更大限度地降低成本。 MLPerf Inference 是衡量推理性能的一个关键指标。该基准测试用于测量标准化条件下的推理吞吐量,并对结果进行广泛的同行评审。基准测试会定期更新,以反映 AI 领域的新进展,确保企业组织可以依靠这些结果来评估平台性能。 在最新一轮 MLPerf Inference 中, NVIDIA Blackwell 首次亮相 ,在 Llama 2 70B 基准测试中,其性能比 NVIDIA H100 Tensor Core GPU 高 4 倍。这一成就得益于 Blackwell GPU 核心的众多架构创新,包括采用 FP4 Tensor Cores 的第二代 Transformer Engine 和可为每个 GPU 提供 8 TB/s 的 HBM3e GPU 内存带宽。 此外,对 NVIDIA 软件堆栈的许多方面 (包括 NVIDIA TensorRT-LLM) 进行了重新设计,以利用 Blackwell 中的新功能 (例如对 FP4 精度的支持),同时继续满足基准测试的严格准确性目标。 服务器制造商和云服务提供商现已推出的 NVIDIA H200 Tensor Core GPU 在数据中心类别的每项基准测试中都取得了出色的成绩。其中包括新增的 Mixtral 8x7B 多专家模型 (MoE) LLM,以及 Llama 2 70B LLM 和 Stable Diffusion XL 文本转图像测试。得益于软件的持续改进,Hopper 架构可提供高达 27% 的推理性能。 与 MLPerf Inference v4.1 中 Llama 2 70B 基准测试中的 NVIDIA 裸机提交相比 ,在配备 8 个 H200 GPU 的系统上运行的 NVIDIA Triton Inference Server 
实现了几乎相同的性能。这表明企业不再需要在功能丰富的生产级 AI 推理服务器和峰值吞吐量性能之间做出选择,而 NVIDIA Triton 可以同时实现这两种性能。 AI 推理的未来:新兴趋势和技术 在一系列突破性进展和新兴技术的推动下,AI 推理的格局正在迅速发展。随着数据中心规模的计算能力增加,模型将继续变得更加智能。引入稀疏的多专家模型架构 (例如 GPT-MoE 1.8T) 也将有助于提高模型智能,同时提高计算效率。这些更大型的模型,无论是密集模型还是稀疏模型,都需要 GPU 单独变得更加强大。NVIDIA Blackwell 架构将为新一代生成式 AI 推理提供动力支持。 每个 Blackwell GPU 均配备第二代 Transformer Engine 和第五代 Tensor Cores,利用 FP4。低精度数据格式有助于提高计算吞吐量并降低内存需求。为了确保它们能够在保持高精度的同时提供显著的性能优势,我们需要大量的软件技术。 与此同时,为了以快速、实时的速率为要求严苛的模型提供服务,许多功能非常强大的 GPU 需要协同工作以生成响应。 NVIDIA GB200 NVL72 机架级解决方案创建了一个 72-GPU NVLink 域,可充当单个大型 GPU。对于 GPT-MoE 1.8T 实时推理,与上一代 Hopper GPU 相比,其吞吐量提高了 30 倍。 此外,新的扩展定律(测试时计算) 的出现为提高更复杂任务的响应质量和准确性提供了另一种方法。这种新范式首先在 OpenAI o1 模型中引入,使模型能够在输出最终结果之前通过生成许多中间令牌来“推理”。推理模型在复杂数学和生成计算机代码等领域尤为有用。这种新范式将起新一轮突破浪潮,需要在推理期间实现更高的计算性能。 通往人工通用智能的道路将依赖于数据中心计算性能的持续突破。预训练、后训练和测试时扩展都依赖于运行专家精心编写的软件的最先进的基础架构。NVIDIA 平台发展迅速,一年内创新节奏轻快,使生态系统能够继续推动人工智能的前沿发展。 开始使用 查看如何开始使用 AI 推理 ,了解更多关于 NVIDIA AI 推理平台 的信息,并随时了解 最新的 AI 推理性能更新 。 观看演示,了解如何快速部署 NVIDIA NIM 微服务,或阅读《使用 NVIDIA NIM 部署生成式 AI 的简单指南》。TensorRT、TensorRT-LLM 和 TensorRT Model Optimizer 库中的优化经过组合,可通过使用 NVIDIA NIM 微服务的生产就绪型部署获得。
https://developer.nvidia.com/blog/nvidia-tensorrt-llm-now-supports-recurrent-drafting-for-optimizing-llm-inference/
NVIDIA TensorRT-LLM Now Supports Recurrent Drafting for Optimizing LLM Inference
Recurrent drafting (referred to as ReDrafter) is a novel speculative decoding technique for large language model (LLM) inference, developed and open-sourced by Apple and now available with NVIDIA TensorRT-LLM . ReDrafter helps developers significantly boost LLM workload performance on NVIDIA GPUs. NVIDIA TensorRT-LLM is a library for optimizing LLM inference. It provides an easy-to-use Python API to define LLMs and build NVIDIA TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. Optimizations include custom attention kernels, inflight batching, paged KV caching, quantization (FP8, INT4 AWQ, INT8 SmoothQuant), and much more. Speculative decoding is a technique that accelerates LLM inference by generating multiple tokens in parallel. It uses smaller “draft” modules to predict future tokens, which are then verified by the main model. This method maintains output quality while significantly reducing response times, especially during low traffic periods, by better utilizing available resources for low-latency inference. ReDrafter employs recurrent neural network (RNN)-based sampling, referred to as drafting , combined with the tree-style attention previously used in techniques like Medusa. It predicts and verifies draft tokens from multiple possible paths for better accuracy and can accept more than one token in each iteration of the decoder. NVIDIA collaborated with Apple to add support for this technique in TensorRT-LLM, making it accessible to the broader developer community. The integration of ReDrafter into TensorRT-LLM expanded its reach, unlocked new optimization potential, and improved on previous methods such as Medusa. For Medusa, path acceptance and token sampling happen in the TensorRT-LLM runtime, which introduces some overhead inside the engine: all possible future paths must be processed without knowing which will be accepted, and most of them are ultimately discarded. To reduce this overhead, ReDrafter requires that tokens be validated and the best path accepted before the future tokens for the next iteration are drafted. TensorRT-LLM has been updated to incorporate the drafting and validation logic inside a single engine, rather than relying on the runtime or separate engines, to further minimize overhead. This approach gives TensorRT-LLM more freedom in kernel selection and scheduling to optimize the network for maximum performance. To better illustrate the ReDrafter improvements, Figure 1 highlights the key differences between its implementation and that of Medusa in TensorRT-LLM. Most of the components related to speculative decoding are implemented in-engine for ReDrafter, which significantly simplifies the runtime changes that ReDrafter requires. Figure 1. Comparison of Medusa (left) and ReDrafter (right) implementations in NVIDIA TensorRT-LLM The following sections delve into some of the changes that help enable ReDrafter in TensorRT-LLM. Inflight-batching compatible engine Inflight-batching (IFB) is a strategy that significantly improves throughput by batching context-phase and generation-phase requests together. Speculative decoding, coupled with IFB, introduces more complexity into the pipeline, because context-phase requests need to be handled differently from generation-phase requests, which require draft token validation. Since ReDrafter moves the validation logic inside the model definition, the engine needs that logic during validation as well.
Similar to the attention plugin, the batch is split into two smaller batches: one for context requests and another for generation requests. Each smaller batch then enters its computational workflow, and at the end they are combined back to a single batch for drafting. Figure 2. ReDrafter’s computational workflow for inflight-batching compatible TensorRT-LLM engine Note that this approach requires that all operators on either path support empty tensors, which could happen if a batch consists of all context requests or all generation requests. This capability adds flexibility to TensorRT-LLM APIs, enabling the definition of more complicated models in the future. Implementing in-engine validation and drafting To validate and draft inside the engine, TensorRT-LLM is updated with support for numerous new operations so that PyTorch code can be easily translated into a definition of the TensorRT-LLM model. The following PyTorch code excerpt is Apple’s PyTorch implementation of ReDrafter . The TensorRT-LLM implementation is almost a straightforward line-by-line mapping of the PyTorch version. PyTorch def unpack( packed_tensor: torch.Tensor, unpacker: torch.Tensor, ) -> torch.Tensor: assert len(packed_tensor.shape) == 3 last_dim_size = packed_tensor.shape[2] batch_size, beam_width, beam_length = unpacker.shape unpacked_data_indices = unpacker.view( batch_size, beam_width * beam_length, 1).expand( -1, -1, last_dim_size ) unpacked_tensor = torch.gather( packed_tensor, 1, unpacked_data_indices).reshape( batch_size, beam_width, beam_length, -1 ) return unpacked_tensor TensorRT-LLM def _unpack_beams( x: Tensor, indices: Tensor, num_beams: int, beam_length: int ) -> Tensor: assert x.rank() == 3 d0 = shape(x, 0, INT_DTYPE_STR) dl = shape(x, -1, INT_DTYPE_STR) indices = view( indices, [-1, num_beams * beam_length, 1], False) res_shape = concat([d0, num_beams, beam_length, dl]) res = view(gather_nd(x, indices), res_shape, False) return res This, of course, is a very simple example. For a more complex example, see the beam search implementation . With the new functionalities added for ReDrafter, it might be possible to improve the Medusa implementation in TensorRT-LLM to further increase its performance. ReDrafter performance in TensorRT-LLM As benchmarked by Apple , ReDrafter with TensorRT-LLM can provide up to 2.7x throughput improvements on NVIDIA H100 GPUs with TP8 over the base LLM. Note that the performance improvement of any speculative decoding technique can be heavily impacted by many factors, including: GPU utilization: Speculative decoding is commonly used for low-traffic scenarios, where GPU resources are typically underutilized due to small batch sizes. Average acceptance rate: The latency of each decoding step is increased since speculative decoding must perform extra computation, where a significant portion of it is ultimately wasted after validation. As a result, to see any performance benefits from speculative decoding, the average acceptance rate must be high enough to pay for that extra latency. This is affected by the number of beams, their lengths, and the quality of the beam search itself (which is impacted by the training data). Task: It is easier to predict future tokens for some tasks (code completion, for example), which leads to a higher acceptance rate, and thus improved performance. 
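To build intuition for how the acceptance rate trades off against the extra per-step work described above, here is a small back-of-the-envelope model. The formula and parameter names are illustrative assumptions, not Apple's or NVIDIA's benchmarking methodology: it simply assumes each successive draft token is accepted with a fixed probability and that drafting plus verification add a fixed fractional cost to every target-model step.

def expected_speedup(acceptance_rate: float,
                     beam_length: int,
                     draft_overhead: float) -> float:
    """Rough speedup estimate for draft-then-verify speculative decoding.

    acceptance_rate: probability that each successive draft token is accepted.
    beam_length:     number of draft tokens proposed per target-model step.
    draft_overhead:  extra cost of drafting + verification per step, as a
                     fraction of one baseline decode step.

    Expected accepted tokens per step follow a truncated geometric series,
    plus the one token the target model always produces itself.
    """
    p = acceptance_rate
    expected_accepted = sum(p ** i for i in range(1, beam_length + 1))
    tokens_per_step = 1.0 + expected_accepted
    cost_per_step = 1.0 + draft_overhead
    return tokens_per_step / cost_per_step

# Example: 80% per-token acceptance, 4 draft tokens, 30% extra cost per step.
print(f"{expected_speedup(0.8, 4, 0.3):.2f}x")   # ~2.6x in this toy setting

In this toy model, a speedup only materializes once the expected number of accepted tokens outweighs the drafting and verification overhead, which mirrors the acceptance-rate caveat listed above.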
Summary This collaboration between NVIDIA and Apple has made TensorRT-LLM more powerful and more flexible, enabling the LLM community to innovate more sophisticated models and easily deploy them with TensorRT-LLM to achieve unparalleled performance on NVIDIA GPUs. These new features open exciting possibilities, and we eagerly anticipate the next generation of advanced models from the community that leverage TensorRT-LLM capabilities, driving further improvements in LLM workloads. Explore NVIDIA TensorRT-LLM to unlock the full potential of your models on NVIDIA GPUs.
https://developer.nvidia.com/zh-cn/blog/nvidia-tensorrt-llm-now-supports-recurrent-drafting-for-optimizing-llm-inference/
NVIDIA TensorRT-LLM 现支持 Recurrent Drafting,实现 LLM 推理优化
Recurrent Drafting (简称 ReDrafter) 是苹果公司为大语言模型 (LLM) 推理开发并开源的一种新型推测解码技术,该技术现在可与 NVIDIA TensorRT-LLM 一起使用。ReDrafter 帮助开发者大幅提升了 NVIDIA GPU 上的 LLM 工作负载性能。 NVIDIA TensorRT-LLM 是一个 LLM 推理优化库,提供了一个易于使用的 Python API 来定义 LLM 和构建 NVIDIA TensorRT 引擎,这些引擎具有顶尖的优化功能,可在 GPU 上高效执行推理。优化功能包括自定义 Attention Kernel、Inflight Batching、Paged KV Caching、量化技术 (FP8、INT4 AWQ、INT8 SmoothQuant) 等。 推测解码 (Speculative decoding) 是一种通过并行生成多个 token 来加速 LLM 推理的技术。它使用较小的“draft”模块预测未来的 token,然后由主模型进行验证。该方法通过更好地利用可用资源实现低延迟推理,在保持输出质量的同时大大缩短了响应时间,尤其是在低流量时段。 ReDrafter 运用基于循环神经网络 (RNN) 的采样 (称为 Drafting ) 并结合之前在 Medusa 等其他技术中使用的树状注意力,预测和验证来自多个可能路径的 draft token 以提高准确性,并在解码器的每次迭代中接受一个以上 token。NVIDIA 与苹果公司合作,在 TensorRT-LLM 中添加了对该技术的支持,使更加广泛的开发者社区能够使用该技术。 ReDrafter 与 TensorRT-LLM 的集成扩大了该技术的覆盖范围,解锁了新的优化潜力,并改进了 Medusa 等先前的方法。Medusa 的路径接受和 token 采样发生在 TensorRT-LLM 运行时,需要在接受路径未知的情况下处理所有可能的未来路径,而且其中大部分路径最终都会被丢弃,这就给引擎内部带来了一些开销。为了减少这种开销,ReDrafter 要求在 drafting 下一次迭代的未来 token 之前,先验证 token 并接受最佳路径。 为了进一步减少开销,TensorRT-LLM 更新后在单个引擎中整合了 drafting 和验证逻辑,不再依赖运行时或单独的引擎。这种方法为 TensorRT-LLM 内核选择和调度提供了更大的自由度,通过优化网络实现了性能的最大化。 为了更好地说明 ReDrafter 的改进,图 1 展示了 TensorRT-LLM 中 ReDrafter 实现与 Medusa 实现的主要区别。大多数与推测解码相关的组件都在 ReDrafter 的引擎内完成,这大大简化了 ReDrafter 所需的运行时更改。 图 1. NVIDIA TensorRT-LLM 中 Medusa(左)和 ReDrafter(右)实现的比较 下面将深入探讨有助于在 TensorRT-LLM 中启用 ReDrafter 的一些变化。 兼容 Inflight-batching 批处理的引擎 Inflight-batching (IFB) 是一种通过批量处理上下文阶段和生成阶段请求,来显著提高吞吐量的策略。鉴于上下文阶段请求与生成阶段请求的处理方式不同(生成阶段请求需要 draft token 验证),因此结合 IFB 的推测解码会给管线带来更大的复杂性。ReDrafter 将验证逻辑移至模型定义内部,因此引擎在验证过程中也需要该逻辑。与注意力插件类似,该批处理被分成两个较小的批处理:一个用于上下文请求,另一个用于生成请求。然后,每个较小的批处理进入计算工作流,最后再合并成一个批处理进行 drafting 流程。 图 2. ReDrafter 兼容 TensorRT-LLM 引擎的 Inflight-batching 批处理计算工作流 请注意,这种方法要求任一路径上的所有运算符都支持空张量。如果一个批处理由所有上下文请求或所有生成请求组成,就可能出现空张量。该功能增加了 TensorRT-LLM API 的灵活性,使未来定义更复杂的模型成为可能。 实现引擎内验证和 Drafting 为了在引擎内进行验证和 draft,TensorRT-LLM 更新时加入了对许多新操作的支持,这样 PyTorch 代码就可以轻松地转化成一个 TensorRT-LLM 模型的定义。 以下 PyTorch 代码摘录是苹果公司的 PyTorch 实现的 ReDrafter 。TensorRT-LLM 实现几乎就是 PyTorch 版本的直接逐行映射。 PyTorch def unpack( packed_tensor: torch.Tensor, unpacker: torch.Tensor, ) -> torch.Tensor: assert len(packed_tensor.shape) == 3 last_dim_size = packed_tensor.shape[2] batch_size, beam_width, beam_length = unpacker.shape unpacked_data_indices = unpacker.view( batch_size, beam_width * beam_length, 1).expand( -1, -1, last_dim_size ) unpacked_tensor = torch.gather( packed_tensor, 1, unpacked_data_indices).reshape( batch_size, beam_width, beam_length, -1 ) return unpacked_tensor TensorRT-LLM def _unpack_beams( x: Tensor, indices: Tensor, num_beams: int, beam_length: int ) -> Tensor: assert x.rank() == 3 d0 = shape(x, 0, INT_DTYPE_STR) dl = shape(x, -1, INT_DTYPE_STR) indices = view( indices, [-1, num_beams * beam_length, 1], False) res_shape = concat([d0, num_beams, beam_length, dl]) res = view(gather_nd(x, indices), res_shape, False) return res 当然,这只是一个非常简单的例子。如要了解更复杂的示例,请参见 束搜索实现 。借助为 ReDrafter 添加的新功能,就可以改进 TensorRT-LLM 中的 Medusa 实现,从而进一步提高其性能。 ReDrafter 在 TensorRT-LLM 中的性能 根据 苹果公司的基准测试 ,在采用 TP8(Tensor Parallelism with 8 GPUs,8 卡 GPU 张量并行) 的 NVIDIA GPU 上使用 TensorRT-LLM 的 ReDrafter 最多可将吞吐量提高至基础 LLM 的 2.7 倍。 请注意,任何推测解码技术的性能提升幅度都会受到诸多因素的大幅影响,包括: GPU 利用率: 推测解码通常用于低流量场景,由于批量较小,GPU 资源的利用率通常较低。 平均接受率: 由于推测解码必须执行额外的计算,而其中很大一部分计算最终会在验证后被浪费,因此每个解码步骤的延迟都会增加。所以要想通过推测解码获得任何性能上的优势,平均接受率必须高到足以弥补增加的延迟。这受到束数量、束长度和束搜索本身质量(受训练数据影响)的影响。 任务: 在某些任务(例如代码完成)中预测未来的 token 更容易,使得接受率更高,性能也会因此而提升。 总结 NVIDIA 与苹果公司的合作让 TensorRT-LLM 变得更加强大和灵活,使 LLM 社区能够创造出更加复杂的模型并通过 TensorRT-LLM 轻松部署,从而在 NVIDIA GPU 上实现无与伦比的性能。这些新特性带来了令人兴奋的可能性,我们热切期待着社区使用 TensorRT-LLM 功能开发出新一代先进模型,进一步改进 LLM 工作负载。 探索 NVIDIA 
TensorRT-LLM ,在 NVIDIA GPU 上充分释放模型潜能。
https://developer.nvidia.com/blog/boost-llama-3-3-70b-inference-throughput-3x-with-nvidia-tensorrt-llm-speculative-decoding/
Boost Llama 3.3 70B Inference Throughput 3x with NVIDIA TensorRT-LLM Speculative Decoding
Meta’s Llama collection of open large language models (LLMs) continues to grow with the recent addition of Llama 3.3 70B , a text-only instruction-tuned model. Llama 3.3 provides enhanced performance relative to the older Llama 3.1 70B model and can even match the capabilities of the larger, more computationally expensive Llama 3.1 405B model on several tasks including math, reasoning, coding, and multilingual support. NVIDIA TensorRT-LLM , a powerful inference engine that delivers state-of-the-art performance on the latest LLMs, incorporates many optimizations to deliver outstanding Llama 3.3 70B inference throughput. These include in-flight batching , KV caching , custom FP8 quantization , speculative decoding, and more for fast, cost-efficient LLM serving. With in-flight batching activated by default as a runtime configuration parameter, TensorRT-LLM supports batching multiple different requests at the same time for higher serving throughput. By interleaving requests in context and generation phases, in-flight batching reduces latency and improves GPU utilization by executing new requests while older requests are still in flight. Finished requests are evicted from the batch, making room for the next set of requests. Caching the key-value elements of previous tokens avoids the expensive recomputation of these tensors in the generation phase for the next set of tokens. The computational savings effectively lead to higher throughput. However, the KV cache grows linearly in size with the number of batched requests and sequence context lengths, leading to higher memory requirements. TensorRT-LLM KV caching addresses these challenges through several optimizations, including support for paged KV cache , quantized KV cache , circular buffer KV cache and KV cache reuse . Each of these optimizations addresses the challenging balance between growing memory size and avoiding unnecessary and expensive recomputation. Speculative decoding is a popular technique for faster and cost-effective LLM inference with built-in verification for the quality of output generation. It is based on the premise that generating multiple sequences of future (draft) tokens is more efficient than processing a single token at a time in autoregressive decoding, an inherently time-consuming process. The target model determines how many of these draft tokens to accept, which is far more efficient than having to generate one token per iteration. TensorRT-LLM supports a growing list of speculative decoding techniques including draft target , Medusa , Eagle , and lookahead decoding , among others. Figure 1. NVIDIA TensorRT optimizations for high-performance deep learning inference In this post, we show how the NVIDIA HGX H200 platform with NVLink and NVSwitch, as well as TensorRT-LLM, achieve great performance when running the latest Llama 3.3 70B model. We describe the step-by-step setup to get speculative decoding working for Llama 3.3 70B with TensorRT-LLM. For more information, including other optimizations, different models, and multi-GPU execution, see the full list of TensorRT-LLM examples . Achieving throughput speedups with draft target speculative decoding Table 1 and Figure 2 highlight the throughput (output tokens/second) speedups of draft models of various sizes versus no draft model (that is, no speculative decoding), using the Llama 3.3 70B target model.
Throughput Performance – Output Tokens/Second One NVIDIA H200 Tensor Core GPU Draft | Target Models Llama 3.2 1B | Llama 3.3 70B Llama 3.2 3B | Llama 3.3 70B Llama 3.1 8B | Llama 3.3 70B Llama 3.3 70B (without draft model) Tokens/sec 181.74 161.53 134.38 51.14 Speedups (with versus without draft models) 3.55x 3.16x 2.63x N/A Table 1. Throughput performance using one NVIDIA H200 Tensor Core GPU with TensorRT-LLM internal measurements Data measured on December 11, 2024. Output tokens/second is inclusive of time to generate the first token – tok/s = total generated tokens  / total latency. DGX H200, TP1, FP8, batch size=1, TensorRT Model Optimizer version 0.21, TensorRT-LLM version 0.15.0​. Figure 2. Throughput speedups using speculative decoding with Llama 3.3 70B target model We provide the steps to reproduce these performance gains using draft target speculative decoding within TensorRT-LLM. # Download the following model checkpoints from Hugging Face and store them in a directory for easy access through the setup process. git lfs install # Download target models git clone https://huggingface.co/meta-llama/Meta-Llama-3.3-70B-Instruct # Download draft models git clone https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct git clone https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct git clone https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct After the model checkpoints have been downloaded, install TensorRT-LLM. # Obtain and start the basic docker image environment (optional). docker run --rm --ipc=host --runtime=nvidia --gpus all --entrypoint /bin/bash -it nvidia/cuda:12.5.1-devel-ubuntu22.04 # Install dependencies, TensorRT-LLM requires Python 3.10 apt-get update && apt-get -y install python3.10 python3-pip openmpi-bin libopenmpi-dev git git-lfs # Fetch the library git clone -b v0.15.0 https://github.com/NVIDIA/TensorRT-LLM.git cd TensorRT-LLM # Install the latest version (corresponding to the main branch) of TensorRT-LLM. pip3 install tensorrt_llm -U --extra-index-url https://pypi.nvidia.com # Check installation python3 -c "import tensorrt_llm" Next, compile the downloaded model checkpoints into draft and target TensorRT engines. These engines are optimized to run inference with best accuracy and highest throughput. cd examples # Steps to build target and draft models in FP8 precision on 1 H200 # Create FP8 checkpoints python3 quantization/quantize.py --model_dir <path to draft model repo> --dtype float16 --qformat fp8 --kv_cache_dtype fp8 --output_dir /ckpt-draft --calib_size 512 --tp_size 1 python3 quantization/quantize.py \ --model_dir=<path to target model repo> \ --output_dir=./ckpt-target-70b \ --dtype=float16 --qformat fp8 --kv_cache_dtype fp8 \ --calib_size 512 --tp_size 1 # Build draft and target engines # Important flags for the engine build process: # --use_paged_context_fmha=enable must be specified since we need KVcache reuse for the draft/target model. # --speculative_decoding_mode=draft_tokens_external and --max_draft_len must be specified for target model. 
trtllm-build \ --checkpoint_dir ./ckpt-draft \ --output_dir=./draft-engine \ --gpt_attention_plugin float16 \ --workers 1 \ --gemm_plugin=fp8 \ --use_paged_context_fmha=enable \ --multiple_profiles enable \ --max_batch_size=32 \ --max_seq_len=131072 trtllm-build \ --checkpoint_dir=./ckpt-target-70b \ --output_dir=./target-engine \ --gpt_attention_plugin float16 \ --workers 1 \ --gemm_plugin=fp8 \ --use_paged_context_fmha=enable \ --multiple_profiles enable \ --max_batch_size=32 \ --max_seq_len=131072 \ --low_latency_gemm_plugin fp8 \ --speculative_decoding_mode=draft_tokens_external \ --max_draft_len 10 Finally, run speculative decoding in TensorRT-LLM. #Run decoding # Important flags to set during the run process: #--draft_engine_dir and --engine_dir must be specified for the draft and target engines. #--draft_target_model_config is corresponding to the configuration of Draft-Target-Model. As an example, [4,[0],[1],False] means draft_len=4, device of draft model is GPU0, device of target model is GPU1, and use tokens rather than logits to accept. # Only CPP session (using executor as low-level API) is supported, while Python session (--use_py_session) is not supported. # Run with Llama 3.3 70B target model mpirun -n 1 --allow-run-as-root python3 ./run.py \ --tokenizer_dir <path to draft model repo> \ --draft_engine_dir ./draft-engine \ --engine_dir ./target-engine \ --draft_target_model_config = "[10,[0,1,2,3,4,5,6,7],[0,1,2,3,4,5,6,7], False]" \ --kv_cache_free_gpu_memory_fraction=0.35 \ --max_output_len=1024 \ --kv_cache_enable_block_reuse \ --input_text="<|begin_of_text|><|start_header_id|>user<|end_header_id|>\nA 3-digit integer contains one of each of the digits 1,3 and 5. What is the probability that the integer is divisible by 5.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n" # Following is the LLM-generated output: Output [Text 0 Beam 0]: "## Step 1: Determine the total number of 3-digit integers that can be formed using the digits 1, 3, and 5. There are 3! = 6 ways to arrange the digits 1, 3, and 5 to form different 3-digit integers. ## Step 2: Identify the condition for a number to be divisible by 5. A number is divisible by 5 if its last digit is either 0 or 5. ## Step 3: Determine the number of arrangements where 5 is the last digit. Since the digit 5 must be the last digit for the number to be divisible by 5, we fix the last position with 5. The remaining two positions can be filled with the digits 1 and 3 in 2! = 2 ways. ## Step 4: Calculate the probability that the integer is divisible by 5. The probability is the number of favorable outcomes (arrangements where 5 is the last digit) divided by the total number of possible outcomes (total arrangements of the digits 1, 3, and 5). ## Step 5: Calculate the probability. Probability = (Number of favorable outcomes) / (Total number of outcomes) = 2 / 6 = 1/3. 
The final answer is: $\boxed{\frac{1}{3}}$" To benchmark throughput performance without speculative decoding, follow the steps below: # Run throughput benchmark for the 70B model without the draft model trtllm-build --checkpoint_dir ./ckpt-target-70b --output_dir /data/70B-TRT/ --gpt_attention_plugin float16 --workers 1 --max_batch_size 32 --max_seq_len 131072 --use_fused_mlp enable --reduce_fusion enable --use_paged_context_fmha enable --multiple_profiles enable --gemm_plugin fp8 python3 /app/tensorrt_llm/benchmarks/cpp/prepare_dataset.py --output token-norm-dist.json --tokenizer /llama-3_3-70b/ token-norm-dist --num-requests 1000 --input-mean 500 --input-stdev 0 --output-mean 200 --output-stdev 0 > /tmp/synthetic.txt trtllm-bench --model <path to target model repo> latency --engine_dir /data/70b-TRT/ --dataset /tmp/synthetic.txt Summary NVIDIA collaborates with Meta for the creation, optimization, and acceleration of the world’s leading open models. NVIDIA supports Llama as part of our commitment to grow open community AI models and software to enable users to customize and address their own unique workloads. NVIDIA is involved with several open-source projects through partnering with developers, maintainers, and foundations. NVIDIA TensorRT-LLM provides several features for optimizing and efficiently running LLMs of different model architectures. These optimizations lead to significant speedups on the same hardware, enable fewer resources to serve the same workload, reduce energy costs, and improve total cost of ownership. Available through production-ready deployments using NVIDIA NIM microservices, these TensorRT optimizations accelerate the deployment of your generative AI applications across NVIDIA-accelerated infrastructure anywhere, including cloud, data center, and workstations.
https://developer.nvidia.com/zh-cn/blog/boost-llama-3-3-70b-inference-throughput-3x-with-nvidia-tensorrt-llm-speculative-decoding/
借助 NVIDIA TensorRT-LLM 预测解码,将 Llama 3.3 的推理吞吐量提升 3 倍
随着近期新增的 Llama 3.3 70B (一种纯文本指令调整模型),Meta 的开放 大语言模型 (LLMs) 集合将继续增长。Llama 3.3 相对于较旧的 Llama 3.1 70B 模型提供了增强的性能,甚至可以在数学、推理、编码和多语种支持等多项任务中匹配更大、计算成本更高的 Llama 3.1 405B 模型的功能。 NVIDIA TensorRT-LLM 是一款功能强大的推理引擎,可在最新的 LLM 上提供先进的性能,并整合了许多优化措施,可提供出色的 Llama 3.3 70B 推理吞吐量。其中包括在飞批处理、 KV 缓存 、 自定义 FP8 量化 、推测解码等,可实现快速、经济高效的 LLM 服务。 动态批处理默认作为运行时配置参数激活,TensorRT-LLM 支持同时批处理多个不同的请求,从而提高服务吞吐量。通过在上下文和生成阶段交错处理请求,动态批处理可在旧请求仍在运行时执行新请求,从而减少延迟并提高 GPU 利用率。已完成的请求将从批处理中移除,从而为下一组请求腾出空间。 缓存先前令牌的键值元素的值可避免在为下一组令牌生成阶段对这些张量进行昂贵的重新计算。节省的计算量可有效提高吞吐量。但是,随着批量请求的数量和序列上下文长度的增加,KV 缓存的大小呈线性增长,从而导致内存需求增加。 TensorRT-LLM KV 缓存通过多种优化解决了这些挑战,包括对分页 KV 缓存、 量化 KV 缓存 、 循环缓冲区 KV 缓存 和 KV 缓存重复使用 的支持。每项优化都解决了在增大内存大小与避免不必要且昂贵的重新计算之间实现富有挑战性的平衡这一问题。预测解码是一种热门技术,可通过内置验证来生成输出的质量,从而实现更快且经济高效的 LLM 推理。其前提是,在自回归解码过程中,生成多个未来 (草稿) 令牌序列比处理单个令牌更高效。目标模型决定接受这些草稿令牌的数量,这比每次迭代生成一个令牌的效率要高得多。TensorRT-LLM 支持越来越多的预测性解码技术,包括 草稿目标 、 Medusa 、 Eagle 和前瞻性解码等。 图 1、适用于高性能深度学习推理的 NVIDIA TensorRT 优化 在本文中,我们将展示搭载 NVLink 和 NVSwitch 的 NVIDIA HGX H200 平台以及 TensorRT-LLM 如何在运行最新的 Llama 3.3 70B 模型时实现出色性能。我们介绍了使用 TensorRT-LLM 对 Llama 3.3 70B 进行解码预测的分步设置。有关其他优化、不同模型和多 GPU 执行的更多信息,请参阅 TensorRT-LLM 示例 的完整列表。 通过草稿目标预测解码实现吞吐量加速 表 1 和图 2 突出显示了无草稿模型 (即无预测解码) 与使用 Llama 3.3 70B 目标模型的各种规模的草稿模型之间的吞吐量 (输出令牌/秒) 加速。 吞吐量性能 – 输出 Tokens/秒 One NVIDIA H200 Tensor Core GPU 草稿|目标 模型 Llama 3.2 1B|Llama 3.3 70B Llama 3.2 3B|Llama 3.3 70B Llama 3.1 8B|Llama 3.3 70B Llama 3.3 70B (无草稿模型) 令牌/秒 191.74 151.53 134.38 51.14 加速 (有与无草稿模型对比) 3.55 倍 3.16 倍 2.63 倍 不适用 表 1、使用一个 NVIDIA H200 Tensor Core GPU 和 TensorRT-LLM 内部测量的吞吐量性能 数据测量于 2024 年 12 月 11 日。输出 tokens/second 包含生成第一个 token 的时间 – tok/s = 总生成 tokens / 总延迟。DGX H200、TP1、FP8,批量大小 = 1,TensorRT Model Optimizer 版本 0.21,TensorRT-LLM 版本 0.15.0。 图 2、使用 Llama 3.3 70B 目标模型的预测解码提高吞吐量 我们提供了在 TensorRT-LLM 中使用草稿目标预测解码重现这些性能提升的步骤。 # Download the following model checkpoints from Hugging Face and store them in a directory for easy access through the setup process. git lfs install # Download target models git clone https://huggingface.co/meta-llama/Meta-Llama-3.3-70B-Instruct # Download draft models git clone https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct git clone https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct git clone https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct 下载模型检查点后,安装 TensorRT-LLM。 # Obtain and start the basic docker image environment (optional). docker run --rm --ipc=host --runtime=nvidia --gpus all --entrypoint /bin/bash -it nvidia/cuda:12.5.1-devel-ubuntu22.04 # Install dependencies, TensorRT-LLM requires Python 3.10 apt-get update && apt-get -y install python3.10 python3-pip openmpi-bin libopenmpi-dev git git-lfs # Fetch the library git clone -b v0.15.0 https://github.com/NVIDIA/TensorRT-LLM.git cd TensorRT-LLM # Install the latest version (corresponding to the main branch) of TensorRT-LLM. 
pip3 install tensorrt_llm -U --extra-index-url https://pypi.nvidia.com # Check installation python3 -c "import tensorrt_llm" 接下来,将下载的模型检查点编译到草稿和目标 TensorRT 引擎中。这些引擎经过优化,能够以最佳的准确性和最高的吞吐量运行推理。 cd examples # Steps to build target and draft models in FP8 precision on 1 H200 # Create FP8 checkpoints python3 quantization/quantize.py --model_dir <path to draft model repo> --dtype float16 --qformat fp8 --kv_cache_dtype fp8 --output_dir /ckpt-draft --calib_size 512 --tp_size 1 python3 quantization/quantize.py \ --model_dir=<path to target model repo> \ --output_dir=./ckpt-target-70b \ --dtype=float16 --qformat fp8 --kv_cache_dtype fp8 \ --calib_size 512 --tp_size 1 # Build draft and target engines # Important flags for the engine build process: # --use_paged_context_fmha=enable must be specified since we need KVcache reuse for the draft/target model. # --speculative_decoding_mode=draft_tokens_external and --max_draft_len must be specified for target model. trtllm-build \ --checkpoint_dir ./ckpt-draft \ --output_dir=./draft-engine \ --gpt_attention_plugin float16 \ --workers 1 \ --gemm_plugin=fp8 \ --use_paged_context_fmha=enable \ --multiple_profiles enable \ --max_batch_size=32 \ --max_seq_len=131072 trtllm-build \ --checkpoint_dir=./ckpt-target-70b \ --output_dir=./target-engine \ --gpt_attention_plugin float16 \ --workers 1 \ --gemm_plugin=fp8 \ --use_paged_context_fmha=enable \ --multiple_profiles enable \ --max_batch_size=32 \ --max_seq_len=131072 \ --low_latency_gemm_plugin fp8 \ --speculative_decoding_mode=draft_tokens_external \ --max_draft_len 10 最后,在 TensorRT-LLM 中运行投机性解码。 #Run decoding # Important flags to set during the run process: #--draft_engine_dir and --engine_dir must be specified for the draft and target engines. #--draft_target_model_config is corresponding to the configuration of Draft-Target-Model. As an example, [4,[0],[1],False] means draft_len=4, device of draft model is GPU0, device of target model is GPU1, and use tokens rather than logits to accept. # Only CPP session (using executor as low-level API) is supported, while Python session (--use_py_session) is not supported. # Run with Llama 3.3 70B target model mpirun -n 1 --allow-run-as-root python3 ./run.py \ --tokenizer_dir <path to draft model repo> \ --draft_engine_dir ./draft-engine \ --engine_dir ./target-engine \ --draft_target_model_config = "[10,[0,1,2,3,4,5,6,7],[0,1,2,3,4,5,6,7], False]" \ --kv_cache_free_gpu_memory_fraction=0.35 \ --max_output_len=1024 \ --kv_cache_enable_block_reuse \ --input_text="<|begin_of_text|><|start_header_id|>user<|end_header_id|>\nA 3-digit integer contains one of each of the digits 1,3 and 5. What is the probability that the integer is divisible by 5.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n" # Following is the LLM-generated output: Output [Text 0 Beam 0]: "## Step 1: Determine the total number of 3-digit integers that can be formed using the digits 1, 3, and 5. There are 3! = 6 ways to arrange the digits 1, 3, and 5 to form different 3-digit integers. ## Step 2: Identify the condition for a number to be divisible by 5. A number is divisible by 5 if its last digit is either 0 or 5. ## Step 3: Determine the number of arrangements where 5 is the last digit. Since the digit 5 must be the last digit for the number to be divisible by 5, we fix the last position with 5. The remaining two positions can be filled with the digits 1 and 3 in 2! = 2 ways. ## Step 4: Calculate the probability that the integer is divisible by 5. 
The probability is the number of favorable outcomes (arrangements where 5 is the last digit) divided by the total number of possible outcomes (total arrangements of the digits 1, 3, and 5). ## Step 5: Calculate the probability. Probability = (Number of favorable outcomes) / (Total number of outcomes) = 2 / 6 = 1/3. The final answer is: $\boxed{\frac{1}{3}}$" 要在不使用 speculative decoding 的情况下对吞吐量性能进行基准测试,请执行以下步骤: # Run throughput benchmark for the 70B model without the draft model trtllm-build --checkpoint_dir ./ckpt-target-70b --output_dir /data/70B-TRT/ --gpt_attention_plugin float16 --workers 1 --max_batch_size 32 --max_seq_len 131072 --use_fused_mlp enable --reduce_fusion enable --use_paged_context_fmha enable --multiple_profiles enable --gemm_plugin fp8 python3 /app/tensorrt_llm/benchmarks/cpp/prepare_dataset.py --output token-norm-dist.json --tokenizer /llama-3_3-70b/ token-norm-dist --num-requests 1000 --input-mean 500 --input-stdev 0 --output-mean 200 --output-stdev 0 > /tmp/synthetic.txt trtllm-bench --model <path to target model repo> latency --engine_dir /data/70b-TRT/ --dataset /tmp/synthetic.txt 总结 NVIDIA 与 Meta 合作创建、优化和加速世界领先的开放模型。NVIDIA 支持 Llama,履行我们发展开放社区 AI 模型和软件的承诺,使用户能够自定义和处理自己独特的工作负载。NVIDIA 通过与开发者、维护人员和基金会合作参与了多个开源项目。 NVIDIA TensorRT-LLM 提供多种功能,用于优化和高效运行不同模型架构的 LLM。这些优化可显著加快相同硬件的运行速度,减少资源以处理相同工作负载,降低能源成本,并提高总拥有成本。这些 TensorRT 优化通过使用 NVIDIA NIM 微服务的生产就绪型部署提供,可随时随地在 NVIDIA 加速的基础设施 (包括云、数据中心和工作站) 中加速生成式 AI 应用的部署。
https://developer.nvidia.com/blog/spotlight-perplexity-ai-serves-400-million-search-queries-a-month-using-nvidia-inference-stack/
Spotlight: Perplexity AI Serves 400 Million Search Queries a Month Using NVIDIA Inference Stack
The demand for AI-enabled services continues to grow rapidly, placing increasing pressure on IT and infrastructure teams. These teams are tasked with provisioning the necessary hardware and software to meet that demand while simultaneously balancing cost efficiency with optimal user experience. This challenge was faced by the inference team at Perplexity AI, an AI-powered search engine that handles more than 435 million queries each month. Each query represents multiple AI inference requests. To meet this demand, the Perplexity inference team turned to NVIDIA H100 Tensor Core GPUs , NVIDIA Triton Inference Server , and NVIDIA TensorRT-LLM for cost-effective large language model (LLM) deployment. This post details some of the deployment best practices and TCO savings based on their hands-on experience. Serving multiple AI models simultaneously To support its extensive user base and serve a wide range of requests—spanning search, summarization, and question answering, among others—the inference team at Perplexity serves over 20 AI models simultaneously. This includes different variations of the popular open source Llama 3.1 models like 8B, 70B, and 405B. To match each user request with the appropriate model, the company relies on smaller classifier models that help determine user intent. User tasks detected by the classifiers, like text completion, are then routed to specific models deployed on GPU pods. Each pod consists of one or more NVIDIA H100 GPUs and is managed by an NVIDIA Triton Inference Server instance. The pods operate under strict service-level agreements (SLAs) for both cost efficiency and user interactivity. Figure 1. Perplexity AI deployed LLM production on a massive scale using NVIDIA HGX H100 four-GPU and eight-GPU systems featuring dynamic scalability, model parallelism, and load balancing capabilities To accommodate the large Perplexity user base and fluctuating traffic throughout the day, the pods are hosted within a Kubernetes cluster. They feature a front-end scheduler built in-house that routes traffic to the appropriate pod based on their load and usage, ensuring that the SLAs are consistently met. The scheduling algorithm used by the front-end scheduler can affect inter-token latency, particularly in improving the worst percentile of performance (Figure 2). The team at Perplexity constantly looks for new scheduler optimizations, including how to better account for sequence length variations across requests. Figure 2. Inter-token latency distribution for round-robin, least requests, and power of two random choices load balancing strategies during an internal Perplexity benchmark at a moderate request rate of 16 QPS Triton Inference Server is a critical component of Perplexity’s deployment architecture. It serves optimized models across various backends, batches incoming user requests, and provides GPU utilization metrics to the scheduler. This supports scaling up or down the number of deployments and GPUs based on the amount of the inference requests. For a detailed guide on how to deploy NVIDIA Triton with Kubernetes, see Scaling LLMs with NVIDIA Triton and NVIDIA TensorRT-LLM Using Kubernetes . Meeting strict service-level agreements To define the right SLAs for the company’s diverse use cases, Perplexity’s inference team conducts comprehensive A/B testing, evaluating different configurations and their impact on user experience. Their goal is to maximize GPU utilization while consistently meeting the target SLA for each specific use case. 
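One way to picture this trade-off is a simple profile-driven search: measure throughput and latency at several batch sizes offline, keep only the configurations that meet the latency SLA, and pick the one with the best per-GPU throughput. The data structures and numbers below are invented for illustration and are not Perplexity's in-house tooling.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BatchProfile:
    batch_size: int
    tokens_per_sec_per_gpu: float   # measured offline for a given model and GPU
    p99_latency_ms: float           # measured latency at that batch size

def pick_batch_size(profiles: List[BatchProfile], sla_ms: float) -> Optional[BatchProfile]:
    """Choose the most throughput-efficient batch size that still meets the SLA.
    Larger batches improve GPU utilization and cost per token but push latency
    up, so the search only considers SLA-compliant configurations."""
    feasible = [p for p in profiles if p.p99_latency_ms <= sla_ms]
    if not feasible:
        return None
    return max(feasible, key=lambda p: p.tokens_per_sec_per_gpu)

# Toy numbers only; real profiles come from A/B tests and load benchmarks.
profiles = [
    BatchProfile(1, 450, 35), BatchProfile(4, 1300, 60),
    BatchProfile(8, 2100, 95), BatchProfile(16, 2900, 160),
]
print(pick_batch_size(profiles, sla_ms=100))   # selects batch_size=8 here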
By improving batching while meeting target SLAs, inference serving cost is optimized. For smaller models, such as embedding models under 1 billion parameters used in real-time retrieval, the focus is on achieving the lowest possible latency. These are typically hidden from the user and are part of a broader workflow. As a result, configurations for these queries typically have low batch sizes. Given the smaller memory footprints of these models, the team runs multiple models concurrently on the NVIDIA H100 GPU to maintain high resource utilization. For user-facing models such as Llama 8B, 70B, and 405B, which have a greater impact on user experience and deployment costs, the team conducts a deeper performance analysis and evaluates key metrics such as time to first token, tokens per second per user, and cost per million queries. To optimize performance while controlling costs, Perplexity parallelizes their deployment of these models across multiple GPUs. Due to the strict SLAs, the team opted to increase tensor parallelism to four and eight GPUs, which they found yields lower serving costs for very latency-sensitive requests within a fixed GPU budget. Data or pipeline parallelism was useful for maximizing throughput in less latency-sensitive settings. The Perplexity team now uses TensorRT-LLM in combination with proprietary LLM runtimes built with optimized CUDA kernels to successfully serve the Llama-based models within their strict SLAs at minimal costs. Figure 3. Sharding the Llama 8B model using tensor parallelism across four NVIDIA Hopper GPUs reduces Perplexity’s relative cost per million token by up to 3x for latency-sensitive requests Ultimately, the Perplexity inference team’s decision to host models depends on their ability to serve these models at a lower cost while still meeting their strict SLAs, compared to using third-party LLM provider APIs. For example, the team estimated that they were able to save approximately $1 million annually by serving models that power their Related-Questions feature on cloud-hosted NVIDIA GPUs. The Related-Questions feature offers Perplexity users suggested follow-up questions to facilitate deeper dives after a search query. Delivering new levels of performance The inference team at Perplexity adopts a comprehensive, full-stack approach to their road map, consistently optimizing and enhancing every layer of the stack—from applications and use cases to inference serving middleware and hardware accelerators. In terms of inference serving middleware, the team is actively collaborating with the NVIDIA Triton engineering team to deploy disaggregating serving , a groundbreaking technique that separates the prefill and decode inference phases of an LLM workflow onto separate NVIDIA GPUs. This technique significantly boosts overall system throughput while meeting SLAs, translating to lower cost per token. Additionally, this technique gives Perplexity the flexibility to use different NVIDIA GPU products for each inference phase given its specific hardware resource requirements. The Perplexity team understands that optimizing the software stack can only drive performance improvements to a certain extent. To deliver new levels of performance, hardware innovations are crucial. This is why they are eager to assess the NVIDIA Blackwell platform . 
NVIDIA Blackwell delivers significant performance leaps enabled by numerous technology innovations, including the second-generation Transformer Engine with support for the FP4 data format, fifth-generation NVLink and NVSwitch enabling significantly larger NVLink domains, and more. Collectively these innovations deliver 30x improvement in inference performance for trillion parameter LLMs . Get started NVIDIA Triton Inference Server and NVIDIA TensorRT are open-source projects available on GitHub. They are also available as Docker containers that can be pulled from NVIDIA NGC . They are part of NVIDIA AI Enterprise , which offers enterprise-grade security, stability, and support. Enterprises seeking the fastest time to value can use NVIDIA NIM , a set of easy-to-use microservices for accelerated inference on a wide range of AI models, including open-source community and NVIDIA AI Foundation models. To learn more, check out these resources: From Zero to Millions: Scaling Large Language Model Inference with TensorRT-LLM Perplexity Enhances Model Performance for AI-Powered Search Engines with NVIDIA NeMo Accelerating Large Language Model Inference with NVIDIA in the Cloud
https://developer.nvidia.com/zh-cn/blog/spotlight-perplexity-ai-serves-400-million-search-queries-a-month-using-nvidia-inference-stack/
聚焦:Perplexity AI 利用 NVIDIA 推理栈每月服务 4 亿个搜索请求
对 AI 赋能服务的需求持续快速增长,这给 IT 和基础设施团队带来了越来越大的压力。这些团队的任务是配置必要的硬件和软件来满足这一需求,同时平衡成本效益和最佳用户体验。Perplexity AI 的推理团队面临着这一挑战。Perplexity AI 是一款由 AI 驱动的搜索引擎,每月可处理超过 435 million 个查询。每个查询代表多个 AI 推理请求。 为了满足这一需求,Perplexity 推理团队转而使用 NVIDIA H100 Tensor Core GPUs 、 NVIDIA Triton Inference Server 和 NVIDIA TensorRT-LLM 来进行经济高效的 大语言模型(LLM) 部署。本文根据他们的实操经验,详细介绍了一些部署最佳实践和节省的 TCO。 同时为多个 AI 模型提供服务 为了支持其广泛的用户群并满足搜索、总结和问答等方面的各种请求,Perplexity 的推理团队同时为 20 多个 AI 模型提供服务。其中包括热门开源 Llama 3.1 模型的不同变体,如 8B、70B 和 405B。 为了将每个用户请求与适当的模型相匹配,该公司依靠较小的分类器模型来帮助确定用户意图。然后,分类器检测到的用户任务 (例如文本补全) 会被路由到部署在 GPU 节点上的特定模型。每个节点由一个或多个 NVIDIA H100 GPU 组成,并由 NVIDIA Triton Inference Server 实例管理。这些节点遵循严格的服务级别协议 (SLA) 运行,以实现成本效益和用户交互。 图 1. Perplexity AI 使用 NVIDIA HGX H100 4-GPU 和 8-GPU 系统大规模部署 LLM 生产,具有动态可扩展性、模型并行性和负载均衡功能 为了适应庞大的 Perplexity 用户群和全天波动的流量,Pod 托管在 Kubernetes 集群内。它们具有内置的前端调度程序,可根据负载和使用情况将流量路由到适当的 Pod,从而确保始终满足 SLA。 前端调度程序使用的调度算法会影响令牌间延迟,尤其是在改善最差性能百分位方面 (Figure 2)。Perplexity 团队不断寻找新的调度器优化,包括如何更好地考虑不同请求之间的序列长度变化。 图 2、在 16 QPS 的中等请求速率下,在内部 Perplexity 基准测试中,循环、最小请求的令牌间延迟分配,以及两种随机选择负载均衡策略的能力 Triton Inference Server 是 Perplexity 部署架构的关键组件。它跨各种后端提供优化模型、批量处理传入用户请求,并向调度程序提供 GPU 利用率指标。这支持根据推理请求的数量增加或减少部署和 GPU 的数量。 有关如何使用 Kubernetes 部署 NVIDIA Triton 的详细指南,请参阅 使用 Kubernetes 扩展 LLMs with NVIDIA Triton 和 NVIDIA TensorRT-LLM 。 满足严格的服务级别协议 为了为公司的各种用例定义合适的 SLA,Perplexity 的推理团队进行了全面的 A/B 测试,评估不同的配置及其对用户体验的影响。他们的目标是最大限度地提高 GPU 利用率,同时始终满足每个特定用例的目标 SLA。通过在满足目标 SLA 的同时改进 batching,推理服务成本得以优化。 对于较小的模型,例如嵌入实时检索中使用的 10 亿个参数以下的模型,重点在于尽可能降低延迟。这些通常对用户隐藏,并且是更广泛工作流程的一部分。因此,这些查询的配置通常具有较低的批量大小。鉴于这些模型的内存占用空间较小,该团队在 NVIDIA H100 GPU 上同时运行多个模型,以保持高资源利用率。 对于面向用户的模型 (例如 Llama 8B、70B 和 405B),这些模型对用户体验和部署成本的影响更大,因此该团队会进行更深入的性能分析,并评估关键指标,例如时间到第一个 token、每位用户每秒 token 数以及每百万次查询的成本。 为了在控制成本的同时优化性能,Perplexity 跨多个 GPU 并行部署这些模型。由于严格的服务水平协议 (SLA),该团队选择将张量并行性增加到 4 个和 8 个 GPU,他们发现这样可以在固定的 GPU 预算内降低对延迟非常敏感的请求的服务成本。数据或 pipeline 并行有助于在延迟敏感度较低的设置中更大限度地提高吞吐量。 Perplexity 团队现在将 TensorRT-LLM 与使用优化的 CUDA 内核构建的专有 LLM 运行时相结合,以更低的成本在严格的 SLA 中成功为基于 Llama 的模型提供服务。 图 3、在四个 NVIDIA Hopper GPU 上使用张量并行机制对 Llama 8B 模型进行分片,可将 Perplexity 每百万个令牌的相对成本降低高达 3 倍,以处理延迟敏感型请求 最终,Perplexity 推理团队决定托管模型,取决于他们能否以较低的成本为这些模型提供服务,同时仍满足严格的服务水平协议 (SLA),与使用第三方 LLM 提供商 API 相比。例如,该团队估计,与向第三方 LLM API 服务提供商发送相同的请求量相比,通过在云托管的 NVIDIA GPUs 上提供支持其 Related-Questions 功能的模型,他们每年能够节省约 $1 million。“Related-Questions” 功能为 Perplexity 用户提供后续问题建议,以便在搜索查询后进行更深入的研究。 图 4、与使用第三方 LLM API 服务相比,Perplexity 使用其部署架构在云托管的 NVIDIA GPU 上提供 Related-Questions AI 模型,每年可节省约 100 万美元 提供更高水平的性能 Perplexity 的推理团队对其路线图采用全面的全栈方法,不断优化和增强堆栈的每一层 (从应用和用例到为中间件和硬件加速器提供服务的推理)。 在推理服务中间件方面,该团队正在积极与 NVIDIA Triton 工程团队合作,部署 分解服务 ,这是一项突破性技术,可将 LLM 工作流的预填充和解码推理阶段分离到单独的 NVIDIA GPU。此技术可显著提高整体系统吞吐量,同时满足 SLA,从而降低每个令牌的成本。此外,鉴于 Perplexity 的特定硬件资源需求,此技术还可让 Perplexity 在每个推理阶段灵活使用不同的 NVIDIA GPU 产品。 Perplexity 团队明白,优化软件堆栈只能在一定程度上推动性能提升。为了提供更高级别的性能,硬件创新至关重要。因此,他们迫切希望评估 NVIDIA Blackwell 平台 。 NVIDIA Blackwell 通过众多技术创新实现了 显著的性能飞跃 ,包括支持 FP4 数据格式的第二代 Transformer Engine、支持更大 NVLink 域的第五代 NVLink 和 NVSwitch 等。总体而言,这些创新可将 万亿级参数 LLM 的推理性能提升 30 倍。 开始使用 NVIDIA Triton Inference Server 和 NVIDIA TensorRT 是 GitHub 上的开源项目。它们还可用作 Docker 容器,可从 NVIDIA NGC 中提取。它们是 NVIDIA AI Enterprise 的一部分,可提供企业级安全性、稳定性和支持。寻求快速实现价值的企业可以使用 NVIDIA NIM ,这是一套易于使用的微服务,可加速各种 AI 模型(包括开源社区和 NVIDIA AI Foundation 模型)的推理。 如需了解详情,请查看以下资源: 从 0 扩展到数百万:使用 TensorRT-LLM 扩展大型语言模型推理 Perplexity 借助 NVIDIA NeMo 增强 AI 驱动的搜索引擎的模型性能 借助 NVIDIA 在云端加速大语言模型推理
https://developer.nvidia.com/blog/llama-3-2-full-stack-optimizations-unlock-high-performance-on-nvidia-gpus/
Llama 3.2 Full-Stack Optimizations Unlock High Performance on NVIDIA GPUs
Meta recently released its Llama 3.2 series of vision language models (VLMs), which come in 11B parameter and 90B parameter variants. These models are multimodal, supporting both text and image inputs. In addition, Meta has launched text-only small language model (SLM) variants of Llama 3.2 with 1B and 3B parameters. NVIDIA has optimized the Llama 3.2 collection of models for great performance and cost-efficient serving across millions of GPUs worldwide – from our most powerful data center and cloud GPUs to local NVIDIA RTX workstations and even low-power edge devices with NVIDIA Jetson . Llama 3.2 VLMs support long context lengths of up to 128K text tokens as well as a single image input at a resolution of  1120 x 1120 pixels. To enable low latency responses for great user experiences, while also providing high throughput for cost-efficient serving of these models, the NVIDIA platform is optimized at every layer of the technology stack. Similarly, the Llama 3.2 SLMs have been optimized to run well on the millions of NVIDIA RTX PCs and workstations worldwide. They have also been quantized to allow for local deployment on edge devices with NVIDIA Jetson. For more information, see Deploying Accelerated Llama 3.2 from the Edge to the Cloud . This post describes the full-stack optimizations that enable high throughput and low latency serving of Llama 3.2 models. Accelerating Llama 3.2 AI inference throughput The Llama 3.2 11B and Llama 3.2 90B models include a vision encoder with a text decoder. The encoder is optimized for high-performance inference using the NVIDIA TensorRT library and the text decoder is optimized using the NVIDIA TensorRT-LLM library. The visual information from the vision encoder is fused into the Llama text decoder with a cross-attention mechanism that is supported in TensorRT-LLM. This enables the Llama 3.2 VLMs to efficiently generate text by taking into account visual reasoning and understanding in context with the text input. TensorRT supports the vision encoder in the BF16 data format. TensorRT-LLM supports the text decoder in both FP16 and BF16 formats. The official recipe released by Meta uses a BF16 text decoder. This is also used for our baseline performance measurements. To boost performance further, NVIDIA developed a custom FP8 post-training quantization (PTQ) recipe, leveraging the fourth-generation FP8 Tensor Cores that are part of the NVIDIA Hopper architecture . This recipe, available through the TensorRT Model Optimizer library , enables higher Llama 3.2 throughput and lower latency while delivering the same accuracy across numerous benchmarks including ScienceQA , OCRBench , TextVQA , and MMMU . This means that developers can now run the model more cost-effectively. Optimizations from TensorRT, TensorRT-LLM, and TensorRT Model Optimizer libraries are combined and available through production-ready deployments using NVIDIA NIM microservices. Delivering high throughput and low latency Table 1 shows maximum throughput performance, representing offline use cases, across a range of input and output sequence lengths and single input image with maximum supported resolution of 1120 x 1120 pixels. Using a system based on the NVIDIA HGX H200 platform, we run the Llama 3.2 90B model on eight NVIDIA H200 Tensor Core GPUs , each with 141 GB of fast HBM3e memory, connected through NVLink and NVLink Switch , providing 900 GB/s of GPU-to-GPU bandwidth between the GPUs. 
Maximum Throughput Performance – Output Tokens/Second Eight NVIDIA H200 Tensor Core GPUs Input | Output Sequence Lengths | Image Size 8,000 | 2,000 | 1120×1120 20,000 | 2,000 | 1120×1120 60,000 | 2,000 | 1120×1120 BF16 Encoder with FP8 Decoder 2,646 1,417 480 Table 1. Maximum throughput performance with NVIDIA internal measurements TensorRT optimized NIM for VLMs version 1.1.0 recipe. NIM server restarted between each ISL/OSL configuration to set an optimal KV cache split. Data measured on 11/14/2024. Output tokens/second is inclusive of time to generate the first token – tok/s =total generated tokens  / total latency. DGX H200, normalized to 8 GPUs (by taking the TP profile maximizing throughput per GPU, and multiplying that value by 8 to simulate a replica-parallel setup), batch size tuned for maximum node throughput, TensorRT Model Optimizer version 0.21 (pre-release), TensorRT-LLM version 0.16.0.dev, TensorRT version 10.4.0​. Table 2 shows minimum latency performance using the same input and output sequence lengths and input image size. Minimum Latency Performance – Output Tokens/Second Eight NVIDIA H200 Tensor Core GPUs Input | Output Sequence Lengths | Image Size 8,000 | 2,000 | 1120×1120 20,000 | 2,000 | 1120×1120 60,000 | 2,000 | 1120×1120 BF16 Encoder with FP8 Decoder 64 63 55 Table 2. Minimum latency performance with NVIDIA internal measurements TensorRT optimized NIM for VLMs version 1.1.0 recipe. Data measured on 11/4/2024.  Output tokens/second is inclusive of time to generate the first token – tok/s = total generated tokens  / total latency. DGX H200, TP8, batch size = 1, TensorRT Model Optimizer version 0.21 (prerelease), TensorRT-LLM version 0.16.0.dev, TensorRT version 10.4.0​. As these results show, NVIDIA H200 GPUs with TensorRT-optimized software delivers exceptional performance on Llama 3.2 90B VLM, in both latency-optimized and throughput-optimized scenarios. Throughput performance of GeForce RTX 4090 with ONNX Runtime on NVIDIA RTX For Windows deployments, NVIDIA has optimized Llama 3.2 SLMs to work efficiently using the ONNX Runtime Generative API , with a DirectML backend. Performance measurements are made using the model checkpoint available on the NGC catalog. The checkpoint is a quantized version of Llama 3.2 3B Instruct model and is quantized to AWQ INT4 using AutoAWQ and converted to ONNX using ONNX Runtime Generative API. Maximum Throughput Performance – Output Tokens/Second NVIDIA GeForce RTX 4090 GPUs Input | Output Sequence Lengths 100 | 100 2,000 | 100 4,000 | 100 Onnx-GenAI Runtime with DirectML, BS=1 253 203 165 Onnx-GenAI Runtime with DirectML, BS=4 615 374 251 Table 3. Maximum throughput performance with NVIDIA internal measurements ONNX Runtime Generative API with DirectML data measured on 10/07/2024. Output tokens/second is inclusive of time to generate the first token – tok/s =total generated tokens  / total latency. GeForce RTX 4090 GPU. Better performance on Llama 3.2 across platforms With the NVIDIA accelerated computing platform, you can build models and supercharge your applications with the most performant Llama 3.2 models on any platform—from the data center and cloud to local workstations. Enterprises seeking the fastest time to value can use NVIDIA NIM , part of the NVIDIA AI Enterprise software platform, which offers NVIDIA TensorRT optimized inference on Llama 3.2 and other models from NVIDIA and its partner ecosystem. 
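As a rough illustration of the FP8 post-training quantization workflow described earlier in this post, the sketch below follows the publicly documented TensorRT Model Optimizer pattern of quantizing a Hugging Face checkpoint with a small calibration loop. The model name, calibration prompts, and configuration choice are placeholders; the production Llama 3.2 recipe may differ in its details, and exporting the result to a deployable TensorRT-LLM engine is a separate step not shown here.

import torch
import modelopt.torch.quantization as mtq
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative FP8 PTQ flow with TensorRT Model Optimizer (nvidia-modelopt).
model_name = "meta-llama/Llama-3.2-3B-Instruct"   # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16
).cuda()

calib_prompts = [
    "The quick brown fox jumps over the lazy dog.",
    "Explain KV caching in one sentence.",
]

def forward_loop(m):
    # Run a few representative prompts so activation ranges can be calibrated.
    for prompt in calib_prompts:
        inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
        m(**inputs)

# Quantize weights and activations to FP8 using the default FP8 configuration.
model = mtq.quantize(model, mtq.FP8_DEFAULT_CFG, forward_loop=forward_loop)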
Acknowledgments We would like to thank George Yuan, Alex Settle, and Chenjie Luo for their efforts in supporting this post.
https://developer.nvidia.com/zh-cn/blog/llama-3-2-full-stack-optimizations-unlock-high-performance-on-nvidia-gpus/
Llama 3.2 全栈优化释放 NVIDIA GPU 的高性能
Meta 最近发布了 Llama 3.2 系列视觉语言模型(VLM),其中包含 11B 参数和 90B 参数变体。这些模型是多模态模型,支持文本和图像输入。此外,Meta 还推出了 Llama 3.2 的纯文本 小语言模型(SLM) 变体,具有 1B 和 3B 参数。NVIDIA 已对 Llama 3.2 模型集合进行优化,以便在全球数百万个 GPU 上提供出色的性能和经济高效的服务 – 从我们功能强大的数据中心和云 GPU 到本地 NVIDIA RTX 工作站 ,甚至是使用 NVIDIA Jetson 的低功耗边缘设备。 Llama 3.2 VLM 支持长达 128K 的文本标记,以及分辨率为 1120 x 1120 像素的单张图像输入。为了实现低延迟响应,提供出色的用户体验,同时提供高吞吐量以经济高效地提供这些模型,NVIDIA 平台在技术堆栈的每一层都进行了优化。 同样,Llama 3.2 SLM 已经过优化,可在全球数百万 NVIDIA RTX PC 和工作站上良好运行。它们还经过量化,允许在边缘设备上使用 NVIDIA Jetson 进行本地部署。有关更多信息,请参阅 将加速的 Llama 3.2 从边缘部署到云端 。 本文将介绍如何通过全栈优化实现 Llama 3.2 模型的高吞吐量和低延迟服务。 加速 Llama 3.2 AI 推理吞吐量 Llama 3.2 11B 和 Llama 3.2 90B 模型包括一个带有文本解码器的视觉编码器。编码器使用 NVIDIA TensorRT 库 针对高性能推理进行了优化,文本解码器使用 NVIDIA TensorRT-LLM 库 进行了优化。 通过 TensorRT-LLM 中支持的交叉注意力机制,视觉编码器的视觉信息融合到 Llama 文本解码器中。这使得 Llama 3.2 VLM 能够高效生成文本,在文本输入的上下文中考虑视觉推理和理解。 TensorRT 支持视觉编码器在 BF16 数据格式下运行。TensorRT-LLM 支持文本解码器在 FP16 和 BF16 格式下运行。Meta 发布的官方 recipe 使用了 BF16 文本解码器,这也用于我们的基准性能测量。为了进一步提高性能,NVIDIA 利用 NVIDIA Hopper 架构中的 第四代 FP8 Tensor Core 开发了自定义的 FP8 后训练量化(PTQ)recipe。 此方法通过 TensorRT Model Optimizer 库 提供,可实现更高的 Llama 3.2 吞吐量和更低的延迟,同时在众多基准测试(包括 ScienceQA 、 OCRBench 、 TextVQA 和 MMMU )中提供相同的准确性。这意味着开发者现在可以更经济高效地运行模型。 TensorRT、TensorRT-LLM 和 TensorRT Model Optimizer 库中的优化经过组合,可通过使用 NVIDIA NIM 微服务的生产就绪型部署获得。 提供高吞吐量和低延迟 表 1 显示了在一系列输入和输出序列长度以及单输入图像 (最大支持分辨率为 1120 x 1120 像素) 中的最大吞吐量性能 (代表离线用例)。我们使用基于 NVIDIA HGX H200 平台的系统,在 8 个 NVIDIA H200 Tensor Core GPU 上运行 Llama 3.2 90B 模型,每个 GPU 均具有 141 GB 的快速 HBM3e 内存,通过 NVLink 和 NVLink Switch 连接,可在 GPU 之间提供 900 GB/s 的 GPU 到 GPU 带宽。 最大吞吐量性能 – 每秒输出令牌数 八个 NVIDIA H200 Tensor Core GPU 输入|输出序列长度|图像大小 8000|2000|1120*1120 20000|2000|1120*1120 60000|2000|1120*1120 带 FP8 解码器的 BF16 编码器 2646 1417 480 表 1. 通过 NVIDIA 内部测量获得的最大吞吐量性能 TensorRT 针对 VLM 版本 1.1.0 recipe 优化的 NIM。在每个 ISL/OSL 配置之间重新启动 NIM 服务器,以设置最佳 KV 缓存分割。数据测量时间:11 月 14 日,2024 年。输出 tokens/second 包含生成第一个 token 的时间 – tok/s = total generated tokens / total latency。DGX H200,标准化为 8 个 GPUs (通过采用 TP 配置文件最大限度地提高每个 GPU 的吞吐量,并将该值乘以 8 以模拟副本并行设置),针对最大节点吞吐量调整了批量大小,TensorRT Model Optimizer 版本 0.21 (预发布),TensorRT-LLM 版本 0.16.0.dev,TensorRT 版本 10.4.0。 表 2 显示了使用相同的输入和输出序列长度以及输入图像大小时的最小延迟性能。 最低延迟性能 – 输出令牌/秒 ( 八个 NVIDIA H200 Tensor Core GPUs ) 输入|输出序列长度|图像大小 8000|2000|1120*1120 20000|2000|1120*1120 60000|2000|1120*1120 带 FP8 解码器的 BF16 编码器 64 63 55 表 2、NVIDIA 内部测量的最低延迟性能 TensorRT 针对 VLM 版本 1.1.0 recipe 优化的 NIM。数据测量时间:2024 年 11 月 4 日。输出 tokens/second 包含生成第一个 token 的时间 – tok/s = 生成的 tokens 总数/总延迟。DGX H200、TP8,批量大小 = 1,TensorRT Model Optimizer 版本 0.21 (预发布),TensorRT-LLM 版本 0.16.0.dev,TensorRT 版本 10.4.0 以上。 这些结果表明, 搭载 TensorRT 优化软件的 NVIDIA H200 GPU 可在 Llama 3.2 90B VLM 上提供出色的性能,适用于延迟优化和吞吐量优化的场景。 采用 ONNX Runtime 的 GeForce RTX 4090 在 NVIDIA RTX 上的吞吐量性能 对于 Windows 部署,NVIDIA 已对 Llama 3.2 SLM 进行优化,以便使用带有 DirectML 后端的 ONNX Runtime Generative API 高效工作。使用 NGC 目录上提供的 模型检查点 进行性能测量。检查点是 Llama 3.2 3B Instruct 模型的量化版本,使用 AutoAWQ 量化为 AWQ INT4 ,并使用 ONNX Runtime Generative API 转换为 ONNX。 最大吞吐量性能 – 输出令牌/秒 NVIDIA GeForce RTX 4090 GPU 输入|输出序列长度 100|100 2000|100 4000|100 采用 DirectML 的 Onnx-GenAI 运行时,BS = 1 253 203 165 支持 DirectML 的 Onnx-GenAI 运行时,BS = 4 615 374 251 表 3、通过 NVIDIA 内部测量获得的最大吞吐量性能 使用在 2024 年 10 月 7 日测量的 DirectML 数据的 ONNX Runtime Generative API。输出 token/s 包含生成第一个 token 的时间 – tok/s = 总生成 token 数 / 总延迟。GeForce RTX 4090 GPU。 跨平台在 Llama 3.2 上获得更好的性能 借助 NVIDIA 加速计算平台,您可以在任何平台 (从数据中心、云到本地工作站) 上构建模型并使用性能超强的 Llama 3.2 模型增强您的应用。 NVIDIA NIM 是 NVIDIA AI Enterprise 软件平台的一部分,可为 Llama 3.2 和 NVIDIA 及其合作伙伴生态系统的其他模型提供经 NVIDIA TensorRT 优化的推理。 致谢 我们要感谢 George Yuan、Alex Settle 和 Chenjie Luo 为支持这篇博文而付出的努力。
https://developer.nvidia.com/blog/streamlining-ai-inference-performance-and-deployment-with-nvidia-tensorrt-llm-chunked-prefill/
Streamlining AI Inference Performance and Deployment with NVIDIA TensorRT-LLM Chunked Prefill
In this blog post, we take a closer look at chunked prefill, a feature of NVIDIA TensorRT-LLM that increases GPU utilization and simplifies the deployment experience for developers. This builds on our previous post discussing how advanced KV cache optimization features in TensorRT-LLM improve performance up to 5x in use cases that require system prefills. Challenges with traditional prefill and decode inference approaches When a user submits a request to a model, it goes through two distinct computational phases: prefill and decode. Each phase uses GPU compute resources differently. During the prefill phase, the system processes all input tokens to compute the KV cache, which is then used to generate the first token of the output. This phase is computationally demanding and can effectively use a GPU’s vast parallel compute resources. In the decode phase, the system generates output tokens individually, updating the intermediate states from the prefill phase with each new token. Since the heavy computational work for intermediate state calculations is done in the prefill phase, the decode phase primarily involves processing only the newly generated token. As a result, this phase is less computationally intensive. TensorRT-LLM supports in-flight batching where multiple requests are handled in parallel, enabling prefill and decode stage processing simultaneously. While more efficient than phased batching, in which decode requests are paused to process prefill requests, this approach can introduce latency as the decode phases are delayed until the prefill requests are completed. The top portion of Figure 1 illustrates this concept. Figure 1. A simplified representation of the chunked prefill process, showing how it can increase GPU utilization and accelerate query completion Balancing prefill and decode phases with chunked prefill With TensorRT-LLM chunked prefill, the tokens are divided into smaller units, or chunks, for faster processing. This prevents the prefill phase from becoming a bottleneck, enables more parallelization with decode phase tokens, and increases GPU utilization. The bottom portion of Figure 1 illustrates this concept. Additionally, using TensorRT-LLM chunked prefill enables GPU systems to handle longer contexts and higher concurrency levels. Since memory usage depends on the number of tokens processed during each iteration, using prefill chunks decouples memory consumption from the context length of incoming requests. The system can then process larger context lengths without increasing memory demands. Simplifying TensorRT-LLM engine creation with dynamic chunk sizing Figure 1 shows how ‌chunk size can impact user interactivity and overall system throughput, requiring a trade-off based on the desired user experience and available GPU resources. Using a large chunk size lowers the number of iterations required to process prefill sequences, reducing time to first token (TTFT). However, it also increases the time taken to complete the decode phase of ongoing requests, increasing query completion time and reducing output tokens per second (TPS). Finding the right balance can sometimes be time-consuming for developers. TensorRT-LLM deploys dynamic chunk sizing, providing ideal recommendations for the user based on GPU utilization metrics. An additional advantage of dynamic prefill chunk sizing is the simplification of the TensorRT-LLM engine build process. 
In the past, developers had to supply a maximum input sequence length from which activation buffers were computed during engine build time. To ensure the activation buffers could handle the most complex user queries, developers were required to set the maximum sequence length expected for their use case. This approach provisioned GPU resources for outlier and worst-case scenarios, leading to inefficient memory usage. With dynamic prefill chunk sizing, developers no longer need to specify the maximum input length manually. Instead, activation buffer sizes are automatically determined by the chunk size, which is configured by TensorRT-LLM. This eliminates manual configuration and leads to more efficient memory usage during execution. Getting started with TensorRT-LLM chunked prefills Using TensorRT-LLM chunked prefill significantly improves both system performance and utilization. Breaking down the traditionally sequential prefill phase into smaller, more manageable chunks enables better parallelization with the decode phase, reducing bottlenecks and accelerating query completion. The dynamic adjustment of chunk size also simplifies the TensorRT-LLM engine configuration process, eliminating the need for developers to manually set activation buffer sizes, which previously resulted in inefficient memory usage. Dynamic chunk sizing optimizes memory use and ensures that the system adapts seamlessly to varying LLM user demands. Check out our GitHub documentation to start using TensorRT-LLM chunked prefills. Learn more about NVIDIA AI Inference solutions and stay up-to-date with the latest AI inference performance updates.
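To make the scheduling idea concrete, here is a toy Python sketch of an in-flight batching loop with a per-iteration prefill token budget. It is purely illustrative (it is not TensorRT-LLM's scheduler, and the budget, chunk size, and request names are assumptions); the point is that capping prefill work per step lets every in-flight decode request keep advancing instead of stalling behind a long prompt.

```python
from collections import deque

# Illustrative only: a toy in-flight batching loop with chunked prefill.
# TOKEN_BUDGET caps how many prefill tokens are processed per iteration,
# leaving room for decode steps of already-running requests.
TOKEN_BUDGET = 512          # assumed per-iteration prefill token budget
CHUNK = 256                 # assumed prefill chunk size

prefill_queue = deque([("req_a", 2048), ("req_b", 1024)])  # (id, prompt tokens left)
decode_set = {"req_c": 0, "req_d": 0}                      # id -> tokens generated

for step in range(1, 6):
    scheduled_prefill = []
    budget = TOKEN_BUDGET
    # Schedule at most `budget` prefill tokens, in CHUNK-sized pieces.
    while prefill_queue and budget >= CHUNK:
        req_id, remaining = prefill_queue.popleft()
        take = min(CHUNK, remaining)
        scheduled_prefill.append((req_id, take))
        budget -= take
        if remaining - take > 0:
            prefill_queue.append((req_id, remaining - take))
        else:
            decode_set[req_id] = 0   # prefill done; request moves to decode
    # Every in-flight decode request still advances by one token this step.
    for req_id in decode_set:
        decode_set[req_id] += 1
    print(f"step {step}: prefill={scheduled_prefill}, decode={decode_set}")
```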
https://developer.nvidia.com/zh-cn/blog/streamlining-ai-inference-performance-and-deployment-with-nvidia-tensorrt-llm-chunked-prefill/
借助 NVIDIA TensorRT-LLM 分块预填充提高 AI 推理效率和简化部署
在本 博文 中,我们详细介绍了分块预填充,这是 NVIDIA TensorRT-LLM 的一项功能,可提高 GPU 利用率并简化开发者的部署体验。本文基于之前的博文,讨论了 TensorRT-LLM 中的高级 KV 缓存优化功能如何在需要系统预填充的用例中将性能提升高达 5 倍。 传统预填充和解码推理方法面临的挑战 当用户向模型提交请求时,它将经历两个不同的计算阶段:预填充和解码。每个阶段使用 GPU 计算资源的方式各不相同。 在预填充阶段,系统会处理所有输入令牌以计算 KV 缓存,然后使用该缓存生成输出的第一个令牌。此阶段的计算要求很高,可以有效利用 GPU 庞大的并行计算资源。 在解码阶段,系统会单独生成输出 tokens,使用每个新 tokens 更新预填充阶段的中间状态。由于中间状态计算的繁重计算工作是在预填充阶段完成的,因此解码阶段主要涉及仅处理新生成的 tokens。因此,此阶段的计算密集度较低。 TensorRT-LLM 支持动态批处理,允许并行处理多个请求,使得预填充和解码阶段可以同时进行。虽然这种方法比暂停解码请求以处理预填充请求的分阶段批处理更高效,但由于解码阶段延迟到预填充请求完成,因此可能会导致延迟。图 1 的顶部说明了这一概念。 图 1. 分块预填充过程的简化表示,展示了如何提高 GPU 利用率并加速查询完成。 使用分块预填充平衡预填充和解码阶段 借助 TensorRT-LLM 分块预填充,令牌被分成较小的单元或分块,以加快处理速度。这可以防止预填充阶段成为瓶颈,使解码阶段的令牌实现更多的并行化,并提高 GPU 的利用率。图 1 的底部部分说明了这一概念。 此外,使用 TensorRT-LLM 分块预填充使 GPU 系统能够处理更长的上下文和更高的并发级别。由于内存使用量取决于每次迭代期间处理的令牌数量,因此使用预填充块可将内存消耗量与传入请求的上下文长度解耦。然后,系统可以在不增加内存需求的情况下处理更大的上下文长度。 通过动态数据块大小调整简化 TensorRT-LLM 引擎创建 图 1 显示了块大小百分比对用户交互性和整体系统吞吐量的影响,这需要根据所需的用户体验和可用 GPU 资源进行权衡。使用大块大小可以减少处理预填充序列所需的迭代次数,从而缩短第一个令牌(TTFT)的时间。 但是,这也会增加完成正在进行的请求的解码阶段所需的时间,从而增加查询完成时间并降低每秒输出令牌数(TPS)。对于开发者而言,找到合适的平衡有时非常耗时。TensorRT-LLM 部署动态块大小,根据 GPU 利用率指标为用户提供理想建议。 动态预填充块大小调整的另一个优势是简化了 TensorRT-LLM 引擎构建流程。过去,开发者必须提供最大输入序列长度,以便在引擎构建期间计算激活缓冲区。为确保激活缓冲区能够处理最复杂的用户查询,开发者需要为其用例设置预期的最大序列长度。这种方法针对异常和最差情况优化了 GPU 资源,导致内存使用效率低下。 借助动态预填充块大小调整,开发者无需再手动指定最大输入长度。相反,激活缓冲区大小由块大小(由 TensorRT-LLM 配置)自动确定。这消除了手动配置,并提高了执行期间的内存使用效率。 TensorRT-LLM 分块预填充入门 使用 TensorRT-LLM 分块预填充可显著提高系统性能和利用率。将传统的连续预填充阶段分解为更小、更易于管理的块,可通过解码阶段实现更好的并行化,从而减少瓶颈并加速查询完成。 块大小的动态调整还简化了 TensorRT-LLM 引擎配置流程,无需开发者手动设置激活缓冲区大小,而这在之前会导致内存使用效率低下。动态块大小优化内存使用,并确保系统能够无缝适应不同 LLM 用户的需求。 查看 我们的 GitHub 文档 ,开始使用 TensorRT-LLM 分块预填充。详细了解 NVIDIA AI 推理解决方案 ,并随时了解最新的 AI 推理性能更新 。
https://developer.nvidia.com/blog/5x-faster-time-to-first-token-with-nvidia-tensorrt-llm-kv-cache-early-reuse/
5x Faster Time to First Token with NVIDIA TensorRT-LLM KV Cache Early Reuse
In our previous blog post, we demonstrated how reusing the key-value (KV) cache by offloading it to CPU memory can accelerate time to first token (TTFT) by up to 14x on x86-based NVIDIA H100 Tensor Core GPUs and 28x on the NVIDIA GH200 Superchip. In this post, we shed light on KV cache reuse techniques and best practices that can drive even further TTFT speedups. Introduction to KV cache LLMs are rapidly being adopted for many tasks, including question-answering and code generation. To generate a response, these models begin by converting the user’s prompt into tokens, which are then transformed into dense vectors. Extensive dot-product operations follow to mathematically model the relationships between the tokens and build a contextual understanding of the user input. The computational cost of generating this contextual understanding increases quadratically with the length of the input sequence. This resource-intensive process generates keys and values, which are cached to avoid recomputation when generating subsequent tokens. Reusing the KV cache reduces the computational load and time needed to generate additional tokens—leading to a faster and more efficient user experience. When reusing the KV cache, careful attention must be given to how long it remains in memory, which components to evict first when memory is full, and when it can be reused for new incoming prompts. Optimizing these factors can lead to incremental performance improvements in KV cache reuse. NVIDIA TensorRT-LLM offers three key features that specifically address these areas. Early KV cache reuse Traditional reuse algorithms require the entire KV cache computation to be completed before any portions of it can be reused with new user prompts. In scenarios such as enterprise chatbots, where system prompts—predefined instructions added to user queries—are essential to direct the LLM’s responses in line with enterprise guidelines, this method can be inefficient. When a surge of users interacts with the chatbot simultaneously, each user would require a separate computation of the system prompt KV cache. With TensorRT-LLM, we can instead reuse the system prompt as it is being generated in real time, enabling it to be shared across all users during the burst, rather than recalculating it for each user. This can significantly accelerate inference for use cases requiring system prompts by up to 5x. Figure 1. TensorRT-LLM KV cache reuse can speed up TTFT by up to 5x Flexible KV cache block sizing In reuse implementations, only entire cache memory blocks can be allocated for reuse. For example, if the cache memory block size is 64 tokens and the KV cache is 80 tokens, only 64 tokens will be stored for reuse, while the remaining 16 tokens will need to be recomputed. However, if the memory block size is reduced to 16 tokens, all 80 tokens can be stored across five memory blocks, eliminating the need for recomputation. This effect is most pronounced when the input sequences are short. For long input sequences, larger blocks can be more beneficial. As is clear, the more granular the control you have over the KV cache, the better you can optimize it for your specific use case. TensorRT-LLM provides fine-grained control over KV cache memory blocks, giving developers the ability to chop them into smaller blocks of anywhere from 64 down to 2 tokens. This optimizes the usage of allocated memory, increases reuse rates, and improves TTFT.
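As a quick check of the block-size arithmetic above, here is a small plain-Python helper (the function name is ours, not a TensorRT-LLM API) that reproduces the 80-token example:

```python
def reusable_tokens(prompt_tokens: int, block_size: int) -> int:
    """Tokens that land in completely filled KV cache blocks.

    Only full blocks can be stored for reuse, so any partial trailing
    block has to be recomputed for the next matching prompt.
    """
    return (prompt_tokens // block_size) * block_size

prompt = 80
for block in (64, 16, 8, 2):
    kept = reusable_tokens(prompt, block)
    print(f"block={block:>2}: reuse {kept}/{prompt} tokens, recompute {prompt - kept}")

# block=64: reuse 64/80 tokens, recompute 16
# block=16: reuse 80/80 tokens, recompute 0
```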
When running Llama 70B on NVIDIA H100 Tensor Core GPUs, we can speed up TTFT by up to 7% in multi-user environments by reducing the KV cache block size from 64 tokens to 8 tokens. Figure 2. Impact of changing KV cache block size on inference speedup Efficient KV cache eviction protocols Partitioning the KV cache into smaller blocks and evicting unused ones can be effective for memory optimization, but it introduces dependency complexities. When a specific block is used to generate a response, and the result is stored as a new block, it can form a tree-like structure of dependencies. Over time, the counters tracking the usage of the source blocks (the branches) may become stale as the dependent nodes (the leaves) are reused. Evicting the source block then requires the eviction of all dependent blocks, which would require recalculation of the KV cache for new user prompts, increasing TTFT. To address this challenge, TensorRT-LLM includes intelligent eviction algorithms that can trace the dependent nodes from their source nodes and evict dependent nodes first, even if they have more recent reuse counters. This ensures more efficient memory management while preventing unnecessary evictions of dependent blocks. Figure 3. A logical representation of the KV cache eviction algorithm, showing how it can reduce the number of evicted blocks and increase the likelihood of reuse Getting started with TensorRT-LLM KV cache reuse Generating the KV cache during inference requires a lot of compute and memory resources. Using it efficiently is critical to improving model response, accelerating inference, and increasing system throughput. TensorRT-LLM provides advanced reuse features for developers looking to further optimize TTFT response times for peak performance. To start using TensorRT-LLM KV cache reuse, check out our GitHub documentation.
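The eviction idea above, namely dropping dependent leaf blocks before the source blocks they extend, can be illustrated with a short, purely conceptual sketch (this is not TensorRT-LLM's actual eviction code; the block tree and reuse counters are made up):

```python
# Illustrative only: leaf-first victim selection for a block-dependency tree.
# A source (parent) block is never chosen while any dependent (child) block
# is still cached, even if the children carry more recent reuse counters.
from dataclasses import dataclass, field

@dataclass
class Block:
    name: str
    last_reuse: int                       # higher = reused more recently
    children: list["Block"] = field(default_factory=list)

def pick_eviction_victim(roots: list[Block]) -> Block:
    leaves, stack = [], list(roots)
    while stack:
        node = stack.pop()
        stack.extend(node.children)
        if not node.children:             # only leaves are eligible
            leaves.append(node)
    return min(leaves, key=lambda b: b.last_reuse)

shared_system_prompt = Block("system_prompt", last_reuse=1)
turn_a = Block("user_a_followup", last_reuse=5)
turn_b = Block("user_b_followup", last_reuse=9)
shared_system_prompt.children = [turn_a, turn_b]

victim = pick_eviction_victim([shared_system_prompt])
print(victim.name)   # -> user_a_followup; the shared system prompt block
                     #    survives even though its counter is the stalest
```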
https://developer.nvidia.com/zh-cn/blog/5x-faster-time-to-first-token-with-nvidia-tensorrt-llm-kv-cache-early-reuse/
NVIDIA TensorRT-LLM KV 缓存早期重用实现首个令牌速度 5 倍提升
在上一篇 博客文章 中,我们展示了通过将键值 (KV) 缓存卸载到 CPU 内存来重复使用该缓存如何在基于 x86 的 NVIDIA H100 Tensor Core GPU 上将首次生成令牌 (Time To First Token, TTFT) 的速度提升高达 14 倍,在 NVIDIA GH200 Superchip 上可将其提升高达 28 倍。在本文中,我们介绍了可以进一步推动 TTFT 加速的 KV 缓存重复使用技术和最佳实践。 KV 缓存简介 许多任务(包括问答和代码生成)都迅速采用了 LLM 模型。为了生成响应,这些模型首先将用户的提示符转换为 tokens,然后将其转换为 dense vectors。随后会进行大量的 dot-product operations,以数学方式对 tokens 之间的关系建模,并构建对用户输入的上下文理解。生成这种上下文理解的计算成本会随着输入序列的长度而二次增加。 此资源密集型流程会生成键和值,这些键和值会被缓存,以避免在生成后续 token 时进行重新计算。重复使用 KV 缓存可减少生成额外 token 所需的计算负载和时间,实现更快、更高效的用户体验。 重复使用 KV 缓存时,必须仔细注意其在内存中的剩余时间、在内存已满时首先移除哪些组件,以及何时可重复用于新的传入提示。优化这些因素可能会带来 KV 缓存重复使用的性能增益。NVIDIA TensorRT-LLM 提供了三个关键功能,专门解决了这些问题。 早期 KV 缓存复用 传统的复用算法需要先完成整个 KV 缓存的计算,然后才能根据新用户的提示重复使用其任何部分。在企业聊天机器人等场景中,系统提示(添加到用户查询中的预定义指令)对引导 LLM 根据企业指南做出响应至关重要,这种方法可能效率低下。 当大量用户同时与聊天机器人交互时,每个用户都需要单独计算系统提示 KV 缓存。借助 TensorRT-LLM,我们可以重新使用实时生成的系统提示,使其在突发期间在所有用户之间共享,而不是为每个用户重新计算。这可以显著加快推理速度,最高可达 5 倍。 图 1.TensorRT-LLM KV 缓存重复利用可将 TTFT 速度提升高达 5 倍。 灵活调整 KV 缓存块大小 在重用实现中,只能分配整个缓存内存块以供重用。例如,如果缓存内存块大小为 64 个令牌,KV 缓存为 80 个令牌,则仅存储 64 个令牌以供重用,而其余 16 个令牌则需要重新计算。但是,如果将内存块大小减少到 16 个令牌,则可在五个内存块中存储全部 64 个令牌,从而消除重新计算的需求。 当输入序列较短时,这种效果最为明显。对于长输入序列,更大的块会更有好处。显然,您对 KV 缓存的控制越细致,就越能针对特定用例进行优化。 TensorRT-LLM 可对 KV 缓存内存块进行细粒度控制,使开发者能够将这些块分割成 64 到 2 个令牌之间的较小块。这可优化已分配内存的使用,提高复用率并改善 TTFT。在 NVIDIA H100 Tensor Core GPU 上运行 LLAMA70B 时,我们可以通过将 KV 缓存块的大小从 64 个令牌减少到 8 个令牌,在多用户环境中将 TTFT 速度提升高达 7%。 图 2. 更改 KV 缓存块大小对推理加速的影响 高效的 KV 缓存回收协议 将 KV 缓存分割成较小的块并移除未使用的块对于内存优化可能有效,但这会引入依赖项复杂性。当使用特定块生成响应并将结果存储为新块时,它会形成树状的依赖结构。 随着时间的推移,跟踪源块(分支)使用情况的计数器可能会随着依赖节点(叶子)的重复使用而过时。然后,移除源块需要移除所有依赖块,这将需要重新计算 KV 缓存以用于新用户提示,从而增加 TTFT。 为应对这一挑战,TensorRT-LLM 包含智能驱逐算法,该算法可以追踪源节点中的依赖节点,并首先驱逐依赖节点,即使这些节点具有较新的重复使用计数器。这可以确保更高效的内存管理,同时防止不必要的依赖块驱逐。 图 3.KV 缓存驱逐算法的逻辑表示展示了如何减少被驱逐的块的数量,从而增加重复使用的可能性。 TensorRT-LLM KV 缓存重用入门 在推理过程中生成 KV 缓存需要大量的计算和内存资源。高效使用 KV 缓存对于改善模型响应、加速推理和提高系统吞吐量至关重要。TensorRT-LLM 为希望进一步优化 TTFT 响应时间以实现峰值性能的开发者提供了高级复用功能。 要开始使用 TensorRT-LLM KV 缓存,请参阅 我们的 GitHub 文档 。
https://developer.nvidia.com/blog/3x-faster-allreduce-with-nvswitch-and-tensorrt-llm-multishot/
3x Faster AllReduce with NVSwitch and TensorRT-LLM MultiShot
Deploying generative AI workloads in production environments where user numbers can fluctuate from hundreds to hundreds of thousands – and where input sequence lengths differ with each request – poses unique challenges. To achieve low latency inference in these environments, multi-GPU setups are a must – irrespective of the GPU generation or its memory capacity. To enhance inference performance in production-grade setups, we’re excited to introduce TensorRT-LLM Multi-shot, a new multi-GPU communication protocol that leverages the NVIDIA NVLink Switch to significantly increase communication speeds by up to 3x. This blog outlines this new feature and how it helps developers and solution architects address the limitations of traditional multi-GPU communication methods. Challenges with traditional AllReduce algorithms For low latency inference, multi-GPU is critical, regardless of the memory capacity of a single GPU. However, at low concurrency, the time GPUs spend exchanging data can outweigh the time spent on compute. For optimal performance, an efficient AllReduce operation – a collective operation that combines partial results from each participating GPU – is critical. Traditional approaches use ring-based algorithms, where the partial values are passed around a ring of GPUs.  Each GPU contributes its values and passes the result to its neighbor. This process is repeated 2N-2 times where N is the number of GPUs working together, and by the end of the process, every GPU has the same summed value. A second pass over the ring is required to propagate summed values from the last GPU to the rest. The Ring approach makes efficient use of available GPU-to-GPU bandwidth per communication step, but as the number of GPUs increases, so does the number of steps. This increases latency, as all GPUs need to stay synchronized at every step of the ring. ‌These synchronization latencies add significant latency overhead and can make it difficult to meet more stringent latency targets. The Ring AllReduce algorithm is described below: Ring Algorithm:  GPU-1 → GPU-2 → … → GPU-N → GPU-1 → GPU-2 → … → GPU-(N-1) 2N-2 steps, with full tensor send/recv each step Latency: 2N-2 communication steps.  (N: # of GPUs) Traffic: (4N-4)/N tensor bytes of send/recvs Addressing AllReduce communication challenges with TensorRT-LLM MultiShot TensorRT-LLM MultiShot is a new algorithm that reduces the O(N) latency of Ring AllReduce by up to 3x leveraging multicast in NVSwitch. Multicast is a hardware acceleration feature in NVSwitch which allows a GPU to send data once and have that data sent simultaneously to all other GPUs, minimizing the number of communication steps to two inter-GPU synchronizations while remaining bandwidth efficient. Without NVSwitch, this would take N times the communication bandwidth. TensorRT-LLM Multishot separates the AllReduce into a ReduceScatter operation followed by an AllGather operation (for more detailed descriptions of collective operations, see this documentation ). Each GPU is responsible for accumulating only a portion of the result tensor. The first step (or “shot”) involves each GPU sending the different slices of the tensor to the respective GPU responsible for accumulating that slice of the tensor. After accumulating locally, each GPU now has the correct sum accumulators for its unique slice of the output. In the second step (or “shot”), each GPU broadcasts the result slice to all other GPUs using the NVSwitch multicast capability. 
This minimizes the per-GPU bandwidth required as the NVSwitch itself performs data amplification; each GPU sends 1/N of the data and receives the full result tensor in one step. The entire operation only takes two communication steps, regardless of the number of GPUs performing tensor-parallel inference. TensorRT-LLM MultiShot Algorithm: GPU_N sends slices, computes slice sum, broadcasts result in a single multicast operation. Latency: 2 communication steps (regardless of number of GPUs) Traffic: 2 tensor bytes of send/recv (regardless of number of GPUs) Why this matters Since this algorithm requires only two communication steps rather than 2N-2 (where N is the number of GPUs), MultiShot can be nearly 3x faster than Ring AllReduce. The benefits of this algorithm are particularly evident with smaller message sizes and high parallelism – the scenario needed when minimum latency is required for a great user experience. This can be used to either reduce minimum latency or increase throughput at a given latency. In scenarios with more aggressive latency thresholds, this can lead to super-linear scaling with the number of GPUs. Figure 1. With TensorRT-LLM MultiShot, AllReduce latency is reduced by up to 3x. Achieving optimal inference performance requires careful workload analysis and a deep understanding of performance bottlenecks. By gaining that understanding – both through internal engineering work as well as through close collaboration with external developers and researchers – we can quickly and frequently optimize many aspects of our platform to deliver great performance for users. As we continue to identify and implement new performance optimizations – some may be extensive, others might be narrower in scope – we will be providing regular updates on these optimizations, providing both technical motivation and quantified benefits.
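The two-shot decomposition can also be sketched with standard collectives. The snippet below uses PyTorch's NCCL bindings only as a conceptual stand-in: it shows AllReduce expressed as ReduceScatter followed by AllGather, but it does not use NVSwitch multicast and is not TensorRT-LLM's MultiShot kernel.

```python
# Conceptual only: AllReduce expressed as ReduceScatter + AllGather.
# TensorRT-LLM MultiShot fuses the second step into an NVSwitch multicast;
# this sketch just shows the math of the two-shot decomposition.
import os
import torch
import torch.distributed as dist

def two_shot_allreduce(x: torch.Tensor) -> torch.Tensor:
    world = dist.get_world_size()
    assert x.numel() % world == 0
    # Shot 1: each rank ends up owning the reduced sum of one slice.
    my_slice = torch.empty(x.numel() // world, device=x.device, dtype=x.dtype)
    dist.reduce_scatter_tensor(my_slice, x)
    # Shot 2: every rank shares its reduced slice with all others.
    out = torch.empty_like(x)
    dist.all_gather_into_tensor(out, my_slice)
    return out

if __name__ == "__main__":
    # Launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
    dist.init_process_group("nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    t = torch.ones(1 << 20, device="cuda") * (dist.get_rank() + 1)
    result = two_shot_allreduce(t)
    # Every rank now holds the same summed tensor, as with dist.all_reduce.
    print(dist.get_rank(), result[:4])
```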
https://developer.nvidia.com/zh-cn/blog/3x-faster-allreduce-with-nvswitch-and-tensorrt-llm-multishot/
NVSwitch 和 TensorRT-LLM MultiShot 共同加速 AllReduce 速度达 3 倍
在用户数量可能在数百到数十万之间波动,且输入序列长度随每个请求而变化的生产环境中,部署生成式 AI 工作负载会面临独特的挑战。要在这些环境中实现低延迟推理,无论 GPU 生成方式或显存容量如何,多 GPU 设置都是必需的。为了提高生产级设置中的推理性能,我们很高兴推出 TensorRT-LLM Multi-shot,这是一种新的多 GPU 通信协议,利用 NVIDIA NVLink Switch 可将通信速度大幅提升高达 3 倍。本博客概述了这一新功能,以及它如何帮助开发者和解决方案架构师克服传统多 GPU 通信方法的限制。 传统 AllReduce 算法面临的挑战 对于低延迟推理,无论单个 GPU 的显存容量如何,多 GPU 都至关重要。但是,在低并发情况下,GPU 花在交换数据上的时间可能超过花在计算上的时间。为了获得最佳性能, 高效的 AllReduce 操作 –结合每个参与其中的 GPU 的部分结果的集合操作–至关重要。 传统方法使用基于环的算法,其中部分值在环形的 GPU 之间传递。每个 GPU 都贡献其值并将结果传递给其邻居。该过程重复 2N-2 次,其中 N 是协同工作的 GPU 数量,在该过程结束时,每个 GPU 都具有相同的总和值。需要对环进行第二次传递,以将总和值从最后一个 GPU 传播到其他 GPU。 Ring 方法可在每个通信步骤中高效利用可用的 GPU 到 GPU 带宽,但随着 GPU 数量的增加,步骤数也会增加。这会增加延迟,因为所有 GPU 都需要在 Ring 的每个步骤中保持同步。这些同步延迟会显著增加延迟开销,并可能导致难以满足更严格的延迟目标。 Ring AllReduce 算法描述如下: 环形算法:GPU-1 → GPU-2 → … → GPU-N → GPU-1 → GPU-2 → … → GPU-(N-1) 2N-2 步长,每步具有完整的 Tensor send/recv 延迟:2N-2 通信步骤。(N:GPU 的数量) 流量:(4N-4)/N 张量的 send/recv 字节数 使用 TensorRT-LLM MultiShot 应对 AllReduce 通信挑战 TensorRT-LLM MultiShot 是一种新算法,可利用 NVSwitch 中的组播,将 Ring AllReduce 的 O(N) 延迟最多降低 3 倍。组播是 NVSwitch 中的硬件加速功能,允许一个 GPU 发送数据一次,并将该数据同时发送到所有其他 GPU,从而将通信步骤的数量减少到两个 GPU 间的同步,同时保持带宽效率。如果没有 NVSwitch,这将占用 N 倍的通信带宽。 TensorRT-LLM Multishot 将 AllReduce 分离为 ReduceScatter 操作,然后是 AllGather 操作(有关集合操作的更多详细说明,请参阅 此文档 )。 每个 GPU 仅负责累积结果张量的一部分。 第一步(或“shot”)涉及每个 GPU 将张量的不同切片发送到负责累积该张量切片的相应 GPU。 在本地累加后,每个 GPU 现在都有正确的和累加器,用于其独特的输出切片。 在第二步 (或“shot”) 中,每个 GPU 使用 NVSwitch 组播功能将结果切片广播到所有其他 GPU。这可最大限度地减少 NVSwitch 本身执行数据放大所需的每个 GPU 带宽;每个 GPU 一步发送 1/N 数据并接收完整的结果张量。 无论执行张量并行推理的 GPU 数量如何,整个操作仅需两次通信步骤。 TensorRT-LLM MultiShot 算法:GPU_N 发送切片、计算切片和、在单个组播运算中广播结果。 延迟:2 个通信步骤(与 GPU 数量无关) 流量:2 张量字节的 send/recv(与 GPU 数量无关) 为何如此重要 由于此算法只需要两个通信步骤,而不是 2N-2 (其中 N 表示 GPU 数量),因此 MultiShot 的速度几乎是 Ring AllReduce 的 3 倍。这种算法的优势在消息大小较小且并行度高的情况下尤为明显,而这正是需要最低延迟以获得出色的用户体验的场景。 这可用于降低最小延迟,或在给定延迟下提高吞吐量。在具有更激进的延迟阈值的场景中,这可能会导致 GPU 数量的超线性扩展。 图 1. 借助 TensorRT-LLM MultiShot,AllReduce 延迟最多可降低 3 倍。 实现最佳推理性能需要仔细的工作负载分析和对性能瓶颈的深入了解。通过内部工程工作以及与外部开发者和研究人员的密切合作,我们可以快速、频繁地优化平台的许多方面,为用户提供出色的性能。 随着我们继续识别和实施新的性能优化(一些可能是广泛的,另一些可能范围较窄),我们将定期提供有关这些优化的更新,提供技术动机和量化效益。
https://developer.nvidia.com/blog/accelerating-scientific-literature-reviews-with-nvidia-nim-microservices-for-llms/
Accelerating Scientific Literature Reviews with NVIDIA NIM Microservices for LLMs
A well-crafted systematic review is often the initial step for researchers exploring a scientific field. For scientists new to this field, it provides a structured overview of the domain. For experts, it refines their understanding and sparks new ideas. In 2024 alone, 218,650 review articles were indexed in the Web of Science database, highlighting the importance of these resources in research. Completing a systematic review significantly enhances a researcher’s knowledge base and their academic impact. However, traditional review writing requires collecting, reading, and summarizing large volumes of academic articles on a specific topic. Due to the time-consuming nature of this manual exercise, the scope of processed literature is often confined to dozens or a few hundred articles. Interdisciplinary content—frequently outside the researcher’s area of expertise—adds another layer of complexity. These challenges make it increasingly difficult to create comprehensive, reliable, and impactful systematic reviews. The advent of large language models (LLMs) offers a groundbreaking solution, enabling the rapid extraction and synthesis of information from extensive literature. Participating in the Generative AI Codefest Australia provided a unique opportunity to explore this idea with support from NVIDIA AI experts to leverage NVIDIA NIM microservices for accelerating literature reviews. This enabled the rapid testing and fine-tuning of several state-of-the-art LLMs for our literature analysis process. Testing the potential of LLMs for processing papers As a research group specializing in physiological ecology within the ARC Special Research Initiative Securing Antarctica’s Environmental Future (SAEF) , we embarked on writing a review of the literature on the global responses of non-vascular plants, such as moss or lichen, to wind. However, we quickly faced a challenge: many relevant articles on wind-plant interactions failed to explicitly mention these key words in their titles or abstracts, which are typically used as primary filters during literature screening. A comprehensive analysis of the topic required manually reading the full text of each article—a highly time-consuming process. We decided to explore the potential of using LLMs to extract content specifically related to wind-plant interactions from the articles. To achieve this, we implemented a simple Q&A application based on the LlaMa 3.1 8B Instruct NIM microservice (Figure 1). This enabled us to get an initial prototype quickly. This first prototype, processing the papers sequentially, was extremely useful to craft and optimize the prompts to extract key information from each article. Figure 1. Initial simple paper processing pipeline using an LLM NIM microservice at its core To validate the accuracy of extracted information, we initially manually validated the results. When no significant errors were found in the test dataset, we identified opportunities to further enhance the efficiency of key information extraction using LLMs (Figure 2). These include converting the papers from a PDF format to structured JSON; extracting images, tables, and charts; and using parallel processing to speed up the processing of papers. Figure 2. 
Enhanced paper processing pipeline incorporating NV-ingest and parallel processing of papers Enhancing the performance of LLMs for more efficient information extraction By using NVIDIA NIM microservices for LLMs and nv-ingest , we deployed LLMs and a data ingestion pipeline in our local environment with eight NVIDIA A100 80-GB GPUs. We also fine-tuned the models using low-rank adaptation (LoRA) to improve the accuracy of information extraction from the papers. We compiled a dataset of over 2K scientific articles related to the targeted research domain, sourced from the Web of Science and Scopus databases. Over a week during Generative AI Codefest, we focused on experimenting with various strategies to optimize the efficiency and accuracy of key information extraction from these articles. Best-performing model To determine the best-performing model, we tested a range of instruction-based and general-purpose LLMs from the NVIDIA API Catalog on a set of randomly selected articles. Each model was assessed for its accuracy and comprehensiveness in information extraction. Ultimately, we determined that Llama-3.1-8B-instruct was the most suitable for our needs. Processing speed We developed a Q&A module using streamlit to answer user-defined research-specific questions. To further improve processing speed, we implemented parallel processing of the prompts sent to the LLM engine and used KV-caching, which significantly accelerated the computation time by a factor of 6x when using 16 threads. Extraction content types We used nv-ingest to extract content from the original PDFs, including text, figures, tables, and charts, into structured JSON files. This enabled information extraction beyond text content, enabling a more comprehensive context for answering the questions. Using JSON files instead of the original PDF files during inference also had a significant effect on lowering the processing time, by an additional factor of 4.25x. Results Thanks to these improvements, we significantly reduced the time required to extract information from our database of papers, with a total speedup of 25.25x compared to our initial implementation. Processing the entirety of our database now takes less than 30 minutes using two A100 80-GB GPUs and 16 threads. Compared to the traditional approach of manually reading and analyzing an entire article, which typically takes about one hour, this optimized workflow achieved a time savings of over 99% (Figure 3). Figure 3. Required time for information extraction and categorization In addition to information extraction, we also investigated automated article classification. By fine-tuning Llama-3.1-8b-Instruct with a LoRA adapter on a sample of papers manually annotated, we successfully automated the classification process, demonstrating its effectiveness in organizing complex datasets of scientific papers. The results indicated that each article required only 2 seconds for classification, compared to the 300+ seconds required on average for a manual classification by an experienced reader (Figure 3). Future directions We’re currently refining our workflow to further accelerate the processing. We’re also improving our user interface to provide easy access to more locally deployed LLMs and enhance accessibility by other researchers (Figure 4). We plan to implement the NVIDIA AI Blueprint for multimodal PDF data extraction to identify the most relevant articles for each research question and interact with those papers. 
Beyond technical improvements, we are aiming to organize the extracted key information for each question and generate visualizations (such as maps with the locations of the experiments mentioned in the papers) to further accelerate the writing of the systematic review. Figure 4. Prototype interface undergoing in-house testing at SAEF Summary Our work at the Generative AI Codefest demonstrated the transformative potential of AI in accelerating systematic literature reviews. With NVIDIA NIM, we quickly moved from an idea to a working solution that significantly improves the process of information extraction from scientific papers. This experience highlights how AI can streamline research workflows, enabling faster and more comprehensive insights. LLMs have the potential to facilitate interdisciplinary research, empowering scientists to explore complex, multi-domain research fields more effectively. Moving forward, we aim to refine these methods and tools, ensuring that they are accessible and scalable for future research across diverse topics.
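The parallel prompt processing described in this post (16 threads against a locally deployed model) can be approximated with a simple thread pool. In the sketch below, the endpoint URL, model name, and question are illustrative placeholders rather than the team's actual setup; it only assumes a NIM-style, OpenAI-compatible chat completions endpoint.

```python
# Hypothetical sketch: fan out per-paper questions to a locally deployed,
# OpenAI-compatible LLM endpoint (e.g., a NIM microservice) with a thread pool.
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1",  # assumed local NIM URL
                api_key="not-needed-for-local")

MODEL = "meta/llama-3.1-8b-instruct"   # assumed served model name
QUESTION = ("Does this study measure the effect of wind on non-vascular plants? "
            "Answer yes/no and cite the supporting sentence.")

def ask(paper_text: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "You extract facts from scientific papers."},
            {"role": "user", "content": f"{QUESTION}\n\nPaper:\n{paper_text[:20000]}"},
        ],
        temperature=0.0,
    )
    return resp.choices[0].message.content

papers = ["...structured text of paper 1...", "...paper 2..."]  # e.g., from nv-ingest JSON
with ThreadPoolExecutor(max_workers=16) as pool:   # 16 threads, as in the post
    answers = list(pool.map(ask, papers))

for a in answers:
    print(a)
```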
https://developer.nvidia.com/zh-cn/blog/accelerating-scientific-literature-reviews-with-nvidia-nim-microservices-for-llms/
借助适用于 LLM 的 NVIDIA NIM 微服务加速科学文献审查
精心设计的系统审查通常是研究人员探索科学领域的第一步。对于刚刚接触该领域的科学家,它提供了该领域的结构化概述。对于专家而言,它可以细化他们的理解并激发新的想法。在 2024 年, Web of Science 数据库收录了 218,650 篇评测文章,凸显了这些资源在研究中的重要性。 完成系统审查可显著增强研究人员的知识库及其学术影响。然而,传统的评论文章需要收集、阅读和总结有关特定主题的大量学术文章。由于这项人工练习非常耗时,处理过的文献的范围通常局限于几十篇或几百篇文章。跨学科内容 (通常超出研究人员的专业领域) 又增加了一层复杂性。 这些挑战使得创建全面、可靠且有影响力的系统审查变得越来越困难。 大语言模型 (LLMs) 的出现提供了一种突破性的解决方案,能够从大量文献中快速提取和合成信息。参加 Generative AI Codefest Australia 提供了一个独特的机会,让我们在 NVIDIA AI 专家的支持下探索这一想法,利用 NVIDIA NIM 微服务加速文献审查。这为我们的文献分析流程快速测试和微调了多个先进的 LLMs。 测试 LLMs 在处理论文方面的潜力 作为 ARC 特殊研究计划 (ARC Special Research Initiative Securing Antarctica’s Environmental Future (SAEF)) 中的一个专门研究生理生态学的研究小组,我们着手撰写了一篇关于苔藓或地衣等非血管性植物对风的全球反应的文献回顾。 然而,我们很快就面临了一个挑战:许多关于风力电厂相互作用的相关文章都未能在标题或摘要中明确提及这些关键词,而在文献筛选过程中,这些关键词通常用作主要的过滤器。要对主题进行全面分析,需要手动阅读每篇文章的完整文本,这一过程非常耗时。 我们决定探索使用 LLMs 从文章中提取与风电场交互特别相关的内容的可能性。为此,我们 基于 LlaMa 3.1 8B Instruct NIM 微服务 实施了一个简单的问答应用(图 1)。这使我们能够快速获得初始原型。 第一个原型按顺序处理论文,在制作和优化提示以从每篇文章中提取关键信息方面非常有用。 图 1、以 LLM NIM 微服务为核心的初始简单论文处理流程 为了验证提取信息的准确性,我们首先手动验证结果。当测试数据集中未发现重大错误时,我们确定了使用 LLM 进一步提高关键信息提取效率的机会 (图 2)。其中包括将论文从 PDF 格式转换为结构化 JSON;提取图像、表格和图表;以及使用并行处理来加速论文处理速度。 图 2、增强型论文处理流程,整合 NV-ingest 和论文的并行处理 增强 LLMs 的性能,实现更高效的信息提取 通过将 NVIDIA NIM 微服务用于 LLM 和 nv-ingest ,我们在本地环境中部署了 LLM 和数据提取管道,并配备了 8 个 NVIDIA A100 80-GB GPU。我们还使用低等级适应性(LoRA)对模型进行了微调,以提高从论文中提取信息的准确性。 我们编译了一个数据集,其中包含与目标研究领域相关的 2000 多篇科学文章,这些文章均源自 Web of Science 和 Scopus 数据库。在 Generative AI Codefest 期间,我们在一周内专注于尝试各种策略,以优化从这些文章中提取关键信息的效率和准确性。 性能最佳的模型 为了确定性能最佳的模型,我们在一组随机选择的文章中测试了 NVIDIA API Catalog 中的一系列基于指令的和通用的 LLM。评估了每个模型在信息提取方面的准确性和全面性。 最终,我们确定 Llama-3.1-8B-instruct 最适合我们的需求。 处理速度 我们开发了一个问答模块,使用 Streamlit 回答用户定义的研究特定问题。 为进一步提高处理速度,我们对发送到 LLM 引擎的提示实施了并行处理,并使用了 KV 缓存,这使得使用 16 个线程时的计算时间显著缩短了 6 倍。 提取内容类型 我们使用 nv-ingest 将原始 PDF 中的内容 (包括文本、图形、表格和图表) 提取到结构化 JSON 文件中。这使得信息提取功能超越了文本内容,从而为回答问题提供更全面的环境。 在推理过程中使用 JSON 文件而非原始 PDF 文件也对将处理时间额外缩短 4.25 倍产生了重大影响。 成果 得益于这些改进,我们显著缩短了从论文数据库中提取信息所需的时间,与初始实施相比,总速度提高了 25.25 倍。 现在,使用两个 A100 80-GB GPU 和 16 个线程处理整个数据库只需不到 30 分钟。 与手动阅读和分析整篇文章的传统方法 (通常需要大约 1 小时) 相比,这种经过优化的工作流程可节省 99% 以上的时间 (图 3)。 图 3. 信息提取和分类所需的时间 除了信息提取,我们还研究了自动化文章分类。通过在手动标注的论文样本上使用 LoRA 适配器微调 Llama-3.1-8b-Instruct,我们成功实现了分类过程的自动化,证明了其在组织复杂的科学论文数据集方面的有效性。 结果表明,与经验丰富的读者手动分类平均所需的 300 多秒相比,每篇文章只需 2 秒即可完成分类 (图 3)。 未来方向 我们目前正在改进工作流程,以进一步加速处理。我们还在改进用户界面,以便轻松访问更多本地部署的 LLM,并增强其他研究人员的可访问性(图 4)。 我们计划实施用于多模态 PDF 数据提取的 NVIDIA AI Blueprint,以确定与每个研究问题最相关的文章,并与这些论文进行交互。 除了技术改进,我们的目标是组织提取的每个问题的关键信息,并生成可视化效果 (例如,包含论文中提到的实验位置的地图),以进一步加速系统审查的编写。 图 4、原型接口正在 SAEF 进行内部测试 总结 我们在生成式 AI Codefest 上的工作展示了 AI 在加速系统文献审查方面的变革潜力。借助 NVIDIA NIM,我们迅速从想法转变为可行的解决方案,从而显著改善从科学论文中提取信息的过程。 此体验凸显了 AI 如何简化研究工作流程,从而更快获得更全面的见解。LLMs 具有促进跨学科研究的潜力,可助力科学家更有效地探索复杂的多领域研究领域。 展望未来,我们的目标是改进这些方法和工具,确保其可访问性和可扩展性,以供未来跨各种主题的研究使用。
https://developer.nvidia.com/blog/vision-language-model-prompt-engineering-guide-for-image-and-video-understanding/
Vision Language Model Prompt Engineering Guide for Image and Video Understanding
Vision language models (VLMs) are evolving at a breakneck speed. In 2020, the first VLMs revolutionized the generative AI landscape by bringing visual understanding to large language models (LLMs) through the use of a vision encoder . These initial VLMs were limited in their abilities, only able to understand text and single image inputs. Fast-forward a few years and VLMs are now capable of understanding multi-image and video inputs to perform advanced vision-language tasks such as visual question-answering (VQA), captioning, search, and summarization. Figure 1. Suite of model customization tools in NVIDIA NeMo VLM accuracy can be improved for specific use cases by tuning the prompts and model weights. Fine-tuning the model weights can be done with efficient techniques such as PEFT but still requires sufficient data and compute. However, prompt engineering and in context learning can also be used to improve the output quality and is as simple as adjusting the text input to the model at runtime. This post walks you through the evolution of VLMs and how to effectively prompt a VLM for single image, multi-image, and video understanding use cases. For more information about VLMs and visual AI agents, register for the upcoming Vision for All: Unlocking Video Analytics with AI Agents webinar. Single-image understanding From a single image, a VLM is capable of identifying, describing, classifying, and reasoning over what is in the image. Some VLMs have advanced capabilities such as providing bounding box coordinates for certain objects, and translating text in the image. This works best for use cases where an image needs to be classified, captioned or described in text. Single-image capability can also be used to detect basic events occurring in a livestream. This can be accomplished by sampling one frame at a time from a livestream and providing it to the VLM along with a prompt describing what to look for. This works especially well when the types of events to detect are large, clearly visible, and slow-moving. The VLM can also be prompted to output in a structured format such as JSON so that the response can easily be parsed and sent to a database or a notification service. For example, we provided Llama-3.2-90b-vision-instruct with the following prompt and image (Figure 2). Figure 2. Input image to the VLM for question and answering Prompt: Is there a fire truck? Is there a fire? Are there firefighters? Output the answer to each question in JSON format. VLM response: [ { "question": "Is there a fire truck?", "answer": "True" },    { "question": "Is there a fire?", "answer": "True" },    { "question": "Are there firefighters?", "answer": "True" }  ] For each of the questions, the VLM was able to accurately respond and output it in a structured format to be used for downstream tasks. While this method enables the VLM to operate on video input by sampling one frame at a time, the VLM itself does not understand the full context of the video, limiting it to simple use cases. Sampling one frame at a time does not give it the temporal understanding needed to detect actions or trends that occur over time. This can be improved with multi-image and video VLMs. Multi-image understanding Many VLMs can accept multiple images as input allowing the model to compare, contrast, and learn from input images. VLMs generally have good zero-shot capability across a wide range of use cases. 
However, when trying to obtain more precise answers from a VLM on domain-specific use cases, VLMs can struggle because they lack the full context required to accurately respond to the user prompts. To better understand this, we used a VLM to estimate the stock level of products. This is an important retail use case as it could autonomously monitor and alert employees when products are running low in a store to avoid lost revenues from stock-outs. We provided a single image of a table (Figure 3) that should be fully stocked with several snack products to Cosmos Nemotron 34B and asked it to estimate the stock level. Figure 3. Single-image VLM input for retail stock level detection Prompt: Estimate the stock level of the snack table on a scale of 0–100. VLM response: The snack table appears to be well-stocked, with a variety of snacks and treats. The shelves are filled with an assortment of items, including chips, cookies, and other snacks. The table is also neatly organized, with each item placed in a designated spot. Based on the visual, I would rate the stock level at around 80-90, indicating that the table is well-stocked and ready to cater to a crowd. The model rates it as 80–90% stocked even though it’s quite clear to the human eye that the stock level is low. From the model’s perspective, it does not know what the table looked like beforehand when looking at a single image. Using the Cosmos Nemotron 34B multi-image capability, we provided another image of the snack table at a 100% stock level (Figure 4), giving  the model extra context to produce a more accurate estimate. Figure 4. In-context learning input to VLM for retail stock level detection Prompt: First compare and contrast the stock level of the two images. Then generate an estimate for each image of the stock level on a scale of 0–100%. VLM response: The first image shows a well-stocked display with a variety of snacks, while the second image shows a more sparse display with only a few items. Based on the visual comparison, I estimate that the first image has a stock level of around 80-90%, while the second image has a stock level of around 20-30%. By providing the additional image as a reference, the VLM now estimates the stock level of the second image at 20–30%, which is significantly more accurate. Multi-image input can also be used in more complex multimodal RAG pipelines where example images are dynamically added to the prompt based on the user’s input. Multi-image VLMs can also be used for basic video understanding by sampling a small set of frames from a video and providing it to the VLM. However, many VLMs that accept multi-image input have limited context windows (10–20 frames) and may not have been trained on video data. To truly understand video input, we recommend using a VLM trained on video-specific datasets and a long context window. Video understanding The latest VLMs have full video understanding capabilities with large context lengths, frame compression and video training data. This enables VLMs to ingest longer videos and have improved temporal understanding: the ability to understand actions and trends over time. The first example showed how to use a single frame from a livestream to detect a fire but that same method wouldn’t be able to tell you if the fire is growing or getting smaller because it lacks the temporal context. A VLM with long context and video understanding processes many frames across time to understand if the fire is growing or not. 
Here are some examples from the Cosmos Nemotron model families. Sequential visual understanding VLMs with sequential understanding relies on processing frames in a sequence, enabling them to capture the progression of actions and events over time. Unlike multi-image models, which process a limited number of frames without explicit temporal connections, sequential video models incorporate temporal dynamics by analyzing frames in an orderly sequence. This is helpful in situations where you want to capture the causality of events in a video, as it enables the model to track the progression of actions over time and establish cause-effect relationships. The next few examples are based on the following video where a worker drops a box in a warehouse. We discuss a few prompts and responses. Figure 5. Input warehouse video to the VLM for scene understanding Prompt: What happened in this video? VLM response: The video shows a warehouse worker in a yellow safety vest and hard hat performing various tasks. The worker is seen walking through the warehouse, carrying boxes, and interacting with the shelving units. The warehouse has green metal shelving with labels like "C", "D", "E", and "F" on the aisles. The worker appears to be in the process of organizing or retrieving items from the shelves. At one point, they're seen bending down to pick up a box from the floor. The worker's movements suggest they're actively engaged in their duties, possibly sorting, stocking, or preparing items for shipment. The warehouse environment is well-lit, with fluorescent lighting illuminating the space. The worker's safety gear, including the reflective vest and hard hat, indicates adherence to safety protocols in this industrial setting. Overall, the video captures a typical day in the life of a warehouse worker, showcasing the physical nature of the job and the organized layout of the storage facility. The model gave a detailed response with specifics about the environment such as aisle labels and lighting conditions but failed to correctly identify that the worker dropped a box. Now try to ask this question directly. Prompt: Did the worker drop any box? VLM response: Yes, the worker did drop a box. In the video, we can see a box lying on the floor in front of the worker. This suggests that the worker has accidentally dropped the box while handling it. The VLM is now able to deduce that the worker might have dropped the box. This suggests that generic prompts might not capture all the nuances correctly unless you direct the VLM to look for something specific. Modify the prompt a little. Prompt: Elaborate on the visual and narrative elements of this warehouse video in detail. Highlight all anomalies or hazards if present. VLM response: The video captures a warehouse with tall metal shelving units labeled with letters 'C', 'D', 'E', and 'F'. The shelves are stocked with various items, including boxes and packages. The floor is concrete, and the lighting is artificial, typical of an indoor industrial setting. A worker wearing a high-visibility vest, a hard hat, and dark pants is seen walking down the aisle between the shelves. The worker is carrying a cardboard box and then drops the box on the floor. The model is now capturing all the important details of the scene along with important events, such as workers dropping a box. The detailed response is useful if you want to perform Q&A with the video later. 
VLMs with video understanding are also able to address directional questions, as they can understand actions over time. Prompt: From which direction did the worker enter the scene? VLM response: The worker entered the scene from the right side. However, these models often lack precise temporal localization, which can limit their ability to pinpoint when specific events occur. Prompt: When did the worker drop the box? VLM response: The worker dropped the box in the second image. Temporal localization VLMs incorporating Localization Interpretable Temporal Attention (LITA) or similar temporal localization techniques elevate video understanding by explicitly learning when and where critical events occur. The models achieve this by introducing temporal attention mechanisms that focus on key segments of a video, aligning visual and textual modalities more effectively over time. This helps answer the “when” questions in the videos. Prompt: When did the worker drop the box? VLM response: 0:04-0:06 The worker dropped the box between 4s and 6s. During this time, the worker is seen dropping the box on the ground. The optimal prompt format for a VLM depends on the model’s architecture and the nature of the caption pairs used during training. Different training datasets influence how a VLM interprets the prompt. Conclusion This post walked through how VLMs have evolved from supporting only single-image input to being capable of complex temporal reasoning on long video inputs. To get started with VLMs, visit build.nvidia.com and try out some of the prompts shown in this post. For technical questions, see the Visual AI Agent forum . VLMs can be used to build various visual AI agents. Explore one of the sample vision AI workflows on the /NVIDIA/metropolis-nim-workflows GitHub repo with no GPU required. To build a video analytics AI agent, try the AI blueprint for video search and summarization . For more information about VLMs and visual AI agents, register for the upcoming Vision for All: Unlocking Video Analytics with AI Agents webinar. For more information about LLM prompting, see An Introduction to Large Language Models: Prompt Engineering and P-Tuning .
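For reference, the single-image JSON prompt from the first example can be sent programmatically through an OpenAI-compatible endpoint. The sketch below is hedged: the endpoint URL, model name, and especially the multimodal message schema vary between deployments, so treat them as assumptions and check the model card for the exact format.

```python
# Hypothetical sketch of the single-image JSON prompt from the post, sent to an
# OpenAI-compatible VLM endpoint. Endpoint URL, model name, and the multimodal
# message schema are assumptions; consult the model card for the exact format.
import base64
import os
from openai import OpenAI

client = OpenAI(base_url="https://integrate.api.nvidia.com/v1",
                api_key=os.environ["NVIDIA_API_KEY"])

with open("fire_scene.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

prompt = ("Is there a fire truck? Is there a fire? Are there firefighters? "
          "Output the answer to each question in JSON format.")

resp = client.chat.completions.create(
    model="meta/llama-3.2-90b-vision-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
    max_tokens=256,
)
print(resp.choices[0].message.content)   # parse this JSON downstream
```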
https://developer.nvidia.com/zh-cn/blog/vision-language-model-prompt-engineering-guide-for-image-and-video-understanding/
图像和视频理解的视觉语言模型提示工程实践指南
视觉语言模型 (VLMs) 正在以极快的速度发展。2020 年,首批 VLMs 通过使用视觉编码器将视觉理解引入大语言模型 (LLMs) ,彻底改变了生成式 AI 格局。这些初始 VLMs 能力有限,只能理解文本和单张图像输入。 几年后,VLM 现在能够理解多图像和视频输入,以执行高级视觉语言任务,例如视觉问答 (VQA)、字幕、搜索和摘要。 图 1、NVIDIA NeMo 中的模型自定义工具套件 通过调整提示和模型权重,可以提高特定用例的 VLM 准确性。可以使用 PEFT 等高效技术微调模型权重,但仍需要足够的数据和计算。 但是, prompt engineering 和上下文学习也可用于提高输出质量,并且与在运行时调整模型的文本输入一样简单。 本文将为您介绍 VLM 的演变,以及如何有效地提示 VLM 以用于了解单张图像、多张图像和视频的用例。如需了解有关 VLM 和视觉 AI 智能体的更多信息,请注册参加即将举办的 Vision for All:Unlocking Video Analytics with AI Agents 在线研讨会 。 单图像理解 从单张图像来看,VLM 能够识别、描述、分类和推理图像中的内容。一些 VLM 具有高级功能,例如为某些对象提供边界框坐标,以及翻译图像中的文本。这最适合需要对图像进行分类、添加字幕或以文本进行描述的用例。 单图像功能还可用于检测直播中发生的基本事件。这可以通过以下方式实现:从直播中一次采样一帧,并将其提供给 VLM,同时提供描述要查找的内容的提示。当要检测的事件类型较大、清晰可见且移动缓慢时,此方法的效果尤为出色。 系统还可以提示 VLM 以 JSON 等结构化格式输出,以便轻松解析响应并将其发送至数据库或通知服务。 例如,我们为 Llama-3.2-90b-vision-instruct 提供了以下提示和图像(图 2)。 图 2、将图像输入 VLM 以进行问答 提示:是否有消防车?是否有火灾?有消防人员吗?输出每个问题的答案,以 JSON 格式。 VLM 响应: [ { "question": "Is there a fire truck?", "answer": "True" }, { "question": "Is there a fire?", "answer": "True" }, { "question": "Are there firefighters?", "answer": "True" } ] 对于每个问题,VLM 能够准确回答并以结构化格式输出,以便用于下游任务。 虽然此方法允许 VLM 通过一次采样一帧来操作视频输入,但 VLM 本身无法理解视频的完整上下文,将其局限于简单的用例。一次对一帧进行采样无法为其提供检测随时间变化的动作或趋势所需的时间理解。这可以通过多图像和视频 VLM 得到改善。 多图像理解 许多 VLM 可以接受多张图像作为输入,允许模型比较、对比并从输入图像中学习。 VLM 通常在各种用例中都具有良好的零样本击能力。但是,在尝试从 VLM 获取有关特定领域用例的更精确答案时,VLM 可能会遇到困难,因为它们缺乏准确响应用户提示所需的完整上下文。 为了更好地理解这一点,我们使用 VLM 来估计产品的库存水平。这是一个重要的零售用例,因为它可以在商店中的产品运行不足时自动监控和提醒员工,以避免因缺货而损失的收入。 我们向 Cosmos Nemotron 34B 提供了一张表格 (图 3) 的单一图像,该表格应与多种零食产品一起充分库存,并要求其估算库存水平。 图 3. 用于零售库存水平检测的单图像 VLM 输入 提示:按 0 – 100 的比例估算零食表的库存量。 VLM 响应: The snack table appears to be well-stocked, with a variety of snacks and treats. The shelves are filled with an assortment of items, including chips, cookies, and other snacks. The table is also neatly organized, with each item placed in a designated spot. Based on the visual, I would rate the stock level at around 80-90, indicating that the table is well-stocked and ready to cater to a crowd. 该模型将其评为 80–90% 的库存,即使肉眼很清楚库存水平很低。从模型的角度来看,它不知道在查看单个图像之前表格的样子。 通过使用 Cosmos Nemotron 34B 多图像功能,我们提供了另一张 100%库存水平的零食表图像 (图 4),为模型提供了额外的背景信息,以便生成更准确的估计。 图 4. 为 VLM 提供情境学习输入,用于零售库存水平检测 提示:首先比较两张图像的库存水平。然后,按 0 – 100% 的比例对库存量的每张图像生成估计值。 VLM 响应: The first image shows a well-stocked display with a variety of snacks, while the second image shows a more sparse display with only a few items. Based on the visual comparison, I estimate that the first image has a stock level of around 80-90%, while the second image has a stock level of around 20-30%. 现在,通过提供额外的图像作为参考,VLM 可将第二张图像的库存量估算为 20–30%,准确度明显更高。 多图像输入还可用于更复杂的多模态 RAG 工作流,其中会根据用户的输入将示例图像动态添加到提示中。多图像 VLM 还可以通过从视频中采样一小部分帧并将其提供给 VLM 来用于基本的视频理解。 但是,许多接受多图像输入的 VLM 的上下文窗口有限(10–20 帧),并且可能未基于视频数据进行训练。为了真正理解视频输入,我们建议使用基于视频特定数据集训练的 VLM 和长上下文窗口。 视频理解 最新的 VLM 具有完整的视频理解功能,具有大上下文长度、帧压缩和视频训练数据。这使得 VLM 能够提取更长的视频,并提高时间理解能力:能够理解一段时间内的行动和趋势。 第一个示例展示了如何使用直播中的单帧来检测火灾,但由于缺乏时间背景,相同的方法无法告诉您火灾是在增大还是变小。 具有长上下文和视频理解的 VLM 会随着时间的推移处理多个帧,以了解火势是否在增长。以下是 Cosmos Nemotron 模型系列的一些示例。 顺序视觉理解 具有顺序理解的 VLMs 依赖于按序列处理帧,使其能够捕捉动作和事件随时间推移的进展。与多图像模型相比,它们在处理有限数量的帧时没有显式时间连接,而顺序视频模型则通过按顺序分析帧来整合时间动力学。 这在您想要在视频中捕捉事件因果关系的情况下非常有用,因为它使模型能够跟踪动作在一段时间内的进展情况,并建立因果关系。 接下来的几个示例基于以下视频,展示了一名工人在仓库中丢弃箱子的过程。我们将讨论一些提示和响应。 图 5、将仓库视频输入到 VLM 以进行场景理解 提示:本视频中发生了什么? VLM 响应: The video shows a warehouse worker in a yellow safety vest and hard hat performing various tasks. The worker is seen walking through the warehouse, carrying boxes, and interacting with the shelving units. The warehouse has green metal shelving with labels like "C", "D", "E", and "F" on the aisles. 
The worker appears to be in the process of organizing or retrieving items from the shelves. At one point, they're seen bending down to pick up a box from the floor. The worker's movements suggest they're actively engaged in their duties, possibly sorting, stocking, or preparing items for shipment. The warehouse environment is well-lit, with fluorescent lighting illuminating the space. The worker's safety gear, including the reflective vest and hard hat, indicates adherence to safety protocols in this industrial setting. Overall, the video captures a typical day in the life of a warehouse worker, showcasing the physical nature of the job and the organized layout of the storage facility. 该模型提供了详细的环境细节 (例如通道标签和照明条件),但无法正确识别工人掉落的箱子。现在,尝试直接问这个问题。Prompt:工作人员是否丢弃了任何箱子? VLM 响应: Yes, the worker did drop a box. In the video, we can see a box lying on the floor in front of the worker. This suggests that the worker has accidentally dropped the box while handling it. 现在,VLM 可以推断出工作者可能丢弃了箱子。这表明,除非您指示 VLM 查找特定内容,否则通用提示可能无法正确捕捉所有细微差别。稍微修改一下提示。 提示:详细介绍此仓库视频的视觉和叙事元素。突出显示所有异常或危险(如果存在)。 VLM 响应: The video captures a warehouse with tall metal shelving units labeled with letters 'C', 'D', 'E', and 'F'. The shelves are stocked with various items, including boxes and packages. The floor is concrete, and the lighting is artificial, typical of an indoor industrial setting. A worker wearing a high-visibility vest, a hard hat, and dark pants is seen walking down the aisle between the shelves. The worker is carrying a cardboard box and then drops the box on the floor. 现在,该模型可以捕捉场景的所有重要细节以及重要事件,例如工人丢箱。如果您想稍后在视频中执行 Q&A,详细的回答非常有用。具有视频理解功能的 VLM 还能够解决定向问题,因为它们可以理解随时间推移的动作。 提示:工作者是从哪个方向进入场景的? VLM 响应: The worker entered the scene from the right side. 但是,这些模型通常缺乏精确的时间定位,这可能会限制其确定特定事件何时发生的能力。 提示:工人是什么时候丢弃盒子的? VLM 响应: The worker dropped the box in the second image. 时间定位 结合 Localization Interpretable Temporal Attention (LITA) 或类似时间定位技术的 VLM 通过显式学习关键事件发生的时间和地点来提高视频理解能力。这些模型通过引入专注于视频关键片段的时间注意力机制,随着时间的推移更有效地调整视觉和文本模式来实现这一目标。这有助于回答视频中的“何时”问题。 提示: 工人是什么时候丢弃盒子的? VLM 响应: 0:04-0:06 The worker dropped the box between 4s and 6s. During this time, the worker is seen dropping the box on the ground. VLM 的最佳提示格式取决于模型的架构和训练期间使用的描述对的性质。不同的训练数据集会影响 VLM 解释提示的方式。 结束语 本文介绍了 VLM 如何从仅支持单图像输入发展为能够对长视频输入进行复杂的时间推理。要开始使用 VLM,请访问 build.nvidia.com 并尝试本博文中显示的一些提示。有关技术问题,请参阅 Visual AI Agent 论坛 。 VLM 可用于构建各种视觉 AI 智能体。无需 GPU,即可在 /NVIDIA/metropolis-nim-workflows GitHub 存储库上探索视觉 AI 工作流示例之一。要构建视频分析 AI 智能体,请试用 NVIDIA AI Blueprint 中的视频搜索和摘要蓝图。 如需了解有关 VLM 和视觉 AI 智能体的更多信息,请注册参加即将举办的 Vision for All:Unlocking Video Analytics with AI Agents 网络研讨会。 有关 LLM 提示的更多信息,请参阅《 An Introduction to Large Language Models: Prompt Engineering and P-Tuning 》。
https://developer.nvidia.com/blog/configurable-graph-based-task-solving-with-the-marco-multi-ai-agent-framework-for-chip-design/
Configurable Graph-Based Task Solving with the Marco Multi-AI Agent Framework for Chip Design
Chip and hardware design presents numerous challenges stemming from its complexity and advancing technologies. These challenges result in longer turn-around time (TAT) for optimizing performance, power, area, and cost (PPAC) during synthesis, verification, physical design, and reliability loops. Large language models (LLMs) have shown a remarkable capacity to comprehend and generate natural language at a massive scale, leading to many potential applications and benefits across various domains. Successful LLM-based AI agents for hardware design can drastically reduce TAT, leading to faster product cycles, lower costs, improved design reliability, and reduced risk of costly errors. Figure 1. Marco framework overview Marco: Configurable Graph-Based Task Solving and Multi-AI Agents Framework We introduce the proposed Marco framework , which encompasses graph-based task solving, agent configurations for sub-tasks, and skill/tool configurations for each AI agent in real time. Figure 1 showed dynamic and static configurable graph-based task solving, which is flexibly integrated with chip-design knowledge (for example, circuits, timing, and so on). In the task graph, each node represents a sub-task, and each edge represents the execution or knowledge relationship between nodes. For solving each sub-task, you use Autogen to configure single-AI or multi-AI agent with a knowledge database, tools, and memory. Table 1 summarizes the task graph, agent, and LLM configurations of the Marco framework for various agents. VerilogCoder and the MCMM timing analysis agent use a dynamic task graph to complete the specification-to-RTL and extract key takeaways of timing reports, respectively. The timing path debug agent finds the problematic net, wire, and constraints through a static timing debugging task graph (Figure 1). Agent works Task category Marco framework configuration Task graph Sub-task agent config. Customized tools RTLFixer Code Syntax Fixing N/A Single-AI RTL Syntax Error RAG Database Standard Cell Layout Opt. Optimization N/A Single-AI Cluster Evaluator, Netlist Traverse Tool MCMM Timing Analysis(Partition/Block-Level) Summary & Anomaly Identification Dynamic Multi-AI Timing Distribution Calculator, Timing Metric Comparator DRC Coder Code Generation N/A Multi-Modality & Multi-AI Foundry Rule Analysis, Layout DRV Analysis, DRC Code Evaluation Timing Path Debug(Path-Level) Summary & Anomaly Identification Static Hierarchical Multi-AI Agentic Timing Report Retrieval VerilogCoder Code Generation Dynamic Multi-AI TCRG Retrieval Tool, AST-Based Waveform Tracing Tool Table 1. Task graph, agent configuration, customized tool of the Marco framework for various autonomous agent implementations for hardware design tasks In Table 1, RTLFixer and Standard Cell Layout Optimization agents (that is, a single-AI configuration) are supported using the Marco framework. The rest of the multi-AI agents are implemented on the Marco framework. For RTLFixer , LLM agent for standard cell layout optimization , and DRC Coder , we used single-AI or multi-AI agent configurations with customized tools, memory, and domain knowledge. Automated hardware description languages code generation One key area where autonomous agents are making an impact is in the generation of hardware description languages (HDLs), such as Verilog. Due to the growing complexity of VLSI design, writing Verilog and VHDL is time-consuming and prone to bugs, necessitating multiple iterations for debugging functional correctness. 
Consequently, reducing design costs and designer effort for completing hardware specifications has emerged as a critical need. LLMs can be used to generate Verilog code from natural language descriptions. However, LLMs often struggle to produce code that is both syntactically and functionally correct. Syntax correctness RTLFixer uses a combination of retrieval-augmented generation (RAG) and ReAct prompting to enable LLMs to iteratively debug and fix syntax errors. RAG incorporates a database of human expert guidance to provide context for error correction. ReAct enables the LLM to reason about the error, plan a fix, and act on the plan. Functional correctness VerilogCoder is a multi-agent system that incorporates a task planner and an abstract syntax tree (AST)-based waveform-tracing tool to generate and debug Verilog code. It employs a task and circuit relation graph (TCRG) to break down a task into manageable sub-tasks and link signal transitions to each step (Figure 2). Figure 2. Task-driven circuit relation graph retrieval agent reasoning and interacting with the developed TCRG retrieval tool to enrich the task with the relevant circuit and signal descriptions An AST-based waveform tracing tool assists the LLM agent in identifying and fixing functional errors by back-tracing signal waveforms. VerilogCoder achieves a 94.2% success rate on the VerilogEval-Human v2 benchmark, demonstrating a significant improvement over previous methods. Video 1. Autonomously Complete Verilog Code with TCRG Planning and AST-Based Waveform Tracing Tools In Video 1, the demonstration of VerilogCoder autonomously completes functionally correct Verilog code using TCRG planning and AST-based waveform tracing tools. Automated DRC layout code generation DRC-Coder uses multiple autonomous agents with vision capabilities and specialized DRC and Layout DRV analysis tools to generate DRC code. The system interprets design rules from textual descriptions, visual illustrations, and layout representations. The multiple LLM agents include a planner that interprets design rules, and a programmer that translates the rules into executable code. DRC-Coder incorporates an auto-debugging process, which uses feedback from the code evaluation to refine the generated code. Video 2. A Demonstration of DRC-Coder in Chip Design In Video 2, the demonstration of DRC-Coder generates DRC code that achieves perfect F1 scores on hundreds of testing layouts by leveraging a layout analysis tool, an auto-debugging process, and the capabilities of multi-modality and multi-AI agents. DRC-Coder achieved a perfect F1 score of 1.000 in generating DRC codes for a sub-3nm technology node, outperforming standard prompting techniques. The proposed automated agentic approach significantly reduces the time required for DRC code generation, from weeks to an average of four minutes per design rule. Standard cell layout optimization LLM agent for standard cell layout optimization proposes using the natural language and reasoning ability of an LLM to generate high-quality cluster constraints incrementally to optimize the cell layout PPA and debug the routability with ReAct prompting. The system uses net information and cell layout analysis to group MOSFET devices into clusters. The AI agent not only achieves up to 19.4% smaller cell area, but also generates 23.5% more LVS and DRC clean cell layouts than the Transformer-based device clustering approach on a set of sequential cells in the industrial 2 nm technology node. 
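Table 1 pairs each sub-task agent with customized tools configured through Autogen. As a rough, hypothetical illustration of that pattern (this is not the Marco codebase; the tool body, netlist contents, and LLM endpoint are invented placeholders), a single tool-using sub-task agent could be wired up as follows:

```python
# Rough illustration of the "sub-task agent + customized tool" pattern from
# Table 1 using AutoGen. The tool body, netlist data, and LLM config are
# hypothetical placeholders, not the Marco framework's actual implementation.
from autogen import AssistantAgent, UserProxyAgent, register_function

llm_config = {"config_list": [{"model": "meta/llama-3.1-70b-instruct",
                               "base_url": "http://localhost:8000/v1",
                               "api_key": "local"}]}   # assumed local endpoint

def traverse_netlist(cell_name: str) -> str:
    """Hypothetical netlist-traverse tool: list devices connected to a cell."""
    fake_netlist = {"INVX1": ["M0 (pmos)", "M1 (nmos)"]}
    return ", ".join(fake_netlist.get(cell_name, ["<cell not found>"]))

layout_agent = AssistantAgent(
    name="cell_layout_optimizer",
    system_message="You cluster MOSFET devices to minimize cell area. "
                   "Use the netlist tool before proposing clusters.",
    llm_config=llm_config,
)
executor = UserProxyAgent(name="tool_executor", human_input_mode="NEVER",
                          max_consecutive_auto_reply=2,
                          code_execution_config=False)

register_function(traverse_netlist, caller=layout_agent, executor=executor,
                  name="traverse_netlist",
                  description="Return the devices connected to a standard cell.")

executor.initiate_chat(layout_agent,
                       message="Propose device clusters for cell INVX1.")
```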
Multi-corner multi-mode timing report debug and analysis

The multi-corner multi-mode (MCMM) timing analysis agent uses a dynamic task graph to extract the key takeaways from timing reports. The MCMM timing analysis agent achieves an average score of 8.33 out of 10, based on evaluations by experienced engineers on a set of industrial cases, and delivers approximately 60x speedups compared to human engineers (Figure 3).

Figure 3. MCMM timing analysis agent results

The timing path debug agent finds the problematic net, wire, and constraints through the static timing debugging task graph (Figure 1). As shown in Table 2, the timing path debug agent resolves 86% of path-level debugging tasks, whereas the standard task solving approach fails to resolve any of the tasks.

Multi-report task description | Required analyzed sub-tasks | Standard task solving | Timing path debug agent
Find missing clk signals that have no rise/fall information | max, clk | X | V
Identify pairs of nets with high RC mismatch | max, wire | X | V
Detect unusual constraints between victim and its aggressors | max, xtalk, LC | X | V
Identify unusual RC values between victim and its aggressors | max, wire, xtalk, LC | X | V
Find the constraints of slowest stages with highest RC values | max, wire, xtalk, LC | X | V
Compare each timing table for number of stages, point values, and timing mismatch | max | X | X
Task M2 and Task M3 for specific stages in list of paths | max, wire, xtalk, LC | X | V
Avg pass-rate | | 0% | 86%

Table 2. Pass-rate (%) of the timing path debug agent with static task graph solving, and of naïve standard task solving without task graph information (X = failed to solve the task; V = solved the task successfully)

Conclusion

The proposed Marco framework enables more flexible and domain-specialized methods for solving real-time hardware design tasks. By using task graphs and flexible single-AI and multi-AI agent configurations with domain-specific tools and knowledge, we developed various agents for tasks such as cell layout optimization, Verilog syntax error fixing, Verilog and DRC code generation, and timing debugging on problematic blocks, nets, and wires. The experimental results show impressive performance and efficiency benefits from utilizing collaborative LLM-based agents for chip design.

The future directions for agent research on hardware design include the following:
Training LLMs with high-quality hardware design data
Improving LLM-based agents' ability for hardware signal and waveform debugging
Incorporating PPA metrics into the design flow
Developing more efficient self-learning techniques and memory systems for LLM agents for solving more complex hardware tasks

For more papers and projects on electronic design automation, see the NVIDIA Design Automation Research Group page. For those interested in the technologies highlighted in the post, here's a list of relevant papers:
RTLFixer: Automatically Fixing RTL Syntax Errors with Large Language Models (/NVlabs/RTLFixer GitHub repo)
VerilogCoder: Autonomous Verilog Coding Agents with Graph-based Planning and Abstract Syntax Tree (AST)-based Waveform Tracing Tool (/NVlabs/VerilogCoder GitHub repo)
DRC-Coder: Automated DRC Checker Code Generation using LLM Autonomous Agent
Large Language Model (LLM) for Standard Cell Layout Design Optimization
https://developer.nvidia.com/blog/defining-llm-red-teaming/
Defining LLM Red Teaming
LLM red teaming is an activity in which people provide inputs to generative AI technologies, such as large language models (LLMs), to see if the outputs can be made to deviate from acceptable standards. This use of LLMs began in 2023 and has rapidly evolved to become a common industry practice and a cornerstone of trustworthy AI. How can we standardize and define LLM red teaming?

Researchers affiliated with NVIDIA, the University of Washington, the Center for Human-Compatible AI, and the IT University of Copenhagen conducted a study on red teaming in practice, Summon a demon and bind it: A grounded theory of LLM red teaming (published in PLOS One).

Because the goal was to define and understand a relatively novel activity, the study uses a grounded theory approach, taking as evidence interviews with dozens of practitioners across thousands of minutes of video recordings. We spoke with security professionals, policy researchers, and scientists, as well as people who conducted this work non-professionally, such as academic researchers, hobbyists, and even artists, to understand the common themes, motivations, and strategies in attacking LLMs.

What defines LLM red teaming in practice?

LLM red teaming has the following defining characteristics:
It's limit-seeking: Red teamers find boundaries and explore limits in system behavior.
It's never malicious: People doing red teaming are not interested in doing harm; in fact, quite the opposite.
It's manual: Being a creative and playful practice, the parts of red teaming that can be automated are often most useful for giving human red teamers insight for their work.
It's a team effort: Practitioners find inspiration in each others' techniques and prompts, and the norm is to respect fellow practitioners' work.
It's approached with an alchemist mindset: We found that red teamers tend to abandon rationalizations about models and their behavior and instead embrace the chaotic and unknown nature of the work.

These characteristics feed into NVIDIA's definition of LLM red teaming, which we discuss later in this post.

In industry, there's often a rough division between cybersecurity red teaming and content red teaming:
Cybersecurity red teaming tends to be scoped to the technology stack leading up to the point of inference output, and the technologies used to transmit and render this output.
Content red teaming, on the other hand, is scoped to the content produced at model inference time.

Why do people red team LLMs?

People who attack LLMs have a broad range of motivations. Some of these are external. It may be part of their job or a regulatory requirement. Social systems can also play a role, with people discovering LLM vulnerabilities for social media content or to participate in a closed group. Others are intrinsic, as many people do it for fun, out of curiosity, or based on concerns about model behavior.

At NVIDIA, we red team LLMs as part of our Trustworthy AI process to assess risks before release. If the model doesn't perform up to expectations, we postpone the release until the model is updated and revised to perform better.

How do people approach this activity?

LLM red teaming consists of using strategies to reach goals when conversationally attacking the target. Each kind of strategy is decomposed into different techniques. A technique might yield just two or three adversarial inputs against the target, or a single input might draw upon multiple techniques.
We identified the following overall types of red team strategies:
Language: Modulating the surface form of words rather than semantics, such as using an encoding scheme.
Rhetorical: Relying on argumentation or manipulation.
Possible worlds: Trying to shift the context of the interaction.
Fictionalizing: Shifting the basis of operation to a fictional world or set of conditions.
Stratagems: Using meta-strategies that affect how one interacts with the LLM at a higher level.

For more information, see Summon a demon and bind it: A grounded theory of LLM red teaming, which lists and describes 35 techniques across twelve different strategies.

What can LLM red teaming reveal?

The goal of LLM red teaming isn't to quantify security. Rather, the focus is on exploration, and on finding which phenomena and behaviors a red teamer can get out of the LLM. Put another way, if we get a failure just one time, then the failure is possible.

Another thing that distinguishes red teaming from benchmarks is the focus on novelty. For both cybersecurity and content-based red teaming, the possible range of attacks is infinite. For cybersecurity, that's because new attack methods are constantly in development. For content, it's because the mode of interaction is through text, which can be infinitely rearranged and augmented. So, repeatability is not interesting when discovering new security weaknesses and vulnerabilities.

While it makes sense to test any model for failures using a battery of existing prompts, as a benchmark does, this can never indicate security. It just reveals weaknesses. It's possible to patch deployed models to be secure against prompts in public benchmarks, or to use this data during alignment, so getting full marks on a benchmark doesn't show that a model is secure. However, getting low marks on a security benchmark does still indicate the presence of weaknesses.

In the security context, to test a model rigorously, you should go beyond public knowledge and interact closely with the model, trying to find novel ways to breach a particular LLM.

In this sense, LLM red teaming is a classic instance of an artisanal activity. Red teamers use their human expertise and intuition while interacting with the target. For example, they might sense that a model is close to giving a mitigation message (for example, "As an AI, I cannot...."), and they might respond to this by backing off their current line of requests, or by starting a fresh chat session with a slightly different tack. Or, a red teamer might sense that a model is close to yielding and so keep pushing and slightly varying their request until they find a way through and get the model to fail in the target way. They add what worked and what didn't to their conscious expertise and unconscious intuition and then share it with other red teamers. This makes red teaming a distinctly human activity that complements security benchmarking.

How do people use knowledge that comes from LLM red teaming?

Red teamers are often looking for what they describe as harms that might be presented by an LLM. There's a broad range of definitions of harm. A red teaming exercise could focus on one of many goals or targets, which could depend on deployment context, user base, data handled, or other factors. Red teamers may also pay attention to the level of complexity required to get a "break." A harm discovered after a single, one-sentence interaction with an LLM often suggests greater concern than a harm surfaced following complex, multi-turn manipulation.
Sometimes, the goal of red teaming is curiosity, a byproduct of which might be content for the red teamer to share, in their organization or publicly. This both builds the expertise and intuition of the individual and raises the community level of knowledge. It's common for traditional cybersecurity knowledge to be shared informally on social media, and this also applies to LLM security.

At NVIDIA, human red teaming is a critical part of how we make model-release decisions. We have a skilled LLM red team and a skilled AI red team, who both check our models and infrastructure for vulnerabilities and behaviors that were not caught elsewhere. This helps us in three ways:
It enables us to make informed decisions about whether we will release models.
It builds a pool of high-level skill at the frontier of LLM red teaming.
It gives us the confidence that we're making the best effort and getting good results with our AI security.

The results from red teaming go into NVIDIA's enhanced model documentation format, Model Card++.

Some parts of LLM security can be tested automatically. After an exploit has been found in one case, it can be stored and used to test other LLMs, so that we don't make the same mistake again. We do exactly this in NVIDIA garak (Generative AI Red-Teaming and Assessment Kit). Developers ready to test the security of their LLM deployments can run the open-source NVIDIA garak against almost any model and get a report indicating susceptibility to over 120 different categories of vulnerability.

Knowledge about how to break a model can be risky in the wrong hands. When an exploit has been found, the best thing to do is contact the model owner and give them a chance to respond and fix the weakness. This process is called coordinated vulnerability disclosure, and it is also a common practice for LLM vulnerabilities.

NVIDIA's definition of LLM red teaming

We see LLM red teaming as an instance of AI red teaming. Our definition is developed by the NVIDIA AI Red Team and takes inspiration both from this research on LLM red teaming in practice and from the definition used by the Association for Computational Linguistics' SIG on NLP Security (SIGSEC). Take care to specify the specific subdomain of red teaming, as different audiences often make different assumptions about which form is being referred to.

LLM red teaming: Systematically testing AI models and systems containing AI models to identify vulnerabilities and behaviors that pose threats or risks to the systems running or using those models. It can be subdivided into two areas: security red teaming and content-based red teaming.

Security red teaming

Assessing the robustness of the model and the system containing the model to attacks impacting traditional security properties (for example, confidentiality, integrity, and availability), either of the model itself or the system containing the model.

Attacks such as adversarial inputs, model extraction, training data inference and extraction, or prompt injection used to violate a security property are typically evaluated by a security red team. These activities typically require teams with a traditional security background to leverage findings and evaluate their impact.

Content-based red teaming

Assessing the model for unwanted behavior under adversarial manipulation, producing outputs that violate some pre-specified behavior contract for the model, either explicit (for example, a model card) or implicit.
These behaviors may include outputs that are offensive, unwanted, or unsafe, including biased or bigoted productions, instructions on unsafe or illegal activities, making promises on behalf of the model owner, or making decisions based on protected characteristics. Common techniques involve various forms of jailbreaking and guardrail evasion. These activities typically require the support of an ethics team, a legal team, or other similar domain experts to assess the impact of findings.

Improving LLM security and safety

NVIDIA NeMo Guardrails is a scalable platform for defining, orchestrating, and enforcing AI guardrails for content safety, jailbreak prevention, and more in AI agents and other generative AI applications. NeMo Guardrails and the NVIDIA garak toolkit are now available for developers and enterprises. Enterprises can benefit from high-grade safety and security with NVIDIA AI Enterprise.

Meet the experts at GTC

The NVIDIA scientists behind this and other works in AI security will be at GTC 2025. You can hear a panel discussion on navigating critical challenges in AI governance, where we discuss practical approaches to building responsible AI systems. Our cybersecurity AI and security teams will present an exclusive, in-depth session designed to transform your AI agentic workflows securely from blueprint to production. And our experts will be available to answer all your questions on building trustworthy AI systems.

Acknowledgements

Thanks to Nanna Inie, Jonathan Stray, and Leon Derczynski for their work on the paper Summon a demon and bind it: A grounded theory of LLM red teaming, published in PLOS One.
https://developer.nvidia.com/blog/agentic-autonomy-levels-and-security/
Agentic Autonomy Levels and Security
Agentic workflows are the next evolution in AI-powered tools. They enable developers to chain multiple AI models together to perform complex activities, enable AI models to use tools to access additional data or automate user actions, and enable AI models to operate autonomously, analyzing and performing complex tasks with a minimum of human involvement or interaction.

Because of their power, agentic workflows also present an element of risk. The most common model at the core of agentic systems is still some variety of LLM, which remains vulnerable to prompt injection if untrusted data can be introduced into the system.

To help assess and mitigate those vulnerabilities, NVIDIA presents an Agentic Autonomy framework, which we use for the following purposes:
Understand risks associated with increasing complexity of AI workflows
Help model and mitigate the risks posed by such workflows
Introduce how we use it to model threats to agentic systems

Explore how NVIDIA uses this framework for security for AI and more cybersecurity use cases at NVIDIA GTC.

Manipulating autonomous systems

In practice, exploitation of AI-powered applications requires two key components:
An adversary must be able to get their data (read: attack) into the system through some mechanism.
There must be a downstream effect that their malicious data can trigger.

When the AI component of the system is an LLM, this is commonly referred to as either direct prompt injection (the adversary and user are the same person) or indirect prompt injection (the adversary and the user could be different people). However, similar threats exist for other types of AI models used in autonomous AI-powered applications. Both flavors of prompt injection are rooted in the lack of separation between the control and data planes in the LLM architecture.

Direct prompt injection is commonly seen in examples such as "Do Anything Now" (or DAN) prompts, which attempt to subvert the safety and content filters trained into the model and enable it to produce objectionable or unwanted content. These attacks typically affect only the active user's session, and so tend to have limited impact.

Indirect prompt injection, in which a threat actor causes their data to be included in another user's session and thus takes control of the LLM's behavior and productions, is more serious. This is often accomplished by targeting data sources used by retrieval-augmented generation (RAG) tools, including internet search, such that documents containing malicious instructions are unknowingly included in the current user's session. When the capability to manipulate another user's session (particularly LLM outputs) is combined with the use of tools, a significant number of new potential risks present themselves.

When potentially untrusted data enters an agentic system, any downstream actions taken by that system become potentially adversary-controlled.

Security and complexity in AI autonomy

Even before "agentic" AI became a distinct class of product offerings, the orchestration of AI workloads in sequences was commonplace. Even a simple flow, such as an endpoint security product routing a sample to the correct AI-powered analysis engine depending on file format, is arguably an example of such a workflow.

Because workflows such as these are deterministic, it's straightforward to enumerate all possible paths, map out data flows, and isolate untrusted data from potentially risky actions that might be impacted by attacker-controlled data.
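To make that enumeration concrete, here is a minimal sketch (not part of the framework presented in this post) of how a deterministic workflow could be modeled as a small graph and checked for flows from untrusted sources to sensitive sinks. The graph, node names, and trust labels are hypothetical examples.

```python
# Minimal sketch: model a deterministic workflow as a DAG and enumerate every
# path that carries untrusted data into a sensitive tool. The graph, node
# names, and trust labels below are hypothetical examples.
from itertools import chain

workflow = {                      # edges: node -> downstream nodes
    "web_search": ["llm_summarize"],
    "user_prompt": ["llm_summarize"],
    "llm_summarize": ["send_email", "render_markdown"],
    "send_email": [],
    "render_markdown": [],
}
untrusted_sources = {"web_search"}   # data an attacker could influence
sensitive_sinks = {"send_email"}     # tools that take real-world actions

def paths_from(node, graph):
    """Yield every path starting at `node` (the graph is assumed acyclic)."""
    children = graph.get(node, [])
    if not children:
        yield [node]
        return
    for child in children:
        for path in paths_from(child, graph):
            yield [node] + path

# Flag any path where untrusted data can reach a sensitive sink.
risky = [p for p in chain.from_iterable(paths_from(s, workflow) for s in untrusted_sources)
         if any(step in sensitive_sinks for step in p)]
for path in risky:
    print("needs isolation, sanitization, or manual approval:", " -> ".join(path))
```

For fully autonomous systems, the same exhaustive check is not tractable because cycles make the set of paths unbounded, which is why the later sections of this post recommend runtime controls such as taint tracing and time-of-use approval instead.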
As the industry moves to systems that have more internal decision-making capabilities, higher degrees of tool use, and more complex interactions between model components, the number of potential data flow paths increases exponentially, making threat modeling more difficult.

The following classification, ordering systems from least autonomous to most, distinguishes between the different degrees of autonomy afforded by different architectures and helps assess the corresponding risks.

Autonomy level | Description | Example
0 – Inference API | A single user request results in a single inference call to a single model. | An NVIDIA NIM microservice serving a single model
1 – Deterministic system | A single user request triggers more than one inference request, optionally to more than one model, in a predetermined order that does not depend on either user input or inference results. | NVIDIA Generative Virtual Screening for Drug Discovery Blueprint
2 – Weakly autonomous system | A single user request triggers more than one inference request. An AI model can determine if or how to call plugins or perform additional inference at fixed predetermined decision points. | Build an Enterprise RAG Pipeline Blueprint
3 – Fully autonomous system | In response to a user request, the AI model can freely decide if, when, or how to call plugins or other AI models, or to revise its own plan freely, including deciding when to return control to the user. | NVIDIA Vulnerability Analysis for Container Security Blueprint, "BabyAGI", computer use agents

Table 1. System autonomy level classification

Level 1

Level 1 is a linear chain of calls, where the output of one AI call or tool response is conveyed to the next step in an entirely deterministic manner. The complete flow of data through the system is known in advance.

Figure 1. First level of autonomy

Level 2

In Level 2, the output from the AI model may be sent along different paths through the workflow in a data-dependent manner. While every execution path can be traced, it's not known until the workflow is executed which execution path is used.

Figure 2. Linear chain of AI calls and tool responses

The workflow forms a directed acyclic graph, in that there is no path through the workflow that could potentially form a loop or cycle. All possible paths can be finitely enumerated. Most LLM routers fall into this level.

Level 3

For Level 3, the number of potential execution paths grows exponentially with the number of execution steps in the workflow. Tools or other AI models may be invoked as the workflow progresses.

Figure 3. Output from the AI model goes along different paths

The presence of a cycle within the directed graph means that an exponentially increasing (and, at least in theory, potentially infinite) number of paths through the graph exists. It's generally not tractable to enumerate all possible paths ahead of time, nor to predict which specific path will be used for a particular input.

Separation of security boundary analysis and tool security

Our early experience with analyzing and securing agentic systems in the field led us to realize that the risk associated with these systems lies mostly in the tools or plugins available to those systems. In the absence of a tool or plugin that can perform sensitive or physical actions, the primary risk posed by manipulation of the AI component is misinformation, regardless of the degree of complexity of the workflow.
This observation drives the primary difference between this framework, which treats security boundaries as distinct from tool security, and other frameworks, such as the one proposed by HuggingFace in their smolagents library (Introduction to Agents). Some levels align closely. Their simple processor is approximately a deterministic system, and their multi-step agent maps closely onto a fully autonomous system.

However, this framework focuses on the difficulty of identifying security boundaries given the degree of autonomy expressed by an AI component in describing program flow. HuggingFace's description of agency, by combining the risks of tool use with the complexity of program flow, provides a quick rubric for judging systemic risks at the level of the system. This framework's approach of separating the risks of tool calling and tool use from the difficulty of drawing security boundaries supports more detailed threat modeling of complex flows and potential mitigations.

In the proposed framework, separating the following concerns enables you to more concretely assess the risk posed by an agentic workflow:
Security boundaries with respect to the flow of untrusted data through the system
Security properties of specific functions or tools that may be called by the agentic system

Within a simple deterministic system with clear security boundaries, even a sensitive plugin may be fairly low-risk if you can clearly separate it from untrusted input. In a similar manner, even a fully agentic system may be relatively low-risk as long as no sensitive tools are available to it and the output of the system is appropriately validated before use. The combination of a more complex workflow with sensitive tools, however, demands further analysis to ensure secure operation.

Threat modeling under this framework

Increasing autonomy levels do not inherently represent increasing risk. Instead, they represent an increasing lack of determinism and predictability in the system behavior, which makes it more difficult to assess the impact of untrusted data.

Risk associated with agentic systems is located largely in the tools or plugins that the system has access to that can perform sensitive actions, including the following examples:
Completing a purchase
Sending an email
Physical actions such as moving a robot or setting a thermostat

Broadly speaking, anything that takes an action on behalf of a user that requires their delegated authorization to complete, or anything that results in a physical change in a system, should be treated as potentially sensitive and analyzed for its potential to act upon or be influenced by untrusted data.

Mitigation of this tool risk relies in large part on blocking attackers from injecting malicious data into plugins, which becomes significantly more difficult with each increase in autonomy level.

Possible downstream effects are much more dependent on the details of the system and what plugins are being used, but at a minimum, a threat actor who can feed malicious data to the system must be presumed to be able to at least influence, if not control outright, the responses of the system.

If tools or plugins are being used, the potential threat increases significantly. A bad actor who can control the output of an AI model embedded in a system that uses tools might also be able to control which plugins are used and what actions those tools take.
This can lead to a range of threats:
Data exfiltration, for example, through Markdown rendering plugins
Remote code execution, for example, through plugins that run LLM-provided code
The risk of physical harm, if the system has control or influence over some sort of physical system

As you climb the autonomy hierarchy, you need more careful analysis and management of potentially untrusted data (Figures 4 and 5).

A Level 0 system, with no vectors for untrusted data, is relatively straightforward from this perspective.

A Level 1 system, with a predetermined workflow, always accesses the same data sources, models, and tools in the same sequence, making it straightforward to determine if untrusted data can be introduced into the workflow. The designer can either isolate sensitive functionality from the untrusted data or design appropriate sanitization strategies for that data.

The number of potential execution paths in a Level 2 system is greater than in Level 1 systems but still enumerable, and specific paths that contain untrusted data can be analyzed in greater detail.

Level 3 autonomous systems pose the largest difficulty, as the number of execution paths typically grows exponentially in the number of self-reflection steps performed by the agent. In this case, mitigations include time-of-use sanitization, time-of-use taint tracing, or other risk controls such as requiring manual user approval of potentially hazardous actions.

Taint tracing

Taint tracing is marking an execution flow as having received untrusted data and then either preventing use of, or requiring manual re-authorization for, any sensitive tool.

For Level 1 and Level 2 systems, identifying potential sources of untrustworthy data and evaluating the risks of downstream impacts is relatively straightforward. When untrustworthy data enters the system, every downstream AI model and its associated outputs, and every downstream tool and its associated actions and outputs, are also untrusted.

Figure 4. Taints from untrusted sources can be bounded and enumerated in Level 1 and 2 systems

For Level 3 systems, taint tracing in the general case becomes almost intractable. When untrusted data has been introduced into the workflow, it has the potential to propagate to any other downstream component through the looping structure inherent to Level 3 workflows.

Figure 5. Taints from untrusted sources are difficult to bound and enumerate in Level 3 systems

By classifying an agentic application into the correct level, it becomes simpler to identify the overall level of risk posed by the application and the corresponding security requirements.

Recommended security controls per autonomy level

Table 2 provides a high-level summary of some suggested mitigating controls that should be placed around agents of various autonomy levels. Not all controls are required for all agentic workloads, depending on the sensitivity of tools provided to the workload.

Autonomy level | Recommended security controls
0 – Inference API | Use standard API security.
1 – Deterministic system | Manually trace dataflows and order the workflow correctly to prevent untrusted data from entering sensitive plugins.
2 – Bounded agentic workflow | Enumerate dataflows, identify ones with potentially untrusted data, explore isolation or sanitization options, and consider time-of-use manual approval of sensitive actions.
3 – Fully autonomous system | Implement taint tracing and mandatory sanitization of potentially untrusted data. Consider time-of-use manual approval of sensitive actions.

Table 2. Recommended security controls by level
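The taint tracing and time-of-use approval controls recommended for Level 3 systems can be illustrated with a small sketch. This is not the framework's implementation; the tool name, the data values, and the approval step are hypothetical, and a production system would need far more robust propagation and policy logic.

```python
# Minimal sketch of runtime taint tracing: values derived from untrusted
# sources carry a taint flag, and a sensitive tool refuses tainted input
# unless a human approves it at time of use. Names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Value:
    data: str
    tainted: bool = False

def combine(*inputs: Value, data: str) -> Value:
    """Any output derived from a tainted input is itself tainted."""
    return Value(data=data, tainted=any(v.tainted for v in inputs))

def send_email(body: Value) -> str:
    """Hypothetical sensitive tool gated on taint."""
    if body.tainted:
        approved = input("Untrusted data reached send_email. Approve? [y/N] ")
        if approved.strip().lower() != "y":
            return "blocked: untrusted data, not approved"
    return f"sent: {body.data[:40]}"

web_page = Value("ignore previous instructions ...", tainted=True)  # retrieved content
prompt = Value("Summarize this page for me", tainted=False)         # user input
summary = combine(web_page, prompt, data="LLM summary of the page") # LLM output
print(send_email(summary))  # requires manual approval because of the taint
```

The point of the sketch is the propagation rule: once untrusted data touches an execution flow, everything derived from it downstream inherits the taint, matching the behavior described for Figures 4 and 5.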
For Level 0 systems, you can apply simple application and API security. Because the system takes no further action on the basis of AI model output, overall risk here is largely limited to non-AI components.

Level 1 and 2 systems present a slightly elevated level of risk, but one that is straightforward to evaluate. Because these systems contain no loops, all execution flows through the systems can be exhaustively enumerated and examined for cases where untrusted data may flow into a sensitive tool. Flows of this type are particularly risky when they pass through an LLM between the source and the sink. In Level 1 systems, it's often sufficient to reorder the workflow. For more complex (Level 2) systems, risky flows from untrusted sources to sensitive sinks should be carefully evaluated to see if they can be eliminated, or otherwise whether the untrusted data or the input to the sensitive tool can be properly sanitized to remove risk.

Finally, if the flows cannot be rearranged or eliminated, or robust sanitization is not possible, add time-of-use manual approval to sensitive tools that may receive untrusted data, thus placing a human in the loop. This enables users of agentic workflows to inspect risky actions before they are performed and deny the workflow the ability to perform adversarially manipulated ones.

Level 3 systems, due to their complexity, typically cannot be meaningfully rearranged to block flows from untrusted sources to sensitive sinks. In this case, sanitization of untrusted data at time of retrieval should be implemented. If possible, implementing taint tracing should also be considered. In the event that neither is possible, requiring manual approval before any potentially sensitive tool executes its function may be used as a last resort.

As with threat modeling, breaking agentic workloads into different complexity levels enables the quick determination of reasonable security controls to avoid the risk of having sensitive tools ingest untrusted data, including LLM outputs that have been manipulated by prompt injection.

Conclusion

As systems climb the autonomy hierarchy, they become more complex and more difficult to predict. This makes threat modeling and risk assessment more difficult, particularly in the presence of a range of data sources and tools of varying trustworthiness and sensitivity.

Identifying the system autonomy level provides a useful framework for assessing the complexity of the system, as well as the level of effort required for threat modeling and the necessary security controls and mitigations. It's also important to analyze the plugins in the pipeline and classify them depending on their capabilities to provide an accurate risk evaluation based on the autonomy level.
https://developer.nvidia.com/blog/nvidia-deep-learning-institute-releases-new-generative-ai-teaching-kit/
NVIDIA Deep Learning Institute Releases New Generative AI Teaching Kit
Generative AI, powered by advanced machine learning models and deep neural networks, is revolutionizing industries by generating novel content and driving innovation in fields like healthcare, finance, and entertainment.

NVIDIA is leading this transformation with its cutting-edge GPU architectures and software ecosystems, such as the H100 Tensor Core GPU and the CUDA platform, which optimize the development and deployment of generative models. NVIDIA NIM enhances the efficiency and scalability of AI inference tasks, enabling rapid deployment and iteration across various computing environments and accelerating advancements in generative AI applications.

The importance of generative AI education

As generative AI models, such as GANs and transformers, become increasingly sophisticated, there is a growing demand for skilled professionals who can develop, refine, and ethically deploy these technologies. A strong educational foundation in generative AI equips students with the practical skills and theoretical knowledge needed to innovate in areas like content creation, drug discovery, and autonomous systems.

College and university education in generative AI is crucial due to the rapidly expanding role of AI in almost every industry. By integrating generative AI into their curriculum, universities prepare the next generation of AI researchers, engineers, and thought leaders to advance the field and address the complex challenges associated with AI-driven innovation.

The new Generative AI Teaching Kit, a collaboration between the NVIDIA Deep Learning Institute (DLI) and Dartmouth College, is set to empower the next generation of professionals with the skills and knowledge needed in this rapidly evolving field.

This comprehensive teaching resource enables educators to provide students access to cutting-edge tools, frameworks, and practical exercises that are crucial for understanding the complexities of generative AI and large language model development and deployment. By equipping students with a deep understanding of generative AI techniques, the Teaching Kit enables educators to foster future innovation and creativity in AI-driven industries. As students transition into the workforce, they will be better prepared to tackle global challenges, from improving healthcare and science to advancing sustainable technologies.

Sam Raymond, adjunct assistant professor of engineering at Dartmouth College, was instrumental in developing the content. "Empowering students with skills to understand and potentially develop their own GPU-accelerated Generative AI applications is the primary objective," said Raymond. "I believe students who go through this course will be at a significant advantage in the job market and help bridge the knowledge gap in industries today."

Overview of the Generative AI Teaching Kit

All Teaching Kits include lecture slides, hands-on labs, Jupyter notebooks, knowledge checks, and free online self-paced courses that provide certificates of competency for students, all comprehensively packaged and ready for classroom and curriculum integration.

Figure 1. The Generative AI Teaching Kit includes free access to all DLI online self-paced courses, offering certificates for students

The aim of the Generative AI Teaching Kit is to introduce the foundational concepts of natural language processing (NLP) that are essential for understanding LLMs and generative AI more broadly.
Key concepts of LLMs are then examined using NVIDIA GPUs, tools, and services, as well as open-source libraries and frameworks. A simple pretraining exercise with a GPT model shows basic training processes in the cloud.

The kit also covers diffusion models to explore the application of generative AI in image and video generation. Multi-modal LLM architectures are then introduced, with a focus on optimizing various LLM architectures during fine-tuning using the NVIDIA NeMo framework. Advancements in inference and the refinement of tools like chatbots are also discussed, using NVIDIA NIM, NeMo Guardrails, TensorRT, and TensorRT-LLM to enhance efficiency and scalability in production environments.

The Generative AI Teaching Kit contains focused modules that combine theory, algorithms, programming, and examples:
Module 1: Introduction to Generative AI
Module 2: Word Embeddings, Tokens, and NLP
Module 3: Large Language Models and the Transformer
Module 4: LLM Scaling Laws and LLM Families
Module 5: Multimodal Learning and Its Applications
Module 6: Diffusion Models in Generative AI
Module 7: Model Training (Pre-Training, Instruction Following, and PEFT)
Module 8: LLM Orchestration
Module 9: Scaling Model Training to Distributed Workloads

Through a collaboration with Google, educators also get access to free Google Colab credits to run the lab notebooks in preparation for teaching the course.

This content is valuable for educators across various fields, especially in computer science and engineering. Its modular design enables instructors to tailor the course to meet the specific needs of their students and create a customized learning experience. Select professors from around the world have already been given early access to first-release modules.

"I'm eager to integrate the Generative AI Teaching Kit in my AI in Materials Engineering class," said Mohadeseh Taheri-Mousavi, assistant professor in the Materials Science and Engineering department at Carnegie Mellon University. "The comprehensive lecture notes with well-structured coding labs with examples from various fields, and associated online courses with certificates, will provide my students with the cutting-edge resources to deeply understand the broad applications of generative AI techniques in various fields."

Professor Payam Barnaghi from the Department of Brain Sciences at Imperial College London uses LLMs and generative AI in his research using electronic health records and healthcare data. "NVIDIA Generative AI Teaching Kit content is a wonderful resource for students learning the latest developments in AI and machine learning," said Barnaghi. "As a result of having early access to the first modules, I plan to use this content as the basis for teaching advanced topics in my machine learning for neuroscience courses."

Given the fast-paced advancements in generative AI, educators can expect the teaching materials to be updated over time. NVIDIA is dedicated to offering high-end educational resources and welcomes feedback to continuously improve the content.

LLM technical certification discounts for students

The Generative AI Teaching Kit also includes access to a 50% discount on the NVIDIA-Certified Associate: Generative AI LLMs certification exam and the NVIDIA-Certified Associate: Generative AI Multimodal certification exam for educators to distribute to students. It also includes access to both exams free of charge for educators. Both exams are online and proctored remotely, include 50-60 questions, and have a 60-minute time limit.
Figure 2. NVIDIA-Certified Associate

These technical certifications validate students' expertise in generative AI, offering a competitive advantage in the job market. They can open doors to exciting career opportunities across AI-driven industries.

The NCA Generative AI LLMs certification is an entry-level credential that validates the foundational concepts for developing, integrating, and maintaining AI-driven applications using generative AI and large language models (LLMs) with NVIDIA solutions.

The NCA Generative AI Multimodal certification is an entry-level credential that validates the foundational skills needed to design, implement, and manage AI systems that synthesize and interpret data across text, image, and audio modalities.

NVIDIA recommends that students take these exams after completing the Generative AI Teaching Kit.

Get started

Educators can access the first release of the Generative AI Teaching Kit and other kits for free by joining the NVIDIA DLI Teaching Kit Program.

About the NVIDIA Deep Learning Institute

The NVIDIA Deep Learning Institute (DLI) offers resources for diverse learning needs, from learning materials to self-paced and live training to educator programs. Individuals, teams, organizations, educators, and students can now find everything they need to advance their knowledge in AI, accelerated computing, accelerated data science, graphics, simulation, and more.
https://developer.nvidia.com/blog/json-lines-reading-with-pandas-100x-faster-using-nvidia-cudf/
JSON Lines Reading with pandas 100x Faster Using NVIDIA cuDF
JSON is a widely adopted format for text-based information working interoperably between systems, most commonly in web applications and large language models (LLMs). While the JSON format is human-readable, it is complex to process with data science and data engineering tools. JSON data often takes the form of newline-delimited JSON Lines (also known as NDJSON) to represent multiple records in a dataset. Reading JSON Lines data into a dataframe is a common first step in data processing. In this post, we compare the performance and functionality of Python APIs for converting JSON Lines data into a dataframe using the following libraries: pandas DuckDB pyarrow RAPIDS cuDF pandas Accelerator Mode We demonstrate good scaling performance and high data processing throughput with the JSON reader in cudf.pandas, especially for data with a complex schema. We also review the versatile set of JSON reader options in cuDF that improve compatibility with Apache Spark and empower Python users to handle quote normalization, invalid records, mixed types and other JSON anomalies. JSON parsing versus JSON reading When it comes to JSON data processing, it’s important to distinguish between parsing and reading. JSON parsers JSON parsers, such as simdjson , convert a buffer of character data into a vector of tokens. These tokens represent the logical components of JSON data, including field names, values, array begin/end, and map begin/end. Parsing is a critical first step in extracting information from JSON data, and significant research has been dedicated to reaching high parsing throughput. To use information from JSON Lines in data processing pipelines, the tokens must often be converted into a Dataframe or columnar format, such as Apache Arrow . JSON readers JSON readers, such as pandas.read_json convert input character data into a Dataframe organized by columns and rows. The reader process begins with a parsing step and then detects record boundaries, manages the top-level columns and nested struct or list child columns, handles missing and null fields, infers data types, and more. JSON readers convert unstructured character data into a structured Dataframe, making JSON data compatible with downstream applications. JSON Lines reader benchmarking JSON Lines is a flexible format for representing data. Here are some important properties of JSON data: Number of records per file Number of top level columns Depth of struct or list nesting for each column Data types of values Distribution of string lengths Fraction of missing keys For this study, we held the record count fixed at 200K and swept the column count from 2 to 200, exploring a range of complex schemas. The four data types in use are as follows: list<int> and list<str> with two child elements struct<int> and struct<str> with a single child element Table 1 shows the first two columns of the first two records for data types, including list<int> , list<str> , struct<int> , and struct<str> . Data type Example records list<int> {"c0":[848377,848377],"c1":[164802,164802],...\n{"c0":[732888,732888],"c1":[817331,817331],... list<str> {"c0":["FJéBCCBJD","FJéBCCBJD"],"c1":["CHJGGGGBé","CHJGGGGBé"],...\n{"c0":["DFéGHFéFD","DFéGHFéFD"],"c1":["FDFJJCJCD","FDFJJCJCD"],... struct<int> {"c0":{"c0":361398},"c1":{"c0":772836},...\n{"c0":{"c0":57414},"c1":{"c0":619350},... struct<str> {"c0":{"c0":"FBJGGCFGF"},"c1":{"c0":"ïâFFéâJéJ"},...\n{"c0":{"c0":"éJFHDHGGC"},"c1":{"c0":"FDâBBCCBJ"},... Table 1. 
Example JSON Lines character data Table 1 shows the first two columns of the first two records for data types, including list<int> , list<str> , struct<int> , and struct<str> . Performance statistics were collected on the 25.02 branch of cuDF and with the following library versions: pandas 2.2.3, duckdb 1.1.3, and pyarrow 17.0.0. The execution hardware used an NVIDIA H100 Tensor Core 80 GB HBM3 GPU and Intel Xeon Platinum 8480CL CPU with 2TiB of RAM. Timing was collected from the third of three repetitions, to avoid initialization overhead and ensure that the input file data was present in the OS page cache. In addition to the zero code change cudf.pandas, we also collected performance data from pylibcudf , a Python API for the libcudf CUDA C++ computation core. The runs with pylibcudf used a CUDA async memory resource through RAPIDS Memory Manager (RMM). Throughput values were computed using the JSONL input file size and the reader runtime of the third repetition. Here are some examples from several Python libraries for invoking the JSON Lines reader: # pandas and cudf.pandas import pandas as pd df = pd.read_json(file_path, lines=True) # DuckDB import duckdb df = duckdb.read_json(file_path, format='newline_delimited') # pyarrow import pyarrow.json as paj table = paj.read_json(file_path) # pylibcudf import pylibcudf as plc s = plc.io.types.SourceInfo([file_path]) opt = plc.io.json.JsonReaderOptions.builder(s).lines(True).build() df = plc.io.json.read_json(opt) JSON Lines reader performance Overall, we found a wide range of performance characteristics for the JSON readers available in Python, with overall runtimes varying from 1.5 seconds to almost 5 minutes. Table 2 shows the sum of the timing data from seven JSON reader configurations when processing 28 input files with a total file size of 8.2 GB: Using cudf.pandas for JSON reading shows about 133x speedup over pandas with the default engine and 60x speedup over pandas with the pyarrow engine. DuckDB and pyarrow show good performance as well, with about 60 seconds total time for DuckDB, and 6.9 seconds for pyarrow with block size tuning. The fastest time comes from pylibcudf at 1.5 seconds, showing about 4.6x speedup over pyarrow with block_size tuning. Reader label Benchmark runtime (sec) Comment cudf.pandas 2.1 Using -m cudf.pandas from the command line pylibcudf 1.5 pandas 281 pandas-pa 130 Using the pyarrow engine DuckDB 62.9 pyarrow 15.2 pyarrow-20MB 6.9 Using a 20 MB block_size value Table 2. Sum of timing data for JSON reading of 28 input files Table 2 includes the input columns counts 2, 5, 10, 20, 50, 100, and 200, and the data types list<int> , list<str> , struct<int> , and struct<str> . Zooming into the data by data type and column count, we found that JSON reader performance varies over a wide range based on the input data details and the data processing library, from 40 MB/s to 3 GB/s for CPU-based libraries and from 2–6 GB/s for the GPU-based cuDF. Figure 1 shows the data processing throughput based on input size for 200K rows and 2–200 columns, with input data sizes varying from about 10 MB to 1.5 GB. Figure 1. JSON Lines reading throughput by number of input columns In Figure 1, each subplot corresponds to the data type of the input columns. File size annotations align to the x-axis. For cudf.pandas read_json , we observed 2–5 GB/s throughput that increased with larger column count and input data size. We also found that the column data type does not significantly affect throughput. 
The pylibcudf library shows about 1–2 GB/s higher throughput than cuDF-python, due to lower Python and pandas semantic overhead. For pandas read_json , we measured about 40–50 MB/s throughput for the default UltraJSON engine (labeled as “pandas-uj”). Using the pyarrow engine ( engine="pyarrow" ) provided a boost up to 70–100 MB/s due to faster parsing (pandas-pa). The pandas JSON reader performance appears to be limited by the need to create Python list and dictionary objects for each element in the table. For DuckDB read_json , we found about 0.5–1 GB/s throughput for list<str> and struct<str> processing with lower values <0.2 GB/s for list<int> and struct<int> . Data processing throughput remained steady over the range of column counts. For pyarrow read_json , we measured data processing throughputs up to 2–3 GB/s for 5-20 columns, and lower throughput values as column count increased to 50 and above. We found data type to have a smaller impact on reader performance than column count and input data size. For column counts of 200 and a record size of ~5 KB per row, throughput dropped to about 0.6 GB/s. Raising the pyarrow block_size reader option to 20 MB (pyarrow-20MB) led to increased throughput for column counts 100 or more, while also degrading throughput for column counts 50 or fewer. Overall, DuckDB primarily showed throughput variability due to data types, whereas cuDF and pyarrow primarily showed throughput variability due to column count and input data size. The GPU-based cudf.pandas and pylibcudf showed the highest data processing throughput for complex list and struct schema, especially for input data sizes >50 MB. JSON Lines reader options Given the text-based nature of the JSON format, JSON data often includes anomalies that result in invalid JSON records or don’t map well to a dataframe. Some of these JSON anomalies include single-quoted fields, cropped or corrupted records, and mixed struct or list types. When these patterns occur in your data, they can break the JSON reader step in your pipeline. Here are some examples of these JSON anomalies: # 'Single quotes' # field name "a" uses single quotes instead of double quotes s = '{"a":0}\n{\'a\':0}\n{"a":0}\n' # ‘Invalid records' # the second record is invalid s = '{"a":0}\n{"a"\n{"a":0}\n' # 'Mixed types' # column "a" switches between list and map s = '{"a":[0]}\n{"a":[0]}\n{"a":{"b":0}}\n' To unlock advanced JSON reader options in cuDF, we recommend incorporating cuDF-Python ( import cudf ) and pylibcudf into your workflow. If single-quoted field names or string values appear in your data, cuDF provides a reader option to normalize single quotes into double quotes. cuDF supports this feature to provide compatibility with the allowSingleQuotes option that is enabled by default in Apache Spark. If invalid records appear in your data, cuDF and DuckDB both provide error recovery options to replace these records with null. When error handling is enabled, if a record generates a parsing error, all of the columns for the corresponding row are marked as null. If mixed list and struct values are associated with the same field name in your data, cuDF provides a dtype schema override option to coerce the datatype to string. DuckDB uses a similar approach by inferring a JSON data type. For mixed types, the pandas library has perhaps the most faithful approach, using Python list and dictionary objects to represent the input data. 
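To make that last point concrete, here is a minimal sketch (an illustration, not code from the benchmark) showing how pandas handles the mixed-type example above; the list and dict values survive as plain Python objects in an object-dtype column:

import io
import pandas as pd

# The "mixed types" example from above: column "a" holds a list in some
# records and a dict in others
s = '{"a":[0]}\n{"a":[0]}\n{"a":{"b":0}}\n'

df = pd.read_json(io.StringIO(s), lines=True)
print(df["a"].tolist())  # [[0], [0], {'b': 0}] -- plain Python list and dict objects
print(df["a"].dtype)     # object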
Here is an example in cuDF-Python and pylibcudf that shows the reader options, including a dtype schema override for column name “a”. For more information, see cudf.read_json and pylibcudf.io.json.read_json . For pylibcudf, the JsonReaderOptions object can be configured either before or after the build function. # cuDF-python import cudf df = cudf.read_json( file_path, dtype={"a":str}, on_bad_lines='recover', lines=True, normalize_single_quotes=True ) # pylibcudf import pylibcudf as plc s = plc.io.types.SourceInfo([file_path]) opt = ( plc.io.json.JsonReaderOptions.builder(s) .lines(True) .dtypes([("a",plc.types.DataType(plc.types.TypeId.STRING), [])]) .recovery_mode(plc.io.types.JSONRecoveryMode.RECOVER_WITH_NULL) .normalize_single_quotes(True) .build() ) df = plc.io.json.read_json(opt) Table 3 summarizes the behavior of several JSON readers with Python APIs for a few common JSON anomalies. Crosses denote that the reader function raised an exception, and checkmarks denote that the library successfully returned a Dataframe. These results may change in future versions of the libraries. Single quotes Invalid records Mixed types cuDF-Python, pylibcudf ✔️ Normalize to double quotes ✔️ Set to null ✔️ Represent as a string pandas ❌ Exception ❌ Exception ✔️ Represent as a Python object pandas ( engine="pyarrow “ ) ❌ Exception ❌ Exception ❌ Exception DuckDB ❌ Exception ✔️ Set to null ✔️ Represent as a JSON string-like type pyarrow ❌ Exception ❌ Exception ❌ Exception Table 3. JSON reader outcomes when reading JSONL files with anomalies including: single quotes, mixed types and invalid records cuDF supports several additional JSON reader options that are critical for compatibility with Apache Spark conventions, and now are available to Python users as well. Some of these options include: Validation rules for numbers and strings Custom record delimiters Column pruning by the schema provided in dtype Customization of NaN values For more information, see the libcudf C++ API documentation on json_reader_options . For more information about multi-source reading for efficiently processing many smaller JSON Lines files, or byte-range support for breaking up large JSON Lines files, see GPU-Accelerated JSON Data Processing with RAPIDS . Summary RAPIDS cuDF provides powerful, flexible, and accelerated tools for working with JSON data in Python. GPU-accelerated JSON data processing is also available in RAPIDS Accelerator For Apache Spark, starting in the 24.12 release. For information, see Accelerating JSON Processing on Apache Spark with GPUs . For more information, see the following resources: cuDF documentation /rapidsai/cudf GitHub repo RAPIDS Docker containers (available for releases and nightly builds) Accelerate Data Science Workflows with Zero Code Changes DLI course Mastering the cudf.pandas Profiler for GPU Acceleration
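As a closing note on methodology: if you want to collect comparable numbers for your own JSON Lines files, the sketch below (an illustration, not the harness used for the measurements above) follows the same approach of keeping the third of three repetitions and computing throughput from the input file size:

import os
import time
import pandas as pd

def time_reader(read_fn, file_path, repeats=3):
    # Keep only the last repetition so that initialization overhead is
    # excluded and the file is already in the OS page cache
    for _ in range(repeats):
        start = time.perf_counter()
        read_fn(file_path)
        elapsed = time.perf_counter() - start
    throughput_gb_s = os.path.getsize(file_path) / elapsed / 1e9
    return elapsed, throughput_gb_s

# Example usage ("data.jsonl" is a placeholder path):
# elapsed, gb_s = time_reader(lambda p: pd.read_json(p, lines=True), "data.jsonl")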
https://developer.nvidia.com/zh-cn/blog/json-lines-reading-with-pandas-100x-faster-using-nvidia-cudf/
使用 NVIDIA cuDF,pandas 读取 JSON 行文件速度提升100倍
JSON 是一种广泛采用的格式,用于在系统之间 (通常用于 Web 应用和大语言模型 (LLMs)) 以互操作方式运行的基于文本的信息。虽然 JSON 格式是人类可读的,但使用数据科学和数据工程工具进行处理十分复杂。 JSON 数据通常采用换行分隔的 JSON 行 (也称为 NDJSON) 的形式来表示数据集中的多个记录。将 JSON 行数据读入数据帧是数据处理中常见的第一步。 在本文中,我们比较了使用以下库将 JSON 行数据转换为数据帧的 Python API 的性能和功能: pandas DuckDB pyarrow RAPIDS cuDF pandas 加速器模式 我们使用 cudf.pandas 中的 JSON 读取器展示了良好的扩展性能和高数据处理吞吐量,特别是对于具有复杂模式的数据。我们还审查了 cuDF 中的一组通用 JSON 读取器选项,这些选项可提高与 Apache Spark 的兼容性,并使 Python 用户能够处理引文归一化、无效记录、混合类型和其他 JSON 异常。 JSON 解析与 JSON 读取 当涉及到 JSON 数据处理时,区分解析和读取非常重要。 JSON 解析器 JSON 解析器 (例如 simdjson ) 可将字符数据缓冲区转换为令牌向量。这些令牌代表 JSON 数据的逻辑组件,包括字段名、值、数组开始/结束和映射开始/结束。解析是从 JSON 数据中提取信息的关键第一步,并且我们致力于实现高解析吞吐量。 要在数据处理工作流中使用来自 JSON 行的信息,必须经常将令牌转换为 Dataframe 或列式格式,例如 Apache Arrow 。 JSON 阅读器 JSON 读取器 (例如 pandas.read_json ) 将输入字符数据转换为按列和行排列的 Dataframe。读取器流程从解析步骤开始,然后检测记录边界、管理顶层列和嵌套结构体或列表子列、处理缺失和空字段、推理数据类型等。 JSON 读取器可将非结构化字符数据转换为结构化 Dataframe,从而使 JSON 数据与下游应用兼容。 JSON Lines 读取器基准测试 JSON Lines 是一种灵活的数据表示格式。以下是 JSON 数据的一些重要属性: 每个文件的记录数 顶层列的数量 每列的结构体或列表嵌套深度 值的数据类型 字符串长度分布 缺少密钥的百分比 在这项研究中,我们将记录计数固定为 200K,并将列计数从 2 扫至 200,同时探索了一系列复杂的模式。使用的四种数据类型如下所示: 包含两个子元素的 list<int> 和 list<str> 包含单个子元素的 struct<int> 和 struct<str> 表 1 显示了前两列数据类型记录的前两列,包括 list<int> 、 list<str> 、 struct<int> 和 struct<str> 。 数据类型 记录示例 list<int> {"c0":[848377,848377],"c1":[164802,164802],...\n{"c0":[732888,732888],"c1":[817331,817331],... list<str> {"c0":["FJéBCCBJD","FJéBCCBJD"],"c1":["CHJGGGGBé","CHJGGGGBé"],...\n{"c0":["DFéGHFéFD","DFéGHFéFD"],"c1":["FDFJJCJCD","FDFJJCJCD"],... struct<int> {"c0":{"c0":361398},"c1":{"c0":772836},...\n{"c0":{"c0":57414},"c1":{"c0":619350},... struct<str> {"c0":{"c0":"FBJGGCFGF"},"c1":{"c0":"ïâFFéâJéJ"},...\n{"c0":{"c0":"éJFHDHGGC"},"c1":{"c0":"FDâBBCCBJ"},... 表 1. JSON 行字符数据示例 表 1 显示了前两列数据类型记录的前两列,包括 list<int> 、 list<str> 、 struct<int> 和 struct<str> 性能统计数据在 cuDF 的 25.02 分支上收集,并包含以下库版本:pandas 2.2.3、duckdb 1.1.3 和 pyarrow 17.0.0。执行硬件使用 NVIDIA H100 Tensor Core 80 GB HBM3 GPU 和 Intel Xeon Platinum 8480CL CPU 以及 2TiB 的 RAM。计时数据从三次重复的第三次中收集,以避免初始化开销,并确保输入文件数据存在于操作系统页面缓存中。 除了零代码更改 cudf.pandas 之外,我们还从 py libcudf (用于 libcudf CUDA C++计算核心的 Python API) 收集了性能数据。 py libcudf 运行通过 RAPIDS 内存管理器 (RMM) 使用 CUDA 异步内存资源。使用 JSONL 输入文件大小和第三次重复的读取器运行时计算吞吐量值。 以下是来自多个 Python 库的一些调用 JSON 行读取器的示例: # pandas and cudf.pandas import pandas as pd df = pd.read_json(file_path, lines=True) # DuckDB import duckdb df = duckdb.read_json(file_path, format='newline_delimited') # pyarrow import pyarrow.json as paj table = paj.read_json(file_path) # pylibcudf import pylibcudf as plc s = plc.io.types.SourceInfo([file_path]) opt = plc.io.json.JsonReaderOptions.builder(s).lines(True).build() df = plc.io.json.read_json(opt) JSON 行读取器性能 总体而言,我们发现 Python 中的 JSON 读取器具有各种性能特征,总体运行时间从 1.5 秒到近 5 分钟不等。 表 2 显示了在处理 28 个输入文件 (总文件大小为 8.2 GB) 时,来自 7 个 JSON 读取器配置的定时数据的总和: 使用 cudf.pandas 进行 JSON 读取显示,与使用默认引擎的 pandas 相比,速度提高了 133 倍,使用 pyarrow 引擎的 pandas 速度提高了 60 倍。 DuckDB 和 pyarrow 也表现出良好的性能,在调整块大小时,DuckDB 的总时间约为 60 秒,而 pyarrow 的总时间为 6.9 秒。 pylibcudf 生成的最快时间为 1.5 秒,与 pyarrow 相比,使用 block_size 调优的速度提高了约 4.6 倍。 阅读器标签 基准运行时 (秒) 评论 cudf.pandas 2.1 在命令行中使用 -m cudf.pandas pylibcudf 1.5 pandas 271 pandas-pa 130 使用 pyarrow 引擎 DuckDB 62.9 pyarrow 15.2 pyarrow-20MB 6.9 使用 20 MB 的 block_size 值 表 2、用于 JSON 读取 28 个输入文件的定时数据总和 表 2 包括输入列计数 2、5、10、20、50、100 和 200,以及数据类型 list<int> 、 list<str> 、 struct<int> 和 struct<str> 通过按数据类型和列数量放大数据,我们发现 JSON 读取器的性能因输入数据详细信息和数据处理库的不同而差异很大,基于 CPU 的库的性能介于 40 MB/s 到 3 GB/s 之间,而基于 GPU 的 cuDF 的性能介于 2–6 GB/s 之间。 图 1 显示了基于 200K 行、2–200 列输入大小的数据处理吞吐量,输入数据大小在约 10 MB 到 1.5 GB 之间变化。 图 1. 
JSON Lines 读取吞吐量按输入列数量 在图 1 中,每个子图均对应输入列的数据类型。文件大小标注与 x 轴对齐。 对于 cudf.pandas read_json ,我们观察到,随着列数量和输入数据大小的增加,吞吐量增加了 2–5 GB/秒。我们还发现,列数据类型不会对吞吐量产生重大影响。由于 Python 和 pandas 语义用度较低,pylibcudf 库的吞吐量比 cuDF-python 高约 1–2 GB/秒。 对于 pandas read_json ,我们测量了默认 UltraJSON 引擎 (标记为“pandas-uj”) 的吞吐量约为 40–50 MB/s。由于解析速度更快 (pandas-pa),使用 pyarrow 引擎 ( engine="pyarrow" ) 可将速度提升高达 70–100 MB/s。由于需要为表中的每个元素创建 Python 列表和字典对象,因此 pandas JSON 读取器的性能似乎受到限制。 对于 DuckDB read_json ,我们发现 list<str> 和 struct<str> 处理的吞吐量约为 0.5–1 GB/s,而 list<int> 和 struct<int> 的较低值 < 0.2 GB/s。数据处理吞吐量在列数量范围内保持稳定。 对于 pyarrow read_json ,我们测量了 5-20 列的高达 2–3 GB/s 的数据处理吞吐量,以及随着列数量增加到 50 及以上而降低的吞吐量值。我们发现,与列数量和输入数据大小相比,数据类型对读取器性能的影响较小。如果列数量为 200,且每行的记录大小约为 5 KB,吞吐量将下降到约 0.6 GB/s。 将 pyarrow block_size reader 选项提升至 20 MB (pyarrow-20MB) 会导致列数量增加 100 或以上的吞吐量增加,同时还会降低 50 或以下列数量的吞吐量。 总体而言,DuckDB 主要因数据类型而显示吞吐量可变性,而 cuDF 和 pyarrow 主要因列数量和输入数据大小而显示吞吐量可变性。基于 GPU 的 cudf.pandas 和 pylibcudf 为复杂列表和结构模式(尤其是输入数据大小 > 50 MB)提供了超高的数据处理吞吐量。 JSON 行读取器选项 鉴于 JSON 格式基于文本的特性,JSON 数据通常包含异常,导致 JSON 记录无效或无法很好地映射到数据帧。其中一些 JSON 异常包括单引号字段、已裁剪或损坏的记录,以及混合结构或列表类型。当数据中出现这些模式时,它们可能会中断工作流中的 JSON 读取器步骤。 以下是这些 JSON 异常的一些示例: # 'Single quotes' # field name "a" uses single quotes instead of double quotes s = '{"a":0}\n{\'a\':0}\n{"a":0}\n' # ‘Invalid records' # the second record is invalid s = '{"a":0}\n{"a"\n{"a":0}\n' # 'Mixed types' # column "a" switches between list and map s = '{"a":[0]}\n{"a":[0]}\n{"a":{"b":0}}\n' 要在 cuDF 中解锁高级 JSON 读取器选项,我们建议您将 cuDF-Python ( import cudf ) 和 pylibcudf 集成到您的工作流中。如果数据中出现单引号字段名称或字符串值,cuDF 会提供读取器选项,用于将单引号归一化为双引号。cuDF 支持此功能,可与 Apache Spark 中默认启用的 allowSingleQuotes 选项兼容。 如果您的数据中出现无效记录,cuDF 和 DuckDB 都会提供错误恢复选项,将这些记录替换为 null。启用错误处理后,如果记录生成解析错误,则相应行的所有列均标记为 null。 如果混合 list 和 struct 值与数据中的相同字段名相关联,cuDF 提供一个 dtype 模式覆盖选项,以将数据类型强制转换为字符串。DuckDB 使用类似的方法来推理 JSON 数据类型。 对于混合类型,pandas 库可能是最可靠的方法,使用 Python 列表和字典对象来表示输入数据。 以下是 cuDF-Python 和 pylibcudf 中的示例,其中显示了读取器选项,包括列名称“a”的 dtype 模式覆盖。如需了解更多信息,请参阅 cudf.read_json 和 pylibcudf.io.json.read_json 。 对于 pylibcudf,可以在 build 函数之前或之后配置 JsonReaderOptions 对象。 # cuDF-python import cudf df = cudf.read_json( file_path, dtype={"a":str}, on_bad_lines='recover', lines=True, normalize_single_quotes=True ) # pylibcudf import pylibcudf as plc s = plc.io.types.SourceInfo([file_path]) opt = ( plc.io.json.JsonReaderOptions.builder(s) .lines(True) .dtypes([("a",plc.types.DataType(plc.types.TypeId.STRING), [])]) .recovery_mode(plc.io.types.JSONRecoveryMode.RECOVER_WITH_NULL) .normalize_single_quotes(True) .build() ) df = plc.io.json.read_json(opt) 表 3 总结了使用 Python API 的多个 JSON 读取器针对一些常见 JSON 异常的行为。交叉表示读取器函数引发异常,勾号表示库已成功返回 Dataframe。在未来版本的库中,这些结果可能会发生变化。 单引号 无效记录 混合类型 cuDF-Python、pylibcudf 归一化为双引号 设置为 null 表示为字符串 pandas *例外 *例外 表示为 Python 对象 pandas ( engine="pyarrow “ ) *例外 *例外 *例外 DuckDB *例外 设置为 null 表示为类似 JSON 字符串的类型 pyarrow *例外 *例外 *例外 表 3、读取异常情况 (包括单引号、混合类型和无效记录) 的 JSONL 文件时的 JSON 读取器结果 cuDF 支持多个额外的 JSON 读取器选项,这些选项对于与 Apache Spark 惯例的兼容性至关重要,现在也可供 Python 用户使用。其中一些选项包括: 数字和字符串的验证规则 自定义记录分隔符 根据 dtype 中提供的模式进行列剪枝 自定义 NaN 值 有关更多信息,请参阅有关 json_reader_options 的 libcudf C++ API 文档。 有关多源读取以高效处理许多较小的 JSON 行文件的更多信息,或有关分解大型 JSON 行文件的字节范围支持的更多信息,请参阅使用 RAPIDS 进行 GPU 加速的 JSON 数据处理 。 总结 RAPIDS cuDF 为在 Python 中处理 JSON 数据提供了功能强大、灵活且加速的工具。 从 24.12 版本开始,您还可以在适用于 Apache Spark 的 RAPIDS Accelerator 中使用 GPU 加速的 JSON 数据处理功能。有关信息,请参阅 使用 GPU 在 Apache Spark 上加速 JSON 处理 。 有关更多信息,请参阅以下资源: cuDF 文档 /rapidsai/cudf GitHub 存储库 RAPIDS Docker 容器 (可用于版本和夜间构建) 零代码更改加速数据科学工作流程 DLI 课程 掌握用于 GPU 加速的 cudf.pandas Profiler
https://developer.nvidia.com/blog/understanding-the-language-of-lifes-biomolecules-across-evolution-at-a-new-scale-with-evo-2/
Understanding the Language of Life’s Biomolecules Across Evolution at a New Scale with Evo 2
AI has evolved from an experimental curiosity to a driving force within biological research. The convergence of deep learning algorithms, massive omics datasets, and automated laboratory workflows has allowed scientists to tackle problems once thought intractable—from rapid protein structure prediction to generative drug design, increasing the need for AI literacy among scientists . With this momentum, we find ourselves on the cusp of the next paradigm shift: the emergence of powerful AI foundation models purpose-built for biology. These new models promise to unify disparate data sources—genomic sequences, RNA and proteomic profiles, and, in some cases, scientific literature—into a single, coherent understanding of life at the molecular, cellular, and systems levels. Learning biology’s language and structure opens doors to transformative applications, such as smarter drug discovery, rational enzyme design, and disease mechanism elucidation. As we set the stage for this next wave of AI-driven breakthroughs, it is clear that these foundation models will not merely accelerate progress; they stand poised to redefine what is possible in biological research. Test Evo 2 as an NVIDIA BioNeMo NIM microservice for free with the /NVIDIA/bionemo-examples example notebook. Explore a complete reference workflow for protein design . Start training Evo 2 on your data today in BioNeMo Framework . Stay up to date with the latest NVIDIA BioNeMo platform updates . A leap forward in sequence modeling and design from molecular to genome-scale The first Evo model from November 2024 represented a groundbreaking milestone in genomic research, introducing a foundation model capable of analyzing and generating biological sequences across DNA, RNA, and proteins. Published at a time when most models were restricted to single modalities or short contexts, Evo is known for its ability to operate across scales—ranging from molecular to genomic—using a unified approach. Trained on 2.7M prokaryotic and phage genomes, encompassing 300B nucleotide tokens, Evo delivered single-nucleotide resolution across many biological evolution and function tasks. At the core of Evo’s success is its innovative StripedHyena architecture (Figure 1), a hybrid model combining 29 Hyena layers, a new type of deep learning architecture designed to handle long sequences of information without relying on traditional attention mechanisms that are common to Transformer architectures. Instead it uses a combination of convolutional filters and gates. This design overcame the limitations of traditional Transformer models, enabling Evo to handle long contexts of up to 131,072 tokens efficiently. The result was a model capable of connecting small sequence changes to system-wide and organism-level impacts, bridging the gap between molecular biology and evolutionary genomics. Figure 1 . Evo and Evo 2 AI model architecture Evo’s predictive capabilities set new standards for biological modeling. It achieved competitive performance in several zero-shot tasks, including predicting the fitness effects of mutations on proteins, non-coding RNAs, and regulatory DNA, providing invaluable insights for synthetic biology and precision medicine. Evo also demonstrated remarkable generative capabilities, designing functional CRISPR-Cas systems and transposons. These outputs were validated experimentally, proving that Evo could predict and design novel biological systems with real-world utility. 
Evo represents a notable advancement in integrating multimodal and multiscale biological understanding into a single model . Its ability to generate genome-scale sequences and predict gene essentiality across entire genomes marked a leap forward in our capacity to analyze and engineer life. Evo’s milestones were not just its technical achievements but also its vision. This unified framework combined biology’s vast complexity with cutting-edge AI to accelerate discovery and innovation in life sciences. Learning the language of life across evolution Evo 2 is the next generation of this line of research in genomic modeling, building on the success of Evo with expanded data, enhanced architecture, and superior performance. Evo 2 can provide insights into three essential biomolecules—DNA, RNA, and protein— and all three domains of life: Eukarya, Prokarya, and Archaea. Trained on a dataset of 8.85T nucleotides from 15,032 eukaryotic genomes and 113,379 prokaryotic genomes, this training dataset covers diverse species, enabling unprecedented cross-species generalization and significantly broadening its scope compared to Evo, which focused solely on prokaryotic genomes. Evo 2 uses a new and improved StripedHyena 2 architecture, extended up to 40B parameters, enhancing the model’s training efficiency and ability to capture long-range dependencies with context lengths of 1M tokens. StripedHyena 2, thanks to its multihybrid design based on convolutions, trains significantly faster than Transformers and other hybrid models using linear attention or state-space models. The largest Evo 2 model was trained using  2,048 NVIDIA H100 GPUs using NVIDIA DGX Cloud on AWS. As part of NVIDIA’s partnership with Arc , they gained access to this high-performance, fully managed AI platform optimized for large-scale, distributed training with NVIDIA AI software and expertise. These advances mark a significant increase from Evo’s 7B parameters and a 131,000-token context length, positioning Evo 2 as a leader in multimodal and multiscale biological modeling (Table 1). Feature Evo Evo 2 Genomic Training Data Bacterial + bacteriophage (300B nucleotides) All domains of life + bacteriophage (9T nucleotides) Model Parameters 7B 7B + 40B Context Length 131,072 tokens Up to 1,048,576 tokens Modalities DNA, RNA, protein DNA, RNA, protein Safety Viruses of Eukaryotes excluded Viruses of Eukaryotes excluded Applications Limited cross-species tasks Broad cross-species applications Table 1. Key features of Evo 2 and Evo Evo 2’s expanded training data and refined architecture empower it to excel across various biological applications. Its multimodal design integrates DNA, RNA, and protein data, enabling zero-shot performance on tasks like mutation impact prediction and genome annotation. Evo 2 also fundamentally improves Evo by including eukaryotic genomes, enabling deeper insights into human diseases, agriculture, and environmental science. Evo 2’s predictive capabilities outperform specialized models across diverse tasks: Variant impact analysis : Achieves state-of-the-art accuracy in predicting the functional effects of mutations across species zero-shot, including human and non-coding variants. Gene essentiality : Identifies essential genes in prokaryotic and eukaryotic genomes, validated against experimental datasets, bridging the gap between molecular and systems biology tasks. 
Generative capabilities : Designs complex biological systems, such as genome-scale prokaryotic and eukaryotic sequences, and the controllable design of chromatin accessibility, demonstrating new capabilities for biological design with real-world applicability. Using the NVIDIA Evo 2 NIM microservice The NVIDIA Evo 2 NIM microservice is useful for generating a variety of biological sequences, with an API that provides settings to adjust tokenization, sampling, and temperature parameters: # Define JSON example human L1 retrotransposable element sequence example = { # nucleotide sequence to be analyzed "sequence": "GAATAGGAACAGCTCCGGTCTACAGCTCCCAGCGTGAGCGACGCAGAAGACGGTGATTTCTGCATTTCCATCTGAGGTACCGGGTTCATCTCACTAGGGAGTGCCAGACAGTGGGCGCAGGCCAGTGTGTGTGCGCACCGTGCGCGAGCCGAAGCAGGGCGAGGCATTGCCTCACCTGGGAAGCGCAAGGGGTCAGGGAGTTCCCTTTCCGAGTCAAAGAAAGGGGTGATGGACGCACCTGGAAAATCGGGTCACTCCCACCCGAATATTGCGCTTTTCAGACCGGCTTAAGAAACGGCGCACCACGAGACTATATCCCACACCTGGCTCAGAGGGTCCTACGCCCACGGAATC", "num_tokens": 102, # number of tokens to generate "top_k": 4, # only predict top 4 most likely outcomes per token "top_p": 1.0, # include 100% cumulative prob results in sampling "temperature": 0.7, # add variability (creativity) to predictions " ": True, # enable more diverse outputs "enable_logits": False, # disable raw model output (logits) } # Retrieve the API key from the environment key = os.getenv("NVCF_RUN_KEY") # Send the example sequence and parameters to the Evo 2 API r = requests.post( # Example URL for the Evo 2 model API. url=os.getenv("URL","https://health.api.nvidia.com/v1/biology/arc/evo2-40b/generate") # Authorization headers to authenticate with the API headers={"Authorization": f"Bearer {key}"}, # The data payload (sequence and parameters) sent as JSON json=example, ) For more information about the API output for various prompts, see the NVIDIA BioNeMo Framework documentation. Evo 2 can also be fine-tuned using the open-source NVIDIA BioNeMo Framework , which offers robust tools for adapting pretrained models such as Evo 2 to specialized tasks in BioPharma: # Prepare raw sequence data for training based on a YAML config file preprocess_evo2 -c data_preproc_config.yaml # Trains the Evo 2 model with preprocessed data and parallelism across multiple GPUs torchrun --nproc-per-node=8 --no-python train_Evo 2 -d data_train_config.yaml --num-nodes=1 --devices=8 --max-steps=100 --val-check-interval=25 --experiment-dir=/workspace/bionemo2/model/checkpoints/example --seq-length=8192 --tensor-parallel-size=4 --pipeline-model-parallel-size=1 --context-parallel-size=2 --sequence-parallel --global-batch-size=8 --micro-batch-size=1 --model-size=7b --fp8 --tflops-callback # Optional Fine-tuning: Add this argument to start from a pretrained model # --ckpt-dir=/path/to/pretrained_checkpoint Evo 2 and the future of AI in biology AI is poised to rapidly transform biological research, enabling breakthroughs previously thought to be decades away. Evo 2 represents a significant leap forward in this evolution, introducing a genomic foundation model capable of analyzing and generating DNA, RNA, and protein sequences at unprecedented scales. While Evo excelled in predicting mutation effects and gene expression in prokaryotes, the capabilities of Evo 2 are much broader, with enhanced cross-species generalization, making it a valuable tool for studying eukaryotic biology, human diseases, and evolutionary relationships. 
Evo 2’s ability to perform zero-shot tasks, from identifying genes that drive cancer risk to designing complex biomolecular systems, showcases its versatility. Including long-context dependencies enables it to uncover patterns across genomes, providing multi-modal and multi-scale insights that are pivotal for advancements in precision medicine, agriculture, and synthetic biology. As the field moves forward, models like Evo 2 set the stage for a future where AI deciphers life’s complexity and is also used to design new useful biological systems. These advancements align with broader trends in AI-driven science, where foundational models are tailored to domain-specific challenges, unlocking previously unattainable capabilities. Evo 2’s contributions signal a future where AI becomes an indispensable partner in decoding, designing, and reshaping the living world. For more information about Evo 2, see the technical report published by the Arc Institute . Evo 2 is also available within the NVIDIA BioNeMo platform. Acknowledgments We’d like to thank the following contributors to the described research for their notable contributions to the ideation, writing, and figure design for this post: Garyk Brixi , genetics Ph.D. student at Stanford Jerome Ku , machine learning engineer working with the Arc Institute Michael Poli , founding scientist at Liquid AI and computer science Ph.D. student at Stanford Greg Brockman , co-founder and president of OpenAI Eric Nguyen , bioengineering Ph.D. student at Stanford Brandon Yang , co-founder of Cartesia AI and computer science Ph.D. student at Stanford (on leave) Dave Burke , chief technology officer at the Arc Institute Hani Goodarzi , core investigator at the Arc Institute and associate professor of biophysics and biochemistry at the University of California, San Francisco Patrick Hsu , co-founder of the Arc Institute, assistant professor of bioengineering, and Deb Faculty Fellow at the University of California, Berkeley Brian Hie , assistant professor of chemical engineering at Stanford University, Dieter Schwarz Foundation Stanford Data Science Faculty Fellow, innovation investigator at the Arc Institute, and leader at the Laboratory of Evolutionary Design at Stanford
https://developer.nvidia.com/zh-cn/blog/understanding-the-language-of-lifes-biomolecules-across-evolution-at-a-new-scale-with-evo-2/
利用 Evo 2 以全新规模跨进化研究生命生物分子的语言
AI 已从实验好奇心发展为生物学研究的驱动力。 深度学习算法、海量组学数据集和自动化实验室工作流程的融合使 科学家能够解决一度被认为棘手的问题 (从快速蛋白质结构预测到生成式药物设计),从而增加了科学家对 AI 素养的需求。在这一势头下,我们正处于下一次范式转变的边缘:专为生物学打造的强大 AI 基础模型的出现。 这些新模型有望将不同的数据源 (基因组序列、RNA 和蛋白质组谱,在某些情况下还包括科学文献) 统一为分子、细胞和系统级别上对生命的统一、一致的理解。学习生物学的语言和结构为变革性应用打开了大门,例如更智能的药物发现、合理的酶设计和疾病机制阐明。 在我们为下一波 AI 驱动的突破做好准备之际,这些基础模型显然不仅能够加速进展,还将重新定义生物学研究的可能性。 使用/NVIDIA/bionemo-examples 示例 notebook 免费将 Evo 2 测试为 NVIDIA BioNeMo NIM 微服务 。 探索蛋白质设计的完整参考工作流程 。 立即开始在 BioNeMo 框架 中使用您的数据训练 Evo 2 及时了解 NVIDIA BioNeMo 平台的最新动态。 序列建模和设计从分子规模到基因组规模的飞跃 2024 年 11 月推出的 首个 Evo 模型 是基因组研究领域的一个突破性里程碑,它引入了能够分析和生成跨 DNA、RNA 和蛋白质的生物序列的基础模型。 在发布 Evo 时,大多数模型都被限制在单一模式或简短背景下运行,而众所周知,它能够使用统一的方法跨规模 (从分子到基因组) 操作。Evo 基于 270 万个原核细胞和噬菌体基因组 (包含 300 亿个核酸令牌) 进行训练,在许多生物进化和功能任务中提供了单核酸分辨率。 Evo 成功的核心是其创新的 StripedHyena 架构 (图 1),这是一个结合了 29 个 Hyena 层的混合模型,这是一种新型深度学习架构,旨在处理长序列信息,而无需依赖 Transformer 架构中常见的传统注意机制。相反,它使用卷积过滤器和门的组合。 这种设计克服了传统 Transformer 模型的限制,使 Evo 能够高效处理多达 131,072 个令牌的长上下文。最终,该模型能够将微小的序列变化与系统级和有机体级的影响联系起来,弥合分子生物学与进化基因组学之间的差距。 图 1 . . . Evo 和 Evo 2 AI 模型架构 Evo 的预测功能为生物建模树立了新的标准。它在多项零样本任务中取得了具有竞争力的表现,包括预测突变对蛋白质、非编码 RNAs 和调控 DNA 的适应性影响,为合成生物学和精准医学提供宝贵见解。 Evo 还展示了非凡的生成功能,设计了功能齐全的 CRISPR-Cas 系统和转座子。这些输出经过实验验证,证明了 Evo 可以预测和设计具有真实效用的新型生物系统。 Evo 代表着将多模态和多尺度生物理解集成到单个模型中的显著进步 。它能够生成基因组级序列并预测整个基因组的基因本质,这标志着我们分析和工程生命的能力实现了飞跃。 Evo 的里程碑式发展不仅体现在技术成就上,还体现在愿景上。这一统一框架将生物学的庞大复杂性与尖端 AI 相结合,加速了生命科学领域的发现和创新。 学习进化过程中的生命语言 Evo 2 是基因组建模领域这一系列研究的新一代产品,基于 Evo 在扩展数据、增强架构和卓越性能方面取得的成功而构建。 Evo 2 可以深入了解三种基本生物分子 (DNA、RNA 和蛋白质) 以及生命的所有三个领域:真核生物 (Eukarya)、原核生物 (Prokarya) 和古菌 (Archaea)。此训练数据集基于来自 15,032 个真核基因组和 113,379 个原核基因组的 8.85T 核酸数据集进行训练,涵盖不同物种,与仅关注原核基因组的 Evo 相比,实现了前所未有的跨物种泛化,并显著拓宽了其范围。 Evo 2 使用经过改进的全新 StripedHyena 2 架构,该架构可将参数扩展至 40B 个,从而提高模型的训练效率,以及使用 1M 个令牌的上下文长度捕获远程依赖项的能力。 StripedHyena 2 采用基于卷积的多混合设计,其训练速度明显快于 Transformers 和其他使用线性注意力或状态空间模型的混合模型。 最大的 Evo 2 模型使用 AWS 上的 NVIDIA DGX Cloud 使用 2,048 个 NVIDIA H100 GPUs 进行训练。作为 NVIDIA 与 Arc 合作 的一部分,他们获得了对这个高性能、完全托管的 AI 平台的访问权限,该平台利用 NVIDIA AI 软件和专业知识针对大规模分布式训练进行了优化。 这些进步标志着 Evo 的 7B 参数和 131,000 个令牌的上下文长度有了显著增加,使 Evo 2 成为多模态和多尺度生物建模领域的领导者 (表 1)。 特征 Evo Evo 2 基因组训练数据 细菌 + bacteriophage(300B 核酸) 所有生命领域 + bacteriophage(9T nucleotides) 模型参数 70 亿 70 亿 400 亿 上下文长度 131072 个令牌 最多 1048576 个令牌 模式 DNA、RNA、蛋白质 DNA、RNA、蛋白质 安全性 不包括真核生物病毒 不包括真核生物病毒 应用 跨物种任务受限 广泛的跨物种应用 表 1、Evo 2 和 Evo 的主要特性 Evo 2 的扩展训练数据和优化架构使其能够在各种生物应用中表现卓越。其多模态设计集成了 DNA、RNA 和蛋白质数据,可在执行突变影响预测和基因组标注等任务时实现零采样性能。Evo 2 还通过纳入真核基因组从根本上改进了 Evo,使人们能够更深入地了解人类疾病、农业和环境科学。 Evo 2 的预测能力优于各种任务的专业模型: 变体影响分析 :在零样本预测物种突变 (包括人类和非编码变体) 的功能性影响方面实现出色的准确性。 基因本质 :识别原核和真核基因组中的基本基因,并通过实验数据集进行验证,弥合分子和系统生物学任务之间的差距。 生成功能 :设计复杂的生物系统(例如基因组级的原核细胞序列和真核细胞序列),以及染色质可访问性的可控设计,展示具有现实世界适用性的生物设计新功能。 使用 NVIDIA Evo 2 NIM 微服务 NVIDIA Evo 2 NIM 微服务 可用于生成各种生物序列,其 API 可提供用于调整标记化、采样和温度参数的设置: # Define JSON example human L1 retrotransposable element sequence example = { # nucleotide sequence to be analyzed "sequence": "GAATAGGAACAGCTCCGGTCTACAGCTCCCAGCGTGAGCGACGCAGAAGACGGTGATTTCTGCATTTCCATCTGAGGTACCGGGTTCATCTCACTAGGGAGTGCCAGACAGTGGGCGCAGGCCAGTGTGTGTGCGCACCGTGCGCGAGCCGAAGCAGGGCGAGGCATTGCCTCACCTGGGAAGCGCAAGGGGTCAGGGAGTTCCCTTTCCGAGTCAAAGAAAGGGGTGATGGACGCACCTGGAAAATCGGGTCACTCCCACCCGAATATTGCGCTTTTCAGACCGGCTTAAGAAACGGCGCACCACGAGACTATATCCCACACCTGGCTCAGAGGGTCCTACGCCCACGGAATC", "num_tokens": 102, # number of tokens to generate "top_k": 4, # only predict top 4 most likely outcomes per token "top_p": 1.0, # include 100% cumulative prob results in sampling "temperature": 0.7, # add variability (creativity) to predictions " ": True, # enable more diverse outputs "enable_logits": False, # disable raw model output (logits) } # Retrieve the 
API key from the environment key = os.getenv("NVCF_RUN_KEY") # Send the example sequence and parameters to the Evo 2 API r = requests.post( # Example URL for the Evo 2 model API. url=os.getenv("URL","https://health.api.nvidia.com/v1/biology/arc/evo2-40b/generate") # Authorization headers to authenticate with the API headers={"Authorization": f"Bearer {key}"}, # The data payload (sequence and parameters) sent as JSON json=example, ) 有关各种提示的 API 输出的更多信息,请参阅 NVIDIA BioNeMo 框架文档 。 此外,还可以使用开源 NVIDIA BioNeMo Framework 对 Evo 2 进行微调,该框架提供可靠的工具,可根据 BioPharma 中的专业任务调整预训练模型(例如 Evo 2): # Prepare raw sequence data for training based on a YAML config file preprocess_evo2 -c data_preproc_config.yaml # Trains the Evo 2 model with preprocessed data and parallelism across multiple GPUs torchrun --nproc-per-node=8 --no-python train_Evo 2 -d data_train_config.yaml --num-nodes=1 --devices=8 --max-steps=100 --val-check-interval=25 --experiment-dir=/workspace/bionemo2/model/checkpoints/example --seq-length=8192 --tensor-parallel-size=4 --pipeline-model-parallel-size=1 --context-parallel-size=2 --sequence-parallel --global-batch-size=8 --micro-batch-size=1 --model-size=7b --fp8 --tflops-callback # Optional Fine-tuning: Add this argument to start from a pretrained model # --ckpt-dir=/path/to/pretrained_checkpoint Evo 2 和生物学领域的 AI 未来 AI 势必会迅速改变生物学研究,实现之前人们认为需要数十年才能实现的突破。Evo 2 代表了这一变革的重大飞跃,它引入了基因组基础模型,能够以超大规模分析和生成 DNA、RNA 和蛋白质序列。 虽然 Evo 在预测原核生物的突变效应和基因表达方面表现优异,但 Evo 2 的功能更为广泛,并增强了跨物种泛化,这使其成为研究真核生物学、人类疾病和进化关系的宝贵工具。 从识别导致癌症风险的基因到设计复杂的生物分子系统,Evo 2 执行零样本击任务的能力充分体现了其通用性。借助长上下文依赖项,AI 能够揭示跨基因组的模式,提供对精准医学、农业和合成生物学的进步至关重要的多模态和多尺度见解。 随着该领域的发展,像 Evo 2 这样的模型为 AI 解读生命复杂性的未来奠定了基础,同时还用于设计新的有用的生物系统。这些进步与 AI 驱动的科学领域的更广泛趋势是一致的,在这些趋势中,基础模型针对特定领域的挑战进行了定制,解锁了以前无法实现的功能。Evo 2 的贡献标志着 AI 将成为解码、设计和重塑生存世界不可或缺的合作伙伴。 有关 Evo 2 的更多信息,请参阅 Arc Institute 发布的技术报告。Evo 2 也可在 NVIDIA BioNeMo 平台中使用。 致谢 在此, 我们要感谢以下参与所述研究的人员,感谢他们为本文的构思、写作和图形设计做出的杰出贡献 : Garyk Brixi ,斯坦福大学遗传学博士生 与 Arc Institute 合作的机器学习工程师 Jerome Ku Michael Poli ,Liquid AI 的创始科学家兼斯坦福大学计算机科学博士生 Greg Brockman ,OpenAI 联合创始人兼总裁 Eric Nguyen, 斯坦福大学生物工程博士生 Brandon Yang ,Cartesia AI 联合创始人兼斯坦福大学计算机科学博士生 (休假中) Dave Burke ,Arc Institute 首席 技术官 Hani Goodarzi ,Arc Institute 核心研究员,加州大学旧金山分校生物物理学和生物化学副教授 Patrick Hsu ,Arc Institute 联合创始人、生物工程助理教授兼加州大学伯克利分校 Deb 教职研究员 Brian Hie – 斯坦福大学化学工程助理教授、Dieter Schwarz 基金会斯坦福大学数据科学教职人员、Arc Institute 创新研究员、斯坦福大学进化设计实验室负责人
https://developer.nvidia.com/blog/using-networkx-jaccard-similarity-and-cugraph-to-predict-your-next-favorite-movie/
Using NetworkX, Jaccard Similarity, and cuGraph to Predict Your Next Favorite Movie
As the amount of data available to everyone in the world increases, the ability for a consumer to make informed decisions becomes increasingly difficult. Fortunately, large datasets are a beneficial component for recommendation systems, which can make a sometimes overwhelming decision much easier. Graphs are excellent choices for modeling the relationships inherent in the data that fuel recommendation systems, and NetworkX is a very popular option that many data scientists turn to for graph analytics in Python. NetworkX is easy to learn and use, stocked with a wide breadth of graph algorithms, backed by a large and friendly community, and has copious examples available in notebooks, documents, Stack Overflow, and your favorite LLM. However, to the disappointment of countless developers that broke into graph analytics with or even because of NetworkX, it famously falls short in performance at the scales used by typical recommendation systems. This begs the question: Can an effective graph-based recommendation system be written in a few simple lines of Python? More generally, can developers and data scientists have both easy-to-use and high-performance graph analytics? The answer to both questions is, “Yes.” Read on to discover how you can create a simple and effective recommendation system in Python using NetworkX, a dataset of 33M movie reviews, the Jaccard Similarity algorithm, and the NVIDIA cuGraph back-end, which provides the >250x speedup necessary for modern large-scale graph data. The MovieLens dataset Here’s the most important part of the system: the data. The MovieLens dataset is generously made available for public download and is described in more detail in the README file . The full set includes about 331K anonymized users reviewing 87K movies, resulting in 34M ratings. Figure 1. MovieLens data represented as a graph, where the individual ratings easily map to edges between user and movie nodes Extracting recommendations from the data: bipartite graphs and Jaccard Similarity The type of graph created from the MovieLens data is a bipartite graph because there are only two types of nodes: movies and users. The reviews (edges) can only occur between a user and a movie. This makes it particularly easy to apply the Jaccard Similarity algorithm to find similarities between movies. Jaccard Similarity compares pairs of nodes and computes a similarity coefficient using their relationships in the graph. In this case, movies are related to each other based on how users have chosen to watch and review them. Figure 2. Jaccard Similarity computes a similarity coefficient using the sizes of the sets of neighbors for the two nodes being compared Based on the viewing preferences of users, you can see m3 is more similar to m2 than it is to m1, and movies m4 and m1 aren’t similar at all. This system would recommend m2 to someone who likes m3 and wouldn’t recommend m1 to someone who likes m4. NetworkX makes it easy… for smaller graphs Not surprisingly, NetworkX supports the type of analysis described earlier, and it’s quite easy to start seeing results in just a few lines of Python. But as you’ll see, performance becomes a limitation for larger-sized graphs—such as those needed for your movie recommendation system—when using NetworkX without the GPU-accelerated cuGraph backend. I discuss the key pieces of the recommendation system later in this post, but the full source code is available in the /rapidsai/nx-cugraph GitHub repo. 
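To make the coefficient concrete, here is a tiny worked example; the viewer sets are hypothetical stand-ins for the users sketched in Figure 2, not MovieLens data. The Jaccard coefficient of two movies is the number of viewers they share divided by the number of viewers who watched either one:

def jaccard(a, b):
    # |intersection| / |union| of the two neighbor (viewer) sets
    return len(a & b) / len(a | b)

# Hypothetical viewer sets for four movies
m1 = {"u1", "u3"}
m2 = {"u2", "u3", "u4"}
m3 = {"u3", "u4"}
m4 = {"u5"}

print(jaccard(m3, m2))  # 2/3 -- m3 and m2 share most of their viewers
print(jaccard(m3, m1))  # 1/3 -- some overlap, but less
print(jaccard(m4, m1))  # 0.0 -- no viewers in common, so no similarity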
Because the Jaccard Similarity algorithm you’re using doesn’t take edge weights into account, it considers all reviews equal. You don’t want movies with low reviews to be recommended, so filter out all reviews under a certain threshold, which has the side effect of making the graph smaller too. # Create a separate DataFrame containing only "good" reviews (rating >= 3). good_ratings_df = ratings_df[ratings_df["rating"] >= 3] good_user_ids = good_ratings_df["userId"].unique() good_movie_ids = good_ratings_df["movieId"].unique() If you print the sizes of the data you’re working with, you see that your graph of good reviews is approximately 330K nodes and 28M edges, with an average degree (number of neighbors per node) of 84: total number of users: 330975 total number of reviews: 33832162 average number of total reviews/user: 102.22 total number of users with good ratings: 329127 total number of good reviews: 27782577 average number of good reviews/user: 84.41 As mentioned earlier, graphs of this size often present a challenge to NetworkX, but GPU acceleration using the cuGraph backend removes the performance limitations often associated with this much data. However, I’ll continue with a CPU-only environment to demonstrate the default performance. All the following examples were run on a workstation using NetworkX 3.4.2 and a Intel Xeon Platinum 8480CL at 2.0 GHz with 2 TB RAM. Using a NetworkX graph created from users and good movie reviews, pick a user, find one of their highest rated movies, and use Jaccard Similarity to find other movies like it: # Pick a user and one of their highly-rated movies user = good_user_ids[321] user_reviews = good_user_movie_G[user] highest_rated_movie = max( user_reviews, key=lambda n: user_reviews[n].get("rating", 0) ) When you look up the node ID in the movie name map, you see that one of this user’s highest rated movies is the animated film, Mulan : highest rated movie for user=289308 is Mulan (1998), id: 1907, rated: {'rating': 5.0} You can now use Jaccard Similarity to recommend a movie based on the user’s preferences and viewing history: %%time # Run Jaccard Similarity jacc_coeffs = list(nx.jaccard_coefficient(good_user_movie_G, ebunch)) CPU times: user 2min 5s, sys: 15.4 ms, total: 2min 5s Wall time: 2min 14s The Jaccard Similarity computation using the default NetworkX implementation ran for over two minutes. Using these results, you can now provide a recommendation. # Sort by coefficient value, which is the 3rd item in the tuples jacc_coeffs.sort(key=lambda t: t[2], reverse=True) # Create a list of recommendations ordered by "best" to "worst" based on the # Jaccard Similarity coefficients and the movies already seen movies_seen = list(good_user_movie_G.neighbors(user)) recommendations = [mid for (_, mid, _) in jacc_coeffs if mid not in movies_seen] Now you can print the first movie in the sorted list of recommendations: User ID 289308 might like Tarzan (1999) (movie ID: 2687) The code is easy and the results look good, but performance holds us back As you can see, the recommendation seems reasonable; someone who likes Mulan seems likely to also enjoy the 1999 Disney animated film Tarzan . However, if the goal was to provide a service or to analyze hundreds or thousands of movies, the two-minute runtime would have you start looking for an alternative to NetworkX. 
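One detail worth noting before looking at more timings: the nx.jaccard_coefficient call above takes an ebunch of node pairs that the listing doesn't show being built. Here is a minimal sketch of one way it might be constructed, assuming we score the user's highest-rated movie against every other well-reviewed movie (the full source in the /rapidsai/nx-cugraph repo has the definitive version):

# Pair the highest-rated movie with every other movie that received good
# reviews; Jaccard Similarity then scores each candidate pair
ebunch = [
    (highest_rated_movie, movie_id)
    for movie_id in good_movie_ids
    if movie_id != highest_rated_movie
]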
You can see that finding similarities between other movies using this system isn’t any faster: %%time # 1196: "Star Wars: Episode V - The Empire Strikes Back (1980)" print_similar_movies(1196) movies similar to Star Wars: Episode V - The Empire Strikes Back (1980): movieId=260, Star Wars: Episode IV - A New Hope (1977) movieId=1210, Star Wars: Episode VI - Return of the Jedi (1983) movieId=1198, Raiders of the Lost Ark (Indiana Jones and the Raiders of the Lost Ark) (1981) CPU times: user 13min 47s, sys: 71.8 ms, total: 13min 47s Wall time: 11min 30s Here’s another example: %%time # 318: "Shawshank Redemption, The (1994)" print_similar_movies(318) movies similar to "Shawshank Redemption, The (1994)": movieId=296, Pulp Fiction (1994) movieId=593, "Silence of the Lambs, The (1991)" movieId=356, Forrest Gump (1994) CPU times: user 28min 28s, sys: 172 ms, total: 28min 28s Wall time: 16min 49s The quality of the recommendations returned is impressive given that this system is composed of only a few lines of code. However, the runtime performance makes it virtually unusable. ‌As described earlier, finding recommendations based on Shawshank Redemption, The (1994) takes nearly 17 minutes. NVIDIA cuGraph makes it transformatively faster The graph algorithm in this workflow is prohibitively expensive, but by using the NVIDIA cuGraph backend and a compatible GPU, you can dramatically improve performance without changing the code. Jaccard Similarity is supported in nx-cugraph version 25.02 or later. Version 25.02 is available from nightly builds and will be part of future stable releases later this month. Instructions on installing nx-cugraph, as well as other RAPIDS packages, from both nightly and stable channels using conda or pip, are available in the RAPIDS Installation Guide . After being installed, enable nx-cugraph by setting an environment variable: NX_CUGRAPH_AUTOCONFIG=True cuGraph uses the GPU to dramatically accelerate the neighbor lookups and set comparisons needed for the Jaccard Similarity computation. As the graph scales and the number of movies and reviews per movie increases, performance remains almost constant. The best part of the system—the simplicity of the code—does not change, and the results are identical, but performance increases by over 250x for the run that previously took nearly 17 minutes, reducing it to under 4 seconds. Figure 3. Speedup of cuGraph over NetworkX for Jaccard Similarity computation for various movies Software: NetworkX 3.4.2, cuGraph/nx-cugraph 25.02 ; CPU: Intel(R) Xeon(R) Platinum 8480CL @ 2.0GHz 2TB RAM ; GPU: NVIDIA Quadro RTX 8000 48GB RAM Conclusion This post covered a simple and effective recommendation system that’s easy to write in Python using NetworkX. Although there are many other approaches you could take—as covered in What Is a Recommendation System? —few would match the low effort required to start exploring data that graph analysis with NetworkX offers. However, productive and meaningful data exploration requires quick turnaround, and NetworkX has traditionally struggled to scale to larger, real-world problem sizes. The NVIDIA cuGraph backend for NetworkX accelerates the familiar and flexible NetworkX API to also make it performant at scale, generating results in seconds instead of tens of minutes, keeping you focused and productive. You can now continue using NetworkX, the most popular graph analytics library, without concern for scaling issues simply by adding a GPU and the cuGraph backend to your environment. 
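To make the zero-code-change workflow concrete, here is one way to opt in from inside a Python session; this is a sketch that assumes nx-cugraph is installed and a compatible GPU is present, and it mirrors setting NX_CUGRAPH_AUTOCONFIG=True on the command line, so it must run before networkx is first imported:

import os

# Enable the cuGraph backend before NetworkX is imported; the analysis
# code itself stays exactly the same
os.environ["NX_CUGRAPH_AUTOCONFIG"] = "True"

import networkx as nx
# ... build good_user_movie_G and call nx.jaccard_coefficient() as before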
For more information about accelerated graph analysis using NetworkX and NVIDIA cuGraph, see RAPIDS: GPU-Accelerated NetworkX Backend .
https://developer.nvidia.com/zh-cn/blog/using-networkx-jaccard-similarity-and-cugraph-to-predict-your-next-favorite-movie/
使用 NetworkX、Jaccard Similarity 和 cuGraph 预测您下一部最喜欢的电影
随着全球每个人可用的数据量不断增加,消费者做出明智决策的能力也越来越难。幸运的是,对于推荐系统而言,大型数据集是一个非常有用的组件,有时这会让某些情况下的决策变得更加容易。 对于为推荐系统提供支持的数据中固有的关系建模而言,图形是一个很好的选择,而 NetworkX 是许多数据科学家在 Python 中进行图形分析时非常喜欢的选择。NetworkX 易于学习和使用,拥有各种图形算法,并由庞大而友好的社区提供支持,并且在 Notebook、文档、Stack Overflow 和您喜欢的 LLM 中提供了大量示例。然而,令无数开发人员失望的是,他们使用 NetworkX 或甚至因为 NetworkX 而涉足图形分析,但众所周知,它在典型推荐系统使用的规模上的性能表现不佳。 这就引出了一个问题:能否用 Python 的几行简单代码编写有效的基于图形的推荐系统?更一般地说,开发者和数据科学家能否同时进行易于使用的高性能图形分析? 这两个问题的答案都是“Yes” 请继续阅读,了解如何使用 NetworkX、Jaccard Similarity 算法和 NVIDIA cuGraph 后端 (可将现代大规模图形数据所需的速度提高 250 倍以上),在 Python 中创建简单有效的推荐系统,使用 3300 万条电影评论的数据集。 MovieLens 数据集 我们先从系统中最重要的部分开始:数据。MovieLens 数据集 1 可供公开 下载 , README 文件 中有更详细的说明。该系列包括大约 331k 匿名用户,他们在观看 87k 部电影,获得了 34M 的评分。 图 1、MovieLens 数据可以表示为一个图形,其中各个评分可轻松映射到用户和电影节点之间的边缘。 从数据中提取建议:二分图和 Jaccard Similarity 我们根据 MovieLens 数据创建的图形类型是二部图,因为只有两种类型的节点:电影节点和用户节点,并且评论(边缘)只能在用户和电影之间发生。这使得应用 Jaccard Similarity 算法来查找电影之间的相似性变得特别容易。Jaccard Similarity 比较节点对,并使用它们在图形中的关系计算相似性系数。在这种情况下,电影根据用户选择的观看和审查方式相互关联。 图 3、Jaccard Similarity 使用所比较的两个节点的近邻集的大小来计算相似性系数。根据用户的观看偏好,我们可以看到 m3 与 m2 更相似,电影 m4 和 m1 完全不相似。此系统会向喜欢 m3 的用户推荐 m2,而不会向喜欢 m4 的用户推荐 m1。 NetworkX 可以轻松处理较小的图形 不足为奇的是,NetworkX 支持我们上述的分析类型,而且只需使用几行 Python 代码即可轻松查看结果。但正如我们将看到的,当使用无 GPU 加速的 cuGraph 后端的 NetworkX 时,大型图形 (例如我们的电影推荐系统所需的图形) 的性能会受到限制。 我们将在下方查看推荐系统的关键部分,但可在 此处 获取完整源代码。 由于我们使用的 Jaccard Similarity 算法未考虑边缘权重,因此会将所有评论视为相同。我们不希望推荐好评度低的电影,因此我们会过滤掉某个值下的所有好评度,这会导致图形也变小。 # Create a separate DataFrame containing only "good" reviews (rating >= 3). good_ratings_df = ratings_df[ratings_df["rating"] >= 3] good_user_ids = good_ratings_df["userId"].unique() good_movie_ids = good_ratings_df["movieId"].unique() 如果我们打印正在处理的数据的大小,我们会看到好评图大约有 330k 个节点,28M 个边缘,平均度数 (每个节点的近邻数) 为 84: total number of users: 330975 total number of reviews: 33832162 average number of total reviews/user: 102.22 total number of users with good ratings: 329127 total number of good reviews: 27782577 average number of good reviews/user: 84.41 如上所述,这种规模的图形通常会给 NetworkX 带来挑战,但使用 cuGraph 后端的 GPU 加速消除了通常与如此多的数据相关的性能限制。不过,我们将继续使用 CPU 环境来演示默认性能。 注意 以下所有示例均在使用 NetworkX 3.4.2 和 Intel(R) Xeon(R) Platinum [email protected](2TB RAM)的工作站上运行 使用由用户创建的 NetworkX 图形和优秀的电影评论,我们来选择一个用户,找到他们评分最高的电影之一,并使用 Jaccard Similarity 找到类似的其他电影。 # Pick a user and one of their highly-rated movies user = good_user_ids[321] user_reviews = good_user_movie_G[user] highest_rated_movie = max( user_reviews, key=lambda n: user_reviews[n].get("rating", 0) ) 当我们在电影名称贴图中查找节点 ID 时,我们会发现该用户评分最高的电影之一是动画电影“Mulan”: highest rated movie for user=289308 is Mulan (1998), id: 1907, rated: {'rating': 5.0} 我们现在可以使用 Jaccard Similarity 根据用户的偏好和观看历史记录来推荐电影: %%time # Run Jaccard Similarity jacc_coeffs = list(nx.jaccard_coefficient(good_user_movie_G, ebunch)) CPU times: user 2min 5s, sys: 15.4 ms, total: 2min 5s Wall time: 2min 14s 使用默认 NetworkX 实现的 Jaccard 相似性计算运行了两分钟以上。根据这些结果,我们现在可以提供推荐。 # Sort by coefficient value, which is the 3rd item in the tuples jacc_coeffs.sort(key=lambda t: t[2], reverse=True) # Create a list of recommendations ordered by "best" to "worst" based on the # Jaccard Similarity coefficients and the movies already seen movies_seen = list(good_user_movie_G.neighbors(user)) recommendations = [mid for (_, mid, _) in jacc_coeffs if mid not in movies_seen] 现在,我们只需在已排序的推荐列表中打印出第一部电影: User ID 289308 might like Tarzan (1999) (movie ID: 2687) 代码很简单,结果看起来不错,但性能却拖累了我们 如我们所见,这个推荐似乎是合理的;喜欢“Mulan”的人似乎也喜欢 1999 年的迪士尼动画电影“Tarzan”。 但是,如果我们的目标是提供服务,或分析数百乃至数千部电影,那么两分钟的运行时间就能让我们开始寻找 NetworkX 的替代方案。我们可以看到,使用此系统查找其他电影相似点的速度并没有加快: %%time # 1196: "Star Wars: Episode V - The Empire Strikes Back 
(1980)" print_similar_movies(1196) movies similar to Star Wars: Episode V - The Empire Strikes Back (1980): movieId=260, Star Wars: Episode IV - A New Hope (1977) movieId=1210, Star Wars: Episode VI - Return of the Jedi (1983) movieId=1198, Raiders of the Lost Ark (Indiana Jones and the Raiders of the Lost Ark) (1981) CPU times: user 13min 47s, sys: 71.8 ms, total: 13min 47s Wall time: 11min 30s %%time # 318: "Shawshank Redemption, The (1994)" print_similar_movies(318) movies similar to "Shawshank Redemption, The (1994)": movieId=296, Pulp Fiction (1994) movieId=593, "Silence of the Lambs, The (1991)" movieId=356, Forrest Gump (1994) CPU times: user 28min 28s, sys: 172 ms, total: 28min 28s Wall time: 16min 49s 鉴于此系统仅由几行代码组成,因此所返回的推荐内容的质量令人印象深刻。但是,运行时性能使其几乎无法使用。如上所述,根据“Shawshank Redemption, The (1994)”查找推荐内容大约需要 17 分钟。 NVIDIA cuGraph 使其变革性地加快 上述工作流程中的图形算法成本高昂,但通过使用 NVIDIA cuGraph 后端和兼容的 GPU,我们可以在不更改代码的情况下显著提高性能。 nx-cugraph 版本 25.02 或更高版本支持 Jaccard Similarity。版本 25.02 可在 nightly builds 中使用,并将于本月晚些时候纳入未来的稳定版本中。有关如何使用 conda 或 pip 从 nightly 和 stable 通道安装 nx-cugraph 以及其他 RAPIDS 包的说明,请参阅 RAPIDS Installation Guide 。 安装后,只需设置环境变量即可启用 nx-cugraph: NX_CUGRAPH_AUTOCONFIG=True cuGraph 利用 GPU 显著加速近邻查找,并设置 Jaccard 相似性计算所需的比较结果。此外,随着图形规模以及每部电影的电影和评论数量的增加,性能几乎保持不变。 该系统最优秀的部分,即代码的简单性,并没有改变,结果也是一样的,但在过去近 17 分钟的运行中,性能提高了 250 倍以上,缩短到 4 秒以内。 图 4、图表显示了各种电影的 cuGraph over NetworkX for Jaccard Similarity 计算速度 。 软件:NetworkX 3.4.2,cuGraph/nx-cugraph 25.02 CPU:Intel(R) Xeon(R) Platinum [email protected] 2TB RAM GPU:NVIDIA Quadro RTX 8000 48GB RAM 结束语 这篇博文介绍了一个简单而有效的推荐系统,它可以使用 NetworkX 轻松地用 Python 编写。虽然我们可以采用许多其他方法(如此处所述),但很少有方法能够做到与开始探索 NetworkX 图形分析提供的数据所需的工作量不相上下。然而,高效和有意义的数据探索需要快速的周转,而 NetworkX 传统上一直难以扩展到更大的实际问题规模。 适用于 NetworkX 的 NVIDIA cuGraph 后端可对熟悉且灵活的 NetworkX API 进行加速,还可大规模提升其性能,在几秒钟 (而非数十分钟) 内生成结果,从而让您专注工作并高效工作。现在,用户只需向环境中添加 GPU 和 cuGraph 后端,即可继续使用热门的图形分析库 NetworkX,而无需担心扩展问题。 如需了解有关使用 NetworkX 和 NVIDIA cuGraph 进行加速图形分析的更多信息,请访问 https://rapids.ai/nx-cugraph 。
https://developer.nvidia.com/blog/nvidia-open-gpu-datacenter-drivers-for-rhel9-signed-by-red-hat/
NVIDIA Open GPU Datacenter Drivers for RHEL9 Signed by Red Hat
NVIDIA and Red Hat have partnered to bring continued improvements to the precompiled NVIDIA Driver introduced in 2020 . Last month, NVIDIA announced that the open GPU driver modules will become the default recommended way to enable NVIDIA graphics hardware. Today, NVIDIA announced that Red Hat is now compiling and signing the NVIDIA open GPU kernel modules to further streamline the usage for the Red Hat Enterprise Linux 9 OS and to provide a way forward for supporting Confidential Computing with NVIDIA hardware in the datacenter. The signed open GPU drivers can now be installed and used without any extra key enrollment configuration. As with the current legacy precompiled packages, only the latest RHEL 9 kernel is supported at the time of the driver update. Starting with RHEL 9.5, NVIDIA now offers a tech preview repository available for trial until April 30, 2025. This repository enables the usage of the open GPU driver and legacy driver (with caveats) using modularity streams . After the tech preview ends, the signed kernel modules will be available in the NVIDIA Compute package repositories with the rest of the CUDA platform: CUDA-X acceleration libraries, CUDA-optimized applications, and frameworks. For more information about modularity streams and how they work with the precompiled graphics driver, see Streamlining NVIDIA Driver Deployment on RHEL 8 with Modularity Streams . Activating the tech preview To use the signed NVIDIA open GPU driver modules, make sure that the Red Hat repositories are enabled, including RHEL9 AppStream, RHEL9 BaseOS, and RHEL9 CRB: $ subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms $ subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms $ subscription-manager repos --enable=codeready-builder-for-rhel-9-x86_64-rpms Add the CUDA network repository: $ sudo dnf config-manager --add-repo=https://developer.download.nvidia.com/compute/cuda/preview/repos/rhel9/x86_64/ When the tech preview repository is activated, view the supported modularity streams: $ dnf module list nvidia-driver Figure 1. Example output list from the tech preview tool of the NVIDIA modularity streams installed The tech preview provides the following new modularity streams: latest-open-gpu <driver_version>-open-gpu Install the latest stream to opt into the signed precompiled packages: $ sudo dnf module install nvidia-driver:latest-open-gpu Reboot and enjoy! Figure 2. Example nvidia-smi command output showing the NVIDIA CUDA driver version running and system parameters Important caveat with the tech preview repository The legacy precompiled driver is also listed in the modularity streams: nvidia-driver:latest nvidia-driver:latest-<major_version> nvidia-driver:<major_version> These drivers have been signed by NVIDIA and require signing key enrollment. They install just fine without any warnings but upon reboot the NVIDIA graphics driver will not be available: $ nvidia-smi NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running. 
Upon further inspection, the problem becomes clear: # insmod /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko insmod: ERROR: could not insert module /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko: Key was rejected by service This happens because secure boot is enabled by default: $ sudo mokutil --sb-state  SecureBoot enabled This module has been signed by NVIDIA using a key that is not in the chain of trust: $ modinfo /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko | grep signer signer: NVIDIA For more information about enrolling the NVIDIA signing key with mokutil, see UEFI Secure Boot on GitHub. Installation Install the tech preview repository on a host with secure boot enabled: $ sudo dnf config-manager --add-repo=https://developer.download.nvidia.com/compute/cuda/preview/repos/rhel9/x86_64/ $ sudo dnf module install nvidia-driver:latest-open-gpu $ sudo reboot If you discover any issues with packaging, please report those problems to github.com/NVIDIA/yum-packaging-precompiled-kmod/issues . This repository will be active until March 31, 2025. Summary Now with signed packages from NVIDIA and Red Hat, secure boot with NVIDIA on RHEL 9 is even better. You can look forward to more developments in this space. The driver source code is available in the /NVIDIA/open-gpu-kernel-modules GitHub repo. Packaging templates and instructions are provided on GitHub to enable you to maintain your own precompiled kernel module packages for custom kernels and derivative Linux distros: /NVIDIA/yum-packaging-precompiled-kmod /NVIDIA/yum-packaging-nvidia-plugin To give feedback, send comments or report driver bugs . If you are not already a member, join the NVIDIA Developer Program .
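As a quick post-reboot sanity check, the state discussed above can also be inspected programmatically. The following is a minimal sketch, not from the original post, that shells out to the same mokutil tool and reads /proc/modules; it assumes a standard RHEL 9 host where mokutil is installed.

# Minimal sketch (assumes mokutil is available; may require elevated privileges):
# report the Secure Boot state and whether the nvidia kernel module is loaded.
import pathlib
import subprocess

sb_state = subprocess.run(["mokutil", "--sb-state"], capture_output=True, text=True)
print(sb_state.stdout.strip())  # for example: "SecureBoot enabled"

modules = pathlib.Path("/proc/modules").read_text().splitlines()
nvidia_loaded = any(line.split()[0] == "nvidia" for line in modules)
print("nvidia module loaded:", nvidia_loaded)

If Secure Boot is enabled and the module is not loaded, the key-enrollment caveat described above is the most likely cause.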
https://developer.nvidia.com/zh-cn/blog/nvidia-open-gpu-datacenter-drivers-for-rhel9-signed-by-red-hat/
适用于 RHEL9 的 NVIDIA 开放 GPU 数据中心驱动,由 Red Hat 签署
NVIDIA 和 Red Hat 合作,对 2020 年推出的预编译 NVIDIA Driver 进行持续改进。上个月,NVIDIA 宣布 ,开放的 GPU 驱动模组将成为启用 NVIDIA 图形硬件的默认推荐方式。 今天,NVIDIA 宣布 Red Hat 正在编译并签署 NVIDIA 开放 GPU 内核模块 ,以进一步简化 Red Hat Enterprise Linux 9 操作系统的使用,并为在数据中心使用 NVIDIA 硬件支持 机密计算 提供前进方向。 现在无需任何额外的密钥注册配置即可安装和使用经过签名的开放 GPU 驱动。与当前的旧版预编译软件包一样,在驱动更新时仅支持最新的 RHEL 9 内核。 从 RHEL 9.5 开始,NVIDIA 现在提供技术预览库,可在 2025 年 4 月 30 日之前试用。此库允许使用 模块化流使 用开放式 GPU 驱动程序和旧版驱动程序(附带说明)。 技术预览结束后,已签名的内核模块将在 NVIDIA Compute 软件包存储库中与 CUDA 平台的其他部分一起提供:CUDA-X 加速库、CUDA 优化的应用和框架。 有关 模块化流及其如何与预编译图形驱动程序配合使用的更多信息 ,请参阅 Streamlining NVIDIA Driver Deployment on RHEL 8 with Modularity Streams 。 激活技术预览 要使用经过签名的 NVIDIA 开源 GPU 驱动模块,请确保已启用 Red Hat 资源库,包括 RHEL9 AppStream、RHEL9 BaseOS 和 RHEL9 CRB: $ subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms $ subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms $ subscription-manager repos --enable=codeready-builder-for-rhel-9-x86_64-rpms 添加 CUDA 网络存储库: $ sudo dnf config-manager --add-repo=https://developer.download.nvidia.com/compute/cuda/preview/repos/rhel9/x86_64/ 激活技术预览库后,请查看支持的模块化流: $ dnf module list nvidia-driver 图 1、已安装的 NVIDIA 模块化流的技术预览工具中的输出列表示例 技术预览提供了以下新的模块化流: latest-open-gpu <driver_version>-open-gpu 安装最新的流,选择使用经过签名的预编译软件包: $ sudo dnf module install nvidia-driver:latest-open-gpu 重启并尽情享受吧! 图 2、显示 正在运行的 NVIDIA CUDA 驱动程序版本和系统参数的 nvidia-smi 命令输出示例 技术预览库的重要注意事项 旧版预编译驱动也列在模块化流中: nvidia-driver:latest nvidia-driver:latest-<major_version> nvidia-driver:<major_version> 这些驱动已由 NVIDIA 签名,需要签名密钥注册。它们可以正常安装,没有任何警告,但在重启后,NVIDIA 图形驱动程序将不可用: $ nvidia-smi NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running. 经过进一步检查,问题变得明显: # insmod /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko insmod: ERROR: could not insert module /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko: Key was rejected by service 发生这种情况的原因是默认情况下启用了 secure boot: $ sudo mokutil --sb-state SecureBoot enabled 本模块由 NVIDIA 使用不在信任链中的密钥进行签名: $ modinfo /usr/lib/modules/5.14.0-503.16.1.el9_4.x86_64/extra/drivers/video/nvidia/nvidia.ko | grep signer signer: NVIDIA 有关使用 mokutil 注册 NVIDIA 签名密钥的更多信息,请参阅 GitHub 上的 UEFI Secure Boot。 安装 在已启用安全启动的主机上安装技术预览库: $ sudo dnf config-manager --add-repo=https://developer.download.nvidia.com/compute/cuda/preview/repos/rhel9/x86_64/ $ sudo dnf module install nvidia-driver:latest-open-gpu $ sudo reboot 如果您发现任何打包问题,请向 github.com/NVIDIA/yum-packaging-precompiled-kmod/issues 报告这些问题。 此库的有效期至 2025 年 3 月 31 日。 总结 现在,借助 NVIDIA 和 Red Hat 的签名软件包,在 RHEL 9 上使用 NVIDIA 进行安全启动会更好。您可以期待此领域的更多开发。 您可在 /NVIDIA/open-gpu-kernel-modules GitHub 仓库中获取驱动源代码。 GitHub 上提供了打包模板和说明,以便您为自定义内核和衍生 Linux 发行版维护自己的预编译内核模块包: /NVIDIA/yum-packaging-precompiled-kmod /NVIDIA/yum-packaging-nvidia-plugin 如要提供反馈,请 发送评论 或 报告驱动错误 。如果您还不是会员,请 加入 NVIDIA Developer Program 。
https://developer.nvidia.com/blog/get-started-with-gpu-acceleration-for-data-science/
Get Started with GPU Acceleration for Data Science
In data science, operational efficiency is key to handling increasingly complex and large datasets. GPU acceleration has become essential for modern workflows, offering significant performance improvements. RAPIDS is a suite of open-source libraries and frameworks developed by NVIDIA, designed to accelerate data science pipelines using GPUs with minimal code changes. Providing tools like cuDF for data manipulation, cuML for machine learning, and cuGraph for graph analytics, RAPIDS enables seamless integration with existing Python libraries, making it easier for data scientists to achieve faster and more efficient processing. This post shares tips for transitioning from CPU data science libraries to GPU-accelerated workflows, especially for experienced data scientists. Setting up RAPIDS on desktop or cloud infrastructure Getting started with RAPIDS is straightforward, but it does have several dependencies. The recommended approach is to follow the official RAPIDS Installation Guide, which provides detailed instructions for local installations. You have multiple paths to install the framework: through pip install, a Docker image, or through an environment such as Conda. To set up RAPIDS in a cloud environment, see the RAPIDS Cloud Deployment Guide. Before installing, ensure compatibility by checking your CUDA version against the supported RAPIDS versions on the installation page. cuDF and GPU acceleration for pandas An advantage of RAPIDS lies in its modular architecture, which empowers users to adopt specific libraries designed for GPU-accelerated workflows. Among these, cuDF stands out as a powerful tool for seamlessly transitioning from traditional pandas-based workflows to GPU-optimized data processing, and it requires zero code changes. To get started, enable the cuDF extension before importing pandas so that data loading and all subsequent operations execute on the GPU. By loading the RAPIDS extension with %load_ext cudf.pandas, you can effortlessly integrate cuDF DataFrames within existing workflows, preserving the familiar syntax and structure of pandas. Like pandas, cuDF pandas supports file formats such as .csv, .json, .pickle, and .parquet, enabling GPU-accelerated data manipulation across all of them. The following code is an example of how to enable the cudf.pandas extension and concatenate two .csv files:
%load_ext cudf.pandas
import pandas as pd
import cupy as cp

train = pd.read_csv('./Titanic/train.csv')
test = pd.read_csv('./Titanic/test.csv')
concat = pd.concat([train, test], axis = 0)
Loading the cudf.pandas extension enables the execution of familiar pandas operations, such as filtering, grouping, and merging, on GPUs without requiring code changes or rewrites. The cuDF accelerator is compatible with the pandas API to ensure a smooth transition from CPU to GPU while delivering substantial computational speedups.
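The same acceleration also works outside of a notebook. The following is a minimal sketch for a standalone Python script; the Titanic CSV path is the same illustrative file used above, and it assumes cuDF is installed as described in the setup section.

# Minimal sketch for a plain Python script (no notebook magics): install the
# cuDF accelerator before importing pandas so later calls run on the GPU.
import cudf.pandas
cudf.pandas.install()

import pandas as pd  # now proxied by cuDF where supported

train = pd.read_csv('./Titanic/train.csv')
print(train.groupby('Pclass')['Fare'].mean())

In the notebook workflow that follows, the Titanic data is first replicated to one million rows so that the later timing comparisons run at a meaningful scale: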
target_rows = 1_000_000 repeats = -(-target_rows // len(train)) # Ceiling division train_df = pd.concat([train] * repeats, ignore_index=True).head(target_rows) print(train_df.shape) # (1000000, 2) repeats = -(-target_rows // len(test)) # Ceiling division test_df = pd.concat([test] * repeats, ignore_index=True).head(target_rows) print(test_df.shape) # (1000000, 2) combine = [train_df, test_df] (1000000, 12) (1000000, 11) filtered_df = train_df[(train_df['Age'] > 30) & (train_df['Fare'] > 50)] grouped_df = train_df.groupby('Embarked')[['Fare', 'Age']].mean() additional_info = pd.DataFrame({ 'PassengerId': [1, 2, 3], 'VIP_Status': ['No', 'Yes', 'No'] }) merged_df = train_df.merge(additional_info, on='PassengerId', how='left') Decoding performance: CPU and GPU runtime metrics in action In data science, performance optimization is not just about speed, but also understanding how computational resources are utilized. It involves analyzing how operations leverage CPU and GPU architectures, identifying inefficiencies, and implementing strategies to enhance workflow efficiency. Performance profiling tools like %cudf.pandas.profile play a key role by offering a detailed examination of code execution. The following execution result breaks down each function, and distinguishes between tasks processed on the CPU from those accelerated on the GPU: %%cudf.pandas.profile train_df[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean().sort_values(by='Survived', ascending=False) Pclass Survived 0 1 0.629592 1 2 0.472810 2 3 0.242378 Total time elapsed: 5.131 seconds 5 GPU function calls in 5.020 seconds 0 CPU function calls in 0.000 seconds Stats +------------------------+------------+-------------+------------+------------+-------------+------------+ | Function | GPU ncalls | GPU cumtime | GPU percall | CPU ncalls | CPU cumtime | CPU percall | +------------------------+------------+-------------+------------+------------+-------------+------------+ | DataFrame.__getitem__ | 1 | 5.000 | 5.000 | 0 | 0.000 | 0.000 | | DataFrame.groupby | 1 | 0.000 | 0.000 | 0 | 0.000 | 0.000 | | GroupBy.mean | 1 | 0.007 | 0.007 | 0 | 0.000 | 0.000 | | DataFrame.sort_values | 1 | 0.002 | 0.002 | 0 | 0.000 | 0.000 | | DataFrame.__repr__ | 1 | 0.011 | 0.011 | 0 | 0.000 | 0.000 | +------------------------+------------+-------------+------------+------------+-------------+------------+ This granularity helps pinpoint operations that inadvertently revert to CPU execution, a common occurrence due to unsupported cuDF functions, incompatible data types, or suboptimal memory handling. It is crucial to identify these issues because such fallbacks can significantly impact overall performance. To learn more about this loader, see Mastering the cudf.pandas Profiler for GPU Acceleration . Additionally, you can use Python magic commands like %%time and %%timeit to enable benchmarks of specific code blocks that facilitate direct comparisons of runtime between pandas (CPU) and the cuDF accelerator for pandas (GPU). These tools provide insights into the efficiency gains achieved through GPU acceleration. Benchmarking with %%time provides a clear comparison of execution times between CPU and GPU environments, highlighting the efficiency gains achievable through parallel processing. 
%%time print("Before", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape) train_df = train_df.drop(['Ticket', 'Cabin'], axis=1) test_df = test_df.drop(['Ticket', 'Cabin'], axis=1) combine = [train_df, test_df] print("After", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape) CPU output: Before (999702, 12) (999856, 11) (999702, 12) (999856, 11) After (999702, 10) (999856, 9) (999702, 10) (999856, 9) CPU times: user 56.6 ms, sys: 8.08 ms, total: 64.7 ms Wall time: 63.3 ms GPU output: Before (999702, 12) (999856, 11) (999702, 12) (999856, 11) After (999702, 10) (999856, 9) (999702, 10) (999856, 9) CPU times: user 6.65 ms, sys: 0 ns, total: 6.65 ms Wall time: 5.46 ms The %%time example delivers a 10x speedup in execution time, reducing wall time from 63.3 milliseconds (ms) on the CPU to 5.46 ms on the GPU. This highlights the efficiency of GPU acceleration with cuDF pandas for large-scale data operations. Further insights are gained using %%timeit , which performs repeated executions to measure consistency and reliability in performance metrics. %%timeit for dataset in combine: dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\\.', expand=False) pd.crosstab(train_df['Title'], train_df['Sex']) CPU output: 1.11 s ± 7.49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) GPU output: 89.6 ms ± 959 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) The %%timeit example gives us a 10x performance improvement with GPU acceleration, reducing the runtime from 1.11 seconds per loop on the CPU to 89.6 ms per loop on the GPU. This highlights the efficiency of cuDF pandas for intensive data operations. Verifying GPU utilization When working with different data types, it is important to verify whether your system is utilizing the GPU effectively. You can check whether arrays are being processed on the CPU or GPU by using the familiar type command to differentiate between NumPy and CuPy arrays. type(guess_ages) cupy.ndarray If the output is np.array , the data is being processed on the CPU. If the output is cupy.ndarray , the data is being processed on the GPU. This quick check ensures that your workflows are leveraging GPU resources where intended. Secondly, by simply using the print command, you can confirm whether the GPU is being utilized and ensure that a cuDF DataFrame is being processed. The output specifies whether the fast path (cuDF) or slow path (pandas) is in use. This straightforward check provides an easy way to validate that the GPU is active for accelerating data operations. print(pd) <module 'pandas' (ModuleAccelerator(fast=cudf, slow=pandas))> Lastly, commands such as df.info can be used to inspect the structure of cuDF DataFrame and confirm that computations are GPU-accelerated. This helps verify whether operations are running on the GPU or falling back to the CPU. 
train_df.info() <class 'cudf.core.dataframe.DataFrame'> RangeIndex: 1000000 entries, 0 to 999999 Data columns (total 9 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Survived 1000000 non-null int64 1 Pclass 1000000 non-null int64 2 Sex 1000000 non-null int64 3 Age 1000000 non-null float64 4 SibSp 1000000 non-null int64 5 Parch 1000000 non-null int64 6 Fare 1000000 non-null float64 7 Embarked 997755 non-null object 8 Title 1000000 non-null int64 dtypes: float64(2), int64(6), object(1) memory usage: 65.9+ MB Conclusion RAPIDS, through tools like cuDF pandas, provides a seamless transition from traditional CPU-based data workflows to GPU-accelerated processing, offering significant performance improvements. By leveraging features such as %%time , %%timeit , and profiling tools like %%cudf.pandas.profile , you can measure and optimize runtime efficiency. The ability to inspect GPU utilization through simple commands like type , print(pd) , and df.info ensures that workflows are leveraging GPU resources effectively. To try the data operations detailed in this post, check out the accompanying Jupyter Notebook . To learn more about GPU-accelerated data science, see 10 Minutes to Data Science: Transitioning Between RAPIDS cuDF and CuPy Libraries and RAPIDS cuDF Instantly Accelerates pandas Up to 50x on Google Colab . Join us for GTC 2025 and register for the Data Science Track to gain deeper insights. Recommended sessions include: Accelerating Data Science with RAPIDS and NVIDIA GPUs Scaling Machine Learning Workflows with RAPIDS To build expertise with RAPIDS, check out the following hands-on workshops at GTC: Accelerate Clustering Algorithms to Achieve the Highest Performance Analyzing and Visualizing Large Data Interactively Using Accelerated Computing
https://developer.nvidia.com/zh-cn/blog/get-started-with-gpu-acceleration-for-data-science/
适用于数据科学的 GPU 加速入门
在数据科学领域,运营效率是处理日益复杂和大型数据集的关键。GPU 加速已成为现代工作流程的关键,可显著提高性能。 RAPIDS 是由 NVIDIA 开发的一套开源库和框架,旨在使用 GPU 以尽可能减少代码更改来加速数据科学流程。RAPIDS 提供用于数据操作的 cuDF 、用于机器学习的 cuML 和用于图形分析的 cuGraph 等工具,可实现与现有 Python 库的无缝集成,使数据科学家更容易实现更快、更高效的处理。 本文分享了从 CPU 数据科学库过渡到 GPU 加速工作流程的技巧,特别适合经验丰富的数据科学家。 在桌面或云基础架构上设置 RAPIDS 开始使用 RAPIDS 非常简单,但它确实有几个依赖项。推荐的方法是遵循官方的 RAPIDS 安装指南 ,该指南提供了详细的本地安装说明。您有多种安装框架的路径:通过 pip install、Docker 镜像,或通过 Conda 等环境。要在云环境中设置 RAPIDS,请参阅 RAPIDS 云部署指南 。安装前,请检查安装页面上的 CUDA 版本和受支持的 RAPIDS 版本,确保兼容性。 适用于 pandas 的 cuDF 和 GPU 加速 RAPIDS 的一个优势在于其模块化架构,使用户能够采用专为 GPU 加速工作流程设计的特定库。其中,cuDF 作为一款功能强大的工具脱颖而出,可从基于 pandas 的传统工作流程无缝过渡到 GPU 优化的数据处理流程,并且无需更改代码。 首先,请确保在导入 pandas 之前启用 cuDF 扩展,以便在 GPU 上执行数据导入和剩余操作。通过使用 %load_ext cudf.pandas 加载 RAPIDS 扩展程序,您可以轻松地将 cuDF DataFrame 集成到现有工作流程中,从而保留熟悉的 pandas 语法和结构。 与 pandas 类似, cuDF pandas 支持不同的文件格式,例如 .csv、.json、.pickle、.paraquet,因此支持 GPU 加速的数据操作。 以下代码是如何启用 cudf.pandas 扩展名并连接两个 .csv 文件的示例: %load_ext cudf.pandas import pandas as pd import cupy as cp train = pd.read_csv('./Titanic/train.csv') test = pd.read_csv('./Titanic/test.csv') concat = pd.concat([train, test], axis = 0) 通过加载 cudf.pandas 扩展程序,无需更改或重写代码,即可在 GPU 上执行熟悉的 pandas 操作,例如过滤、分组和合并。cuDF 加速器与 pandas API 兼容,可确保从 CPU 到 GPU 的平稳过渡,同时大幅提高计算速度。 target_rows = 1_000_000 repeats = -(-target_rows // len(train)) # Ceiling division train_df = pd.concat([train] * repeats, ignore_index=True).head(target_rows) print(train_df.shape) # (1000000, 2) repeats = -(-target_rows // len(test)) # Ceiling division test_df = pd.concat([test] * repeats, ignore_index=True).head(target_rows) print(test_df.shape) # (1000000, 2) combine = [train_df, test_df] (1000000, 12) (1000000, 11) filtered_df = train_df[(train_df['Age'] > 30) & (train_df['Fare'] > 50)] grouped_df = train_df.groupby('Embarked')[['Fare', 'Age']].mean() additional_info = pd.DataFrame({ 'PassengerId': [1, 2, 3], 'VIP_Status': ['No', 'Yes', 'No'] }) merged_df = train_df.merge(additional_info, on='PassengerId', how='left') 解码性能:CPU 和 GPU 运行时指标的实际应用 在数据科学中,性能优化不仅涉及速度,还涉及了解计算资源的利用方式。其中包括分析运营如何利用 CPU 和 GPU 架构、识别效率低下问题,以及实施旨在提高工作流程效率的策略。 %cudf.pandas.profile 等性能分析工具通过详细检查代码执行情况发挥着关键作用。以下执行结果会对每个函数进行分解,并区分在 CPU 上处理的任务与在 GPU 上加速的任务: %%cudf.pandas.profile train_df[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean().sort_values(by='Survived', ascending=False) Pclass Survived 0 1 0.629592 1 2 0.472810 2 3 0.242378 Total time elapsed: 5.131 seconds 5 GPU function calls in 5.020 seconds 0 CPU function calls in 0.000 seconds Stats +------------------------+------------+-------------+------------+------------+-------------+------------+ | Function | GPU ncalls | GPU cumtime | GPU percall | CPU ncalls | CPU cumtime | CPU percall | +------------------------+------------+-------------+------------+------------+-------------+------------+ | DataFrame.__getitem__ | 1 | 5.000 | 5.000 | 0 | 0.000 | 0.000 | | DataFrame.groupby | 1 | 0.000 | 0.000 | 0 | 0.000 | 0.000 | | GroupBy.mean | 1 | 0.007 | 0.007 | 0 | 0.000 | 0.000 | | DataFrame.sort_values | 1 | 0.002 | 0.002 | 0 | 0.000 | 0.000 | | DataFrame.__repr__ | 1 | 0.011 | 0.011 | 0 | 0.000 | 0.000 | +------------------------+------------+-------------+------------+------------+-------------+------------+ 这种粒度有助于查明无意中恢复到 CPU 执行的操作,这是由于不受支持的 cuDF 函数、不兼容的数据类型或次优内存处理而常见的情况。识别这些问题至关重要,因为此类回退会严重影响整体性能。如需详细了解此加载程序,请参阅 Mastering cudf.pandas Profiler for GPU Acceleration 。 此外,您可以使用 Python magic 命令,如 %%time 和 %%timeit ,来启用特定代码块的基准测试,以便直接比较 pandas(CPU)和 cuDF 加速器(GPU)之间的运行时。这些工具可让您深入了解通过 GPU 加速实现的效率提升。通过使用 %%time 进行基准测试,可以清楚地比较 CPU 和 GPU 
环境之间的执行时间,从而凸显通过并行处理实现的效率提升。 %%time print("Before", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape) train_df = train_df.drop(['Ticket', 'Cabin'], axis=1) test_df = test_df.drop(['Ticket', 'Cabin'], axis=1) combine = [train_df, test_df] print("After", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape) CPU output: Before (999702, 12) (999856, 11) (999702, 12) (999856, 11) After (999702, 10) (999856, 9) (999702, 10) (999856, 9) CPU times: user 56.6 ms, sys: 8.08 ms, total: 64.7 ms Wall time: 63.3 ms GPU output: Before (999702, 12) (999856, 11) (999702, 12) (999856, 11) After (999702, 10) (999856, 9) (999702, 10) (999856, 9) CPU times: user 6.65 ms, sys: 0 ns, total: 6.65 ms Wall time: 5.46 ms %%time 示例可将执行时间提高 10 倍,将墙面时间从 CPU 上的 63.3 毫秒 (ms) 缩短到 GPU 上的 5.46 毫秒。这凸显了使用 cuDF pandas 进行 GPU 加速在大规模数据操作中的效率。您可以使用 %%timeit 获得更多见解,它执行重复执行来测量性能指标中的一致性和可靠性。 %%timeit for dataset in combine: dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\\.', expand=False) pd.crosstab(train_df['Title'], train_df['Sex']) CPU output: 1.11 s ± 7.49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) GPU output: 89.6 ms ± 959 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) 在 GPU 加速方面, %%timeit 示例将性能提升了 10 倍,将运行时间从 CPU 上的每循环 1.11 秒缩短到 GPU 上的每循环 89.6 毫秒。这凸显了 cuDF pandas 在密集型数据操作中的效率。 验证 GPU 利用率 在处理不同的数据类型时,请务必验证您的系统是否有效利用了 GPU。您可以使用熟悉的 type 命令来区分 NumPy 和 CuPy 数组,检查数组是在 CPU 还是 GPU 上处理。 type(guess_ages) cupy.ndarray 如果输出为 np.array ,则数据将在 CPU 上处理。如果输出为 cupy.ndarray ,则数据将在 GPU 上处理。此快速检查可确保您的工作流程按预期利用 GPU 资源。 其次,只需使用 print 命令,即可确认是否正在利用 GPU,并确保正在处理 cuDF DataFrame。输出指定使用的是 fast 路径 (cuDF) 还是 slow 路径 (pandas)。这种简单的检查提供了一种验证 GPU 是否处于活动状态以加速数据操作的简单方法。 print(pd) <module 'pandas' (ModuleAccelerator(fast=cudf, slow=pandas))> 最后,可以使用 df.info 等命令检查 cuDF DataFrame 的结构,并确认计算已通过 GPU 加速。这有助于验证操作是在 GPU 上运行,还是回退至 CPU。 train_df.info() <class 'cudf.core.dataframe.DataFrame'> RangeIndex: 1000000 entries, 0 to 999999 Data columns (total 9 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Survived 1000000 non-null int64 1 Pclass 1000000 non-null int64 2 Sex 1000000 non-null int64 3 Age 1000000 non-null float64 4 SibSp 1000000 non-null int64 5 Parch 1000000 non-null int64 6 Fare 1000000 non-null float64 7 Embarked 997755 non-null object 8 Title 1000000 non-null int64 dtypes: float64(2), int64(6), object(1) memory usage: 65.9+ MB 结束语 通过 cuDF pandas 等工具,RAPIDS 可实现从基于 CPU 的传统数据工作流到 GPU 加速处理的无缝过渡,从而显著提高性能。通过利用 %%time 、 %%timeit 等功能以及 %%cudf.pandas.profile 等分析工具,您可以测量和优化运行时效率。通过 type 、 print(pd) 和 df.info 等简单命令检查 GPU 利用率,可确保工作流程有效利用 GPU 资源。 要尝试本文中详述的数据操作,请查看 随附的 Jupyter Notebook 。 如需了解有关 GPU 加速的数据科学的更多信息,请参阅“ 10 分钟了解数据科学:在 RAPIDS cuDF 和 CuPy 库之间过渡 ”以及“ RAPIDS cuDF 即时将 pandas 在 Google Colab 上的运行速度提高 50 倍 ”。 加入我们的 GTC 2025 大会,并报名参加 Data Science Track ,获得更深入的见解。推荐的会议包括: 借助 RAPIDS 和 NVIDIA GPUs 加速数据科学 借助 RAPIDS 扩展机器学习工作流程 要积累有关 RAPIDS 的专业知识,请查看 GTC 上的以下实战研讨会: 加速聚类算法以实现超高性能 使用加速计算以交互方式分析和可视化大数据
https://developer.nvidia.com/blog/how-to-use-openusd/
How to Use OpenUSD
Universal Scene Description (OpenUSD) is an open, extensible framework and ecosystem with APIs for composing, editing, querying, rendering, collaborating, and simulating within 3D virtual worlds. This post explains how you can start using OpenUSD today with your existing assets and tools and what steps you can take to iteratively up-level your USD workflows. For an interactive, comprehensive introduction to the essential concepts, vocabulary, and best practices for OpenUSD, explore the free Learn OpenUSD learning path from the NVIDIA Deep Learning Institute . Data exchange OpenUSD does not replace your existing tools or data formats. As the name suggests, Universal Scene Description provides a common way to represent data for many 3D concepts such as meshes and PBR materials. It continues evolving to support additional concepts such as physics and B-reps. Figure 1. OpenUSD’s growing ecosystem of DCC applications To begin using OpenUSD with your existing assets, use an exporter or converter to bring your content into USD. This approach ensures swift and effective integration into new USD workflows. By using USD in this way, you can begin to enhance collaboration, performance, and interoperability across different stages of production. For more information about how to get started, see the list of applications and tools that support USD interchange . As you expand your USD workflows and want to use OpenUSD with proprietary data, you may have a need to develop your own data exchange solutions. Data exchange pipelines manage the extraction, transformation, and ingestion of content from one data format to another. There are several primary types of data exchange solutions used to integrate existing content into USD workflows: Importers and exporters : These are found in digital content creation (DCC) applications and enable you to directly import and export 3D assets to and from USD. Popular software like Rhino and Blender facilitate the importing of 3D assets, while applications like Houdini open 3D assets as native USD files.This makes it straightforward to work with USD within your existing tools. Converters: These are standalone programs or scripts designed to convert 3D files from various formats to and from USD. They offer flexibility for batch processing and automation, ensuring that your assets can be easily transformed without the need for manual intervention. USD file format plug-ins : This type of USD plug-in enables you to keep your data in its original source format while dynamically reading and translating it to USD on the fly when you open a stage. This means you can work with your assets in their native formats and still benefit from USD’s powerful features, while maintaining data integrity and workflow efficiency. Conceptual data mapping and extract, transform, load (ETL) are crucial processes for developing data exchange solutions that help ensure the seamless transfer and integration of 3D assets and data. Conceptual data mapping involves defining how data from one system or format corresponds to data in another system or format. This is particularly important when working with different DCC applications or file types, as each may have its own unique data structure and representation. ETL is a more comprehensive process used to move and integrate data from one system to another. It consists of the following main steps: Extract: Pull data from the source system or file format. The data could be 3D models, textures, animations, or any other relevant content. 
Transform: Transform extracted data to fit the target runtime requirements. This might include converting units, reformatting data structures, or filtering out irrelevant or private data. Load: Load the transformed data into the target runtime. This is sometimes called ingestion and ensures that the data is ready for use in USD-based workflows and applications. Conceptual data mapping and ETL processes are essential for ensuring that content can be effectively and accurately transferred into the USD ecosystem, maintaining its quality and functionality throughout the exchange. The OpenUSD Exchange SDK can help accelerate the development of data exchange solutions and help with authoring high-quality OpenUSD data. After getting your data into OpenUSD, the next step is understanding how to aggregate and structure it for maximum efficiency. Learn how to develop data exchange pipelines with the free Learn OpenUSD self-paced curriculum. Asset structure principles and content aggregation Well-structured assets unlock effective content aggregation and asset reusability—this is where OpenUSD truly excels. Proper asset structuring and content aggregation are key methods for efficiently organizing and combining 3D assets and scenes in complex projects. Developing an asset structure involves breaking down your 3D content into smaller, manageable components that can be easily referenced and reused. This enables you to create a hierarchical and organized structure that simplifies the management and maintenance of your assets. This approach both improves reusability and performance by enabling you to load only the necessary components when needed. With a well-designed asset structure, you can enhance collaboration as different workstreams can author data independently in separate layers, which can then be composed into a single asset. This non-destructive editing approach ensures that changes made by one team do not overwrite another team’s work. After you’ve built your smaller components, use content aggregation to combine them into a single cohesive scene. Composition arcs, such as references and payloads, support this process. Figure 2. Digital twin of a liquified natural gas carrier (credit: HD Hyundai and Siemens) These techniques help make OpenUSD a powerful tool for managing large-scale 3D projects, facilitating efficient collaboration, asset reuse, and pipeline efficiency, making it an indispensable part of any modern 3D production workflow. Learn asset structuring principles and content aggregation with the free Learn OpenUSD self-paced curriculum. Composition arcs Composition arcs are the foundational operators of OpenUSD that combine smaller units of scene description into larger units. This enables robust asset structures and content aggregation in OpenUSD workflows. This feature is one of the superpowers of OpenUSD, setting it apart from other 3D data formats and frameworks. Figure 3. Composition arcs enable the creation of complex 3D objects such as autonomous mobile robots Composition arcs are essential tools for non-destructively combining units of USD data. The composition arcs available in OpenUSD include sublayers, references, payloads, variant sets, inherits, specializes, and the most recent addition, relocates. Start with the following composition arcs to gain an understanding how you can leverage them in your own workflows: Sublayers: Enable multiple collaborators or processes to operate on a given USD context. 
The data authored in a sublayer is added to a USD context in an “include” fashion. References: Most suited for content aggregation and can be used to graft one USD context onto the hierarchy of another USD context. By referencing assets, you can maintain a single source of truth for each component, ensuring consistency and reducing redundancy. This is particularly useful in large-scale projects where the same assets are used repeatedly. Variant sets: Used to store multiple variations of a component, enhancing adaptability without duplicating data. This feature enables you to create different versions of an asset, such as different materials or configurations, all within a single, efficient structure. Use cases include specifying different textures and materials or configurations of a model, such as describing a car featuring different paint colors or with different rim styles. Figure 4. Composition of an OpenUSD scene Learning and implementing composition arcs will provide immediate value to your USD workflows. However, achieving mastery can take years due to their extensive flexibility and diverse applications. To use OpenUSD effectively, we highly recommend investing significant time in experimentation and refining your understanding of composition arc types. Learn to create composition arcs with the free Learn OpenUSD self-paced training. Get started OpenUSD offers a comprehensive set of features for efficient 3D asset management and content aggregation, making it an indispensable framework for large-scale projects. By mastering data exchange, asset structuring, and composition arcs, you can significantly improve collaboration, performance, and reusability in your 3D workflows. Build on your foundational knowledge with courses on the Learn OpenUSD learning path . Start developing with OpenUSD on the Omniverse Kit SDK and generative AI models for OpenUSD development through USD NIM microservices . For more resources on OpenUSD, explore the Alliance for OpenUSD (AOUSD) forum and visit the AOUSD website.
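To make references and variant sets concrete, the following is a minimal sketch using the OpenUSD Python API (pxr). The ./wheel.usd asset path and the paintColor variant names are illustrative only, not part of any particular pipeline.

# Minimal sketch (illustrative asset and variant names): author a reference
# and a variant set on a prim, then save the resulting layer.
from pxr import Usd

stage = Usd.Stage.CreateNew("car.usda")
car = stage.DefinePrim("/Car", "Xform")

# Reference: non-destructively graft an external asset's hierarchy onto this prim
car.GetReferences().AddReference("./wheel.usd")

# Variant set: hold alternate configurations without duplicating the asset
paint = car.GetVariantSets().AddVariantSet("paintColor")
for color in ["red", "blue"]:
    paint.AddVariant(color)
paint.SetVariantSelection("red")

stage.GetRootLayer().Save()

Opening the resulting car.usda in any USD-aware tool shows the referenced content composed under /Car, with the paintColor selection switchable at any time without touching the referenced asset.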
https://developer.nvidia.com/zh-cn/blog/how-to-use-openusd/
如何使用 OpenUSD
通用场景描述(Universal Scene Description) 是开放、可扩展的框架和生态系统,具有用于在 3D 虚拟世界中合成、编辑、查询、渲染、协作和仿真的 API。 本文介绍了如何使用现有资产和工具立即开始使用 OpenUSD,以及可以采取哪些步骤以迭代方式提升 USD 工作流程的水平。 要以交互方式全面介绍 OpenUSD 的基本概念、词汇和最佳实践,请探索 NVIDIA Deep Learning Institute 提供的 免费 Learn OpenUSD 学习路径 。 数据交换 OpenUSD 不会取代您现有的工具或数据格式。顾名思义,Universal Scene Description 提供了一种常用方法来表示网格和 PBR 材质等许多 3D 概念的数据。它不断发展,以支持其他概念,例如 physics 和 B-reps。 图 1、OpenUSD 日益壮大的 DCC 应用生态系统 要开始将 OpenUSD 与您的现有素材结合使用,请使用导出工具或转换器将您的内容导入 USD。这一方法可确保快速有效地集成到新的 USD 工作流程中。通过这种方式使用 USD,您可以开始增强不同生产阶段的协作、性能和互操作性。 有关入门指南的更多信息,请参阅 支持 USD 交换的应用程序和工具列表 。 当您扩展 USD 工作流并希望将 OpenUSD 与专有数据结合使用时,您可能需要开发自己的数据交换解决方案。 数据交换流程管理从一种数据格式到另一种数据格式的内容提取、转换和提取。有几种主要类型的数据交换解决方案用于将现有内容集成到 USD 工作流中: 导入和导出工具:这些工具可在数字内容创建 (DCC) 应用中找到,使您能够直接向 USD 导入和导出 3D 素材。Rhino 和 Blender 等热门软件有助于导入 3D 素材,而 Houdini 等应用则将 3D 素材作为原生 USD 文件打开。这使得您可以在现有工具中轻松使用 USD。 转换器: 这些是独立的程序或脚本,旨在将不同格式的 3D 文件转换为 USD 格式或从 USD 格式转换为 3D 文件。它们为批量处理和自动化提供了灵活性,可确保您的资产可以轻松转换,而无需人工干预。 USD 文件格式插件 :通过这种类型的 USD 插件,您可以保留原始源格式的数据,同时在打开舞台时动态读取数据并将其转换为 USD。这意味着,您可以使用原生格式处理资产,同时仍能从 USD 的强大功能中受益,同时保持数据完整性和工作流程效率。 概念数据映射以及提取、转换、加载(ETL)是开发数据交换解决方案的关键流程,有助于确保无缝传输和集成 3D 资产和数据。 概念数据映射涉及定义一个系统或格式的数据与另一个系统或格式中的数据对应的方式 。在使用不同的 DCC 应用或文件类型时,这一点尤为重要,因为每个应用或文件类型都可能有自己独特的数据结构和表示形式。 ETL 是一个更全面的过程,用于将数据从一个系统迁移到另一个系统并进行集成。它由以下主要步骤组成: 提取: 从源系统或文件格式中提取数据。数据可以是 3D 模型、纹理、动画或任何其他相关内容。 转换 :转换提取的数据以满足目标运行时需求。这可能包括转换单位、重新格式化数据结构,或过滤掉无关紧要或私有数据。 加载: 将转换后的数据加载到目标运行时。 这有时称为 ingestion ,可确保数据可随时用于基于 USD 的工作流和应用。 概念数据映射和 ETL 流程对于确保内容可以有效准确地传输到 USD 生态系统,并在整个交换过程中保持其质量和功能至关重要。 OpenUSD Exchange SDK 有助于加速数据交换解决方案的开发,并有助于创作高质量的 OpenUSD 数据。 在将数据导入 OpenUSD 后,下一步是了解如何聚合和构建数据,以更大限度地提高效率。 了解如何通过免费的 Learn OpenUSD 自定进度课程 开发数据交换工作流 。 资产结构原则和内容聚合 结构良好的资产可实现有效的内容聚合和资产可重用性,这正是 OpenUSD 真正擅长的地方。在复杂的项目中,正确的素材结构和内容聚合是高效组织和合并 3D 素材和场景的关键方法。 开发素材结构需要将 3D 内容分解为更小、可管理的组件,以便于引用和重复使用。这样,您就可以创建一个层次化且井然有序的结构,简化资产的管理和维护。这种方法允许您在需要时仅加载必要的组件,从而提高可复用性和性能。 借助精心设计的资产结构,您可以增强协作,因为不同的工作流可以在不同的层中独立创作数据,然后将数据组合成单个资产。这种非破坏性编辑方法可确保一个团队所做的更改不会覆盖另一个团队的工作。 构建完较小的组件后,请使用内容聚合功能将其合并为一个连贯一致的场景。合成操作符 (例如 references 和 payloads) 支持此过程。 图 2 、 液化天然气运输船的数字孪生 (图片来源:HD Hyundai 和 Siemens) 这些技术有助于使 OpenUSD 成为管理大型 3D 项目的强大工具,促进高效协作、资产重复使用和工作流效率,使其成为任何现代 3D 生产工作流不可或缺的一部分。 通过免费的 Learn OpenUSD 自定进度课程,学习 资产结构化原则和内容聚合 。 合成操作符 合成操作符是 OpenUSD 的基础运算符,可将场景描述的较小单元组合成较大的单元。这可在 OpenUSD 工作流中实现稳健的资产结构和内容聚合。 此功能是 OpenUSD 的超能力之一,使其有别于其他 3D 数据格式和框架。 图 3、合成操作符支持创建复杂的 3D 对象,例如自动移动机器人 合成操作符是以非破坏性方式组合 USD 数据单位的必要工具。OpenUSD 中提供的合成操作符包括子层、引用、有效载荷、变体集、继承、专门化以及最近添加的重新定位。 从以下合成操作符开始,了解如何在自己的工作流程中利用它们: 子层 :支持多个协作者或进程在给定的 USD 环境中运行。子层中创作的数据以“include”方式添加到 USD 上下文中。 引用:最适合内容聚合,可用于将一个 USD 上下文移植到另一个 USD 上下文的层次结构中。通过引用资产,您可以为每个组件维护单一的事实来源,从而确保一致性并减少冗余。这在重复使用相同资产的大型项目中特别有用。 变体集 :用于存储组件的多个变体,在不复制数据的情况下增强适应性。此功能使您能够在单个高效结构中创建素材的不同版本,例如不同的材质或配置。用例包括指定模型的不同纹理、材质或配置,例如描述具有不同涂料颜色或不同轮缘风格的汽车。 图 4、OpenUSD 场景的合成 学习和实施合成操作符将为您的 USD 工作流提供即时价值。然而,由于其广泛的灵活性和多样化的应用,掌握可能需要数年时间。为了有效使用 OpenUSD,我们强烈建议您投入大量时间进行实验,并深入了解合成弧类型。 通过免费的 Learn OpenUSD 自定进度培训,学习 创建 composition arcs 。 开始使用 OpenUSD 为高效的 3D 资产管理和内容聚合提供了一整套功能,使其成为大型项目不可或缺的框架。通过掌握数据交换、资产结构化和合成操作符,您可以显著改善 3D 工作流中的协作、性能和可复用性。 通过学习 Learn OpenUSD 学习路径上的课程,在基础知识的基础上进行构建。 开始在 Omniverse Kit SDK 上使用 OpenUSD 进行开发,并使用生成式 AI 模型通过 USD NIM 微服务 进行 OpenUSD 开发。 有关 OpenUSD 的更多资源,请访问 Alliance for OpenUSD ( AOUSD ) 论坛 并访问 AOUSD 网站。
https://developer.nvidia.com/blog/nvidia-jetpack-6-2-brings-super-mode-to-nvidia-jetson-orin-nano-and-jetson-orin-nx-modules/
NVIDIA JetPack 6.2 Brings Super Mode to NVIDIA Jetson Orin Nano and Jetson Orin NX Modules
The introduction of the NVIDIA Jetson Orin Nano Super Developer Kit sparked a new age of generative AI for small edge devices. The new Super Mode delivered an unprecedented generative AI performance boost of up to 1.7x on the developer kit, making it the most affordable generative AI supercomputer. JetPack 6.2 is now available to support Super Mode for Jetson Orin Nano and Jetson Orin NX production modules, delivering up to 2x higher generative AI model performance. Now you can unlock greater value and lower total cost of ownership for new and existing robotics and edge AI applications. This post discusses the details of Super Mode, including the new power modes, benchmarks for popular generative AI models on the Jetson Orin Nano and Orin NX modules, updates to the documentation, and insights into NPN partners supporting Super Mode. New reference power modes on the Jetson Orin Nano and Jetson Orin NX series JetPack 6.2 enables the power boost on the Jetson Orin Nano and Jetson Orin NX series by unlocking higher frequencies on the GPU, DLA, memory, and CPU clocks.
MODULE | EXISTING REFERENCE POWER MODES (available with existing flashing configs) | NEW POWER MODES (available only with new flashing configs)
NVIDIA Jetson Orin Nano 4GB | 7W, 10W | 10W, 25W, MAXN SUPER
NVIDIA Jetson Orin Nano 8GB | 7W, 15W | 15W, 25W, MAXN SUPER
NVIDIA Jetson Orin NX 8GB | 10W, 15W, 20W, MAXN | 10W, 15W, 20W, 40W, MAXN SUPER
NVIDIA Jetson Orin NX 16GB | 10W, 15W, 25W, MAXN | 10W, 15W, 25W, 40W, MAXN SUPER
Table 1. New reference power modes on the Jetson Orin Nano and Jetson Orin NX modules
Jetson Orin Nano modules now have a 25W mode and a new uncapped MAXN SUPER mode. Similarly, Jetson Orin NX modules can now use a new, higher 40W reference power mode as well as an uncapped MAXN SUPER mode. MAXN SUPER is an uncapped power mode that enables the highest number of cores and the highest clock frequencies for the CPU, GPU, DLA, PVA, and SOC engines. If the total module power exceeds the thermal design power (TDP) budget in this mode, the module is throttled to a lower frequency, which delivers lower performance while staying within the thermal budget. We strongly recommend building your own custom power mode to find the right balance between power consumption, thermal stability, and performance for your application and needs. Table 2 compares the detailed specifications of the Jetson Orin Nano 4GB and 8GB and the Jetson Orin NX 8GB and 16GB in their original and Super Mode configurations.
ORIN NANO 4GB​ ORIN NANO 4GB (SUPER) ​ ORIN NANO 8GB​ ORIN NANO 8GB ​ (SUPER) ​ ORIN NX 8GB​ ORIN NX 8GB​ (SUPER)​ ORIN NX 16GB​ ORIN NX 16GB​ (SUPER)​ PEAK AI PERF ​ INT8 ​ 20 TOPS (Sparse)​ 10 TOPS (Dense)​ 34 TOPS (Sparse) ​ 17 TOPS (Dense) ​ 40 TOPS (Sparse)​ 20 TOPS (Dense)​ 67 TOPS (Sparse) ​ 33 TOPS (Dense) ​ 70 TOPS (Sparse)​ 35 TOPS (Dense)​ 117 TOPS (Sparse) ​ 58 TOPS (Dense) ​ 100 TOPS (Sparse) ​50 TOPS (Dense)​ 157 TOPS (Sparse) ​ 78 TOPS (Dense) ​ NVIDIA AMPERE GPU 512 CUDA Cores ​ 16 Tensor Cores​ 625 MHz 20/10 INT8 TOPs (S/D) 5 FP16 TFLOPs 512 CUDA Cores ​ 16 Tensor Cores ​ 1020 MHz 34/17 INT8 TOPs (S/D) 8.5 FP16 TFLOPs 1024 CUDA Cores​ 32 Tensor Cores ​625 MHz 40/20 INT8 TOPs (S/D) 10 FP16 TFLOPs 1024 CUDA Cores​ 32 Tensor Cores​ 1020 MHz 67/33  INT8 TOPs (S/D) 17 FP16 TFLOPs 1024 CUDA Cores ​ 32 Tensor Cores 765 MHz 50/25 INT8 TOPs (S/D) ​ 13 FP16 TFLOPs 1024 CUDA Cores​ 32 Tensor Cores​ 1173 MHz 77/38 INT8 TOPs (S/D) 19 FP16 TFLOPs 1024 CUDA Cores ​ 32 Tensor Cores​ 918 MHz 60/30 INT8 TOPs (S/D) 15 FP16 TFLOPs 1024 CUDA Cores​ 32 Tensor Cores ​ 1173 MHz 77/38 INT8 TOPs (S/D) 19 FP16 TFLOPs CPU ​ 6X A78​ 1.5 GHz​ 6X A78​ 1.7 GHz ​ 6X A78​ 1.5 GHz​ 6X A78​ 1.7 GHz ​ 6X A78​ 2.0 GHz​ 6X A78​ 2.0 GHz​ 8X A78​2 .0 GHz​ 8X A78​ 2.0 GHz​ DLA (S/D) ​ NA​ NA​ NA​ NA​ 20/10 INT8 TOPs​ 40/20 INT8 TOPs ​ 40/20 INT8 TOPs​ 80/40 INT8 TOPs ​ DRAM BW ​ 34 GB/s​ 51 GB/s ​ 68 GB/s​ 102 GB/s ​ 102 GB/s​ 102 GB/s​ 102 GB/s​ 102 GB/s​ REFERENCE MODULE POWER ​ 7W | 10W​ 7W | 25W |​ MAXN SUPER 7W | 15W​ 15W | 25W | MAXN SUPER​ 10W | 15W | 20W | MAXN​ 10W | 15W | 20W | 40W | MAXN SUPER​ 10W | 15W | 25W​ | MAXN 10W | 15W | 25W | 40W ​ | MAXN SUPER Table 2. Original specs for Jetson Orin Nano and Jetson Orin NX and the specs in Super Mode While using the new power modes, ensure that your product’s existing or new thermal design can accommodate the new specifications with the power modes. For more information, see the updated Thermal Design Guide . Updated Power Estimator Tool The Power Estimator Tool is a powerful tool provided by NVIDIA to create custom power profiles and nvpmodel configuration files by modifying system parameters such as the cores, maximum frequency and load levels on the GPU, CPU, DLA, and so on. The tool provides an estimated power consumption with various settings and can be used to create optimal parameters settings to get the desired balance between performance and power consumption. We have updated the Power Estimator Tool with Super Mode. We strongly recommend that you use the Power Estimator Tool and verify in practice before deploying with high-performance applications. Boost performance on popular generative AI models With the introduction of Super Mode with JetPack 6.2, the Jetson Orin Nano and Jetson Orin NX modules deliver up to a 2x inference performance boost. We benchmarked the most popular large language models (LLMs), vision language models (VLMs) and vision transformers (ViTs). Large language models The following chart and tables show the Super Mode performance benchmark for popular LLMs such as Llama3.1 8B, Qwen2.5 7B, and Gemma2 2B. Figure 1. Performance improvements for LLMs using Super Mode DNR means that memory on the module was not sufficient to run the specific model. Model performance will be influenced by throttling behavior. In the following tables, LLM generation performance (tokens per second) was measured with INT4 quantization using MLC API. Table 3 shows the LLM performance gain on Jetson Orin Nano 4GB with JetPack 6.2. 
Model Orin Nano 8GB (original) Orin Nano 8GB (Super Mode) Perf Gain (x) Gemma 2 2B 11.40 18.60 1.64 SmolLM2 1.7B 23.00 35.80 1.56 Table 3. Benchmark performance in tokens/sec for popular LLMs on Jetson Orin Nano 4GB Table 4 shows the LLM performance gain on Jetson Orin Nano 8GB with JetPack 6.2. Model Orin Nano 8GB (original) Orin Nano 8GB (Super Mode) Perf Gain (x) Llama 3.1 8B 14.00 19.10 1.37 Llama 3.2 3B 27.70 43.10 1.55 Qwen 2.5 7B 14.20 21.80 1.53 Gemma 2 2B 21.5 35.0 1.63 Gemma 2 9B 7.20 9.20 1.28 Phi-3.5 3.8B 24.70 38.10 1.54 SmolLM2 1.7B 41.00 64.50 1.57 Table 4. Benchmark performance in tokens/sec for popular LLMs on Jetson Orin Nano 8GB Table 5 shows the LLM performance gain on Jetson Orin NX 8GB with JetPack 6.2. Model Orin NX 8GB (original) Orin NX 8GB (Super Mode) Perf Gain (x) Llama 3.1 8B 15.90 23.10 1.46 Llama 3.2 3B 34.50 46.50 1.35 Qwen 2.5 7B 17.10 23.80 1.39 Gemma 2 2B 26.60 39.30 1.48 Gemma 2 9B 8.80 13.38 1.52 Phi-3.5 3.8B 30.80 41.30 1.34 SmolLM2 1.7B 51.50 69.80 1.35 Table 5. Benchmark performance in tokens/sec for popular LLMs on Jetson Orin NX 8GB Table 6 shows the LLM performance gain on Jetson Orin NX 16GB with JetPack 6.2. Model Orin NX 16GB (original) Orin NX 16GB (Super Mode) Perf Gain (x) Llama 3.1 8B 20.50 22.80 1.11 Llama 3.2 3B 40.40 45.80 1.13 Qwen 2.5 7B 20.80 23.50 1.13 Gemma 2 2B 31.60 39.00 1.23 Gemma 2 9B 10.56 13.26 1.26 Phi-3.5 3.8B 35.90 40.90 1.14 SmolLM2 1.7B 59.50 68.80 1.16 Table 6. Benchmark performance in tokens/sec for popular LLMs on Jetson Orin NX and Orin Nano modules Vision language models The following chart and tables show the Super Mode performance benchmark for popular VLMs such as VILA1.5 8B, LLAVA1.6 7B, and Qwen2 VL 2B. Figure 2. Performance improvements of VLMs when run using Super Mode DNR means that memory on the module was not sufficient to run the specific model. Model performance will be influenced by throttling behavior. Table 7 shows the VLM performance gain on Jetson Orin Nano 4GB with JetPack 6.2. Model Orin Nano 4GB (original) Orin Nano 4GB (Super Mode) Perf Gain (x) PaliGemma2 3B 7.2 11.2 1.56 Table 7. Benchmark performance in tokens/sec for popular VLMs on Jetson Orin Nano 4GB Table 8 shows the VLM performance gain on Jetson Orin Nano 8GB with JetPack 6.2. Model Orin NX 16GB (original) Orin NX 16GB (Super Mode) Perf Gain (x) VILA 1.5 3B 0.7 1.1 1.51 VILA 1.5 8B 0.6 0.8 1.45 LLAVA 1.6 7B 0.4 0.6 1.38 Qwen2 VL 2B 2.8 4.4 1.57 InternVL2.5 4B 2.5 5.1 2.04 PaliGemma2 3B 13.7 21.6 1.58 SmolVLM 2B 8.1 12.9 1.59 Table 8. Benchmark performance in tokens/sec for popular VLMs on Jetson Orin Nano 8GB Table 9 shows the VLM performance gain on Jetson Orin NX 8GB with JetPack 6.2. Model Orin NX 16GB (original) Orin NX 16GB (Super Mode) Perf Gain (x) VILA 1.5 3B 0.8 1 1.25 VILA 1.5 8B 0.7 1.04 1.50 LLAVA 1.6 7B 0.5 1.2 2.54 Qwen2 VL 2B 3.4 4.8 1.41 InternVL2.5 4B 3 4.1 1.37 PaliGemma2 3B 17.1 23.9 1.40 SmolVLM 2B 9.7 14.4 1.48 Table 9. Benchmark performance in tokens/sec for popular VLMs on Jetson Orin NX 16GB Table 10 shows the VLM performance gain on Jetson Orin NX 16GB with JetPack 6.2. Model Orin NX 16GB (original) Orin NX 16GB (Super Mode) Perf Gain (x) VILA 1.5 3B 1 1.3 1.23 VILA 1.5 8B 0.8 1 1.25 LLAVA 1.6 7B 0.6 0.7 1.07 Qwen2 VL 2B 4 4.8 1.20 InternVL2.5 4B 2.8 4.4 1.57 PaliGemma2 3B 20 23.8 1.19 SmolVLM 2B 11.7 14.3 1.22 Table 10. 
Benchmark performance in tokens/sec for popular VLMs on Jetson Orin NX and Orin Nano modules All VILA and LLAVA models were run with INT4 precision using MLC while the rest of the models were run in FP4 precision with Hugging Face Transformers. Vision transformers The following chart and tables show the Super Mode performance benchmark for popular ViTs such as CLIP, DINO, and SAM2. Figure 3. Performance improvements of ViTs when run using Super Mode DNR means that memory on the module was not sufficient to run the specific model. Model performance will be influenced by throttling behavior. Table 11 shows the ViT performance gain on Jetson Orin Nano 4GB with JetPack 6.2. Model Orin Nano 4GB (original) Orin Nano 4GB (Super Mode) Perf Gain (x) clip-vit-base-patch32 126.8 189.5 1.49 clip-vit-base-patch16 63.2 112.4 1.78 DINOv2-base-patch14 49.3 79.3 1.61 SAM2 base 2.5 3.8 1.54 vit-base-patch16-224 62.4 103.3 1.66 Table 11. Benchmark performance in tokens/sec for popular ViTs on Jetson Orin Nano 4GB Table 12 shows the ViT performance gain on Jetson Orin Nano 8GB with JetPack 6.2. Model Orin Nano 8GB (original) Orin Nano 8GB (Super Mode) Perf Gain (x) clip-vit-base-patch32 196 314 1.60 clip-vit-base-patch16 95 161 1.69 DINOv2-base-patch14 75 126 1.68 SAM2 base 4.4 6.3 1.43 Grounding DINO 4.1 6.2 1.52 vit-base-patch16-224 98 158 1.61 vit-base-patch32-224 171 273 1.60 Table 12. Benchmark performance in tokens/sec for popular ViTs on Jetson Orin Nano 8GB Table 13 shows the ViT performance gain on Jetson Orin NX 8GB with JetPack 6.2. Model Orin NX 8GB (original) Orin NX 8GB (Super Mode) Perf Gain (x) clip-vit-base-patch32 234.0 361.1 1.54 clip-vit-base-patch16 101.7 204.3 2.01 DINOv2-base-patch14 81.4 160.3 1.97 SAM2 base 3.9 7.4 1.92 Grounding DINO 4.2 7.4 1.75 vit-base-patch16-224 98.6 192.5 1.95 vit-base-patch32-224 193.1 313.5 1.62 Table 13. Benchmark performance in tokens/sec for popular ViTs on Jetson Orin NX 8GB Table 14 shows the ViT performance gain on Jetson Orin NX 16GB with JetPack 6.2. Model Orin NX 16GB (original) Orin NX 16GB (Super Mode) Perf Gain (x) clip-vit-base-patch32 323.2 356.7 1.10 clip-vit-base-patch16 163.5 193.6 1.18 DINOv2-base-patch14 127.5 159.8 1.25 SAM2 base 6.2 7.3 1.18 Grounding DINO 6.2 7.2 1.16 vit-base-patch16-224 158.6 190.2 1.20 vit-base-patch32-224 281.2 309.5 1.10 Table 14. Benchmark performance in frames/sec for popular ViTs on Jetson Orin NX 16GB All ViT models were run with FP16 precision using NVIDIA TensorRT and measurements are in FPS. Getting started on NVIDIA Jetson Orin Nano and Jetson Orin NX with JetPack 6.2 The NVIDIA Jetson ecosystem provides various ways for you to flash the developer kit and production modules with the JetPack image. To install JetPack 6.2 on the Jetson Orin Nano Developer Kit or the modules, use one of the following methods: SD-Card image from the JetPack SDK page. SDK Manager for flashing per the download and install instructions . Flashing script for bootloader, kernel, and root file system, per Flashing Support . New flashing configuration The new power modes are only available with the new flashing configuration. The default flashing configuration has not changed. To enable the new power modes, you must use the new flashing configuration while flashing. Here’s the new flashing configuration to be used with flashing: jetson-orin-nano-devkit-super.conf After flashing or updating to JetPack 6.2, run the following command to start the newly available Super Mode. 
MAXN SUPER mode on Jetson Orin Nano Modules: sudo nvpmodel -m 2 MAXN SUPER mode on Jetson Orin NX Modules: sudo nvpmodel -m 0 You can also select the MAXN SUPER and other power modes from the power mode menu at the top-right corner of the page. Figure 4. Power mode selection menu s Jetson AI Lab The Jetson AI Lab is the NVIDIA hub for exploring and experimenting with generative AI technologies optimized for edge devices. It supports developers and provides a collaborative community with nearly 50 tutorials, prebuilt containers, and resources for deploying on-device LLMs, SLMs, VLMs, diffusion policies, and speech models using optimized inferencing infrastructures. By simplifying access to cutting-edge AI tools, the lab empowers developers of all levels to innovate and deploy generative AI locally, advancing open-source edge AI and robot learning. Dive into generative AI with ease using these easy-to-follow tutorials for your developer kit powered by JetPack 6.2: Ollama with Open Web UI : Introductory LLM application example NanoOWL : Interactive demo with cutting-edge ViT model LeRobot : Physical AI example with HuggingFace LeRobot Updated documentation: Datasheets and design guides With the newer performance boost, the following resources have been updated and can be downloaded from the Jetson Download Center : Jetson Orin NX Module datasheet Jetson Orin Nano Module datasheet Jetson Orin NX Series and Jetson Orin Nano Series Design Guide Thermal Design Guide Jetson ecosystem partners ready for Super Mode To support customer deployments, the Jetson ecosystem partners have enhanced their solutions to support this boosted performance. Category Jetson ecosystem partners ISV solutions DeepEdge Edge Impulse RidgeRun Ultralytics Hardware system partners AAEON Advantech Aetina AIMobile ASUSTek Axiomtek Connect Tech Seeed Studio Syslogic Vecow Yuan High-Tech Thermal solutions Advanced Thermal Solutions Frore Systems NVIDIA Jetson Orin lifecycle and roadmap Due to the growing customer demand for Jetson Orin, NVIDIA recently announced the extension of the product lifecycle of Jetson Orin through 2032. With this performance boost, the Jetson Orin Nano and Orin NX series are the ideal platforms for both current and future models. The upcoming JetPack 5.1.5 will also enable Super Mode for the Jetson Orin NX and Jetson Orin Nano modules. Developers and customers who develop with JetPack 5 will benefit from the performance boost. Figure 5. JetPack software roadmap Boost your application performance with JetPack 6.2 JetPack 6.2 is a groundbreaking release. It delivers an astonishing 2x boost in inference performance on existing Jetson modules, without any added cost. This upgrade is a must-have for Jetson developers and customers looking to supercharge their applications. Upgrade to JetPack 6.2 today and unleash the full potential of your Jetson platform. Stay up to date by subscribing to our newsletter , and follow NVIDIA Robotics on LinkedIn , Instagram , X , and Facebook . For more information, explore our documentation or join the Robotics community on our developer forums , Discord , and YouTube channels.
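For scripted deployments, the active power mode can also be set and verified programmatically. The following is a minimal sketch, not from the original post, that wraps the same nvpmodel tool shown above; mode index 2 corresponds to MAXN SUPER on Jetson Orin Nano modules, as noted earlier.

# Minimal sketch (assumes a Jetson Orin Nano module flashed with the new
# configuration): switch to MAXN SUPER and print the active power mode.
import subprocess

subprocess.run(["sudo", "nvpmodel", "-m", "2"], check=True)  # MAXN SUPER on Orin Nano
query = subprocess.run(["sudo", "nvpmodel", "-q"], capture_output=True, text=True)
print(query.stdout.strip())  # expect the name of the active mode in the output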
https://developer.nvidia.com/zh-cn/blog/nvidia-jetpack-6-2-brings-super-mode-to-nvidia-jetson-orin-nano-and-jetson-orin-nx-modules/
NVIDIA JetPack 6.2 为 NVIDIA Jetson Orin Nano 和 Jetson Orin NX 模块引入超级模式
NVIDIA Jetson Orin Nano 超级开发者套件 的推出为小型边缘设备开启了 生成式 AI 的新时代。新的 超级模式 在开发者套件上实现了前所未有的生成式 AI 性能提升,最高可达 1.7 倍,使其成为最经济实惠的生成式 AI 超级计算机。 JetPack 6.2 现已支持 Jetson Orin Nano 和 Jetson Orin NX 生产模组的超级模式,可将生成式 AI 模型的性能提升高达 2 倍。现在,您可以为新的和现有的机器人和边缘 AI 应用释放更多价值并降低总体拥有成本。 本文讨论了 Super Mode 的详细信息,包括新的功率模式、Jetson Orin Nano 和 Orin NX 模组上热门生成式 AI 模型的基准测试、文档更新,以及对支持 Super Mode 的 NPN 合作伙伴的见解。 Jetson Orin Nano 和 Jetson Orin NX 系列上的新参考功率模式 JetPack 6.2 通过在 GPU、DLA 内存和 CPU 时钟上解锁更高的频率,提升 Jetson Orin Nano 和 Jetson Orin NX 系列的功耗。 模块 现有参考电源模式 ( 适用于现有刷写配置 ) NEW POWER MODES (仅适用于新的刷写配置) NVIDIA Jetson Orin Nano 4GB 7 瓦、10 瓦 10 瓦、25 瓦、 最大值 超级 (SUPER) NVIDIA Jetson Orin Nano 8GB 7 瓦、15 瓦 15 瓦、25 瓦、 最大值 超级 (SUPER) NVIDIA Jetson Orin NX 8GB 10 瓦、15 瓦、20 瓦、MAXN 10 瓦、15 瓦、20 瓦、40 瓦, 最大值 超级 (SUPER) NVIDIA Jetson Orin NX 16GB 10 瓦、15 瓦、25 瓦、MAXN 10 瓦、15 瓦、25 瓦、40 瓦, 最大值 超级 (SUPER) 表 1、Jetson Orin Nano 和 Jetson Orin NX 模组上的新参考功率模式 Jetson Orin Nano 模组现提供 25W 模式和全新的无上限 MAXN SUPER 模式。同样,Jetson Orin NX 模组现在可以使用更高的 40W 参考功率模式以及不设上限的 MAXN SUPER 模式。 MAXN SUPER 是不设上限的功率模式,可为 CPU、GPU、DLA、PVA 和 SOC 引擎提供最高数量的核心数和时钟频率。在此模式下,如果模组总功率超过热设计功率(TDP)预算,系统会将模组调低频率,从而在保持热预算的同时降低性能。 我们强烈建议您构建自己的自定义功率模式,以便根据您的应用和需求,在功耗或热稳定性与性能之间找到适当的平衡。 表 2 比较了 Jetson Orin Nano 4GB 和 8GB 以及 Jetson Orin NX 8GB 和 16GB 在原始模式和 Super Mode 下的详细规格。 Orin Nano 4GB* Orin Nano 4GB (SUPER)* Orin Nano 8GB* Orin Nano 8GB* (SUPER)* ORIN NX 8GB* ORIN NX 8GB* (SUPER)* Orin NX 16GB* Orin NX 16GB* (SUPER)* AI 性能峰值 * INT8 * 20 TOPS (Sparse) 10 TOPS (Dense) 34 TOPS (Sparse) 17 TOPS (Dense) 40 TOPS (Sparse) 20 TOPS (Dense) 67 TOPS (Sparse) 33 TOPS (Dense) 70 TOPS (Sparse) 35 TOPS (Dense) 117 TOPS (Sparse) 58 TOPS (Dense) 100 TOPS (Sparse) 50 TOPS (Dense) 157 TOPS (Sparse) 78 TOPS (Dense) NVIDIA Ampere GPU 512 个 CUDA Cores 16 个 Tensor Cores 625 MHz 20/10 INT8 TOPs (S/D) 5 FP16 TFLOPs 512 个 CUDA Cores 16 个 Tensor Cores 1020 MHz 34/17 INT8 TOPS (S/D) 8.5 FP16 TFLOPS 1024 个 CUDA Cores 32 个 Tensor Cores 625 MHz 40/20 INT8 TOPs (S/D) 10 FP16 TFLOPs 1024 个 CUDA Cores 32 个 Tensor Cores 1020 MHz 67/33 INT8 TOPs (S/D) 17 FP16 TFLOPs 1024 个 CUDA Cores 32 个 Tensor Cores 765 MHz 50/25 INT8 TOPs (S/D) 13 FP16 TFLOPs 1024 个 CUDA Cores 32 个 Tensor Cores 1173 MHz 77/38 INT8 TOPs (S/D) 19 FP16 TFLOPs 1024 个 CUDA Cores 32 个 Tensor Cores 918 MHz 60/30 INT8 TOPs (S/D) 15 FP16 TFLOPs 1024 个 CUDA Cores 32 个 Tensor Cores 1173 MHz 77/38 INT8 TOPs (S/D) 19 FP16 TFLOPs CPU * 6 块 A78+ 1.5 GHz* 6 块 A78+ 1.7 GHz * 6 块 A78+ 1.5 GHz* 6 块 A78+ 1.7 GHz * 6 块 A78+ 2.0 GHz* 6 块 A78+ 2.0 GHz* 8 块 A78+2 .0 GHz 以上 8 块 A78+ 2.0 GHz* DLA (S/D) * NA* NA* NA* NA* 20/10 INT8 TOPS* 40/20 INT8 TOPS * 40/20 INT8 TOPS* 80/40 INT8 TOPS * DRAM 重制版 * 34 GB/秒* 51 GB/秒 * 68 GB/秒* 102 GB/秒 * 102 GB/秒* 102 GB/秒* 102 GB/秒* 102 GB/秒* 模组功耗 * 7 瓦 10 瓦以上 7 瓦 10 瓦 25 瓦 * 7 瓦 15 瓦以上 7 瓦 15 瓦 25 瓦 * 10 瓦 15 瓦 20 瓦以上 10 瓦 15 瓦 25 瓦 40 瓦 * 10 瓦 15 瓦 25 瓦以上 10 瓦 15 瓦 25 瓦 40 瓦 * 表 2、Jetson Orin Nano 和 Jetson Orin NX 的原始规格以及超级模式下的规格 在使用新的功率模式时,请确保您产品的现有或新的散热设计能够适应新的功率模式规格。有关更多信息,请参阅更新的 Thermal Design Guide 。 更新了功耗估算器工具 功耗估算器工具是 NVIDIA 提供的一款功能强大的工具,可通过修改 GPU、CPU、DLA 上的核心、最大频率和负载级别等系统参数来创建自定义功耗配置文件和 nvpmodel 配置文件。该工具通过各种设置提供估计功耗,并可用于创建最佳参数设置,以在性能和功耗之间实现所需的平衡。 我们更新了 Power Estimator Tool 的超级模式。我们强烈建议您使用 Power Estimator Tool 并在实践中进行验证,然后再通过高性能应用程序进行部署。 提升热门生成式 AI 模型的性能 随着 JetPack 6.2 超级模式的引入,Jetson Orin Nano 和 Jetson Orin NX 模组可将推理性能提升高达 2 倍。我们对热门的大语言模型 (LLMs)、视觉语言模型 (VLMs) 和视觉转换器 (ViTs) 进行了基准测试。 大语言模型 以下图表和表格显示了 Llama3.1 8B、Qwen2.5 7B 和 Gemma2 2B 等热门 LLM 的超模性能基准测试。 图 1、使用 Super Mode 可提升 LLM 的性能 DNR 意味着模块上的内存不足以运行特定模型。模型性能将受到节流行为的影响。 在下表中,LLM 生成性能(每秒令牌数)是使用 MLC API 
通过 INT4 量化测量的。 表 3 显示了搭载 JetPack 6.2 的 Jetson Orin Nano 4GB 上的 LLM 性能提升。 模型 Orin Nano 8GB (原始版本) Orin Nano 8GB (超级模式) 性能增益 (倍) Gemma 2 2B 11.40 18.60 欧元 1.64 SmolLM2 1.7 B 23.00 35.80 1.56 表 3、对 Jetson Orin Nano 4GB 上的热门 LLM 进行性能基准测试,以令牌/秒为单位 表 4 显示了搭载 JetPack 6.2 的 Jetson Orin Nano 8GB 上的 LLM 性能提升。 模型 Orin Nano 8GB (原始版本) Orin Nano 8GB (超级模式) 性能增益 (倍) Llama 3.1 8B 14.00 19.10 1.37 Llama 3.2 3B 27.70 43.10 1.55 Qwen 2.5 7B 14.20 21.80 1.53 Gemma 2 2B 21.5 35.0 1.63 Gemma 2 9B 7.20 9.20 1.28 Phi-3.5 38 B 24.70 38.10 1.54 SmolLM2 1.7 B 41.00 64.50 1.57 表 4、对 Jetson Orin Nano 8GB 上的热门 LLM 进行性能基准测试,以令牌/秒为单位 表 5 显示了搭载 JetPack 6.2 的 Jetson Orin NX 8GB 上的 LLM 性能提升。 模型 Orin NX 8GB (原始版本) Orin NX 8GB (超级模式) 性能增益 (倍) Llama 3.1 8B 15.90 23.10 1.46 Llama 3.2 3B 34.50 46.50 1.35 Qwen 2.5 7B 17.10 23.80 1.39 Gemma 2 2B 26.60 39.30 1.48 Gemma 2 9B 8.80 13.38 1.52 Phi-3.5 38 B 30.80 评分:41.30 1.34 SmolLM2 1.7 B 51.50 69.80 1.35 表 5、对 Jetson Orin NX 8GB 上的热门 LLM 进行性能基准测试,以令牌/秒为单位 表 6 显示了搭载 JetPack 6.2 的 Jetson Orin NX 16GB 上的 LLM 性能提升。 模型 Orin NX 16GB (原始版本) Orin NX 16GB (超级模式) 性能增益 (倍) Llama 3.1 8B 20.50 22.80 1.11 Llama 3.2 3B 40.40 45.80 1.13 Qwen 2.5 7B 20.80 23.50 1.13 Gemma 2 2B 31.60 欧元 39.00 1.23 Gemma 2 9B 10.56 13.26 1.26 Phi-3.5 38 B 35.90 40.90 1.14 SmolLM2 1.7 B 59.50 68.80 1.16 表 6、对 Jetson Orin NX 和 Orin Nano 模组上的热门 LLM 进行性能基准测试,以令牌/秒为单位 视觉语言模型 以下图表和表格显示了热门 VLM (例如 VILA1.5 8B、LLAVA1.6 7B 和 Qwen2 VL 2B) 的超模性能基准测试。 图 2、使用 Super Mode 运行时 VLM 的性能提升 DNR 意味着模块上的内存不足以运行特定模型。模型性能将受到节流行为的影响。 表 7 显示了搭载 JetPack 6.2 的 Jetson Orin Nano 4GB 的 VLM 性能提升情况。 模型 Orin Nano 4GB (原始) Orin Nano 4GB (超级模式) 性能增益 (倍) PaliGemma2 3B 7.2 11.2 1.56 表 7、对 Jetson Orin Nano 4GB 上的热门 VLM 进行性能基准测试(tokens/sec) 表 8 显示了搭载 JetPack 6.2 的 Jetson Orin Nano 8GB 上的 VLM 性能提升。 模型 Orin NX 16GB (原始版本) Orin NX 16GB (超级模式) 性能增益 (倍) VILA 1.5 3B 0.7 1.1 1.51 VILA 1.5 8B 0.6 0.8 1.45 LLAVA 1.6 7B 0.4 0.6 1.38 Qwen2 VL 2B 2.8 4.4 1.57 InterVL2.5 4B 2.5 5.1 2.04 PaliGemma2 3B 13.7 21.6% 1.58 SmolVLM 2B 8.1 12.9 1.59 表 8、对 Jetson Orin Nano 8GB 上的热门 VLM 进行性能基准测试(tokens/sec) 表 9 显示搭载 JetPack 6.2 的 Jetson Orin NX 8GB 上的 VLM 性能提升。 模型 Orin NX 16GB (原始版本) Orin NX 16GB (超级模式) 性能增益 (倍) VILA 1.5 3B 0.8 1 1.25 VILA 1.5 8B 0.7 1.04 1.50 LLAVA 1.6 7B 0.5 1.2 2.54 Qwen2 VL 2B 3.4 4.8 1.41 InterVL2.5 4B 3 4.1 1.37 PaliGemma2 3B 17.1 23.9 1.40 SmolVLM 2B 9.7 14.4 1.48 表 9、对 Jetson Orin NX 16GB 上的热门 VLM 进行性能基准测试,以令牌/秒为单位 表 10 显示配备 JetPack 6.2 的 Jetson Orin NX 16GB 上的 VLM 性能提升。 模型 Orin NX 16GB (原始版本) Orin NX 16GB (超级模式) 性能增益 (倍) VILA 1.5 3B 1 1.3 1.23 VILA 1.5 8B 0.8 1 1.25 LLAVA 1.6 7B 0.6 0.7 1.07 Qwen2 VL 2B 4 4.8 1.20 InterVL2.5 4B 2.8 4.4 1.57 PaliGemma2 3B 20 23.8 1.19 SmolVLM 2B 11.7 14.3 1.22 表 10、对 Jetson Orin NX 和 Orin Nano 模组上的热门 VLM 进行性能基准测试,以令牌/秒为单位 所有 VILA 和 LLAVA 模型均使用 MLC 以 INT4 精度运行,而其余模型则使用 Hugging Face Transformers 以 FP4 精度运行。 视觉 Transformer 模型 以下图表和表格显示了 CLIP、DINO 和 SAM2 等热门 ViTs 的 Super Mode 性能基准测试。 图 3、使用 Super Mode 运行 ViTs 时的性能提升 DNR 意味着模块上的内存不足以运行特定模型。模型性能将受到节流行为的影响。 表 11 显示搭载 JetPack 6.2 的 Jetson Orin Nano 4GB 的 ViT 性能提升情况。 模型 Orin Nano 4GB (原始) Orin Nano 4GB (超级模式) 性能增益 (倍) clip-vit-base-patch32 126.8 189.5 1.49 clip-vit-base-patch16 63.2 112.4 1.78 基于 DINOv2 的补丁 14 49.3% 79.3 1.61 基于 SAM2 2.5 3.8 1.54 Vit-base-patch16-224 62.4% 103.3 1.66 表 11、对 Jetson Orin Nano 4GB 上的热门 ViTs 进行性能基准测试,以令牌/秒为单位 表 12 显示搭载 JetPack 6.2 的 Jetson Orin Nano 8GB 的 ViT 性能提升情况。 模型 Orin Nano 8GB (原始版本) Orin Nano 8GB (超级模式) 性能增益 (倍) clip-vit-base-patch32 196 214 1.60 clip-vit-base-patch16 95% 151 1.69 基于 DINOv2 的补丁 14 75 126 1.68 基于 SAM2 4.4 6.3 1.43 支持恐龙 4.1 6.2 
1.52 Vit-base-patch16-224 98 158 1.61 Vit-base-patch32-224 172 273 1.60 表 12、对 Jetson Orin Nano 8GB 上的热门 ViTs 进行性能基准测试,以帧/秒为单位 表 13 显示配备 JetPack 6.2 的 Jetson Orin NX 8GB 上的 ViT 性能提升。 模型 Orin NX 8GB (原始版本) Orin NX 8GB (超级模式) 性能增益 (倍) clip-vit-base-patch32 224.0 361.1 1.54 clip-vit-base-patch16 101.7 204.3 2.01 DINOv2-base-patch14 81.4 160.3 1.97 SAM2-base 3.9 7.4 1.92 Grounding DINO 4.2 7.4 1.75 Vit-base-patch16-224 98.6 192.5 1.95 Vit-base-patch32-224 192.1 313.5 1.62 表 13、对 Jetson Orin NX 8GB 上的热门 ViTs 进行性能基准测试,以帧/秒为单位 表 14 显示配备 JetPack 6.2 的 Jetson Orin NX 16GB 上的 ViT 性能提升。 模型 Orin NX 16GB (原始版本) Orin NX 16GB (超级模式) 性能增益 (倍) clip-vit-base-patch32 322.2 356.7 1.10 clip-vit-base-patch16 163.5 192.6 1.18 DINOv2-base-patch14 127.5 159.8 1.25 SAM2-base 6.2 7.3 1.18 Grounding DINO 6.2 7.2 1.16 Vit-base-patch16-224 158.6 190.2 1.20 Vit-base-patch32-224 271.2 209.5 1.10 表 14、对 Jetson Orin NX 16GB 上的热门 ViT 进行性能基准测试 (帧/秒) 所有 ViT 模型均使用 NVIDIA TensorRT 以 FP16 精度运行,且测量以 FPS 进行。 使用 NVIDIA Jetson Orin Nano 和 NVIDIA Jetson Orin NX 的 JetPack 6.2 入门 NVIDIA Jetson 生态系统提供了多种使用 JetPack 镜像刷写开发者套件和生产模组的方式。 要在 Jetson Orin Nano 开发者套件或模组上安装 JetPack 6.2,请使用以下方法之一: JetPack SDK 页面中的 SD-Card 镜像 。 SDK Manager ,用于 根据下载和安装说明 进行刷写。 根据 刷写支持 ,刷写 bootloader、kernel 和 root file system 的脚本。 新的刷写配置 新的电源模式仅适用于新的刷写配置。默认刷写配置未发生变化。要启用新的电源模式,您必须在刷写时使用新的刷写配置。 以下是用于刷写的新刷写配置: jetson-orin-nano-devkit-super.conf 刷写或更新到 JetPack 6.2 后,运行以下命令以启动新提供的超级模式。 Jetson Orin Nano 模组上的 MAXN SUPER 模式: sudo nvpmodel -m 2 Jetson Orin NX 模组上的 MAXN SUPER 模式: sudo nvpmodel -m 0 您还可以从页面右上角的电源模式菜单中选择 MAXN SUPER 和其他电源模式。 图 4. 电源模式选择菜单 Jetson AI 实验室 Jetson AI 实验室是 NVIDIA 探索和试验针对边缘设备优化的生成式 AI 技术的中心。它为开发者提供支持,并为协作社区提供了近 50 个教程、预构建容器和资源,用于使用优化的推理基础架构部署设备上的 LLM、SLM、VLM、diffusion policies 和 speech models。 通过简化对先进 AI 工具的访问,该实验室使各级别的开发者能够在本地创新和部署生成式 AI,从而推进开源边缘 AI 和机器人学习。 通过这些易于学习的教程,为由 JetPack 6.2 提供支持的开发者套件轻松深入了解生成式 AI: 使用 Open Web UI 的 Ollama :入门级 LLM 应用示例 NanoOWL: 采用先进 ViT 模型的交互式演示 LeRobot :Hugging Face LeRobot 的物理 AI 示例 更新文档:Datasheets 和设计指南 借助更新的性能提升,以下资源已更新,可从 Jetson Download Center 下载: Jetson Orin NX 模组数据表 Jetson Orin Nano 模组数据表 Jetson Orin NX 系列和 Jetson Orin Nano 系列设计指南 散热设计指南 Jetson 生态系统合作伙伴为 Super Mode 做好准备 为支持客户部署,Jetson 生态系统合作伙伴已增强其解决方案,以支持这种提升的性能。 类别 Jetson 生态系统合作伙伴 ISV 解决方案 DeepEdge Edge Impulse RidgeRun Ultralytics 硬件系统合作伙伴 AAEON Advantech Aetina AIMobile ASUSTek Axiomtek Connect Tech Seeed Studio Syslogic Vecow Yuan High-Tech 散热解决方案 Advanced Thermal Solutions Frore Systems NVIDIA Jetson Orin 生命周期和路线图 鉴于客户对 Jetson Orin 的需求日益增长,NVIDIA 最近宣布 将 Jetson Orin 的产品生命周期延长至 2032 年。凭借这种性能提升,Jetson Orin Nano 和 Orin NX 系列成为当前和未来模型的理想平台。 即将推出的 JetPack 5.1.5 还将为 Jetson Orin NX 和 Jetson Orin Nano 模组启用超级模式。使用 JetPack 5 进行开发的开发者和客户将从性能提升中受益。 图 5、JetPack 软件路线图 使用 JetPack 6.2 提升应用性能 JetPack 6.2 是一个开创性版本。它可在现有 Jetson 模组上实现惊人的 2 倍推理性能提升,且无需增加任何成本。对于希望为其应用程序提供强效助力的 Jetson 开发者和客户而言,此次升级必不可少。立即升级到 JetPack 6.2,充分发挥 Jetson 平台的全部潜力。
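在新的功率模式下验证实际收益时,可以用一个简单脚本在切换前后粗略对比生成速度。下面是一个最小示意(假设设备上可通过 sudo 调用 JetPack 自带的 nvpmodel 工具;其中 generate_fn 是占位函数,代表您自己的推理调用,并非任何框架的现成 API),正式基准测试请仍以 MLC 等工具的结果为准:

```python
import subprocess
import time

def current_power_mode() -> str:
    # 查询当前电源模式(nvpmodel 为 JetPack 自带工具,此处假设可通过 sudo 运行)
    result = subprocess.run(["sudo", "nvpmodel", "-q"],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

def tokens_per_second(generate_fn, prompt: str, n_runs: int = 3) -> float:
    # generate_fn 为占位函数:输入提示词,返回本次生成的令牌数(由您的推理框架提供)
    total_tokens, total_time = 0, 0.0
    for _ in range(n_runs):
        start = time.perf_counter()
        total_tokens += generate_fn(prompt)
        total_time += time.perf_counter() - start
    return total_tokens / total_time

if __name__ == "__main__":
    # 切换到 MAXN SUPER 后重新运行即可对比(Orin Nano 为 sudo nvpmodel -m 2,Orin NX 为 -m 0,见上文)
    print(current_power_mode())
```

对比测试时,请同时结合上文提到的功耗估算器工具和散热设计指南,确认散热设计能够支撑新的功率模式。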
https://developer.nvidia.com/blog/advancing-physical-ai-with-nvidia-cosmos-world-foundation-model-platform/
Advancing Physical AI with NVIDIA Cosmos World Foundation Model Platform
As robotics and autonomous vehicles advance, accelerating development of physical AI —which enables autonomous machines to perceive, understand, and perform complex actions in the physical world—has become essential. At the center of these systems are world foundation models (WFMs) —AI models that simulate physical states through physics-aware videos, enabling machines to make accurate decisions and interact seamlessly with their surroundings. NVIDIA Cosmos is a platform that helps developers build custom world models for physical AI systems at scale. It offers open world foundation models and tools for every stage of development, from data curation to training to customization. Video 1. NVIDIA Cosmos: A World Foundation Model Platform for Physical AI This post explains Cosmos and its key features that accelerate physical AI development. Accelerating world model development with NVIDIA Cosmos Building physical AI is challenging, demanding precise simulations and real-world behavior understanding and prediction. A key tool for overcoming these challenges is a world model, which predicts future environmental states based on past observations and current inputs. These models are invaluable for physical AI builders, enabling them to simulate, train, and refine systems in controlled environments. However, developing effective world models requires vast amounts of data, computational power, and real-world testing, which can introduce significant safety risks, logistical hurdles, and prohibitive costs. To address these challenges, developers often turn to synthetic data generated from 3D simulations to train models. While synthetic data is a powerful tool, creating it is resource-intensive and may fall short of accurately reflecting real-world physics, particularly in complex or edge-case scenarios. The end-to-end NVIDIA Cosmos platform accelerates world model development for physical AI systems. Built on CUDA, Cosmos combines state-of-the-art world foundation models, video tokenizers, and AI-accelerated data processing pipelines. Developers can accelerate world model development by fine-tuning Cosmos world foundation models or building new ones from the ground up. In addition to Cosmos world foundation models, the platform also includes: NVIDIA NeMo Curator for efficient video data curation Cosmos Tokenizer for efficient, compact, and high-fidelity video tokenization Cosmos world foundation models pretrained for robotics and autonomous driving applications NVIDIA NeMo Framework for model training and optimization Figure 1. NVIDIA Cosmos is a world foundation model development platform with generative models, data curator, tokenizer, and framework to accelerate physical AI development Pretrained world foundation models for physical AI Cosmos world foundation models are pretrained large generative AI models trained on 9,000 trillion tokens—including 20 million hours of data from autonomous driving , robotics , synthetic environments, and other related domains. These models create realistic synthetic videos of environments and interactions, providing a scalable foundation for training complex systems, from simulating humanoid robots performing advanced actions to developing end-to-end autonomous driving models. These models use two architectures: autoregressive and diffusion. Both approaches use the transformer architecture for its scalability and effectiveness in handling complex temporal dependencies. 
Autoregressive model Cosmos autoregressive model is designed for video generation, predicting the next token based on input text and past video frames. It uses a transformer decoder architecture, with key modifications for world model development. 3D RoPE (Rotary Position Embeddings) encodes spatial and temporal dimensions separately, ensuring precise video sequence representation. Cross-attention layers enable text inputs, providing better control over world generation. QK-normalization enhances training stability. Pretraining of this model is progressive, starting with predicting up to 17 future frames from a single input frame, then extending to 34 frames, and eventually up to 121 frames (or 50,000 tokens). Text inputs are introduced to combine descriptions with video frames, and the model is fine-tuned with high-quality data for robust performance. This structured approach enables the model to generate videos of varying lengths and complexities, with or without text inputs. Figure 2. The Cosmos autoregressive model uses a transformer decoder architecture, with key modifications for world model development Diffusion models Diffusion models are popular for generating images, videos, and audio due to their ability to deconstruct training data and reconstruct it based on user input, producing high-quality, realistic outputs. Diffusion models operate in two phases: Forward diffusion process: Training data is progressively corrupted by adding Gaussian noise over multiple steps, effectively transforming it into pure noise. Reverse diffusion process: The model learns to reverse this noise step by step, recovering the original data by denoising the corrupted input. Once trained, diffusion models generate new data by sampling random Gaussian noise and passing it through the learned denoising process. In addition, Cosmos diffusion models also get several key updates tailored for physical AI development. 3D patchification processes video into smaller patches, simplifying spatio-temporal sequence representation. Hybrid positional embeddings handle spatial and temporal dimensions, supporting videos with varying resolutions and frame rates. Cross-attention layers incorporate text inputs, enabling better control over video generation based on descriptions. Adaptive layer normalization with LoRA reduces model size by 36%, maintaining high performance with fewer resources. Figure 3. Cosmos diffusion model architecture combines advanced video compression, flexible positional encoding, and text integration to deliver high-quality, physics-aware video generation Model sizes for varied needs Developers can choose from the following three model sizes to meet performance, quality, and deployment needs. Nano: Optimized for real-time, low-latency inference and edge deployment. Super: Designed as performant baseline models. Ultra: Focused on maximum quality and fidelity, ideal for distilling custom models. Strengths and limitations Cosmos world foundation models generate low-resolution, real-world-accurate synthetic video data, essential for training robotics and autonomous vehicle systems. While they lack artistic flair, their outputs closely replicate the physical world, making them ideal for precise object permanence and realistic scenarios in physical AI model training. 
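To make the forward and reverse diffusion phases described above more concrete, here is a minimal, generic PyTorch-style sketch of the two steps. It illustrates the standard denoising-diffusion recipe only, not Cosmos source code; the noise schedule (alphas, alphas_cumprod) and the noise-prediction model are placeholders.

```python
import torch

def forward_diffusion(x0, t, alphas_cumprod):
    # Forward process: corrupt clean data x0 with Gaussian noise at step t.
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(-1, *([1] * (x0.dim() - 1)))
    xt = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    return xt, noise  # the network is trained to predict `noise` from `xt` and `t`

@torch.no_grad()
def reverse_step(model, xt, t, alphas, alphas_cumprod):
    # Reverse process: one denoising update driven by the model's noise prediction.
    eps = model(xt, t)  # placeholder noise-prediction network
    a, a_bar = alphas[t], alphas_cumprod[t]
    mean = (xt - (1.0 - a) / (1.0 - a_bar).sqrt() * eps) / a.sqrt()
    if t > 0:  # add fresh noise at every step except the last
        mean = mean + (1.0 - a).sqrt() * torch.randn_like(xt)
    return mean
```

Cosmos layers the physical-AI-specific additions listed above (3D patchification, hybrid positional embeddings, cross-attention on text, and AdaLN with LoRA) on top of this basic recipe.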
Guardrails for safe use of Cosmos world foundation models AI models need guardrails to ensure reliability by mitigating hallucinations, preventing harmful outputs, safeguarding privacy, and aligning with AI standards for safe and controlled deployment. Cosmos ensures the safe use of its world foundation models through a customizable, two-stage guardrail system aligned with NVIDIA’s commitment to trustworthy AI. Cosmos Guardrails operates in two stages: Pre-guard and Post-guard. Pre-guard This stage involves text prompt-based safety measures using two layers: Keyword Blocking: A blocklist checker scans prompts for unsafe keywords, using lemmatization to detect variations and blocking non-English terms or spelling errors. Aegis Guardrail: The NVIDIA fine-tuned Aegis AI Content Safety model detects and blocks semantically unsafe prompts, including categories like violence, harassment, and profanity. Unsafe prompts halt video generation and return an error message. Post-guard The Post-guard stage ensures the safety of generated videos through: Video Content Safety Classifier: A multiclass classifier evaluates every video frame for safety. If any frame is flagged as unsafe, the entire video is rejected. Face Blur Filter: All human faces in generated videos are blurred using the RetinaFace model to protect privacy and reduce biases based on age, gender, or race. NVIDIA experts rigorously test with adversarial examples, annotating over 10,000 prompt-video pairs to refine the system and address edge cases. Evaluating Cosmos world foundation models for 3D consistency and physics alignment Cosmos benchmarks play a crucial role in assessing the ability of world foundation models to simulate real-world physics accurately and efficiently for physical AI applications. While publicly available benchmarks for video generation focus on fidelity, temporal consistency, and speed of generated videos, Cosmos benchmarks add new dimensions to evaluate generalist models: 3D consistency and physics alignment, ensuring the videos are evaluated based on accuracy required for physical AI systems. 3D consistency Cosmos models were tested for 3D consistency on static scenes from a curated subset of 500 videos from an open dataset . Text prompts describing the videos were generated to avoid motion-related complexities. Comparisons were made against VideoLDM , a baseline generative model. Metrics used Geometric Consistency: Assessed through epipolar geometry constraints using metrics like Sampson error and camera pose estimation success rate. View Synthesis Consistency: Evaluated through metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS). These metrics measure the quality of synthesized views from interpolated camera positions. Lower Sampson error and higher success rates indicate better 3D alignment. Similarly, higher PSNR and SSIM and lower LPIPS are indicators of a better quality. Model Sampson Error ↓ Pose Estimation Success Rate (%) ↑ PSNR ↑ SSIM ↑ LPIPS ↓ VideoLDM 0.841 4.40% 26.23 0.783 0.135 Cosmos 1.0 Diffusion Text2World 7B 0.355 62.60% 33.02 0.939 0.070 Cosmos 1.0 Diffusion Video2World 7B 0.473 68.40% 30.66 0.929 0.085 Cosmos 1.0 Autoregressive 4B 0.433 35.60% 32.56 0.933 0.090 Cosmos 1.0 Autoregressive Video2World 5B 0.392 27.00% 32.18 0.931 0.090 Real videos (reference) 0.431 56.40% 35.38 0.962 0.054 Table 1. 
Evaluation of 3D consistency for Cosmos world foundation models versus base VideoLDM model Results Cosmos world foundation models outperform the baseline in 3D consistency (table 1), with higher geometric alignment and camera pose success rates. Their synthesized views match real-world quality, confirming their effectiveness as world simulators. Physical alignment Physics alignment tests how well Cosmos models simulate real-world physics, including motion, gravity, and energy dynamics. Using NVIDIA PhysX and NVIDIA Isaac Sim , eight controlled scenarios were designed to evaluate properties like gravity, collision, torque, and inertia in virtual environments. Metrics used Pixel-Level Metrics: Peak Signal-to-Noise Ratio (PSNR) measures how closely the pixel values of the model’s output match the reference video. Higher values indicate less noise and better accuracy. Structural Similarity Index Measure (SSIM) assesses the similarity in structure, luminance, and contrast between the generated and ground-truth frames. Higher SSIM values reflect greater visual fidelity. Feature-Level Metric: DreamSim measures the similarity between high-level features extracted from both videos. This approach evaluates the semantic consistency of the generated content, focusing on objects and motion rather than individual pixels. Object-Level Metric: Intersection-over-Union (IoU) calculates the overlap between the predicted and actual object regions in the video. This is especially useful for tracking specific objects through the simulation to ensure their behavior aligns with physical expectations. Higher PSNR, SSIM, DreamSim and IoU are indicators of better physical alignment. Model Conditioning PSNR ↑ SSIM ↑ DreamSim ↑ Avg. IoU ↑ Cosmos 1.0 Diffusion Video2World 7B prompt + 1 frame 17.34 0.54 0.84 0.332 Cosmos 1.0 Diffusion Video2World 7B prompt + 9 frames 21.06 0.69 0.86 0.592 Cosmos 1.0 Diffusion Video2World 14B prompt + 1 frame 16.81 0.52 0.84 0.338 Cosmos 1.0 Diffusion Video2World 14B prompt + 9 frames 20.21 0.64 0.86 0.598 Cosmos 1.0 Autoregressive 4B 1 frame 17.91 0.49 0.83 0.394 Cosmos 1.0 Autoregressive 4B 9 frames 18.13 0.48 0.86 0.481 Cosmos 1.0 Autoregressive Video2World 5B prompt + 1 frame 17.67 0.48 0.82 0.376 Cosmos 1.0 Autoregressive Video2World 5B prompt + 9 frames 18.29 0.48 0.86 0.481 Cosmos 1.0 Autoregressive Video2World 12B 1 frame 17.94 0.49 0.83 0.395 Cosmos 1.0 Autoregressive Video2World 12B 9 frames 18.22 0.49 0.87 0.487 Cosmos 1.0 Autoregressive Video2World 13B prompt + 1 frame 18 0.49 0.83 0.397 Cosmos 1.0 Autoregressive Video2World 13B prompt + 9 frames 18.26 0.48 0.87 0.482 Table 2. Physics alignment results with metrics calculated over 33 frames, the maximum length supported by the autoregressive variants of the Cosmos world foundation models Results Cosmos world foundation models show strong adherence to physical laws (Table 2), particularly with increased conditioning data. Post-training on camera conditioning dataset achieves a twofold increase in pose estimation success rate compared to baseline models. However, challenges like object impermanence (where objects vanish or appear unexpectedly) and implausible behaviors (such as violating gravity) highlight areas for improvement. Customizing for physical AI applications with Cosmos and NVIDIA Omniverse Video search and understanding: Simplifies video tagging and search by understanding spatial and temporal patterns, making training data preparation easier. 
Controllable 3D-to-real synthetic data generation: With NVIDIA Omniverse , developers can create 3D scenarios and use Cosmos to generate photorealistic videos that are precisely controlled by 3D scenes for highly tailored synthetic datasets. Policy model development and evaluation: World foundation models fine-tuned for action-conditioned video prediction enable scalable, reproducible evaluation of policy models—strategies mapping states to actions—reducing reliance on risky real-world tests or complex simulations for tasks like obstacle navigation or object manipulation. Foresight for action selection: Cosmos equips physical AI models with predictive capabilities to assess the outcomes of potential actions. Multiverse simulation: Using Cosmos and NVIDIA Omniverse, developers can simulate multiple future outcomes to help AI models evaluate and select the best strategy for achieving its goals, benefiting applications like predictive maintenance and autonomous decision-making. From generalist to customized specialist models Cosmos introduces a two-stage approach to world model training. Generalist models: Cosmos world foundation models are built as generalists, trained on extensive datasets that encompass diverse real-world physics and environments. These open models are capable of handling a broad range of scenarios, from natural dynamics to robotic interactions, providing a solid foundation for any physical AI task. Specialist models: Developers can fine-tune generalist models using smaller, targeted datasets to create specialists tailored for specific applications, such as autonomous driving or humanoid robotics or they can generate customized synthetic scenarios, such as night scenes with emergency vehicles or high-fidelity industrial robotics environments. This fine-tuning process significantly reduces the required data and training time compared to training models from scratch. Cosmos accelerates training and fine-tuning with efficient video processing pipelines, highly performant tokenizer, and advanced training frameworks, enabling developers to address operational needs and edge cases for advancing physical AI. Accelerated data processing with NVIDIA NeMo Curator Training models require curated, high-quality data, which is time and resource-intensive. NVIDIA Cosmos includes a data processing and curation pipeline powered by NVIDIA NeMo Curator and optimized for NVIDIA data center GPUs. NVIDIA NeMo Curator enables robotics and AV developers to process vast datasets efficiently. For example, 20 million hours of video can be processed in 40 days on NVIDIA Hopper GPUs, or just 14 days on NVIDIA Blackwell GPUs—compared to 3.4 years on unoptimized CPU pipelines. Key benefits include: 89x faster curation: Dramatically reduces processing time Scalability: Handles 100+ PB of data seamlessly High throughput: Advanced filtering, captioning, and embedding ensure quality without sacrificing speed Figure 4. Cosmos includes NeMo Curator that delivers 89x faster video data processing High-fidelity compression and reconstruction with Cosmos Tokenizer After data is curated, it must be tokenized for training. Tokenization breaks down complex data into manageable units, enabling models to process and learn from it more efficiently. Cosmos tokenizers simplify this process with faster compression and visual reconstruction while preserving quality, reducing costs and complexity. 
For autoregressive models, the discrete tokenizer compresses data 8x in time and 16×16 in space, processing up to 49 frames at once. For diffusion models, the continuous tokenizer achieves 8x time and 8×8 space compression, handling up to 121 frames. Fine-tuning with NVIDIA NeMo Developers can fine-tune Cosmos world foundation models using the NVIDIA NeMo Framework. NeMo Framework accelerates model training on GPU-powered systems, whether enhancing an existing model or building a new one, from on-premises data centers to the cloud. NeMo Framework efficiently loads multimodal data by: Sharding terabyte size dataset into compressed files to reduce IO overhead. Deterministically saving and loading datasets to avoid repetition and minimize compute waste. Reducing network bandwidth when exchanging data using optimized communications. Get started with NVIDIA Cosmos Cosmos world foundation models are open and available on NGC and Hugging Face . Developers can also run Cosmos world foundation models on the NVIDIA API catalog . Also available on the API catalog are Cosmos tools to enhance text prompts for accuracy, an inbuilt watermarking system that enables easy future identification of AI-generated sequences, and a specialized model to decode video sequences for augmented reality applications. To learn more, watch the demo . NeMo Curator for accelerated data processing pipelines is available as a managed service and SDK. Developers can now apply for early access . Cosmos tokenizers are open neural networks available on GitHub and Hugging Face . Get started with NVIDIA Cosmos .
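Before moving on, a quick illustration of the tokenizer compression ratios quoted above (8x in time with 16×16 or 8×8 in space): the small helper below estimates the resulting token-grid shape for a clip. The 1280×720 resolution in the example is purely illustrative, and the rounding is a simplification.

```python
import math

def token_grid(frames: int, height: int, width: int,
               t_factor: int, s_factor: int):
    # Estimated latent/token grid for a tokenizer that compresses time by
    # t_factor and each spatial dimension by s_factor.
    return (math.ceil(frames / t_factor),
            math.ceil(height / s_factor),
            math.ceil(width / s_factor))

# Discrete tokenizer (autoregressive models): 8x time, 16x16 space, up to 49 frames
print(token_grid(49, 720, 1280, t_factor=8, s_factor=16))   # -> (7, 45, 80)
# Continuous tokenizer (diffusion models): 8x time, 8x8 space, up to 121 frames
print(token_grid(121, 720, 1280, t_factor=8, s_factor=8))   # -> (16, 90, 160)
```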
https://developer.nvidia.com/zh-cn/blog/advancing-physical-ai-with-nvidia-cosmos-world-foundation-model-platform/
NVIDIA Cosmos World 基础模型平台助力物理 AI 进步
随着机器人和 自动驾驶汽车 的发展,加速 物理 AI 的发展变得至关重要,而物理 AI 使自主机器能够感知、理解并在现实世界中执行复杂的操作。这些系统的核心是 世界基础模型 (WFMs) ,即通过物理感知视频模拟物理状态的 AI 模型,使机器能够做出准确决策并与周围环境无缝交互。 NVIDIA Cosmos 平台可帮助开发者大规模为物理 AI 系统构建自定义世界模型。它为从数据管护、训练到定制的每个开发阶段提供开放世界基础模型和工具。 视频 1、NVIDIA Cosmos:适用于物理 AI 的世界基础模型平台 本文将介绍 Cosmos 及其加速物理 AI 开发的主要功能。Cosmos 是一种开源的 Python 库,用于加速物理 AI 开发。 借助 NVIDIA Cosmos 加速世界模型开发 构建物理 AI 极具挑战性,需要精确的模拟以及真实世界的行为理解和预测。克服这些挑战的一个关键工具是世界模型,该模型根据过去的观察结果和当前的输入结果预测未来的环境状态。这些模型对于物理 AI 构建者来说非常重要,使他们能够在受控环境中模拟、训练和优化系统。 然而,开发有效的世界模型需要大量数据、计算能力和真实世界的测试,这可能会带来重大的安全风险、物流障碍和高昂的成本。为了应对这些挑战,开发者通常会使用通过 3D 仿真生成的 合成数据 来训练模型。虽然合成数据是一种功能强大的工具,但创建合成数据需要大量资源,可能无法准确反映现实世界的物理特性,在复杂或边缘情况下尤其如此。 端到端 NVIDIA Cosmos 平台 可加速物理 AI 系统的世界模型开发。Cosmos 基于 CUDA 构建,结合了先进的世界基础模型、视频标记器和 AI 加速的数据处理流程。 开发者可以通过微调 Cosmos 世界基础模型或从头开始构建新模型来加速世界模型的开发。除了 Cosmos 世界基础模型之外,该平台还包括: 用于高效视频数据管护的 NVIDIA NeMo Curator Cosmos Tokenizer 可实现高效、紧凑和高保真的视频标记化 为机器人和自动驾驶应用预训练的 Cosmos World Foundation 模型 用于模型训练和优化的 NVIDIA NeMo 框架 图 1、NVIDIA Cosmos 是一个世界性的基础模型开发平台,包含生成式模型、数据管护器、标记器和框架,可加速物理 AI 开发 适用于物理 AI 的 预训练世界基础模型 Cosmos 世界基础模型是预训练的大型生成式 AI 模型,使用 9000 万亿个令牌进行训练,其中包括来自 自动驾驶 、 机器人 开发、合成环境和其他相关领域的 2000 万个小时的数据。这些模型可创建有关环境和交互的逼真合成视频,为训练复杂系统 (从模拟执行高级动作的类 人型机器人 到开发端到端自动驾驶模型) 提供可扩展的基础。 这些模型使用两种架构:自回归和扩散。这两种方法都使用 Transformer 架构,因为其可扩展性和有效性可用于处理复杂的时间依赖项。 自回归模型 Cosmos 自回归模型专为视频生成而设计,可根据输入文本和过去的视频帧预测下一个令牌。它使用 Transformer 解码器架构,并针对世界模型开发进行了关键修改。 3D RoPE (Rotary Position Embeddings) 可分别对空间和时间维度进行编码,确保精确的视频序列表示。 交叉注意力层支持文本输入,从而更好地控制世界生成。 QK 标准化可增强训练稳定性。 此模型的预训练是渐进式的,首先从单个输入帧预测多达 17 帧的未来帧,然后扩展到 34 帧,最终达到 121 帧 (或 50,000 个 tokens)。引入文本输入以将描述与视频帧相结合,并使用高质量数据对模型进行微调,以实现强大的性能。这种结构化方法使模型能够生成长度和复杂性各不相同的视频,无论是否输入文本。 图 2、Cosmos 自回归模型使用 Transformer 解码器架构,并对世界模型开发进行了关键修改 扩散模型 Diffusion 模型在生成图像、视频和音频方面很受欢迎,因为它们能够解构训练数据并根据用户输入进行重建,从而生成高质量、逼真的输出。 扩散模型分为两个阶段: 前向扩散过程: 通过在多个步骤中添加高斯噪声,有效地将其转换为纯噪声,训练数据会逐渐损坏。 反向扩散过程: 模型会逐步学习反向噪声,通过对损坏的输入降噪来恢复原始数据,例如使用 Stable Diffusion 模型。 经过训练后,扩散模型通过对随机高斯噪声进行采样并将其传递给学习的降噪过程来生成新数据。此外,Cosmos 扩散模型还获得了一些专为物理 AI 开发打造的关键更新。 3D Patchification 将视频处理为更小的分块,从而简化时空序列表示。 混合位置嵌入可处理空间和时间维度,支持具有不同分辨率和帧率的视频。 交叉注意力层整合文本输入,从而更好地控制基于描述生成的视频。 采用 LoRA 的自适应层归一化可将模型大小减少 36%,从而以更少的资源保持高性能。 图 3、Cosmos 扩散模型架构将先进的视频压缩、灵活的位置编码和文本集成相结合,可生成具有物理感知的高质量视频 满足不同需求的模型大小 开发者可以从以下三种模型大小中进行选择,以满足性能、质量和部署需求。 Nano: 针对实时、低延迟推理和边缘部署进行优化。 Super: 设计为高性能基准模型。 Ultra:专注于最大化质量和保真度,适合提取自定义模型。 优点和局限性 Cosmos 世界基础模型生成低分辨率、真实世界准确的合成视频数据,这对于训练 机器人 和自动驾驶汽车系统至关重要。虽然它们缺乏艺术天赋,但其输出结果密切复制了物理世界,因此非常适合在物理 AI 模型训练中实现精确的物体持久性和逼真的场景。 用于安全使用 Cosmos World 基础模型的护栏 AI 模型需要护栏来确保可靠性,具体方法包括减少幻影、防止有害输出、保护隐私,以及符合 AI 标准以实现安全可控的部署。Cosmos 通过可定制的双阶段护栏系统确保其 World Foundation 模型的安全使用,该系统符合 NVIDIA 对值得信赖的 AI 的承诺。 Cosmos Guardrails 分为两个阶段:Pre-guard 和 Post-guard。 预防护 此阶段涉及基于文本提示的安全措施,使用两层: 关键字屏蔽:屏蔽列表检查器会扫描不安全关键字的提示,使用引言化来检测变异,并阻止非英语术语或拼写错误。 Aegis Guardrail: 经过 NVIDIA 微调的 Aegis AI 内容安全模型可以检测并阻止语义上不安全的提示,包括暴力、骚扰和冒犯等类别。不安全提示停止视频生成并返回错误消息。 护卫队队员 Post-guard Stage 通过以下方式确保生成视频的安全性: 视频内容安全分类器: 多类别分类器评估每个视频帧的安全性。如果任何帧被标记为不安全,则整个视频将被拒绝。 面部模糊滤镜: 使用 RetinaFace 模型对生成的视频中的所有人脸进行模糊处理,以保护隐私并减少基于年龄、性别或种族的偏见。 NVIDIA 专家使用对抗示例进行严格测试,标注超过 10,000 个提示视频对,以优化系统并解决边缘案例。 评估 Cosmos 世界基础模型,以实现 3D 一致性和物理对齐 Cosmos 基准测试在评估世界基础模型为物理 AI 应用准确高效地模拟真实物理的能力方面发挥着至关重要的作用。虽然公开可用的视频生成基准测试侧重于保真度、时间一致性和生成视频的速度,但 Cosmos 基准测试增加了新的维度来评估通用模型:3D 一致性和物理对齐,确保根据物理 AI 系统所需的准确性评估视频。 3D 一致性 Cosmos 模型在 开放数据集 中的 500 个精选视频子集中进行了静态场景的 3D 一致性测试。生成描述视频的文本提示是为了避免与动作相关的复杂性。并与基准生成模型 VideoLDM 进行了比较。 所用指标 几何一致性:使用 Sampson 误差和摄像头姿态估计成功率等指标通过外极性几何约束条件进行评估。 查看合成一致性: 通过峰值信噪比 (Peak Signal-to-Noise Ratio, PSNR)、结构相似性指数 (Structural Similarity Index, SSIM) 和学习感知图像块相似性 (Learned Perceptual Image Patch Similarity, LPIPS) 
等指标进行评估。这些指标用于测量从插入的摄像头位置合成视图的质量。 更低的 Sampson 误差和更高的成功率表明 3D 对齐效果更好。同样,较高的 PSNR 和 SSIM 以及较低的 LPIPS 都表示质量有所提高。 模型 Sampson 错误* 姿态估计成功率 (%)* PSNR* SSIM LPIPS VideoLDM 0.841 4.40% 26.23 0.783 0.135 Cosmos 1.0 Diffusion Text2World 7B 0.355 62.60% 33.02 0.939 0.070 Cosmos 1.0 Diffusion Video2World 7B 0.473 68.40% 30.66 0.929 0.085 Cosmos 1.0 Autoregressive 4B 0.433 35.60% 32.56 0.933 0.090 Cosmos 1.0 Autoregressive Video2World 5B 0.392 27.00% 32.18 0.931 0.090 真实视频 (参考) 0.431 56.40% 35.38 0.962 0.054 表 1、评估 Cosmos 世界基础模型与基础 VideoLDM 模型的 3D 一致性 成果 Cosmos 世界基础模型在 3D 一致性方面的表现优于基准 (表 1),具有更高的几何对齐和摄像头姿态成功率。其合成视图与真实世界的质量相匹配,证实了其作为世界模拟器的有效性。 物理对齐 物理对齐测试 Cosmos 模型在模拟真实物理 (包括运动、重力和能量动力学) 时的效果。借助 NVIDIA PhysX 和 NVIDIA Isaac Sim ,设计出八个受控场景,用于评估虚拟环境中的重力、碰撞、扭矩和惯性等属性。 所用指标 像素级指标 :峰值信噪比(Peak Signal-to-Noise Ratio,PSNR)可测量模型输出的像素值与参考视频的匹配程度。值越高表示噪点越小,准确性越高。结构相似性指数指标(Structural Similarity Index Measure,SSIM)用于评估生成的帧与真实帧之间在结构、亮度和对比度方面的相似性。较高的 SSIM 值反映了更高的视觉保真度。 特征级别指标 :DreamSim 测量从两个视频中提取的高级特征之间的相似性。这种方法评估生成内容的语义一致性,重点关注物体和运动,而不是单个像素。 目标级别指标: 交集并集比 (Intersection-over-Union,IoU) 用于计算视频中预测的目标区域与实际目标区域之间的重叠。这对于在整个模拟过程中追踪特定物体特别有用,可确保其行为符合物理预期。 PSNR、SSIM、DreamSim 和 IoU 越高,表示物理对齐越好。 模型 调节 PSNR* SSIM DreamSim™ 平均值。IoU* Cosmos 1.0 Diffusion Video2World 7B 提示符 = 1 帧 17.34 0.54 0.84 0.332 Cosmos 1.0 Diffusion Video2World 7B 提示符 = 9 帧 21.06 0.69 0.86 0.592 Cosmos 1.0 Diffusion Video2World 14B 提示符 = 1 帧 16.81 0.52 0.84 0.338 Cosmos 1.0 Diffusion Video2World 14B 提示符 = 9 帧 20.21 0.64 0.86 0.598 Cosmos 1.0 Autoregressive 4B 1 帧 17.91 0.49 0.83 0.394 Cosmos 1.0 Autoregressive 4B 9 帧 18.13 0.48 0.86 0.481 Cosmos 1.0 Autoregressive Video2World 5B 提示符 = 1 帧 17.67 0.48 0.82 0.376 Cosmos 1.0 Autoregressive Video2World 5B 提示符 = 9 帧 18.29 0.48 0.86 0.481 Cosmos 1.0 Autoregressive Video2World 12B 1 帧 17.94 0.49 0.83 0.395 Cosmos 1.0 Autoregressive Video2World 12B 9 帧 18.22 0.49 0.87 0.487 Cosmos 1.0 Autoregressive Video2World 13B 提示符 = 1 帧 18 0.49 0.83 0.397 Cosmos 1.0 Autoregressive Video2World 13B 提示符 = 9 帧 18.26 0.48 0.87 0.482 表 2、物理对齐结果,计算指标超过 33 帧 (Cosmos 世界基础模型的自回归变体支持的最大长度) 成果 Cosmos 世界基础模型非常遵守物理定律 (表 2),尤其是在条件数据增加的情况下。与基线模型相比,在摄像头调节数据集上进行后训练后,姿态估计的成功率提高了两倍。然而,物体无常 (物体意外消失或出现) 和难以置信的行为 (如违反重力) 等挑战凸显了需要改进的领域。 使用 Cosmos 和 NVIDIA Omniverse 定制物理 AI 应用 视频搜索和理解: 通过了解空间和时间模式来简化视频标记和搜索,从而更轻松地准备训练数据 可控的 3D 到真实合成数据生成: 借助 NVIDIA Omniverse ,开发者可以创建 3D 场景,并使用 Cosmos 生成逼真的视频,这些视频由 3D 场景精确控制,用于高度定制的合成数据集。 策略模型开发和评估: World foundation models 针对动作条件视频预测进行了微调,可实现策略模型的可扩展、可再现评估 (将状态映射到动作的策略),从而减少对障碍物导航或对象操作等任务进行风险真实测试或复杂模拟的依赖。 行动选择前瞻: Cosmos 为物理 AI 模型配备预测功能,以评估潜在行动的结果。 Multiverse 模拟: 借助 Cosmos 和 NVIDIA Omniverse,开发者可以模拟多种未来结果,帮助 AI 模型评估和选择实现目标的最佳策略,从而使预测性维护和自主决策等应用受益。 从通用模型到定制的专业模型 Cosmos 为世界模型训练引入了一种分为两个阶段的方法。 多面手模型:Cosmos 世界基础模型以多面手的身份构建,基于涵盖各种真实物理和环境的大量数据集进行训练。这些开放模型能够处理从自然动力学到机器人交互的各种场景,为任何物理 AI 任务提供坚实的基础。 专业模型:开发者可以使用更小的定向数据集微调通用模型,为特定应用(如自动驾驶或人形机器人)打造量身定制的专家模型,也可以生成定制的合成场景,如夜间场景中的紧急车辆或高保真工业机器人环境。与从头开始训练模型相比,这种微调过程可显著减少所需的数据和训练时间。 Cosmos 通过高效的视频处理流程、高性能的 tokenizer 和高级训练框架加速训练和微调,使开发者能够满足运营需求和边缘案例,推动物理 AI 的发展。 借助 NVIDIA NeMo Curator 加速数据处理 训练模型需要精心挑选的高质量数据,而且需要耗费大量时间和资源。NVIDIA Cosmos 包含由 NVIDIA NeMo Curator 提供支持并针对 NVIDIA 数据中心 GPU 进行优化的数据处理和管护流程。 借助 NVIDIA NeMo Curator,机器人和自动驾驶(AV)开发者能够高效处理庞大的数据集。例如,20 million 小时的视频可以在 40 天内在 NVIDIA Hopper GPU 上处理完毕,或仅在 14 天内在 NVIDIA Blackwell GPU 上处理完毕——相比之下,未经优化的 CPU 流水线需要 3.4 年。 主要优势包括: PyTorch、pandas、LangChain、Megatron、NVIDIA、cuOpt、Stable Diffusion、Llama、Jetson、Google、Python、Hugging Face、Arm、Anaconda、Siemens、DPU、GPU 和 Github 等技术的 89 倍管护速度:显著缩短处理时间 可扩展性: 无缝处理 100 多个 PB 的数据 高吞吐量:高级过滤、字幕和嵌入可在不牺牲速度的情况下确保质量 图 
4、Cosmos 包含 NeMo Curator,可将视频数据处理速度提高 89 倍 使用 Cosmos Tokenizer 实现高保真压缩和重建 整理数据后,必须对其进行标记化以进行训练。标记化将复杂的数据分解为可管理的单元,使模型能够更高效地处理数据并从中学习。 Cosmos 标记器可简化此过程,加快压缩和视觉重建速度,同时保持质量,降低成本和复杂性。对于自回归模型,离散分词器可将数据压缩为时间的 8 倍,在空间中压缩为 16 × 16,一次最多处理 49 帧。对于扩散模型,连续分词器可实现 8 倍的时间和 8 × 8 的空间压缩,最多可处理 121 帧。 使用 NVIDIA NeMo 进行微调 开发者可以使用 NVIDIA NeMo 框架微调 Cosmos 世界基础模型。NeMo 框架可以在 GPU 驱动的系统上加速模型训练,无论是增强现有模型还是构建新模型,从本地数据中心到云端。 NeMo 框架通过以下方式高效加载多模态数据: 将 TB 大小的数据集分片到压缩文件中,以减少 IO 开销。 决定性地保存和加载数据集,以避免重复并尽可能减少计算浪费。 在使用优化的通信交换数据时降低网络带宽。 开始使用 NVIDIA Cosmos Cosmos World Foundation 模型现已开放,可在 NGC 和 Hugging Face 上获取。开发者还可以在 NVIDIA API Catalog 上运行 Cosmos World Foundation 模型。API Catalog 上还提供了用于增强文本提示以提高准确性的 Cosmos 工具、支持未来轻松识别 AI 生成序列的内置水印系统,以及用于解码视频序列以用于增强现实应用的专用模型。如需了解详情,请观看演示。 NeMo Curator for accelerated data processing pipelines is available as a managed service and SDK. Developers can now apply for early access . Cosmos tokenizers are open neural networks available on GitHub and Hugging Face . 开始使用 NVIDIA Cosmos 。
https://developer.nvidia.com/blog/accelerate-custom-video-foundation-model-pipelines-with-new-nvidia-nemo-framework-capabilities/
Accelerate Custom Video Foundation Model Pipelines with New NVIDIA NeMo Framework Capabilities
Generative AI has evolved from text-based models to multimodal models, with a recent expansion into video, opening up new potential uses across various industries. Video models can create new experiences for users or simulate scenarios for training autonomous agents at scale. They are helping revolutionize various industries including robotics, autonomous vehicles, and entertainment. The development of video foundation models presents unique challenges due to the vast and varied nature of video data. This also underscores the necessity of scalable pipelines for curating data and effectively training models that can comprehend temporal and spatial dynamics. We are announcing brand new video foundation model capabilities in the NVIDIA NeMo framework , an end-to-end training framework that enables you to pretrain and fine-tune your own video foundation models. The framework includes a high-throughput data curation, efficient multimodal data loading functionality, scalable model training, and a parallelized in-framework inference. Video 1. NVIDIA Cosmos is a world foundation model platform that accelerates the development of physical AI applications like humanoid robots and autonomous vehicles. High-throughput video curation through optimized pipelines NeMo Curator improves generative AI model accuracy by efficiently processing and preparing high-quality data, including large video datasets. Using NeMo Curator’s scalable data pipelines, you can efficiently clip, annotate, and filter 100 PB or more of videos. To remove bottlenecks and optimize performance, NeMo Curator uses the following combination: NVDEC : Hardware decoder NVENC : Hardware encoder Ray : Compute framework for scaling AI applications The NeMo Curator autobalancing techniques can leverage heterogeneous clusters with multiple GPU types to take advantage of NVENC on L40S GPUs and the performance of H100 and GB200 GPUs. Figure 1 shows how NeMo Curator can process 20M hours of video data, reducing the processing time from years to days, achieving 89x speed up using 1K GPUs compared to unoptimized pipelines on CPUs for ISO power usage. Figure 1. NeMo Curator delivers 89x faster video data processing NeMo Curator provides the following relevant pipelines for video foundation model training and fine-tuning datasets: Clipping Sharding The clipping pipeline starts with decoding and splitting raw videos into short, continuous clips by analyzing frame-to-frame color changes. The stitching stage smooths the clips out by using image embedding similarities to potentially merge adjacent clips together. These clips are then transcoded to the high-quality video encoding (H264), and they are annotated with video embeddings and captions, either existing or synthetically generated by a VLM, to facilitate semantic search capabilities. Figure 2. Video curation clipping and sharding pipelines Sharding generates text embeddings for captions to create the final WebDataset used for training. NeMo Curator also uses Ray streaming to build an auto-balancing system and deploy an optimal number of workers for each stage in the pipeline to avoid being bottlenecked by any stage (Figure 3). Figure 3. Auto-balancing system to match the throughput of the overall pipeline Efficient multimodal dataloading Video models can be trained on billions of images and millions of videos, necessitating an efficient data loading strategy to achieve high throughput during training time. 
This is accomplished in the NeMo framework through the use of Megatron-Energon data loader : Shard large-scale data: Uses the WebDataset format to shard a TB-size dataset into compressed files to help reduce I/O overhead during training. Deterministic save and load: Enables the dataset to be visited in one pass without repetition when the training job is disrupted, ensuring consistency across different training cluster setups. Sequence packing: Packs variable length or resolution images and videos together up to the max sequence length, minimizing compute wastage due to padding while simplifying data loading logic. NeMo uses the special THD attention kernel from the Transformer engine to support accelerated training with sequence packing. Figure 4. Mixed image-video training with sequence packing Reduce network bandwidth strain: Each model parallel rank downloads a different subset of data instead of the whole dataset, and then all-gathers the data across ranks to get an identical dataloader. Figure 5. Reducing network bandwidth strain to improve training throughput Scaling video foundation model training Video foundation models can be either autoregressive or diffusion models. The well-established suite of NeMo tools on large language models (LLMs) can be reused for autoregressive models, while support for diffusion transformers such as DiT, MovieGen, and the latest NVIDIA Cosmos world foundation models for physical AI have been newly added. The NeMo tech stack is highly optimized and provides more than 40% Model FLOPs utilization (MFU) in the latest benchmark (Table 1). Model size Context length Training config GPU used (TFLOPS/s) Throughput (token/s/GPU) DiT 7B 8k baseline, no optimization OOM DiT 7B 8k CP=2 457 8,969 DiT 7B 74k TP=4 SP CP=4 414 2,933 DiT 28B 8k TP=2 SP PP=2 435 2,392 DiT 28B 74k TP=8 SP CP=4 PP=4 411 994 Table 1. GPU utilization and throughput benchmark for NVIDIA NeMo framework on diffusion transformers (DiT) Legend: CP =context parallelism; TP =tensor parallelism; SP =sequence parallelism; PP =pipeline parallelism Overview of the video diffusion pipeline A video diffusion training pipeline is generally composed of the following major steps: Tokenize the input image and video with a causal temporal 3D tokenizer to generate 3D spatio-temporal tokens. Use a transformer decoder conditioned by the diffusion noise schedule timestep t and text input. Timestep conditioning is applied through an Adaptive LayerNormalization (AdaLN) mechanism, with an option to use AdaLN-LoRA, which further improves Model FLOPs Utilization (MFU) during training. Text conditioning is applied through a cross attention layer in each transformer block. The NeMo framework enables you to initialize your transformer decoder based on the canonical DiT architecture or the MovieGen Llama architecture, which uses Grouped-Query Attention (GQA). Compute the diffusion loss with the parallelized EDM diffusion pipeline using the noise prediction from the diffusion transformer. NeMo also applies additional Root Mean Square Layer Normalization (RMSNorm) on the queries and keys before attention blocks to stabilize diffusion training. RMSNorm is applied per attention head to remain compatible with tensor parallelism. Figure 6. 
NeMo video diffusion training pipeline Parallelism optimizations for video diffusion models NeMo and Megatron-Core enable various model parallelism techniques: Tensor parallel (TP) Sequence parallel (SP) Pipeline parallel (PP) Context parallel (CP) However, these techniques face unique challenges when applied to video diffusion transformers. Here’s how NeMo solves these challenges to achieve scalable and performant training: Efficient pipeline parallelism for conditioning Support for Spatio-Temporal DiT (ST-DiT) architecture Customized random seeding mechanism The traditional approach is to communicate conditioning information across pipeline stages, incurring additional communication cost and requiring nontrivial modifications to the pipeline schedule. NeMo solves this problem by computing the conditional embeddings at each pipeline stage. The computation cost with efficient pipeline parallelism for conditioning is much less than the communication cost and improves training throughput. Figure 7. Trading communication for compute in conditioning pipeline parallelism The Spatio-Temporal DiT (ST-DiT) architecture introduces additional spatial and temporal self-attention layers to each transformer block, as an alternative to training with full self attention on long video sequences. This approach exposes communication overhead during context parallelism due to smaller compute over short input sequence for these layers. NeMo addresses this by using local attention computation with A2A communication for spatial/temporal attention, while maintaining P2P ring topology for full self-attention. The hybrid approach effectively reduces bandwidth needs for temporal/spatial attention while still benefiting from context parallelism over full self-attention layer (Table 2). Figure 8. Spatial-temporal DiT transformer block Layer Input Seq Communication primitive Communication bandwidth Temporal self-attention Short seq Local compute & A2A (bhw/cp, t, d) Spatial self-attention Short seq Local compute & A2A (bt/cp, hw, d) Full attention Long seq CP with P2P (b, h*w*t/cp, d) Table 2. NeMo communication strategies for each kind of layer Legend: b =batch size; h*w =spatial size; t =temporal size; cp =context parallel size; d =hidden size , with input size being (b, t*h*w, d). The customized random seeding mechanism goal is to make sure that random seeds are correctly initialized across the following components: Time step Gaussian noise The actual model weights Table 3 shows NeMo’s initialization strategy. RNG seed Data parallel Context parallel Pipeline parallel Tensor parallel Time step (t) Diff Same Same Same Gaussian noise Diff Diff Same Same Weight initialization Same Same Diff Diff Table 3. Customized random seeding for parallelized diffusion transformers Legend: Diff= Different random seed from other parallel ranks; Same =Same random seed as other parallel ranks. Efficient in-framework inference The NeMo framework accelerates inference by distributing denoising operations across multiple GPUs through context parallelism. After parallel denoising, the latent tensors are combined to reconstruct the video sequence before decoding with the Cosmos video tokenizer. Benchmarks show 80–90% scaling efficiency on up to 32 H100 GPUs, with FP8 Multi-Head Attention providing 28% and 48% performance improvements over BF16 on 1 and 32 GPUs respectively. Figure 9. Parallelized video generation with context parallelism Figure 10. 
Inference performance at different GPU count​s Conclusion In this post, we covered all the features of NVIDIA NeMo framework that will help you pretrain or fine-tune video foundation models in an effective and efficient manner. NeMo Curator offers high-throughput data curation through clipping and sharding pipelines, and the Megatron Energon library offers efficient multimodal data loading. NeMo Frameworks enables scalable video foundation model training by supporting various model parallelism techniques specially optimized on diffusion and autoregressive models. In addition, it provides efficient in-framework inference by distributing denoising operations across multiple GPUs and incorporating FP8 Multi-Head Attention. You can curate your video data with NeMo Curator early access program , tokenize them, pre-train( diffusion , autoregressive ), fine-tune ( diffusion , autoregressive ), and perform multi-GPU in-framework inference ( diffusion , autoregressive )with NeMo Framework  today. You can also try the NVIDIA Cosmos world foundation models at build.nvidia.com and watch the CES keynote from NVIDIA CEO Jensen Huang to learn more about the NVIDIA Cosmos world foundation model platform . Acknowledgements Thanks to the following contributors: Parth Mannan, Xiaowei Ren, Zhuoyao Wang, Carl Wang, Jack Chang, Sahil Jain, Shanmugam Ramasamy, Joseph Jennings, Ekaterina Sirazitdinova, Oleg Sudakov, Linnan Wang, Mingyuan Ma, Bobby Chen, Forrest Lin, Hao Wang, Vasanth Rao Naik Sabavat, Sriharsha Niverty, Rong Ou, Pallab Bhattacharya, David Page, Jacob Huffman, Tommy Huang, Nima Tajbakhsh, and Ashwath Aithal.
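As a closing illustration of the sequence-packing idea discussed earlier in this post, the sketch below greedily packs variable-length samples into sequences of at most max_len tokens and reports the padding compute saved relative to padding every sample individually. It is a simplified, generic illustration, not the Megatron-Energon implementation.

```python
def pack_sequences(lengths, max_len):
    # Greedy first-fit packing: group variable-length samples into bins holding
    # at most max_len tokens each, to reduce padding waste (simplified sketch).
    bins = []  # each bin is a list of sample lengths
    for n in sorted(lengths, reverse=True):
        for b in bins:
            if sum(b) + n <= max_len:
                b.append(n)
                break
        else:
            bins.append([n])
    naive_tokens = len(lengths) * max_len   # one padded sequence per sample
    packed_tokens = len(bins) * max_len     # fewer, densely packed sequences
    return bins, 1.0 - packed_tokens / naive_tokens

bins, saved = pack_sequences([512, 3072, 1024, 2048, 256, 4096], max_len=4096)
print(bins)            # [[4096], [3072, 1024], [2048, 512, 256]]
print(f"{saved:.0%}")  # 50% fewer padded tokens than one sample per sequence
```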
https://developer.nvidia.com/zh-cn/blog/accelerate-custom-video-foundation-model-pipelines-with-new-nvidia-nemo-framework-capabilities/
通过 NVIDIA NeMo 框架的最新功能全链路加速视频基础模型的训练和微调
生成式 AI 已经从基于文本的模型发展到多模态模型,最近还扩展到了视频领域,为各行各业开辟了新的潜在用途。视频模型可以为用户创造新的体验,以及大规模模拟场景来训练自主代理。它们正在帮助包括机器人技术、自动驾驶汽车和娱乐业在内的多个行业实现变革。 由于视频数据的庞大和多样性,视频 基础模型 的开发面临着独特的挑战。这也凸显了可扩展的数据管理流程和训练框架的必要性,以此来理解时间和空间动态的模型。 我们发布了 NVIDIA NeMo 框架 中全新的视频基础模型功能,这是一个端到端的训练框架,用于预训练和微调您的专属视频基础模型。该框架包括高吞吐量的数据管理、高效的多模态数据加载功能、可扩展的模型训练,以及并行化的框架内推理。 高吞吐量视频数据处理链路 NVIDIA NeMo Curator 通过高效处理和准备高质量的数据(包括大型视频数据集),来提高生成式 AI 模型的准确性。 使用 NeMo Curator 的可扩展数据流程,您可以高效地剪辑、标注和过滤 100 PB 或更多的视频数据。为了消除瓶颈并优化性能,NeMo Curator 使用以下组合: NVDEC:硬件解码器 NVENC:硬件编码器 Ray:用于扩展 AI 应用的计算框架 根据实验测试数据(仅做技术讨论和参考)显示,NeMo Curator 的自动负载平衡技术可以利用具有多种 GPU 类型的异构集群,以充分利用 L40S GPU 上的 NVENC 以及 H100 和 GB200 GPU 的性能。 图1 展示了 NeMo Curator 如何处理 2,000 万小时的视频数据,将处理时间从数年缩短到数天,使用 1,000 个 GPU 相比于 CPU 上未优化的流程,在相同功耗下实现了 89 倍的加速。 图1. NeMo Curator 将视频数据处理速度提升 89 倍 (图片展示的性能数据来自 NVIDIA 的实验测试,仅供技术讨论和参考) NeMo Curator 为视频基础模型的训练和微调数据集提供以下相关的功能: 剪辑 (Clipping) 分片 (Sharding) 剪辑链路从解码和将原始视频分割为短而连续的片段开始,通过分析帧与帧之间的颜色变化来完成分割。在拼接阶段,利用图像嵌入(embedding)的相似性来平滑衔接片段,将相邻片段合并在一起。随后,这些片段会被转码为高质量的视频编码格式(H264),并通过视频嵌入(embedding)和字幕进行标注,这些字幕可以是现有的,也可以是由视觉语言模型 (VLM) 合成生成的,从而增强语义搜索功能。 图 2. 视频整理中的剪辑与分片链路 分片通过为字幕生成文本嵌入(embedding)来创建用于训练的最终 WebDataset。NeMo Curator 还利用 Ray 流处理技术构建了一个自动平衡系统,并为链路中的每个阶段部署了最佳数量的工作线程,以避免任何阶段成为瓶颈(如图 3 所示)。 图 3. 用于匹配整体链路的吞吐量的自动平衡系统 高效的多模态数据加载 视频模型可以在数十亿张图像和数百万个视频上进行训练,这需要一种高效的数据加载策略,从而在训练期间实现高吞吐量。 在 NeMo 框架中,这通过使用 NVIDIA Megatron -Energon 数据加载器 实现: 将大规模数据分片 :采用 WebDataset 格式,将 TB 级数据集分片为压缩文件,以减少训练期间的 I/O 开销。 保存与加载一致性 :在训练任务中断时,确保数据集在一次遍历中不重复访问,从而在不同的训练集群配置中保持一致性。 序列打包 :将长度或分辨率可变的图像和视频打包在一起,直至达到最大序列长度,减少因填充而导致的计算资源浪费,同时简化数据加载逻辑。NeMo 使用 Transformer Engine 中的特殊 THD 注意力算子核(attention kernel),通过结合序列打包来加速训练。 图 4. 通过序列打包加速图像-视频混合训练 减少网络带宽压力: 每个模型并行等级(rank)只下载数据集的一个不同子集,而不是整个数据集,然后所有等级通过全聚合(all-gather)在各卡之间共享数据,从而确保每张卡获得一致的数据加载。 图 5. 减少网络带宽压力以提高训练吞吐量 提高视频基础模型的训练拓展性 视频基础模型可以是自回归模型或扩散模型。 NeMo 框架里面现有的 大语言模型 (LLMs)功能可以复用于上述自回归模型,同时 NeMo 新增了对扩散 Transformer 的支持,如 DiT、MovieGen,以及最新的 NVIDIA Cosmos 世界基础模型(用于物理 AI)。 NeMo 技术栈经过高度优化,在最新基准测试中提供了超过 40% 的 模型浮点运算 利用率 (Model FLOPs Utilization,MFU) (表 1)。 Model size Context length Training config GPU used (TFLOPS/s) Throughput (token/s/GPU) DiT 7B 8k baseline, no optimization OOM DiT 7B 8k CP=2 457 8,969 DiT 7B 74k TP=4 SP CP=4 414 2,933 DiT 28B 8k TP=2 SP PP=2 435 2,392 DiT 28B 74k TP=8 SP CP=4 PP=4 411 994 表 1. NVIDIA NeMo 框架在扩散 Transformer(DiT)上的 GPU 利用率和吞吐量基准测试 标注: CP =context parallelism; TP =tensor parallelism; SP =sequence parallelism; PP =pipeline parallelism 视频扩散模型链路概述 视频扩散训练链路通常由以下几个主要步骤组成: 使用因果时序 3D 分词器对输入的图像和视频进行分词,生成 3D 时空分词。 使用由扩散噪声调度@时间步 t 和文本输入作为条件的 Transformer 解码器。 时间步条件调节通过 Adaptive LayerNormalization(AdaLN)机制进行应用,并提供使用 AdaLN-LoRA 的选项,后者在训练过程中进一步提高模型浮点运算利用率(Model FLOPs Utilization,MFU)。 文本条件调节通过每个 Transformer 块中的交叉注意力层呈现。 NeMo 框架使您能够基于标准的的 DiT 架构或 MovieGen Llama 架构来初始化您的 Transformer 解码器,后者使用分组查询注意力机制(GQA, Grouped-Query Attention)。 使用扩散 Transformer 的噪声预测,通过并行化的 EDM 扩散链路计算扩散损失。 NeMo 还在每个注意力块之前,对 query 和 key 向量使用额外的均方根层归一化(Root Mean Square Layer Normalization,RMSNorm)以稳定扩散训练。RMSNorm 在每个注意力头上应用,以保持与张量并行性的兼容性。 图 6. NeMo 视频扩散模型训练链路 视频扩散模型的并行优化 NeMo 和 NVIDIA Megatron-Core 支持多种模型并行技术: 张量并行 (Tensor Parallel, TP) 序列并行 (Sequence Parallel, SP) 流水线并行 (Pipeline Parallel, PP) 上下文并行 (Context Parallel, CP) 然而,这些技术在应用于视频扩散 Transformer 时面临独特的挑战。以下是 NeMo 如何解决这些挑战,实现可扩展和高性能训练的方式: 用于条件调节的高效流水线并行 支持时空 DiT (ST-DiT) 架构 定制的随机种子机制 传统方法是通过流水线各阶段之间传递条件信息,这会产生额外的通信开销,并且需要对流水线调度进行较大修改。NeMo 通过在每个流水线阶段即时计算条件嵌入(embedding)来解决这个问题。与通信成本相比,条件调节的高效流水线并行计算成本要低得多,从而提高了训练吞吐量。 图 7. 
在条件调节流水线并行中,以通信换取计算资源的优化策略 时空 DiT (ST-DiT) 架构在每个 Transformer 块中引入了额外的空间和时间自注意力层,以替代在长视频序列上进行全自注意力训练。因为这些层在输入序列比较短,所以进行计算时计算量较小,以至于暴露出在上下文并行中的通信开销。NeMo 通过使用局部注意力计算和 A2A 通信来处理空间/时间注意力层里的短序列输入,同时保持 P2P 环形拓扑结构用于全自注意力里的长序列输入。该混合方法有效地减少了时间/空间注意力的带宽需求,同时仍然从上下文并行中获益,避免了全自注意力层的计算开销(表 2)。 图 8. 时空 DiT Transformer 架构 Layer Input Seq Communication primitive Communication bandwidth Temporal self-attention Short seq Local compute & A2A (bhw/cp, t, d) Spatial self-attention Short seq Local compute & A2A (bt/cp, hw, d) Full attention Long seq CP with P2P (b, h*w*t/cp, d) 表 2. NeMo 针对每种层的通信策略 标识 : b =batch size; h*w =spatial size; t =temporal size; cp =context parallel size; d =hidden size , with input size being (b, t*h*w, d). 定制的随机种子机制的目标是确保以下组件中的随机种子得以正确初始化: 时间步 高斯噪声 实际模型权重 表 3 显示了 NeMo 框架里面对于随机种子的初始化策略。 RNG seed Data parallel Context parallel Pipeline parallel Tensor parallel Time step (t) Diff Same Same Same Gaussian noise Diff Diff Same Same Weight initialization Same Same Diff Diff 表 3. 针对并行化扩散 Transformer 架构的定制随机种子设置 标识: Diff= Different random seed from other parallel ranks; Same =Same random seed as other parallel ranks. ​​ 框架内高效推理 推理的时候,NeMo 框架通过上下文并行,将去噪操作分布到多个 GPU 上,从而加速推理。在并行去噪后,潜在张量被合并以重建视频序列,然后使用 Cosmos 视频分词器进行解码。 基准测试显示,在最多 32 个 NVIDIA Hopper 架构 GPU 上,NeMo 的扩展效率达到 80-90%。在 1 个和 32 个 GPU 上,FP8 多头注意力相比 BF16 提供了分别 28% 和 48% 的性能提升。 图 9. 使用上下文并行化的视频生成 图 10. 在不同 GPU 数量下的推理性能 结论 在本文中,我们介绍了 NVIDIA NeMo 框架,帮助您以高效的方式进行视频基础模型的预训练或微调。 NeMo Curator 通过剪辑和分片链路提供高吞吐量的数据整理,Megatron Energon 库则提供高效的多模态数据加载。NeMo 框架通过支持多种针对扩散模型和自回归模型特别优化的模型并行技术,实现了可扩展的视频基础模型训练。此外,它通过将去噪操作分布到多个 GPU 上,并结合 FP8 多头注意力,实现了高效的框架内推理。 您可以通过 NeMo Curator 的早期 试用 计划 处理您的视频数据,对其进行 分词 ,进行预训练( 扩散 、 自回归 )、微调( 扩散 、 自回归 ),并在 NeMo 框架下进行多 GPU 的框架内推理( 扩散 、 自回归 )。 您还可以访问 NVIDIA 官网 尝试 NVIDIA Cosmos 世界基础模型,并观看 NVIDIA CEO 黄仁勋在 CES 上的主题演讲 ,了解更多关于 NVIDIA Cosmos 世界基础模型平台的信息。 鸣谢 感谢以下贡献者: Parth Mannan, Xiaowei Ren, Zhuoyao Wang, Carl Wang, Jack Chang, Sahil Jain, Shanmugam Ramasamy, Joseph Jennings, Ekaterina Sirazitdinova, Oleg Sudakov, Linnan Wang, Mingyuan Ma, Bobby Chen, Forrest Lin, Hao Wang, Vasanth Rao Naik Sabavat, Sriharsha Niverty, Rong Ou, Pallab Bhattacharya, David Page, Jacob Huffman, Tommy Huang, Nima Tajbakhsh, and Ashwath Aithal.
https://developer.nvidia.com/blog/nvidia-cudss-advances-solver-technologies-for-engineering-and-scientific-computing/
NVIDIA cuDSS Advances Solver Technologies for Engineering and Scientific Computing
NVIDIA cuDSS is a first-generation sparse direct solver library designed to accelerate engineering and scientific computing. cuDSS is increasingly adopted in data centers and other environments and supports single-GPU, multi-GPU and multi-node (MGMN) configurations. cuDSS has become a key tool for accelerating computer-aided engineering (CAE) workflows and scientific computations across multiple domains such as structural engineering, fluid dynamics, electromagnetics, circuit simulation, optimization, and AI-assisted engineering problems. This post highlights some of the key performance and usability features delivered in cuDSS v0.4.0 and cuDSS v0.5.0 , as summarized in Table 1. cuDSS v0.4.0 achieves a significant performance boost for factorization and solve steps, while also introducing several new features, including the memory prediction API, automatic hybrid memory selection, and variable batch support. cuDSS v0.5.0 adds host execution mode , which is particularly beneficial for smaller matrices, and demonstrated substantial performance improvements using hybrid memory mode and host multithreading for analysis phase, an area that is typically challenging to parallelize effectively. cuDSS v0.4.0 release cuDSS v0.5.0 release PIP wheel and Conda support Factorization and solve performance improvements (up to 10x) for single and multi-GPU when factors have dense parts Memory prediction API Automatic normal/hybrid memory mode selection Variable (non-uniform) batch support (variable N, NNZ, NRHS, LD) Host execution mode (parts of computations on the host) for smaller matrices Host multithreading (currently only for the reordering) with user-defined threading backend New pivoting approach (static pivoting with scaling) Improved performance and memory requirements for hybrid memory mode Table 1. cuDSS features in releases v0.4.0 and v0.5.0 Feature highlights This section focuses on notable usability enhancements and performance improvements. Memory prediction API The memory prediction API is important for users who need to know the precise amount of device and host memory required by cuDSS before reaching the most memory-intensive phase (numerical factorization). It is especially useful in scenarios where device memory may be insufficient—either when solving large linear systems or when the application has a limited memory budget for cuDSS. In either case, it is recommended to enable hybrid memory mode before the analysis phase. Note that if hybrid memory mode is enabled but everything fits within the available device memory (whether based on the user-defined limit or GPU capacity), cuDSS will automatically detect this and switch to the faster default memory mode. A typical call sequence for solving a linear system with cuDSS is as follows: Analysis (reordering and symbolic factorization) Numerical factorization (where the values of the factors are allocated and computed) Solving With the introduction of memory prediction, users can now query the amount of device and host memory required for the chosen mode (either default or hybrid memory) after the analysis phase, as well as the minimum memory required for hybrid memory mode. As the sample below demonstrates, the query is a single call of cudssDataGet with CUDSS_DATA_MEMORY_ESTIMATES that writes an output in a small fixed-size array. 
/* * After cudssExecute(..., CUDSS_PHASE_ANALYSIS, ...) */ int64_t memory_estimates[16] = {0}; cudssDataGet(cudssHandle, solverData, CUDSS_DATA_MEMORY_ESTIMATES, &memory_estimates, sizeof(memory_estimates)); /* memory_estimates[0] - permanent device memory * memory_estimates[1] - peak device memory * memory_estimates[2] - permanent host memory * memory_estimates[3] - peak host memory * memory_estimates[4] - minimum device memory for the hybrid memory mode * memory_estimates[5] - maximum host memory for the hybrid memory mode * memory_estimates[6,...,15] - reserved for future use */ To see the full sample code that makes use of this feature, visit the NVIDIA/CUDALibrarySamples GitHub repo. Non-uniform batch API In scenarios where the application requires solving multiple linear systems, and each system individually is not large enough to fully saturate the GPU, performance can be enhanced through batching. There are two types of batching: uniform and non-uniform. Unlike uniform batches, non-uniform batches do not impose restrictions on the dimensions or sparsity patterns of the matrices. cuDSS v0.4.0 introduces support for non-uniform batches. The opaque cudssMatrix_t objects can represent either a single matrix or a batch of matrices, so the only part that needs to change is how the matrix objects are created and modified. To create batches of dense or sparse matrices, v0.4.0 introduced the new APIs cudssMatrixCreateBatchDn and cudssMatrixCreateBatchCsr. For modifying the matrix data, it similarly added cudssMatrixSetBatchValues and cudssMatrixSetBatchCsrPointers, as well as cudssMatrixGetBatchDn and cudssMatrixGetBatchCsr. cuDSS v0.5.0 modifies cudssMatrixFormat_t, which can now be queried using cudssMatrixGetFormat to determine whether a cudssMatrix_t object is a single matrix or a batch. Once the batches of matrices are created, they can be passed to the main calls of cudssExecute in the exact same way as if they were single matrices. The sample below demonstrates the use of the new batch APIs to create batches of dense matrices for the solution and right-hand sides, and a batch of sparse matrices for As. /* * For the batch API, scalar arguments like nrows, ncols, etc. * must be arrays of size batchCount of the specified integer type */ cudssMatrix_t b, x; cudssMatrixCreateBatchDn(&b, batchCount, ncols, nrhs, ldb, batch_b_values, CUDA_R_32I, CUDA_R_64F, CUDSS_LAYOUT_COL_MAJOR); cudssMatrixCreateBatchDn(&x, batchCount, nrows, nrhs, ldx, batch_x_values, CUDA_R_32I, CUDA_R_64F, CUDSS_LAYOUT_COL_MAJOR); cudssMatrix_t A; cudssMatrixCreateBatchCsr(&A, batchCount, nrows, ncols, nnz, batch_csr_offsets, NULL, batch_csr_columns, batch_csr_values, CUDA_R_32I, CUDA_R_64F, mtype, mview, base); /* * The rest of the workflow remains the same, incl. calls to cudssExecute() with batch matrices A, b and x */ To see the full sample code that makes use of this feature, visit the NVIDIA/CUDALibrarySamples GitHub repo. Host multithreading API Although most of the compute- and memory-intensive parts of cuDSS are executed on the GPU, some important tasks are still executed on the host. Prior to v0.5.0, cuDSS did not support multithreading (MT) on the host, and host execution was always single-threaded. The new release introduces support for arbitrary user-defined threading runtimes (such as pthreads, OpenMP, and thread pools), offering flexibility similar to how support was introduced for user-defined communication backends in the MGMN mode in cuDSS v0.3.0.
Among the tasks executed on the host, reordering (a critical part of the analysis phase) often stands out, as it can take a significant portion of the total execution time (analysis plus factorization plus solve). To address this common bottleneck in direct sparse solvers, cuDSS v0.5.0 introduces both general MT support on the host and a multithreaded version of reordering. Note that this is available only for the CUDSS_ALG_DEFAULT reordering algorithm. As with the MGMN mode, the new MT mode is optional and does not introduce any new dependencies to the user application if not used. Enabling this feature in your application is simple—just set the name of the shim threading layer library using cudssSetThreadingLayer and (optionally) specify the maximum number of threads that cuDSS is allowed to use, as shown in the following sample: /* * Before cudssExecute(CUDSS_PHASE_ANALYSIS) * thrLibFileName - filename to the cuDSS threading layer library * If NULL then export CUDSS_THREADING_LIB = 'filename' */ cudssSetThreadingLayer(cudssHandle, thrLibFileName); /* * (optional) Set the number of threads to be used by cuDSS */ int32_t nthr = ...; cudssConfigSet(solverConfig, CUDSS_CONFIG_HOST_NTHREADS, &nthr, sizeof(nthr)); To see the full sample code that makes use of this feature, visit the NVIDIA/CUDALibrarySamples GitHub repo. Host execution While the primary objective of cuDSS is to enable GPU acceleration for sparse direct solver functionality, for tiny and small matrices (which typically don't have enough parallelism to saturate a GPU) extensive use of the GPU can bring non-negligible overhead. This can sometimes even dominate the total runtime. To make cuDSS a more universal solution, v0.5.0 introduces the host execution mode, which enables the factorization and solve phases to run on the host. When enabled, cuDSS uses a heuristic size-based dispatch to determine whether to perform part of the computations (during the factorization and solve phases) on the host or on the device. Additionally, when hybrid execution mode is enabled, users can pass host buffers for the matrix data, which avoids needless memory transfers from the host to the device. Host execution mode doesn't give cuDSS the capabilities of a fully fledged CPU solver, but it helps to optionally remove unwanted memory transfers and improve performance for small matrices. The following sample demonstrates how to turn on hybrid execution mode. /* * Before cudssExecute(CUDSS_PHASE_ANALYSIS) */ int hybrid_execute_mode = 1; cudssConfigSet(solverConfig, CUDSS_CONFIG_HYBRID_EXECUTE_MODE, &hybrid_execute_mode, sizeof(hybrid_execute_mode)); To see the full sample code that makes use of this feature, visit the NVIDIA/CUDALibrarySamples GitHub repo. Performance improvements of cuDSS v0.4.0 and v0.5.0 cuDSS v0.4.0 and v0.5.0 introduced significant performance improvements across several types of workloads. In v0.4.0, the factorization and solve steps are accelerated by detecting when parts of the triangular factors become dense and leveraging more efficient dense BLAS kernels for those parts. The speedup achieved through this optimization depends largely on the symbolic structure of the factors, which in turn is influenced by the original matrix and the reordering permutation. Figure 1 illustrates the performance improvement of v0.4.0 over v0.3.0, based on a large collection of matrices from the SuiteSparse Matrix Collection, analyzed on the NVIDIA H100 GPU. Figure 1.
Performance improvement for the factorization and solve phases of cuDSS v0.4.0 over v0.3.0 for a variety of matrices from the SuiteSparse Matrix Collection As shown in the chart, both the factorization and solve phases saw substantial improvements, with geometric mean speedups of 1.74 and 2.0, respectively. Some matrices with relatively sparse triangular factors did not show significant speedups. However, matrices like Serena, conf5_4_8x8_20, and atmosmodd (which come from various types of HPC applications) experienced speedups of more than 8x in the factorization phase and more than 6x in the solve phase. The analysis phase also saw a significant speedup, thanks to the multithreaded reordering introduced in cuDSS v0.5.0. Figure 2 compares the performance of the analysis phase between v0.5.0 and v0.4.0, using the same set of matrices from the SuiteSparse Matrix Collection. The performance improvement arises from the fact that v0.4.0 used a single-threaded reordering implementation, while v0.5.0 leverages multiple CPU threads (cores) on the host. While state-of-the-art reordering algorithms are notoriously difficult to parallelize efficiently, cuDSS v0.5.0 makes good use of multiple CPU cores, resulting in a solid geometric mean speedup of 1.98, with the maximum improvement reaching 4.82. Note that the analysis phase includes both the (optionally multithreaded) reordering and the symbolic factorization, which is performed on the GPU. Therefore, the actual speedup for the reordering part is likely even higher than what the chart indicates. Figure 2. Performance improvement (analysis phase only) of cuDSS v0.5.0 over v0.4.0 for a variety of matrices from the SuiteSparse Matrix Collection using the host multithreading feature released in v0.5.0 cuDSS v0.5.0 further optimizes the performance of the hybrid memory mode, which was first introduced in v0.3.0. This feature allows part of the internal arrays used within cuDSS to reside on the host, enabling the solution of systems that don't fit into the memory of a single GPU. It works particularly well on NVIDIA Grace-based systems, thanks to the significantly higher memory bandwidth between the CPU and GPU. Figure 3 presents the performance speedup for the factorization and solve phases with cuDSS v0.5.0, comparing an NVIDIA Grace Hopper system (Grace CPU plus NVIDIA H100 GPU) against an x86 system (Intel Xeon Platinum 8480CL, 2S) plus NVIDIA H100 GPU, using a set of large matrices. Figure 3. Performance improvement of cuDSS v0.5.0 with hybrid memory mode for a variety of matrices As previously mentioned, v0.5.0 introduces the hybrid execution mode, which improves the performance of cuDSS for small matrices. Figure 4 shows the speedup of the hybrid execution mode against the CPU solver (Intel MKL PARDISO) for the factorization and solve phases. Figure 4. Performance improvement of cuDSS v0.5.0 with hybrid execution (enabling host execution) against the CPU solver for a variety of matrices Finally, Figure 5 shows the speedup of the new hybrid execution mode (cuDSS v0.5.0) compared to the default mode (cuDSS v0.4.0) for the factorization and solve phases on a set of small matrices. While the speedup of the factorization phase is significant only for really small matrices, the solve phase delivers speedups for systems with up to 30K equations. This can be explained by the fact that the solve phase has less work than the factorization phase and cannot make good use of the GPU for the tested matrices. Figure 5.
Performance improvement of cuDSS v0.5.0 with hybrid execution for a variety of matrices Summary The NVIDIA cuDSS v0.4.0 and v0.5.0 releases provide several new enhancements that significantly improve performance. Highlights include general speedups in the factorization and solve phases, hybrid memory and execution modes, host multithreading, and support for non-uniform batches. In addition to our continued investment in performance, we will consistently enhance our APIs to expand functionality, providing users with greater flexibility and fine-grained control. Ready to get started? Download NVIDIA cuDSS v0.5.0. To learn more, check out the cuDSS v0.5.0 release notes and the following previous posts: NVIDIA cuDSS Library Removes Barriers to Optimizing the US Power Grid Spotlight: Honeywell Accelerates Industrial Process Simulation with NVIDIA cuDSS Join the conversation and provide feedback in the NVIDIA Developer Forum.
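To tie the features above together, here is a compact sketch of one possible end-to-end flow: enable hybrid execution before the analysis phase, query the memory estimates after analysis and sanity-check them against the free device memory, then run the factorization and solve phases. It assumes cudssHandle, solverConfig, solverData, and the A, x, b matrix objects were created as in the earlier samples, that <stdio.h> and <cuda_runtime.h> are included, and that the cudssExecute argument order matches the public cuDSS samples (verify against the headers of the cuDSS version you use); the feasibility check itself is an illustrative policy, not something cuDSS prescribes.

/* Hybrid execution must be configured before the analysis phase. */
int hybrid_execute_mode = 1;
cudssConfigSet(solverConfig, CUDSS_CONFIG_HYBRID_EXECUTE_MODE,
               &hybrid_execute_mode, sizeof(hybrid_execute_mode));

/* Analysis: reordering and symbolic factorization. */
cudssExecute(cudssHandle, CUDSS_PHASE_ANALYSIS, solverConfig, solverData, A, x, b);

/* Query the memory estimates and compare them against free device memory. */
int64_t memory_estimates[16] = {0};
cudssDataGet(cudssHandle, solverData, CUDSS_DATA_MEMORY_ESTIMATES,
             &memory_estimates, sizeof(memory_estimates));
size_t free_bytes = 0, total_bytes = 0;
cudaMemGetInfo(&free_bytes, &total_bytes);
if ((size_t)memory_estimates[1] > free_bytes) /* peak device memory estimate */
    fprintf(stderr, "Warning: factorization may not fit in device memory\n");

/* Numerical factorization and solve. */
cudssExecute(cudssHandle, CUDSS_PHASE_FACTORIZATION, solverConfig, solverData, A, x, b);
cudssExecute(cudssHandle, CUDSS_PHASE_SOLVE, solverConfig, solverData, A, x, b);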
https://developer.nvidia.com/zh-cn/blog/nvidia-cudss-advances-solver-technologies-for-engineering-and-scientific-computing/
NVIDIA cuDSS 提高工程和科学计算中的求解器技术
NVIDIA cuDSS 是第一代稀疏直接求解器库,旨在加速工程和科学计算。cuDSS 正越来越多地应用于数据中心和其他环境,并支持单 GPU、多 GPU 和多节点(MGMN)配置。 cuDSS 已成为加速多个领域 (例如结构工程、流体动力学、电磁学、电路模拟、优化和 AI 辅助工程问题) 的计算机辅助工程 (CAE) 工作流程和科学计算的关键工具。 本文重点介绍了 cuDSS v0.4.0 和 cuDSS v0.5.0 中提供的一些关键性能和可用性功能 (如表 1 中总结的内容),cuDSS v0.4.0 可显著提升分解和求解步骤的性能,同时还引入了一些新功能,包括内存预测 API、自动混合内存选择和可变批量支持。cuDSS v0.5.0 增加了主机执行模式,这对较小的矩阵特别有利,并且在分析阶段使用混合内存模式和主机多线程实现了显著的性能提升,而有效并行化通常是一个具有挑战性的领域。 cuDSS v0.4.0 版本 cuDSS v0.5.0 版本 PIP wheel 和 Conda 支持在因子具有密集部分时,对单 GPU 和多 GPU 进行分解并解决性能提升 (高达 10 倍) 内存预测 API 自动选择正常/混合内存模式可变 (非均匀) 批量支持 (变量 N、NNZ、NRHS、LD) 较小矩阵的主机执行模式 (部分主机计算模式) 带有用户定义线程后端的主机多线程 (目前仅用于重新排序) 新的透视方法 (通过扩展进行静态透视) 提高了混合内存模式的性能和内存需求 表 1.版本 v0.4.0 和 v0.5.0 中的 cuDSS 功能 功能亮点 本节重点介绍显著的可用性增强和性能改进。 内存预测 API 对于需要在达到显存密集型阶段(数值分解)之前了解 cuDSS 所需设备和主机显存的精确数量的用户而言,内存预测 API 非常重要。 在设备显存不足的情况下,无论是求解大型线性系统,还是应用程序的 cuDSS 显存预算有限时,此功能尤其有用。在这两种情况下,建议在分析阶段之前启用混合显存模式。 请注意,如果混合显存模式已启用,但一切都适合可用设备显存 (无论是基于用户定义的限制还是 GPU 容量),cuDSS 将自动检测到这一点并切换到更快的默认显存模式。 使用 cuDSS 求解线性系统的典型调用序列如下所示: 分析 (重新排序和符号分解) 数值分解(分配并计算因子值) 求解 引入 memory prediction 后,用户现在可以在分析阶段结束后查询所选模式(默认或 hybrid memory)所需的 device 和 host memory 数量,以及 hybrid memory 模式所需的最小 memory。如下示例所示,查询是使用 CUDSS_DATA_MEMORY_ESTIMATES 对 cudssDataGet 进行的一次调用,可将输出写入固定大小的小型数组中。 /* * After cudssExecute(..., CUDSS_PHASE_ANALYSIS, ,,,) */ int64_t memory_estimates[16] = {0}; cudssDataGet(cudssHandle, solverData, CUDSS_DATA_MEMORY_ESTIMATES, &memory_estimates, sizeof(memory_estimates); /* memory_estimates[0] - permanent device memory * memory_estimates[1] - peak device memory * memory_estimates[2] - permanent host memory * memory_estimates[3] - peak host memory * memory_estimates[4] - minimum device memory for the hybrid memory mode * memory_estimates[5] - maximum host memory for the hybrid memory mode * memory_estimates[6,...,15] - reserved for future use */ 要查看使用此功能的 完整示例代码 ,请访问 NVIDIA/CUDALibrarySamples GitHub 存储库。 不均匀批处理 API 在应用程序需要求解多个线性系统,且每个系统单独不够大,无法使 GPU 完全饱和的情况下,可以通过批处理来提高性能。有两种类型的批处理:uniform 和 non-uniform。与 uniform 批量不同,non-uniform 批量不会对矩阵的维度或稀疏模式施加限制。 cuDSS v0.4.0 引入了对不均匀批量的支持。不透明的 cudssMatrix_t 对象可以表示单个矩阵,也可以表示批量矩阵,因此唯一需要更改的部分是创建和修改矩阵对象的方式。 为创建密集或稀疏矩阵的批量,v0.4.0 引入了新的 API cudssMatrixCreateBatchDn 或 cudssMatrixCreateBatchCsr 、添加的类似 API cudssMatrixSetBatchValues 和 cudssMatrixSetBatchCsrPointers 以及 cudssMatrixGetBatchDn 和 cudssMatrixGetBatchCsr 可用于修改矩阵数据。cuDSS v0.5.0 修改 cudssMatrixFormat_t ,现在可以使用 cudssMatrixGetFormat 查询 tg_10,以确定 cudssMatrix_t 对象是单个矩阵还是批量对象。 创建矩阵批量后,它们可以像单个矩阵一样以完全相同的方式传递给 cudssExecute 的主要调用。以下示例演示了如何使用新的批量 API 为解和右侧创建批量密集矩阵,并为 As 创建批量稀疏矩阵。 /* * For the batch API, scalar arguments like nrows, ncols, etc. * must be arrays of size batchCount of the specified integer type */ cudssMatrix_t b, x; cudssMatrixCreateBatchDn(&b, batchCount, ncols, nrhs, ldb, batch_b_values, CUDA_R_32I, CUDA_R_64F, CUDSS_LAYOUT_COL_MAJOR); cudssMatrixCreateBatchDn(&x, batchCount, nrows, nrhs, ldx, batch_x_values, CUDA_R_32I, CUDA_R_64F, CUDSS_LAYOUT_COL_MAJOR); cudssMatrix_t A; cudssMatrixCreateBatchDn(&A, batchCount, nrows, ncols, nnz, batch_csr_offsets, NULL, batch_csr_columns, batch_csr_values, CUDA_R_32I, CUDA_R_64F, mtype, mview, base); /* * The rest of the workflow remains the same, incl. 
calls to cudssExecute() with batch matrices A, b and x */ 要查看使用此功能的 完整示例代码 ,请访问 NVIDIA/CUDALibrarySamples GitHub 存储库。 托管多线程 API 虽然 cuDSS 的大多数计算和内存密集型部分都在 GPU 上执行,但一些重要任务仍然在主机上执行。在 v0.5.0 之前,cuDSS 不支持主机上的多线程 (MT),并且主机执行始终是单线程的。新版本引入了对任意用户定义线程运行时 ( 例如 pthreads、OpenMP 和线程池 ) 的支持,其灵活性与 cuDSS v0.3.0 中在 MGMN 模式 下引入对用户定义通信后端的支持类似。 在主机上执行的任务中,重新排序 (分析阶段的关键部分) 通常非常突出,因为它可能占用总执行时间 (分析加分解加解) 的很大一部分。为解决直接稀疏求解器中的常见瓶颈,cuDSS v0.5.0 在主机上引入了通用 MT 支持和多线程版本的重新排序。请注意,此功能仅适用于 CUDSS_ALG_DEFAULT 重排序算法。 与 MGMN 模式一样,新的 MT 模式是可选的,如果不使用,不会向用户应用引入任何新的依赖项。在应用中启用此功能非常简单 – 只需使用 cudssSetThreadingLayer 设置 shim 线程层库的名称,并 (可选) 指定允许 cuDSS 使用的最大线程数,如下例所示: /* * Before cudssExecute(CUDSS_PHASE_ANALYSIS) * thrLibFileName - filename to the cuDSS threading layer library * If NULL then export CUDSS_THREADING_LIB = ‘filename’ */ cudssSetThreadingLayer(cudssHandle, thrLibFileName); /* * (optional)Set number of threads to be used by cuDSS */ int32_t nthr = ...; cudssConfigSet(cudssHandle, solverConfig, CUDSS_CONFIG_HOST_NTHREADS, &nthr, sizeof(nthr); 要查看使用此功能的 完整示例代码 ,请访问 NVIDIA/CUDALibrarySamples GitHub 存储库。 主机执行 虽然 cuDSS 的主要目标是为稀疏直接求解器功能实现 GPU 加速,但对于微小的矩阵 (通常没有足够的并行度使 GPU 饱和),广泛使用 GPU 会带来不可忽略的开销。有时,这甚至会主宰总运行时间。 为使 cuDSS 成为更通用的解决方案,v0.5.0 引入了 主机执行模式 ,该模式可在主机上启用分解和求解阶段。启用 cuDSS 后,cuDSS 将使用基于大小的启发式分配来确定是否在主机或设备上执行部分计算 (在分解和求解阶段)。 此外,启用混合执行模式后,用户可以为矩阵数据传递主机缓冲区,从而节省从主机到设备的不必要的内存传输。主机执行模式无法提供成熟 CPU 求解器的 cuDSS 功能,但有助于选择性地消除不需要的内存传输,并提高小型矩阵的性能。 以下示例演示了如何开启混合执行模式。 /* * Before cudssExecute(CUDSS_PHASE_ANALYSIS) */ int hybrid_execute_mode = 1; cudssConfigSet(solverConfig, CUDSS_CONFIG_HYBRID_EXECUTE_MODE, &hybrid_execute_mode, sizeof(hybrid_execute_mode); 要查看使用此功能的 完整示例代码 ,请访问 NVIDIA/CUDALibrarySamples GitHub repo。 cuDSS v0.4.0 和 v0.5.0 的性能提升 cuDSS v0.4.0 和 v0.5.0 为多种类型的工作负载引入了显著的性能提升。 在 v0.4.0 中,通过检测三角形因子的某些部分何时变得密集,并为这些部分利用更高效的密集 BLAS 内核,可加速分解和求解步骤。通过此优化实现的加速在很大程度上取决于因子的符号结构,而符号结构又会受到原始矩阵和重新排序排列的影响。 根据 SuiteSparse 矩阵集合中的大量矩阵集合,并在 NVIDIA H100 GPU 上进行分析,图 1 展示了 v0.4.0 相较于 v0.3.0 的性能提升。 图 1. 
cuDSS v0.4.0 相对于 v0.3.0 对 SuiteSparse Matrix Collection 中各种矩阵的分解和求解阶段的性能有所提升 如图所示,分解阶段和求解阶段均有显著改进,几何均值分别为 1.74 和 2.0、一些三角形因子相对稀疏的矩阵并未显示出显著的加速。但是, Serena 、 conf5_4_8x8_20 和 atmosmodd (来自各种类型的 HPC 应用) 等矩阵在分解阶段的速度提升超过 8 倍,在求解阶段的速度提升超过 6 倍。 得益于 cuDSS v0.5.0 中引入的多线程重排序,分析阶段的速度也得到了显著提升。图 2 使用来自 SuiteSparse 矩阵集合的同一组矩阵,比较了 v0.5.0 和 v0.4.0 分析阶段的性能。 性能提升的原因是,v0.4.0 使用了单线程重排序实现,而 v0.5.0 则利用主机上的多个 CPU 线程 (cores)。众所周知,先进的重排序算法很难高效并行化,而 cuDSS v0.5.0 可以很好地利用多个 CPU cores,从而实现 1.98 的几何平均加速,最大提升可达 4.82。 请注意,分析阶段包括 (可选多线程) 重排序和符号分解,这在 GPU 上执行。因此,重新排序部分的实际加速可能甚至高于图表所示。 图 2、对于使用 v0.5.0 中发布的主机多线程功能的 SuiteSparse 矩阵集合中的各种矩阵,cuDSS v0.5.0 的性能比 v0.4.0 得到提升 (仅限分析阶段) cuDSS v0.5.0 进一步优化了 混合显存模式 (最初在 v0.3.0 中引入)的性能。此功能允许 cuDSS 中使用的部分内部数组驻留在主机上,从而为不适合单个 GPU 内存的系统提供解决方案。由于 CPU 和 GPU 之间的内存带宽显著增加,它在基于 NVIDIA Grace 的系统上的运行效果尤为出色。 图 3 展示了 cuDSS 0.5.0 下分解和求解阶段的性能提升情况,并使用一组大型矩阵比较了 NVIDIA Grace Hopper 系统 (Grace CPU 加 NVIDIA H100 GPU) 与 x86 系统 (Intel Xeon Platinum 8480CL,2S) 加 NVIDIA H100 GPU 的性能提升情况。 图 3、cuDSS v0.5.0 借助混合显存模式,性能得到提升 如前所述,v0.5.0 引入了 混合执行模式 ,可提高 cuDSS 在处理小型矩阵时的性能。图 4 显示了在分解和求解阶段,混合执行模式相对于 CPU 求解器 (Intel MKL PARDISO) 的加速情况。 图 4、cuDSS v0.5.0 通过混合执行 (启用主机执行) 相较于 CPU 求解器在各种矩阵中的性能提升 最后,图 5 显示了与默认模式 (cuDSS v0.4.0) 相比,新的混合执行模式 (cuDSS v0.5.0) 在一组小矩阵上分解和求解阶段的速度提升情况。虽然分解阶段的加速仅适用于非常小的矩阵,但求解阶段可为方程高达 30K 个的系统提供加速。这可以解释为,与分解阶段相比,求解阶段的工作量较少,并且无法充分利用 GPU 来测试矩阵。 图 5、cuDSS v0.5.0 的性能提升,支持各种矩阵的混合执行 总结 NVIDIA cuDSS v0.4.0 和 v0.5.0 版本提供了多项新的增强功能,可显著提高性能。亮点包括分解和求解方面的一般加速、混合内存和执行模式、主机多线程,以及对非均匀批量大小的支持。除了持续投资性能之外,我们还将持续增强 API 以扩展功能,为用户提供更高的灵活性和细粒度控制。 准备好开始使用了吗?下载 NVIDIA cuDSS v0.5.0。 如需了解更多信息,请查看 cuDSS v0.5.0 版本说明 以及之前的以下帖子: NVIDIA cuDSS 库消除了优化美国电网的障碍 聚焦:Honeywell 借助 NVIDIA cuDSS 加速工业流程模拟 在 NVIDIA 开发者论坛 中加入对话并提供反馈。
https://developer.nvidia.com/blog/spotlight-university-of-tokyo-uses-nvidia-grace-hopper-for-groundbreaking-energy-efficient-seismic-research/
Spotlight: University of Tokyo Uses NVIDIA Grace Hopper for Groundbreaking Energy-Efficient Seismic Research
Supercomputers are the engines of groundbreaking discoveries. From predicting extreme weather to advancing disease research and designing safer, more efficient infrastructures, these machines simulate complex systems that are impractical to test in the real world due to their size, cost, and material requirements. Since the introduction of the GPU in 1999, NVIDIA has continually pushed the boundaries of accelerated computing , an approach that uses specialized hardware to dramatically speed up work by bundling frequently occurring tasks in parallel processing. This efficiency enables supercomputers to handle unprecedented computational challenges while consuming less energy per task. Today, NVIDIA-powered systems lead the charge in energy-efficient supercomputing, with eight of the top 10 placements on the Green500 —an industry benchmark for energy-efficient supercomputers. The JEDI system at Germany’s Jülich Supercomputing Center, powered by NVIDIA Grace Hopper , exemplifies this progress, achieving a staggering efficiency of 72.7 gigaflops per watt. Advancing seismic safety with the University of Tokyo This drive for energy efficiency is exemplified in the recent research effort between NVIDIA and the University of Tokyo. Situated in one of the most seismically active regions in the world, Japan requires cutting-edge research to mitigate the impact of earthquakes. The Earthquake Research Institute at the University of Tokyo has used the NVIDIA Grace Hopper Superchip to accelerate its seismic simulations, achieving groundbreaking results . The tightly coupled CPU-GPU architecture of the NVIDIA GH200 Grace Hopper Superchip enables an improvement in simulation performance of 86x, with 32x greater energy efficiency compared to traditional methods. Advanced memory optimization ensures faster and more complex computations, empowering researchers to model earthquake-resistant infrastructure with unmatched precision. By leveraging data-driven predictions and heterogeneous computing systems, the university is helping transform seismic research into a more efficient endeavor. Award-winning research: Heterogeneous computing for seismic modeling These breakthroughs were presented at WACCPD 2024 , where the University of Tokyo showcased how NVIDIA hardware-software synergy enables solutions to dynamic, large-scale problems. In seismic modeling, the innovative architecture of Grace Hopper accelerates time-evolution equation-based simulations by combining high-performance GPUs for intensive calculations with the memory capacity of CPUs for predictive algorithms. Beyond seismic research, this approach has the potential to reshape energy efficiency in high-performance computing (HPC) . This groundbreaking work developed a new CPU-GPU heterogeneous computing method for repeatedly solving time-evolution partial differential equation (PDE) problems with guaranteed accuracy. This new method achieves a very short time-to-solution (TTS) with low energy-to-solution by leveraging both the large memory capacity of the CPU and the high computing performance of the GPU. When scaled up to the Alps supercomputer , the method was 51.6x faster than using only the CPU and 6.98x faster than the GPU, while achieving an impressive 94.3% efficiency across 1,920 compute nodes. Figure 1. Performance comparison of the newly-developed PDE solver. 
Using both CPU and GPU in the NVIDIA GH200 Grace Hopper Superchip realizes maximum performance The simulations involve stepping forward in time, where the solution (the seismic activity, for example) at each point in time is calculated by solving a system of equations. In turn, at each timestep multiple iterations are required, where the solution converges to a suitably accurate result. The calculations at each iteration are very intense and require the computational power of GPUs. Therefore, there are two aspects that determine the overall TTS (and corresponding energy to solution): how fast each iteration can be calculated at each timestep, and how many iterations are required at each timestep. It is this combination that enables Grace Hopper architecture to really excel: the Hopper GPU performs each calculation exceedingly quickly, while the large memory capacity of the Grace CPU stores a history of prior results to inform the predictor, allowing the number of iterations to be vastly reduced. This data-driven method is well suited to the NVIDIA GH200 Grace Hopper Superchip. The number of iterations required to converge depends on how good the initial “guess” of the solution is. The novelty of this work involves data from previous timesteps being used to maximize the accuracy of each initial solution, thus reducing the number of iterations required. This method requires a combination of capabilities: The large memory capacity of the NVIDIA Grace CPU to hold the required previous-timestep data, The computational ability of Hopper to solve the equations at each iteration, The fast interconnect to allow the data-driven results from Grace CPU to optimize the number of solver iterations on the Hopper GPU. Figure 2. The data-driven method solves two problem sets simultaneously by using the high bandwidth CPU-GPU connection on GH200 to communicate predictions from the CPU to the solver on the GPU Figure 3. Data-driven predictor running on the Grace CPU reduces iterations from about 150 to 50 The benefits are shown in Figure 1, where it can be seen that the time-to-solution is reduced by 86x, compared to only using the CPU, or 9x compared to only using the GPU. The corresponding energy reductions are 32x (compared to CPU) and 7x (compared to GPU alone). Enabling energy efficient supercomputing The University of Tokyo’s seismic research breakthroughs highlight the transformative power of energy-efficient supercomputing. By harnessing the NVIDIA Grace Hopper Superchip, researchers are not only accelerating seismic simulations by unprecedented margins but also drastically reducing energy consumption—a crucial step in sustainable computing . This collaboration demonstrates how cutting-edge technology can address urgent global challenges, like earthquake safety, while setting new benchmarks for performance and energy efficiency. As these innovations continue to scale across industries, they pave the way for a future where high-performance computing drives both scientific progress and environmental responsibility. At NVIDIA GTC 2025 , Kohei Fujita, associate professor at the University of Tokyo Earthquake Research Institute, will discuss this breakthrough research in the session, Implement Accelerated PDE-Based Time-History Simulation by Data-Driven Methods on Strongly Coupled CPU-GPU systems [S72925] .
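To make the warm-start idea concrete, the toy sketch below shows the general pattern: extrapolate an initial guess from the two previous timestep solutions before running an iterative solver, so fewer iterations are needed to converge at each step. This is a self-contained, illustrative example (a simple Jacobi iteration on a small linear system), not the University of Tokyo's actual data-driven predictor or PDE solver.

#include <math.h>
#include <stdio.h>

#define N 4 /* toy problem size */

/* Solve A*x = b with Jacobi iterations, starting from the guess already in x.
 * Returns the number of iterations needed to reach the tolerance. */
static int jacobi_solve(const double A[N][N], const double b[N], double x[N],
                        double tol, int max_iter)
{
    double x_new[N];
    for (int it = 0; it < max_iter; ++it) {
        double max_diff = 0.0;
        for (int i = 0; i < N; ++i) {
            double sigma = 0.0;
            for (int j = 0; j < N; ++j)
                if (j != i) sigma += A[i][j] * x[j];
            x_new[i] = (b[i] - sigma) / A[i][i];
            double d = fabs(x_new[i] - x[i]);
            if (d > max_diff) max_diff = d;
        }
        for (int i = 0; i < N; ++i) x[i] = x_new[i];
        if (max_diff < tol) return it + 1;
    }
    return max_iter;
}

int main(void)
{
    /* Diagonally dominant toy system; the right-hand side drifts slowly over "time". */
    const double A[N][N] = {{10,1,0,0},{1,10,1,0},{0,1,10,1},{0,0,1,10}};
    double x_prev2[N] = {0}, x_prev1[N] = {0}, x[N];

    for (int step = 0; step < 5; ++step) {
        double b[N];
        for (int i = 0; i < N; ++i) b[i] = 1.0 + 0.1 * step * (i + 1);

        /* Cold start: zero initial guess. */
        double x_cold[N] = {0, 0, 0, 0};
        int it_cold = jacobi_solve(A, b, x_cold, 1e-10, 1000);

        /* Warm start: linear extrapolation from the two previous solutions,
         * mimicking a predictor that exploits the time history. */
        for (int i = 0; i < N; ++i) x[i] = 2.0 * x_prev1[i] - x_prev2[i];
        int it_warm = jacobi_solve(A, b, x, 1e-10, 1000);

        printf("step %d: cold-start iterations = %d, warm-start iterations = %d\n",
               step, it_cold, it_warm);

        for (int i = 0; i < N; ++i) { x_prev2[i] = x_prev1[i]; x_prev1[i] = x[i]; }
    }
    return 0;
}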
https://developer.nvidia.com/zh-cn/blog/spotlight-university-of-tokyo-uses-nvidia-grace-hopper-for-groundbreaking-energy-efficient-seismic-research/
聚焦:东京大学使用 NVIDIA Grace Hopper 进行开创性的节能地震研究
超级计算机是推动突破性发现的引擎。从预测极端天气到推进疾病研究,以及设计更安全、更高效的基础设施,这些机器可以模拟因规模、成本和材料要求而无法在现实世界中进行测试的复杂系统。 自 1999 年推出 GPU 以来,NVIDIA 一直在不断突破加速 计算的极限 ,这种方法使用专业硬件,通过在并行处理中捆绑频繁出现的任务来大幅加速工作。这种效率使超级计算机能够处理前所未有的计算挑战,同时减少每项任务消耗的能源。 如今,由 NVIDIA 提供支持的系统在高效超级计算领域处于领先地位,在 Green500 榜单前 10 名中有 8 名入选,而 Green500 榜单是高效超级计算机的行业基准测试。德国于利希超级计算中心(Jülich Supercomputing Center)的 JEDI 系统由 NVIDIA Grace Hopper 提供动力支持,充分体现了这一进步,实现了 72.7 gigaflops 每瓦的惊人效率。 与东京大学一起推进地震安全性 NVIDIA 和东京大学最近的研究工作就是这种 提高能效的做法的典范 。日本位于世界上地震最活跃的地区之一,需要进行尖端研究来减轻地震的影响。 东京大学地震研究所 使用 NVIDIA Grace Hopper 超级芯片 加速地震模拟, 取得了突破性成果 。 NVIDIA GH200 Grace Hopper 超级芯片采用紧密合的 CPU-GPU 架构 ,模拟性能提高了 86 倍,能效比传统方法高 32 倍。高级内存优化可确保更快速、更复杂的计算,使研究人员能够以出色的精度对抗震基础设施进行建模。通过利用数据驱动的预测和异构计算系统,该大学正在帮助将地震研究转变为更高效的工作。 获奖研究:用于地震建模的异构计算 在 WACCPD 2024 大会上,东京大学展示了 NVIDIA 硬件与软件的协同作用如何为动态、大规模问题提供解决方案,展示了这些突破性成果。在地震建模中,Grace Hopper 的创新架构将用于密集型计算的高性能 GPU 与用于预测算法的 CPU 内存容量相结合,从而加速基于时间进化方程的模拟。 除了地震研究之外,这种方法还有可能重塑 高性能计算(HPC) 的能效。 这项突破性的工作开发了一种新的 CPU-GPU 异构计算方法,能够以有保证的准确性反复解决时间演进偏微分方程 (PDE) 问题。这种新方法通过利用 CPU 的大内存容量和 GPU 的高计算性能,实现了极短的解决方案时间 (TTS) 和低能耗的解决方案。当扩展到 Alps 超级计算机 时,该方法的速度比仅使用 CPU 快 51.6 倍,比 GPU 快 6.98 倍,同时在 1,920 个计算节点中实现了 94.3% 的惊人效率。 图 1、新开发的 PDE 求解器的性能比较。在 NVIDIA GH200 Grace Hopper 超级芯片中同时使用 CPU 和 GPU 可实现出色性能 模拟涉及逐步推进时间,即通过求解方程组来计算每个时间点的解 (例如地震活动)。反过来,在每个时间步长中,都需要进行多次迭代,使解收到适当准确的结果。每次迭代的计算都非常密集,需要 GPU 的计算能力。 因此,有两个方面决定整体 TTS (以及相应的解能量):在每个时间步长下每次迭代的计算速度,以及每个时间步长下需要进行多少次迭代。正是这种组合使 Grace Hopper 架构真正表现出色:Hopper GPU 执行每次计算的速度极快,而 Grace CPU 的大内存容量则存储先前结果的历史记录以告知预测器,从而大幅减少迭代次数。 这种数据驱动的方法非常适合 NVIDIA GH200 Grace Hopper 超级芯片。所需的迭代次数取决于对解的初始“猜测”的准确性。这项工作的新颖之处在于,之前时间步长的数据可用于更大限度地提高每个初始解决方案的准确性,从而减少所需的迭代次数。此方法需要结合以下功能: NVIDIA Grace CPU 的大内存容量可容纳所需的先前时间步长数据, Hopper 在每次迭代中求解方程的计算能力 快速互连允许 Grace CPU 的数据驱动结果优化 Hopper GPU 上的求解器迭代次数。 图 2、数据驱动方法通过使用 GH200 上的高带宽 CPU-GPU 连接将预测从 CPU 传输到 GPU 上的求解器,同时解决两个问题集 图 3、在 Grace CPU 上运行的数据驱动预测器可将迭代次数从大约 150 次减少到 50 次 图 1 显示了这些优势,与仅使用 CPU 相比,解决时间缩短了 86 倍,与仅使用 GPU 相比,缩短了 9 倍。相应的能耗分别为 32 倍 (与 CPU 相比) 和 7 倍 (仅与 GPU 相比)。 实现高能效超级计算 东京大学的地震研究取得的突破凸显了高效超级计算的变革力量。通过利用 NVIDIA Grace Hopper Superchip,研究人员不仅以前所未有的利润加速地震模拟,而且还大幅降低了能源消耗,这是 可持续计算 的关键一步。 这项合作展示了尖端技术如何应对紧迫的全球性挑战 (例如地震安全性),同时为性能和能效设定新的基准。随着这些创新继续在各行各业中推广,它们为高性能计算推动科学进步和承担环保责任的未来铺平道路。 在 NVIDIA GTC 2025 上,东京大学地震研究所副教授 Kohei Fujita 将在名为“Implement Accelerated PDE-Based Time-History Simulation by Data-Driven Methods on Strongly Coupled CPU-GPU systems”的会议中讨论这项突破性研究[S72925]。
https://developer.nvidia.com/blog/spotlight-brli-and-toulouse-inp-develop-ai-based-flood-models-using-nvidia-modulus/
Spotlight: BRLi and Toulouse INP Develop AI-Based Flood Models Using NVIDIA Modulus
Flooding poses a significant threat to 1.5 billion people, making it the most common cause of major natural disasters. Floods cause up to $25 billion in global economic damage every year. Flood forecasting is a critical tool in disaster preparedness and risk mitigation. Numerical methods have long been developed that provide accurate simulations of river basins. With these, engineers such as those at the consulting firm BRLi study different rainfall scenarios, and provide valuable assessments of flood risks, mitigation strategies, and disaster relief plans to local stakeholders. Accurate predictions can save lives, protect infrastructure, and reduce economic losses. Yet these methods use physics-based numerical simulations that are computationally intensive. This approach often takes many hours to simulate a flooding event, even on many parallel processors. It’s not suitable for real-time forecasting of ongoing events. This limitation has long been a bottleneck in providing rapid, actionable flood warnings tailored to a given event, and has prevented the development of real-time forecasting systems. To address this issue, a joint team at BRLi and National Polytechnic Institute of Toulouse (Toulouse INP) , through a collaboration in the ANITI research institute, has designed an end-to-end AI approach that fully replaces the physics-based solver. This team includes expertise in both hydraulic modeling and AI methods for physics, enabling this interdisciplinary work. Hydrology experts from BRLi provided detailed physics models for a given basin to produce a database of floods. The Toulouse INP team then used these to train an end-to-end AI system to emulate the solver leveraging NVIDIA Modulus from the Earth-2 platform. Earth-2 is a digital twin cloud platform to develop AI-augmented forecasting and visualization pipelines for weather and climate applications. The team used the optimized training pipelines and parallel training capabilities along with the built-in deep learning models provided by Modulus. The resulting solver successfully emulates several hours of flooding ahead in mere seconds on a single GPU, opening the way for real-time forecasting. This is a groundbreaking approach leveraging AI to revolutionize flood forecasting, dramatically reducing computation time. AI-based flood forecasting system The joint team focused on the Têt River basin, in the south of France. Significant engineering efforts have led to detailed unstructured meshes of this region that encompass many important topographic and engineering features like bridges, dikes, and water retention basins. Detailed numerical simulations using the open-source Telemac-Mascaret code were run with a variety of water input profiles to produce a database of flooding events. For training, the INP team used NVIDIA Modulus as part of the Earth-2 platform, which provides a suite of tools for enabling AI powered solutions in the climate and weather domain. The team used one of the built-in model architectures in Modulus for fast and effortless training with their custom data. To explore what AI models can enable on weather and climate forecasting, check out the FourCastNet and CorrDiff NVIDIA NIM microservices in a browser-based experience. The INP team chose one of the graph neural network (GNN) architectures that has shown impressive performance in describing atmospheric dynamics and adapted it to the Têt River basin. 
This approach enables the model to capture complex spatial relationships and temporal dynamics crucial for accurate flood prediction. The AI model was trained to emulate the intricate hydrological processes that lead to flooding, by imitating the data from the detailed numerical simulations. Training was done on up to 14 NVIDIA A100 Tensor Core GPUs in parallel on the CALMIP Turpan supercomputer, achieving a near-linear speedup with a scaling efficiency of up to 91%. Using an autoregressive approach, the model can predict flooded areas and flow rates in 30-minute increments up to several hours into the future. To address the challenge of capturing long-range dependencies without substantially increasing the number of message-passing layers, the team created progressively coarser representations of the original mesh. These coarser meshes were connected to the fine mesh using a KD-Tree, forming a multimesh. This strategy expanded the model’s receptive field, enhancing its ability to simulate large-scale flood dynamics while maintaining computational efficiency. During training, the model optimization was guided by the mean squared error (MSE) loss function to ensure accurate predictions. For testing, the evaluation metrics included the L1 error to measure the average prediction deviation and the critical success index (CSI), which was computed at a 5 cm threshold. The CSI quantified the model’s ability to correctly detect and classify flooded areas by comparing the intersection of predicted and observed flooded cells to their union. Surrogate GNN model The final model can perform a 6-hour prediction in 19 ms on a single NVIDIA A100 80 GB GPU. In comparison, a similar simulation with the physics-based numerical model requires 12 hours of CPU time and at least 25 minutes of time-to-solution on 28 CPUs. What’s more, the method leverages the true mesh developed by the hydraulic engineering team with all its complexity. This is a true breakthrough for real-time flood modeling, for which AI attempts often require simplified configurations and meshes. This achievement also showcases how NVIDIA Modulus enabled the setup and training of cutting-edge AI architectures directly on engineering meshes. This example can be replicated for many problems across multiple industries, simply by providing simulation data. Figure 1. The surrogate GNN model can perform a 6-hour flooding prediction in 19 ms on a single NVIDIA A100 80 GB GPU Conclusion Building on these exciting results, the team is currently refining the training data and evaluation metrics to ensure that the level of quality produced by the model matches the physics-based approach, including in rare events. Once fully validated, the model will be considered for integration into the engineering toolchains at BRLi. As a first step, it will be used to produce large ensembles of thousands of runs to obtain uncertainty quantification in basin studies, significantly improving upon the state of the art which relies on only a handful of simulations. Next, working with operational disaster relief services will be crucial to find the optimal data sources to ingest and feed to a real-time forecasting system, and how the data should be sent back to them. Earth-2 is an open platform and NVIDIA Modulus is an open-source project to support the growing physics AI community. To learn how NVIDIA Modulus can help your physics AI research projects, explore the NVIDIA/modulus GitHub repo.
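To make the evaluation metric concrete, here is a minimal sketch of the critical success index at a 5 cm threshold as described above: a cell counts as flooded when its water depth exceeds the threshold, and the CSI is the size of the intersection of predicted and observed flooded cells divided by the size of their union. This is an illustrative standalone implementation, not code from the BRLi/Toulouse INP pipeline, and the input arrays are made up for the example.

#include <stdio.h>

/* Critical success index (CSI) over per-cell water depths (in meters).
 * A cell is "flooded" when its depth exceeds the threshold (5 cm here).
 * CSI = |predicted AND observed| / |predicted OR observed|. */
static double critical_success_index(const double *pred, const double *obs,
                                     int n_cells, double threshold_m)
{
    int inter = 0, uni = 0;
    for (int i = 0; i < n_cells; ++i) {
        int p = pred[i] > threshold_m;
        int o = obs[i] > threshold_m;
        inter += p && o;
        uni   += p || o;
    }
    return uni > 0 ? (double)inter / (double)uni : 1.0; /* no flooding anywhere */
}

int main(void)
{
    /* Tiny example: 8 mesh cells, depths in meters. */
    const double predicted[8] = {0.00, 0.08, 0.12, 0.02, 0.30, 0.00, 0.03, 0.01};
    const double observed[8]  = {0.00, 0.10, 0.09, 0.00, 0.25, 0.06, 0.07, 0.00};
    printf("CSI @ 5 cm = %.3f\n",
           critical_success_index(predicted, observed, 8, 0.05));
    return 0;
}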
https://developer.nvidia.com/zh-cn/blog/spotlight-brli-and-toulouse-inp-develop-ai-based-flood-models-using-nvidia-modulus/
聚焦:BRLi 和 Toulouse INP 使用 NVIDIA Modulus 开发基于 AI 的洪水模型
洪水对 15 亿人构成了重大威胁,使其成为重大自然灾害的最常见原因。洪水每年给全球带来高达 25 亿美元的经济损失。洪水预测是防灾和减轻风险的重要工具。我们早已开发出能够准确模拟河流流域的数值方法。借助这些技术,像咨询公司 BRLi 的工程师可以研究不同的降水情景,并为当地利益相关者提供宝贵的洪水风险评估、缓解策略和灾害救济计划。准确的预测可以挽救生命、保护基础设施并减少经济损失。 然而,这些方法使用基于物理性质的数值模拟,而且计算量非常大。这种方法通常需要数小时才能模拟洪水事件,即使在许多并行处理器上也是如此。它不适合实时预测正在进行的事件。长期以来,这种限制一直是针对特定事件提供快速、可行的洪水警报的瓶颈,并阻碍了实时预报系统的开发。 为解决这一问题,BRLi 和 图卢兹国家理工学院 (Toulouse INP) 的一个联合团队通过与 ANITI 研究院 的合作,设计了一种完全取代基于物理的求解器的端到端 AI 方法。该团队在水力学建模和 AI 方法方面拥有丰富的专业知识,为这项跨学科工作提供了支持。 BRLi 的水文专家为给定的水域提供了详细的物理模型,以生成洪水数据库。然后,Toulouse INP 团队使用这些模型来训练端到端 AI 系统,以模拟从 Earth-2 平台利用 NVIDIA Modulus 的求解器。Earth-2 是一个数字孪生云平台,用于为天气和气候应用开发 AI 增强型预测和可视化流程。该团队使用了经过优化的训练流程和并行训练功能,以及 Modulus 提供的内置深度学习模型。 生成的求解器在单个 GPU 上只需几秒钟即可成功模拟前方数小时的洪水,为实时预测开辟了道路。这是一种利用 AI 革新洪水预测的突破性方法,可显著缩短计算时间。 基于 AI 的洪水预报系统 联合团队重点关注法国南部的 Têt River 流域。经过大量的工程工作,该区域形成了详细的非结构化网格,其中包含许多重要的地形和工程特征,例如桥梁、栈桥和水 Retention Basins。使用开源 Telemac-Mascaret 代码的详细数值模拟使用各种水输入剖面图运行,以生成洪水事件数据库。 在训练方面,INP 团队将 NVIDIA Modulus 用作 Earth-2 平台的一部分,该平台提供了一套工具,用于在气候和天气领域实现 AI 驱动的解决方案。该团队使用 Modulus 中的一个内置模型架构,使用其自定义数据进行快速轻松的训练。要探索 AI 模型可以在天气和气候预报方面实现哪些功能,请在基于浏览器的体验中查看 FourCastNet 和 CorrDiff NVIDIA NIM 微服务。 INP 团队选择了一种在描述 大气动力 学方面表现出色的图神经网络 (GNN) 架构,并对其进行了调整以适应 Têt River 流域。这种方法使模型能够捕捉复杂的空间关系和时间动态,这对准确预测洪水至关重要。AI 模型经过训练,通过模仿详细的数值模拟中的数据来模拟导致洪水的复杂水文过程。 我们在 CALMIP Turpan 超级计算机上并行训练了多达 14 个 NVIDIA A100 Tensor Core GPU ,实现了近线性加速,扩展效率高达 91%。通过使用自回归方法,该模型可以以 30 分钟为增量预测未来几个小时内的洪水淹没地区和流速。 为了解决在不大幅增加消息传递层数量的情况下捕获远程依赖项的挑战,团队逐步创建了原始网格的更粗表示形式。这些粗网格使用 KD-Tree 连接到精细网格,形成了一个多网格。该策略扩展了模型的感受野,增强了其在保持计算效率的同时模拟大规模洪水动力学的能力。 在训练期间,模型优化由均方误差(MSE)损失函数指导,以确保预测准确。在测试方面,评估指标包括用于测量平均预测偏差的 L1 误差和在 5 厘米阈值下计算的关键成功指数(CSI)。CSI 通过比较预测和观察到的洪水淹没细胞与其并集的交集,量化了模型正确检测和分类洪水淹没地区的能力。 替代 GNN 模型 在单个 NVIDIA A100 80 GB GPU 上,最终模型可以在 19 毫秒内执行 6 小时预测。相比之下,使用基于物理性质的数值模型进行类似模拟需要 12 小时的 CPU 时间,在 28 个 CPU 上至少需要 25 分钟的求解时间。此外,该方法利用了液压工程团队开发的真正的复杂网格。这是实时洪水建模的真正突破,AI 通常需要简化的配置和网格。 这一成就还展示了 NVIDIA Modulus 如何直接在工程网格上设置和训练尖端 AI 架构。只需提供模拟数据,即可将此示例复制用于解决多个行业中的许多问题。 图 1、代理 GNN 模型可以在 19 毫秒内使用单个 NVIDIA A100 80 GB GPU 执行 6 小时洪水预测 结束语 基于这些令人兴奋的结果,该团队目前正在完善训练数据和评估指标,以确保模型生成的质量水平与基于物理的方法相匹配,在罕见事件中也是如此。经过充分验证后,我们将考虑将该模型集成到 BRLi 的工程工具链中。 作为第一步,它将用于生成包含数千次运行的大型系综,以在池研究中获得不确定性量化,从而显著改进仅依赖于少量模拟的先进技术。其次,与实际救灾服务机构合作至关重要,有助于找到最佳数据源,将其提取并输入实时预报系统,以及如何将数据发回这些数据源。 Earth-2 是一个开放平台,而 NVIDIA Modulus 是一个开源项目,旨在支持不断发展的物理 AI 社区。要了解 NVIDIA Modulus 如何为您的物理 AI 研究项目提供帮助,请探索 NVIDIA/modulus Github 存储库。
https://developer.nvidia.com/blog/render-path-traced-hair-in-real-time-with-nvidia-geforce-rtx-50-series-gpus/
Render Path-Traced Hair in Real Time with NVIDIA GeForce RTX 50 Series GPUs
Hardware support for ray tracing triangle meshes was introduced as part of NVIDIA RTX in 2018. But ray tracing for hair and fur has remained a compute-intensive problem that has been difficult to further accelerate. That is, until now. NVIDIA GeForce 50 Series GPUs include a major advancement in the acceleration of ray tracing for hair and fur: hardware ray tracing support for the linear swept sphere (LSS) primitive. This new primitive is a big step toward rendering high-quality digital humans in real time. The range of applications for realistic human rendering is ever-increasing and includes AI avatars, film and game characters, graphics research, high-performance scientific computing applications, synthetic data generation, and more. LSS is available today in NVIDIA OptiX and in the NVAPI SDK . There are advanced examples of real-time hair using LSS with physically based shading models included in the RTX Character Rendering SDK which is part of NVIDIA RTX Kit . RTX Kit is a suite of neural rendering technologies to ray trace games with AI, render scenes with immense geometry, and create game characters with photorealistic visuals. RTX Kit was introduced during the GeForce RTX 50 Series launch . What is the LSS primitive? The LSS primitive is a thick, round 3D line with varying radii. Multiple linear swept spheres can be chained together to build 3D curves, sharing vertices where they overlap, similar to how triangles are used to build 3D surfaces. LSS is shaped like a cylindrical or conical tube with spheres optionally capping either end. Figure 1. The shapes of a linear swept sphere If the two endpoints are overlapped so the line has no length, then the line swept sphere becomes just a regular sphere. This can be used for hardware-accelerated particle rendering. To make spheres convenient and memory efficient, this special case of LSS for spheres is being exposed as a separate first-class sphere primitive. Linear swept spheres aren’t normally meant to be shown in magnified close-up, as in Figure 1. Rather, they are intended to be small onscreen, maybe one pixel wide, give or take. When multiple linear swept spheres are chained together into strands and the camera is pulled back a bit, the sharp angles melt away and these thick lines become thin smooth curves. Figure 2. Linear swept spheres appear smooth from a distance LSS indexing LSS comes with two indexing modes: an explicit list mode, and a semi-implicit mode called successive implicit. In list mode, each segment is specified explicitly using a pair of indices into your vertex buffer. In successive implicit mode, line segments are specified using one explicit index and one implicit index, where the second point is assumed to be the next vertex in the vertex buffer. Because successive vertices in memory are common and convenient, successive implicit mode is an easy way to save 50% of the memory in the index buffer. Figure 3 shows an example where an explicit index buffer would contain [0,1,1,2,2,3,4,5,5,6]. With successive implicit indexing, the index buffer would be [0,1,2,4,5]. Figure 3. An example of two strands with vertex indices Rendering strand-based curves using legacy hardware For someone already using the CUDA-based ray tracing API framework NVIDIA OptiX , LSS is already available as the default linear curve type, and works on all GPUs that OptiX supports. 
The OptiX version of LSS automatically uses a software fallback on GPUs prior to NVIDIA Blackwell GPUs, and the new hardware-accelerated primitive on GeForce RTX 50 Series GPUs, without needing any code changes. If you’re not using OptiX, chances are you’ll need to use an alternative primitive. When it comes to high performance, tessellated curves are the next best thing to LSS. One option is camera-facing or ray-facing quads. Ray-facing quads have a relatively low memory footprint, though they may need to be oriented during every frame of animation (either hair motion or camera motion). Disjoint Orthogonal Triangle Strips (DOTS) provide a clever solution for tessellating curves that enables viewing from any angle without having to reorient the triangles every frame to face the camera. This way, if you render static curves, you won’t need to rebuild your bounding volume hierarchy (BVH) when the camera or hair moves. Because triangles have broad support, DOTS can take advantage of existing RTX hardware. Anyone who wants to use LSS on GeForce RTX 50 Series GPUs can still support ray traced strands on older GPUs using DOTS. Some hair examples using both DOTS and LSS will be available as part of the RTX Character Rendering SDK that implements path tracing of realistic human hair and skin. One advantage of LSS is that the geometry is naturally round, making it easy to shade and easy to avoid self-intersection when casting shadow and reflection rays. Tessellated curves like DOTS will typically need to have the geometric normal overridden by a custom shading normal, as shown in Figure 4. When rounded shading normals are used, then the appearance of DOTS can be similar to or even indistinguishable from the appearance of curves made out of linear swept spheres. Figure 4. Disjoint Orthogonal Triangle Strips can match the appearance of linear swept spheres when shading normals are used Benefits of using LSS Some of the existing approaches to real-time hair include: Textured cards Ray-facing quads, tessellated tubes, orthogonal quads (DOTS) Software higher-order curve primitives (cubic Bezier, for example) Textured cards have been a good way to achieve a reasonable enough quality for hair in games on a very light budget. However, this kind of setup is extremely limited and does not allow much room for simulating or animating hair. Since a textured card has multiple strands per card, there is no control over individual strands, by design. Strand-based approaches are far more controllable and flexible, and this is one reason why strands are clearly the future of real-time hair rendering as the world’s hardware improves over time. Figure 5. Rendering animated hair on humans is about 2x faster with LSS compared to DOTS, while also requiring about 5x less VRAM to store the geometry Since triangles are well supported in hardware these days, tessellating curves into triangles (tubes, quads, DOTS) can be very fast to render. The downside is that tessellating takes time and can use a lot of memory. A quad requires four vertices, though two vertices can be shared with the next successive neighboring quad in the strand. So on average, camera facing quads will cost at least two vertices per strand segment. If tubes or DOTS are used, then the strand will end up with more than two vertices per segment on average. LSS only needs one vertex per strand segment on average, so it uses much less memory than tessellated curves, while achieving higher rendering speeds. 
And LSS is of course much faster than software higher-order curve primitives, like the cubic B-spline. Depending on the needs and tolerance for quality, this extra performance may come with a memory tradeoff compared to cubic curves, for example, if more vertices are needed for smoothness. What else can LSS be used for? The RTX version of the linear swept sphere was designed mainly with hair and fur in mind, but it can be used for other applications including particles, molecular models, wireframe rendering, font and path rendering, and whatever else you can imagine. Figure 6. Alternative uses of linear swept spheres include wireframe (left), particle rendering (center), and molecular models (right). Image credits: Bay Raitt, Valve; Stanford Dark Sky Simulations; University of Illinois Get started with LSS RT Cores in the NVIDIA GeForce RTX 50 Series GPUs introduce hardware support for a new primitive called linear swept spheres (LSS). LSS is a round 3D line primitive with varying thickness, enabling flexible approximation of various strand types. As a special case of LSS, the GeForce RTX 50 Series GPUs hardware primitive also supports spheres directly (without a swept linear segment), which is useful for applications like particle systems. Rendering animated hair on humans is about 2x faster with LSS compared to DOTS, while also requiring about 5x less VRAM to store the geometry. This is similar for other common use cases. With LSS on GeForce RTX 50 Series GPUs and DOTS for earlier GPUs, there is now a way to get the highest possible hair ray tracing performance on all RTX GPUs. When ray tracing in CUDA, LSS is currently available in OptiX . For DirectX, API for LSS can be found starting in the R570 version of the NVAPI SDK . Vulkan developers will be interested in the Vulkan LSS extension . For examples of physically based hair rendering, check out the RTX Character Rendering SDK . To learn more about the latest AI and rendering technologies, check out the NVIDIA RTX Kit . Join the conversation about LSS on the OptiX NVIDIA Developer Forum .
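As a concrete illustration of the two LSS indexing modes described earlier, the snippet below writes out the index buffers for the two strands of Figure 3: vertices 0 through 3 form one strand and vertices 4 through 6 form the other. In list mode every segment names both of its endpoints; in successive implicit mode only the first endpoint is stored and the second is implied to be the next vertex in the vertex buffer. The array contents match the example given in the text; the array names themselves are just for illustration.

/* Explicit list mode: two indices per segment.
 * Strand A: segments (0,1), (1,2), (2,3); strand B: segments (4,5), (5,6). */
const unsigned int listModeIndices[] = {
    0, 1,  1, 2,  2, 3,   /* strand A */
    4, 5,  5, 6            /* strand B */
};

/* Successive implicit mode: one index per segment; the second endpoint is
 * implicitly the next vertex in the vertex buffer, so the buffer is half the size. */
const unsigned int successiveImplicitIndices[] = {
    0, 1, 2,   /* strand A segments starting at vertices 0, 1, 2 */
    4, 5       /* strand B segments starting at vertices 4, 5 */
};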
https://developer.nvidia.com/zh-cn/blog/render-path-traced-hair-in-real-time-with-nvidia-geforce-rtx-50-series-gpus/
使用 NVIDIA GeForce RTX 50 系列 GPU 实时渲染路径追踪头发
2018 年, NVIDIA RTX 引入了对光线追踪三角形网格的硬件支持。但是,头发和毛皮的光线追踪仍然是一个计算密集型问题,一直难以进一步加速。那就是,直到现在。 NVIDIA GeForce 50 系列 GPUs 在加速毛发和毛皮光线追踪方面取得了重大进步:硬件光线追踪支持线性扫描球体 (LSS) 基元。这种新基元是向实时渲染高质量数字人迈出的一大步。逼真的人类渲染的应用领域与日俱增,包括 AI 虚拟形象、电影和游戏角色、图形研究、高性能科学计算应用、合成数据生成等。 LSS 现已在 NVIDIA OptiX 和 NVAPI SDK 中提供。 RTX Character Rendering SDK 是 NVIDIA RTX Kit 的一部分,其中包含使用基于物理性质的着色模型的 LSS 的高级实时毛发示例。RTX Kit 是一套神经渲染技术,可利用 AI 对游戏进行光线追踪、渲染具有宏大几何图形的场景,以及使用逼真视觉效果创建游戏角色。RTX Kit 在 GeForce RTX 50 Series 发布时推出。 什么是 LSS 基元? LSS 基元是一条具有不同半径的粗而圆的 3D 线。可以将多个线性扫描球体链接在一起以构建 3D 曲线,在它们重叠的位置共享顶点,这与使用三角形构建 3D 表面的方式类似。LSS 的形状类似圆柱形或锥形管,球体的两端均可选择覆盖。 图 1. 线性扫描球体的形状 如果两个端点重叠,因此直线没有长度,则直线扫描球体仅会成为常规球体。这可用于硬件加速粒子渲染。为了方便球体和提高内存效率,我们将球体 LSS 的这种特殊情况作为单独的一级球体基元公开。 通常情况下,线性扫描球体并不像图 1 那样以放大镜头显示。相反,它们的屏幕尺寸较小,可能有 1 像素宽,可以取或取。当多个线性扫描球体被链式连接成多条链路,并将摄像头稍微往后拉时,清晰的角度会消失,这些粗壮的线条会变成纤细的平滑曲线。 图 2、线性扫描球体在一定距离下显得光滑 LSS 索引 LSS 提供两种索引模式:显式列表模式和称为连续隐式的半隐式模式。在列表模式下,系统会在顶点缓冲区中使用一对索引显式指定每个片段。在连续隐式模式下,使用一个显式索引和一个隐式索引指定线段,其中第二个点假定为顶点缓冲区中的下一个顶点。 由于内存中的连续顶点十分常见且方便,因此可以通过连续隐式模式在索引缓冲区中轻松节省 50% 的内存。图 3 显示了显式索引缓冲区包含 [0,1,1,2,2,3,4,5,5,6] 的示例。通过连续的隐式索引,索引缓冲区将为 [0,1,2,4,5]。 图 3. 带有顶点索引的两条线的示例 使用传统硬件渲染基于链的曲线 对于已经使用基于 CUDA 的光线追踪 API 框架 NVIDIA OptiX 的用户,LSS 已作为默认线性曲线类型提供,并且适用于 OptiX 支持的所有 GPU。OptiX 版本的 LSS 自动使用 NVIDIA Blackwell GPU 之前的 GPU 上的软件回退,以及 GeForce RTX 50 Series GPU 上的新硬件加速基元,而无需更改任何代码。如果您未使用 OptiX,则可能需要使用替代基元。 说到高性能,细分曲线是 LSS 的下一个最佳选择。其中一个选项是摄像头对焦或光线对焦四边形。尽管在动画的每一帧(毛发运动或相机运动)中,面向光线的四边形的显存占用相对较小,但其显存占用相对较小。 不交正交三角形条 (DOTS) 提供了一个巧妙的曲线细分解决方案,支持从任何角度进行查看,而无需重新调整每帧三角形的方向以面向摄像头。这样,如果渲染静态曲线,则无需在相机或头发移动时重建包围盒层次结构 (BVH)。三角形拥有广泛支持,因此 DOTS 可以利用现有的 RTX 硬件。 如果您想在 GeForce RTX 50 系列 GPUs 上使用 LSS,仍然可以使用 DOTS 在较旧的 GPU 上支持光线追踪链。一些同时使用 DOTS 和 LSS 的毛发示例将作为 RTX Character Rendering SDK 的一部分提供,该 SDK 可实现逼真的人类头发和皮肤的路径追踪。 LSS 的一个优点是几何体自然呈圆形,因此在投射阴影和反射光线时,很容易着色,并且很容易避免自交。像 DOTS 这样的细分曲线通常需要使用自定义着色法线覆盖几何法线,如图 4 所示。当使用圆周着色法线时,DOTS 的外观可能与由线性扫描球体构成的曲线相似,甚至无法区分。 图 4、在使用着色法线时,不交接正交三角形条可以与线性扫描球体的外观相匹配 使用 LSS 的优势 一些现有的实时头发处理方法包括: 纹理卡 面向光线的四边形,镶嵌管,正交四边形 (DOTS) 软件高阶曲线基元 (例如 cubic Bezier) 纹理卡是在预算有限的情况下,在游戏中实现合理的头发质量的好方法。但是,这种设置非常有限,没有足够的空间用于模拟头发或制作头发动画。由于一张纹理卡的每张卡有多个链,因此根据设计,无法控制单个链。基于链的方法更具可控性和灵活性,这也是为什么链显然是实时头发渲染的未来,因为世界的硬件会随着时间的推移而改进。 图 5、与 DOTS 相比,使用 LSS 在人类身上渲染动画毛发的速度大约快 2 倍,同时存储几何图形所需的 VRAM 也减少了大约 5 倍 由于硬件目前已大力支持三角形,因此可以非常快速地将曲线细分为三角形(管、四边形、DOTS)。缺点是细分需要时间,并且会占用大量内存。一个四边形需要四个顶点,不过两个顶点可以与链中的下一个相邻四边形共享。因此,面向四边形的摄像头平均每个链段至少会耗费两个顶点。如果使用管或 DOTS,则链的每个部分最终平均将包含两个以上的顶点。 LSS 的每个链段平均只需要一个顶点,因此与细分曲线相比,它使用的内存更少,同时实现了更高的渲染速度。当然,LSS 比软件高阶曲线基元 (如 cubic B-spline) 的速度要快得多。根据对质量的需求和容差,与 cubic curves 相比,这种额外的性能可能会带有内存权衡,例如,如果需要更多的顶点来实现平滑度。 LSS 还有哪些其他用途? RTX 版本的线性扫描球体在设计时主要考虑了毛发和毛皮,但它可以用于其他应用,包括粒子、分子模型、线框渲染、字体和路径渲染以及您可以想象的其他任何应用。 图 6、线性扫描球体的其他用途包括线框 (左)、粒子渲染 (中心) 和分子模型 (右)。图片来源:Bay Raitt,Valve;Stanford Dark Sky Simulations;University of Illinois 开始使用 LSS NVIDIA GeForce RTX 50 系列 GPU 中的 RT Cores 为一种称为线性扫描球体 (LSS) 的新基元引入了硬件支持。LSS 是一种厚度不同的 3D 圆线基元,支持灵活逼近各种链类型。作为 LSS 的一种特殊情况,GeForce RTX 50 系列 GPU 硬件基元还直接支持球体 (不包含扫描线性分割),这对于像粒子系统这样的应用非常有用。 与 DOTS 相比,使用 LSS 在人类身上渲染动画毛发的速度大约快 2 倍,同时存储几何图形所需的 VRAM 也减少了大约 5 倍。这与其他常见用例类似。借助 GeForce RTX 50 系列 GPUs 上的 LSS 和早期 GPUs 上的 DOTS,现在可以在所有 RTX GPUs 上获得尽可能高的毛发光线追踪性能。 在 CUDA 中进行光线追踪时,LSS 目前可在 OptiX 中使用。对于 DirectX,可以从 R570 版本的 NVAPI SDK 开始找到适用于 LSS 的 API。Vulkan 开发者可能会对 Vulkan LSS 扩展 感兴趣。 有关基于物理效果的头发渲染示例,请查看 RTX Character Rendering SDK 。如需详细了解最新的 AI 和渲染技术,请查看 NVIDIA RTX Kit 。在 OptiX NVIDIA 开发者论坛 上加入有关 LSS 的对话。
https://developer.nvidia.com/blog/nvidia-video-codec-sdk-13-0-powered-by-nvidia-blackwell/
NVIDIA Video Codec SDK 13.0 Powered by NVIDIA Blackwell
The release of NVIDIA Video Codec SDK 13.0 marks a significant upgrade, adding support for the latest-generation NVIDIA Blackwell GPUs. This version brings a wealth of improvements aimed at elevating both video encoding and decoding capabilities. From enhanced compression efficiency to better throughput and encoding quality, SDK 13.0 addresses the ever-evolving demands of the video ecosystem. Here are some of the key features introduced in this update. Encode features: Improved compression efficiency: Achieve better video quality at lower bitrates. New YUV 4:2:2 encoding support (H.264 and HEVC): Enables a broader range of use cases, particularly in professional video production and broadcasting. Higher bit-depth encoding: Introduces support for 10-bit encoding in H.264, and new hardware capabilities to encode 8-bit content as 10-bit. Interlaced encoding (H.264): Adds interlaced encoding support in H.264. AV1 lookahead level and UHQ: Introduces lookahead level and UHQ modes for AV1, for latency-tolerant use cases that require the highest possible video quality. MV-HEVC support: Supports two views to improve compression for stereoscopic content. For more information, see Enabling Stereoscopic and 3D Views Using MV-HEVC in NVIDIA Video Codec SDK 13.0 . Decode features: 4:2:2 decode support (H.264 and HEVC): Expands decoding capabilities, which is especially valuable for professional video workflows. Higher bit-depth H.264 decoding: Introduces support for 10-bit decoding in H.264. 2x H.264 throughput on NVIDIA Blackwell: Brings an impressive 2x throughput improvement in H.264 decoding compared to previous-generation GPUs. H.264 8K support: Now handles ultra-high-definition video decoding with ease, providing future-proof capabilities for 8K content. Dynamic decode surface allocation: Enables applications to minimize GPU memory usage. Encode quality enhancements in NVIDIA Blackwell Here’s more information about the key encode features in this update. Improved compression efficiency NVIDIA encoder (NVENC) hardware in NVIDIA Blackwell includes many enhancements for improving compression efficiency. These include improvement in motion estimation including enhanced subpixel search and better rate distortion optimization (RDO), among others for HEVC and AV1. These enhancements apply across presets and provide significant quality gain over ADA-generation GPUs. 4:2:2 chroma subsampling In previous generations, NVENC supported formats including 4:2:0 and 4:4:4. YUV 4:4:4 retains full color, resulting in a larger file size and bandwidth requirements to transfer data. In 4:2:0, the full information is retained in the luminance channel but the chroma channel contains only 25% of the original color content. To overcome the loss in color, NVIDIA Blackwell introduces 4:2:2 chroma. 4:2:2 retains 50% of the color information compared to 4:4:4 but reduces the required bandwidth to transfer video data. 4:2:2 is popular in the video editing and broadcasting eco-system due to higher color resolution compared to 4:2:0 but lower bandwidth requirements compared to 4:4:4. As with YUV 4:2:0, semi-planar layouts are supported for 4:2:2 for 8-bit and 10-bit depths. The NVENCODE API introduces two new formats for YUV 4:2:2 semi-planar inputs: NV_ENC_BUFFER_FORMAT_NV16 , for 8-bit YUV 4:2:2 NV_ENC_BUFFER_FORMAT_P210 , for 10-bit YUV 4:2:2 chromaFormatIdc =2 enables 4:2:2 encoding. As with 4:2:0 and 4:4:4, an application can also pass ARGB input with chromaFormatIdc=2 to generate a 4:2:2 subsample encoded output. 
The color space conversion from ARGB to YUV 4:2:2 is done inside the encode driver using a CUDA kernel. The compression efficiency for 4:2:2 input is similar to the 4:2:0 and 4:4:4 formats. Higher bit-depth encoding enhancement NVIDIA Video Codec SDK 13.0 introduces 10-bit encoding support in H.264 on NVIDIA Blackwell GPUs. All the chroma subsampling formats (4:2:0, 4:2:2, and 4:4:4) support encoding 10-bit content. In addition, NVIDIA Blackwell NVENC can encode 8-bit content as 10-bit for H.264 and HEVC, a feature already available for AV1 in ADA. ADA and earlier GPUs continue to support this feature for HEVC, but unlike NVIDIA Blackwell, the input YUV is upscaled from 8- to 10-bit as a preprocessing step using CUDA. This feature improves the coding efficiency due to higher precision in the encoding pipeline. This upgrade results in smoother gradations and more accurate color reproduction, ideal for high-quality video production. Many of the input-related calculations in the encoder are done in 10-bit instead of 8-bit. Applications can expect an improvement of around 3% in compression efficiency when using this feature, without any significant impact on encoder performance. Unlike AV1, 10-bit encoding is supported only on select profiles for H.264 and HEVC. Applications should only enable this feature if the decoder supports 10-bit profiles. H.264 interlaced encoding NVIDIA Blackwell GPUs bring back support for encoding interlaced content, giving users who work with traditional broadcast video formats more flexibility in legacy video workflows. Interlaced encoding involves splitting a frame into two fields. The first field includes the odd lines of the image while the second field includes the even lines. These fields are transmitted sequentially at a rapid rate, creating the illusion of a single frame. Field encoding is supported for YUV 4:2:0 and YUV 4:2:2, 8- and 10-bit content on H.264. The NVENCODE API supports both top-field-first and bottom-field-first layouts for interlaced content. Lookahead level and UHQ Lookahead level analyzes future frames and enables efficient allocation of bits to different blocks within a frame based on how much a given block is referenced in subsequent frames. The statistics generated during lookahead encoding are used for complexity estimation in rate control. Video Codec SDK 13.0 adds support for the lookahead level in AV1 and introduces AV1 UHQ tuning info, which combines lookahead level and temporal filtering to provide the best quality and performance for various HQ latency-tolerant encoding presets. These features provide best-in-class visual quality, perfect for demanding video applications. In the UHQ tuning, the optimal settings for lookahead and temporal filtering are applied in combination rather than individually. As with UHQ HEVC, the number of B-frames is set to five, while using the middle B-frame as the reference. UHQ mode also disables adaptive I- and B-frames and uses a fixed GOP (group of pictures) structure. NVIDIA Blackwell NVENC supports new encode statistics that enable the lookahead algorithm to identify referencing details across multiple references, resulting in much better quality and performance of the UHQ tuning info for both HEVC and AV1 than on ADA. For AV1, the UHQ tuning info in NVIDIA Blackwell uses seven B-frames instead of five, further enhancing the compression efficiency. Figures 1 and 2 show the bitrate savings for ADA HQ, ADA UHQ, NVIDIA Blackwell HQ, and NVIDIA Blackwell UHQ for HEVC and AV1, respectively. Figure 1.
HEVC Bit-rate savings for HQ and UHQ tuning info with natural content Figure 2. AV1 Bit-rate savings for HQ and UHQ tuning info with natural content Enhanced video decoding capabilities NVIDIA Blackwell GPUs bring significant advancements to the world of video decoding, particularly in H.264 and HEVC formats. These enhancements offer both feature set and performance improvements, setting new standards in the industry. H.264 decoding enhancements With NVIDIA Blackwell, the H.264 decoding capabilities have undergone major improvements. Some of the key features include the following: 4:2:0 10-bit support (exclude MBAFF): This provides improved color depth, making it ideal for working in color-sensitive fields such as video production or gaming. 4:2:2 8/10-bit support (exclude MBAFF): With support for both 8-bit and 10-bit chroma sampling, NVIDIA Blackwell ensures high-quality video playback, even in compressed formats, which provides better visual fidelity. 2x performance improvement: Perhaps the most exciting update is the performance boost. NVIDIA Blackwell offers a 2x performance improvement compared to previous generations, which means smoother video playback and faster decoding even for the most demanding video files. Resolution support up to 8192×8192: Whether you’re working with ultra-high-definition video or cutting-edge 3D content, NVIDIA Blackwell has the ability to handle resolutions up to 8192×8192. This means that you can decode videos with greater clarity and detail. These improvements ensure that NVIDIA Blackwell delivers top-tier video decoding, whether you’re working on high-resolution video projects or handling large-scale video processing tasks. HEVC decoding for enhanced flexibility and speed High-Efficiency Video Coding (HEVC) has become the go-to format for efficient video compression, and NVIDIA Blackwell takes it to the next level. The new enhancements include the following: Support for 4:2:2 8/10-bit and 12-bit decoding: NVIDIA Blackwell now offers a wider range of decoding options for HEVC, making it easier to process high-quality video with minimal loss of fidelity. Performance improvements: Thanks to improvements in the NVDCLK, you see noticeable performance boosts when decoding HEVC content. This translates into smoother playback and more efficient video rendering. Figure 3. H.264 and HEVC Decode performance (fps) The NVDECODE API introduces two new formats for 4:2:2 decode output: cudaVideoSurfaceFormat_NV16=4,      /**< Semi-Planar YUV 422 [Y plane followed by interleaved UV plane] */ cudaVideoSurfaceFormat_P216=5       /**< 16 bit Semi-Planar YUV 422[Y plane followed by interleaved UV plane]*/ Dynamic decode surface allocation for GPU memory efficiency One of the standout features in Video Codec SDK 13.0 is the introduction of dynamic decode surface allocation. By adapting to varying video bitstreams, this capability reduces unnecessary memory consumption, leading to a higher number of decode sessions. This improvement is crucial for optimizing GPU memory usage in a few video-decoding use cases. In some cases, the bitstream may use fewer reference frames than what the DPB size suggests, wasting valuable video memory. Allocating the max decode surface results in higher memory foot prints. Comparing the new SDK Before this release, video applications created the decoder object with a fixed minimum number of surfaces based on the DPB size. This approach, while functional, sometimes allocated memory more than necessary. 
CUVIDDECODECREATEINFO stDecodeCreateInfo; memset(&stDecodeCreateInfo, 0x0, sizeof(CUVIDDECODECREATEINFO)); . . . // Setup the remaining structure members stDecodeCreateInfo.ulNumDecodeSurfaces = <dpb_size>; // Prior to SDK 13.0, this could not change rResult = cuvidCreateDecoder(&hDecoder, &stDecodeCreateInfo); With Video Codec SDK 13.0, you gain the flexibility to allocate extra YUV surfaces only when needed. You can create a decoder object with a smaller initial allocation of YUV surfaces (such as 3 or 4) and use the cuvidReconfigureDecoder API to allocate more surfaces dynamically as needed. This dynamic allocation reduces unnecessary memory consumption and enhances the overall efficiency of the decoding process. Unlocking new possibilities with Video Codec SDK 13.0 NVIDIA Video Codec SDK 13.0 pushes the boundaries of video encoding and decoding with the latest NVIDIA Blackwell GPUs. Whether you're creating content for broadcast, editing high-quality video, or working with the latest 8K footage, this update offers the tools you need to elevate your workflows. With improvements in compression efficiency, support for new color formats, enhanced encoding quality, and more, Video Codec SDK 13.0 is designed to meet the growing demands of modern video applications. In support of the Video Codec SDK 13.0 launch, NVIDIA partners Blackmagic, Capcut, and Wondershare have already integrated features such as 4:2:2 encode, 4:2:2 decode, AV1 UHQ, and split encoding in their video pipelines.
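As an illustration of the dynamic allocation path described above, the sketch below creates the decoder with a small initial surface count and later grows it through cuvidReconfigureDecoder, reusing the hDecoder and rResult variables from the snippet above. The exact set of CUVIDRECONFIGUREDECODERINFO fields that must be filled, and any constraints on changing ulNumDecodeSurfaces, should be verified against the SDK 13.0 headers and programming guide; treat this as an assumption-laden sketch rather than the official sample.

// Start small: allocate only a few decode surfaces up front.
CUVIDDECODECREATEINFO stDecodeCreateInfo;
memset(&stDecodeCreateInfo, 0x0, sizeof(CUVIDDECODECREATEINFO));
// ... set codec, resolution, output format, and the other members ...
stDecodeCreateInfo.ulNumDecodeSurfaces = 4;   // fewer than the worst-case DPB size
rResult = cuvidCreateDecoder(&hDecoder, &stDecodeCreateInfo);

// Later, if the bitstream turns out to need more reference surfaces,
// grow the allocation instead of recreating the decoder.
CUVIDRECONFIGUREDECODERINFO stReconfigInfo;
memset(&stReconfigInfo, 0x0, sizeof(CUVIDRECONFIGUREDECODERINFO));
stReconfigInfo.ulWidth        = stDecodeCreateInfo.ulWidth;
stReconfigInfo.ulHeight       = stDecodeCreateInfo.ulHeight;
stReconfigInfo.ulTargetWidth  = stDecodeCreateInfo.ulTargetWidth;
stReconfigInfo.ulTargetHeight = stDecodeCreateInfo.ulTargetHeight;
stReconfigInfo.ulNumDecodeSurfaces = 8;       // assumed: new, larger surface count
rResult = cuvidReconfigureDecoder(hDecoder, &stReconfigInfo);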
https://developer.nvidia.com/zh-cn/blog/nvidia-video-codec-sdk-13-0-powered-by-nvidia-blackwell/
NVIDIA 视频编解码器 SDK 13.0 由 NVIDIA Blackwell 驱动
NVIDIA Video Codec SDK 13.0 的发布标志着重大升级,增加了对新一代 NVIDIA Blackwell GPU 的支持。此版本带来了大量改进,旨在提升视频编码和解码功能。从增强的压缩效率到更好的吞吐量和编码质量,SDK 13.0 可满足视频生态系统不断变化的需求。 以下是本次更新中引入的一些主要功能。 编码功能: 提高压缩效率: 以更低的比特率实现更好的视频质量。 新的 YUV 4:2:2 编码支持 (H.264 和 HEVC): 支持更广泛的用例,特别是在专业视频制作和广播中。 更高的位深编码: 在 H.264 中引入对 10 位编码的支持,以及将 8 位内容编码为 10 位的新硬件功能。 交错编码(H.264): 在 H.264 中添加交错编码支持。 AV1 前瞻性级别和 UHQ: 为 AV1 引入前瞻性级别和 UHQ 模式,适用于需要尽可能高的视频质量的延迟容忍用例。 MV-HEVC 支持: 支持两个视图,以提高立体内容的压缩率。有关更多信息,请参阅在 NVIDIA Video Codec SDK 13.0 中使用 MV-HEVC 启用立体和 3D 视图 。 解码功能: 4:2:2 解码支持 (H.264 和 HEVC): 扩展解码功能,这对于专业视频工作流程尤其重要。 更高位深的 H.264 解码: 在 H.264 中引入对 10 位解码的支持。 NVIDIA Blackwell 上的 H.264 吞吐量提高了 2 倍: 与上一代 GPU 相比,H.264 解码的吞吐量提高了 2 倍,令人印象深刻。 支持 H.264 8K: 现在,可轻松处理超高清视频解码,为 8K 内容提供面向未来的功能。 动态解码表面分配: 支持应用更大限度地减少 GPU 显存占用。 在 NVIDIA Blackwell 中对质量增强进行编码 以下是本次更新中有关关键编码功能的详细信息。 提高压缩效率 NVIDIA Blackwell 中的 NVIDIA 编码器 (NVENC) 硬件包含许多用于提高压缩效率的增强功能。其中包括改进运动估计,包括增强的子像素搜索和更好的速率失真优化(RDO),以及 HEVC 和 AV1。这些增强功能适用于所有预设,与 ADA 代 GPU 相比,可显著提升质量。 4:2:2 色度二次采样 在前几代产品中,NVENC 支持的格式包括 4:2:0 和 4:4:4。YUV 4:4:4 保留全彩,导致传输数据的文件大小和带宽需求更大。在 4:2:0 模式下,全部信息将保留在亮度通道中,而色度通道仅包含原始颜色内容的 25%。 为克服色差问题,NVIDIA Blackwell 引入了 4:2:2 色度。与 4:4:4 相比,4:2:2 可保留 50%的颜色信息,但降低了传输视频数据所需的带宽。 4:2:2 在视频编辑和广播生态系统中很受欢迎,因为与 4:2:0 相比,4:2:2 的色彩分辨率更高,但与 4:4:4 相比,带宽要求更低。 与 YUV 4:2:0 一样,8 位和 10 位深度的半平面布局支持 4:2:2,NVENCODE API 为 YUV 4:2:2 半平面输入引入了两种新格式: NV_ENC_BUFFER_FORMAT_NV16 ,对于 8 位 YUV 4:2:2 NV_ENC_BUFFER_FORMAT_P210 ,对于 10 位 YUV 4:2:2 chromaFormatIdc =2 支持 4:2:2 编码。与 4:2:0 和 4:4:4 一样,应用还可以通过 chromaFormatIdc=2 传递 ARGB 输入,以生成 4:2:2 的子样本编码输出。使用 CUDA 内核在编码驱动中完成从 ARGB 到 YUV 4:2:2 的色彩空间转换。 4:2:2 输入的压缩效率类似于 4:2:0 和 4:4:4 格式。 更高的位深度编码增强 NVIDIA 视频编解码器 SDK 13.0 在 NVIDIA Blackwell GPUs 上的 H.264 中引入 10 位编码支持。所有色度子采样格式—4:2:0、4:2:2 和 4:4:4—支持对 10 位内容进行编码。 此外,NVIDIA Blackwell NVENC 可以将 8 位内容编码为 H.264 和 HEVC 的 10 位内容,这项功能已在 ADA 中的 AV1 中提供。 ADA 和早期 GPU 继续支持 HEVC 的此功能,但与 NVIDIA Blackwell 不同的是,输入 YUV 从 8 位升级到 10 位,作为使用 CUDA 的预处理步骤。 由于编码管线的精度更高,此功能可提高编码效率。此次升级可实现更平滑的渐变和更准确的色彩再现,是高质量视频制作的理想之选。编码器中的许多与输入相关的计算均在 10 位 (而非 8 位) 下完成。应用程序的压缩效率有望提高约 3%,而不会对编码器性能产生任何重大影响。 与 AV1 不同,仅 H.264 和 HEVC 的特定配置文件支持 10 位编码。应用程序应仅在解码器支持 10 位配置文件的情况下启用此功能。 H.264 交错编码 NVIDIA Blackwell GPUs 恢复了对交错内容编码的支持,并为传统广播视频格式的用户提高了传统视频工作流程的灵活性。 交错编码 需要将一帧拆分成两个字段。第一个字段包含图像的奇数行,第二个字段包含偶数行。这些字段以快速的顺序传输,会产生单帧的错觉。H.264 上的 YUV 4:2:0 和 YUV 4:2:2、8- 和 10 位内容支持字段编码。 NVENCODE API 支持用于交错内容的“top field first”(顶部字段优先) 和“bottom field first”(底部字段优先) 布局。 前瞻性和 UHQ 前瞻性分析未来帧,并根据给定块在后续帧中的引用量,将位高效分配到帧中的不同块。前瞻性编码期间生成的统计数据用于速率控制的复杂性估计。 Video Codec SDK 13.0 在 AV1 中增加了对前瞻性级别的支持,并引入了 AV1 UHQ 调优信息,该信息结合了前瞻性级别和时间性过滤,可为各种 HQ 延迟容忍编码预设提供出色的质量和性能。这些功能可提供出色的视觉质量,非常适合要求严苛的视频应用 在 UHQ 调优中,前瞻性和时间过滤的最佳设置是组合应用的,而不是单独应用的。与 UHQ HEVC 一样,B 帧的数量设置为 5,同时使用中间的 B 帧作为参考。UHQ 模式还会禁用自适应 I 帧和 B 帧,并使用固定的 GOP(group of pictures)结构。 NVIDIA Blackwell NVENC 支持新的编码统计信息,使前瞻性算法能够识别多个引用的引用详细信息,从而使 HEVC 和 AV1 的 UHQ 调整信息的质量和性能比 ADA 高得多。 对于 AV1,NVIDIA Blackwell 中的 UHQ 调整信息有 7 个 B 帧,而不是 5 个 B 帧,这进一步提高了压缩效率。 图 1 和 2 分别显示了 ADA HQ、ADA UHQ、NVIDIA Blackwell HQ 和 NVIDIA Blackwell UHQ 在 HEVC 和 AV1 上节省的比特率。 图 1、使用自然内容为 HQ 和 UHQ 调优信息节省 HEVC 比特率 图 2、使用自然内容,为 HQ 和 UHQ 调优信息节省 AV1 码率 增强的视频解码功能 NVIDIA Blackwell GPUs 为视频解码领域带来了巨大进步,尤其是 H.264 和 HEVC 格式的视频解码。这些增强功能同时提供功能集和性能改进,为行业树立了新的标杆。 H.264 解码增强功能 借助 NVIDIA Blackwell,H.264 解码功能得到了重大改进。其中一些主要特性包括: 4:2:0 10 位支持 (MBAFF 除外):这可改善色彩深度,非常适合视频制作或游戏等色彩敏感领域的工作。 4:2:2 8/10 位支持 (MBAFF 除外):NVIDIA Blackwell 支持 8 位和 10 位色度采样,可确保高质量的视频播放 (即使是压缩格式),从而提供更好的视觉保真度。 2 倍性能提升:最令人兴奋的更新或许是性能提升。与前几代产品相比,NVIDIA Blackwell 的性能提升了 2 倍,这意味着视频播放更流畅,解码速度更快,即使是对于要求严苛的视频文件也是如此。 支持分辨率高达 8192 × 8192: 无论您是处理超高清视频还是处理先进的 3D 内容,NVIDIA Blackwell 
都能处理高达 8192 × 8192 的分辨率。这意味着您可以更清晰、更精细地解码视频。 无论您是处理高分辨率视频项目,还是处理大规模视频处理任务,这些改进都能确保 NVIDIA Blackwell 提供出色的视频解码功能。 HEVC 解码可提高灵活性和速度 高效视频编码 (HEVC) 已成为高效视频压缩的首选格式,而 NVIDIA Blackwell 将其提升至更高水平。新的增强功能包括: 支持 4:2:2 8/10 位和 12 位解码: NVIDIA Blackwell 现在为 HEVC 提供了更广泛的解码选项,可更轻松地处理高质量视频,并将保真度损失降至最低。 性能提升 :得益于 NVDCLK 的改进,您可以在解码 HEVC 内容时看到明显的性能提升。这转化为更流畅的播放和更高效的视频渲染。 图 3、H.264 和 HEVC 解码性能 (fps) NVDECODE API 为 4:2:2 解码输出引入了两种新格式: cudaVideoSurfaceFormat_NV16=4, /**< Semi-Planar YUV 422 [Y plane followed by interleaved UV plane] */ cudaVideoSurfaceFormat_P216=5 /**< 16 bit Semi-Planar YUV 422[Y plane followed by interleaved UV plane]*/ 动态解码表面分配,提高 GPU 显存效率 Video Codec SDK 13.0 的一个突出特性是引入了动态解码表面分配。通过适应不同的视频位流,此功能可减少不必要的内存消耗,从而增加解码会话的数量。这一改进对于在一些视频解码用例中优化 GPU 内存使用至关重要。 在某些情况下,bitstream 使用的参考帧可能少于 DPB 大小,因此会浪费宝贵的视频内存。分配最大解码表面会产生更高的显存占用空间。 比较新的 SDK 在此版本发布之前,视频应用程序根据 DPB 大小创建了具有固定最小表面数量的解码器对象。这种方法虽然实用,但有时分配的内存超过了需要。 CUVIDDECODECREATEINFO stDecodeCreateInfo; memset(&stDecodeCreateInfo, 0x0, sizeof(CUVIDDECODECREATEINFO )); . . . // Setup the remaining structure members stDecodeCreateInfo.ulNumDecodeSurfaces = <dpb_size>// Prior to SDK 13.0, this could not change rResult = cuvidCreateDecoder(&hDecoder, &stDecodeCreateInfo) 借助 Video Codec SDK 13.0,您可以灵活地仅在需要时分配额外的 YUV 表面。您可以创建具有较小 YUV 表面初始分配 (例如 3 或 4) 的解码器对象,并使用 cuvidReconfigureDecoder API 根据需要动态分配更多表面。这种动态分配可减少不必要的内存消耗,并提高解码过程的整体效率。 借助 Video Codec SDK 13.0 解锁新的可能性 NVIDIA Video Codec SDK 13.0 借助最新的 NVIDIA Blackwell GPU 突破视频编码和解码的极限。无论是创作广播内容、编辑高质量视频,还是处理最新的 8K 视频片段,此更新都提供了改进工作流程所需的工具。 Video Codec SDK 13.0 的压缩效率有所提升,支持新的颜色格式,编码质量有所提升,可满足现代视频应用日益增长的需求。 为支持 Video Codec SDK 13.0 的发布,NVIDIA 合作伙伴 Blackmagic、Capcut 和 Wondershare 已在其视频工作流中集成 4:2:2 编码、4:2:2 解码、AV1 UHQ 和分割编码等功能。
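下面给出一个示意性的代码草图(非完整实现),对应上文描述的“先用较少的 YUV 表面创建解码器、再按需扩容”的流程。其中 cuvidCreateDecoder、cuvidReconfigureDecoder 为上文提到的 NVDECODE API;CUVIDRECONFIGUREDECODERINFO 中 ulNumDecodeSurfaces 等字段的具体定义请以 SDK 13.0 头文件为准,此处仅作假设性示例。

```cpp
#include "nvcuvid.h"

// 示意:先以较小的表面数量创建解码器,解码过程中发现码流实际需要更多
// 参考帧时,再通过 cuvidReconfigureDecoder 动态增加表面,避免一开始就按
// 最大 DPB 大小分配显存。
CUresult CreateAndGrowDecoder(CUVIDDECODECREATEINFO* createInfo, CUvideodecoder* phDecoder)
{
    // 初始只分配 4 个 YUV 表面,而不是完整的 DPB 大小
    createInfo->ulNumDecodeSurfaces = 4;
    CUresult result = cuvidCreateDecoder(phDecoder, createInfo);
    if (result != CUDA_SUCCESS)
        return result;

    // ……解码若干帧后,假设发现需要 8 个表面……
    CUVIDRECONFIGUREDECODERINFO reconfig = {};
    reconfig.ulWidth             = createInfo->ulWidth;
    reconfig.ulHeight            = createInfo->ulHeight;
    reconfig.ulTargetWidth       = createInfo->ulTargetWidth;
    reconfig.ulTargetHeight      = createInfo->ulTargetHeight;
    reconfig.ulNumDecodeSurfaces = 8;   // 仅在需要时增加表面数量(字段名请以头文件为准)
    return cuvidReconfigureDecoder(*phDecoder, &reconfig);
}
```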
https://developer.nvidia.com/blog/enabling-stereoscopic-and-3d-views-using-mv-hevc-in-nvidia-video-codec-sdk-13-0/
Enabling Stereoscopic and 3D Views Using MV-HEVC in NVIDIA Video Codec SDK 13.0
NVIDIA announces the implementation of Multi-View High Efficiency Video Coding (MV-HEVC) encoder in the latest NVIDIA Video Codec SDK release, version 13.0. This significant update marks a major leap forward in hardware-accelerated, multi-view video compression. It offers enhanced compression efficiency and quality for stereoscopic and 3D video applications as compared to simulcast encoding. Understanding MV-HEVC MV-HEVC is an extension of the High Efficiency Video Coding (HEVC) standard, designed to efficiently compress multiple video views of the same scene captured from different vantage points. It addresses the limitations of traditional video-coding methods such as simulcast encoding, which often resulted in high bitrates for multi-view content and lacked efficient inter-view prediction. Figure 1. MV-HEVC for 3D content Key features in NVIDIA Video Codec SDK 13.0 MV-HEVC The MV-HEVC implementation in the Video Codec SDK 13.0 includes the following features: Support for two views 8-bit and 10-bit 4:2:0 color sampling Lookahead functionality Delta-QP support Separate bitrate allocation for secondary views These features provide you with flexible options for encoding multi-view content while maintaining high quality and efficiency. Technical implementation The MV-HEVC implementation uses advanced techniques to achieve superior compression: Inter-view prediction : Reuses data from different views for maximum compression. Coding tools : Employs motion-compensated prediction and disparity estimation. Scalability : Implements a base layer for backward compatibility and enhancement layers for improved quality. Bitrate allocation : Implements separate allocation for secondary views, allowing application developers to experiment and fine-tune quality. For more information about implementation, see the NVIDIA Video Codec SDK documentation . Quality gains The addition of MV-HEVC support has resulted in significant quality improvements, with bitrate savings of up to 27% as compared to simulcast encoding of two views. Figure 2. Shark 1920×1088 600 frames (Camera 1 and Camera 5) 30fps Preset P7 Figure 3 shows the VMAF bit rate savings for the same clip. Figure 3. Shark 1920×1088 600 frames (Camera 1 and Camera 5) 30fps Preset P7 Real-world applications MV-HEVC is particularly useful for the following applications: 3D movies : Enhancing the viewing experience with efficient bit distribution between left- and right-eye views. AR/VR : Creating immersive environments with reduced data requirements. Spatial video : Supporting new technologies for capturing and viewing immersive 3D videos using advanced smartphones and specialized headsets. Auto: Process and compress multi-camera views for tasks such as object detection and tracking. The \Samples\AppEncode\AppEncCuda sample application in the Video Codec SDK now supports MV-HEVC encoding, providing you with a powerful tool for implementing multi-view video compression. For more information about implementation details for MV-HEVC support in AppEncCuda, see the NVIDIA Video Codec SDK documentation . Backward compatibility The NVIDIA implementation ensures backward compatibility, supporting two views with identical GOP structures for efficient random access and seeking. This approach lays the foundation for future expansion, as it can be readily adapted to accommodate more views in more complex multi-view applications. 
By maintaining consistent GOP structures across both views, we optimize synchronization and facilitate seamless navigation within the video content. Conclusion The addition of MV-HEVC support in NVIDIA Video Codec SDK 13.0 marks a significant milestone in multi-view video coding capabilities. With impressive BD-Rate gains and no performance overhead, this update provides you with a powerful tool for creating high-quality stereoscopic and 3D video experiences. We encourage you to explore this new feature and integrate MV-HEVC encoding into your applications to take advantage of the improved compression efficiency and quality.
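The 27% figure above is measured against simulcast encoding of the two views. As a quick illustration of how such a saving is computed (the bitrates below are hypothetical placeholders, not measurements from the Shark clip), here is a minimal sketch:

```cpp
#include <cstdio>

// Illustrative only: percentage bitrate saved by a joint MV-HEVC encode versus
// simulcast (independently encoded left + right views) at matched quality.
double BitrateSavingsPercent(double leftViewKbps, double rightViewKbps, double mvHevcKbps)
{
    const double simulcastKbps = leftViewKbps + rightViewKbps;
    return 100.0 * (1.0 - mvHevcKbps / simulcastKbps);
}

int main()
{
    // Hypothetical numbers chosen to mirror the ~27% figure quoted above.
    std::printf("Savings: %.1f%%\n", BitrateSavingsPercent(10000.0, 10000.0, 14600.0));
    return 0;
}
```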
https://developer.nvidia.com/zh-cn/blog/enabling-stereoscopic-and-3d-views-using-mv-hevc-in-nvidia-video-codec-sdk-13-0/
在 NVIDIA 视频编解码器 SDK 13.0 中使用 MV-HEVC 启用立体和 3D 视图
NVIDIA 宣布在最新的 NVIDIA 视频编解码器 SDK 版本 13.0 中实施多视图高效视频编码 (MV-HEVC) 编码器。这一重要更新标志着硬件加速的多视图视频压缩技术实现了重大飞跃。与 simulcast 编码相比,它为立体和 3D 视频应用提供了更高的压缩效率和质量。 了解 MV-HEVC MV-HEVC 是高效视频编码 (HEVC) 标准的扩展,旨在高效压缩从不同有利位置拍摄的同一场景的多个视频视图。它解决了传统视频编码方法 (如 simulcast 编码) 的局限性,这些方法通常会导致多视图内容的高比特率,并且缺乏高效的视图间预测。 图 1、适用于 3D 内容的 MV-HEVC NVIDIA 视频编解码器 SDK 13.0 MV-HEVC 的主要特性 Video Codek SDK 13.0 中的 MV-HEVC 实现包括以下功能: 支持两种视图 8 位和 10 位 4:2:0 颜色采样 前瞻性功能 Delta-QP 支持 为辅助视图单独分配比特率 这些功能为您提供灵活的多视图内容编码选项,同时保持高质量和高效率。 技术实施 MV-HEVC 实现使用高级技术实现出色的压缩: 视图间预测: 重复使用来自不同视图的数据,以实现最大压缩。 编码工具: 采用运动补偿预测和差异估计。 可扩展性 :实现用于向后兼容性的基础层,以及用于提高质量的增强层。 比特率分配: 实现对辅助视图的单独分配,允许应用程序开发者实验和微调质量。 有关实施的更多信息,请参阅 NVIDIA Video Codec SDK 文档 。 质量提升 添加的 MV-HEVC 支持显著提高了质量,与两个视图的同步广播编码相比,可节省高达 27% 的比特率。 图 2、Shark 1920 × 1088 600 帧 (Camera 1 和 Camera 5) 30fps 预设 P7 图 3 显示了同一片段节省的 VMAF 比特率。 图 3、Shark 1920 × 1088 600 帧 (Camera 1 和 Camera 5) 30fps 预设 P7 真实应用 MV-HEVC 尤其适用于以下应用: 3D 电影 :通过在左眼和右眼视图之间进行高效位分布来增强观看体验。 AR/VR:创建具有更低数据要求的沉浸式环境。 空间视频:支持使用先进的智能手机和专用头显设备捕捉和观看沉浸式 3D 视频的新技术。 自动:处理和压缩多摄像头视图,以执行物体检测和追踪等任务。 Video Codec SDK 中的 Samples\AppEncode\AppEncCuda 示例应用现在支持 MV-HEVC 编码,为您提供用于实现多视图视频压缩的强大工具。如需详细了解 AppEncCuda 中 MV-HEVC 支持的实现详情,请参阅 NVIDIA Video Codec SDK 文档 。 向后兼容性 NVIDIA 实施可确保向后兼容性,支持具有相同 GOP 结构的两个视图,以实现高效的随机访问和查找。这种方法为未来的扩展奠定了基础,因为它可以轻松进行调整,以在更复杂的多视图应用中适应更多视图。通过在两个视图中保持一致的 GOP 结构,我们优化了同步,并促进了视频内容内的无缝导航。 结束语 NVIDIA VIDEO CODEC SDK 13.0 中新增的 MV-HEVC 支持标志着多视图视频编码功能的重要里程碑。带来了令人惊叹的 BD-Rate 获益且无性能开销,为您提供了一款用于创建高质量立体和 3D 视频体验的强大工具。 我们建议您探索这项新功能,并将 MV-HEVC 编码集成到您的应用中,以利用改进后的压缩效率和质量。
https://developer.nvidia.com/blog/new-ai-model-offers-cellular-level-view-of-cancerous-tumor/
New AI Model Offers Cellular-Level View of Cancerous Tumors
Researchers studying cancer unveiled a new AI model that provides cellular-level mapping and visualizations of cancer cells, which scientists hope can shed light on how—and why—certain inter-cellular relationships trigger cancers to grow. BioTuring , a San Diego-based startup, announced an AI model that can quickly create detailed visualizations of cancerous tumors—at single-cell resolution. This type of granular data shows a cell's size, shape, which genes are activated, and, crucially, the relative spatial position of millions of different cells within a tissue sample. Unlike traditional, far slower biological computation methods, the new model provides real-time, high-resolution insights into tumor dynamics and how cancerous and immune cells interact. "There are around 30 trillion cells in the human body, and if you look at a large tumor biopsy, it has a few million cells," said Son Pham, BioTuring's CEO. "The analogy would be—imagine you're analyzing satellite imagery with a very high resolution, trying to understand how a city works. What our model does, in the context of biology, is show you each house, what's inside those houses, who's talking to whom, and what they're saying." "Similarly, our models let you see which cells are talking to which other cells, which groups are forming and talking amongst themselves, and what kind of relationships they're forming—which can answer some of the most complex challenges in clinical oncological research." BioTuring, a member of the NVIDIA Inception program for startups, is pursuing its research in single-cell spatial omics, a subfield of biology which examines biological molecules—like messenger RNA and proteins—in their original spatial context in tissue. Video 1. Demo of BioTuring's SpatialX deep learning platform for unified multi-technology spatial data analysis. To create its high-resolution mapping, or "disease cell atlases", including of ovarian cancer cells, the team used NVIDIA H100 Tensor Core GPUs , and NVIDIA cuBLAS and NVIDIA cuSPARSE libraries to accelerate matrix operations in optimized analyses such as the Weighted Gene Co-expression Network Analysis, a more traditional machine learning approach, and CellChat. Knowing how cancer cells develop and metastasize within a human body—and specifically, within a microenvironment within an organ—could improve screening methods for early cancer detection. Additionally, researchers can use the model's cellular insights to better understand tumor heterogeneity—or cancerous tumors in the same patient with cells that materially differ from one another. The new model's enhanced visual granularity means researchers and drug developers have a far better chance at discovering molecular markers that can more accurately target cancerous cells. The model can see, for instance, how a person's killer T cells—humans' disease-fighting cells—can change shape to engage cancers. By knowing how a person's immune system morphs to fight a specific cancer, a drug developer could create synthetic therapies that support the patient's immune system. "We're helping uncover biological discoveries that researchers can use to drive therapeutic strategies," said Rujuta Narurkar, BioTuring's COO. "Understanding the tumor's microenvironment through various stages will help map the trajectory of cancer and potentially reveal the source of cancer itself. This new level of cancer tissue resolution has never been possible before. But new technologies are now bringing it within reach."
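As an illustration of the kind of matrix operation that cuBLAS accelerates in co-expression analyses such as WGCNA, the gene-gene co-expression matrix can be formed as one large GEMM on the GPU. This is a minimal sketch under simplifying assumptions (per-gene normalized data, single precision), not BioTuring's actual pipeline:

```cpp
#include <cublas_v2.h>
#include <cuda_runtime.h>

// Minimal sketch: compute a gene-gene co-expression matrix C = X * X^T on the
// GPU, where X is a genes x cells expression matrix that has already been
// centered and normalized per gene. With normalized rows, C approximates the
// correlation matrix used by WGCNA-style co-expression analyses.
void CoexpressionMatrix(const float* hX, int numGenes, int numCells, float* hC)
{
    float *dX = nullptr, *dC = nullptr;
    cudaMalloc(&dX, sizeof(float) * numGenes * numCells);
    cudaMalloc(&dC, sizeof(float) * numGenes * numGenes);
    cudaMemcpy(dX, hX, sizeof(float) * numGenes * numCells, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    // cuBLAS is column-major: the row-major genes x cells array is seen as a
    // numCells x numGenes column-major matrix A = X^T, so computing A^T * A
    // there yields the genes x genes product X * X^T (symmetric, so the
    // storage order of the result does not matter).
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_T, CUBLAS_OP_N,
                numGenes, numGenes, numCells,
                &alpha, dX, numCells, dX, numCells,
                &beta, dC, numGenes);

    cudaMemcpy(hC, dC, sizeof(float) * numGenes * numGenes, cudaMemcpyDeviceToHost);
    cublasDestroy(handle);
    cudaFree(dX);
    cudaFree(dC);
}
```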
https://developer.nvidia.com/zh-cn/blog/new-ai-model-offers-cellular-level-view-of-cancerous-tumor/
新的 AI 模型提供癌症细胞级视图
研究癌症的研究人员推出了一种新的 AI 模型,该模型提供了癌细胞的细胞级映射和可视化,科学家希望该模型能够揭示某些细胞间关系如何以及为什么会引发癌症生长。 位于圣地亚哥的初创公司 BioTuring 宣布推出一种 AI 模型,该模型可以以单细胞分辨率快速创建癌症肿瘤的详细可视化图像。这种类型的颗粒数据显示了细胞的大小、形状、哪些基因被激活,关键是显示了组织样本中数百万个不同细胞的相对空间位置。 与速度慢得多的传统生物计算方法不同,新模型提供了有关肿瘤动力学以及癌细胞和免疫细胞相互作用的实时高分辨率见解。 BioTuring 首席执行官 Son Pham 表示:“人体大约有 30 万亿个细胞,如果观察一个大型肿瘤活检样本,其中就有几百万个细胞。“类比是 – 想象一下,您正在以非常高的分辨率分析卫星图像,试图了解城市的运作方式。在生物学领域,我们的模型所做的是向您展示每栋房子、这些房子里的东西、谁在和谁说话,以及他们在说什么。” “同样,我们的模型可让您了解哪些细胞正在与哪些其他细胞对话,哪些细胞群正在形成并相互交流,以及它们正在形成何种关系,从而应对临床肿瘤研究中一些非常复杂的挑战。” BioTuring 是 NVIDIA Inception 计划的成员,正在开展单细胞空间组学的研究。单细胞空间组学是生物学的一个分支领域,用于在组织中的原始空间环境中检查生物分子 (例如信使 RNA 和蛋白质)。 视频 1、BioTuring 的 SpatialX 深度学习平台演示,用于统一的多技术空间数据分析。 为了创建高分辨率映射 (或“疾病细胞图谱”,包括卵巢癌细胞图谱),该团队使用 NVIDIA H100 Tensor Core GPUs ,以及 NVIDIA cuBLAS 和 NVIDIA cuSPARSE 库来加速优化分析中的矩阵运算,例如较传统的机器学习算法 Weighted Gene Co-expression Network Analysis 和 CellChat。 了解癌细胞在人体内部(尤其是在器官内的微环境中)的发展和转移情况,可以改进癌症早期检测的筛查方法。此外,研究人员可以利用该模型的细胞见解来更好地了解肿瘤异质性,即同一患者的肿瘤中存在彼此差异显著的细胞。 新模型增强的视觉粒度意味着研究人员和药物开发者有更大的机会发现能够更准确地靶向癌细胞的分子标记物。 例如,该模型可以看到一个人的杀手 T 细胞(即人类的抗病细胞)如何改变形状以攻击癌症。通过了解人体的免疫系统如何演变以对抗特定癌症,药物开发者可以开发出支持患者免疫系统的合成疗法。 BioTuring 的首席运营官 Rujuta Narurkar 说:“我们正在帮助揭示生物学发现,研究人员可以利用这些发现来推动治疗策略。“通过各个阶段了解肿瘤的微环境有助于绘制出癌症的轨迹,并可能揭示癌症本身的来源。这种更高水平的癌症组织分辨率在以前是不可能实现的。但现在,新技术使其触手可及。”
https://developer.nvidia.com/blog/get-started-with-neural-rendering-using-nvidia-rtx-kit/
Get Started with Neural Rendering Using NVIDIA RTX Kit
Neural rendering is the next era of computer graphics.  By integrating neural networks into the rendering process, we can take dramatic leaps forward in performance, image quality, and interactivity to deliver new levels of immersion. NVIDIA RTX Kit is a suite of neural rendering technologies to ray-trace games with AI, render scenes with immense geometry, and create game characters with photorealistic visuals. RTX Kit is now available on the /NVIDIA RTX/RTXkit GitHub repo and brings together several new and familiar RTX and AI components. Though it is possible to use each component individually, RTX Kit presents a unified location to access these essential rendering SDKs. Name Description Status RTX Neural Shaders (new) Train and deploy neural networks within shaders to unlock new compression and approximation techniques for next-generation asset generation. Available now RTX Neural Materials (new) Use AI to compress shader code of complex multi-layered materials for up to 5X faster material processing to bring real-time performance to film-quality assets. Get notified when RTX Neural Materials is available RTX Neural Texture Compression (new) Use AI to compress textures with up to 8x VRAM improvement at similar visual fidelity to traditional block compression at runtime. Available now RTX Texture Filtering (new) Randomly samples textures after shading and filters difficult volumes, reducing artifacts and improving image quality. Available now RTX Mega Geometry (new) Accelerate BVH building for cluster-based geometry systems like Nanite for real-time path tracing of millions of triangles. SDK Available now Get notified when it is available in the NVIDIA RTX Branch of Unreal Engine 5 (NvRTX) RTX Character Rendering (new) Set of tools to create path-traced hair and skin. Available now DLSS 4 (new) Multi Frame Generation has been introduced for GeForce RTX 50 Series GPUs, allowing for generation of up to 3 frames per rendered frame. Available now through NVIDIA Streamline SDK and Unreal Engine 5 plugin Reflex 2 (new) Reflex technologies optimize the graphics pipeline for ultimate responsiveness, providing faster target acquisition, quicker reaction times, and improved aim precision in competitive games. Reflex Low Latency available now Get notified for Reflex Frame Warp availability RTX Dynamic Illumination Library of importance sampling algorithms that sample the most important lights in a scene and render them physically accurate. Provides implementations for three techniques: ReSTIR DI, GI and PT. Available now ReSTIR PT coming soon to the SDK and the NVIDIA RTX Branch of Unreal Engine 5 (NvRTX) RTX Global Illumination Scalable solution to compute multi-bounce indirect lighting. Provides implementations of two techniques: Neural Radiance Cache (NRC) and Spatially Hashed Radiance Cache (SHARC). Available now (NRC is experimental) RTX Path Tracing Reference solution for optimized real-time path tracing utilizing several RTX technologies. Available now NVIDIA Real-Time Denoisers Library of denoisers designed to work with low ray-per-pixel signals. Available now NVIDIA Opacity Micro-Map Efficiently map intricate geometries onto triangles and encode their opacity for better ray tracing performance. SDK available now Coming soon to the NVIDIA RTX Branch of Unreal Engine 5 (NvRTX) RTX Memory Utility Compaction and suballocation of acceleration structures to reduce memory consumption. 
Available now SpatioTemporal Blue Noise Utility tool containing some pregenerated blue noise textures and sample code for generating new blue noise textures Available now Shader Execution Reordering Performance optimization that unlocks the potential for better execution and memory coherence in ray tracing shaders. API available now for NVIDIA RTX 40 Series and later Table 1. NVIDIA RTX Kit technologies In this tutorial, we focus on new SDKs available today through NVIDIA RTX Kit : RTX Neural Shaders RTX Neural Texture Compression RTX Texture Filtering RTX Mega Geometry RTX Character Rendering While DirectX Support is not available yet for neural shading applications, Vulkan support is available today. For more information about the DLSS integration process, see How to Integrate NVIDIA DLSS 4 into Your Game with NVIDIA Streamline . RTX Neural Shaders Figure 1. Example of a scene from Zorah, built in Unreal Engine 5, rendered using RTX Neural Shaders NVIDIA RTX Neural Shaders bring small neural networks into programmable shaders. This technology framework enables the training and deployment of neural networks directly within shaders, enabling you to compress game data and shader code and approximate film-quality materials, volumes, geometry, and more in real time. Here’s how to get started. Check prerequisites: Verify that your system meets the following requirements (Table 2). GPU Architecture Driver CMake Vulkan Visual Studio Windows SDK Slang Turing and newer 572.16 3.24.3 1.3.296 2022+ 10.0.22621.0 v2025.3.3 Table 2. NVIDIA RTX Neural Shaders requirements Clone the /NVIDIA-RTX/RTXNS repo: Install Git and Git-lfs, if you haven’t already and read through the information on the repo. Build the samples: Follow the instructions located in the repo to build the solution using Visual Studio. Run the samples: There are three samples included with the SDK. Each one demonstrates how to use the SDK for various tasks: Simple Inferencing :  Demonstrates how to implement an inference shader using some of the low-level building blocks from the SDK. Simple Training :  Provides an introduction to training a neural network for use in a shader. Shader Training :  A more complex sample that shows how to train an MLP model on the Disney BRDF shader. Figure 2. Simple Inferencing sample Figure 3. Simple Training sample Figure 4. Shader Training sample Create a neural shader: Follow the How to Write Your First Neural Shader tutorial.  Use the sample code and the Library Usage Start Guide as reference. In addition to the RTX Neural Shaders, NVIDIA provides two applications of neural shading through the following: Neural Texture Compression: Implements compression and decompression of textures using neural networks, and enables transcoding into other compressed texture formats. Neural Materials: Implements compression of material data using neural networks. To be notified when this SDK is available, see the NVIDIA RTX Kit Notify Me Form . RTX Neural Texture Compression Figure 5. Scene from Zorah, built in Unreal Engine 5, showing Neural Texture Compression in use One application of RTX Neural Shaders available today is RTX Neural Texture Compression. This SDK uses AI to compress textures more efficiently, reducing texture memory consumption by up to 8x compared to traditional block compression. Check prerequisites: Verify that your system meets the following requirements (Table 3): GPU Architecture Driver CMake Vulkan Windows SDK Turing and newer 570+ 3.28 1.3 10.0.22621.0 Table 3. 
RTX Neural Texture Compression requirements Clone the /NVIDIA-RTX/RTXNTC repo: Install Git and Git-lfs, if you haven’t already and read through the information on the repo. Build the samples: Follow the instructions located in the repo to build the solution using Visual Studio. Run the samples: There are three applications included. Together, these tools show how NTC works and how to use it: NTC Command-Line Tool (ntc-cli): Provides a tool for compression and decompression of material texture sets. NTC Explorer: Enables interactive experimentation with neural texture compression, works as a viewer for NTC files. NTC Renderer: Demonstrates how to render a GLTF model with NTC materials using Inference on Load or Inference on Sample. Figure 6. NTC Renderer Figure 7. NTC Explorer Integrate the SDK: To add NTC to your application, follow the Integration Guide . Verify reduced memory utilization: Use NVIDIA NSight Systems to check memory usage and compare it to traditional textures. RTX Texture Filtering Figure 8. Filter comparisons under magnification RTX Texture Filtering improves quality and efficiency of texturing filtering, especially for AI compressed textures. It does this through stochastic sampling of texture filters, which enables the practical and efficient implementation of filtering after shading. RTX Texture Filtering is designed to enable easy integration of this technique into your shader libraries. Check prerequisites: Verify that your system meets the following requirements (Table 4): GPU Architecture Driver CMake Visual Studio Windows SDK DirectX Raytracing 1.1 API supported GPU and newer 555.85+ 3.24.3+ 2022+ 10.0.20348.0+ Table 4. RTX Texture Filtering requirements Clone the /NVIDIA-RTX/RTXTF repo: Install Git and Git-lfs, if you haven’t already and read through the information on the repo. Observe the differences in filtering techniques: The GitHub repo provides many examples of filter comparisons to help you better understand the benefit that RTX Texture Filtering brings over other solutions. Run the sample application: The repo includes an application that can be used to visualize the effects of RTX Texture Filtering. Figure 9. Texture Filtering sample Integrate the RTX Texture Filtering shade library into your shader framework: Follow the instructions in the Integration Guide to add it to your application. RTX Mega Geometry Figure 10. RTX Mega Geometry comparison As hardware becomes more powerful, the amount of geometric detail in real-time computer graphics is increasing rapidly. This growth is occurring in two areas: Higher instance counts (more objects in a scene) Greater triangle density (more detailed individual objects) RTX Mega Geometry addresses these challenges by accelerating bounding volume hierarchy (BVH) build speed for cluster-based systems like Nanite and intelligently compressing and caching clusters of triangles over many frames. This enables the streaming of various levels of detail and extreme triangle density all while path tracing. RTX Mega Geometry is available today through an SDK and will be available in the NVIDIA RTX Branch of Unreal Engine 5 (NvRTX). Check prerequisites: Verify that your system meets the following requirements (Table 5): GPU Architecture Driver CMake Windows SDK Visual Studio Turing and newer 570+ 3.28+ 10.0.20348 2019+ Table 5. NVIDIA RTX Mega Geometry requirements Clone the /NVIDIA-RTX/RTXMG repo: Install Git and Git-lfs, if you haven’t already and read through the information on the repo. 
Build the sample: Follow the instructions located in the repo to build the solution using Visual Studio. Run the sample: Observe how Mega Geometry enables high image quality with low memory usage. Figure 11. RTX Mega Geometry sample Observe the effect of Mega Geometry: Use the built in profiler tool to view the impact of Mega Geometry in a variety of preset scenes. Review the sample code and integrate the API: To add Mega Geometry to your application, take a look at the sample application and refer to the API calls used. RTX Character Rendering Figure 12. Sample using RTX Character Rendering RTX Character Rendering consists of four algorithms: Subsurface Scattering (SSS): Renders skin with accurate lighting and translucency. It enables path-traced skin, which enhances realism. The SDK implements a combined (SSS) solution which extends the SOTA Burley Diffusion Profile with a single scattering term. Linear Swept Spheres (LSS): Adds the NVIDIA Blackwell-accelerated sphere and curve primitive for strand-based, path-traced hair, which brings added depth and volume. This algorithm is only compatible with NVIDIA RTX 50 Series GPUs. Enhanced analytical Bi-Directional Scattering Distribution Function (BSDF): Provides shading for strand-based hair. Disjoint Orthogonal Triangles Strips (DOTS): Provides high-quality strand-based hair for all GPUs. Check prerequisites: Verify that your system meets the following requirements (Table 6): GPU Architecture Driver CMake Vulkan Windows SDK Volta and newer (Blackwell required for LSS) 570+ 3.24.3 1.3.268+ 10.0.20348+ Table 6. NVIDIA RTX Character Rendering requirements Clone the /NVIDIA-RTX/RTXCR repo: Install Git and Git-lfs, if you haven’t already and read through the information on the repo. Build the sample: Follow the instructions located in the repo to build the solution using Visual Studio. Run the sample: Use this opportunity to try out the different path tracing techniques available within the SDK and see how it fits within the context of a game environment. Figure 13. RTX Character Rendering sample Integrate the SDK: To add the RTX Character Rendering SDK to your application, follow the Integration Guide . Summary Get started with developing with NVIDIA RTX Kit today. To see the latest in neural graphics technologies, be sure to check out NVIDIA at GDC .
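The RTX Texture Filtering section above describes stochastic sampling of texture filters after shading. The following CPU-side sketch illustrates the core idea for a single-channel bilinear filter: pick exactly one tap with probability equal to its bilinear weight, so the expected value matches the full filter and the noise resolves under temporal accumulation. It is a conceptual illustration under simplifying assumptions, not the SDK's shader library API:

```cpp
#include <algorithm>
#include <cmath>
#include <random>

// Conceptual CPU-side illustration of stochastic bilinear filtering, mirroring
// the idea behind RTX Texture Filtering rather than its actual shader API.
struct Texture
{
    int width = 0, height = 0;
    const float* texels = nullptr;                  // single-channel, row-major
    float Fetch(int x, int y) const { return texels[y * width + x]; }
};

float StochasticBilinearSample(const Texture& tex, float u, float v, std::mt19937& rng)
{
    // Map UV to texel space and split into an integer base and fractional weights.
    const float x = u * tex.width - 0.5f;
    const float y = v * tex.height - 0.5f;
    const int   x0 = static_cast<int>(std::floor(x));
    const int   y0 = static_cast<int>(std::floor(y));
    const float fx = x - x0;
    const float fy = y - y0;

    // Choose one of the four taps in proportion to its bilinear weight; the
    // joint probabilities (1-fx)(1-fy), fx(1-fy), (1-fx)fy, fx*fy equal the
    // bilinear weights, so the estimator is unbiased.
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
    const int xi = (uniform(rng) < fx) ? x0 + 1 : x0;
    const int yi = (uniform(rng) < fy) ? y0 + 1 : y0;

    // Clamp to the texture bounds and return a single unweighted fetch.
    const int xc = std::min(std::max(xi, 0), tex.width - 1);
    const int yc = std::min(std::max(yi, 0), tex.height - 1);
    return tex.Fetch(xc, yc);
}
```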
https://developer.nvidia.com/zh-cn/blog/get-started-with-neural-rendering-using-nvidia-rtx-kit/
开始使用 NVIDIA RTX 套件进行神经渲染
神经渲染是计算机图形的下一个时代。通过将神经网络集成到渲染过程中,我们可以在性能、图像质量和交互性方面实现巨大飞跃,从而将沉浸感提升到新的高度。 NVIDIA RTX Kit 是一套神经渲染技术,可借助 AI 进行光线追踪游戏、渲染具有宏大几何图形的场景,以及使用逼真视觉效果创建游戏角色。 RTX Kit 现已在 /NVIDIA RTX/RTXkit GitHub 存储库中提供,并整合了几个新的和熟悉的 RTX 和 AI 组件。尽管可以单独使用每个组件,但 RTX Kit 提供了一个统一的位置来访问这些基本渲染 SDK。 姓名 说明 状态 RTX 神经着色器 (新增) 在着色器中训练和部署神经网络,为新一代资产生成解锁新的压缩和近似技术。 现已推出 RTX 神经材质 (新增) 使用 AI 压缩复杂多层材质的着色器代码,使材质处理速度提升高达 5 倍,从而为电影级素材带来实时性能。 在 RTX Neural Materials 推出时收到通知 RTX 神经纹理压缩 (new) 使用 AI 压缩纹理,将 VRAM 提升高达 8 倍,且视觉保真度与运行时的传统块压缩相当。 现已推出 RTX 纹理过滤 (新增) 在着色后对纹理进行随机采样,并过滤难以处理的体积,从而减少伪影并提高图像质量。 现已推出 RTX Mega 几何图形 (新增) 加速基于集群的几何系统 (例如 Nanite) 的 BVH 构建,实现数百万个三角形的实时路径追踪。 SDK 现已推出 ,在 Unreal Engine 5 (NvRTX) 的 NVIDIA RTX 分支中推出时接收通知 RTX 角色渲染 (新增) 一套用于创建路径追踪 hair 和 skin 的工具。 现已推出 DLSS 4 (新增) GeForce RTX 50 系列 GPU 已引入 Multi Frame Generation 技术,支持每渲染帧生成多达 3 帧。 现已通过 NVIDIA Streamline SDK 和 Unreal Engine 5 插件提供 Reflex 2 (新增) Reflex 技术可优化图形管线,实现出色的响应速度,从而加快目标捕获速度、缩短反应时间,并提高竞技游戏的瞄准精度。 Reflex 低延迟现已推出获取 Reflex Frame Warp 可用性通知 RTX 动态照明 重要性采样算法库,可对场景中最重要的光线进行采样,并在物理上准确渲染这些光线。提供三种技术的实现:ReSTIR DI、GI 和 PT。 现已推出 ReSTIR PT,即将用于虚幻引擎 5 的 SDK 和 NVIDIA RTX 分支(NvRTX) RTX 全局光照 用于计算多次反弹间接照明的可扩展解决方案。提供两种技术的实现:Neural Radiance Cache (NRC) 和 Spatially Hashed Radiance Cache (SHARC)。 现已推出 (NRC 处于试验阶段) RTX 路径追踪 参考解决方案,利用多种 RTX 技术优化实时路径追踪,利用了几种 RTX 技术进行优化。 现已推出 NVIDIA Real-Time Denoiser 降噪器库旨在处理低光线采样率信号。 现已推出 NVIDIA 透明度微地图 将复杂的几何图形高效映射到三角形并对其不透明度进行编码,以获得更好的光线追踪性能。 SDK 现已推出 ,即将登陆 NVIDIA RTX 分支的 Unreal Engine 5 (NvRTX) RTX 显存实用程序 对加速结构进行压缩和子分配,以减少内存消耗。 现已推出 时空蓝噪 实用工具包含一些预生成的蓝色噪声纹理和示例代码,用于生成新的蓝色噪声纹理 现已推出 着色器执行重排序 性能优化,释放光线追踪着色器实现更好执行和内存一致性的潜力。 适用于 NVIDIA RTX 40 Series 及更高版本的 API 现已推出 表 1、NVIDIA RTX Kit 技术 在本教程中,我们重点介绍目前通过 NVIDIA RTX Kit 提供的新 SDK: RTX 神经着色器 RTX 神经纹理压缩 RTX 纹理过滤 RTX Mega 几何图形 RTX 角色渲染 虽然神经着色应用程序尚未提供 DirectX 支持, 但 Vulkan 支持现已推出。有关 DLSS 集成过程的更多信息,请参阅如何借助 NVIDIA Streamline 将 NVIDIA DLSS 4 集成到您的游戏中。 RTX 神经着色器 图 1、Zorah 场景示例,使用 Unreal Engine 5 构建,并使用 RTX Neural Shaders 进行渲染 NVIDIA RTX 神经网络着色器将小型神经网络引入可编程着色器。此技术框架支持直接在着色器中训练和部署神经网络,使您能够实时压缩游戏数据和着色器代码,以及近似的电影级材质、体积、几何图形等。 以下是入门指南。 检查预备知识: 验证您的系统是否满足以下要求(表 2)。 GPU 架构 Driver CMake Vulkan Visual Studio Windows SDK Slang Turing 和更新版本 572.16 3.24.3 1.3.295 2022 年* 10.0.22621.0 v2025.3.3 表 2、NVIDIA RTX 神经着色器要求 克隆 /NVIDIA-RTX/RTXNS 存储库 :安装 Git 和 Git-lfs (如果您尚未安装) 并阅读存储库上的信息。 构建示例: 按照存储库中的说明使用 Visual Studio 构建解决方案。 运行示例: SDK 包含三个示例。每个示例都演示了如何将 SDK 用于各种任务: 简单推理 :演示如何使用 SDK 中的一些低级构建块实施推理着色器。 简单训练 :介绍如何训练用于着色器的神经网络 着色器训练 :更复杂的示例,展示如何在 Disney BRDF 着色器上训练 MLP 模型。 图 2. Simple Inferencing 样本 图 3. 
Simple Training sample 图 4、Shader Training 样本 创建神经着色器 :遵循 “How to Write Your First Neural Shader” 教程。使用示例代码和 “Library Usage Start Guide” 作为参考。 除 RTX 神经着色器外,NVIDIA 还通过以下方式提供两种神经着色应用: 神经纹理压缩: 使用神经网络实现纹理的压缩和解压缩,并支持转码为其他压缩纹理格式。 神经材质 :使用神经网络压缩材质数据。如需在此 SDK 推出时收到通知,请参阅 NVIDIA RTX Kit Notify Me 表单 。 RTX 神经纹理压缩 图 5、Zorah 中的场景,使用 Unreal Engine 5 构建,展示了正在使用的 Neural Texture Compression RTX Neural Shaders 目前的一个应用是 RTX Neural Texture Compression。此 SDK 使用 AI 更高效地压缩纹理,与传统块压缩相比,可将纹理内存消耗量降低高达 8 倍。 检查预备知识: 验证您的系统是否满足以下要求(表 3): GPU 架构 Driver CMake Vulkan Windows SDK Turing 和更新版本 570% 3.28 1.3 10.0.22621.0 表 3、RTX 神经纹理压缩要求 克隆 /NVIDIA-RTX/RTXNTC 存储库 :安装 Git 和 Git-lfs (如果您尚未安装) 并阅读存储库上的信息。 构建示例: 按照存储库中的说明使用 Visual Studio 构建解决方案。 运行示例: 包含三个应用程序。这些工具共同展示了 NTC 的工作原理及其使用方式: NTC 命令行工具 (ntc-cli) :提供用于压缩和解压缩材质纹理集的工具。 NTC Explorer: 支持神经纹理压缩的交互式实验,可用作 NTC 文件的查看器。 NTC 渲染器 :演示如何使用 Inference on Load 或 Inference on Sample 渲染具有 NTC 材质的 GLTF 模型。 图 6、NTC 渲染器 图 7、NTC Explorer 集成 SDK: 要将 NTC 添加到您的应用中,请按照 Integration Guide 进行操作。 验证显存占用率降低情况 :使用 NVIDIA NSight Systems 检查显存占用率,并将其与传统纹理进行比较。 RTX 纹理过滤 图 8、放大下的 Filter comparisons RTX 纹理过滤可提高纹理过滤的质量和效率,尤其是对于 AI 压缩纹理而言。它通过纹理滤镜的随机采样来实现这一点,从而实现着色后的实用高效过滤。RTX 纹理过滤旨在将此技术轻松集成到着色器库中。 检查预备知识: 验证您的系统是否满足以下要求(Table 4): GPU 架构 Driver CMake Visual Studio Windows SDK 支持 DirectX Raytracing 1.1 API 的 GPU 和更新版本 255.85% 3.24.3 及以上 2022 年* 10.0.20348.0 及以上 表 4、RTX 纹理过滤要求 克隆 /NVIDIA-RTX/RTXTF 存储库 :安装 Git 和 Git-lfs (如果您尚未安装) 并阅读存储库上的信息。 观察过滤技术的差异: Github 资源库提供了许多过滤器比较示例,可帮助您更好地了解 RTX 纹理过滤相较于其他解决方案的优势。 运行示例应用: 资源库中包含一个可用于可视化 RTX Texture Filtering 效果的应用。 图 9、Texture Filtering 示例 将 RTX 纹理过滤着色库集成到着色器框架中 : 按照 Integration Guide 中的说明将其添加到您的应用中。 RTX Mega 几何图形 图 10. RTX Mega Geometry 对比 随着硬件功能的增强,实时计算机图形中的几何细节量也在迅速增加。这一增长发生在两个领域: 实例数量越多(场景中的对象越多) 三角形密度更高(单个物体细节更丰富) RTX Mega Geometry 可加快 Nanite 等基于集群的系统的边界体积层次结构 (BVH) 构建速度,并智能压缩和缓存多个帧上的三角形集群,从而解决这些挑战。这使得在进行路径追踪时能够流式传输不同级别的细节和极高的三角形密度。 RTX Mega Geometry 现已通过 SDK 提供,并将在 Unreal Engine 5 (NvRTX) 的 NVIDIA RTX 分支中提供。 检查预备知识: 验证您的系统是否满足以下要求 (表 5): GPU 架构 Driver CMake Windows SDK Visual Studio Turing 和更新版本 570% 3.28 及以上 10.0.20348 2019 年* 表 5、NVIDIA RTX Mega Geometry 要求 克隆/NVIDIA-RTX/RTXMG 资源库:安装 Git 和 Git-lfs (如果您尚未安装) 并阅读资源库中的信息。 构建示例: 按照存储库中的说明使用 Visual Studio 构建解决方案。 运行示例: 观察 Mega Geometry 如何在低内存占用的情况下实现高质量图像。 图 11、RTX Mega Geometry 示例 观察 Mega Geometry 的效果: 使用内置分析器工具查看 Mega Geometry 在各种预设场景中的影响。 查看示例代码并集成 API :要将 Mega Geometry 添加到您的应用中,请查看示例应用并参考所使用的 API 调用。 RTX 角色渲染 图 12、使用 RTX 角色渲染的示例 RTX 角色渲染由四种算法组成: 次表面散射 (SSS) :以准确的光照和半透明效果渲染皮肤。它支持路径追踪皮肤,从而增强真实感。该 SDK 实现了组合 (SSS) 解决方案,通过单个散射术语扩展了 SOTA Burley Diffusion Profile。 线性扫描球体 (LSS) :为基于链的路径追踪头发添加 NVIDIA Blackwell 加速球体和曲线基元,从而增加深度和体积。此算法仅与 NVIDIA RTX 50 系列 GPUs 兼容。 增强型分析 Bi-Directional Scattering Distribution Function (BSDF) :为基于链的头发提供着色。 不交正交三角形条 (DOTS) :为所有 GPU 提供基于链的高质量毛发。 检查预备知识: 验证您的系统是否满足以下要求(Table 6): GPU 架构 Driver CMake Vulkan Windows SDK Volta 及更新版本 (LSS 需要 Blackwell) 570% 3.24.3 1.3.268 及以上 10.0.20348 及以上 表 6、NVIDIA RTX 角色渲染要求 克隆 /NVIDIA-RTX/RTXCR 存储库 :安装 Git 和 Git-lfs (如果您尚未安装) 并阅读存储库上的信息。 构建示例: 按照存储库中的说明使用 Visual Studio 构建解决方案。 运行示例 :利用此机会试用 SDK 中提供的不同路径追踪技术,并了解它如何适应游戏环境的上下文。 图 13、RTX 角色渲染示例 集成 SDK:要将 RTX Character Rendering SDK 添加到您的应用中 , 请按照 Integration Guide 进行操作。 总结 立即开始使用 NVIDIA RTX Kit 进行开发。要了解神经图形技术的新动态,请务必查看 GDC 上的 NVIDIA 大会 。
https://developer.nvidia.com/blog/nvidia-rtx-mega-geometry-now-available-with-new-vulkan-samples/
NVIDIA RTX Mega Geometry Now Available with New Vulkan Samples
Geometric detail in computer graphics has increased exponentially in the past 30 years. To render high quality assets with higher instance counts and greater triangle density, NVIDIA introduced RTX Mega Geometry. RTX Mega Geometry is available today through NVIDIA RTX Kit , a suite of rendering technologies to ray trace games with AI, render scenes with immense geometry, and create game characters with photorealistic visuals. As part of this release, new Vulkan samples have been made available to all developers. These open-source samples show how you can use clusters of triangles to build acceleration structures faster, render massive amounts of animated geometry, path trace models with streaming level of detail (LoD), and more. NVIDIA is also releasing two libraries to help process geometry into clusters. For more information on RTX Mega Geometry, see the documentation on GitHub . For more information on individual Vulkan samples, visit each repository. Sample: Animated clusters This sample introduces clusters and shows how to use the VK_NV_cluster_acceleration_structure extension to ray trace massive, animated scenes. With NVIDIA RTX Mega Geometry, acceleration structures for animated objects can be built faster than using previous APIs. It also shows how clusters can be rasterized using VK_EXT_mesh_shader . Figure 1. Each mesh in this scene independently deforms, changing the position of every triangle with every frame For more information, see the vk_animated_clusters GitHub repo. Sample: Partitioned top-level acceleration structure Frequently, only parts of a scene change between frames. The new VK_NV_partitioned_acceleration_structure extension enables apps to rebuild parts of a Top-Level Acceleration Structure (TLAS) when part of a scene changes, rather than the whole thing. The partitioned TLAS sample showcases this extension using a simple simulation of more than 100K physics objects. Figure 2. VK_NV_partitioned_acceleration_structure enables part of the TLAS to be rebuilt when only part of it changes For more information, see the vk_partitioned_tlas GitHub repo. Sample: Dynamic tessellation for clusters This sample showcases how to use clusters and VK_NV_cluster_acceleration_structure to path trace dynamic tessellation with displacement, which requires per-frame generation of geometry. It can also render the content using VK_NV_mesh_shader . Figure 3. NVIDIA RTX Mega Geometry can be used for dynamic tessellation For more information, see the vk_tessellated_clusters GitHub repo. Sample: Level of detail for clusters This sample showcases a continuous LoD technique using clusters that uses VK_NV_cluster_acceleration_structure for ray tracing. It can also rasterize the content using VK_NV_mesh_shader . In addition, the sample implements an on-demand streaming system from RAM to VRAM for the geometry. Figure 4. vk_lod_clusters builds levels of detail out of groups of clusters. These are structured so that the rendering is seamless even if the chosen level of detail varies across a mesh For more information, see the vk_lod_clusters GitHub repo. Library: Cluster builder nv_cluster_builder is a small generic spatial clustering C++ library, created to cluster triangle meshes for ray tracing. It implements an algorithm similar to the recursive node-splitting techniques sometimes used to create bounding volume hierarchies. While it is limited to axis-aligned splits, its outputs work well for ray tracing. Figure 5. 
nv_cluster_builder clusters triangles, quads, and smaller clusters, given their bounding boxes and optional connectivity data For more information, see the nv_cluster_builder GitHub repo. Library: Level of detail cluster builder nv_lod_cluster_builder is a continuous LoD mesh library that enables fine-grained control over geometric detail within a mesh, compared to traditional discrete LoD. Clusters of triangles are carefully precomputed by decimating the original mesh in a way that they can be seamlessly combined across different LoD levels. At rendering time, a subset of these clusters can be selected to adaptively provide the required amount of detail as the camera navigates the scene. Figure 6. nv_lod_cluster_builder generates clusterized meshes with a seamless level of detail For more information, see the nv_cluster_lod_builder GitHub repo. These RTX Mega Geometry samples are part of NVIDIA RTX Kit. Learn more about RTX Kit technologies at the NVIDIA RTX Kit website today. Ask questions, provide feedback, and discuss with other developers on the NVIDIA Developer Forums for ray tracing .
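Before enabling the cluster code paths demonstrated in these samples, an application would typically confirm that the device exposes the relevant extensions. The sketch below uses only standard Vulkan enumeration calls with the extension names quoted in this post; treat it as a minimal starting point rather than the samples' actual device-setup code:

```cpp
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

// Minimal sketch: query a physical device for the extensions used by the
// cluster-based samples described above. All three are required here for
// simplicity; an application could also treat mesh shading or partitioned
// TLAS support as optional and fall back accordingly.
bool SupportsClusterExtensions(VkPhysicalDevice physicalDevice)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(physicalDevice, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> extensions(count);
    vkEnumerateDeviceExtensionProperties(physicalDevice, nullptr, &count, extensions.data());

    auto hasExtension = [&](const char* name) {
        for (const VkExtensionProperties& ext : extensions)
            if (std::strcmp(ext.extensionName, name) == 0)
                return true;
        return false;
    };

    return hasExtension("VK_NV_cluster_acceleration_structure") &&
           hasExtension("VK_NV_partitioned_acceleration_structure") &&
           hasExtension("VK_EXT_mesh_shader");
}
```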
https://developer.nvidia.com/zh-cn/blog/nvidia-rtx-mega-geometry-now-available-with-new-vulkan-samples/
NVIDIA RTX Mega Geometry 现已支持新的 Vulkan 示例程序
在过去 30 年里,计算机图形领域的几何细节呈指数级增长。为了渲染具有更高实例数量和三角形密度的高质量素材,NVIDIA 推出了 RTX Mega Geometry。RTX Mega Geometry 现已通过 NVIDIA RTX Kit 推出,这是一套渲染技术,可利用 AI 对游戏进行光线追踪、渲染具有宏大几何图形的场景,以及创建具有逼真视觉效果的游戏角色。 作为此版本的一部分,新的 Vulkan 示例已提供给所有开发者。这些开源示例展示了如何使用三角形集群更快地构建加速结构、渲染大量动画几何图形、具有流式传输细节级别(LoD)的路径追踪模型等。NVIDIA 还将发布两个库,帮助将几何图形处理到集群中。有关 RTX Mega Geometry 的更多信息,请参阅 GitHub 上的文档。有关单个 Vulkan 示例的更多信息,请访问每个存储库。 示例:动画集群 此示例介绍了集群,并展示了如何使用 VK_NV_cluster_acceleration_structure 扩展对大型动画场景进行光线追踪。借助 NVIDIA RTX Mega Geometry,动画对象的加速结构构建速度比使用之前的 API 更快。它还展示了如何使用 VK_EXT_mesh_shader 对集群进行光栅化。 图 1. 每个网格在这个场景中各自变形,每帧都会改变每个三角形的位置 有关更多信息,请参阅 vk_animated_clusters GitHub 存储库。 示例:分区顶层加速结构 通常情况下,只有场景的一部分在帧之间发生变化。借助新的 VK_NV_partitioned_acceleration_structure 扩展,应用可以在部分场景发生变化时重建顶层加速结构(TLAS)的部分,而非重建整个场景。分区的 TLAS 示例展示了此扩展,它使用对超过 100K 个物理对象的简单模拟。 图 2 、VK_NV_partitioned_acceleration_structure 支持在仅部分 TLAS 发生变化时重建部分 TLAS 有关更多信息,请参阅 vk_partitioned_tlas GitHub 存储库。 示例:集群的动态曲面细分 此示例展示了如何使用集群和 VK_NV_cluster_acceleration_structure 来追踪具有位移的动态曲面细分,这需要每帧生成几何图形。它还可以使用 VK_NV_mesh_shader 渲染内容。 图 3、NVIDIA RTX Mega Geometry 可用于动态曲面细分 有关更多信息,请参阅 vk_tessellated_clusters GitHub 存储库。 示例:集群的细节级别 此示例展示了一种使用集群的连续 LoD 技术,该集群使用 VK_NV_cluster_acceleration_structure 进行光线追踪。它还可以使用 VK_NV_mesh_shader 对内容进行光栅化。此外,该示例针对几何图形实现了从 RAM 到 VRAM 的按需串流系统。 图 4.vk_lod_clusters 根据集群构建细节级别。这些都是结构化的,因此即使网格中选择的细节级别不同,渲染也是无缝的 有关更多信息,请参阅 vk_lod_clusters GitHub 存储库。 库:集群构建器 nv_cluster_builder 是一个小型通用空间聚类 C++ 库,用于聚类三角形网格以进行光线追踪。它实现了一种类似于递归节点分割技术的算法,有时用于创建边界体积层次结构。虽然它仅限于进行轴对齐的拆分,但其输出非常适合光线追踪。 图 5 .nv_cluster_builder 集群中三角形、四边形和较小集群的边界框和可选连接数据 有关更多信息,请参阅 nv_cluster_builder GitHub 存储库。 库:细节级别集群构建器 nv_lod_cluster_builder 是一个连续 LoD 网格库,与传统的离散 LoD 相比,可对网格内的几何细节进行精细控制。我们通过对原始网格进行抽取来仔细预计算三角形,以便在不同的 LoD 级别上无缝地组合这些网格。在渲染时,可以从这些集群中选择一个子集,以便在摄像头导航场景时自适应地提供所需的细节量。 图 5.nv_lod_cluster_builder 生成细节无缝的集群化网格 有关更多信息,请参阅 nv_cluster_lod_builder GitHub 存储库。 这些 RTX Mega Geometry 示例是 NVIDIA RTX Kit 的一部分。立即访问 NVIDIA RTX Kit 网站 ,详细了解 RTX Kit 技术。在适用于光线追踪的 NVIDIA Developer Forums 上提出问题、提供反馈并与其他开发者进行讨论 。
https://developer.nvidia.com/blog/build-apps-with-neural-rendering-using-nvidia-nsight-developer-tools-on-geforce-rtx-50-series-gpus/
Build Apps with Neural Rendering Using NVIDIA Nsight Developer Tools on GeForce RTX 50 Series GPUs
The next generation of NVIDIA graphics hardware has arrived. Powered by NVIDIA Blackwell, GeForce RTX 50 Series GPUs deliver groundbreaking new RTX features such as DLSS 4 with Multi Frame Generation , and NVIDIA RTX Kit with RTX Mega Geometry and RTX Neural Shaders. NVIDIA RTX Blackwell architecture introduces fifth-generation Tensor Cores to drive AI workloads and fourth-generation RT Cores with double the triangle intersection rate of the previous generation. It is equipped with ultra-fast GDDR7 memory delivering massive boosts to total memory bandwidth. NVIDIA Nsight Developer Tools support graphics developers to create even more realistic worlds and simulations with GeForce RTX 50 Series GPUs. Design and optimize ray tracing applications in NVIDIA Nsight Graphics NVIDIA Nsight Graphics is a standalone tool for designing, debugging, and optimizing games and other graphics applications. In version 2025.1 all development workflows are now available on Blackwell GPUs: Frame Debugging, the Ray Tracing Inspector, GPU Trace, the Real Time Shader Profiler, and Vulkan Shader Debugging. For developers working on optimization, a leading feature of the NVIDIA Blackwell architecture is the significant increase in performance counter observation capabilities, supporting about 8x the number of counters as the NVIDIA Ada Lovelace architecture . Hardware performance counters and the software stack that manages them—the NVIDIA Nsight Perf SDK library—form the basis of profiling tools such as Nsight Graphics GPU Trace. Expanded counters in the Blackwell architecture bring a revolutionary new level of detail in streaming multiprocessor (SM) utilization and efficiency, warp and resource occupancy, cache bandwidths, hit rates, and much more. GPU Trace reveals these counters on a timeline, and in improved tables and tooltips. With the ability to collect more counters simultaneously, you don’t have to choose your configuration or re-run your application. You receive everything in a single run. System-wide performance tuning in NVIDIA Nsight Systems NVIDIA Nsight Systems provides a top-down capture of graphics application performance and utilization of resources across both the CPU and GPU. This enables performance limiters to be identified and correlated with other hardware events to guide tuning activities. Use cases include determining CPU-bound versus GPU-bound portions of the runtime, stutter analysis, graphics API resource tracking, and more. Nsight Systems 2025.1 adds the ability to record Blackwell GPU metrics. Diagnose crashes with NVIDIA Nsight Aftermath SDK NVIDIA Nsight Aftermath SDK integrates into D3D12 and Vulkan applications to generate GPU crash reports when an exception or TDR occurs, helping developers track down and debug hard-to-reproduce errors in deployed applications. Version 2025.1 adds support for generating crash reports on Blackwell GPUs. Applications should update to the latest SDK to ensure continuous support across all architectures. Real-time HUDs with NVIDIA Nsight Perf SDK NVIDIA Nsight Perf SDK empowers you to bring GPU performance measurement, visualization, and report generation directly into your own application. It includes a built-in HUD renderer to effortlessly enable real-time, high-level performance triage. Nsight Perf SDK is also the underlying foundation for managing hardware performance counters on which all Nsight tools are built. 
Version 2025.1 adds full support for the expanded set of capabilities for single-pass metrics now available from the Blackwell architecture. Get started with Nsight Developer Tools Create new worlds and explore the power of AI-enhanced neural rendering technologies, with Nsight Developer Tools to assist you. The full set of tools with support for NVIDIA RTX Blackwell architecture is available now. To get started, download the latest releases: NVIDIA Nsight Graphics 2025.1 NVIDIA Nsight Aftermath SDK 2025.1 NVIDIA Nsight Systems 2025.1.1 NVIDIA Nsight Perf SDK 2025.1 Learn more about Nsight Developer Tools and explore tutorials for Nsight Tools. Ask questions, provide feedback, and engage with the developer community on the Nsight Developer Forums .
https://developer.nvidia.com/zh-cn/blog/build-apps-with-neural-rendering-using-nvidia-nsight-developer-tools-on-geforce-rtx-50-series-gpus/
使用 NVIDIA Nsight 开发者工具和 GeForce RTX 50系列 GPU 构建神经渲染应用
新一代 NVIDIA 图形硬件已经面世。由 NVIDIA Blackwell 提供支持的 GeForce RTX 50 系列 GPU 可提供突破性的全新 RTX 功能,例如支持多帧生成的 DLSS 4,以及支持 RTX Mega Geometry 和 RTX Neural Shaders 的 NVIDIA RTX Kit 。NVIDIA RTX Blackwell 架构采用第五代 Tensor Cores 来驱动 AI 工作负载,并采用第四代 RT Cores,其三角形求交速率是上一代的两倍。它配备超快的 GDDR7 显存,可大幅提升总显存带宽。 NVIDIA Nsight 开发者工具 支持图形开发者使用 GeForce RTX 50 系列 GPU 打造更加逼真的世界和仿真。 在 NVIDIA Nsight Graphics 中设计和优化光线追踪应用 NVIDIA Nsight Graphics 是一款用于设计、调试和优化游戏和其他图形应用的独立工具。在版本 2025.1 中,所有开发工作流现均可在 Blackwell GPU 上使用:Frame Debugging、Ray Tracing Inspector、GPU Trace、Real Time Shader Profiler 和 Vulkan Shader Debugging。 对于从事优化工作的开发者而言, NVIDIA Blackwell 架构 的一项主要改进是显著增强了性能计数器观测能力,支持的计数器数量约为 NVIDIA Ada Lovelace 架构 的 8 倍。硬件性能计数器及其管理软件堆栈 (NVIDIA Nsight Perf SDK 库) 构成了 Nsight Graphics GPU Trace 等性能分析工具的基础。 Blackwell 架构中扩展的计数器在流多处理器 (SM) 利用率和效率、线程束和资源占用率、缓存带宽、命中率等方面带来了前所未有的细节水平。GPU Trace 可在时间轴以及改进的表格和工具提示中显示这些计数器。由于能够同时收集更多计数器,您不必在不同配置之间取舍或重新运行应用程序,只需运行一次即可获得所有数据。 NVIDIA Nsight Systems 中的系统级性能调优 NVIDIA Nsight Systems 自上而下地捕获图形应用程序的性能以及 CPU 和 GPU 的资源利用率。这样可以识别性能限制因素,并将其与其他硬件事件关联,从而为调优活动提供指导。用例包括确定运行时中受 CPU 限制的部分与受 GPU 限制的部分、卡顿分析、图形 API 资源跟踪等。Nsight Systems 2025.1 增加了记录 Blackwell GPU 指标的功能。 使用 NVIDIA Nsight Aftermath SDK 诊断崩溃 NVIDIA Nsight Aftermath SDK 集成到 D3D12 和 Vulkan 应用中,可在发生异常或 TDR 时生成 GPU 崩溃报告,帮助开发者追踪和调试已部署应用中难以重现的错误。版本 2025.1 增加了在 Blackwell GPU 上生成崩溃报告的支持。应用应更新到最新的 SDK,以确保跨所有架构提供持续支持。 借助 NVIDIA Nsight Perf SDK 实现实时 HUD NVIDIA Nsight Perf SDK 可让您将 GPU 性能测量、可视化和报告生成功能直接引入自己的应用。它包含内置的 HUD 渲染器,可轻松实现实时的高级性能分诊。Nsight Perf SDK 也是所有 Nsight 工具用于管理硬件性能计数器的底层基础。版本 2025.1 全面支持 Blackwell 架构现已提供的单遍指标扩展功能集。 开始使用 Nsight 开发者工具 借助 Nsight 开发者工具,创造新世界并探索 AI 增强型神经渲染技术的强大功能。支持 NVIDIA RTX Blackwell 架构的全套工具现已推出。首先,下载最新版本: NVIDIA Nsight Graphics 2025.1 NVIDIA Nsight Aftermath SDK 2025.1 NVIDIA Nsight Systems 2025.1.1 NVIDIA Nsight Perf SDK 2025.1 详细了解 Nsight Developer Tools ,并探索 Nsight Tools 的 教程 。 在 Nsight Developer Forums 上提出问题、提供反馈并与开发者社区互动。
https://developer.nvidia.com/blog/how-to-integrate-nvidia-dlss-4-into-your-game-with-nvidia-streamline/
How to Integrate NVIDIA DLSS 4 into Your Game with NVIDIA Streamline
NVIDIA DLSS 4 is the latest iteration of DLSS introduced with the NVIDIA GeForce RTX 50 Series GPUs . It includes several new features: DLSS Multi Frame Generation: Multi Frame Generation has been introduced for GeForce RTX 50 Series GPUs, allowing for generation of up to 3 frames per rendered frame. DLSS Ray Reconstruction & DLSS Super Resolution (Beta): Updated to use a new Transformer model, which improves temporal stability, detail in motion, and lighting details. DLSS Frame Generation: The Frame Generation model has been updated to run faster and use less VRAM on GeForce RTX 50 and RTX 40 Series GPUs. Video 1. DLSS 4 | New Multi Frame Gen & Everything Enhanced Here's how you can get started with DLSS 4 in your integrations. This post focuses on the Streamline SDK , which provides a plug-and-play framework for simplified plugin integration. Streamline SDK integration overview The NVIDIA Streamline SDK is an open-source framework that simplifies the integration of NVIDIA DLSS and various super-resolution technologies from other hardware vendors into games and applications. It enables you to implement multiple upscaling solutions through a single integration point within your game engine. Figure 1. Streamline SDK architecture To add Streamline to your application, follow the Streamline Manual Hooking guide. Integrate without any features and focus on tasks such as manual hooking and resource state tracking. Verify the NVIDIA and Streamline dual signatures on sl.interposer.dll before loading the DLL. Follow the verification process within the Security section of the programming guide. Check for system (hardware and software) support for each of the DLSS 4 features and show appropriate error messages to end users based on reported hardware support. DLSS Multi Frame Generation integration overview DLSS Frame Generation is an NVIDIA technology that uses AI to generate additional frames between existing ones in game, significantly boosting frame rates and enabling smoother gameplay on NVIDIA GeForce RTX 50 Series and NVIDIA RTX 40 Series GPUs. In DLSS 4, NVIDIA introduces Multi-Frame Generation, which enables up to three more frames to be generated per rendered frame on GeForce RTX 50 Series GPUs. For more information, see the DLSS Frame Generation Programming Guide . Figure 2. The difference with DLSS 4 Multi Frame Generation Using Streamline, integrate DLSS Multi Frame Generation. Pass in the appropriate constants, camera matrices, and input resources in addition to the ones marked for DLSS Super Resolution (for example, Hudless and UI Color or Alpha). Be sure to also disable DLSS Multi Frame Generation when appropriate, such as for in-menu or scene transitions. Validate that inputs are correct using the Streamline Imgui plugin and buffer visualization using the development DLLs. DLSS Multi Frame Generation checklist Here's the integration checklist: All the required inputs are passed to Streamline: Depth buffers Motion vectors HUD-less color buffers UI color buffers Common constants and a frame index are provided for each frame using the slSetConstants and slSetFeatureConstants methods. All tagged buffers are valid at frame present time, and they are not reused for other purposes. Buffers are tagged with a unique ID of 0 . The frame index provided with the common constants matches the presented frame. For multi-frame generation, numFramesToGenerate is set 1 less than the multiplier mode. For example: set it to 1 for 2x, 2 for 3x, and 3 for 4x. 
Inputs passed into Streamline look correct, as well as camera matrices and dynamic objects. The application checks the signature of sl.interposer.dll to make sure it is a genuine NVIDIA library. Requirements for Dynamic Resolution are met, if supported by the game. DLSS Frame Generation is disabled by setting sl::DLSSGOptions::mode to sl::DLSSGMode::off in the following cases: When the game is paused Loading In-menu Modifying resolution Switching between full-screen and windowed mode Not rendering game frames The swap chain is re-created every time DLSS Frame Generation is turned on or off by changing sl::DLSSGOptions::mode . This avoids unnecessary performance overhead when DLSS Frame Generation is switched off. The amount of motion blur is reduced. When DLSS Frame Generation is enabled, halve the distance and magnitude of motion blur. Reflex is properly integrated. For more information, see the checklist in the Reflex Programming Guide . The in-game user interface for enabling or disabling DLSS Frame Generation is implemented and follows the NVIDIA RTX UI guidelines . Only full production, non-watermarked libraries are packaged in the release build. There are no errors or unexpected warnings in the Streamline and DLSS Frame Generation log files while running the feature. The extent resolution or resource size, whichever is in use, for the Hudless and UI Color and Alpha buffers exactly matches that of the back buffer. The VSync setting in the menu is hooked to sl::DLSSGState::bIsVsyncSupportAvailable . DLSS Super Resolution integration overview DLSS Super Resolution is an NVIDIA technology that uses AI to upscale a lower-resolution image to a higher resolution, boosting game performance. For more information, see the DLSS Programming Guide . Figure 3. High-quality graphics using DLSS Super Resolution Using Streamline, pass in the necessary input resources and set up the upscaling pipeline before all post-processing. Validate IQ and performance benefits from DLSS Super Resolution. DLSS Super Resolution checklist Here's the integration checklist: A game-specific application ID is used during initialization. DLSS Super Resolution is integrated as close to the start of post-processing as possible. Mip-map bias is set when DLSS Super Resolution is enabled. Without this, textures will look blurry, smudgy, or low-resolution. Motion vectors for all scenes, materials, and objects are accurate. Static scenes resolve and compatible jitter is confirmed. Incorrect or missing jitter can lead to poor anti-aliasing and increased flickering. Exposure value is properly sent each frame or auto-exposure is enabled. DLSS modes are queried and user-selectable in the user interface or dynamic resolution support is active and tested. The full production, non-watermarked DLSS library ( nvngx_dlss.dll ) is packaged in the release build. The Camera Reset flag is passed on scene changes, view changes such as first person to third person, or during camera jumps in cutscenes. NGX cleanup or shutdown procedures are performed when DLSS is no longer needed. Otherwise, you'll leak resources/memory. DLSS should only replace the primary upscale pass on the main render target and should not be used on secondary buffers such as shadows and reflections. DLSS Ray Reconstruction integration overview DLSS Ray Reconstruction improves the image quality of path-traced and intensive ray-traced content. 
It does this by replacing hand-tuned denoisers with an NVIDIA supercomputer-trained AI network that generates higher-quality pixels in between sampled rays. For more information, see the DLSS Ray Reconstruction Programming Guide . Figure 4. With and without DLSS Ray Reconstruction (Cyberpunk 2077 in Overdrive mode, DLSS off versus DLSS 3.5 with Ray Reconstruction) DLSS Ray Reconstruction is an addition to DLSS Super Resolution and uses the same performance quality modes. Make sure that DLSS Super Resolution is integrated first. Then, integrate DLSS Ray Reconstruction using the Streamline SDK. Validate IQ and performance benefits from DLSS Ray Reconstruction. Ray Reconstruction integration checklist Here's the integration checklist: A game-specific application ID is used during initialization. Both DLSS Ray Reconstruction and DLSS Super Resolution are integrated at the start, or as close to the start as possible, of post-processing. Other denoisers in your pipeline, such as NRD, are completely disabled. DLSSDPreset is set to eDefault if you previously set it to ePresetA or ePresetB . Mip-map bias is set when DLSS Ray Reconstruction is enabled. Without this, textures will look blurry, smudgy, or low-resolution. The required buffers are provided. DLSS Ray Reconstruction requires linear depth, which is different from those buffers provided to DLSS Super Resolution and DLSS Frame Generation. Use kBufferTypeLinearDepth , which is specifically provided for this. The Inverted Depth bit is set if the depth buffer provided has inverted z ordering. Motion vectors and specular motion vectors for all scenes, materials, and objects are accurate. Static scenes resolve and compatible jitter is confirmed. Incorrect or missing jitter can lead to poor anti-aliasing and increased flickering. DLSS modes are queried and user-selectable in the user interface or dynamic resolution support is active and tested. A DLSS RR enable and disable toggle is available. The full production, non-watermarked DLSS Ray Reconstruction library ( nvngx_dlssd.dll ) is packaged in the release build. The Camera Reset flag is passed on scene changes, view changes such as first person to third person, or during camera jumps in cutscenes. NGX cleanup and shutdown procedures are performed when DLSS is no longer needed. Otherwise, you'll leak resources and memory. Summary In this post, I explained how to integrate DLSS 4 features, with a focus on the Streamline SDK for simplified plugin integration. For more information about how to get started today, see NVIDIA DLSS .
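As a minimal sketch of the Frame Generation checklist items above: sl::DLSSGOptions, sl::DLSSGMode, and numFramesToGenerate are named in the checklist, while slDLSSGSetOptions and the eOn/eOff enumerator spellings are assumptions to confirm against sl_dlss_g.h in your Streamline version (the checklist writes the disabled state as sl::DLSSGMode::off).

```cpp
#include <cstdint>
#include <sl.h>
#include <sl_dlss_g.h>

// Sketch only: set the multi-frame multiplier while gameplay is active, and
// switch DLSS Frame Generation off for menus, loading screens, pauses,
// resolution changes, and full-screen/windowed transitions, per the checklist.
void UpdateFrameGeneration(const sl::ViewportHandle& viewport,
                           bool gameplayActive,
                           int multiplier /* 2 for 2x, 3 for 3x, 4 for 4x */)
{
    sl::DLSSGOptions options{};
    if (gameplayActive)
    {
        options.mode = sl::DLSSGMode::eOn;   // enumerator spelling: verify against sl_dlss_g.h
        // numFramesToGenerate is one less than the multiplier mode:
        // 1 for 2x, 2 for 3x, 3 for 4x.
        options.numFramesToGenerate = static_cast<uint32_t>(multiplier - 1);
    }
    else
    {
        options.mode = sl::DLSSGMode::eOff;  // the disabled state the checklist calls ::off
    }
    slDLSSGSetOptions(viewport, options);
}
```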
https://developer.nvidia.com/zh-cn/blog/how-to-integrate-nvidia-dlss-4-into-your-game-with-nvidia-streamline/
如何借助 NVIDIA Streamline 将 NVIDIA DLSS 4 集成到您的游戏中
NVIDIA DLSS 4 是 NVIDIA GeForce RTX 50 系列 GPU 中引入的 DLSS 的最新版本。它包含多项新功能: DLSS 多帧生成: 已为 GeForce RTX 50 系列 GPUs 引入多帧生成,允许每渲染帧生成多达 3 帧。 DLSS Ray Reconstruction & DLSS Super Resolution (Beta): :已更新,可使用新的 Transformer 模型,提高时间稳定性、动态细节和照明细节。 DLSS 帧生成 :帧生成模型经过更新,可在 GeForce RTX 50 和 RTX 40 系列 GPUs 上运行更快,并使用更少的 VRAM。 DLSS 4| 下面介绍如何在集成中开始使用 DLSS 4、本文重点介绍 Streamline SDK ,它提供了一个即插即用的框架,可简化插件集成。 简化 SDK 集成概述 NVIDIA Streamline SDK 是一个开源框架,可简化将 NVIDIA DLSS 和其他硬件供应商提供的各种超分辨率技术集成到游戏和应用中的过程。它使您能够通过游戏引擎中的单个集成点实施多个放大解决方案。 图 1. Streamline SDK 架构 要将 Streamline 添加到您的应用中,请按照 Streamline 手动连接指南进行操作。无需任何功能即可集成,并专注于手动连接和资源状态跟踪等任务。 在加载 DLL 之前,请先验证 NVIDIA,并在 sl.itnerposer.dll 上简化双重签名。遵循编程指南中的安全部分中的验证流程。 检查每个 DLSS 4 功能的系统 (硬件和软件) 支持,并根据报告的硬件支持向最终用户显示适当的错误消息。 DLSS 多帧生成集成概述 DLSS 帧生成是一项 NVIDIA 技术,该技术利用 AI 在游戏中的现有帧之间生成更多帧,从而显著提高帧率,并在 NVIDIA GeForce RTX 50 Series 和 NVIDIA RTX 40 Series GPUs 上实现更流畅的游戏体验。 在 DLSS 4 中,NVIDIA 引入了多帧生成技术,该技术可在 GeForce RTX 50 Series GPUs 上为每一渲染帧多生成三个帧。有关更多信息,请参阅 DLSS Frame Generation Programming Guide 。 图 2、与 DLSS 4 Multi Frame Generation 的区别 使用 Streamline 集成 DLSS Multi Frame Generation。除了标记为 DLSS Super Resolution 的资源 (例如 Hudless 和 UI Color 或 Alpha) 外,还传递适当的常量、摄像头矩阵和输入资源。另外,请务必在适当时禁用 DLSS Multi Frame Generation,例如菜单中或场景过渡。 使用 Streamline Imgui 插件验证输入是否正确,并使用开发 DLL 实现缓冲区可视化。 DLSS 多帧生成检查清单 以下是集成检查清单: 所有必需的输入都将传递给 Streamline: 深度缓冲区 运动矢量 无 HUD 色彩缓冲区 UI 颜色缓冲区 我们使用 slSetConstants 和 slSetFeatureConstants 方法为每一帧提供常用常量和帧索引。 所有标记缓冲区在帧出现时均有效,且不会重复用于其他目的。 缓冲区的唯一 ID 为 0 。 随常用常量提供的帧索引与呈现的帧相匹配。 对于多帧生成, numFramesToGenerate 的设置比乘数模式小 1。例如:2x 设置为 1 ,3x 设置为 2 ,4x 设置为 3 。 传入 Streamline 的输入看起来正确,摄像头矩阵和动态对象也是如此。 应用程序会检查 sl.interposer.dll 的签名,以确保它是真正的 NVIDIA 库。 如果游戏支持,则需满足动态分辨率要求。 在以下情况下,通过将 sl::DLSSGOptions::mode 设置为 sl::DLSSGMode::off 来禁用 DLSS 帧生成: 游戏暂停时 加载 菜单中 修改分辨率 在全屏模式和窗口模式之间切换 不渲染游戏帧 每次打开或关闭 DLSS 帧生成时,系统都会通过更改 sl::DLSSGOptions::mode 来重新创建交换链。这可避免在关闭 DLSS 帧生成时产生不必要的性能开销。 动态模糊量减少。当 DLSS 帧生成启用后,将动态模糊的距离和大小减半。 Reflex 已正确集成。有关更多信息,请参阅 Reflex 编程指南 中的检查清单。 启用或禁用 DLSS 帧生成的游戏中用户界面遵循 NVIDIA RTX UI 指南 。 在发布版本中,只有完整的生产、非水印库被打包。 运行该功能时,Streamline 和 DLSS 帧生成日志文件中没有出现任何错误或意外警告。 Hudless 和 UI Color 和 Alpha 缓冲区的范围分辨率或资源大小 (以使用中的大小为准) 与反向缓冲区的范围分辨率或资源大小完全匹配。 菜单中的 VSync 设置已关联到 sl::DLSSGState::bIsVsyncSupportAvailable 。 DLSS 超分辨率集成概述 DLSS 超分辨率是一项 NVIDIA 技术,可利用 AI 将低分辨率图像升级到更高分辨率,从而提升游戏性能。有关更多信息,请参阅 DLSS 编程指南 。 图 3、使用 DLSS Super Resolution 的高质量图形 使用 Streamline,在进行所有后处理之前,传递必要的输入资源并设置放大管线。验证 DLSS Super Resolution 带来的 IQ 和性能优势。 DLSS 超分辨率检查清单 以下是集成检查清单: 在初始化期间使用特定于游戏的应用程序 ID。 DLSS 超分辨率尽可能在后处理开始时集成。 启用 DLSS 光线分辨率后,系统会设置 Mip-map 偏差。否则,纹理将显得模糊、模糊或低分辨率。 所有场景、材质和对象的运动矢量均准确无误。 确认静态场景解析和兼容抖动。抖动不正确或缺失会导致抗锯齿效果不佳和闪烁增加。 在每一帧或启用 auto-exposure 时,都会正确发送 Exposure 值。 系统会查询 DLSS 模式,并允许用户在用户界面中选择 DLSS 模式,或启用动态分辨率支持并对其进行测试。 完整的无水印生产 DLSS 库 ( nvngx_dlss.dll ) 已封装在版本构建中。 在场景更改、视图更改 (例如将第一人称更改为第三人称) 或转换场景中的镜头跳转期间,系统会将“Camera Reset”标志传递给相关人员。 当不再需要 DLSS 时,系统会执行 NGX 清理或关闭程序。否则,您将泄露资源/内存。 DLSS 应仅取代主渲染目标上的主要 upscale 通道,而不应用于阴影和反射等次要缓冲区。 DLSS Ray Reconstruction 集成概述 DLSS 光线重建技术可提高路径追踪和密集型光线追踪内容的画质。它通过将手动调整的降噪器替换为 NVIDIA 超级计算机训练的 AI 网络来实现这一点,该网络可在采样光线之间生成更高质量的像素。如需了解更多信息,请参阅 DLSS Ray Reconstruction Programming Guide 。 图 4、启用和禁用 DLSS Ray Reconstruction ALT 文本:“赛博朋克”游戏中静态图像的两个版本在 Overdrive 模式下关闭 DLSS,在 Overdrive 模式下通过光线重建功能关闭 DLSS 3.5 DLSS 光线分辨率是 DLSS 超分辨率的新增功能,并使用相同的性能质量模式。确保首先集成 DLSS 超分辨率。然后,使用 Streamline SDK 集成 DLSS 光线重建。验证 DLSS 光线重建带来的 IQ 和性能优势。 Ray Reconstruction 集成检查清单 以下是集成检查清单: 在初始化期间使用特定于游戏的应用程序 ID。 DLSS 光线重建和 DLSS 超分辨率均在后处理开始时或尽可能接近开始时集成。 工作流中的其他降噪器 (例如 NRD) 将完全禁用。 如果您之前将 ePresetA 或 ePresetB 设置为 eDefault ,则会将 DLSSDPreset 设置为 tg_18。 启用 DLSS 光线重建后,系统会设置 Mip-map 
偏差。否则,纹理将显得模糊、模糊或低分辨率。 The required buffers are provided. DLSS 光线重建需要线性深度,这不同于为 DLSS 超分辨率和 DLSS 帧生成提供的缓冲区。使用专门为此提供的 kBufferTypeLinearDepth 。 如果所提供的深度缓冲区采用反向 z 排序,则设置 Inverted Depth 位。 所有场景、材质和物体的运动矢量和镜面运动矢量均准确无误。 确认静态场景解析和兼容抖动。抖动不正确或缺失会导致抗锯齿效果不佳和闪烁增加。 系统会查询 DLSS 模式,并允许用户在用户界面中选择 DLSS 模式,或启用动态分辨率支持并对其进行测试。 您可以使用 DLSS RR 启用和禁用开关。 完整的量产版、无水印的 DLSS Ray Reconstruction 库 ( nvngx_dlssd.dll ) 已封装在发行版中。 在场景更改、视图更改 (例如将第一人称更改为第三人称) 或转换场景中的镜头跳转期间,系统会将“Camera Reset”标志传递给相关人员。 当不再需要 DLSS 时,系统会执行 NGX 清理和关闭程序。否则,您将泄露资源和内存。 总结 在本文中,我介绍了如何集成 DLSS 4 功能,并重点介绍了简化插件集成的 Streamline SDK 。有关如何立即开始使用的更多信息,请参阅 NVIDIA DLSS 。
https://developer.nvidia.com/blog/nvidia-rtx-neural-rendering-introduces-next-era-of-ai-powered-graphics-innovation/
NVIDIA RTX Neural Rendering Introduces Next Era of AI-Powered Graphics Innovation
NVIDIA today unveiled next-generation hardware for gamers, creators, and developers—the GeForce RTX 50 Series desktop and laptop GPUs. Alongside these GPUs, NVIDIA introduced NVIDIA RTX Kit , a suite of neural rendering technologies to ray trace games with AI, render scenes with immense geometry, and create game characters with lifelike visuals. RTX Kit enhances geometry, textures, materials, and lighting, providing greater visual fidelity and performance for complex open worlds while reducing artifacts, instability, and VRAM usage. Developers can integrate individual—or all—RTX Kit SDKs directly into their rendering pipeline. RTX Neural Shaders bring AI to programmable shaders 25 years ago NVIDIA introduced GeForce and programmable shaders, unlocking two decades of innovation in graphics technology, from pixel shading to compute shading to real-time ray tracing.  Alongside the new GeForce RTX 50 Series GPUs powered by NVIDIA Blackwell architecture, NVIDIA is introducing RTX Neural Shaders, which bring small neural networks into programmable shaders, enabling a new era of graphics innovation. The applications of neural shading are vast, including radiance caching, texture compression, materials, radiance fields, and more. The RTX Neural Shaders SDK enables developers to train their game data and shader code on an RTX AI PC and accelerate their neural representations and model weights with NVIDIA Tensor Cores at runtime. During training, neural game data is compared to the output of the traditional data and is refined over multiple cycles. Developers can simplify the training process with Slang, a shading language that splits large, complex functions into smaller pieces that are easier to handle. Figure 1. Train your game data and shader code and accelerate inference with NVIDIA Tensor Cores This breakthrough technology is used for three applications: RTX Neural Texture Compression, RTX Neural Materials, and Neural Radiance Cache (NRC). RTX Neural Texture Compression uses AI to compress thousands of textures in less than a minute. Their neural representations are stored or accessed in real time or loaded directly into memory without further modification. The neurally compressed textures save up to 7x more VRAM or system memory than traditional block compressed textures at the same visual quality. RTX Neural Materials uses AI to compress complex shader code typically reserved for offline materials and built with multiple layers such as porcelain and silk. The material processing is up to 5x faster, making it possible to render film-quality assets at game-ready frame rates. RTX Neural Radiance Cache uses AI to learn multi-bounce indirect lighting to infer an infinite amount of bounces after the initial one to two bounces from path traced rays. This offers better path traced indirect lighting and performance versus path traced lighting without a radiance cache. NRC is now available through the RTX Global Illumination SDK, and will be available soon through RTX Remix and Portal with RTX. Video 1. Integrate RTX Kit SDKs into your rendering pipeline to create worlds and characters with lifelike realism . The RTX Kit technologies shown in the Zorah demo were built in Unreal Engine 5. Developers can get started with the RTX Neural Shaders and RTX Neural Texture Compression SDKs at the end of the month through NVIDIA RTX Kit . Sign up to be notified of availability . For DirectX developers, Microsoft will be adding Cooperative Vector support to DirectX in the near future. 
This will enable neural shaders to take advantage of accelerated performance with RTX Tensor Cores. “Microsoft is excited to partner with NVIDIA to catalyze this next generation of graphics programming by bringing industry-wide support for neural shading technology,” said Bryan Langley, partner group program manager of Silicon, Media, and Graphics at Microsoft. “DirectX will soon support Cooperative Vectors, which will unlock the power of Tensor Cores on NVIDIA GeForce RTX hardware and enable game developers to fully accelerate neural shaders on Windows.” Crossing the uncanny valley of rendering digital humans Rendering human faces is one of the most challenging tasks in real-time graphics, as people are naturally conditioned to notice even the smallest deviations from reality. RTX Neural Faces offers an innovative, new approach to improve face quality using generative AI. Instead of brute force rendering, Neural Faces takes a simple rasterized face plus 3D pose data as input and uses a real-time generative AI model to infer a more natural face. The generated face is trained from thousands of offline generated images of that face at every angle, under different lighting, emotion, and occlusion conditions. The training pipeline can use real photographs or AI generated images, with variants created using a diffusion model.  The trained model is then NVIDIA TensorRT optimized to infer the face in real time. RTX Neural Faces represent a first step in a journey to redefine real-time graphics with generative AI. RTX Neural Faces is complemented by the RTX Character Rendering SDK for hair and skin. Getting hair and skin on game characters to look and simulate accurately is challenging. Cutting edge strand-based hair techniques require up to 30 triangles per strand of hair and 4 million triangles for an entire groom, which makes ray tracing BVH builds expensive. Linear-Swept Spheres (LSS) is a new GeForce RTX 50 Series GPU-accelerated primitive that reduces the amount of geometry necessary to render strands of hair and uses spheres instead of triangles to get a more accurate fit for hair shapes. LSS makes it possible to do ray-traced hair with better performance and a smaller memory footprint. Traditional rendering methods don’t accurately simulate how light interacts with human skin, which can result in a plastic-like look. Subsurface Scattering (SSS) simulates how light penetrates beneath the surface of translucent materials and scatters internally, creating a softer, more natural appearance. Developers can get started with both RTX Skin and Hair algorithms through RTX Character Rendering SDK, coming at the end of the month. RTX Skin will also be available in RTX Remix. Sign up to be notified of availability . Video 2. RTX Neural Faces showcased in Project R2X, Zorah technology demo . The RTX Kit technologies shown in the Zorah demo were built in Unreal Engine 5. Create infinitely complex open worlds The number of triangles used to create games has exponentially increased over the past 30 years. With the introduction of the Unreal Engine 5 Nanite geometry system, developers can build open worlds filled with hundreds of millions of triangles. However, as ray traced game scenes explode in geometric complexity, the cost to build the bounding volume hierarchy (BVH) each frame for various levels of detail (LOD) grows exponentially, making it impossible to achieve real-time frame rates. 
RTX Mega Geometry accelerates BVH building, making it possible to ray trace up to 100x more triangles than today’s standard. Figure 2. RTX Mega Geometry enables hundreds of millions of animated triangles through real-time subdivision surfaces RTX Mega Geometry intelligently updates clusters of triangles in batches on the GPU, reducing CPU overhead and increasing performance and image quality in ray traced scenes. RTX Mega Geometry is coming soon to the NVIDIA RTX Branch of Unreal Engine (NvRTX) , so developers can use Nanite and fully ray trace every triangle in their projects. For developers using custom engines, RTX Mega Geometry will be available at the end of the month as an SDK to RTX Kit. Sign up to be notified of availability . Maximize performance and responsiveness of next-generation worlds NVIDIA DLSS is a suite of neural rendering technologies that uses AI to boost FPS, reduce latency, and improve image quality. The latest breakthrough, DLSS 4 , brings new Multi Frame Generation and Transformer-powered Super Resolution and Ray Reconstruction, accelerated by GeForce RTX 50 Series GPUs and fifth-generation Tensor Cores. DLSS Multi Frame Generation boosts FPS by generating up to three extra frames for each frame that is rendered. These frames are properly paced using improved hardware-based flip metering on GeForce RTX 50 Series. DLSS 4 also introduces a new transformer-based architecture for DLSS Super Resolution, DLSS Ray Reconstruction and Deep Learning Anti-Aliasing, which uses self-attention and can much more easily identify the longer-range patterns across a much larger pixel window for improved image quality vs traditional convolutional neural networks (CNNs). Video 3. DLSS Multi Frame Generation and new Transformer architecture provide supreme speed and visuals DLSS 4 SDK will be available at the end of the month to all developers through NVIDIA Streamline. Streamline simplifies integration of the latest NVIDIA technologies into applications and games. The new transformer models are backwards compatible with existing Super Resolution and Ray Reconstruction integrations. For Unreal Engine 5 developers, DLSS 4 is coming at the end of the month to NvRTX. Sign up to receive DLSS updates . NVIDIA Reflex is a latency reduction technology that synchronizes the CPU and GPU for ultimate responsiveness, providing faster target acquisition, quicker reaction times, and improved aim precision in latency-sensitive games. New Reflex 2 introduces Frame Warp, which further reduces latency by updating the rendered game frame based on the latest mouse input right before it is sent to the display. A predictive inpainting technique uses color and depth data from prior frames to fill holes left by the warp. As a result, Reflex Frame Warp reduces latency up to 75% compared to native rendering. Reflex 2 SDK is coming soon to GeForce RTX 50 Series GPUs , with support added for other GeForce RTX GPUs in a future update. Sign up to receive Reflex updates . Video 4. Reflex Frame Warp enables 75% faster responsiveness Redefining game AI with ACE autonomous game characters The term AI has been used in games for decades. These non-playable characters (NPCs) traditionally follow strict rules designed to mimic intelligence, adhere to a guided story, and provide a scripted interaction with the player. However, with the rise of intelligent language models, game AI is primed for a truly intelligent overhaul. NVIDIA is redefining game AI with the introduction of NVIDIA ACE autonomous game characters .  
Powered by generative AI, NVIDIA ACE enables living, dynamic game worlds with companions that comprehend and support player goals, and enemies that adapt dynamically to player tactics. Enabling these autonomous characters are new ACE small language models (SLMs) capable of planning at human-like frequencies required for realistic decision making, and multimodal SLMs for vision and audio that allow AI characters to hear audio cues and perceive their environment. Figure 3. NVIDIA ACE enables game characters to perceive, plan and execute like human players Replicating human decision-making requires game characters to interact naturally with players, perceive their environment, reason through potential options and strategically plan and execute their next move within hundreds of milliseconds. Innovative distillation techniques and advancements in GeForce RTX 50 Series GPUs enable multiple AI models to run simultaneously on a GPU, giving autonomy to game characters while maximizing the image quality and performance of the game. New and NVIDIA ACE models coming soon include the following. Models for perception NemoAudio 4B Instruct takes audio and text in and produces text out and is capable of describing a soundscape in a gaming environment. Cosmos Nemotron 4B 128K Instruct takes images and text in and produces text out and is capable of simple spatial understanding. Parakeet CTC XXL 1.1B Multilingual transcribes multilingual audio to text. NV-OCR quickly recognizes characters and text on screen and transcribes into plain text. Models for cognition Mistral Nemo Minitron 8B 128K Instruct is state-of-the-art for instruction following capabilities and allows game characters to accurately roleplay. Mistral Nemo Minitron 4B 128K Instruct reduces accuracy for less VRAM consumption. Mistral Nemo Minitron 2B 128K Instruct is our smallest version, which uses 1.5 GB of VRAM. Models for action A 2 Flow is a zero-shot text-to-speech model that supports several languages. Perception, cognition, and actions are stored in an optimized embedding model that game characters continuously recall to make their next decision. Implementing these models within your agentic framework will drive human-like decision making in real-time gameplay scenarios. KRAFTON’s PUBG Ally demonstrates a Co-Playable Character (CPC) that strategically plans and executes clever tactics based on real-time interactions, dynamically working with a human player to ensure survival. KRAFTON’s InZOI , their upcoming life simulation game, demonstrates Smart Zoi: characters that adjust their behavior through actions they take throughout the day. Wemade’s MIR5 , their upcoming MMORPG, delivers an intelligent raid boss that learns from past battles and adapts its tactics based on player behavior, creating more engaging experiences every run. NetEase’s NARAKA: BLADEPOINT MOBILE PC VERSION has an intelligent AI companion that carries out actions based on commands from the player. Video 5. Krafton introduces PUBG Ally, the world’s first Co-Playable Character Audio-driven AI facial animation will soon get a big update, making characters more expressive. NVIDIA announced a new Audio2Face real-time diffusion-based architecture that delivers more accurate and detailed lip-sync, improved dynamic and lively emotions, and natural nonverbal responses. Video 6. New features showcased with James, coming soon to NVIDIA Audio2Face The latest NVIDIA ACE on-device models for agentic workflows are coming soon in early access. 
Get started on your PC deployment with NVIDIA In-Game Inferencing (IGI) SDK , which preconfigures the PC with the necessary AI models, engines, and dependencies. It orchestrates in-process AI inference for C++ games and applications and supports all major inference backends across different hardware accelerators. Get started GeForce RTX 50 Series introduces the next era of rendering and AI. Train and deploy AI directly within shaders for better compression and approximation techniques. Build more complex open worlds with breakthrough BVH building techniques. Leverage new digital human rendering technologies with real-time performance. Use the latest ray tracing algorithms powered by AI to simulate accurate light transport. Transform traditional game characters into AI-powered autonomous versions for new gameplay experiences. Access the RTX Kit, DLSS 4, Reflex 2, and NVIDIA ACE technologies through the Game Development resource hub.
https://developer.nvidia.com/zh-cn/blog/nvidia-rtx-neural-rendering-introduces-next-era-of-ai-powered-graphics-innovation/
NVIDIA RTX 神经渲染引领 AI 图形创新新时代
NVIDIA 今天发布了面向游戏玩家、创作者和开发者的新一代硬件 – GeForce RTX 50 系列台式机和笔记本电脑 GPU。除这些 GPU 外,NVIDIA 还推出了 NVIDIA RTX Kit ,这是一套神经渲染技术,可使用 AI 对游戏进行光线追踪、渲染具有庞大几何图形的场景,以及创建具有逼真视觉效果的游戏角色。RTX Kit 可增强几何图形、纹理、材质和照明,为复杂的开放世界提供更高的视觉保真度和性能,同时减少伪影、稳定性和 VRAM 使用率。开发者可以将单个或所有 RTX Kit SDK 直接集成到其渲染管线中。 RTX 神经着色器将 AI 引入可编程着色器 25 年前,NVIDIA 推出了 GeForce 和可编程着色器,解锁了图形技术领域 20 年的创新成果,从像素着色、计算着色到实时光线追踪,不一而足。除了采用 NVIDIA Blackwell 架构的全新 GeForce RTX 50 系列 GPU 之外,NVIDIA 还将推出 RTX 神经着色器,该技术可将小型神经网络引入可编程着色器,开创图形创新的新时代。神经着色的应用非常广泛,包括辐射缓存、纹理压缩、材质、辐射场等。 借助 RTX 神经着色器 SDK,开发者可以在 RTX AI PC 上训练游戏数据和着色器代码,并在运行时使用 NVIDIA Tensor Core 加速其神经表征和模型权重。在训练期间,我们会将神经游戏数据与传统数据的输出进行比较,并在多个周期内进行改进。开发者可以使用 Slang 简化训练过程,Slang 是一种着色语言,可以将大型复杂功能拆分成更易于处理的较小部分。 图 1、使用 NVIDIA Tensor Core 训练游戏数据和着色器代码,并加速推理 这项突破性技术可用于三种应用:RTX 神经纹理压缩、RTX 神经材质和神经辐射缓存(Neural Radiance Cache,NRC)。 RTX 神经纹理压缩 利用 AI 在不到一分钟的时间内压缩数千个纹理。它们的神经表征可实时存储或访问,或无需进一步修改即可直接加载到内存中。在相同的视觉质量下,神经压缩纹理可节省高达传统块压缩纹理 7 倍的 VRAM 或系统内存。 RTX Neural Materials 使用 AI 来压缩复杂的着色器代码 ,这些代码通常专为离线材质而设计,并由多层材质 (例如:porcelain 和 silk) 构建而成。材质处理速度最高可提升至原来的 5 倍,因此能够以游戏就绪的帧率渲染电影级画质素材。 RTX Neural Radiance Cache 使用 AI 学习多次反射间接照明,从而在路径追踪光线发生一到两次初始反射后,推理无限次反射。与无辐射缓存的路径追踪照明相比,这提供了更好的路径追踪间接照明和性能。NRC 现已通过 RTX Global Illumination SDK 提供,并将很快通过 RTX Remix 和 Portal with RTX 提供。 视频 1、将 RTX Kit SDK 集成到渲染管线中,以创建逼真的世界和角色 开发者可以在本月底通过 NVIDIA RTX 套件 开始使用 RTX Neural Shaders 和 RTX Neural Texture Compression SDK。 注册以在上市时收到通知。 对于 DirectX 开发者, Microsoft 将在不久的将来为 DirectX 添加 Cooperative Vector 支持 。这将使神经着色器能够利用 RTX Tensor Cores 的加速性能。 Microsoft 的 Silicon、Media 和 Graphics 合作伙伴团队项目经理 Bryan Langley 表示:“微软很高兴能与 NVIDIA 合作,通过为神经着色技术提供全行业支持,推动新一代图形编程。DirectX 很快将支持 Cooperative Vectors,这将释放 NVIDIA GeForce RTX 硬件上 Tensor Cores 的强大功能,并使游戏开发者能够在 Windows 上全面加速神经着色器。” 穿越渲染数字人的神秘山谷 在实时图形中,人脸渲染是一项极具挑战性的任务,因为人们在自然条件下甚至会注意到与现实的微小偏差。RTX Neural Faces 提供了一种使用生成式 AI 改善人脸质量的创新方法。Neural Faces 不采用强力渲染,而是采用简单的光栅化面部和 3D 姿态数据作为输入,并使用实时生成式 AI 模型来推理更自然的面部。在不同的光照、情绪和遮蔽条件下,从不同角度对生成的面部从数千张离线生成的图像进行训练。 训练流程可以使用真实照片或 AI 生成的图像,并使用扩散模型创建变体。然后,对经过训练的模型进行 NVIDIA TensorRT 优化,以实时推理面部。RTX 神经面孔代表着利用生成式 AI 重新定义实时图形之旅的第一步。 RTX Neural Faces 由适用于头发和皮肤的 RTX Character Rendering SDK 提供支持。让游戏角色的头发和皮肤看起来和模拟准确具有挑战性。基于链的尖端头发技术每根头发最多需要 30 个三角形,而对于整根梳子最多需要 400 万个三角形,这使得光线追踪 BVH 的构建成本高昂。Linear-Swept Spheres (LSS) 是由 GeForce RTX 50 系列 GPU 加速的全新基元,可减少渲染发丝所需的几何体数量,并使用球体代替三角形来更准确地契合头发形状。借助 LSS,我们能够以更出色的性能和更小的显存占用制作光线追踪头发。 传统的渲染方法无法准确模拟光线与人类皮肤的交互方式,而这可能会产生塑料般的外观。次表面散射 (SSS) 可模拟光线在半透明材质表面下的渗透和内部散射情况,从而打造更柔和、更自然的外观。开发者可以通过本月底推出的 RTX 角色渲染 SDK 开始使用 RTX 皮肤和头发算法。RTX Remix 中也提供 RTX 皮肤。 注册以在上市时接收通知 。 视频 2、Project R2X 和 Zorah 技术演示中展示的 RTX Neural Faces 创建无限复杂的开放世界 在过去 30 年里,用于创作游戏的三角形数量呈指数级增长。随着 Unreal Engine 5 Nanite 几何系统的推出,开发者可以构建充满数以亿计三角形的开放世界。但是,随着光线追踪游戏场景的几何复杂性呈爆炸式增长,为不同细节级别 (LOD) 每帧构建包围盒层次结构 (BVH) 的成本呈指数级增长,因此无法实现实时帧率。RTX Mega Geometry 可加速 BVH 的构建,与目前的标准相比,可对三角形进行多达 100 倍的光线追踪。 视频 3、DLSS 多帧生成和新的 Transformer 架构可提供超高的速度和视觉效果 所有开发者将于本月底通过 NVIDIA Streamline 获得 DLSS 4 SDK。Streamline 简化了将新的 NVIDIA 技术集成到应用和游戏中的过程。新的 Transformer 模型向后兼容现有的 Super Resolution 和 Ray Reconstruction 集成。对于 Unreal Engine 5 开发者,DLSS 4 将于本月底登陆 NvRTX。 注册以接收 DLSS 更新 。 NVIDIA Reflex 是一项延迟降低技术,可同步 CPU 和 GPU 以实现出色的响应速度,从而在对延迟敏感的游戏中加快目标捕获速度、缩短反应时间并提高瞄准精度。新的 Reflex 2 引入了 Frame Warp,进一步降低延迟,根据最新的鼠标输入更新渲染的游戏帧,然后发送到显示器。预测性 inpainting 技术使用先前帧的颜色和深度数据来填充线程束留下的孔。因此,与原生渲染相比,Reflex Frame Warp 可将延迟降低高达 75%。Reflex 2 SDK 即将登陆 GeForce RTX 50 系列 GPUs,并将在后续更新中添加对其他 GeForce RTX GPUs 的支持。注册以接收 Reflex 更新。 视频 4、Reflex Frame Warp 可将响应速度提高 75% 借助 ACE 自主游戏角色重新定义游戏 AI 数十年来,游戏中一直使用 AI 这个术语。这些非玩家角色(NPC)通常遵循严格的规则,旨在模仿智能、遵循引导式故事情节,并以脚本形式与玩家进行交互。然而,随着智能语言模型的兴起,游戏 AI 已经为真正的智能革新做好准备。 NVIDIA 
引入了 NVIDIA ACE 自主游戏角色 ,重新定义了游戏 AI。NVIDIA ACE 由生成式 AI 提供动力支持,可打造生动、动态的游戏世界,其伙伴能够理解并支持玩家目标,敌人能够动态适应玩家战术。 为实现这些自主角色,我们推出了能够以逼真决策所需的类似人类频率进行规划的新型 ACE 小语言模型 (SLMs),以及用于视觉和音频的多模态 SLMs,使 AI 角色能够听到音频提示并感知环境。 图 3、NVIDIA ACE 使游戏角色能够像人类玩家一样感知、计划和执行 复制人类决策需要游戏角色与玩家进行自然交互,感知环境,通过潜在选项推理,并在数百毫秒内战略性地规划和执行下一步行动。借助 GeForce RTX 50 系列 GPU 中的蒸馏创新技术和改进,多个 AI 模型能够在 GPU 上同时运行,为游戏角色提供自主性,同时更大限度地提高游戏的画质和性能。即将推出的新款 NVIDIA ACE 模型包括: 感知模型 NemoAudio 4B Instruct 能够输入音频和文本并输出文本,能够描述游戏环境中的音景。 Cosmos Nemotron 4B 128K Instruct 可将图像和文本导入并生成文本,并且能够进行简单的空间理解。 Parakeet CTC XXL 1.1B 多语种可将多语种音频转录为文本。 NV-OCR 可快速识别屏幕上的字符和文本,并将其转录为纯文本。 认知模型 Mistral Nemo Minitron 8B 128K Instruct 是指令跟随功能的先进技术,可让游戏角色精确地进行角色扮演。 Mistral Nemo Minitron 4B 128K Instruct 降低了准确性,同时减少了 VRAM 消耗。 Mistral Nemo Minitron 2B 128K Instruct 是我们最小的版本 ,使用 1.5 GB 的 VRAM。 动作模型 A2 Flow 是一个零样本文本转语音模型,支持多种语言。 感知、认知和动作均存储在经过优化的嵌入模型中,游戏角色可不断调用该模型以做出下一个决策。在智能体框架中实施这些模型将在实时游戏场景中推动类似人类的决策。 KRAFTON 的 PUBG Ally 展示了一个可协作的角色 (CPC),该角色可根据实时交互战略性地规划和执行巧妙的战术,并与人类玩家动态协作以确保生存。 KRAFTON 的 InZOI 是他们即将推出的生活模拟游戏,展示了 Smart Zoi:角色通过全天采取的行动来调整自己的行为。 Wemade 的 MIR5 是他们即将推出的 MMORPG 游戏,它提供了一个智能的“突袭头目 (raid boss)”,可以从过去的战斗中学习,并根据玩家行为调整战术,从而在每次运行时创造更引人入胜的体验。 NetEase 的 NARAKA: BLADEPOINT MOBILE PC VERSION 拥有智能 AI 伙伴,可根据玩家的命令执行操作。 视频 5、Krafton 推出全球首个可协作角色“PUBG Ally” 音频驱动的 AI 面部动画很快将得到重大更新,使角色更具表现力。NVIDIA 宣布推出一款基于 Audio2Face 实时扩散的全新架构,可提供更准确、更细致的唇部同步效果,改善动态和生动的情绪,以及自然的非语言反应。 视频 6、James 共同展示的新功能即将登陆 NVIDIA Audio2Face 适用于代理工作流的最新 NVIDIA ACE 设备端模型即将推出抢先体验版。使用 NVIDIA 游戏内推理 (IGI) SDK 开始 PC 部署,该 SDK 可将 PC 预配置为必要的 AI 模型、引擎和依赖项。它为 C++ 游戏和应用编排过程内 AI 推理,并支持跨不同硬件加速器的所有主要推理后端。 开始使用 GeForce RTX 50 系列 开启了渲染和 AI 的新时代。直接在着色器中训练和部署 AI,以获得更好的压缩和近似技术。借助突破性的 BVH 构建技术,构建更复杂的开放世界。利用新的数字人渲染技术实现实时性能。使用由 AI 提供支持的最新光线追踪算法来模拟精确的光线传输。将传统游戏角色转换为 AI 驱动的自主版本,打造全新游戏体验。通过游戏开发资源中心访问 RTX 套件、DLSS 4、Reflex 2 和 NVIDIA ACE 技术。
https://developer.nvidia.com/blog/accelerating-film-production-with-dell-ai-factory-and-nvidia/
Accelerating Film Production with Dell AI Factory and NVIDIA
Filmmaking is an intricate and complex process that involves a diverse team of artists, writers, visual effects professionals, technicians, and countless other specialists. Each member brings their unique expertise to the table, collaborating to transform a simple idea into a captivating cinematic experience. From the initial spark of a story to the final cut, every step requires creativity, technical skill, and meticulous attention to detail. However, even with such talented teams, the film industry has long grappled with numerous challenges that can hinder productivity, inflate budgets, and stifle creativity. These challenges range from managing escalating production costs and adapting to rapidly changing technologies, to navigating shifting audience preferences and distribution platforms. All while striving to balance artistic vision with commercial viability in an increasingly competitive market. Enter AI, a game-changing technology that is transforming the filmmaking process from script to screen: Preproduction: AI tools help analyze scripts for narrative structure and market potential, while also assisting with storyboarding and budget optimizations. Production: AI optimizes shooting schedules, enables real-time visual effects rendering, and powers advanced camera systems for precise cinematography. Postproduction: AI is assisting editors by analyzing footage and offering suggestions for cuts, transitions, and color grading. These AI-driven innovations are not only making high-end production values more accessible to independent filmmakers but also enabling established studios to push creative boundaries further. By automating routine tasks and providing data-driven insights, AI empowers filmmakers to focus more on storytelling and artistic vision, potentially leading to more diverse and engaging cinematic experiences. One exemplary example of how AI is reshaping the filmmaking landscape is Kennedy Miller Mitchell (KMM), the renowned production house behind numerous groundbreaking franchises such as Mad Max, Babe, and Happy Feet. By using the Dell AI Factory with NVIDIA , KMM has integrated generative AI into various stages of production, transforming their workflow and enabling faster, more targeted content creation. From challenges to AI innovation KMM faced significant challenges in the evolving landscape of filmmaking. These included adapting to changing production models, managing increasingly complex visual effects, balancing creative vision with commercial viability, streamlining collaboration among growing teams, and keeping pace with rapidly advancing technology. To address these challenges, KMM developed a cutting-edge generative AI platform built on the Dell AI Factory with NVIDIA. This innovative solution integrates Dell Precision AI-ready workstations equipped with NVIDIA RTX GPUs for powerful computing and real-time rendering, alongside Dell PowerScale storage solutions for efficient file management and collaboration. This technological ecosystem enables KMM to leverage generative AI across various stages of production, from rapid previsualization of complex sequences to real-time scene modifications and advanced postproduction tasks. KMM’s generative AI leap in filmmaking In preproduction, KMM faced the challenge of creating detailed previsualizations for complex film sequences, a process that traditionally consumed significant time and resources. 
To address this, KMM implemented Dell Precision AI-ready workstations equipped with NVIDIA RTX GPUs to power their generative AI platform for rapid pre-visualization. The NVIDIA RTX GPUs enabled the KMM team to handle complex computations in real time, so they could explore and execute creative ideas with unprecedented freedom and efficiency, transforming the previsualization process from a time-consuming task into a dynamic, interactive experience. As a result, KMM was able to previsualize a 15-minute chase sequence for Furiosa: A Mad Max Saga in just 6–8 months with a small crew, a process that traditionally would have taken 1–2 years. Figure 1. NVIDIA AI Workstations are designed for advanced visualization, design, and compute performance In postproduction, KMM faced challenges with file storage, distribution, and collaboration among team members, which often led to bottlenecks in the workflow. To streamline these processes, KMM integrated Dell PowerScale storage arrays to store, distribute, and enable real-time collaboration on production files, supporting generative AI tasks such as rotoscoping, compositing, and sound blending. The Dell PowerScale system, with its 100-GB connectivity to back-end and nearline storage, enabled the team to pull sequences together for review in 30 minutes to an hour, a process that previously took days. This implementation resulted in a 20x performance gain in distributing production files. The future of AI in filmmaking As AI technologies continue to advance, you can expect to see more immersive storytelling through virtual and augmented reality, AI-driven narrative development based on audience preferences, and enhanced visual effects creation. The key to unlocking this potential lies in the powerful combination of AI-ready workstations and servers. AI-ready workstations equipped with NVIDIA RTX GPUs, coupled with high-performance servers, will form the backbone of this transformation. Yan Chen, studio architect at KMM, noted, ” Dell Precision workstations with NVIDIA RTX GPUs and Dell PowerScale let us use generative AI to realize directors’ creative visions at a speed previously unknown in modern filmmaking.” Chen highlighted how this hardware integration is already accelerating film production processes by up to 50 times compared to traditional methods. This technological synergy will enable filmmakers to process vast amounts of data, render complex visual effects in real time, and collaborate seamlessly across global production teams. Figure 2. Modern filmmaking, powered by innovation For more information about how KMM is using Dell Technologies and NVIDIA to revolutionize filmmaking, see Kennedy Miller Mitchell: Driving GenAI-powered innovation at the speed of creativity .
https://developer.nvidia.com/zh-cn/blog/accelerating-film-production-with-dell-ai-factory-and-nvidia/
借助 Dell AI Factory 和 NVIDIA 加速电影制作
电影制作是一个精细而复杂的过程,涉及由艺术家、作家、视觉效果专业人员、技术人员和无数其他专家组成的多元化团队。每位成员都贡献各自独特的专业知识,通过协作将简单的想法转化为引人入胜的电影体验。从故事的最初灵感到最终剪辑,每一步都需要创造力、专业技术和对细节的一丝不苟。 然而,即使拥有如此优秀的团队,电影业也一直在努力应对众多挑战,这些挑战可能会阻碍生产力、增加预算和抑制创造力。这些挑战包括管理不断上涨的制作成本、适应迅速变化的技术,以及驾驭不断变化的受众偏好和发行平台,同时还要努力在竞争日益激烈的市场中平衡艺术愿景与商业可行性。 人工智能是一种改变游戏规则的技术,它正在改变从剧本到银幕的整个电影制作过程: 预制作:AI 工具可帮助分析剧本的叙事结构和市场前景,同时还可协助进行分镜和预算优化。 制作:AI 可优化拍摄时间表,实现实时视觉效果渲染,并为用于精确电影摄影的高级摄像头系统提供支持。 后期制作:AI 通过分析素材以及提供剪辑、过渡和调色建议来为剪辑师提供帮助。 这些 AI 驱动的创新不仅使独立电影制作人更容易获得高端制作价值,而且还使成熟的工作室能够进一步突破创意界限。通过自动执行日常任务并提供数据驱动的见解,AI 使电影制作人能够更专注于叙事和艺术愿景,从而可能带来更加多样化和引人入胜的电影体验。 AI 如何重塑电影制作领域的一个典型例子是 Kennedy Miller Mitchell (KMM),这家知名制作公司打造了 Mad Max、Babe 和 Happy Feet 等众多突破性系列电影。通过 使用 Dell AI Factory 与 NVIDIA 结合 ,KMM 已将生成式 AI 集成到制作的各个阶段,从而转变其工作流程,并实现更快、更有针对性的内容创作。 从挑战到 AI 创新 在不断演变的电影制作格局中,KMM 面临着重大挑战。这些包括适应不断变化的制作模式、管理日益复杂的视觉效果、平衡创意愿景与商业可行性、简化不断壮大的团队之间的协作,以及与快速发展的技术保持同步。 为应对这些挑战,KMM 开发了一个基于 Dell AI Factory with NVIDIA 的先进生成式 AI 平台。该创新解决方案集成了搭载 NVIDIA RTX GPUs 的 Dell Precision AI-ready 工作站 ,可实现强大的计算和实时渲染,同时还集成了 Dell PowerScale 存储解决方案,可实现高效的文件管理和协作。 这一技术生态系统使 KMM 能够在各个制作阶段(从复杂序列的快速预可视化到实时场景修改和高级后期制作任务)利用生成式 AI。 生成式 AI 在电影制作领域的飞跃 在预制作阶段,KMM 面临着为复杂的电影序列创建详细的预可视化效果的挑战,而这一过程通常会消耗大量的时间和资源。为了解决这一问题,KMM 部署了配备 NVIDIA RTX GPU 的 Dell Precision AI 就绪型工作站,为其生成式 AI 平台提供动力支持,以实现快速的预可视化。 借助 NVIDIA RTX GPU,KMM 团队能够实时处理复杂的计算,从而能够以前所未有的自由度和效率探索和执行创意,将预可视化过程从耗时的任务转变为动态的交互式体验。因此,KMM 仅凭一支小型团队,在短短 6 至 8 个月内便完成了 “Furiosa: A Mad Max Saga” 中一段 15 分钟追逐序列的预可视化,而这一过程通常需要 1 至 2 年时间。 图 1、NVIDIA AI 工作站专为实现高级可视化、设计和计算性能而设计 在后期制作中,KMM 面临着文件存储、分发和团队成员之间协作方面的挑战,这通常会导致工作流程中的瓶颈。 为了简化这些流程,KMM 集成了 Dell PowerScale 存储阵列,用于存储、分发和实现生产文件的实时协作,支持生成式 AI 任务,例如 rotoscoping、compositing 和 sound blending。 Dell PowerScale 系统通过 100-GB 连接到后端和近线存储,使团队能够在 30 分钟到 1 小时内将序列汇总到一起进行审查,而以前这一过程需要数天时间。这种实施将分发生产文件的性能提升了 20 倍。 AI 在电影制作中的未来 随着 AI 技术的不断进步,您有望看到借助虚拟现实和增强现实实现的更具沉浸感的叙事、基于观众偏好的 AI 驱动叙事开发,以及更强大的视觉效果创作。 发挥这一潜力的关键在于 AI 就绪型工作站和服务器的强大组合。配备 NVIDIA RTX GPU 的 AI 就绪型工作站与高性能服务器相结合,将成为这一转型的支柱。 KMM 工作室架构师 Yan Chen 指出:“搭载 NVIDIA RTX GPU 和 Dell PowerScale 的 Dell Precision 工作站让我们能够使用生成式 AI,以现代电影制作中前所未有的速度实现导演的创意愿景。” Chen 强调,与传统方法相比,这种硬件集成已将电影制作流程的速度提高多达 50 倍。这种技术协同作用将使电影制作人能够处理海量数据,实时渲染复杂的视觉效果,并与全球制作团队无缝协作。 图 2、创新驱动的现代电影制作 如需详细了解 KMM 如何使用 Dell Technologies 和 NVIDIA 彻底改变电影制作,请参阅:Kennedy Miller Mitchell:以创意的速度推动由 GenAI 驱动的创新。
https://developer.nvidia.com/blog/deploying-nvidia-riva-multilingual-asr-with-whisper-and-canary-architectures-while-selectively-deactivating-nmt/
Deploying NVIDIA Riva Multilingual ASR with Whisper and Canary Architectures While Selectively Deactivating NMT
NVIDIA has consistently developed automatic speech recognition (ASR) models that set the benchmark in the industry. Earlier versions of NVIDIA Riva , a collection of GPU-accelerated speech and translation AI microservices for ASR , TTS , and NMT , support English-Spanish and English-Japanese code-switching ASR models based on the Conformer architecture, along with a model supporting multiple common languages in the EMEA region (namely British English, European Spanish, French, Italian, Standard German, and Armenian) based on the Parakeet architecture. Recently, NVIDIA released the Riva 2.18.0 container and SDK to keep evolving its speech AI models. With this new release, we now offer the following: Support for Parakeet, the streaming multilingual ASR Support for OpenAI’s Whisper-Large and HuggingFace’s Distil-Whisper-Large models for offline ASR and Any-to-English AST The NVIDIA Canary models for offline ASR, Any-to-English, English-to-Any, and Any-to-Any AST A new <dnt> SSML tag that tells a Megatron NMT model not to translate the enclosed text A new DNT dictionary that tells a Megatron NMT model how to translate specified words or phrases Automatic speech translation (AST) is the translation of speech in one language to text in another language without intermediate transcription in the first language. NVIDIA also released NIM microservice implementations of Whisper and Canary (both 1B and 0.6B-Turbo ) for optimized, modular, portable support of offline ASR and AST. NVIDIA Riva continues to support additional SOTA models and new architectures for both streaming and offline use cases, such as ASR-Translation (AST) models, S2S capabilities, and multilingual models. In the demos in this post, we focus on Whisper and Canary for offline ASR and AST, along with selectively deactivating and editing Megatron NMT with <dnt> SSML tags and DNT dictionaries. Riva multilingual offline ASR with Whisper and Canary for offline ASR Riva’s new support of Whisper for offline multilingual ASR enables you to transcribe audio recordings in dozens of languages . Whisper can also translate audio from any of the supported languages into English automatically, instead of transcribing the audio in the source language and subsequently translating the transcription to English. The config.sh script included in the NGC Riva Skills Quick Start resource folder provides everything that you need for launching a Riva server with Whisper capabilities. Ensure that the following variables are set as indicated: service_enabled_asr=true asr_acoustic_model=("whisper") # or "distil_whisper" for lower memory requirements asr_acoustic_model_variant=("large") # the default "" will probably also work riva_model_loc="<path/to/model/files/outside/container>" To launch a Riva server with Canary capabilities instead, set those variables as follows: service_enabled_asr=true asr_acoustic_model=("canary")  asr_acoustic_model_variant=("1b") # or "0.6_turbo" for faster inference riva_model_loc="<path/to/model/files/outside/container>" Run the riva_init.sh script provided in the same directory to download the models in RMIR form and deploy versions of those models optimized for your particular GPU architecture. Then run the riva_start.sh script to launch the Riva server. NIM microservice versions of Whisper and Canary (both 1B and 0.6B-Turbo ) are also available. To launch either the Whisper or Canary NIM microservice on your own system, choose the Docker tab of the model’s landing page and follow the instructions. 
In either case, you must generate an NGC API key and export it as an environmental variable, NGC_API_KEY . Here’s the docker run command for the Whisper NIM microservice: docker run -it --rm --name=riva-asr \ --runtime=nvidia \ --gpus '"device=0"' \ --shm-size=8GB \ -e NGC_API_KEY \ -e NIM_HTTP_API_PORT=9000 \ -e NIM_GRPC_API_PORT=50051 \ -p 9000:9000 \ -p 50051:50051 \ -e NIM_TAGS_SELECTOR=name=whisper-large-v3 \ nvcr.io/nim/nvidia/riva-asr:1.3.0 To run the Canary NIM microservice instead, replace whisper-large-v3 with canary-1b or canary-0-6b-turbo in the docker run command. Irrespective of the ASR or AST model used, running a NIM microservice on your own system in this manner leaves the terminal hanging. You must use a different terminal or a different interface entirely to run inference with the Whisper or Canary NIM microservice. Otherwise, the process is identical to running inference with a Riva server set up with the classic Riva SDK. When the Riva server is launched, you can submit inference calls to it with C++ or Python APIs. We use Python examples for the rest of this post. Import the Riva Python client module and connect to the Riva server as follows: import riva.client import riva.client.proto.riva_asr_pb2 as riva_asr uri = 'localhost:50051' auth = riva.client.Auth(uri=uri) Next, define a function like the following to transcribe audio files with Whisper or Canary: def run_ast_inference(audio_file, model, auth=auth, source_language='multi', target_language=None, print_full_response=False): assert model in ['whisper', 'canary'] # The 'multi' language code doesn't work with Canary, so change it if model == 'canary' and source_language == 'multi': source_language = 'en-US' # Ensure that the ASR/AST model is available model_available = False client = riva.client.ASRService(auth) config_response = client.stub.GetRivaSpeechRecognitionConfig(riva_asr.RivaSpeechRecognitionConfigRequest()) for model_config in config_response.model_config: model_name = model_config.model_name if model in model_name and 'offline' in model_name: model_available = True break assert model_available == True, f'Error: {model.capitalize()} ASR/AST is not available' # Read in the audio file with open(audio_file, 'rb') as fh: data = fh.read() config = riva.client.RecognitionConfig( language_code=source_language, max_alternatives=1, enable_automatic_punctuation=True, model=model_name, ) if target_language is not None: riva.client.add_custom_configuration_to_config(config, f'target_language:{target_language}') riva.client.add_custom_configuration_to_config(config, 'task:translate') response = client.offline_recognize(data, config) if print_full_response: print(response) else: print(response.results[0].alternatives[0].transcript) For the Riva 2.17.0 version of Whisper, you had to set the language_code parameter in the call to riva.client.RecognitionConfig to "en-US" , irrespective of the language of the audio file being transcribed. Likewise, if you wanted to tell Whisper to transcribe or translate from a particular language, you needed to pass in the source_language parameter by calling as follows: riva.client.add_custom_configuration_to_config(config, f'source_language:{source_language}') For Riva 2.18.0 and later, setting language_code='multi' in the call to riva.client.RecognitionConfig enables Whisper to automatically detect the language of the input audio file. On the other hand, Canary does not support automatic language detection, and won’t accept the 'multi' value for the language_code parameter. 
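Before submitting audio, it can help to confirm which offline ASR/AST models the server actually exposes. The following short Python sketch reuses the same GetRivaSpeechRecognitionConfig call as the availability check inside run_ast_inference; it assumes the auth object defined above and simply lists the reported offline model names (these contain "whisper" or "canary" when those models are deployed):

import riva.client
import riva.client.proto.riva_asr_pb2 as riva_asr

# Reuses the `auth` connection created earlier in this post.
client = riva.client.ASRService(auth)
config_response = client.stub.GetRivaSpeechRecognitionConfig(
    riva_asr.RivaSpeechRecognitionConfigRequest()
)

# Print every offline model the server reports, e.g., a Whisper or Canary variant.
offline_models = [
    model_config.model_name
    for model_config in config_response.model_config
    if 'offline' in model_config.model_name
]
print('Offline ASR/AST models on this server:')
for name in offline_models:
    print(f'  {name}')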
In the following demo video, one of us plays recordings of himself reading Article 1 of the Universal Declaration of Human Rights in both English and Swedish. The subsequent instructions for Whisper and Canary ASR and AST refer to recordings used in that video. Video 1. Riva Multilingual ASR With Whisper and Canary for Offline ASR Demo Pass the English-language recording into the inference function with otherwise default arguments as follows: response = run_ast_inference('udhr-english.wav', model='whisper') This yields the following accurate transcription: All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood. In turn, pass the Swedish-language recording into the inference function with otherwise default arguments as follows: response = run_ast_inference('udhr-swedish.wav', model='whisper') This yields the following accurate transcription: Alla människor är födda fria och lika i värde och rättigheter. De är utrustade med förnuft och samvete och bör handla gentemot varandra i en anda av broderskap. To tell Whisper to perform any-to-English AST, pass in the target_language parameter and the source_language parameter, if desired in the form of a language code consisting of two lowercase letters or a language code combined with a country code consisting of two capital letters, with a dash separating the two. To obtain the two-letter code for a given country, use the pycountry Python module as follows: pycountry.countries.search_fuzzy('<Country Name>') For example, you can obtain an English transcription of the Swedish audio file as follows: response = run_ast_inference('udhr-swedish.wav', model='whisper', target_language='en-US') This yields the following translation: All people are born free and equal in value and rights. They are equipped with reason and conscience and should act against each other in a spirit of brotherhood. Ideally, this translated text would be identical to the English version of Article 1 of the Universal Declaration of Human Rights. For the most part, it’s close enough. However, while the Swedish preposition “gentemot” can mean “against,” in this context, it should be translated as “towards.” As of this writing, Riva’s implementation of Whisper does not support streaming ASR or AST, English-to-Any AST, or Any-to-Any AST. Canary likewise supports both offline (but not streaming) ASR and AST. While it recognizes fewer languages than Whisper, it enables English-to-Any and Any-to-Any AST. For example, consider a recording of the German version of Article 1 of the UDHR: Alle Menschen sind frei und gleich an Würde und Rechten geboren. Sie sind mit Vernunft und Gewissen begabt und sollen einander im Geist der Brüderlichkeit begegnen. Run Canary AST on that recording as follows: response = run_ast_inference('udhr-german.wav', model='canary', source_language='de-DE', target_language='es-US') This yields the following Spanish translation: Todos los hombres nace libres e iguales en dignidad y derechos, dotados de razón y conciencia y deben enfrentarse en el espíritu de la fraternidad. For comparison, the official Spanish version of Article 1 of the UDHR is as follows: Todos los seres humanos nacen libres e iguales en dignidad y derechos y, dotados como están de razón y conciencia, deben comportarse fraternalmente los unos con los otros. 
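The pycountry module mentioned above can also be used to assemble the full language-country codes that Whisper and Canary expect. The helper below is a small illustrative sketch (the function name and example inputs are ours, not part of Riva); note that pycountry.languages.lookup raises LookupError for unrecognized names and that not every language entry carries a two-letter alpha_2 code:

import pycountry

def make_language_code(language_name, country_name=None):
    # e.g., 'Swedish' -> 'sv'; some languages only provide an alpha_3 code,
    # in which case this helper would need adjusting.
    language_code = pycountry.languages.lookup(language_name).alpha_2
    if country_name is None:
        return language_code
    # e.g., 'Sweden' -> 'SE'; search_fuzzy returns a list of candidate countries.
    country_code = pycountry.countries.search_fuzzy(country_name)[0].alpha_2
    return f'{language_code}-{country_code}'

print(make_language_code('Swedish', 'Sweden'))  # sv-SE
print(make_language_code('German', 'Germany'))  # de-DE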
<dnt> SSML tags and dictionaries for selectively deactivating NMT and providing preferred translations Riva 2.17.0 introduced <dnt> (that is, “do not translate”) SSML tags. Surrounding a word or phrase with a pair of <dnt> tags tells Riva not to translate it. Video 2. Riva Do Not Translate SSML Demo Riva 2.18.0 took the DNT concept a step further and enables you to upload entire dictionaries of preferred translations for words and phrases, including leaving them untranslated entirely. For both untranslated words and preferred translations, consider examples from Swedish and German, as one of us happens to speak those languages in addition to English. There are several reasons why you might not want a translation model to translate part of an input text: The text contains a proper name that has a meaning in the source language but is typically rendered untranslated in the target language. The target language lacks a precise equivalent to a given word or phrase in the source language. The Swedish adjective “lagom” is notoriously difficult to translate to English, but it means, approximately, “not too much, not too little, just right.” Oddly enough, dictionary.com lists a definition of “lagom” as a loanword into English. More curiously, it describes “lagom” as a noun in English, whereas in Swedish, it’s strictly an adjective. In the models_nmt field in the config.sh script included in the Riva Skills Quick Start resource folder, set and uncomment as follows: service_enabled_nmt=true "${riva_ngc_org}/${riva_ngc_team}/rmir_nmt_megatron_1b_any_any:${riva_ngc_model_version}" Next, import the Riva client Python module in a Python script, interpreter, or notebook and connect to the Riva server. Now, you can define a function like the following to run NMT inference: def run_nmt_inference(texts, model, source_language, target_language, dnt_phrases_dict=None, auth=auth): client = riva.client.NeuralMachineTranslationClient(auth) resp = client.translate(texts, model, source_language, target_language, dnt_phrases_dict) return [translation.text for translation in resp.translations] The following code example shows how to use <dnt> SSML tags to tell Riva NMT not to translate “lagom”: input_strings = [ 'Hur säger man <dnt>"lagom"</dnt> på engelska?' ] model_name = 'megatronnmt_any_any_1b' source_language = 'sv' target_language = 'en' translations = run_nmt_inference(input_strings, model_name, source_language, target_language) for i, translation in enumerate(translations): print(f'\tTranslation {i}: {translation}') This yields the following result: Translation 0: How to say "lagom" in English? Ideally, the translation should read, “How does one say ‘lagom’ in English?” or “How do you say ‘lagom’ in English?” You can achieve the same result with a dnt_phrases_dict dictionary, passed as the fifth argument to run_nmt_inference: input_strings = [ 'Hur säger man "lagom" på engelska?' ] dnt_phrases_dict = {"lagom": "lagom"} model_name = 'megatronnmt_any_any_1b' source_language = 'sv' target_language = 'en' translations = run_nmt_inference(input_strings, model_name, source_language, target_language, dnt_phrases_dict) for i, translation in enumerate(translations): print(f'\tTranslation {i}: {translation}') Again, this yields the same result: Translation 0: How to say "lagom" in English? For preferred translations, consider the Swedish noun “särskrivning” and the German equivalent “Getrenntschreibung.” English has no direct translation for these words.
Most Germanic languages other than English (including Swedish and German) make extensive use of compound words, particularly in the case of noun adjuncts (nouns used as adjectives). In both Swedish and German, noun adjuncts and the nouns which they modify form compound words. There is a tendency in both languages (partly due to English influence, partly due to typographers who believe that ending a line with a hyphen is aesthetically unappealing) to separate words which, according to current grammatical rules, should be joined. “Särskrivning” and “Getrenntschreibung,” both of which literally mean “separate-writing” or “separate-spelling,” are the respective Swedish and German words for this tendency and examples thereof. You can ask Riva to translate the Swedish sentence, “Särskrivningar förstörde mitt liv” (roughly speaking, “Särskrivningar [that is, the plural of särskrivning] ruined my life”) to German as follows. The following example uses a dictionary to indicate a preferred translation of “Särskrivningar” to “Getrenntschreibungen.” input_strings = [ 'Särskrivningar förstörde mitt liv.' ] dnt_phrases_dict = {"Särskrivningar": "Getrenntschreibungen"} model_name = 'megatronnmt_any_any_1b' source_language = 'sv' target_language = 'de' translations = run_nmt_inference(input_strings, model_name, source_language, target_language, dnt_phrases_dict) for i, translation in enumerate(translations): print(f'\tTranslation {i}: {translation}') This yields the following result: Translation 0: Getrenntschreibungen hat mein Leben ruiniert. The auxiliary verb form should be “haben” rather than “hat” in this example, as “särskrivningar” in the source text and “Getrenntschreibungen” in the translated text are plural nouns, but otherwise, this translation is sufficiently accurate. As of Riva 2.18.0, the megatronnmt_any_any_1b model now consists of 1.6B parameters and offers bidirectional translation support for 36 languages in total, four more than previous versions. For example, this model considers European and Latin American Spanish as separate languages, along with Simplified and Traditional Chinese. As such, the model now requires that some language codes be expressed as two lowercase letters (the previous standard language code) followed by a dash and two uppercase letters (representing the country). Under this system, European and Latin American Spanish are coded respectively as 'es-ES' and 'es-US' , while Simplified and Traditional Chinese are coded respectively as 'zh-CN' and 'zh-TW' . Languages that don’t require a country code still accept one. For example, you can tell Riva to use Swedish as a source or target language by passing in either 'sv' or 'sv-SE' to the appropriate parameter. Explore NGC’s Riva Skills Quick Start resource folder to launch a Riva server with NMT capabilities.
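As a final illustration of the region-specific codes discussed above, the following sketch reuses run_nmt_inference and the megatronnmt_any_any_1b model to request both European and Latin American Spanish renderings of the same English sentence (the example sentence is ours and purely illustrative):

# Compare European ('es-ES') and Latin American ('es-US') Spanish output
# for one English source sentence, using the function defined earlier.
input_strings = ['The weather here is neither too warm nor too cold.']
model_name = 'megatronnmt_any_any_1b'

for target_language in ['es-ES', 'es-US']:
    translations = run_nmt_inference(
        input_strings, model_name,
        source_language='en', target_language=target_language
    )
    print(f'{target_language}: {translations[0]}')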
https://developer.nvidia.com/zh-cn/blog/deploying-nvidia-riva-multilingual-asr-with-whisper-and-canary-architectures-while-selectively-deactivating-nmt/
使用 Whisper 和 Canary 架构部署 NVIDIA Riva 多语种 ASR,同时选择性地停用 NMT
NVIDIA 持续开发自动语音识别 (ASR) 模型,这些模型在业内树立了基准。 早期版本的 NVIDIA Riva 是一种适用于 ASR 、 TTS 和 NMT 的 GPU 加速语音和翻译 AI 微服务的集合,支持基于 Conformer 架构的英语-西班牙语和英语-日语代码交换 ASR 模型,以及基于 Parakeet 架构的支持 EMEA 地区多种语言(即英国英语、欧洲西班牙语、法语、意大利语、标准德语和亚美尼亚语)的模型。 最近,NVIDIA 发布了 Riva 2.18.0 容器和 SDK ,以不断改进其语音 AI 模型。在此新版本中,我们现在提供以下服务: 支持 Parakeet,即多语言流式传输 ASR 支持 OpenAI 的 Whisper-Large 和 Hugging Face 的 Distil- Whisper-Large 模型,适用于离线 ASR 和任意英语 AST NVIDIA Canary 模型适用于离线 ASR、Any-to-English、English-to-Any 和 Any-to-Any AST 新的 <dnt> SSML 标签,用于告知 Megatron NMT 模型不要翻译所包含的文本 新的 DNT 字典,可告知 Megatron NMT 模型如何翻译指定的单词或短语 自动语音翻译 (AST)是将一种语言的语音翻译为另一种语言的文本,而无需以第一种语言进行中间转录。 NVIDIA 还发布了 Whisper 和 Canary ( 1B 和 0.6B-Turbo ) 的 NIM 微服务实现,可为离线 ASR 和 AST 提供优化的模块化便携支持。NVIDIA Riva 将继续为串流和离线用例支持其他 SOTA 模型和新架构,例如 ASR-Translation (AST) 模型、S2S 功能和多语种模型。 在本文的演示中,我们重点介绍用于离线 ASR 和 AST 的 Whisper 和 Canary,以及使用 <dnt> SSML 标签和 DNT 字典选择性地停用和编辑 Megatron NMT。 Riva 多语种离线 ASR,支持 Whisper 和 Canary 离线 ASR Riva 新增了对离线多语种 ASR 的 Whisper 支持,使您能够以数 十种 语言 转录录音内容。Whisper 还可以自动将任何支持 语言 的音频翻译成英语,而无需使用源 语言 转录音频,并随后将转录翻译成英语。 NGC Riva 技能快速入门 资源文件夹中包含的 config.sh 脚本提供了启动具有 Whisper 功能的 Riva 服务器所需的一切。确保已按指示设置以下变量: service_enabled_asr=true asr_acoustic_model=("whisper") # or "distil_whisper" for lower memory requirements asr_acoustic_model_variant=("large") # the default "" will probably also work riva_model_loc="<path/to/model/files/outside/container>" 要启动具有 Canary 功能的 Riva 服务器,请按如下所示设置这些变量: service_enabled_asr=true asr_acoustic_model=("canary") asr_acoustic_model_variant=("1b") # or "0.6_turbo" for faster inference riva_model_loc="<path/to/model/files/outside/container>" 运行同一目录中提供的 riva_init.sh 脚本,以下载 RMIR 格式的模型,并部署针对您的特定 GPU 架构优化的模型版本。然后运行 riva_start.sh 脚本以启动 Riva 服务器。 此外,还提供 NIM 微服务版本的 Whisper 和 Canary( 1B 和 0.6B-Turbo )。要在您自己的系统上启动 Whisper 或 Canary NIM 微服务,请选择模型登陆页面的 Docker 选项卡,然后按照说明操作。在此过程中,您必须生成 NGC API 密钥 ,并将其导出为环境变量 NGC_API_KEY 。 以下是 Whisper NIM 微服务的 docker run 命令: docker run -it --rm --name=riva-asr \ --runtime=nvidia \ --gpus '"device=0"' \ --shm-size=8GB \ -e NGC_API_KEY \ -e NIM_HTTP_API_PORT=9000 \ -e NIM_GRPC_API_PORT=50051 \ -p 9000:9000 \ -p 50051:50051 \ -e NIM_TAGS_SELECTOR=name=whisper-large-v3 \ nvcr.io/nim/nvidia/riva-asr:1.3.0 要改为运行 Canary NIM 微服务,请在 docker run 命令中将 whisper-large-v3 替换为 canary-1b 或 canary-0-6b-turbo ,无论使用何种 ASR 或 AST 模型,以这种方式在您自己的系统上运行 NIM 微服务都会使终端挂起。您必须完全使用不同的终端或不同的接口,才能使用 Whisper 或 Canary NIM 微服务运行推理。否则,此过程与使用安装了经典 Riva SDK 的 Riva 服务器进行推理相同。 启动 Riva 服务器后,您可以使用 C++ 或 Python API 向其提交推理调用。我们在本文的其余部分中使用了 Python 示例。 导入 Riva Python 客户端模块并连接到 Riva 服务器,如下所示: import riva.client import riva.client.proto.riva_asr_pb2 as riva_asr uri = 'localhost:50051' auth = riva.client.Auth(uri=uri) 接下来,像下面这样定义一个函数,使用 Whisper 或 Canary 转录音频文件: def run_ast_inference(audio_file, model, auth=auth, source_language='multi', target_language=None, print_full_response=False): assert model in ['whisper', 'canary'] # The 'multi' language code doesn't work with Canary, so change it if model == 'canary' and source_language == 'multi': source_language = 'en-US' # Ensure that the ASR/AST model is available model_available = False client = riva.client.ASRService(auth) config_response = client.stub.GetRivaSpeechRecognitionConfig(riva_asr.RivaSpeechRecognitionConfigRequest()) for model_config in config_response.model_config: model_name = model_config.model_name if model in model_name and 'offline' in model_name: model_available = True break assert model_available == True, f'Error: {model.capitalize()} ASR/AST is not available' # Read in the audio file with open(audio_file, 'rb') as fh: data = fh.read() 
config = riva.client.RecognitionConfig( language_code=source_language, max_alternatives=1, enable_automatic_punctuation=True, model=model_name, ) if target_language is not None: riva.client.add_custom_configuration_to_config(config, f'target_language:{target_language}') riva.client.add_custom_configuration_to_config(config, 'task:translate') response = client.offline_recognize(data, config) if print_full_response: print(response) else: print(response.results[0].alternatives[0].transcript) 对于 Riva 2.17.0 版本的 Whisper,您必须将调用 riva.client.RecognitionConfig 时的 language_code 参数设置为 "en-US" ,而不管转录的音频文件使用哪种语言。 同样,如果您想让 Whisper 转录或翻译特定语言,您需要通过如下调用传入 source_language 参数: riva.client.add_custom_configuration_to_config(config, f'source_language:{source_language}') 对于 Riva 2.18.0 及更高版本,在调用 riva.client.RecognitionConfig 时设置 language_code='multi' 可让 Whisper 自动检测输入音频文件的语言。另一方面,Canary 不支持语言自动检测,也不接受 language_code 参数的 'multi' 值。 在下面的演示视频中,我们中的一位播放了自己朗读《世界人权宣言》第 1 条的英语和瑞典语录音。下文有关 Whisper 和 Canary ASR 及 AST 的说明均使用该视频中的录音。 视频 1、Riva 多语种 ASR (结合 Whisper 和 Canary 进行离线 ASR 演示) 将英语录音传入推理函数,其余参数保持默认值,如下所示: response = run_ast_inference('udhr-english.wav', model='whisper') 这将生成以下准确的转录: All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood. 接着将瑞典语录音传入推理函数,其余参数同样保持默认值: response = run_ast_inference('udhr-swedish.wav', model='whisper') 这将生成以下准确的转录: Alla människor är födda fria och lika i värde och rättigheter. De är utrustade med förnuft och samvete och bör handla gentemot varandra i en anda av broderskap. 要让 Whisper 执行任意语言到英语的 AST,请传入 target_language 参数,并视需要传入 source_language 参数,其形式为由两个小写字母组成的语言代码,或在语言代码后加破折号再接由两个大写字母组成的国家/地区代码。要获取给定国家/地区的双字母代码,请按如下所示使用 pycountry Python 模块: pycountry.countries.search_fuzzy('<Country Name>') 例如,您可以按如下方式获取瑞典语音频文件的英文翻译: response = run_ast_inference('udhr-swedish.wav', model='whisper', target_language='en-US') 这将生成以下翻译: All people are born free and equal in value and rights. They are equipped with reason and conscience and should act against each other in a spirit of brotherhood. 理想情况下,此翻译文本应与《世界人权宣言》第 1 条的英文版本相同。总体而言,两者已足够接近。不过,虽然瑞典语介词“gentemot”可以表示“against”,但在此上下文中应译为“towards”。 在撰写本文时,Riva 的 Whisper 实现不支持流式 ASR 或 AST,也不支持英语到任意语言或任意语言到任意语言的 AST。 Canary 同样支持离线 (但不支持流式) ASR 和 AST。虽然它能识别的语言比 Whisper 少,但支持英语到任意语言以及任意语言到任意语言的 AST。 例如,可以考虑德语版《世界人权宣言》第 1 条的录音: Alle Menschen sind frei und gleich an Würde und Rechten geboren. Sie sind mit Vernunft und Gewissen begabt und sollen einander im Geist der Brüderlichkeit begegnen. 按如下方式对该录音运行 Canary AST: response = run_ast_inference('udhr-german.wav', model='canary', source_language='de-DE', target_language='es-US') 这将生成以下西班牙语翻译: Todos los hombres nace libres e iguales en dignidad y derechos, dotados de razón y conciencia y deben enfrentarse en el espíritu de la fraternidad. 为便于比较,以下是《世界人权宣言》第 1 条的官方西班牙语版本: Todos los seres humanos nacen libres e iguales en dignidad y derechos y, dotados como están de razón y conciencia, deben comportarse fraternalmente los unos con los otros.
<dnt> SSML 标签和字典,用于选择性地停用 NMT 并提供首选翻译 Riva 2.17.0 引入了 <dnt> (即“请勿翻译”) SSML 标签。将单词或短语围绕在一组 <dnt> 标签中告知 Riva 不要将其翻译。 2 Riva SSML Riva 2.18.0 进一步提升了 DNT 概念,使您能够上传首选翻译的整个词和短语字典,包括完全不翻译的词语。对于您不希望翻译的词语和首选翻译,可以考虑瑞典语和德语的示例,因为我们中的一个人除了讲英语之外还讲这些语言。 您可能不希望翻译模型翻译部分输入文本的原因有以下几个: 文本包含正确的名称,该名称在源语言中具有含义,但通常不会翻译为目标语言。 目标语言缺乏与源语言中给定单词或短语的精确等价物。 众所周知,瑞典语形容词“lagom”很难翻译成英语,但它的意思大概是“ 不太多,不太少,没错 ”。很奇怪,dictionary.com 将“lagom”的定义列为英语借词。更令人好奇的是,它将“lagom”描述为英语中的名词,而在瑞典语中,它严格来说是一个形容词。 在“Riva Skills Quick Start”资源文件夹中包含的 config.sh 脚本的 models_nmt 字段中,按如下所示设置和取消注释: service_enabled_nmt=true "${riva_ngc_org}/${riva_ngc_team}/rmir_nmt_megatron_1b_any_any:${riva_ngc_model_version}" 接下来,在 Python 脚本、解释器或 notebook 中导入 Riva 客户端 Python 模块,并连接到 Riva 服务器。现在,您可以定义如下函数以运行 NMT 推理: def run_nmt_inference(texts, model, source_language, target_language, dnt_phrases_dict=None, auth=auth): client = riva.client.NeuralMachineTranslationClient(auth) resp = client.translate(texts, model, source_language, target_language, dnt_phrases_dict) return [translation.text for translation in resp.translations] 以下代码示例展示了如何使用 <dnt> SSML 标签告诉 Riva NMT 不要翻译“lagom”。 input_strings = [ 'Hur säger man <dnt>"lagom"</dnt> på engelska?' ] model_name = 'megatronnmt_any_any_1b' source_language = 'sv' target_language = 'en' translations = run_nmt_inference(input_strings, model_name, source_language, target_language) for i, translation in enumerate(translations): print(f'\tTranslation {i}: {translation}') 这将生成以下结果: Translation 0: How to say "lagom" in English? 理想情况下,译文应为:“How does one say ‘lagom’ in English?”或“How do you say ‘lagom’ in English?” 您可以使用 dnt_phrases_dict 字典获得相同的结果: input_strings = [ 'Hur säger man "lagom" på engelska?' ] dnt_phrases_dict = {"lagom": "lagom"} model_name = 'megatronnmt_any_any_1b' source_language = 'sv' target_language = 'en' translations = run_nmt_inference(input_strings, model_name, source_language, target_language) for i, translation in enumerate(translations): print(f'\tTranslation {i}: {translation}') 同样,这也会产生相同的结果: Translation 0: How to say "lagom" in English? 对于首选翻译,可以考虑使用瑞典语名词“särskrivning”和对应的德语“Getrenntschreibung”。这些词语没有直接的英语翻译。 英语以外的大多数日耳曼语言 (包括瑞典语和德语) 广泛使用合成词,尤其是名词辅助词 (用作形容词的名词)。在瑞典语和德语中,名词辅助词及其修改的名词构成合成词。这两种语言中都存在一种倾向 (部分原因是受英语的影响,部分原因是排版师认为用连字符结束一行在美学上没有吸引力),即分离词语,而根据现行语法规则,这些词语应合并在一起。 “Särskrivning”和“Getrenntschreibung”的字面意思都是“单独编写”或“单独拼写”,它们分别是对应这一趋势的瑞典语和德语单词及其示例。 您可以让 Riva 将瑞典语句子“Särskrivningar förstörde mitt liv”(大致而言,“Särskrivningar【即,särskrivning 的复数形式】ruined my life”) 翻译为德语,具体如下。以下示例使用字典指明首选的翻译版本,即“Särskrivningar”和“Getrenntschreibungen”。 input_strings = [ 'Särskrivningar förstörde mitt liv.' ] dnt_phrases_dict = {"Särskrivningar": "Getrenntschreibungen"} model_name = 'megatronnmt_any_any_1b' source_language = 'sv' target_language = 'de' translations = run_nmt_inference(input_strings, model_name, source_language, target_language) for i, translation in enumerate(translations): print(f'\tTranslation {i}: {translation}') 这将生成以下结果: Translation 0: Getrenntschreibungen hat mein Leben ruiniert. 在本示例中,辅助动词形式应为“haben”而非“hat”,因为源文本中的“särskrivningar”和翻译文本中的“Getrenntschreibungen”均为复数名词,否则,此翻译就足够准确。 从 Riva 2.18.0 开始, megatron_any_any_1b 模型现在由 1.6B 个参数组成,总共为 36 种语言提供双向翻译支持,比之前的版本多 4 种。例如,此模型将欧洲和拉丁美洲西班牙语与简体中文和繁体中文一起视为单独的语言。 因此,该模型现在要求将某些语言代码表示为两个小写字母(之前的标准语言代码),然后是破折号和两个大写字母(代表国家/地区)。 在此系统下,欧洲和拉丁美洲的西班牙语分别编码为 'es-ES' 和 'es-US' ,而简体中文和繁体中文分别编码为 'zh-CN' 和 'zh-TW' 。语言不需要同时使用语言和国家代码仍然支持该模式。例如,您可以通过将 'sv' 或 'sv-SE' 传入适当的参数,让 Riva 将瑞典语用作源语言或目标语言。 探索 NGC 的 Riva Skills Quick Start 资源文件夹,启动具有 NMT 功能的 Riva 服务器。
https://developer.nvidia.com/blog/improving-translation-quality-with-domain-specific-fine-tuning-and-nvidia-nim/
Improving Translation Quality with Domain-Specific Fine-Tuning and NVIDIA NIM
Translation plays an essential role in enabling companies to expand across borders, with requirements varying significantly in terms of tone, accuracy, and technical terminology handling. The emergence of sovereign AI has highlighted critical challenges in large language models (LLMs) , particularly their struggle to capture nuanced cultural and linguistic contexts beyond English-dominant frameworks. As global communication becomes increasingly complex, organizations must carefully evaluate translation solutions that balance technological efficiency with cultural sensitivity and linguistic precision. In this post, we explore how LLMs can address the following two distinct English to Traditional Chinese translation use cases: Marketing content for websites : Translating technical text with precision while maintaining a natural promotional tone. Online training courses: Translating slide text and markdown content used in platforms like Jupyter Notebooks, ensuring accurate technical translation and proper markdown formatting such as headings, sections, and hyperlinks. These use cases require a specialized approach beyond general translation. While prompt engineering with instruction-tuned LLMs can handle certain contexts, more refined tasks like these often do not meet expectations. This is where fine-tuning Low-Rank Adaptation (LoRA) adapters separately on collected datasets specific to each translation context becomes essential. Implementing LoRA adapters for domain-specific translation For this project, we are using Llama 3.1 8B Instruct as the pretrained model and implementing two models fine-tuned with LoRA adapters using NVIDIA NeMo Framework . These adapters were trained on domain-specific datasets—one for marketing website content and one for online training courses. For easy deployment of LLMs with simultaneous use of multiple LoRA adapters on the same pretrained model, we are using NVIDIA NIM . Refer to the Jupyter Notebook to guide you through executing LoRA fine-tuning with NeMo. Optimizing LLM deployment with LoRA and NVIDIA NIM NVIDIA NIM introduces a new level of performance, reliability, agility, and control for deploying professional LLM services. With prebuilt containers and optimized model engines tailored for different GPU types, you can easily deploy LLMs while boosting service performance. In addition to popular pretrained models including the Meta Llama 3 family and Mistral AI Mistral and Mixtral models, you can integrate and fine-tune your own models with NIM, further enhancing its capabilities. LoRA is a powerful customization technique that enables efficient fine-tuning by adjusting only a subset of the model’s parameters. This significantly reduces required computational resources. LoRA has become popular due to its effectiveness and efficiency. Unlike full-parameter fine-tuning, LoRA adapter weights are smaller and can be stored separately from the pretrained model, providing greater flexibility in deployment. NVIDIA TensorRT-LLM has established a mechanism that can simultaneously serve multiple LoRA adapters on the same pretrained model. This multi-adapter mechanism is also supported by NIM . The following sections demonstrate the advantages of these features and application in multipurpose translation tasks. Step-by-step LoRA fine-tuning deployment with NVIDIA LLM NIM This section describes the three steps involved in LoRA fine-tuning deployment using NVIDIA LLM NIM. 
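Before walking through those steps, here is a minimal numerical sketch of the low-rank update that LoRA applies. This illustrates the general technique rather than the NeMo implementation: only the two small factors A and B are trained and shipped, which is why adapter files stay small enough to serve several of them alongside one shared base model.

import numpy as np

# Toy dimensions for a single weight matrix of a pretrained model.
d_out, d_in, rank, alpha = 1024, 1024, 16, 32

W = np.random.randn(d_out, d_in)        # frozen pretrained weight (never updated)
A = np.random.randn(rank, d_in) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, rank))             # trainable low-rank factor, initialized to zero

# Effective weight used at inference: base weight plus the scaled low-rank update.
W_eff = W + (alpha / rank) * (B @ A)

# Only A and B ship in the adapter: roughly 2*rank*d values instead of d*d.
print(f"adapter values: {A.size + B.size:,} vs full matrix: {W.size:,}")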
Step 1: Set up the NIM instance and LoRA models First, launch a computational instance equipped with two NVIDIA L40S GPUs as recommended in the NIM support matrix . Next, upload the two fine-tuned NeMo files to this environment. Detailed examples of LoRA fine-tuning using NeMo Framework are available in the official documentation and a Jupyter Notebook . To organize the environment, use the following command to create directories for storing the LoRA adapters: $ mkdir -p loras/llama-3.1-8b-translate-course $ mkdir -p loras/llama-3.1-8b-translate-web $ export LOCAL_PEFT_DIRECTORY=$(pwd)/loras $ chmod -R 777 $(pwd)/loras $ tree loras loras ├── llama-3.1-8b-translate-course │ └── course.nemo └── llama-3.1-8b-translate-web └── web.nemo 2 directories, 2 files Step 2: Deploy NIM and LoRA models Now, you can proceed to deploy the NIM container. Replace <NGC_API_KEY> with your actual NGC API token. Generate an API key if needed. Then run the following commands: $ export NGC_API_KEY=<NGC_API_KEY> $ export LOCAL_PEFT_DIRECTORY=$(pwd)/loras $ export NIM_PEFT_SOURCE=/home/nvs/loras $ export CONTAINER_NAME=nim-llama-3.1-8b-instruct $ export NIM_CACHE_PATH=$(pwd)/nim-cache $ mkdir -p "$NIM_CACHE_PATH" $ chmod -R 777 $NIM_CACHE_PATH $ echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin $ docker run -it --rm --name=$CONTAINER_NAME \ --runtime=nvidia \ --gpus all \ --shm-size=16GB \ -e NGC_API_KEY=$NGC_API_KEY \ -e NIM_PEFT_SOURCE \ -v $NIM_CACHE_PATH:/opt/nim/.cache \ -v $LOCAL_PEFT_DIRECTORY:$NIM_PEFT_SOURCE \ -p 8000:8000 \ nvcr.io/nim/meta/llama-3.1-8b-instruct:1.1.2 After executing these steps, NIM will load the model. Once complete, you can check the health status and retrieve the model names for both the pretrained model and LoRA models using the following commands: # NIM health status $ curl http://<INSTANCE_URL>:8000/v1/health/ready # Get model names of the base model and LoRA models $ curl http://<INSTANCE_URL>:8000/v1/models | jq { "data" : [ { "created" : 1725516389, "id" : "meta/llama-3.1-8b-instruct", "max_model_len" : 131072, "object" : "model", "owned_by" : "system", "parent" : null, "permission" : [ { "allow_create_engine" : false, "allow_fine_tuning" : false, "allow_logprobs" : true, "allow_sampling" : true, "allow_search_indices" : false, "allow_view" : true, "created" : 1725516389, "group" : null, "id" : "modelperm-2274791587e4456b9ce921621377becb", "is_blocking" : false, "object" : "model_permission", "organization" : "*" } ], "root" : "meta/llama-3.1-8b-instruct" }, { "created" : 1725516389, "id" : "llama-3.1-8b-translate-course", "max_model_len" : null, "object" : "model", "owned_by" : "system", "parent" : null, "permission" : [ { "allow_create_engine" : false, "allow_fine_tuning" : false, "allow_logprobs" : true, "allow_sampling" : true, "allow_search_indices" : false, "allow_view" : true, "created" : 1725516389, "group" : null, "id" : "modelperm-cb8be2bce8db442d8347f259966e2c02", "is_blocking" : false, "object" : "model_permission", "organization" : "*" } ], "root" : "meta/llama-3.1-8b-instruct" }, { "created" : 1725516389, "id" : "llama-3.1-8b-translate-web", "max_model_len" : null, "object" : "model", "owned_by" : "system", "parent" : null, "permission" : [ { "allow_create_engine" : false, "allow_fine_tuning" : false, "allow_logprobs" : true, "allow_sampling" : true, "allow_search_indices" : false, "allow_view" : true, "created" : 1725516389, "group" : null, "id" : "modelperm-8e404c4d9f504e5fae92bf6caf04e93c", "is_blocking" : false, "object" : 
"model_permission", "organization" : "*" } ], "root" : "meta/llama-3.1-8b-instruct" } ], "object" : "list" } The output will display details of the models available for deployment. Step 3: Evaluate translation quality of fine-tuned LoRA models With NIM running, you can use it to perform English to Traditional Chinese translation, specifying the appropriate LoRA model name in the request body. Fine-tuning results and performance metrics We evaluated the translation quality of two test datasets using the pretrained model and the two fine-tuned LoRA models. The BLEU and COMET scores from these evaluations are shown in Figures 1 and 2. Figure 1. BLEU scores (higher is better) of different test datasets using the base model and two LoRA fine-tuned models Figure 2. COMET scores (higher is better) of different test datasets using the base model and two LoRA models These results demonstrate that fine-tuning specific datasets using the LoRA mechanism significantly enhances translation quality within their respective domains. To achieve the best performance, domain-specific datasets should leverage their corresponding fine-tuned LoRA models. The LoRA model fine-tuned on the web dataset demonstrated a more noticeable improvement in evaluation scores for web-related translations. Table 1 provides a comparison of translation examples. Source Text Pretrained Model Output LoRA Fine-Tuned Model (on Web Dataset) Output NVIDIA extends lead on MLPerf inference benchmark for computer vision, conversational AI, and recommender workloads. NVIDIA 在計算視覺、對話式人工智慧和建議系統工作負載方面,領先於 MLPerf 推理基準。 NVIDIA 在電腦視覺、對話式人工智慧和推薦系統工作負載的 MLPerf 推論基準中擴大領先地位。 Built on NVIDIA Omniverse, NVIDIA RTX Remix enables modders to easily capture game assets, automatically enhance materials with generative AI tools, and quickly create stunning RTX remasters with full ray tracing and DLSS 3. 在 NVIDIA Omniverse 上建立,RTX Remix 允許模組開發者輕鬆捕捉遊戲資源、自動使用生成 AI 工具增強材質,並快速創建具有全實時光線追蹤和 DLSS 3 的 RTX 重製版。 RTX Remix 是以 NVIDIA Omniverse 為基礎,讓模組玩家能輕鬆擷取遊戲素材、使用生成式人工智慧工具自動增強材質,並快速創造出充分採用全光線追蹤技術和 DLSS 3 的令人驚豔的 RTX 重製版。 Note that the prompt template is as follows: <|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are an AI assistant who translates an English context to traditional Chinese. The context will be given after a prompt “Translate the following input text to traditional Chinese”. Never respond text other than translation results.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nTranslate the following input text to traditional Chinese.\n\n {English Text} .\n<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n Table 1. Sentence translation comparison for the web dataset Explore further with NVIDIA NIM and LoRA fine-tuning Fine-tuning LoRA adapters separately on marketing content from websites and online training course datasets significantly improved translation quality. This demonstrates that domain-specific datasets achieve better results when paired with their own LoRA models, which efficiently adjust the weights of pretrained models to enhance performance. Deploying these fine-tuned models within a single NVIDIA NIM instance provides a GPU-efficient solution for serving multiple specialized tasks simultaneously. Ready to take it further? Explore how NVIDIA NIM microservices can help you deploy and fine-tune LLMs for your specific tasks. 
With NVIDIA NeMo, you can fine-tune popular models such as Llama 3, Mistral, and Phi using LoRA adapters, unlocking greater development efficiency and enhancing the performance of your applications.
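As a concrete reference for Step 3 above, a translation request against the running NIM instance might look like the following sketch. It assumes the OpenAI-compatible chat completions endpoint exposed by NIM, the instance URL placeholder and LoRA model names from the earlier steps, and a prompt that loosely mirrors the template described with Table 1; adjust the URL and decoding parameters for your deployment.

import requests

url = "http://<INSTANCE_URL>:8000/v1/chat/completions"  # NIM's OpenAI-compatible endpoint
payload = {
    # Use the LoRA model name reported by /v1/models to route to that adapter;
    # "meta/llama-3.1-8b-instruct" would hit the base model instead.
    "model": "llama-3.1-8b-translate-web",
    "messages": [
        {"role": "system", "content": (
            "You are an AI assistant who translates an English context to traditional Chinese. "
            "Never respond with text other than the translation result.")},
        {"role": "user", "content": (
            "Translate the following input text to traditional Chinese.\n\n"
            "NVIDIA extends lead on MLPerf inference benchmark.")},
    ],
    "max_tokens": 256,
    "temperature": 0.0,
}
response = requests.post(url, json=payload)
print(response.json()["choices"][0]["message"]["content"])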
https://developer.nvidia.com/zh-cn/blog/improving-translation-quality-with-domain-specific-fine-tuning-and-nvidia-nim/
利用特定领域的微调和 NVIDIA NIM 提高翻译质量
翻译在助力公司实现跨国业务扩展方面发挥着至关重要的作用,在语调、准确性和技术术语处理方面的要求差异很大。 主权 AI 的出现凸显了 大语言模型(LLM) 面临的严峻挑战,尤其是他们难以捕捉英语主导框架之外的细微文化和语言背景。随着全球通信变得日益复杂,组织必须仔细评估翻译解决方案,以平衡技术效率与文化敏感性和语言准确性。 在本文中,我们将探讨 LLMs 如何解决以下两种截然不同的英语到繁体中文翻译用例: 网站营销内容:精确翻译技术文本,同时保持自然的宣传基调。 在 线培训课程 :翻译 Jupyter Notebooks 等平台中使用的幻灯片文本和 markdown 内容,确保准确的技术翻译和正确的 markdown 格式 (如标题、部分和超链接)。 这些用例需要采用常规翻译以外的专门方法。虽然 使用指令调整 LLMs 进行提示工程 可以处理某些情境,但此类更精细的任务通常无法达到预期效果。因此,在针对每个翻译环境收集的特定数据集上单独微调 Low-Rank Adaptation(LoRA)适配器变得至关重要。 实施 LoRA 适配器以实现特定领域的翻译 在此项目中,我们将使用 Llama 3.1 8B Instruct 作为预训练模型,并使用 NVIDIA NeMo Framework 实现两个通过 LoRA adapters 微调的模型。这些 adapters 基于特定领域的数据集进行训练, 一个用于营销网站内容,另一个用于在线培训课程 。为了在同一预训练模型上同时使用多个 LoRA adapters 来轻松部署 LLMs,我们使用 NVIDIA NIM 。 请参阅 Jupyter Notebook ,了解如何使用 NeMo 执行 LoRA 微调。 使用 LoRA 和 NVIDIA NIM 优化 LLM 部署 NVIDIA NIM 为部署专业 LLM 服务引入了更高水平的性能、可靠性、敏捷性和控制力。借助针对不同 GPU 类型定制的预构建容器和优化模型引擎,您可以轻松部署 LLM,同时提高服务性能。除了 Meta Llama 3 系列以及 Mistral AI Mistral 和 Mixtral 模型等热门预训练模型外,您还可以使用 NIM 集成和微调自己的模型,从而进一步增强其功能。 LoRA 是一种强大的定制技术,仅通过调整模型的一部分参数即可实现高效的微调。这显著减少了所需的计算资源。LoRA 因其有效性和效率而广受欢迎。与全参数微调不同,LoRA 适配器权重更小,并且可以与预训练模型分开存储,从而提高部署的灵活性。 NVIDIA TensorRT-LLM 建立了一种机制,可在同一预训练模型上同时为 多个 LoRA 适配器 提供服务。NIM 也支持这种 多适配器机制 。 以下各节将展示这些功能的优势以及在多用途翻译任务中的应用。 借助 NVIDIA LLM NIM 逐步实现 LoRA 微调部署 本节将介绍使用 NVIDIA LLM NIM 进行 LoRA 微调部署所涉及的三个步骤。 第 1 步:设置 NIM 实例和 LoRA 模型 首先, 按照 NIM 支持矩阵中的建议 ,启动配备两个 NVIDIA L40S GPUs 的计算实例。 接下来,将两个经过微调的 NeMo 文件上传至此环境。有关使用 NeMo Framework 进行 LoRA 微调的详细示例,请参阅 官方文档 和 Jupyter Notebook 。 要组织环境,请使用以下命令创建用于存储 LoRA 适配器的目录: $ mkdir -p loras/llama-3.1-8b-translate-course $ mkdir -p loras/llama-3.1-8b-translate-web $ export LOCAL_PEFT_DIRECTORY=$(pwd)/loras $ chmod -R 777 $(pwd)/loras $ tree loras loras ├── llama-3.1-8b-translate-course │ └── course.nemo └── llama-3.1-8b-translate-web └── web.nemo 2 directories, 2 files 第 2 步:部署 NIM 和 LoRA 模型 现在,您可以继续部署 NIM 容器。将 替换为您的实际 NGC API 令牌。如有需要,请生成 API 密钥。然后运行以下命令: $ export NGC_API_KEY=<NGC_API_KEY> $ export LOCAL_PEFT_DIRECTORY=$(pwd)/loras $ export NIM_PEFT_SOURCE=/home/nvs/loras $ export CONTAINER_NAME=nim-llama-3.1-8b-instruct $ export NIM_CACHE_PATH=$(pwd)/nim-cache $ mkdir -p "$NIM_CACHE_PATH" $ chmod -R 777 $NIM_CACHE_PATH $ echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin $ docker run -it --rm --name=$CONTAINER_NAME \ --runtime=nvidia \ --gpus all \ --shm-size=16GB \ -e NGC_API_KEY=$NGC_API_KEY \ -e NIM_PEFT_SOURCE \ -v $NIM_CACHE_PATH:/opt/nim/.cache \ -v $LOCAL_PEFT_DIRECTORY:$NIM_PEFT_SOURCE \ -p 8000:8000 \ nvcr.io/nim/meta/llama-3.1-8b-instruct:1.1.2 执行这些步骤后,NIM 将加载模型。完成后,您可以使用以下命令查看运行状况,并检索预训练模型和 LoRA 模型的模型名称: # NIM health status $ curl http://<INSTANCE_URL>:8000/v1/health/ready # Get model names of the base model and LoRA models $ curl http://<INSTANCE_URL>:8000/v1/models | jq { "data" : [ { "created" : 1725516389, "id" : "meta/llama-3.1-8b-instruct", "max_model_len" : 131072, "object" : "model", "owned_by" : "system", "parent" : null, "permission" : [ { "allow_create_engine" : false, "allow_fine_tuning" : false, "allow_logprobs" : true, "allow_sampling" : true, "allow_search_indices" : false, "allow_view" : true, "created" : 1725516389, "group" : null, "id" : "modelperm-2274791587e4456b9ce921621377becb", "is_blocking" : false, "object" : "model_permission", "organization" : "*" } ], "root" : "meta/llama-3.1-8b-instruct" }, { "created" : 1725516389, "id" : "llama-3.1-8b-translate-course", "max_model_len" : null, "object" : "model", "owned_by" : "system", "parent" : null, "permission" : [ { "allow_create_engine" : false, "allow_fine_tuning" : false, "allow_logprobs" : true, 
"allow_sampling" : true, "allow_search_indices" : false, "allow_view" : true, "created" : 1725516389, "group" : null, "id" : "modelperm-cb8be2bce8db442d8347f259966e2c02", "is_blocking" : false, "object" : "model_permission", "organization" : "*" } ], "root" : "meta/llama-3.1-8b-instruct" }, { "created" : 1725516389, "id" : "llama-3.1-8b-translate-web", "max_model_len" : null, "object" : "model", "owned_by" : "system", "parent" : null, "permission" : [ { "allow_create_engine" : false, "allow_fine_tuning" : false, "allow_logprobs" : true, "allow_sampling" : true, "allow_search_indices" : false, "allow_view" : true, "created" : 1725516389, "group" : null, "id" : "modelperm-8e404c4d9f504e5fae92bf6caf04e93c", "is_blocking" : false, "object" : "model_permission", "organization" : "*" } ], "root" : "meta/llama-3.1-8b-instruct" } ], "object" : "list" } 输出将显示可用于部署的模型的详细信息。 第 3 步:评估微调 LoRA 模型的翻译质量 运行 NIM 后,您可以使用 NIM 执行英语到繁体中文的翻译,并在请求正文中指定适当的 LoRA 模型名称。 微调结果和性能指标 我们使用预训练模型和两个经过微调的 LoRA 模型评估了两个测试数据集的翻译质量。这些评估的 BLEU 和 COMET 分数如图 1 和 2 所示。 图 1、使用基础模型和两个 LoRA 微调模型的不同测试数据集的 BLEU 分数 (越高越好) 图 2、使用基础模型和两个 LoRA 模型的不同测试数据集的 COMET 分数 (越高越好) 这些结果表明,使用 LoRA 机制微调特定数据集可显著提高其各自领域内的翻译质量。为了获得最佳性能,特定领域的数据集应利用其相应的微调 LoRA 模型。 在 Web 数据集上微调的 LoRA 模型显示,与 Web 相关的翻译的评估分数有了更显著的提高。表 1 对比了翻译示例。 源文本 预训练模型输出 LoRA 微调模型 (基于 Web 数据集) 输出 NVIDIA 在计算机视觉、对话式 AI 和推荐系统工作负载的 MLPerf 推理基准测试中占据领先地位。 NVIDIA 在計算視覺、對話式人工智慧和建議系統工作負載方面,領先於 MLPerf 推理基準。 NVIDIA 在電腦視覺、對話式人工智慧和推薦系統工作負載的 MLPerf 推論基準中擴大領先地位。 NVIDIA RTX Remix 基于 NVIDIA Omniverse 构建,使 modders 能够轻松捕获游戏资产,使用生成式 AI 工具自动增强材质,并快速创建令人惊艳的 RTX 重制版游戏,具有全景光线追踪和 DLSS 3。 在 NVIDIA Omniverse 上建立,RTX Remix 允许模块开发者轻松捕捉游戏资源、自动使用生成 AI 工具增强材质,并快速创建具有全实时光线追踪和 DLSS 3 的 RTX 重制版。 RTX Remix 是以 NVIDIA Omniverse 为基础,让模块玩家能轻松擷取游戏素材、使用生成式人工智慧工具自动增强材质,并快速创造出充分采用全光线追踪技术和 DLSS 3 的令人惊豔的 RTX 重製版。 请注意,提示模板如下所示:*|begin_of_text|>*|start_header_id|> 系统*|end_header_id|> \ n \ n <|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are an AI assistant who translates an English context to traditional Chinese. The context will be given after a prompt “Translate the following input text to traditional Chinese”. Never respond text other than translation results.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nTranslate the following input text to traditional Chinese.\n\n .\n<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n 系统将在提示“Translate the following input text to traditional chinese”(将以下 输入文本翻译成繁体中文 ) 后提供上下文。切勿回复翻译结果以外的文本。*|eot_id|>*|start_header_id|> 用户*|end_header_id|> \ n \ n 将以下输入文本翻译成繁体中文。\ n \ n {English Text} .\ n}|eot_id|>*|start_header_id|> 助手*|end_header_id|> \ n \ n 表 1、Web 数据集的句子翻译比较 通过 NVIDIA NIM 和 LoRA 微调进一步探索 分别在网站和在线培训课程数据集的营销内容上微调 LoRA 适配器 ,可显著提高翻译质量。这表明,特定领域的数据集在与自己的 LoRA 模型搭配使用时可获得更好的结果,从而高效调整预训练模型的权重以提高性能。通过在单个 NVIDIA NIM 实例中部署这些经过微调的模型,可提供高效利用 GPU 的解决方案,从而同时为多个专业任务提供服务。 准备好更进一步了吗?探索 NVIDIA NIM 微服务 如何帮助您针对特定任务部署和微调 LLM。借助 NVIDIA NeMo ,您可以使用 LoRA 适配器 微调热门模型(例如 Llama 3 、 Mistral 和 Phi ),从而提高开发效率并增强应用性能。
https://developer.nvidia.com/blog/announcing-nemotron-cc-a-trillion-token-english-language-dataset-for-llm-pretraining/
Announcing Nemotron-CC: A Trillion-Token English Language Dataset for LLM Pretraining
NVIDIA is excited to announce the release of Nemotron-CC, a 6.3-trillion-token English language Common Crawl dataset for pretraining highly accurate large language models (LLMs) , including 1.9 trillion tokens of synthetically generated data. One of the keys to training state-of-the-art LLMs is a high-quality pretraining dataset, and recent top LLMs, such as the Meta Llama series, were trained on vast amounts of data comprising 15 trillion tokens. But little is known about the exact composition of these 15 trillion tokens. Nemotron-CC aims to remedy this and enable the wider community to train highly accurate LLMs. Internet crawl data, typically from Common Crawl, is generally the largest source of tokens. Recent open Common Crawl datasets, such as FineWeb-Edu and DCLM, have shown how to greatly improve benchmark accuracies over relatively short token horizons. However, this has been accomplished at the cost of removing 90% of data. This limits the suitability for long token horizon training, such as 15 trillion tokens for Llama 3.1. Nemotron-CC fills this gap and shows how to transform Common Crawl data into a high-quality dataset suitable for training LLMs better than Llama 3.1 8B through a combination of classifier ensembling, synthetic data rephrasing, and reduced reliance on heuristic filters. Results Shown in Figure 1 are MMLU scores when training 8B parameter models for 1 trillion tokens, varying only the 73% English Common Crawl portion of the training data. Compared to the leading open English Common Crawl dataset DCLM, the high-quality subset Nemotron-CC-HQ increases the MMLU by +5.6. Figure 1. MMLU scores for 8B parameter models trained for 1 trillion tokens Furthermore, the full 6.3-trillion-token dataset matches DCLM on MMLU, but contains four times more unique real tokens. This unlocks effective training over a long token horizon: an 8 billion parameter model trained for 15 trillion tokens, of which 7.2 trillion came from Nemotron-CC, is better than the Llama 3.1 8B model: +5 on MMLU, +3.1 on ARC-Challenge, and +0.5 on average across ten diverse tasks. Key insights Some of the key insights that led to these results include: Ensembling different model-based classifiers can help select a larger and more diverse set of high quality tokens. Rephrasing can effectively reduce noise and errors in low-quality data and produce diverse variants with fresh unique tokens from high-quality data, leading to better results in downstream tasks. Disabling traditional non-learned heuristic filters for high-quality data can further boost high quality token yield without hurting accuracy. Data curation steps Using NVIDIA NeMo Curator , we extracted and cleaned data from Common Crawl and then: Filtered it for the English language Performed global fuzzy deduplication as well as exact substring deduplication Leveraged model-based filters such as DCLM, fineweb-edu for quality classification Applied various heuristic and perplexity filters to further remove lower-quality data We also leveraged synthetic data generation pipelines to generate ~2 trillion tokens of synthetic data. The full recipe including the synthetic data generation pipelines will be merged into the NVIDIA/NeMo-Curator GitHub repo soon. To receive updates, star the repo. Conclusion Nemotron-CC is an open, large, high-quality English Common Crawl dataset that enables pretraining highly accurate LLMs over both short and long token horizons. 
In the future, we hope to release more datasets that are key ingredients for state-of-the-art LLM pretraining, such as a specialized math pretraining dataset. Download the dataset from Common Crawl. Use NeMo Curator to curate your own datasets. Learn more about the technical details in Nemotron-CC: Transforming Common Crawl into a Refined Long-Horizon Pretraining Dataset. Acknowledgments We thank the Common Crawl Foundation for hosting the dataset. We thank Pedro Ortiz Suarez for valuable feedback that improved the paper and Greg Lindahl for help with improving the data formatting and layout.
https://developer.nvidia.com/zh-cn/blog/announcing-nemotron-cc-a-trillion-token-english-language-dataset-for-llm-pretraining/
宣布推出 Nemotron-CC:用于 LLM 预训练的万亿级英语语言数据集
NVIDIA 很高兴地宣布发布 Nemotron-CC,这是一种包含 6.3 万亿个令牌的 英语语言 Common Crawl 数据集 ,用于预训练高度准确的 大语言模型 (LLMs) ,其中包括 1.9 万亿个令牌的合成生成数据。高质量预训练数据集是训练先进 LLM 的关键之一,近期的顶级 LLM (例如 Meta Llama 系列 ) 基于包含 15 万亿个令牌的大量数据进行了训练。 但我们对这 15 万亿个令牌的确切构成知之甚少。Nemotron-CC 旨在解决这一问题,并使更广泛的社区能够训练高度准确的 LLM。互联网爬行数据 (通常来自 Common Crawl) 通常是最大的令牌来源。近期开放的 Common Crawl 数据集 (例如 FineWeb-Edu 和 DCLM) 展示了如何在相对较短的令牌范围内大幅提高基准精度。但是,实现此目标的代价是移除 90%的数据。这限制了长令牌水平训练的适用性,例如 Llama 3.1 的 15 万亿令牌。 Nemotron-CC 填补了这一空白,并展示了如何通过结合使用分类器集成、合成数据重述以及减少对启发式过滤器的依赖,将 Common Crawl 数据转换为比 Llama 3.1 8B 更好地训练 LLM 的高质量数据集。 成果 图 1 显示了为 1 万亿个令牌训练 8 亿个参数模型时的 MMLU 分数,该分数仅与训练数据的 73% 英文 Common Crawl 部分不同。与领先的开放式英文 Common Crawl 数据集 DCLM 相比,高质量子集 Nemotron-CC-HQ 可将 MMLU 增加约 5.6 倍。 图 1. 针对 1 万亿个令牌训练的 8 亿个参数模型的 MMLU 分数 此外,完整的 6.3 万亿 token 数据集与 MMLU 上的 DCLM 相匹配,但包含的唯一真实 token 是后者的四倍。这样一来,便可在漫长的令牌范围内实现有效训练:使用 15 万亿个令牌训练的 8 亿个参数模型 (其中 7.2 万亿个来自 Nemotron-CC) 优于 Llama 3.1 8B 模型:在 MMLU 上为 +5,在 ARC-Challenge 上为 +3.1,在 10 个不同的任务中平均为 +0.5。 主要见解 产生这些结果的一些关键见解包括: 集成基于模型的不同分类器有助于选择更大、更多样化的高质量 token 集。 重写可以有效减少低质量数据中的噪声和错误,并使用来自高质量数据的新鲜且独特的 token 生成各种变体,从而在下游任务中获得更好的结果。 禁用传统的非学习启发式过滤器可以在不影响准确性的情况下进一步提高高质量的 token 产量。 数据管护步骤 使用 NVIDIA NeMo Curator ,从 Common Crawl 中提取和清理数据,然后: 将其过滤为英语 执行全局模糊重复数据和精确的子字符串重复数据 利用基于模型的过滤器 (例如 DCLM、fineweb-edu) 进行质量分类 应用各种启发式和困惑性滤镜,进一步移除低质量数据 我们还利用合成数据生成流程来生成约 2 万亿个合成数据令牌。 包括合成数据生成流程在内的完整 recipe 将很快合并到 NVIDIA/NeMo-Curator GitHub 存储库中。如要接收更新,请将存储库标记为星。 结束语 Nemotron-CC 是一个开放、大型、高质量的英语 Common Crawl 数据集,支持在短标记和长标记范围内预训练高度准确的 LLM。未来,我们希望发布更多数据集,这些数据集是先进 LLM 预训练的关键要素,例如专门的数学预训练数据集。 下载数据集 Common Crawl 中。 使用 NeMo 策展人 整理自己的数据集。 详细了解“Nemotron-CC:将 Common Crawl 转换为经过优化的长地平线预训练数据集”(Nemotron – CC:Transforming Common Crawl into a Refined Long-Horizon Pretraining Dataset) 中的技术细节。 致谢 我们感谢 Common Crawl Foundation 托管此数据集。我们感谢 Pedro Ortiz Suarez 为改进论文提供的宝贵反馈,以及 Greg Lindahl 为改进数据格式和布局提供的帮助。
https://developer.nvidia.com/blog/build-a-generative-ai-medical-device-training-assistant-with-nvidia-nim-microservices/
Build a Generative AI Medical Device Training Assistant with NVIDIA NIM Microservices
Innovation in medical devices continues to accelerate, with a record number authorized by the FDA every year . When these new or updated devices are introduced to clinicians and patients, they require training to use them properly and safely. Once in use, clinicians or patients may need help troubleshooting issues. Medical devices are often accompanied by lengthy and technically complex Instructions for Use (IFU) manuals, which describe the correct use of the device. It can be difficult to find the right information quickly and training on a new device is a time-consuming task. Medical device representatives often provide support training, but may not be present to answer all questions in real time. These issues can cause delays in using medical devices and adopting newer technologies, and in some cases, lead to incorrect usage. Using generative AI for troubleshooting medical devices Retrieval-augmented generation (RAG) uses deep learning models, including large language models (LLMs),  for efficient search and retrieval of information using natural language. Using RAG, users can receive easy-to-understand instructions for specific questions in a large text corpus, such as in an IFU. Speech AI models, such as automatic speech recognition (ASR) and text-to-speech (TTS) models, enable users to communicate with these advanced generative AI workflows using their voice, which is important in sterile environments like the operating room. NVIDIA NIM inference microservices are GPU-optimized and highly performant containers for these models that provide the lowest total cost of ownership and the best inference optimization for the latest models. By integrating RAG and speech AI with the efficiency and simplicity of deploying NIM microservices, companies developing advanced medical devices can provide clinicians with accurate, hands-free answers in real time. Figure 1. The chatbot user interface of the medical device training assistant A medical device training assistant built with NIM microservices In this tutorial, we build a RAG pipeline with optional speech capabilities to answer questions about a medical device using its IFU. The code used is available on GitHub . We use the following NIM microservices in our RAG pipeline. You have the flexibility to change the components in the pipeline to other NIM microservices for different models: Llama3 70B Instruct (meta/llama3-70b-instruct): A large language model that generates the answer to the user question based on the retrieved text. NV-EmbedQA-e5-v5 (nvidia/nv-embedqa-e5-v5): An embedding model that embeds the text chunks from the IFU and the queries from the user. NV-RerankQA-Mistral-4b-v3 (nvidia/nv-rerankqa/mistral-4b-v3): A reranking model that reranks the retrieved text chunks for the text generation step by the LLM. RIVA ASR : An automatic speech recognition model that transcribes the user’s speech query into text for the model. RIVA TTS : The text-to-speech model that outputs the audio of the response from the LLM. RAG has two steps: document ingestion, then retrieval and generation of answers. These steps and the associated NIM microservices can be found in the reference architecture diagram in Figure 2. Figure 2. The reference architecture shows document ingestion and retrieval Using NVIDIA NIM You can access NIM microservices by signing up for free API credits on the API Catalog at build.nvidia.com or by deploying on your own compute infrastructure. In this tutorial, we use the API Catalog endpoints. 
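For orientation, a call to one of these hosted endpoints can go through the OpenAI-compatible interface. The sketch below assumes an NVIDIA_API_KEY environment variable holding the key from build.nvidia.com and uses the Llama3 70B Instruct model listed above; the question text is purely illustrative.

import os
from openai import OpenAI

# The API Catalog exposes an OpenAI-compatible endpoint for hosted NIM microservices.
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ["NVIDIA_API_KEY"],
)

completion = client.chat.completions.create(
    model="meta/llama3-70b-instruct",
    messages=[{"role": "user", "content": "How do I prime the infusion pump before first use?"}],
    temperature=0.2,
    max_tokens=512,
)
print(completion.choices[0].message.content)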
More information on using NIM microservices, finding your API key, and other prerequisites can be found on GitHub . Follow these steps to build a RAG pipeline with optional speech for answering medical device questions using its IFU. Build and start the containers See the docker compose files we’ve created to launch the containers with the NIM microservices and vector database. Detailed instructions and code can be accessed on GitHub. Ingest the device manual Navigate your browser to upload your IFU in the “Knowledge Base” tab as shown in Figure 3. Figure 3. The document ingestion page of the medical device training assistant Retrieve and generate answers Navigate to the “Converse” tab to begin the conversation with the IFU (Figure 1). Make sure to click “Use Knowledge Base” to use the IFU as a knowledge resource. To use speech to converse, click the microphone next to the text input area, and the RIVA ASR model will transcribe your question. To receive speech as an output, click the “Enable TTS output”. More information about using and troubleshooting the UI is on the GitHub documentation . Evaluate on a custom dataset Evaluate the performance of the RAG pipeline using a custom dataset of questions and automated RAGAS metrics. RAGAS metrics evaluate the performance of both the retriever and generator and are a common method for evaluating RAG pipelines in an automated fashion. Instructions on how to use the evaluation script are on GitHub . Getting started To get started with this workflow, visit the GenerativeAIExamples GitHub repository , which contains all of the code used in this tutorial as well as extensive documentation. For more information on NIM microservices, you can learn more from the official NIM documentation and ask questions on our NVIDIA Developer NIM Forum .
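To illustrate the automated RAGAS evaluation mentioned above, here is a minimal, version-dependent sketch. Metric names and expected column names vary between RAGAS releases, and RAGAS needs a judge LLM and embedding model configured separately; the evaluation script in the GitHub repository remains the authoritative reference.

from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, faithfulness

# One example record: the question, the pipeline's answer, and the retrieved IFU passages.
records = {
    "question": ["How should the device be cleaned between uses?"],
    "answer": ["Wipe the housing with a lint-free cloth moistened with 70% isopropyl alcohol."],
    "contexts": [["Cleaning: wipe the housing with a lint-free cloth and 70% isopropyl alcohol."]],
}

# RAGAS scores these with a judge LLM and embeddings configured elsewhere
# (for example via environment variables or explicit llm/embeddings arguments).
results = evaluate(Dataset.from_dict(records), metrics=[faithfulness, answer_relevancy])
print(results)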
https://developer.nvidia.com/zh-cn/blog/build-a-generative-ai-medical-device-training-assistant-with-nvidia-nim-microservices/
利用 NVIDIA NIM 微服务构建生成式 AI 医疗设备训练助手
医疗设备创新不断加速,每年获得 FDA 批准的数量都创下新高。向临床医生和患者介绍这些全新或更新的设备时,他们需要接受培训,以正确、安全地使用这些设备。 使用后,临床医生或患者可能需要获得排查问题的帮助。医疗设备通常附有冗长且技术复杂的 Instructions for Use (IFU) 手册,这些手册描述了设备的正确用法。快速找到正确的信息可能很困难,在新设备上训练是一项耗时的任务。医疗设备代表通常会提供支持培训,但可能无法实时回答所有问题。这些问题可能会延迟医疗设备的使用和较新技术的采用,在某些情况下,还会导致错误使用。 使用生成式 AI 对医疗设备进行故障排除 检索增强型生成 (RAG) 使用深度学习模型 (包括大语言模型 (LLMs)),使用自然语言高效搜索和检索信息。借助 RAG,用户可以接收大型文本语料库 (例如 IFU) 中特定问题的易于理解的说明。语音 AI 模型,如自动语音识别 (ASR) 和文本转语音 (TTS) 模型,使用户能够使用语音与这些先进的生成式 AI 工作流进行通信,这在手术室等无菌环境中至关重要。 NVIDIA NIM 推理微服务是为这些模型优化的 GPU 高性能容器,可为最新模型提供最低的总体拥有成本和最佳的推理优化。通过将 RAG 和语音 AI 与部署 NIM 微服务的效率和简便性相集成,开发先进医疗设备的公司可以实时为临床医生提供准确、免持的答案。 图 1、医疗设备训练助手的聊天机器人用户界面 使用 NIM 微服务构建的医疗设备训练助手 在本教程中,我们构建了具有可选语音功能的 RAG 工作流,以回答有关使用其 IFU 的医疗设备的问题。所使用的代码可在 GitHub 上获取。 我们在 RAG 管道中使用以下 NIM 微服务。您可以灵活地将管道中的组件更改为适用于不同模型的其他 NIM 微服务: Llama3 70B Instruct (meta/llama3-70b-instruct) :一种大语言模型,可根据检索到的文本生成用户问题的答案。 NV-EmbedQA-e5-v5 (nvidia/nv-embedqa-e5-v5):嵌入来自 IFU 的文本块和用户查询的嵌入模型。 NV-RerankQA-Mistral-4b-v3 (nvidia/nv-rerankqa/mistral-4b-v3) :一种重新排序模型,用于对检索到的文本块进行重新排序,以便由 LLM 执行文本生成步骤。 RIVA ASR:自动语音识别模型 ,可将用户的语音查询转录为模型的文本。 RIVA TTS:输出 LLM 响应音频的文本转语音模型。 RAG 有两个步骤:文档摄取,然后检索和生成答案。可以在图 2 的参考架构图中找到这些步骤和相关的 NIM 微服务。 图 2、参考架构显示文档提取和检索 使用 NVIDIA NIM 您可以访问 NVIDIA 的 API Catalog( build.nvidia.com ),注册免费的 API 积分,或在自己的计算基础设施上部署,以访问 NIM 微服务。 在本教程中,我们使用 API Catalog 端点。有关使用 NIM 微服务、查找 API 密钥以及其他预备知识的更多信息,请访问 GitHub 。 按照以下步骤构建具有可选语音的 RAG 工作流,以便使用其 IFU 回答医疗设备问题。 构建和启动容器 查看我们为使用 NIM 微服务和向量数据库启动容器而创建的 Docker Compose 文件。您可以在 GitHub 上获取 详细说明 和代码。 提取设备手册 导航浏览器,在“Knowledge Base”(知识库)选项卡中上传您的 IFU,如图 3 所示。 图 3、医疗设备训练助手的文档提取页面 检索并生成答案导航 至“Converse”选项卡,开始与 IFU 对话(图 1)。请务必单击“Use Knowledge Base”,将 IFU 用作知识资源。要使用语音进行交谈,请单击文本输入区域旁边的麦克风,RIVA ASR 模型将转录您的问 题。要接收语音输出,请单击“Enable TTS output”。更多信息关于使用和故障排除的 UI 是在 Github 文档。 在自定义数据集上进行评估使用自定义问题数据集和自动化 RAGAS 指标评估 RAG 工作流的性能。RAGAS 指标评估检索器和生成器的性能,是以自动化方式评估 RAG 工作流的常用方法。有关如何使用评估脚本的说明,请参阅 GitHub 。 开始使用 要开始使用此工作流,请访问 GenerativeAIExamples GitHub 资源库 ,其中包含本教程中使用的所有代码以及大量文档。 有关 NIM 微服务的更多信息,您可以参阅官方 NIM 文档 了解详情,并在我们的 NVIDIA Developer NIM 论坛 上提出问题。
https://developer.nvidia.com/blog/sandboxing-agentic-ai-workflows-with-webassembly/
Sandboxing Agentic AI Workflows with WebAssembly
Agentic AI workflows often involve the execution of large language model (LLM) -generated code to perform tasks like creating data visualizations. However, this code should be sanitized and executed in a safe environment to mitigate risks from prompt injection and errors in the returned code. Sanitizing Python with regular expressions and restricted runtimes is insufficient, and hypervisor isolation with virtual machines is development and resource intensive. This post illustrates how you can gain the benefits of browser sandboxing for operating system and user isolation using WebAssembly (Wasm) , a binary instruction format for a stack-based virtual machine. This increases the security of your application without significant overhead. Securing agentic tool use One of the recent evolutions in LLM application development is exposing tools — functions, applications, or APIs that the LLM can call and use the response from. For example, if the application needs to know the weather in a specific location, it could call a weather API and use the results to craft an appropriate response. Python code execution is a powerful tool for extending LLM applications. LLMs are adept at writing Python code, and by executing that code, they can execute more advanced workflows such as data visualization. Extended with Python function calling, a text-based LLM would have the ability to generate image plots for users. However, it’s difficult to dynamically analyze the LLM-generated Python to ensure it meets the intended specification and doesn’t introduce broader application security risks. If you find yourself executing LLM-generated Python to extend your agentic application, this post is for you. Structuring the agent workflow In the simplest agentic workflow, your LLM may generate Python that you eventually pass to eval . For example, a prompt equivalent to generate the python code to make a bar chart in plotly would return import plotly.graph_objects as go\n\nfig = go.Figure(data=go.Bar(x=["A", "B", "C"], y=[10, 20, 15]))\nfig.update_layout(title="Bar Chart Example") . Your agent would pass this into eval to generate the plot as shown in Figure 1. Figure 1. The simplest agentic workflow between the user, their machine, and the AI model used to generate a plot with Python In Step 1, the user provides their prompt to the application. In Step 2, the application provides any additional prompt context and augmentation to the LLM. In Step 3, the LLM returns the code to be executed by the tool-calling agent. In Step 4, that code is executed on the host operating system to generate the plot, which is returned to the user in Step 5. Notice that the eval is performed on the server in Step 4, a significant security risk. It is prudent to implement controls to mitigate that risk to the server, application, and users. The easiest controls to implement are at the application layer, and filtering and sanitization are often done with regular expressions and restricted Python runtimes. However, these application-layer mitigations are rarely sufficient and can usually be bypassed. For instance, a regular expression may attempt to exclude calls to os but miss subprocess or not identify that there are ways to eventually reach those functions from various dependency internals. A more robust solution might be to only execute the LLM-generated Python in a micro virtual machine like Firecracker , but this is resource and engineering intensive. As an alternative, consider shifting the execution into the user’s browser. 
Browsers use sandboxes to isolate web page code and scripts from the user’s infrastructure. The browser sandbox is what prevents webpages from accessing local filesystems or viewing webcams without authorization, for example. Using Python in the browser Pyodide is a port of CPython into Wasm to create a sandbox that may be used inside existing JavaScript virtual machines. This means that we can execute Python client-side to inherit all of the security benefits of a browser sandbox. By designing the application to serve HTML with the Pyodide runtime and the LLM-generated code as shown in Figure 2, application developers can shift the execution into the users’ browsers, gaining the security of sandboxing and preventing any cross-user contamination. Figure 2. An agentic workflow modified to use Pyodide to shift the plot generation from the server into the user’s browser The main difference in this architecture is that instead of executing the tool on the application server, the application instead returns HTML to the user in Step 4. When the user views that HTML in their browser, Pyodide executes the LLM-provided Python in the sandbox and renders the visualization. This modification should require minimal change in prompting strategies, as the LLM-generated code can usually be templated into a static HTML document. For example, this function takes the LLM-provided code, and prepares it before dispatching it for execution with executeCode . window.runCode = async (LLMCode) => { try { console.log('Starting code execution process...'); const pyodide = await initPyodide(); const wrappedCode = [ 'import plotly.graph_objects as go', 'import json', 'try:', ' ' + LLMCode.replace(/\\n/g, '\\n '), ' if "fig" in locals():', ' plotJson = fig.to_json()', ' else:', ' raise Exception("No \\'fig\\' variable found after code execution")', 'except Exception as e:', ' print(f"Python execution error: {str(e)}")', ' raise' ].join('\\n'); await executeCode(pyodide, wrappedCode); ... If your code has Python dependencies (like Plotly ), you can install them with micropip in the client-side Javascript. Micropip supports python wheels from PyPI, including many with C extensions. await pyodide.loadPackage("micropip"); const micropip = pyodide.pyimport("micropip"); await micropip.install('plotly'); Improving application security with Wasm Imagine a scenario where the LLM returns malicious code, either as a result of prompt injection or error. In the case of the simplest agentic workflow, the call to eval results in the compromise of the application, potentially impacting the host operating system and other users as shown in Figure 3. Figure 3. The malicious code is executed on the server, impacting that service and its users However, after applying the Wasm flow, there are two possible cases illustrated in Figure 4. First, the application may throw an error because the malicious Python code cannot be executed in the narrowly scoped Pyodide runtime (that is, a missing dependency). Error: Traceback (most recent call last): File "/lib/python311.zip/_pyodide/_base.py", line 499, in eval_code .run(globals, locals) ^^^^^^^^^^^^^^^^^^^^ File "/lib/python311.zip/_pyodide/_base.py", line 340, in run coroutine = eval(self.code, globals, locals) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<exec>", line 2, in <module> ModuleNotFoundError: No module named 'pymetasploit3' Second, if the code does execute, it is restricted to the browser sandbox which greatly limits any potential impact to the end user’s device. Figure 4. 
The malicious code propagates into the HTML, but either cannot be executed on the user’s device or has minimal impact due to the browser sandbox. In either case, using Pyodide improves security controls for the querying user while reducing risk to application resources and adjacent users. Get started Sandboxing LLM-generated Python with WebAssembly offers a convenient approach, requiring minimal changes to existing prompts and architectures. It is cost-effective because it reduces server-side compute requirements, and it provides both host and user isolation, improving the security of the service and its users. It is more robust than regular expressions or restricted Python libraries, and lighter weight than containers or virtual machines. To get started improving application security for your agentic workflows using Wasm, check out this example on GitHub. Learn more about AI agents and agentic workflows.
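To make the server-side change from Figure 2 concrete, the sketch below templates LLM-generated Python into an HTML page that executes it with Pyodide in the user's browser. It is a simplified stand-in for the GitHub example: the Pyodide CDN version and the target element are placeholders, and the plot-rendering logic is omitted.

import json

# Pin the Pyodide build you have validated; the version below is a placeholder.
PYODIDE_CDN = "https://cdn.jsdelivr.net/pyodide/v0.26.1/full/pyodide.js"

def render_sandboxed_page(llm_code: str) -> str:
    """Return HTML that runs LLM-generated Python in the browser instead of eval() on the server."""
    escaped = json.dumps(llm_code)  # embed the Python source safely as a JS string literal
    return f"""<!DOCTYPE html>
<html>
  <body>
    <div id="plot"></div>  <!-- target for rendering logic, omitted in this sketch -->
    <script src="{PYODIDE_CDN}"></script>
    <script>
      async function main() {{
        const pyodide = await loadPyodide();
        await pyodide.loadPackage("micropip");
        const micropip = pyodide.pyimport("micropip");
        await micropip.install("plotly");
        // Execution stays inside the browser sandbox; failures never touch the server.
        await pyodide.runPythonAsync({escaped});
      }}
      main();
    </script>
  </body>
</html>"""

# In Step 4 of Figure 2, the application would return render_sandboxed_page(llm_generated_code).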
https://developer.nvidia.com/zh-cn/blog/sandboxing-agentic-ai-workflows-with-webassembly/
借助 WebAssembly 实现沙箱 Agentic AI 工作流
代理 AI 工作流通常 涉及执行由 大语言模型 (LLM) 生成的代码,以执行创建数据可视化等任务。但是,此代码应在安全环境中清理和执行,以降低提示 注入的风险 和返回代码中的错误。使用正则表达式和受限运行时清理 Python 是不够的,而且虚拟机的 Hypervisor 隔离需要大量的开发和资源。 本文将介绍如何使用 WebAssembly (Wasm) (一种基于堆栈的虚拟机的二进制指令格式) ,利用浏览器沙盒实现操作系统和用户隔离。这提高了应用的安全性,且不会产生重大开销。 确保代理工具的使用 LLM 应用开发的近期变化之一是公开工具,即 LLM 可以调用并使用响应的函数、应用或 API。例如,如果应用需要了解特定地点的天气,它可以调用天气 API,并使用结果制定适当的响应。 Python 代码执行是用于扩展 LLM 应用的强大工具。LLM 擅长编写 Python 代码,通过执行这些代码,他们可以执行更高级的工作流程,例如数据可视化。通过扩展 Python 函数调用,基于文本的 LLM 将能够为用户生成图像绘图。但是,很难动态分析 LLM 生成的 Python,以确保其满足预期规范,并且不会引入更广泛的应用程序安全风险。如果您正在执行由 LLM 生成的 Python 来扩展代理应用程序,本文就是为您准备的。 构建智能体工作流程 在最简单的代理工作流中,您的 LLM 可能会生成 Python,并最终传递给 eval 。例如,相当于 generate the python code to make a bar chart in plotly 的提示符将返回 import plotly.graph_objects as go\n\nfig = go.Figure(data=go.Bar(x=["A", "B", "C"], y=[10, 20, 15]))\nfig.update_layout(title="Bar Chart Example") 。您的代理会将其传递给 eval ,以生成如图 1 所示的图形。 图 1、用户、机器和 AI 模型之间最简单的代理工作流,用于使用 Python 生成图形 在第 1 步中,用户向应用程序提供提示。在第 2 步中,应用程序将为 LLM 提供任何其他提示上下文和增强功能。在第 3 步中,LLM 返回由工具调用代理执行的代码。在第 4 步中,在主机操作系统上执行该代码以生成图形,并在第 5 步中返回给用户。 请注意,第 4 步中的 eval 在服务器上执行,存在重大安全风险。为谨慎起见,应实施控制以降低服务器、应用程序和用户面临的风险。最简单的控制是在应用层,过滤和清理通常使用正则表达式和受限的 Python 运行时完成。但是,这些应用层缓解措施远远不够,通常可以绕过。 例如,正则表达式可能会尝试排除对 os 的调用,但错过 subprocess 或不确定是否有方法最终从各种依赖项内部访问这些函数。更可靠的解决方案可能是仅在微型虚拟机 (如 Firecracker ) 中执行 LLM 生成的 Python,但这需要大量资源和工程。 或者,可以考虑将执行转移到用户的浏览器中。浏览器使用沙盒将网页代码和脚本与用户的基础设施隔离开来。例如,浏览器沙盒可防止网页访问本地文件系统或未经授权查看网络摄像头。 在浏览器中使用 Python Pyodide 是将 CPython 移植到 Wasm 中,以创建可在现有 JavaScript 虚拟机中使用的沙盒。这意味着我们可以执行 Python 客户端,以继承浏览器沙盒的所有安全优势。 如图 2 所示,通过设计应用以提供具有 Pyodide 运行时和 LLM 生成代码的 HTML,应用开发者可以将执行转移到用户的浏览器中,从而获得沙盒的安全性并防止任何跨用户污染。 图 2、经过修改的代理工作流,可使用 Pyodide 将图形生成从服务器转移到用户浏览器 此架构的主要区别在于,应用程序在第 4 步中将 HTML 返回给用户,而不是在应用服务器上执行该工具。当用户在浏览器中查看该 HTML 时,Pyodide 会在沙盒中执行 LLM 提供的 Python 并渲染可视化效果。 由于 LLM 生成的代码通常可以模板化为静态 HTML 文档,因此此类修改应尽可能减少提示策略的更改。例如,此函数会接收 LLM 提供的代码,并在发送代码以供使用 executeCode 执行之前进行准备。 window.runCode = async (LLMCode) => { try { console.log('Starting code execution process...'); const pyodide = await initPyodide(); const wrappedCode = [ 'import plotly.graph_objects as go', 'import json', 'try:', ' ' + LLMCode.replace(/\\n/g, '\\n '), ' if "fig" in locals():', ' plotJson = fig.to_json()', ' else:', ' raise Exception("No \\'fig\\' variable found after code execution")', 'except Exception as e:', ' print(f"Python execution error: {str(e)}")', ' raise' ].join('\\n'); await executeCode(pyodide, wrappedCode); ... 
如果您的代码具有 Python 依赖项(例如 Plotly ),您可以使用 micropip 在客户端 Javascript 中进行安装。Micropip 支持 PyPI 中的 Python wheel,包括许多具有 C 扩展的 wheel。 await pyodide.loadPackage("micropip"); const micropip = pyodide.pyimport("micropip"); await micropip.install('plotly'); 使用 Wasm 提高应用安全性 想象一下,在这种情况下,LLM 由于提示注入或错误而返回恶意代码。在最简单的代理工作流中,对 eval 的调用会损害应用程序,并可能影响主机操作系统和其他用户,如图 3 所示。 图 3、恶意代码在服务器上执行,会影响该服务及其用户 但是,应用 Wasm 流后,图 4 显示了两种可能的情况。首先,由于无法在狭义范围的 Pyodide 运行时 (即缺失依赖项) 中执行恶意 Python 代码,因此应用程序可能会引发错误。 Error: Traceback (most recent call last): File "/lib/python311.zip/_pyodide/_base.py", line 499, in eval_code .run(globals, locals) ^^^^^^^^^^^^^^^^^^^^ File "/lib/python311.zip/_pyodide/_base.py", line 340, in run coroutine = eval(self.code, globals, locals) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<exec>", line 2, in <module> ModuleNotFoundError: No module named 'pymetasploit3' 其次,如果代码确实执行,则仅限于浏览器沙盒,这极大地限制了对最终用户设备的任何潜在影响。 图 4、恶意代码传播到 HTML 中,但由于浏览器沙盒的原因,无法在用户的设备上执行或影响极小 在这两种情况下,使用 Pyodide 可改善查询用户的安全控制,同时降低应用资源和相邻用户面临的风险。 开始使用 使用 WebAssembly 将 LLM 生成的 Python 沙箱化提供了一种便捷的方法,只需对现有提示和架构进行少量更改即可。它通过降低计算需求实现成本效益,并提供主机和用户隔离,同时提高服务及其用户的安全性。它比正则表达式或受限 Python 库更稳健,比容器或虚拟机更轻。 要开始使用 Wasm 提高代理工作流的应用程序安全性,请查看 GitHub 上的此 示例 。详细了解 AI 代理和代理工作流 。
https://developer.nvidia.com/blog/three-building-blocks-for-creating-ai-virtual-assistants-for-customer-service-with-an-nvidia-nim-agent-blueprint/
Three Building Blocks for Creating AI Virtual Assistants for Customer Service with an NVIDIA AI Blueprint
In today’s fast-paced business environment, providing exceptional customer service is no longer just a nice-to-have—it’s a necessity. Whether addressing technical issues, resolving billing questions, or providing service updates, customers expect quick, accurate, and personalized responses at their convenience. However, achieving this level of service comes with significant challenges. Legacy approaches, such as static scripts or manual processes, often fall short when it comes to delivering personalized and real-time support. Additionally, many customer service operations rely on sensitive and fragmented data, which is subject to strict data governance and privacy regulations. With the rise of generative AI, companies aim to revolutionize customer service by enhancing operational efficiency, cutting costs, and maximizing ROI. Integrating AI into existing systems presents challenges related to transparency, accuracy, and security, which can impede adoption and disrupt workflows. To overcome these hurdles, companies are leveraging generative AI-powered virtual assistants to manage a wide range of tasks, ultimately improving response times and freeing up resources. This post outlines how developers can use the NVIDIA AI Blueprint for AI virtual assistants to scale operations with generative AI. By leveraging this information, including sample code, businesses can meet the growing demands for exceptional customer service while ensuring data integrity and governance. Whether improving existing systems or creating new ones, this blueprint empowers teams to meet customer needs with efficient and meaningful interactions. Smarter AI virtual assistants with an AI query engine using retrieval-augmented generation When building an AI virtual assistant, it’s important to align with the unique use case requirements, institutional knowledge, and needs of the organization. Traditional bots, however, often rely on rigid frameworks and outdated methods that struggle to meet the evolving demands of today’s customer service landscape. Across every industry, AI-based assistants can be transformational. For example, telecommunications companies, and the majority of retail and service providers, can use AI virtual assistants to enhance customer experience by offering support 24 hours a day, 7 days a week while handling a wide range of customer queries in multiple languages and providing dynamic, personalized interactions that streamline troubleshooting and account management. This helps reduce wait times and ensures consistent service across diverse customer needs. Another example is within the healthcare insurance payor industry, where ensuring a positive member experience is critical. Virtual assistants enhance this experience by providing personalized support to members, addressing their claims, coverage inquiries, benefits, and payment issues, all while ensuring compliance with healthcare regulations. This also helps reduce the administrative burden on healthcare workers. With the NVIDIA AI platform, organizations can create an AI query engine that uses retrieval-augmented generation (RAG) to connect AI applications to enterprise data. The AI virtual assistant blueprint enables developers to quickly get started building solutions that provide enhanced customer experiences. It is built using the following NVIDIA NIM microservices: NVIDIA NIM for LLM: Brings the power of state-of-the-art large language models (LLMs) to applications, providing unmatched natural language processing with remarkable efficiency. 
Llama 3.1 70B Instruct NIM : Powers complex conversations with superior contextual understanding, reasoning, and text generation. NVIDIA NeMo Retriever NIM: This collection provides easy access to state-of-the-art models that serve as foundational building blocks for RAG pipelines. These pipelines, when integrated into virtual assistant solutions, enable seamless access to enterprise data, unlocking institutional knowledge via fast, accurate, and scalable answers. NeMo Retriever Embedding NIM : Boosts text question-answering retrieval performance, providing high-quality embeddings for the downstream virtual assistant. NeMo Retriever Reranking NIM : Enhances the retrieval performance further with a fine-tuned reranker, finding the most relevant passages to provide as context when querying an LLM. The blueprint is designed to integrate seamlessly with existing customer service applications without breaking information security mandates. Thanks to the portability of NVIDIA NIM, organizations can integrate data wherever it resides. By bringing generative AI to the data, this architecture enables AI virtual assistants to provide more personalized experiences tailored to each customer by leveraging their unique profiles, user interaction histories, and other relevant data. A blueprint is a starting point that can be customized for an enterprise’s unique use case.  For example, integrate other NIM microservices, such as the Nemotron 4 Hindi 4B Instruct , to enable an AI virtual assistant to communicate in the local language. Other microservices can enable additional capabilities such as synthetic data generation and model fine-tuning to better align with your specific use case requirements. Give the AI virtual assistant a humanlike interface when connected to the digital human AI Blueprint. With the implementation of a RAG backend with proprietary data (both company and user profile and their specific data), the AI virtual assistant can engage in highly contextual conversations, addressing the specifics of each customer’s needs in real-time. Additionally, the solution operates securely within your existing governance frameworks, ensuring compliance with privacy and security protocols especially when working with sensitive data. Three building blocks for creating your own AI virtual assistant As a developer, you can build your own AI virtual assistant that retrieves the most relevant and up-to-date information, in real time, with ever-improving humanlike responses. Figure 1 shows the AI virtual assistant architecture diagram which includes three functional components. Figure 1. The NVIDIA AI Blueprint for AI virtual assistants 1. Data ingestion and retrieval pipeline Pipeline administrators use the ingestion pipeline to load structured and unstructured data into the databases. Examples of structured data include customer profiles, order history, and order status. Unstructured data includes product manuals, the product catalog, and supporting material such as FAQ documents. 2. AI agent The AI virtual assistant is the second functional component. Users interact with the virtual assistant through a user interface. An AI agent, implemented in the LangGraph agentic LLM programming framework, plans how to handle complex customer queries and solves recursively. The LangGraph agent uses the tool calling feature of the Llama 3.1 70B Instruct NIM to retrieve information from both the unstructured and structured data sources, then generates an accurate response. 
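As a rough illustration of this agent layer (a sketch of the general pattern, not the blueprint's implementation), a LangGraph tool-calling agent backed by the Llama 3.1 70B Instruct NIM could be wired up as follows. The package names reflect current LangChain/LangGraph integrations, and the two tools are placeholders for the blueprint's structured and unstructured data sources.

from langchain_core.tools import tool
from langchain_nvidia_ai_endpoints import ChatNVIDIA
from langgraph.prebuilt import create_react_agent

@tool
def lookup_order_status(order_id: str) -> str:
    """Placeholder for a query against the structured customer database."""
    return f"Order {order_id}: shipped, estimated delivery Friday."

@tool
def search_product_docs(query: str) -> str:
    """Placeholder for retrieval over the unstructured knowledge base (manuals, FAQs)."""
    return "FAQ: Returns are accepted within 30 days of delivery with proof of purchase."

# Assumes NVIDIA_API_KEY is set for the hosted endpoint, or base_url points at a self-hosted NIM.
llm = ChatNVIDIA(model="meta/llama-3.1-70b-instruct")
agent = create_react_agent(llm, [lookup_order_status, search_product_docs])

result = agent.invoke(
    {"messages": [("user", "Where is order 1234, and can I still return it?")]}
)
print(result["messages"][-1].content)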
The AI agent also uses short-term and long-term memory functions to enable multi-turn conversation history. The active conversation queries and responses are embedded so they can be retrieved later in the conversation as additional context. This allows more human-like interactions and eliminates the need for customers to repeat information they’ve already shared with the agent. Finally, at the end of the conversation, the AI agent summarizes the discussion along with a sentiment determination and stores the conversation history in the structured database. Subsequent interactions from the same user can be retrieved as additional context in future conversations. Call summarization and conversation history retrieval can reduce call time and improve customer experience. Sentiment determination can provide valuable insights to the customer service administrator regarding the agent’s effectiveness. 3. Operations pipeline The customer operations pipeline is the third functional component of the overall solution. This pipeline provides important information and insight to the customer service operators. Administrators can use the operations pipeline to review chat history, user feedback, sentiment analysis data, and call summaries. The analytics microservice, which leverages the Llama 3.1 70B Instruct NIM, can be used to generate analytics such as average call time, time to resolution, and customer satisfaction. The analytics are also leveraged as user feedback to retrain the LLM models to improve accuracy. You can find the complete example of how to get started with this Blueprint on the NVIDIA AI Blueprint GitHub repository. Get to production with NVIDIA partners NVIDIA consulting partners are helping enterprises adopt world-class AI virtual assistants built using NVIDIA accelerated computing and NVIDIA AI Enterprise software , which includes NeMo, NIM microservices, and AI Blueprints. Accenture The Accenture AI Refinery built on NVIDIA AI Foundry helps design autonomous, intent-driven customer interactions, enabling businesses to tailor the journey to the individual through innovative channels such as digital humans or interaction agents. Specific use cases can be tailored to meet the needs of each industry, for example, telco call centers, insurance policy advisors, pharmaceutical interactive agents or automotive dealer network agents. Deloitte Deloitte Frontline AI enhances the customer service experience with digital avatars and LLM agents built with NVIDIA AI Blueprints that are accelerated by NVIDIA technologies such as NVIDIA ACE, NVIDIA Omniverse, NVIDIA Riva, and NIM. Wipro Wipro Enterprise Generative AI (WeGA) Studio accelerates industry-specific use cases including contact center agents across healthcare, financial services, retail, and more. Tech Mahindra Tech Mahindra is leveraging the NVIDIA AI Blueprint for digital humans to build solutions for customer service. Using RAG and NVIDIA NeMo, the solution provides the ability for a trainee to stop an agent during a conversation by raising a hand to ask clarifying questions. The system is designed to connect with microservices on the backend with a refined learning management system) which can be deployed across many industry use cases. 
Infosys Infosys Cortex , part of Infosys Topaz , is an AI-driven customer engagement platform that integrates NVIDIA AI Blueprints and the NVIDIA NeMo, Riva, and ACE technologies for generative AI, speech AI, and digital human capabilities to deliver specialized and individualized, proactive, and on-demand assistance to every member of a customer service organization, consequently playing a pivotal role in enhancing customer experience, improving operational efficiency, and reducing costs. Tata Consultancy Services The Tata Consultancy Services (TCS) virtual agent, powered by NVIDIA NIM, and integrated with ServiceNow’s IT Virtual Agent is designed to optimize IT and HR support. This solution uses prompt-tuning and RAG to improve response times, accuracy, and provide multi-turn conversational capabilities. Benefits include reduced service desk costs, fewer support tickets, enhanced knowledge utilization, faster deployment, and a better overall employee and customer experience. Quantiphi Quantiphi is integrating NVIDIA AI Blueprints into its conversational AI solutions to enhance customer service with lifelike digital avatars. These state-of-the-art avatars, powered by NVIDIA Tokkio and ACE technologies, NVIDIA NIM microservices and NVIDIA NeMo , seamlessly integrate with existing enterprise applications, enhancing operations and customer experiences with increased realism. Fine-tuned NIM deployments for digital avatar workflows have proven to be highly cost-effective, reducing enterprise spending on tokens. SoftServe SoftServe Digital Concierge , accelerated by NVIDIA AI Blueprints and NVIDIA NIM microservices, uses NVIDIA ACE, NVIDIA Riva, and the NVIDIA Audio2Face NIM microservice to deliver a highly realistic virtual assistant. Thanks to the Character Creator tool, it delivers speech and facial expressions with remarkable accuracy and lifelike detail. With RAG capabilities from NVIDIA NeMo Retriever, SoftServe Digital Concierge can intelligently respond to customer queries by referencing context and delivering specific, up-to-date information. It simplifies complex queries into clear, concise answers and can also provide detailed explanations when needed. EXL EXL’s Smart Agent Assist offering is a contact center AI solution leveraging NVIDIA Riva, NVIDIA NeMo, and NVIDIA NIM microservices. EXL plans to augment their solution using the NVIDIA AI Blueprint for AI virtual agents. This week at NVIDIA AI Summit India , NVIDIA consulting partners announced a collaboration with NVIDIA to transform India into a Front Office for AI. Using NVIDIA technologies, these consulting giants can help customers tailor the customer service agent blueprint to build unique virtual assistants using their preferred AI model—including sovereign LLMs from India-based model makers—and run it in production efficiently on the infrastructure of their choice. Get started To try the blueprint for free, and to see system requirements, navigate to the Blueprint Card . To start building applications using those microservices, visit the NVIDIA API catalog . To sign in , you’ll be prompted to enter a personal or business email address to access different options for building with NIM. For more information, see the NVIDIA NIM FAQ . This post was originally published on 10/23/2024.
https://developer.nvidia.com/zh-cn/blog/three-building-blocks-for-creating-ai-virtual-assistants-for-customer-service-with-an-nvidia-nim-agent-blueprint/
借助 NVIDIA NIM 智能体蓝图创建客户服务 AI 虚拟助理的三大核心组件
在当今快节奏的商业环境中,提供卓越的客户服务已不再是一种好事情,而是一种必要条件。无论是解决技术问题、解决计费问题,还是提供服务更新,客户都期望在方便时得到快速、准确和个性化的响应。然而,实现这种服务水平面临重大挑战。 传统方法,例如静态脚本或手动流程,通常在提供个性化和实时支持方面存在不足。此外,许多客户服务运营依赖于敏感和零碎的数据,这些数据受严格的数据治理和隐私法规的约束。随着生成式 AI 的兴起,公司旨在通过提高运营效率、降低成本和最大限度地提高投资回报率(ROI)来革新客户服务。 将 AI 集成到现有系统会带来透明度、准确性和安全性方面的挑战,这些挑战可能会阻碍采用并中断工作流程。为此,公司正在利用由生成式 AI 提供支持的虚拟助理来管理各种任务,进而缩短响应时间并释放资源。 本文概述了开发者如何使用 AI 虚拟助理 NVIDIA NIM 智能体蓝图 (NVIDIA NIM Agent Blueprint)通过生成式 AI 扩展客户服务运营。通过利用这些信息(包括示例代码),企业可以满足日益增长的对卓越客户服务的需求,同时确保数据完整性和治理。无论是改进现有系统还是创建新系统,此蓝图都能使团队通过高效、有意义的交互满足客户需求。 更智能的 AI 虚拟助理,以及使用检索增强型生成技术的 AI 查询引擎。 构建 AI 虚拟助理时,务必要与组织的独特用例要求、机构知识和需求保持一致。然而,传统机器人通常依赖于刚性框架和过时的方法,难以满足当今客户服务环境不断变化的需求。 在各行各业,基于 AI 的助理都可以发挥变革性作用。例如,电信公司以及大多数零售和服务提供商可以使用 AI 虚拟助理来增强客户体验,方法是每周 7 天、每天 24 小时提供支持,同时使用多种语言处理各种客户查询,并提供动态的个性化互动,以简化故障排除和账户管理。这有助于减少等待时间,并确保针对不同客户需求提供一致的服务。 另一个例子是在医疗保险付款人行业,确保积极的会员体验至关重要。虚拟助理通过为会员提供个性化支持来增强这种体验,解决他们的索赔、保险查询、福利和付款问题,同时确保遵守医疗健康法规。这还有助于减轻医疗健康工作者的行政负担。 借助 NVIDIA AI 平台,组织可以创建 AI 查询引擎,该引擎使用 检索增强生成(RAG) 将 AI 应用与企业数据相连接。借助 AI 虚拟助理 NIM Agent Blueprint,开发者可以快速着手构建可提供增强客户体验的解决方案。它使用以下 NVIDIA NIM 微服务构建: NVIDIA NIM for LLM: 将先进的大语言模型 (LLM) 的强大功能引入应用,以非凡的效率提供出色的自然语言处理能力。 Llama 3.1 70B Instruct NIM :凭借出色的上下文理解、推理和文本生成能力,为复杂对话提供支持。 NVIDIA NeMo Retriever NIM: 通过此集合,您可以轻松访问作为 RAG 工作流基础模组的先进模型。这些工作流集成到虚拟助理解决方案时,可以无缝访问企业数据,通过快速、准确和可扩展的答案释放机构知识 NeMo 检索器嵌入(NIM) :提升文本问答检索性能,为下游虚拟助理提供高质量的嵌入。 NeMo 检索器重新排名 NIM :通过微调的重新排名器进一步增强检索性能,在查询 LLM 时找到要作为上下文提供的最相关的段落。 该蓝图旨在与现有的客户服务应用无缝集成,且不会破坏信息安全要求。得益于 NVIDIA NIM 的可移植性,组织可以在数据驻留的任何位置集成数据。通过将生成式 AI 引入数据,该架构使 AI 虚拟助理能够利用每位客户的独特配置文件、用户交互历史记录和其他相关数据,为其提供更个性化的体验。 蓝图是一个起点,可以根据企业的独特用例进行定制。例如,集成其他 NIM 微服务(例如 Nemotron 4 Hindi 4B Instruct ),使 AI 虚拟助理能够使用当地语言进行通信。其他微服务可以启用其他功能,例如合成数据生成和模型微调,以更好地满足您的特定用例需求。在将 AI 虚拟助理连接到数字人 NIM Agent 蓝图时,为其提供类似人类的界面。 通过实施包含专有数据(公司和用户资料及其特定数据)的 RAG 后端,AI 虚拟助理可以进行高度上下文相关的对话,实时满足每位客户的具体需求。此外,该解决方案可在现有的治理框架内安全运行,确保遵守隐私和安全协议,尤其是在处理敏感数据时。 用于创建自己的 AI 虚拟助手的三个构建块 作为开发者,您可以构建自己的 AI 虚拟助手,实时检索相关性极高的最新信息,并获得不断改进的人性化响应。图 1 显示了包含三个功能组件的 AI 虚拟助手架构图。 图 1. 
客户服务 NIM Agent Blueprint 的 AI 虚拟助手 1.数据提取和检索管道 工作流管理员使用提取工作流将结构化和非结构化数据加载到数据库中。结构化数据的示例包括客户档案、订单历史记录和订单状态。非结构化数据包括产品手册、产品目录和支持材料(例如 FAQ 文档)。 2.AI 智能体 AI 虚拟助理是第二个功能组件。用户通过用户界面与虚拟助理交互。在 LangGraph 智能体 LLM 编程框架中实施的 AI 智能体计划如何处理复杂的客户查询并递归地解决问题。LangGraph 智能体使用 Llama 3.1 70B Instruct NIM 的工具调用功能从非结构化和结构化数据源检索信息,然后生成准确的响应。 AI 智能体还使用短期和长期记忆功能来实现多回合对话历史记录。内置主动对话查询和响应,以便稍后在对话中作为附加上下文进行检索。这使得客户与智能体的交互更像人类,并消除了客户重复已与智能体共享的信息的需求。 最后,在对话结束时,AI 坐席会总结讨论内容以及情感判断,并将对话历史记录存储在结构化数据库中。来自同一用户的后续交互可以在未来对话中作为附加上下文进行检索。呼叫摘要和对话历史记录检索可以缩短通话时间并改善客户体验。情感判断可以为客户服务管理员提供有关坐席有效性的宝贵见解。 3.运营管线 客户运营流程是整个解决方案的第三个功能组件。该流程为客户服务操作员提供重要信息和见解。管理员可以使用运营流程来查看聊天记录、用户反馈、情感分析数据和通话摘要。分析微服务利用 Llama 3.1 70B Instruct NIM,可用于生成平均通话时间、解决问题所需时间和客户满意度等分析。分析还可用作用户反馈,重新训练 LLM 模型以提高准确性。 与 NVIDIA 合作伙伴一起投入生产 NVIDIA 咨询合作伙伴正在帮助企业采用使用 NVIDIA 加速计算和 NVIDIA AI Enterprise 软件 构建的世界级 AI 虚拟助手,其中包括 NeMo、NIM 微服务和 NIM Agent 蓝图。 Accenture Accenture AI Refinery 基于 NVIDIA AI Foundry 构建,可帮助设计自主、意图驱动的客户交互,使企业能够通过数字人或交互代理等创新渠道为个人定制旅程。具体的用例可以根据各个行业的需求进行定制,例如电信呼叫中心、保险政策顾问、制药互动代理或汽车经销商网络代理等。 Deloitte 德勤前线 AI 借助由 NVIDIA ACE、NVIDIA Omniverse、NVIDIA Riva 和 NIM 等 NVIDIA 技术加速的 NVIDIA NIM Agent 蓝图构建的数字虚拟形象和 LLM 代理,提升客户服务体验。 Wipro Wipro Enterprise Generative AI (WeGA) Studio 加速特定行业的用例,包括医疗健康、金融服务、零售等领域的联系中心代理。 Tech Mahindra Tech Mahindra 正在利用 NVIDIA 数字人 NIM Agent Blueprint 构建客户服务解决方案。借助 RAG 和 NVIDIA NeMo,该解决方案使学员能够在对话期间举起一只手提出澄清问题,从而阻止代理。该系统旨在通过改进的学习管理系统与后端微服务连接,该系统可部署在许多行业用例中。 Infosys Infosys Cortex 是 Infosys Topaz 的一部分,是一个 AI 驱动型客户互动平台,集成了 NVIDIA NIM Agent 蓝图以及 NVIDIA NeMo、Riva 和 ACE 技术,可实现生成式 AI、语音 AI 和数字人功能,为客户服务组织的每位成员提供专业、个性化、前瞻性和按需协助,从而在增强客户体验、提高运营效率和降低成本方面发挥关键作用。 Tata 咨询服务 Tata 咨询服务 (TCS) 虚拟代理由 NVIDIA NIM 提供支持,并与 ServiceNow 的 IT 虚拟代理集成,旨在优化 IT 和人力资源支持。该解决方案使用 prompt-tuning 和 RAG 来改善响应时间、提高准确性,并提供多回合对话功能。优势包括降低服务台成本、减少支持工单、提高知识利用率、加快部署速度,以及改善员工和客户的整体体验。 Quantiphi Quantiphi 正在将 NVIDIA NIM Agent 蓝图集成到其对话式 AI 解决方案中,通过逼真的数字虚拟形象增强客户服务。这些先进的虚拟形象由 NVIDIA Tokkio 和 NVIDIA ACE 技术、NVIDIA NIM 微服务和 NVIDIA NeMo 提供支持,可与现有的企业应用无缝集成,从而以更高的真实感增强运营和客户体验。事实证明,针对数字虚拟形象工作流程进行微调的 NVIDIA NIM 部署极具成本效益,可减少企业在令牌方面的支出。 SoftServe SoftServe Digital Concierge 由 NVIDIA NIM Agent 蓝图和 NVIDIA NIM 微服务加速,使用 NVIDIA ACE、NVIDIA Riva 和 NVIDIA Audio2Face NIM 微服务提供高度逼真的虚拟助手。借助 Character Creator 工具,它可以提供语音和面部表情,准确性和细节都非常逼真。 借助 NVIDIA NeMo Retriever 的 RAG 功能,SoftServe Digital Concierge 可以智能地回复客户查询,方法是参考上下文并提供特定、最新的信息。它可以将复杂的查询简化为清晰、简洁的答案,还可以在需要时提供详细的解释。 EXL EXL 的智能体助手产品是一种利用 NVIDIA Riva、NVIDIA NeMo 和 NVIDIA NIM 微服务的联络中心 AI 解决方案。EXL 计划使用 AI 虚拟助手 NVIDIA NIM Agent 蓝图来增强其解决方案。 在本周的 NVIDIA AI 峰会印度大会 上,NVIDIA 咨询合作伙伴宣布与 NVIDIA 合作,将印度转变为 AI 前沿办公室。使用 NVIDIA 技术,这些咨询巨头可以帮助客户定制客户服务代理蓝图,使用他们喜欢的 AI 模型(包括来自印度模型制造商的主权 LLM)构建独特的虚拟助理,并在他们选择的基础设施上高效运行。 开始使用 要免费试用蓝图并查看系统要求,请导航至 Blueprint Card 。您还可以 注册,以便在 AI 虚拟助手 NIM Agent Blueprint 准备下载时收到通知。 要开始使用这些微服务构建应用程序,请访问 NVIDIA API Catalog 。要 登录 ,系统会提示您输入个人或企业电子邮件地址,以访问使用 NIM 构建的不同选项。有关更多信息,请参阅 NVIDIA NIM 常见问题解答 。
https://developer.nvidia.com/blog/hymba-hybrid-head-architecture-boosts-small-language-model-performance/
Hymba Hybrid-Head Architecture Boosts Small Language Model Performance
Transformers, with their attention-based architecture, have become the dominant choice for language models (LMs) due to their strong performance, parallelization capabilities, and long-term recall through key-value (KV) caches. However, their quadratic computational cost and high memory demands pose efficiency challenges. In contrast, state space models (SSMs) like Mamba and Mamba-2 offer constant complexity and efficient hardware optimization but struggle with memory recall tasks, affecting their performance on general benchmarks. NVIDIA researchers recently proposed Hymba , a family of small language models (SLMs) featuring a hybrid-head parallel architecture that integrates transformer attention mechanisms with SSMs to achieve both enhanced efficiency and improved performance. In Hymba, attention heads provide high-resolution recall, while SSM heads enable efficient context summarization. The novel architecture of Hymba reveals several insights: Overhead in attention: Over 50% of attention computation can be replaced by cheaper SSM computation. Local attention dominance: Most global attention can be replaced by local attention without sacrificing performance on general and recall-intensive tasks, thanks to the global information summarized by SSM heads. KV cache redundancy: Key-value cache is highly correlated across heads and layers, so it can be shared across heads (group query attention) and layers (cross-layer KV cache sharing). Softmax attention limitation: Attention mechanisms are constrained to sum to one, limiting sparsity, and flexibility. We introduce learnable meta-tokens that are prepended to prompts, storing critical information and alleviating the “forced-to-attend” burden associated with attention mechanisms. This post shows that Hymba 1.5B performs favorably against state-of-the-art open-source models of similar size, including Llama 3.2 1B, OpenELM 1B, Phi 1.5, SmolLM2 1.7B, Danube2 1.8B, and Qwen2.5 1.5B. Compared to Transformer models of similar size, Hymba also achieves higher throughput and requires 10x less memory to store cache. Hymba 1.5B is released to the Hugging Face collection and GitHub . Hymba 1.5B performance Figure 1 compares Hymba 1.5B against sub-2B models (Llama 3.2 1B, OpenELM 1B, Phi 1.5, SmolLM2 1.7B, Danube2 1.8B, Qwen2.5 1.5B) in terms of average task accuracy, cache size (MB) relative to sequence length, and throughput (tok/sec). Figure 1. Performance comparison of Hymba 1.5B Base against sub-2B models In this set of experiments, the tasks include MMLU, ARC-C, ARC-E, PIQA, Hellaswag, Winogrande, and SQuAD-C. The throughput is measured on an NVIDIA A100 GPU with a sequence length of 8K and a batch size of 128 using PyTorch. For models encountering out of memory (OOM) issues during throughput measurement, the batch size was halved until the OOM is resolved to measure the maximal achievable throughput without OOM. Hymba model design SSMs such as Mamba were introduced to address the quadratic complexity and large inference-time KV cache issues of transformers. However, due to their low-resolution memory, SSMs struggle with memory recall and performance. To overcome these limitations, we propose a road map for developing efficient and high-performing small LMs in Table 1. 
Configuration | Commonsense reasoning (%) ↑ | Recall (%) ↑ | Throughput (token/sec) ↑ | Cache size (MB) ↓ | Design reason
Ablations on 300M model size and 100B training tokens:
Transformer (Llama) | 44.08 | 39.98 | 721.1 | 414.7 | Accurate recall while inefficient
State-space models (Mamba) | 42.98 | 19.23 | 4720.8 | 1.9 | Efficient while inaccurate recall
A. + Attention heads (sequential) | 44.07 | 45.16 | 776.3 | 156.3 | Enhance recall capabilities
B. + Multi-head heads (parallel) | 45.19 | 49.90 | 876.7 | 148.2 | Better balance of two modules
C. + Local / global attention | 44.56 | 48.79 | 2399.7 | 41.2 | Boost compute/cache efficiency
D. + KV cache sharing | 45.16 | 48.04 | 2756.5 | 39.4 | Cache efficiency
E. + Meta-tokens | 45.59 | 51.79 | 2695.8 | 40.0 | Learned memory initialization
Scaling to 1.5B model size and 1.5T training tokens:
F. + Size / data | 60.56 | 64.15 | 664.1 | 78.6 | Further boost task performance
G. + Extended context length (2K→8K) | 60.64 | 68.79 | 664.1 | 78.6 | Improve multishot and recall tasks
Table 1. Design road map of the Hymba model
Fused hybrid modules According to the ablation study, fusing attention and SSM heads in parallel within a hybrid-head module outperforms sequential stacking. Hymba therefore fuses attention and SSM heads in parallel within a hybrid-head module, enabling both head types to process the same information simultaneously. This architecture improves reasoning and recall accuracy. Figure 2. The hybrid-head module in Hymba Efficiency and KV cache optimization While attention heads improve task performance, they increase KV cache requirements and reduce throughput. To mitigate this, Hymba optimizes the hybrid-head module by combining local and global attention and employing cross-layer KV cache sharing. This improves throughput by 3x and reduces cache by almost 4x without sacrificing performance. Figure 3. Hymba model architecture Meta-tokens Meta-tokens are a set of 128 pretrained embeddings prepended to inputs, functioning as a learned cache initialization that enhances focus on relevant information. These tokens serve a dual purpose: mitigating attention drain by acting as backstop tokens that redistribute attention effectively, and encapsulating compressed world knowledge. Figure 4. Interpretation of Hymba from the memory aspect Model analysis This section presents an apples-to-apples comparison across different architectures under the same training settings. We then visualize the attention maps of SSM and attention heads in different pretrained models. Finally, we perform head importance analysis for Hymba through pruning. All the analyses in this section help to illustrate how and why the design choices for Hymba are effective. Apples-to-apples comparison We performed an apples-to-apples comparison of Hymba, pure Mamba2, Mamba2 with FFN, Llama3-style, and Samba-style (Mamba-FFN-Attn-FFN) architectures. All models have 1 billion parameters and are trained from scratch for 100 billion tokens from SmolLM-Corpus with exactly the same training recipe. All results are obtained through lm-evaluation-harness using a zero-shot setting on Hugging Face models. Hymba performs the best on commonsense reasoning as well as question answering and recall-intensive tasks. Table 2 compares various model architectures on language modeling and recall-intensive and commonsense reasoning tasks, with Hymba achieving strong performance across metrics. Hymba demonstrates the lowest perplexity in language tasks (18.62 for Wiki and 10.38 for LMB) and solid results in recall-intensive tasks, particularly in SWDE (54.29) and SQuAD-C (44.71), leading to the highest average score in this category (49.50).
Model | Language (PPL) ↓ | Recall intensive (%) ↑ | Commonsense reasoning (%) ↑
Mamba2 | 15.88 | 43.34 | 52.52
Mamba2 w/ FFN | 17.43 | 28.92 | 51.14
Llama3 | 16.19 | 47.33 | 52.82
Samba | 16.28 | 36.17 | 52.83
Hymba | 14.5 | 49.5 | 54.57
Table 2. Comparison of architectures trained on 100 billion tokens under the same settings
In commonsense reasoning and question answering, Hymba outperforms other models in most tasks, such as SIQA (31.76) and TruthfulQA (31.64), with an average score of 54.57, slightly above Llama3 and Mamba2. Overall, Hymba stands out as a balanced model, excelling in both efficiency and task performance across diverse categories.
Attention map visualization We further categorized elements in the attention map into four types:
Meta: Attention scores from all real tokens to meta-tokens. This category reflects the model’s preference for attending to meta-tokens. In attention maps, they are usually located in the first few columns (for example, 128 for Hymba) if a model has meta-tokens.
BOS: Attention scores from all real tokens to the beginning-of-sequence token. In the attention map, they are usually located in the first column right after the meta-tokens.
Self: Attention scores from all real tokens to themselves. In the attention map, they are usually located on the diagonal.
Cross: Attention scores from all real tokens to other real tokens. In the attention map, they are usually located in the off-diagonal area.
The attention pattern of Hymba is significantly different from that of vanilla Transformers. In vanilla Transformers, attention scores are more concentrated on BOS, which is consistent with the findings in Attention Sink. In addition, vanilla Transformers also have a higher proportion of Self attention scores. In Hymba, meta-tokens, attention heads, and SSM heads are complementary to each other, leading to a more balanced distribution of attention scores across different types of tokens. Specifically, meta-tokens offload the attention scores from BOS, enabling the model to focus more on the real tokens. SSM heads summarize the global context, which focuses more on current tokens (Self attention scores). Attention heads, on the other hand, pay less attention to Self and BOS tokens, and more attention to other tokens (that is, Cross attention scores). This suggests that the hybrid-head design of Hymba can effectively balance the attention distribution across different types of tokens, potentially leading to better performance.
Figure 5. Schematics of the attention map of Hymba as a combination of meta-tokens, sliding window attention, and Mamba contributions
Figure 6. Sum of the attention scores from different categories in Llama 3.2 3B and Hymba 1.5B
Heads importance analysis We analyzed the relative importance of attention and SSM heads in each layer by removing them and recording the final accuracy. Our analysis reveals the following:
The relative importance of attention/SSM heads in the same layer is input-adaptive and varies across tasks, suggesting that they can serve different roles when handling various inputs.
The SSM head in the first layer is critical for language modeling, and removing it causes a substantial accuracy drop to random guess levels.
Generally, removing one attention/SSM head results in an average accuracy drop of 0.24%/1.1% on Hellaswag, respectively.
Figure 7.
The achieved accuracy, measured using 1K samples from Hellaswag, after removing the attention or SSM heads in each layer Model architecture and training best practices This section outlines key architectural decisions and training methodologies for Hymba 1.5B Base and Hymba 1.5B Instruct. Model architecture Hybrid architecture: Mamba is great at summarization and usually focuses more closely on the current token, while attention is more precise and acts as snapshot memory. Combining them in parallel merges these benefits, but standard sequential fusion does not. We chose a 5:1 parameter ratio between SSM and attention heads. Sliding window attention: Full attention heads are preserved in three layers (first, last, and middle), with sliding window attention heads used in the remaining 90% of layers. Cross-layer KV cache sharing: Implemented between every two consecutive attention layers. It is done in addition to GQA KV cache sharing between heads. Meta-tokens: These 128 tokens are learnable with no supervision, helping to avoid entropy collapse problems in large language models (LLMs) and mitigate the attention sink phenomenon. Additionally, the model stores general knowledge in these tokens. Training best practices Pretraining: We opted for two-stage base model training. Stage 1 maintained a constant large learning rate and used less filtered large corpus data. Continuous learning rate decay was then performed to 1e-5 using high-quality data. This approach enables continuous training and resuming of Stage 1. Instruction fine-tuning: Instruct model tuning is performed in three stages. First, SFT-1 provides the model with strong reasoning abilities by training on code, math, function calling, role play, and other task-specific data. Second, SFT-2 teaches the model to follow human instructions. Finally, DPO is leveraged to align the model with human preferences and improve the model’s safety. Figure 8. Training pipeline adapted for the Hymba model family Performance and efficiency evaluation With only 1.5T pretraining tokens, the Hymba 1.5B model performs the best among all small LMs and achieves better throughput and cache efficiency than all transformer-based LMs. For example, when benchmarking against the strongest baseline, Qwen2.5, which is pretrained on 13x more tokens, Hymba 1.5B achieves a 1.55% average accuracy improvement, 1.41x throughput, and 2.90x cache efficiency. Compared to the strongest small LM trained on fewer than 2T tokens, namely h2o-danube2, our method achieves a 5.41% average accuracy improvement, 2.45x throughput, and 6.23x cache efficiency.
Model | # Params | Train tokens | Token per sec | Cache (MB) | MMLU 5-shot | ARC-E 0-shot | ARC-C 0-shot | PIQA 0-shot | Wino. 0-shot | Hella. 0-shot | SQuAD-C 1-shot | Avg
OpenELM-1 | 1.1B | 1.5T | 246 | 346 | 27.06 | 62.37 | 19.54 | 74.76 | 61.8 | 48.37 | 45.38 | 48.57
Rene v0.1 | 1.3B | 1.5T | 800 | 113 | 32.94 | 67.05 | 31.06 | 76.49 | 62.75 | 51.16 | 48.36 | 52.83
Phi 1.5 | 1.3B | 0.15T | 241 | 1573 | 42.56 | 76.18 | 44.71 | 76.56 | 72.85 | 48 | 30.09 | 55.85
SmolLM | 1.7B | 1T | 238 | 1573 | 27.06 | 76.47 | 43.43 | 75.79 | 60.93 | 49.58 | 45.81 | 54.15
Cosmo | 1.8B | 0.2T | 244 | 1573 | 26.1 | 62.42 | 32.94 | 71.76 | 55.8 | 42.9 | 38.51 | 47.2
h2o-danube2 | 1.8B | 2T | 271 | 492 | 40.05 | 70.66 | 33.19 | 76.01 | 66.93 | 53.7 | 49.03 | 55.65
Llama 3.2 1B | 1.2B | 9T | 535 | 262 | 32.12 | 65.53 | 31.39 | 74.43 | 60.69 | 47.72 | 40.18 | 50.29
Qwen 2.5 | 1.5B | 18T | 469 | 229 | 60.92 | 75.51 | 41.21 | 75.79 | 63.38 | 50.2 | 49.53 | 59.51
AMD OLMo | 1.2B | 1.3T | 387 | 1049 | 26.93 | 65.91 | 31.57 | 74.92 | 61.64 | 47.3 | 33.71 | 48.85
SmolLM2 | 1.7B | 11T | 238 | 1573 | 50.29 | 77.78 | 44.71 | 77.09 | 66.38 | 53.55 | 50.5 | 60.04
Llama 3.2 3B | 3.0B | 9T | 191 | 918 | 56.03 | 74.54 | 42.32 | 76.66 | 69.85 | 55.29 | 43.46 | 59.74
Hymba | 1.5B | 1.5T | 664 | 79 | 51.19 | 76.94 | 45.9 | 77.31 | 66.61 | 53.55 | 55.93 | 61.06
Table 3. Hymba 1.5B Base model results
Instructed models The Hymba 1.5B Instruct model achieves the highest performance on an average of all tasks, outperforming the previous state-of-the-art model, Qwen 2.5 Instruct, by around 2%. Specifically, Hymba 1.5B surpasses all other models in GSM8K/GPQA/BFCLv2 with a score of 58.76/31.03/46.40, respectively. These results indicate the superiority of Hymba 1.5B, particularly in areas requiring complex reasoning capabilities.
Model | # Params | MMLU ↑ | IFEval ↑ | GSM8K ↑ | GPQA ↑ | BFCLv2 ↑ | Avg. ↑
SmolLM | 1.7B | 27.80 | 25.16 | 1.36 | 25.67 | -* | 20.00
OpenELM | 1.1B | 25.65 | 6.25 | 56.03 | 21.62 | -* | 27.39
Llama 3.2 | 1.2B | 44.41 | 58.92 | 42.99 | 24.11 | 20.27 | 38.14
Qwen2.5 | 1.5B | 59.73 | 46.78 | 56.03 | 30.13 | 43.85 | 47.30
SmolLM2 | 1.7B | 49.11 | 55.06 | 47.68 | 29.24 | 22.83 | 40.78
Hymba 1.5B | 1.5B | 52.79 | 57.14 | 58.76 | 31.03 | 46.40 | 49.22
Table 4. Hymba 1.5B Instruct model results
Conclusion The new Hymba family of small LMs features a hybrid-head architecture that combines the high-resolution recall capabilities of attention heads with the efficient context summarization of SSM heads. To further optimize the performance of Hymba, learnable meta-tokens are introduced to act as a learned cache for both attention and SSM heads, enhancing the model’s focus on salient information. Through the road map of Hymba, comprehensive evaluations, and ablation studies, Hymba sets new state-of-the-art performance across a wide range of tasks, achieving superior results in both accuracy and efficiency. Additionally, this work provides valuable insights into the advantages of hybrid-head architectures, offering a promising direction for future research in efficient LMs. Learn more about Hymba 1.5B Base and Hymba 1.5B Instruct . Acknowledgments This work would not have been possible without contributions from many people at NVIDIA, including Wonmin Byeon, Zijia Chen, Ameya Sunil Mahabaleshwarkar, Shih-Yang Liu, Matthijs Van Keirsbilck, Min-Hung Chen, Yoshi Suhara, Nikolaus Binder, Hanah Zhang, Maksim Khadkevich, Yingyan Celine Lin, Jan Kautz, Pavlo Molchanov, and Nathan Horrocks.
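For readers who want to try the released checkpoints from the Hugging Face collection mentioned above, the sketch below loads an instruct checkpoint with Hugging Face Transformers. The repository ID and the need for trust_remote_code are assumptions about how the models are published; check the model card for the exact usage the authors recommend.

```python
# Minimal sketch: loading and prompting a Hymba instruct checkpoint from Hugging Face.
# The repo ID and trust_remote_code flag are assumptions; consult the model card
# in the Hugging Face collection for the recommended settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "nvidia/Hymba-1.5B-Instruct"  # assumed repository ID
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # hybrid-head models typically ship custom modeling code
)

messages = [{"role": "user", "content": "Why can hybrid attention/SSM heads help small LMs?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same pattern applies to the base checkpoint, minus the chat template.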
https://developer.nvidia.com/zh-cn/blog/hymba-hybrid-head-architecture-boosts-small-language-model-performance/
Hymba 混合头架构提高小型语言模型性能
Transformer 及其基于注意力的架构,凭借强大的性能、并行化功能以及通过键值 (KV) 缓存进行的长期召回,已成为语言模型 (LM) 的主要选择。然而,其二次计算成本和高内存需求带来了效率挑战。相比之下,Mamba 和 Mamba-2 等状态空间模型 (SSM) 可提供恒定的复杂性和高效的硬件优化,但难以处理记忆回收任务,从而影响其在常规基准测试中的性能。 NVIDIA 研究人员最近提出了 Hymba ,这是一系列小语言模型 (SLMs),采用混合 head 并行架构,将 Transformer Attention 机制与 SSMs 集成,以提高效率和性能。在 Hymba 中,attention heads 可实现高分辨率召回,而 SSM heads 可实现高效的上下文摘要。 Hymba 的新型架构揭示了以下几点见解: 注意力开销: 超过 50% 的注意力计算可以被更便宜的 SSM 计算所取代。 本地注意力优势: 大多数全球注意力可以被本地注意力取代,而不会影响一般任务和召回密集型任务的性能,得益于 SSM heads 汇总的全局信息。 KV 缓存冗余: 键值缓存在 heads 和层之间高度相关,因此可以在 heads (组查询注意力)和层(跨层 KV 缓存共享)之间共享。 Softmax 注意力限制: 注意力机制的总和被限制为 1,从而限制了稀疏性和灵活性。我们引入了可学习的元令牌,这些元令牌在提示之前提供,用于存储关键信息,并减轻与注意力机制相关的“强制关注”的负担。 本文展示了 Hymba 1.5B 与类似大小的先进开源模型 (包括 Llama 3.2 1B、OpenELM 1B、Phi 1.5、SmolLM2 1.7B、Danube2 1.8B 和 Qwen2.5 1.5B) 相比,表现良好。与类似大小的 Transformer 模型相比,Hymba 还实现了更高的吞吐量,存储缓存所需的内存减少了 10 倍。 Hymba 1.5B 已发布至 Hugging Face 集合和 GitHub 。 Hymba 15 亿性能 图 1 比较了 Hymba 1.5B 与次 2B 模型(Llama 3.2 1B、OpenELM 1B、Phi 1.5、SmolLM2 1.7B、Danube2 1.8B、Qwen2.5 1.5B)在平均任务准确性、相对于序列长度的缓存大小(MB)和吞吐量(tok/秒)方面的表现。 图 1、Hymba 1.5B 基准与低于 2B 模型的性能比较 在这组实验中,任务包括 MMLU、ARC-C、ARC-E、PIQA、Hellaswag、Winogrande 和 SQuAD-C。使用 PyTorch 在序列长度为 8K、批量大小为 128 的 NVIDIA A100 GPU 上测量吞吐量。对于在吞吐量测量期间遇到内存不足(OOM)问题的模型,批量大小减半,直到 OOM 得到解决,以测量在不使用 OOM 时可实现的最大吞吐量。 Hymba 模型设计 引入 Mamba 等 SSM 是为了解决 Transformer 的二次复杂性和推理时间较大的 KV 缓存问题。然而,由于其低分辨率内存,SSM 在内存召回和性能方面遇到困难。为了克服这些限制,我们在表 1 中提出了开发高效、高性能小型语言模型的路线图。 配置 常识推理 (*) 召回 (%)* 吞吐量 (令牌/秒)* 缓存大小 (MB) 设计理由 Ablations 在 300M 模型大小和 100B 训练令牌上的消融 Transformer (Llama) 44.08 39.98 721.1 414.7 准确召回,低效 状态空间模型 (Mamba) 42.98 19.23 4720.8 1.9 高效且召回不准确 A. + Attention heads (顺序) 44.07 45.16 776.3 156.3 增强召回功能 B. + Multi-heads (并行) 45.19 49.90 876.7 148.2 更好地平衡两个模块 C. 本地/全球 attention 44.56% 48.79 2399.7 41.2 提高计算/缓存效率 D. + KV 缓存共享 45.16 48.04 2756.5 39.4 缓存效率 E. + Meta-tokens 45.59% 51.79 2695.8 40.0 学习内存初始化 扩展至 1.5 亿模型大小和 1.5 万亿训练令牌 F. 规模/数据 60.56% 64.15 664.1 78.6% 进一步提高任务性能 G. 
扩展上下文长度 (2K→8K) 60.64% 68.79 664.1 78.6% 改进 multishot 和召回任务 表 1、Hymba 模型的设计路线图 融合混合模组 根据消融研究,在混合 head 模块中并行融合 attention heads 和 SSM heads 的表现优于顺序堆叠。Hymba 在混合 head 模块中并行融合 attention heads 和 SSM heads,使两个 heads 能够同时处理相同的信息。此架构可提高推理和召回准确性。 图 2、Hymba 中的混合 head模块 效率和 KV 缓存优化 Attention heads 可提高任务性能,但会增加 KV 缓存需求并降低吞吐量。为缓解此问题,Hymba 通过结合本地和全局 attention 并采用跨层 KV 缓存共享来优化混合 head 模块,从而将吞吐量提高了 3 倍,并在不牺牲性能的情况下将缓存减少了近 4 倍。 图 3、Hymba 模型架构 Meta-tokens 一组包含 128 个预训练嵌入的输入,可用作学习缓存初始化,以增强对相关信息的关注。这些 token 具有双重用途: 充当后盾令牌,有效地重新分配 attention,从而减轻 attention 流失。 封装压缩世界知识 图 4、从内存方面解读 Hymba 模型分析 本节介绍了在相同训练设置下跨不同架构的苹果对比。然后,我们在不同的预训练模型中可视化 SSM 和 Attention 的 attention 图。最后,我们通过剪枝对 Hymba 执行头部重要性分析。本节中的所有分析有助于说明 Hymba 的设计选择如何有效以及为何有效。 苹果与苹果对比 我们对 Hymba、纯 Mamba2、Mamba2 with FFN、Llama3 风格和 Samba 风格(Mamba-FFN-Attn-FFN)架构进行了苹果到苹果的比较。所有模型都有 1 亿个参数,并使用完全相同的训练方法从 SmolLM-Corpus 中针对 100 亿个令牌从头开始进行训练。所有结果均通过使用 Hugging Face 模型上的零样本设置的 lm-evaluation-harness 获得。Hymba 在常识推理以及问答和召回密集型任务方面表现出色。 表 2 比较了用于语言建模以及召回密集型和常识推理任务的各种模型架构,其中 Hymba 实现了跨指标的强大性能。Hymba 在语言任务中的困惑度最低(Wiki 为 18.62,LMB 为 10.38),并且在召回密集型任务中表现出色,尤其是在 SWDE(54.29)和 SQuAD-C(44.71)中,从而在此类别中获得最高平均分(49.50)。 模型 语言 (PPL) 召回密集型 (%) 常识推理 (*) Mamba2 15.88 43.34 52.52% Mamba2 w/ FFN 17.43 28.92 51.14 Llama3 16.19 47.33 52.82% Samba 16.28 3617 52.83 Hymba 14.5 49.5% 54.57 表 2、在相同设置下使用 100 亿个令牌进行训练的架构对比 在常识推理和问答方面,Hymba 在大多数任务中的表现优于其他模型,例如 SIQA (31.76) 和 TruthfulQA (31.64),平均分为 54.57,略高于 Llama3 和 Mamba2。总的来说,Hymba 是一款出色的平衡模型,在效率和任务性能方面表现出色,适用于各种类别。 Attention 贴图可视化 我们将 attention 贴图中的元素进一步分为四种类型: 元: 从所有真实令牌到元令牌的 attention 分数。此类别反映了模型对元令牌的偏好。在注意力图中,如果模型具有元令牌,它们通常位于前几列(例如,Hymba 的 128 列)。 BOS: 从所有真实令牌到序列开始令牌的 attention 分数。在 attention 图中,它们通常位于元令牌之后的第一列中。 Self: 从所有真实令牌到自身的 attention 数。在 attention 映射中,它们通常位于对角线上。 交叉: 从所有真实令牌到其他真实令牌的 attention 数。在 attention 地图中,它们通常位于对角线外区域。 Hymba 的 attention 模式与 Vanilla Transformer 明显不同。在 Vanilla Transformer 中,attention 得分更集中在 BOS 上,这与 Attention Sink 中的结果一致。此外,Vanilla Transformer 的 Self Attention 得分比例也较高。在 Hymba 中,meta-tokens、attention heads 和 SSM heads 相辅相成,从而在不同类型的 tokens 之间更平衡地分配注意力得分。 具体来说,meta-tokens 可分流 BOS 的 attention 数,使模型能够更专注于真实标记。SSM heads 对全局上下文进行总结,更侧重于当前令牌(Self attention scores)。另一方面,attention heads 对 Self 和 BOS 令牌的关注度较低,而对其他令牌(即 Cross attention scores)的关注度较高。这表明 Hymba 的混合 head 设计可以有效平衡不同类型令牌的 attention 分布,从而有可能带来更好的性能。 图 5、Hymba 的 attention图示意图 (元令牌、滑动窗口attention和 Mamba 贡献的组合) 图 6、Llama 3.2 3B 和 Hymba 1.5B 中不同类别的 attention 得分总和。 主管重要性分析 我们通过移除 attention 和 SSM heads 并记录最终精度来分析每层中的相对重要性。我们的分析揭示了以下内容: 同一层中的 attention/SSM heads的相对重要性会根据输入进行自适应,并且会因任务而异,这表明它们在处理各种输入时可以发挥不同的作用。 第一层中的 SSM heads 对于语言建模至关重要,移除它会导致准确度大幅下降到随机猜测的水平。 通常,移除一个attention/SSM heads 会导致 Hellaswag 的平均准确率分别下降 0.24%/1.1%。 图 7、移除每层的 Attention 或 SSM 头后,使用 Hellaswag 的 1K 个样本测量得出的准确率。 模型架构和训练最佳实践 本节概述 Hymba 1.5B Base 和 Hymba 1.5B Instruct 的关键架构决策和训练方法。 模型架构 混合架构: Mamba 擅长总结,通常更专注于当前 token,而 attention 更精确,可用作快照内存。并行组合可以合并这些优势,但标准顺序融合则不然。我们在 SSM 和 attention heads 之间选择了 5:1 的参数比。 滑窗法 attention heads:全 attention heads 被保留在三个层级(第一层、最后一层和中间层),其余 90%的层级使用滑窗法 attention heads。 跨层 KV 缓存共享 :在每两个连续的 attention 层之间实现。除了在 heads 之间共享 GQA KV 缓存之外,还完成了这一过程。 元令牌: 这些 128 个令牌无需监督即可学习,有助于避免大语言模型 (LLMs) 中的熵崩溃问题,并缓解 attention 汇集现象。此外,模型会将一般知识存储在这些令牌中。 训练最佳实践 预训练: 我们选择了两个阶段的基础模型训练。第 1 阶段保持恒定的高学习率,并使用较少的过滤大型语料库数据。然后,使用高质量数据将连续学习率衰减至 1e-5。这种方法支持持续训练和恢复第 1 阶段。 指令微调: 指令模型调优分三个阶段执行。首先,SFT-1 通过对代码、数学、函数调用、角色扮演和其他特定任务数据进行训练,为模型提供强大的推理能力。其次,SFT-2 教会模型遵循人类指令。最后,利用 DPO 使模型与人类偏好保持一致,并提高模型的安全性。 图 8、适用于 Hymba 模型系列的训练管线。 性能和效率评估 Hymba 1.5B 模型仅使用 1.5T 预训练令牌,在所有小型语言模型中表现最佳,并实现比所有基于 Transformer 的语言模型更高的吞吐量和缓存效率。 例如,在与最强基准 
Qwen2.5(使用 13 倍以上的 tokens 进行预训练)进行基准测试时,Hymba 1.5B 实现了 1.55%的平均准确性提升、1.41 倍的吞吐量和 2.90 倍的缓存效率。与使用少于 2T 的 tokens 训练的最强小型 LM(即 h2o-danube2)相比,我们的方法实现了 5.41%的平均准确性提升、2.45 倍的吞吐量和 6.23 倍的缓存效率。 模型 #参数 训练令牌 令牌/秒 缓存 (MB) MMLU 5-shot ARC-E 0-shot ARC-C 0-shot PIQA 0-shot Wino0-shot Hella0-shot SQuAD-C 1-shot 平均 开放 ELM-1 11 亿 1.5 T 249 346 27.06 62.37 19.54 74.76 61.8 48.37 45.38 48.57 Rene v0.1 13 亿 1.5 T 800 113 32.94 67.05 31.06 76.49 62.75 51.16 48.36 52.83 Phi 1.5 13 亿 0.15 241 1573 42.56 76.18 44.71 76.56 72.85 48 30.09 55.85 Smol LM 17 亿 1T 238 1573 27.06 76.47 43.43 75.79 60.93 49.58 45.81 54.15 Cosmo 18 亿 .2T 244 1573 26.1 62.42 32.94 71.76 55.8 42.9 38.51 47.2 h20 danube2 18 亿 2T 271 492 40.05 70.66 33.19 76.01 66.93 53.7 49.03 55.65 Llama 3.2 1B 12 亿 9T 535 262 32.12 65.53 31.39 74.43 60.69 47.72 40.18 50.29 Qwen 2.5 15 亿 18T 469 229 60.92 75.51 41.21 75.79 63.38 50.2 49.53 59.51 AMD OLMo 12 亿 1.3 T 387 1049 26.93 65.91 31.57 74.92 61.64 47.3 33.71 48.85 Smol LM2 17 亿 11T 238 1573 50.29 77.78 44.71 77.09 66.38 53.55 50.5 60.04 Llama 32 3B 30 亿 9T 191 918 56.03 74.54 42.32 76.66 69.85 55.29 43.46 59.74 Hymba 15 亿 1.5 T 664 79 51.19 76.94 45.9 77.31 66.61 53.55 55.93 61.06 表 2、Hymba 1.5 B 基础模型结果 指令模型 在所有任务中,Hymba 1.5B Instruct 模型的平均性能最高,比之前的先进模型 Qwen 2.5 Instruct 约高出 2%。具体来说,Hymba 1.5B 模型在 GSM8K、GPQA 和 BFCLv2 中的得分分别为 58.76、31.03 和 46.40,优于所有其他模型。这些结果表明 Hymba 1.5B 模型在复杂推理能力方面具有优势,特别是在需要复杂推理能力的领域。 模型 #参数 MMLU ↑ IFEval ↑ GSM8K ↑ GPQA ↑ BFCLv2 ↑ 平均 ↑ SmolLM 17 亿 27.80 25.16 1.36 25.67 – 20.00 OpenELM 11 亿 25.65 6.25 56.03 21.62 – 27.39 Llama 3.2 12 亿 44.41 58.92 42.99 24.11 20.27 38.14 Qwen2.5 15 亿 59.73 46.78 56.03 30.13 43.85 47.30 SmolLM2 17 亿 49.11 55.06 47.68 29.24 22.83 40.78 Hymba 15 亿 15 亿 52.79 57.14 58.76 31.03 46.40 49.22 表 3、Hymba 1.5 B 指令模型结果 结束语 新的 Hymba 系列小型 LM 采用混合 head 架构,将 attention heads 的高分辨率召回功能与 SSM heads 的高效上下文摘要相结合。为进一步优化 Hymba 的性能,我们引入了可学习的元令牌,用作 attention 和 SSM heads 的学习缓存,从而增强模型对显著信息的关注。通过 Hymba 的路线图、全面评估和消融研究,Hymba 在各种任务中设定了新的 state-of-the-art 性能,在准确性和效率方面实现了出色的结果。此外,这项工作还对混合 head 架构的优势提供了宝贵见解,为高效 LM 的未来研究提供了前景光明的方向。 详细了解 Hybma 1.5B Base 和 Hymba 1.5B Instruct 。 致谢 这项工作如果没有 NVIDIA 许多人的贡献是不可能完成的 ,包括 Wonmin Byeon、Zijia Chen、Ameya Sunil Mahabaleshwarkar、Shih-Yang Liu、Matthijs Van Keirsbilck、Min-Hung Chen、Yoshi Suhara、Nikolaus Binder、Hanah Zhang、Maksim Khadkevich、Yingyan Celine Lin、Jan Kautz、Pavlo Molchanov 和 Nathan Horrocks。
https://developer.nvidia.com/blog/advancing-robot-learning-perception-and-manipulation-with-latest-nvidia-isaac-release/
Advancing Robot Learning, Perception, and Manipulation with Latest NVIDIA Isaac Release
At CES 2025, NVIDIA announced key updates to NVIDIA Isaac , a platform of accelerated libraries, application frameworks, and AI models that accelerate the development of AI robots. NVIDIA Isaac streamlines the development of robotic systems from simulation to real-world deployment. In this post, we discuss all the new advances in NVIDIA Isaac: Isaac Sim Isaac Lab Isaac Manipulator Isaac Perceptor What’s new in Isaac Sim 4.5 NVIDIA Isaac Sim is a reference application built on NVIDIA Omniverse that enables you to develop, simulate, and test AI-driven robots in physically based virtual environments. The new Isaac Sim 4.5 will offer a number of significant changes, including the following: A reference application template Improved URDF import and setup Improved physics simulation and modeling New joint visualization tool Simulation accuracy and statistics NVIDIA Cosmos world foundation model Reference application template Isaac Sim has been redesigned as a customizable reference application. A minimal template for faster startup and a full template with complete functionality along with all dependencies. This enables you to tailor the application to your specific needs, whether for headless applications or full Isaac Sim experiences. Improved URDF import and setup Significant improvements have been made to the URDF importer. The User Interface has been simplified to provide a more streamlined process and align with how other formats are imported. You can now individually configure joint drives, making the robot ready to use immediately after import. To assist with joint drive configuration, a natural frequency-based tuning option is provided. Improved physics simulation and modeling Isaac Sim 4.5 features significant advancements in physics modeling and simulation. You can define and configure various joint types between robot components, setting parameters like stiffness and damping to fine-tune joint behavior. New joint visualization tool A new joint visualization tool enables you to inspect the physics properties of selected prims, including their position, rotation, linear and angular velocities, and accelerations. You can now review and optimize various scene parameters such as deformable surfaces or memory usage, before running simulations. Figure 1. Various attributes of selected objects and their association with the other objects Simulation accuracy and statistics Simulation accuracy is greatly improved with a new implementation of full momentum conservation for rigid bodies and articulations. You can now also visualize the simulation statistics of objects and scenes that can either interact with each other or are completely independent. You can review different parameters from deformable surfaces to overall memory used in the buffer. This provides a way to troubleshoot and optimize your scene prior to running the simulation. Figure 2. Simulation statistics associated inside a given scene NVIDIA Cosmos world foundation model Also announced at CES, the NVIDIA Cosmos world foundation model platform can be used to generate massive amounts of controllable synthetic data to train perception robots when paired with Isaac Sim. In Isaac Sim, you compose SimReady 3D scenes by unifying diverse data inputs including CAD, lidar-to-point cloud scans, and generated 3D objects from AI models such as Edify 3D . Then, you compose and stage the scenario to reflect the specific task the robot must perform, and render images or videos. 
Cosmos can ingest images and videos and output photoreal video clips to then retrain policy models. Video 1. NVIDIA Cosmos: A World Foundation Model Platform for Physical AI What’s new in Isaac Lab 2.0 NVIDIA Isaac Lab is an open-source unified framework for robot learning to train robot policies. Isaac Lab is built on top of NVIDIA Isaac Sim , helping developers and researchers more efficiently build intelligent, adaptable robots with robust, perception-enabled, simulation-trained policies. A new 2.0 version of Isaac Lab includes performance and usability improvements: Tiled rendering: Up to a 1.2x boost in tiled rendering speed, which combines outputs from simultaneous simulations into a single, large image rather than processing numerous smaller images from individual cameras. Quality of life improvements: Simplified installation process using Python package managers. Isaac Lab will also be available as a container, enabling the movement of workloads across systems without underlying dependencies. Previewed at CES, humanoid robot developers can also take advantage of the NVIDIA Isaac GR00T Blueprint for building custom data pipelines for generating vast amounts of synthetic trajectory data from just a small number of human demonstrations. The GR00T blueprint is currently in invite-only early access. Join the NVIDIA Humanoid Developer Program when it becomes widely available in beta. Video 2. Streamline Data Collection With NVIDIA Isaac GR00T What’s new in Isaac Manipulator NVIDIA Isaac Manipulator , built on ROS 2, is a collection of NVIDIA CUDA-accelerated libraries, AI models, and reference workflows. It enables you to build AI-enabled robot arms, or manipulators , that can perceive, understand, and interact with their environments. Isaac Manipulator now includes new end-to-end reference workflows for pick-and-place and object-following, enabling you to quickly get started on fundamental industrial robot arm tasks: Object-following : Shows the robot gripper’s ability to maintain a consistent position relative to a moving object, while maneuvering around obstacles. Pick-and-place : Shows how a robot can pick up an object and release it in a predetermined region while avoiding obstacles (Figure 3). Figure 3. Perception-driven pick-and-place These reference workflows are now supported in Isaac Sim , enabling rapid testing without the need for physical hardware setup. Developer support and other enhancements include the following: Performance improvements to FoundationPose Updates to nvblox for manipulator use cases A tutorial for robot hand-eye calibration An Isaac Sim-based tool for setting and simulating custom grasps for a gripper and object pair What’s new in Isaac Perceptor NVIDIA Isaac Perceptor , built on ROS 2, is a collection of NVIDIA CUDA-accelerated libraries, AI models, and reference workflows for the development of autonomous mobile robots (AMRs). It enables AMRs to perceive, localize, and operate in unstructured environments such as warehouses or factories. Isaac Perceptor encompasses CUDA-accelerated libraries such as nvblox for 3D scene reconstruction and cuVSLAM for stereo-visual-inertial SLAM (simultaneous localization and mapping), which you can integrate into existing AMR workflows. Isaac Perceptor’s latest updates bring significant improvements to AMR’s environmental awareness and operational efficiency in dynamic settings such as warehouses. 
Key new features and improvements include: New end-to-end visual SLAM reference workflow New examples on running nvblox with multiple cameras for 3D scene reconstruction with people detection and dynamic scene elements Improved 3D scene reconstruction by running Isaac Perceptor on multiple RGB-D camera These updates significantly improve 3D scene reconstruction, leading to higher accuracy and robustness in 3D scene capture and mapping performance in real-world scenarios and complex, dynamic environments. Figure 4. 3D scene reconstruction using multiple cameras NVIDIA ecosystem partners such as Orbbec, LIPS (Realsense), StereoLabs (Zed) offer compatible cameras. Developer support also includes the Mapping and Localization with Isaac Perceptor tutorial for offline mapping capabilities with the cuVGL and cuVSLAM libraries using Nova sensors. Ecosystem adoption Multiple industry partners have announced integrations of NVIDIA Isaac into their platforms and solutions: Boston Dynamics is using Isaac Lab and NVIDIA Jetson AGX Orin to enable simulated policies to be directly deployed for inference, simplifying the deployment process. To train their GR-1 and GR-2 humanoid robots, the Fourier team turned to NVIDIA Isaac Gym (now deprecated) for reinforcement learning. They are currently porting their workflows to NVIDIA Isaac Lab. Foxglove has developed an extension in Isaac Sim that enables the real-time visualization of robotics simulation data directly in Foxglove. Main Street Autonomy’s Calibration Anywhere software automates sensor calibration improving sensor fusion for robotics using Isaac Perceptor. Miso Robotics automates kitchen tasks using advanced robotics, Isaac Manipulator, and NVIDIA Isaac ROS, enhancing efficiency, consistency, and customer satisfaction in commercial kitchens. RGo Robotics and NVIDIA are transforming mobile robotics with advanced AI and perception technologies using Isaac Perceptor and Isaac ROS. Scaled Foundations , an NVIDIA inception member, has developed General Robot Intelligence Development (GRID) , an advanced cloud-based platform that accelerates robot AI solution development. GRID seamlessly integrates Isaac Sim and Isaac Lab technologies, offering an end-to-end platform for robotics developers and researchers to train, simulate, and deploy their robotics applications. For more information, see NVIDIA Isaac Sim on GRID . Virtual Incision is using NVIDIA platforms involving Holoscan, IGX, Sensor Bridge, Isaac Sim. They are exploring Cosmos to train, simulate, and test, and also exploring inference AI features for the next generation of assisted robotic surgery devices. NVIDIA Inception member and deep tech startup Wandelbots is building custom robot simulations with their operating system Wandelbots NOVA, which is seamlessly integrated with Isaac Sim. Get started developing your own robotics solutions Sign up for the NVIDIA Developer Program for updates on additional resources and reference architectures to support your development goals. NVIDIA Cosmos is a platform that helps you build custom world models for physical AI systems and includes pretrained world foundation models for robotics applications. NVIDIA Isaac Lab is an open-source unified framework for robot learning to train robot policies. NVIDIA Isaac ROS , built on the open-source ROS 2 software framework , is a collection of accelerated computing packages and AI models, bringing NVIDIA-acceleration to ROS developers everywhere. 
NVIDIA Isaac Sim , built on NVIDIA Omniverse , lets you build your own OpenUSD -based applications to design, simulate, test, and train AI-based robots and machines in a physically based virtual environment. Stay up-to-date on LinkedIn , Instagram , X , and Facebook .  For more information, explore our documentation and join the Robotics community on the developer forums and YouTube channels. Follow our self-paced training and webinars ( Isaac ROS and Isaac Sim ).
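Because Isaac ROS components such as cuVSLAM publish their outputs as standard ROS 2 messages, a downstream application can consume them with an ordinary rclpy node. The sketch below subscribes to a visual SLAM odometry stream; the topic name is a placeholder assumption that depends on how your graph is launched.

```python
# Minimal sketch: consuming visual SLAM odometry from an Isaac ROS graph with rclpy.
# The topic name below is an assumption/placeholder; check your launch files for
# the topic actually published by the visual SLAM node in your setup.
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry


class SlamOdomListener(Node):
    def __init__(self):
        super().__init__("slam_odom_listener")
        self.create_subscription(
            Odometry, "/visual_slam/tracking/odometry", self.on_odom, 10
        )

    def on_odom(self, msg: Odometry):
        # Log the estimated robot position from the SLAM odometry message.
        p = msg.pose.pose.position
        self.get_logger().info(f"robot at x={p.x:.2f}, y={p.y:.2f}, z={p.z:.2f}")


def main():
    rclpy.init()
    node = SlamOdomListener()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == "__main__":
    main()
```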
https://developer.nvidia.com/zh-cn/blog/advancing-robot-learning-perception-and-manipulation-with-latest-nvidia-isaac-release/
利用最新的 NVIDIA Isaac 版本推进机器人学习、感知和操控技术
在 CES 2025 上,NVIDIA 宣布了对 NVIDIA Isaac 的重要更新。NVIDIA Isaac 是一个由加速库、应用框架和 AI 模型组成的平台,可加速 AI 机器人的开发。 NVIDIA Isaac 简化了从仿真到实际部署的机器人系统开发过程。在本文中,我们将讨论 NVIDIA Isaac 的所有新进展: Isaac Sim Isaac Lab Isaac Manipulator Isaac Perceptor Isaac Sim 4.5 的新增功能 NVIDIA Isaac Sim 是一款基于 NVIDIA Omniverse 构建的参考应用,使您能够在基于物理的虚拟环境中开发、模拟和测试 AI 驱动的机器人。 新的 Isaac Sim 4.5 将于 1 月底推出,将带来一系列重大变化,包括: 参考应用模板 改进了 URDF 导入和设置 改进了物理模拟和建模 新的关节可视化工具 模拟准确性和统计数据 NVIDIA Cosmos 世界基础模型 参考应用模板 Isaac Sim 经过重新设计,是一款可定制的参考应用。用于加快启动速度的最小模板和具有完整功能以及所有依赖项的完整模板。这使您能够根据自己的特定需求定制应用,无论是用于无外设应用还是完整的 Isaac Sim 体验。 改进了 URDF 导入和设置 对 URDF 导入器进行了重大改进。用户界面经过简化,可提供更精简的流程,并与其他格式的导入方式保持一致。现在,您可以单独配置关节驱动,使机器人在导入后可以立即使用。为辅助关节驱动配置,我们提供了基于自然频率的调整选项。 改进了物理模拟和建模 Isaac Sim 4.5 在物理建模和仿真方面取得了显著进步。您可以定义和配置机器人组件之间的各种关节类型,通过设置刚度和阻尼等参数来微调关节行为。 新的关节可视化工具 借助新的关节可视化工具,您可以检查选定基元的物理属性,包括其位置、旋转、线性和角速度以及加速度。现在,您可以在运行模拟之前查看和优化各种场景参数,例如可变形表面或显存使用情况。 图 1、所选对象的各种属性及其与其他对象的关联 模拟准确性和统计数据 通过对刚体和关节实施新的全动量守势,模拟准确性得到大幅提高。 现在,您还可以可视化物体和场景的仿真统计信息,这些物体和场景可以相互交互,也可以完全独立。您可以查看从可变形表面到缓冲区中使用的总显存的不同参数。这样可以在运行仿真之前对场景进行故障排除和优化。 图 2、给定场景内关联的模拟统计数据 NVIDIA Cosmos 世界基础模型 此外,在 CES 上宣布的 NVIDIA Cosmos World Foundation 模型平台与 Isaac Sim 搭配使用时,可用于生成大量可控制的合成数据来训练感知机器人。 在 Isaac Sim 中,您可以通过统一不同的数据输入 (包括 CAD、激光雷达到点云扫描) 以及从 AI 模型 (例如 Edify 3D ) 生成的 3D 对象来合成 SimReady 3D 场景 。然后,合成场景并对其进行暂存,以反映机器人必须执行的特定任务,并渲染图像或视频。 Cosmos 可以提取图像和视频,输出逼真的视频片段,然后重新训练策略模型。 视频 1、NVIDIA Cosmos:适用于物理 AI 的世界基础模型平台 Isaac Lab 2.0 的新增功能 NVIDIA Isaac Lab 是一个用于机器人学习的开源统一框架,用于训练机器人策略。Isaac Lab 基于 NVIDIA Isaac Sim 构建,可帮助开发者和研究人员更高效地构建智能、适应性强的机器人,并提供稳健、支持感知、经过仿真训练的策略。 新版 Isaac Lab 2.0 将于本月底推出,其中包括以下性能和可用性改进: 平铺式渲染: 平铺式渲染速度最高可提升 1.2 倍,可将同时模拟的输出结果整合到单个大型图像中,而无需处理来自单个摄像头的大量较小图像。 改善生活质量: 使用 Python 包管理器简化安装流程。Isaac Lab 还可用作容器,无需底层依赖项即可跨系统迁移工作负载。 在 CES 展会上,类人型机器人开发者还可以利用 NVIDIA Isaac GR00T Blueprint 构建自定义数据流水线,通过少量人类演示生成大量合成轨迹数据。GR00T 蓝图目前处于邀请抢先体验阶段。当 NVIDIA Humanoid 开发者计划在测试版中广泛推出时,加入该计划。 视频 2、借助 NVIDIA Isaac GR00T 简化数据采集 Isaac Manipulator 新增功能 NVIDIA Isaac Manipulator 基于 ROS 2 构建,是 NVIDIA CUDA 加速库、AI 模型和参考工作流的集合。它使您能够构建支持 AI 的机械臂 (Manipulator),能够感知和理解周围环境并与之交互。 Isaac Manipulator 现在包含新的端到端参考工作流,用于拾放和对象跟踪,能够让您快速开始执行基本的工业机器人手臂任务。 物体追踪 :展示机器人抓手在绕过障碍物时相对于移动物体保持稳定位置的能力。 拾放 :展示机器人如何拾取物体并将其释放到预定区域,同时避开障碍物 (Figure 3)。 图 3、感知驱动拾放 这些参考工作流程现在支持在 Isaac Sim 中使用,实现了无需物理硬件设置的快速测试。 开发者支持和其他增强功能包括: 性能提升 FoundationPose 更新 NVBlox 操纵器用例 用于机器人手眼校准的 教程 一款基于 Isaac Sim 的工具 ,用于为抓手和对象对设置和模拟自定义抓取 Isaac Perceptor 新增功能 NVIDIA Isaac Perceptor 基于 ROS 2 构建,是 NVIDIA CUDA 加速库、AI 模型和参考工作流的集合,用于开发自主移动机器人 (Autonomous Mobile Robots, AMRs)。它使 AMRs 能够在仓库或工厂等非结构化环境中进行感知、定位和操作。 Isaac Perceptor 包含 CUDA 加速库,例如用于 3D 场景重建的 nvblox 和用于立体视觉惯性 SLAM(同步定位和映射)的 cuVSLAM ,您可以将其集成到现有的自动移动机器人(AMR)工作流程中。 Isaac Perceptor 的最新更新显著提高了 AMR 在仓库等动态环境中的环境意识和运营效率。主要的新功能和改进包括: 新的端到端可视化 SLAM 参考工作流 关于使用多个摄像头运行 NVIDIA 的 nvblox 的新示例,用于 3D 场景重建的 人员检测和动态场景元素 通过 在多个 RGB-D 摄像头上运行 Isaac Perceptor ,改进了 3D 场景重建 这些更新显著改进了 3D 场景重建,从而提高了真实场景和复杂动态环境中 3D 场景捕获和映射性能的准确性和可靠性。 图 4、使用多个摄像头重建 3D 场景 Orbbec、LIPS (Realsense)、StereoLabs (Zed) 等 NVIDIA 生态系统合作伙伴提供兼容的摄像头。开发者支持还包括“Mapping and Localization with Isaac Perceptor”( 使用 Isaac Perceptor 进行映射和定位 ) 教程,该教程通过使用 Nova 传感器的 cuVGL 和 cuVSLAM 库提供离线映射功能。 采用生态系统 多个行业合作伙伴已宣布将 NVIDIA Isaac 集成到其平台和解决方案中: Boston Dynamics 正在使用 Isaac Lab 和 NVIDIA Jetson AGX Orin 直接部署模拟策略以进行推理,从而简化部署流程。 为了训练他们的 GR-1 和 GR-2 类人型机器人, Fourier 团队转而使用 NVIDIA Isaac Gym (现已弃用) 进行强化学习。他们目前正在将工作流移植到 NVIDIA Isaac Lab。 Foxglove 在 Isaac Sim 中开发了一个扩展程序 ,可直接在 Foxglove 中实时可视化机器人仿真数据。 Main Street Autonomy 的 Calibration Anywhere 软件 使用 Isaac Perceptor 自动进行传感器校准,改善机器人的传感器融合。 Miso Robotics 使用先进的机器人、 Isaac Manipulator 和 NVIDIA Isaac ROS 
实现厨房任务自动化,提高商业厨房的效率、一致性和客户满意度。 RGo Robotics 和 NVIDIA 正在利用 Isaac Perceptor 和 Isaac ROS,利用先进的 AI 和感知技术变革移动机器人技术 。 作为 NVIDIA 初创加速计划的成员, Scaled Foundations 开发了通用机器人智能开发 ( GRID ),这是一个基于云的先进平台,可加速机器人 AI 解决方案的开发。GRID 无缝集成了 NVIDIA Isaac Sim 和 NVIDIA Isaac Lab 技术,为机器人开发者和研究人员提供了一个端到端平台,用于训练、模拟和部署其机器人应用。有关更多信息,请参阅基于 GRID 的 NVIDIA Isaac Sim。 Virtual Incision 正在使用 NVIDIA 平台,涉及 Holoscan、IGX、Sensor Bridge、Isaac Sim。他们正在探索 Cosmos 进行训练、模拟和测试,还在探索用于新一代机器人辅助手术设备的推理 AI 功能。 NVIDIA Inception 计划会员兼深度技术初创公司 Wandelbots 正在使用其操作系统 Wandelbots NOVA 构建自定义机器人仿真,该系统与 Isaac Sim 无缝集成。 开始开发您自己的机器人开发解决方案 注册 NVIDIA 开发者计划 ,获取有关其他资源和参考架构的更新信息,以支持您的开发目标。 NVIDIA Cosmos 平台 可帮助您为物理 AI 系统构建自定义世界模型,并包含用于机器人应用的预训练世界基础模型。 NVIDIA Isaac Lab 是一个用于机器人学习的开源统一框架 ,用于训练机器人策略。 NVIDIA Isaac ROS 基于开源 ROS 2 软件框架构建,是加速计算包和 AI 模型的集合,可为各地的 ROS 开发者提供 NVIDIA 加速。 NVIDIA Isaac Sim 基于 NVIDIA Omniverse 构建,可让您构建自己的基于 OpenUSD 的应用,以便在基于物理的虚拟环境中设计、仿真、测试和训练基于 AI 的机器人和机器。 关注 LinkedIn 、 Instagram 、 X 和 Facebook ,了解最新动态。如需了解更多信息,请浏览我们的文档,并在开发者论坛和 YouTube 频道上加入机器人开发社区。关注我们的自主培训和网络会议 (Isaac ROS 和 Isaac Sim)。
https://developer.nvidia.com/blog/ai-vision-helps-green-recycling-plants/
AI Vision Helps Green Recycling Plants
Each year, the world recycles only around 13% of its two billion-plus tons of municipal waste. By 2050, the world’s annual municipal waste will reach 3.88B tons. But the global recycling industry is far from efficient. Annually, as much as $120B of potentially recoverable plastic—let alone paper or metals—ends up in landfills rather than within new products made with recycled materials. A UK-based startup, Greyparrot , has developed a small, AI-powered device offering “waste intelligence,” and aims to help recycling plants become more efficient and environmentally friendly. The sleek, two-foot-square device—dubbed the Greyparrot Analyzer—uses embedded cameras to identify and differentiate among materials on conveyor belts running through recycling plants. The analyzer—using a machine learning, object detection model (ODM)—is attached to, and helps retrofit recycling plants’ infrastructure. Trained on tens of millions of waste images, the ODM takes less than 60 milliseconds to capture and analyze images of quickly-moving waste. The analyzer’s ODM, which was trained on-premise and in the cloud using NVIDIA H100 Tensor Core GPUs , NVIDIA L4 Tensor Core GPUs , as well as PyTorch and NVIDIA CUDA-X libraries, can identify roughly 90 different types of materials. Each analyzer, which weighs around 40 pounds, is also powered by an NVIDIA GeForce RTX 4060 GPU . In 2024, Greyparrot analyzed more than 40 billion waste objects across its worldwide facilities. Armed with this “waste intelligence,” operations managers use the data for two primary purposes. First, plant operators get real-time and longitudinal analytical insights into how effectively their facility separates and creates homogenous piles of recyclable materials—like paper, metals, and plastics. Creating purer collections of recyclable materials, which are then sold to product manufacturers, is a key component of the world’s $1.3T recycling industry. A second use case for the data—or really, the ODM—is to supplement the operating systems of robotic arms that are deployed within facilities to sort different types of waste. The arms communicate with the ODM to help them more accurately sort materials. “A recycling plant’s job is to turn what we discard, and never think of again, back into virgin material that can be recycled to make more of those same things,” said James Paterson, Greyparrot’s vice president of engineering. “We’re constantly tackling throughput and quality, so recycling facilities can process more waste, and still end up with more purified products.” Figure 1. Greyparrot’s analyzer system sits above conveyor belts in recycling facilities and provides real-time as well as longitudinal data insights to improve efficiency Recycling plants deploying the analyzer also use it as something of a trip wire. It’s not uncommon, for instance, for one component of a sorting facility to jam. When that happens, different materials can get mixed together, or end up inadvertently getting thrown out rather than being readied for recycling. With the analyzer’s ODM, sorting errors—triggered by unanticipated jams—are quickly identified and relayed to operators, who can quickly intervene. Without the ODM, it might take hours for facilities to identify a mechanical mishap. “You can imagine in a big facility with a lot of machines, it may take three to four hours for someone to notice a machine is jammed,” said Paterson. 
“And then someone looks and sees there are all these valuable aluminum cans [incorrectly] getting thrown out, and they say, ‘Oh my god. The system is down. We better check the machines.’ And that’s money that’s literally going down the drain and into landfills.” The five-year-old London-based startup—which is part of the NVIDIA Inception Program — operates in more than 55 recycling facilities across 20 different countries. With about 50 full-time employees, primarily in the UK and US, but also in Spain, Italy, and The Netherlands, Greyparrot is expanding its global presence. In early 2024, the Dutch recycling firm Bollegraaf Group, the world’s largest recycling plant builder, agreed to integrate Greyparrot’s technology into its new facilities. As more state-of-the-art facilities integrate AI into their waste analytics and robotic capabilities, Greyparrot anticipates the analyzer will help improve operational performance and improve recycling yields at scale. Read more about Greyparrot’s recycling efforts and its relationship with iconic product designer, Tony Fadell. There’s additional reporting on Greyparrot’s efforts to bring AI to the world’s recycling industry.
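The "trip wire" behavior described above amounts to rate monitoring on top of the detector's output stream. The sketch below is a purely illustrative reconstruction of that idea, not Greyparrot's implementation: it flags a possible jam when the share of high-value materials seen on a reject line over a sliding window of frames exceeds a threshold. All class names and thresholds are hypothetical.

```python
# Illustrative sketch (not Greyparrot's code): flag a possible jam when detections
# of valuable recyclables on a reject belt exceed a threshold over a sliding window.
# All class names and thresholds here are hypothetical.
from collections import deque

VALUABLE = {"aluminum_can", "PET_bottle"}   # hypothetical detector class labels
WINDOW_FRAMES = 600                         # roughly one minute at 10 frames/sec
ALERT_RATIO = 0.15                          # alert if >15% of detections are valuable

recent = deque(maxlen=WINDOW_FRAMES)

def update(frame_detections):
    """frame_detections: list of class labels detected in one frame on the reject line."""
    valuable = sum(1 for label in frame_detections if label in VALUABLE)
    recent.append((valuable, len(frame_detections)))
    total_valuable = sum(v for v, _ in recent)
    total = sum(n for _, n in recent) or 1
    if len(recent) == WINDOW_FRAMES and total_valuable / total > ALERT_RATIO:
        return "ALERT: valuable material on reject line, check for a jam upstream"
    return None

# Example frame: the detector saw two cans and one piece of plastic film.
print(update(["aluminum_can", "aluminum_can", "plastic_film"]))
```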
https://developer.nvidia.com/zh-cn/blog/ai-vision-helps-green-recycling-plants/
AI 视觉技术助力绿色回收工厂智能化
全球每年仅回收约 13% 的 20 多亿吨城市垃圾。到 2050 年,全球每年的城市废弃物量将达到 3.88 亿吨。 但全球回收行业的效率远远不够。每年,价值高达 120 亿美元的可回收塑料 (更不用说纸张或金属了) 最终进入垃圾填埋场,而不是使用回收材料制造的新产品。 英国的初创公司 Greyparrot 开发了一款由 AI 驱动的小型设备,可提供“废弃物智能”功能,旨在帮助回收工厂变得更高效、更环保。 这款名为 Greyparrot Analyzer 的两英尺方形时尚设备使用嵌入式摄像头来识别和区分流经回收工厂的传送带上的材料。 该分析器使用机器学习物体检测模型(ODM)连接到回收工厂,并帮助改造其基础设施。ODM 根据数千万张废弃物图像进行训练,捕获和分析快速废弃物图像只需不到 60 毫秒。 分析器的 ODM 使用 NVIDIA H100 Tensor Core GPU 、 NVIDIA L4 Tensor Core GPU 以及 PyTorch 和 NVIDIA CUDA-X 库在本地和云端进行训练,可以识别大约 90 种不同类型的材料。每台重约 40 磅的分析器也由 NVIDIA GeForce RTX 4060 GPU 提供支持。 2024 年,Greyparrot 分析了其全球设施中超过 40 亿个废弃物。借助这种“废弃物情报”,运营经理可以将数据用于两个主要目的。 首先,工厂操作人员可以获得实时的纵向分析见解,了解其设施分离和生成同质可回收材料(如纸张、金属和塑料)的效率。打造更纯净的可回收材料系列,然后将其出售给产品制造商,这是全球价值 1.3 万亿美元的回收行业的重要组成部分。 数据(实际上是 ODM)的第二个用例是补充部署在设施内的机械臂操作系统,以分类不同类型的废弃物。手臂与 ODM 通信,帮助他们更准确地分类材质。 Greyparrot 工程副总裁 James Paterson 表示:“回收工厂的工作是将我们丢弃且永远不会想到的东西重新转化为可以回收的原生材料,以制造更多类似的东西。”我们在不断提高吞吐量和质量,因此回收设施可以处理更多的废弃物,最终仍能获得更多的净化产品。” 图 1、Greyparrot 的分析系统位于回收设施的传送带上方,可提供实时和纵向数据洞察,从而提高效率 部署分析器的回收工厂也将其用作触发器。 例如,分拣设施的一个组件卡死的情况并不少见。当这种情况发生时,不同的材料可能会混合在一起,或者最终在无意中被丢弃,无法准备回收。 借助分析器的 ODM,可快速识别由意外的卡阻引发的分拣错误并将其转发给操作员,以便操作员快速干预。如果没有 ODM,设施可能需要数小时才能识别机械事故。 Paterson 说:“可以想象,在拥有大量机器的大型设施中,人们可能需要 3 到 4 个小时才能注意到机器卡住了。”“然后有人看了看,发现有很多有价值的铝罐【错误】被丢弃,他们说,‘天哪,我的天哪。系统机了。我们最好检查机器。“这笔钱实际上会流入垃圾填埋场。” 这家成立五年的初创公司位于伦敦,是 NVIDIA Inception Program 的一部分,在 20 个不同国家地区的超过 55 家回收设施中运营。Greyparrot 拥有大约 50 名全职员工,主要分布在英国,也分布在西班牙、意大利和荷兰,正在扩展其全球业务。 2024 年初,全球最大的回收工厂制造商荷兰回收公司 Bollegraaf Group 同意将 Greyparrot 的技术整合到其新设施中。 随着越来越多的先进设施将 AI 集成到其垃圾分析和机器人功能中,Greyparrot 预计该分析器将有助于提高运营性能并大规模提高回收产量。 详细了解 Greyparrot 的 回收工作 及其与标志性产品设计师 Tony Fadell 的关系。 还有 更多 关于 Greyparrot 为将 AI 引入全球回收行业所做努力的报道。
https://developer.nvidia.com/blog/five-takeaways-from-nvidia-6g-developer-day-2024/
Five Takeaways from NVIDIA 6G Developer Day 2024
NVIDIA 6G Developer Day 2024 brought together members of the 6G research and development community to share insights and learn new ways of engaging with NVIDIA 6G research tools. More than 1,300 academic and industry researchers from across the world attended the virtual event. It featured presentations from NVIDIA, ETH Zürich, Keysight, Northeastern University, Samsung, Softbank, and University of Oulu. This post explores five key takeaways from the event. 1. 6G will be AI-native and implement AI-RAN It’s expected that 6G will ride the AI wave to unlock new potential for both consumers and enterprises, and transform the telecommunications infrastructure. This was the key message of the keynote presented by NVIDIA SVP Ronnie Vasishta . With the rapidly growing adoption of generative AI and AI applications, AI-enhanced endpoints are interacting and making decisions on the move, creating huge volumes of voice, video, data, and AI traffic on the telecommunications network. The emergence of AI traffic, generated from AI applications at the edge, and requiring differing levels of cost economics, energy economics, latency, reliability, security and data sovereignty, provides new opportunities and challenges for the telecommunications infrastructure. This requires the underlying infrastructure to be designed and built to be AI-native, natively leveraging AI capabilities and supporting AI traffic. The keynote unveiled the strategic and technical drivers for an AI-native 6G infrastructure. Telcos want to maximize infrastructure efficiency with higher spectral efficiency, throughput, and capacity. Equally, telcos seek to maximize return on investment with better monetization, agility to introduce new features, and support for growth of traffic and new services on the RAN. AI-RAN offers a pathway to realize the benefits of the AI-native infrastructure with an AI-With-RAN and an AI-For-RAN implementation (Figure 1). AI-With-RAN encompasses what the AI-RAN Alliance describes as AI-on-RAN and AI-and-RAN. It enables telcos to move from single purpose to multipurpose networks that can dynamically run AI and RAN workloads on a software-defined, unified, and accelerated infrastructure. With AI-For-RAN, RAN-specific AI algorithms are deployed on the same infrastructure to drive RAN performance and improvement. AI-RAN will revolutionize the telecommunications industry, enabling telcos to unlock new revenue streams, and deliver enhanced experiences through generative AI, robotics, and automation tools. Figure 1. AI-RAN is the technology framework for AI-native 6G 2. AI-RAN models the three computer problems of AI for the physical world AI-RAN is the technology framework to build the AI-native 6G​ and is a good model of how AI is integral to developing, simulating, and deploying solutions for the physical world. This aligns with the classic three computer problems: Creating AI models using huge amounts of data Testing and improving network behavior with large-scale simulations, especially for site-specific data Deploying and operating a live network For 6G, this means creating and developing AI models for 6G; simulating, modeling, and improving an AI-native 6G; and deploying and operating an AI-native 6G. The session on Inventing 6G with NVIDIA AI Aerial Platform introduced NVIDIA AI Aerial as a platform for implementing AI-RAN and with three components for addressing the three computer problems: NVIDIA Aerial AI radio frameworks for developing and training 6G algorithms. 
This is suited for datacenter scale computing platforms such as NVIDIA DGX . NVIDIA Aerial Omniverse Digital Twin (AODT) for large city-scale simulations with accurate physical properties and high degree of fidelity. It is used for simulating and fine-tuning 6G algorithms and is deployed on NVIDIA OVX systems . NVIDIA Aerial CUDA Accelerated RAN , a carrier-grade, GPU-accelerated, software-defined vRAN and is deployed on the Aerial RAN Computer-1 platform. NVIDIA AI Aerial provides a set of tools for algorithm development, system-level integration and benchmarking, as well as production-level integration and benchmarking. Introduced for 5G, these tools cover 5G-advanced and lead the path towards 6G. Figure 2. NVIDIA AI Aerial addresses the three computer problems for delivering 6G 3. GPU-based accelerated computing is best suited for deploying 6G 6G AI-RAN will continue the trend from 5G towards a software-defined, high-performance RAN running on COTS infrastructure. It will also be fully AI-native, O-RAN based, with hardware/software disaggregation, and multipurpose in nature to support both AI and RAN workloads. With these, it is increasingly clear that new approaches will be needed for the computer where the 6G AI-RAN is deployed to realize new opportunities for telcos and handle vRAN baseband challenges. Among all the competing solutions to match these industry requirements, there are three key reasons GPU acceleration is the best computer platform for deploying 6G, as explained in the session on CUDA/GPU System for Low-Latency RAN Compute . Specifically, GPUs: Deliver very high throughput to handle heavy traffic. This is possible because, thanks to parallel computing, the GPU excels at managing multiple data streams simultaneously, better utilization of multiple Physical Resource Blocks (PRBs) and handling complex algorithms for tasks such as beamforming. Run low-latency and real-time critical workloads efficiently. GPU SIMT (single instruction, multiple threads) architecture is optimized for linear algebra operations. With additional CUDA features, this enables a software-defined digital signal processing compute machine to run physical layer workloads efficiently. Are ideal for the multipurpose platform for AI and RAN, thanks to its well-established suitability for AI workloads. This makes it significantly better at delivering a profitable AI-RAN and provides a platform that offers a sustainable Gbps/watt energy efficiency. For more details from the Softbank announcement in November 2024, see AI-RAN Goes Live and Unlocks a New AI Opportunity for Telcos . NVIDIA Aerial CUDA Accelerated RAN running on the Aerial RAN Computer-1 offers a high-performance and scalable GPU-based solution for the 6G AI-RAN. It includes a set of software-defined RAN libraries (cuPHY, cuMAC, pyAerial) optimized to run on multiple GPU-accelerated computing configurations. Figure 3 shows the different strategies required to scale such a system from low density to medium density and high density configurations and across CPU, GPU, and NIC subsystems. It highlights strategies to minimize data movement, synchronization and execution overhead; maximize concurrency and asynchrony; tools for optimization, pipelining, prioritization, and resource partitions for QoS. Figure 3. Scaling a low-latency system​ for CUDA/GPU for low-latency RAN compute​ 4. 
Digital twins will be an integral part of 6G AI-RAN At NVIDIA GTC 2024, NVIDIA CEO Jensen Huang said in a keynote that “we believe that everything manufactured will have digital twins.” This is increasingly a reality across many industrial sectors. For telecommunications, 6G will be the first cellular technology generation to be first created and simulated as a digital twin. This will create a continuum among the design, deployment, and operations phases for 6G RAN products. The session on Building a RAN Digital Twin explored how to build a RAN digital twin using NVIDIA Aerial Omniverse Digital Twin (AODT) and how this enables new AI techniques and algorithms for 6G (Figure 4). AODT is a next-generation, system-level simulation platform for 5G/6G. It is based on, and benefits from the richness of, the NVIDIA Omniverse platform. In addition, Keysight showcased how they are using AODT as part of their RF Raytracing Digital Twin Solution in the session, 6G Developer Spotlight Session 1 . Both sessions showcased how aspects of the physical world (including 5G/6G RAN, user devices, and radio frequency signals) and the digital twin world (including the electromagnetic engine, mobility model for user devices, geospatial data, antenna data, and channel emulator) are combined to create and simulate a RAN digital twin. In doing so, the RAN digital twin becomes a tool to benchmark system performance and explore machine learning-based wireless communication algorithms in real-world conditions. Figure 4. Scene creation from the NVIDIA Aerial Omniverse Digital Twin 5. The industry needs platforms for AI training for 6G AI-RAN As AI becomes integral to 6G design and development, the need for training platforms and testbeds is important for the industry. Such a platform and testbed provides an opportunity to take 6G AI/ML from simulation to reality. Ongoing research areas for native AI include waveform learning, MAC acceleration, ​site-specific optimizations, beamforming​, spectrum sensing,​ and semantic communication. The session on AI and Radio Frameworks for 6G explored how AI and radio frameworks can be used for 6G R&D with NVIDIA Aerial AI Radio Frameworks and its tools pyAerial , Aerial Data Lake , and NVIDIA Sionna . pyAerial is a Python library of physical layer components that can be used as part of the workflow in taking a design from simulation to real-time operation. It provides end-to-end verification of a neural network integration into a physical layer pipeline and helps bridge the gap from the world of training and simulation in TensorFlow and PyTorch to real-time operation in an over-the-air testbed. Aerial Data Lake is a data capture platform supporting the capture of OTA radio frequency (RF) data from vRAN networks built on the Aerial CUDA-Accelerated RAN. It consists of a data capture application running on the base station distributed unit, a database of samples collected by the app, and an API for accessing the database. Sionna is a GPU-accelerated open-source library for link-level simulations. It enables rapid prototyping of complex communication system architectures and provides native support for the integration of machine learning in 6G signal processing. The session on Setting Up 6G Research Testbeds explored how to set up a 6G research testbed to accelerate innovation, drive standardization, and provide real-world testing and performance benchmarking with the NVIDIA Aerial RAN CoLab (ARC-OTA). 
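To give a flavor of the link-level experiments a library like Sionna accelerates, the following framework-agnostic NumPy sketch estimates the bit error rate of QPSK over an AWGN channel. It deliberately avoids Sionna's own API (which additionally offers GPU execution and differentiable blocks for embedding neural networks in the signal chain) and is purely an illustration of what a link-level simulation computes.

```python
import numpy as np

def qpsk_awgn_ber(ebno_db: float, n_bits: int = 200_000, seed: int = 1) -> float:
    """Monte Carlo BER estimate for Gray-mapped QPSK over AWGN (illustrative only)."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    # Map bit pairs to unit-energy QPSK symbols (bit 0 -> +1, bit 1 -> -1 per axis).
    i, q = 1 - 2 * bits[0::2], 1 - 2 * bits[1::2]
    symbols = (i + 1j * q) / np.sqrt(2)
    # Es/N0 = 2 * Eb/N0 for QPSK (2 bits per symbol).
    esno = 2 * 10 ** (ebno_db / 10)
    noise_std = np.sqrt(1 / (2 * esno))
    rx = symbols + noise_std * (rng.standard_normal(symbols.shape)
                                + 1j * rng.standard_normal(symbols.shape))
    # Hard-decision demapping back to bits.
    bits_hat = np.empty(n_bits, dtype=int)
    bits_hat[0::2] = (rx.real < 0).astype(int)
    bits_hat[1::2] = (rx.imag < 0).astype(int)
    return float(np.mean(bits_hat != bits))

for ebno in (0, 4, 8):
    print(f"Eb/N0 = {ebno} dB -> BER ~ {qpsk_awgn_ber(ebno):.2e}")
```

In a GPU-accelerated library, the same sweep over Eb/N0 points and far more elaborate transceiver chains run as batched tensor operations, which is what makes rapid prototyping at scale practical.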
In the 6G Developer Spotlight Session 1 and 6G Developer Spotlight Session 2, Softbank, Samsung, University of Oulu, Northeastern University (NEU), and ETH Zurich showcased how they are working with the NVIDIA AI and radio frameworks plus ARC-OTA to accelerate their 6G research (Table 1). Most commonly, these research groups presented how they are using AI for the complex and challenging problem of channel estimation and how they are using ARC-OTA to close the “reality gap” between simulation and real-world OTA. Organization Projects NVIDIA tools used Samsung AI channel estimation; lab-to-field methodology; site-specific optimizations; close reality gap between sim and real-world OTA ARC-OTA, pyAerial, ADL, Sionna, SionnaRT, AODT in the future ETH NN PUSCH; deep-unfolding for iterative detector-decoder ARC-OTA, pyAerial, ADL, Sionna, SionnaRT, AODT in the future NEU x5G 8-node ARC-OTA testbed; RIC and real-time apps (dApps); deployment automation on OpenShift; O-RAN intelligent orchestrator ARC-OTA, pyAerial, ADL, Sionna, SionnaRT, AODT University of Oulu Sub-THz Sionna, ARC-OTA in the future Keysight Product development: deterministic channel modeling for 6G and AI-assisted channel modeling; physical digital twin for 6G applications AODT: GIS + mobility model + RT Softbank AI-for-RAN project: ML channel estimation and interpolation ARC-OTA, pyAerial Table 1. Summary of the spotlight presentations from various research groups using NVIDIA assets for 6G research What’s next? The 6G Developer Day is one of the channels to engage with the 6G research and development community and will become a feature of the NVIDIA event calendar. Check out the NVIDIA 6G Developer Day playlist to view all the sessions presented at the event on demand. Reference the NVIDIA Aerial FAQ based on the event Q&A for more information. To engage and connect with 6G researchers, join the NVIDIA 6G Developer Program.
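Much of the research summarized in Table 1 centers on learned channel estimation. As a loose illustration of that idea (not any group's actual model; the pilot layout, channel statistics, and network architecture here are assumptions), the PyTorch sketch below trains a small 1D CNN to denoise noisy least-squares pilot estimates:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n_pilots, batch, snr_lin = 64, 256, 10.0

def make_batch():
    # Synthetic "true" channel: smooth across pilot positions (low-pass filtered noise).
    h = torch.randn(batch, 2, n_pilots)  # real/imag stacked as two channels
    h = F.avg_pool1d(h, kernel_size=5, stride=1, padding=2)
    # Noisy least-squares estimate observed at the pilots.
    h_ls = h + torch.randn_like(h) / snr_lin ** 0.5
    return h_ls, h

# Small 1D CNN mapping noisy LS estimates to denoised channel estimates.
model = nn.Sequential(
    nn.Conv1d(2, 32, 5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 32, 5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 2, 5, padding=2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):
    h_ls, h = make_batch()
    loss = F.mse_loss(model(h_ls), h)
    opt.zero_grad(); loss.backward(); opt.step()

h_ls, h = make_batch()
with torch.no_grad():
    print(f"LS MSE: {F.mse_loss(h_ls, h).item():.4f} -> "
          f"learned estimator MSE: {F.mse_loss(model(h_ls), h).item():.4f}")
```

In a workflow like the one pyAerial supports, the synthetic channels above would be replaced with captured over-the-air data, for example samples collected with Aerial Data Lake, before the trained network is integrated into a real-time physical layer pipeline.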
https://developer.nvidia.com/zh-cn/blog/five-takeaways-from-nvidia-6g-developer-day-2024/
2024 年 NVIDIA 6G 开发者日的 5 大关键收获
2024 年 NVIDIA 6G 开发者日让 6G 研发社区成员共聚一堂 ,分享见解,并学习使用 NVIDIA 6G 研究工具的新方式。来自世界各地的 1,300 多名学术和行业研究人员参加了此次虚拟活动。会上,NVIDIA、ETH Zürich、Keysight、Northeastern University、Samsung、Softbank 和 University of Oulu 发表了演讲。本文将探讨本次活动的五个要点。 1、6G 将采用 AI 原生技术并实施 AI-RAN 预计 6G 将驾 AI 浪潮,为消费者和企业释放新的潜力,并改变电信基础设施。这是 NVIDIA 高级副总裁 Ronnie Vasishta 发表的主题演讲 的关键信息。随着生成式 AI 和 AI 应用的采用率迅速增长,AI 增强型端点正在进行交互并在移动中做出决策,从而在电信网络上产生大量语音、视频、数据和 AI 流量。 AI 流量由边缘的 AI 应用生成,需要不同级别的成本经济性、能源经济性、延迟、可靠性、安全性和数据主权,这为电信基础设施带来了新的机遇和挑战。这需要设计和构建原生 AI 的底层基础设施,利用原生 AI 功能并支持 AI 流量。 主题演讲揭示了 AI-native 6G 基础设施的战略和技术驱动力。电信公司希望通过提高频谱效率、吞吐量和容量,更大限度地提高基础设施效率。同样,电信公司寻求通过更好的货币化、引入新功能的灵活性以及对 RAN 流量增长和新服务的支持来更大限度地提高投资回报。 AI-RAN 通过 AI-With-RAN 和 AI-For-RAN 实施提供了实现 AI 原生基础设施优势的途径 (图 1)。AI-With-RAN 包含 AI-RAN 联盟所说的 AI-on-RAN 和 AI-and-RAN。它使电信公司能够从单一用途网络转向多用途网络,这些网络可以在软件定义、统一和加速的基础设施上动态运行 AI 和 RAN 工作负载。 借助 AI-For-RAN,RAN 特定的 AI 算法部署在同一基础架构上,以推动 RAN 的性能和改进。AI-RAN 将彻底改变电信行业,使电信公司能够挖掘新的收入来源,并通过生成式 AI、机器人和自动化工具提供增强的体验。 图 1、AI-RAN 是 AI 原生 6G 的技术框架 2. AI-RAN 为物理世界的三个计算机问题建模 AI-RAN 是构建 AI 原生 6G 的技术框架,也是一个很好的模型,可以说明 AI 是物理世界开发、仿真和部署解决方案不可或缺的一部分。这与经典的三个计算机问题一致: 使用海量数据创建 AI 模型 通过大规模模拟测试和改进网络行为,尤其是针对特定站点的数据 部署和运营实时网络 对于 6G,这意味着创建和开发适用于 6G 的 AI 模型;仿真、建模和改进 AI 原生 6G;以及部署和运行 AI 原生 6G。 在关于使用 NVIDIA AI Aerial 平台发明 6G 的会议上,NVIDIA AI Aerial 作为实施 AI-RAN 的平台,并包含解决三个计算机问题的三个组件: 用于开发和训练 6G 算法的 NVIDIA Aerial AI 无线电框架 。这适用于 NVIDIA DGX 等数据中心级计算平台 。 NVIDIA Aerial Omniverse 数字孪生 (AODT) 具有准确的物理属性和高保真度,适用于大规模城市仿真。它用于仿真和微调 6G 算法,并部署在 NVIDIA OVX 系统上 。 NVIDIA Aerial CUDA Accelerated RAN 是电信级 GPU 加速的软件定义 vRAN , 部署在 Aerial RAN Computer-1 平台上 。 NVIDIA AI Aerial 为算法开发、系统级集成和基准测试以及生产级集成和基准测试提供了一套工具。这些工具专为 5G 推出,涵盖 5G 先进技术,引领 6G 发展。 图 2、NVIDIA AI Aerial 解决了提供 6G 的三个计算机问题 3. 基于 GPU 的加速计算非常适合部署 6G 6G AI-RAN 将延续从 5G 到在 COTS 基础设施上运行软件定义的高性能 RAN 的趋势。它还将完全采用 AI 原生,基于 O-RAN,具有硬件/软件分解和多用途的性质,以支持 AI 和 RAN 工作负载。有了这些,我们越来越清楚,部署 6G AI-RAN 的计算机将需要新的方法来实现电信公司的新机遇并应对 vRAN 基带挑战。 在满足这些行业需求的所有竞争解决方案中,GPU 加速成为部署 6G 的最佳计算机平台的主要原因有三个,这在关于低延迟 RAN 计算的 CUDA/GPU 系统的会议中有所解释 。具体来说,GPU: 提供极高的吞吐量,以处理繁忙的交通。之所以能够做到这一点,是因为得益于并行计算,GPU 擅长同时管理多个数据流、更好地利用多个物理资源块 (PRBs) 并处理波束成形等任务的复杂算法。 高效运行低延迟的实时关键工作负载。GPU SIMT(单指令多线程)架构针对线性代数运算进行了优化。借助额外的 CUDA 功能,这使得软件定义的数字信号处理计算机器能够高效运行物理层工作负载。 凭借其成熟的 AI 工作负载适应性,非常适合用于 AI 和 RAN 的多用途平台。这使得 AI-RAN 在提供盈利性 AI-RAN 方面表现显著,并提供了一个可持续提供 Gbps/瓦能效的平台。 有关 2024 年 11 月 Softbank 公告的更多详细信息,请参阅 AI-RAN Goes Live 和 Unlocks a New AI Opportunity for Telcos。 在 Aerial RAN Computer-1 上运行的 NVIDIA Aerial CUDA Accelerated RAN 为 6G AI-RAN 提供了基于 GPU 的高性能和可扩展解决方案。它包含一组经过优化的软件定义 RAN 库 (cuPHY、cuMAC、pyAerial),可在多个 GPU 加速计算配置上运行。 图 3 显示了跨 CPU、GPU 和 NIC 子系统将此类系统从低密度扩展到中密度和高密度配置所需的不同策略。它重点介绍了以下策略:最大限度地减少数据移动、同步和执行开销;最大限度地提高并发和异步;用于优化、流水线、优先级和 QoS 资源分区的工具。 图 3、扩展低延迟系统​ for CUDA/GPU for 低延迟 RAN 计算 4. 数字孪生将成为 6G AI-RAN 不可或缺的一部分 在 2024 年 NVIDIA GTC 大会上,NVIDIA 首席执行官 Jensen Huang 先生在主题演讲中表示:“我们相信,所有制造业都将拥有数字孪生。”这在许多工业领域正日益成为现实。在电信领域,6G 将是率先作为数字孪生创建和模拟的蜂窝技术。这将在 6G RAN 产品的设计、部署和运营阶段之间形成一个连续统一体。 关于 构建 RAN 数字孪生 的会议探讨了如何使用 NVIDIA Aerial Omniverse Digital Twin (AODT) 构建 RAN 数字孪生,以及这如何为 6G 实现新的 AI 技术和算法 (图 4) 。AODT 是面向 5G/6G 的新一代系统级仿真平台。 它基于 NVIDIA Omniverse 平台的丰富内容,并从中受益。此外, 在 6G 开发者聚焦会议 1 中 ,Keysight 展示了他们如何将 AODT 用作其 RF Raytracing Digital Twin Solution 的一部分。 两场会议都展示了如何结合物理世界 (包括 5G/6G RAN、用户设备和射频信号) 和数字孪生世界 (包括电磁引擎、用户设备的移动模型、地理空间数据、天线数据和信道仿真器) 的各个方面来创建和模拟 RAN 数字孪生。如此一来,RAN 数字孪生便成为了一款工具,用于在真实环境中对系统性能进行基准测试,并探索基于机器学习的无线通信算法。 图 4、通过 NVIDIA Aerial Omniverse 数字孪生创建场景 5. 
该行业需要为 6G AI-RAN 提供 AI 训练平台 随着 AI 成为 6G 设计和开发不可或缺的一部分,该行业对训练平台和测试平台的需求变得尤为重要。这样的平台和测试平台提供了将 6G AI/ML 从仿真变为现实的机会。原生 AI 的持续研究领域包括波形学习、MAC 加速、站点特定优化、波束赋形、频谱感知和语义通信。 面向 6G 的 AI 和无线电框架 会议探讨了如何使用 AI 和无线电框架进行 6G 研发,以及 NVIDIA Aerial AI 无线电框架及其工具 pyAerial 、Aerial Data Lake 和 NVIDIA Sionna。 pyAerial 是一个物理层组件的 Python 库,可作为工作流的一部分,将设计从模拟变为实时操作。它为物理层管道中的神经网络集成提供端到端验证,并有助于弥合从 TensorFlow 和 PyTorch 中的训练和模拟到实时操作在一个无线测试平台之间的差距。 Aerial Data Lake 是一个数据捕获平台,支持从基于 Aerial CUDA 加速 RAN 的 vRAN 网络中捕获 OTA 射频 (RF) 数据。它由在基站分布式单元上运行的数据捕获应用、应用收集的样本数据库以及用于访问数据库的 API 组成。 Sionna 是一个 GPU 加速的开源库,用于链路级模拟。它支持复杂通信系统架构的快速原型设计,并为在 6G 信号处理中集成机器学习提供原生支持。 关于 设置 6G 研究测试平台 的会议探讨了如何设置 6G 研究测试平台,以加速创新、推动标准化,并利用 NVIDIA Aerial RAN CoLab (ARC-OTA) 提供真实的测试和性能基准测试。 在 6G 开发者聚焦会议 1 和 6G 开发者聚焦会议 2 中,Softbank、Samsung、University of Oulu、Northeastern University (NEU) 和 ETH Zurich 展示了他们如何使用 NVIDIA AI 和 radio 框架以及 ARC-OTA 来加速其 6G 研究 (表 1)。最常见的情况是,这些研究小组介绍了他们如何使用 AI 来解决复杂且具有挑战性的信道估计问题,以及他们如何使用 ARC-OTA 来缩小仿真与现实 OTA 之间的“现实差距”。 组织* 项目 使用的 NVIDIA 工具* Samsung® AI 信道估计实验室到现场的方法特定站点的优化缩小模拟与现实 OTA 之间的现实差距 未来的 ARC-OTA、pyAerial、ADL、Sionna、SionnaRT、AODT ETH* NN PUSCH​Deep-unfolding for iterative detector-decoder 未来的 ARC-OTA、pyAerial、ADL、Sionna、SionnaRT、AODT 东北大学* x5G 8-node ARC-OTA testbed​RIC 和实时应用 (dApps)​Deployment automation on OpenShiftO-RAN intelligent orchestrator ARC-OTA、pyAerial、ADL、Sionna、SionnaRT、AODT 奥卢大学* Sub-THz* Sionna,未来的 ARC-OTA 是德科技 产品开发:适用于 6G 的确定性信道建模和 AI 辅助信道建模​​;适用于 6G 应用的物理数字孪生 AODT:GIS 移动模型 RT 软银 AI-for-RAN 项目:ML 信道估计和插值 ARC-OTA、pyAerial 表 1、各研究团队使用 NVIDIA 资产进行 6G 研究的聚焦演讲摘要 接下来该怎么做? 6G 开发者日是与 6G 研发社区互动的渠道之一,并将成为 NVIDIA 活动日历中的一项内容。查看 NVIDIA 6G 开发者日播放列表 ,点播观看活动中呈现的所有会话。有关更多信息,请参阅基于活动问答的 NVIDIA Aerial 常见问题解答。要与 6G 研究人员互动和交流,请加入 NVIDIA 6G 开发者计划 。
https://developer.nvidia.com/blog/nvidia-jetson-orin-nano-developer-kit-gets-a-super-boost/
NVIDIA Jetson Orin Nano Developer Kit Gets a “Super” Boost
The generative AI landscape is rapidly evolving, with new large language models (LLMs), visual language models (VLMs), and vision language action (VLA) models emerging daily. To stay at the forefront of this transformative era, developers need a platform powerful enough to seamlessly deploy the latest models from the cloud to the edge with optimized inferencing and open ML frameworks using CUDA. To support emerging applications for generative AI workloads in robotics and multimodal agents, NVIDIA is refreshing the NVIDIA Jetson Orin Nano Developer Kit through a software update with an unprecedented boost in performance of up to 1.7x at an amazing new price of $249. In recognition of the superb boost in performance and accessibility of advanced AI, the Jetson Orin Nano Developer Kit is being renamed as the NVIDIA Jetson Orin Nano Super Developer Kit. Jetson Orin Nano Developer Kit can be upgraded to Jetson Orin Nano Super Developer Kit with just a software update. In this post, learn about the new capabilities of the developer kit and how you can seamlessly tap into the boosted performance. The new Jetson Orin Nano Super Developer Kit The Jetson Orin Nano Super Developer Kit now offers the following improvements: 1.7x higher generative AI model performance. 67 Sparse TOPs, a significant increase from the previous 40 Sparse TOPs 102 GB/s of memory bandwidth, a significant leap from the previous 65 GB/s memory bandwidth. 1.7 GHz of CPU clock speed, up from 1.5 GHz. New reduced price of $249, down from $499. Video 1. Introducing NVIDIA Jetson Orin Nano Super: The World’s Most Affordable Generative AI Computer With the same hardware architecture, this performance boost is enabled by a new power mode which increases the GPU, memory, and CPU clocks. All previous Jetson Orin Nano Developer Kits can use the new power mode by upgrading to the latest version of JetPack . With this performance boost, the Jetson Orin Nano Super Developer Kit delivers up to 70% higher generative AI performance, enabling you to run an emerging class of newer Transformer-based models. This compact yet powerful system can effortlessly handle a wide range of LLMs, VLMs, and Vision Transformers (ViTs), from smaller models to those with up to 8B parameters, such as the Llama-3.1-8B model. Jetson supports the same ML frameworks and optimized inferencing infrastructure as other platforms, such as HuggingFace Transformers, Ollama, llama.cpp, vLLM, MLC, NVIDIA TensorRT-LLM, and more. This makes it an ideal choice for you to quickly adapt and deploy scalable solutions between the cloud, edge, and PC. Jetson Orin Nano Super Developer Kit configuration NVIDIA Jetson Orin Nano Developer Kit (original) NVIDIA Jetson Orin Nano Super Developer Kit GPU NVIDIA Ampere architecture 1,024 CUDA Cores 32 Tensor Cores 635 MHz NVIDIA Ampere architecture 1,024 CUDA Cores 32 Tensor Cores 1,020 MHz AI PERF 40 INT8 TOPS (Sparse) 20 INT8 TOPS (Dense) 10 FP16 TFLOPs 67 TOPS (Sparse) 33 TOPS (Dense) 17 FP16 TFLOPs CPU 6-core Arm Cortex-A78AE v8.2 64-bit CPU 1.5 GHz 6-core Arm Cortex-A78AE v8.2 64-bit CPU 1.7 GHz Memory 8GB 128-bit LPDDR5 68 GB/s 8GB 128-bit LPDDR5 102 GB/s MODULE POWER 7W | 15W 7W | 15W | 25W Table 1. 
Jetson Orin Nano Super Developer Kit configuration comparison Runs a wide range of LLMs, VLMs, and ViTs The NVIDIA Jetson Orin Nano Super Developer Kit offers game-changing performance if you want to bring generative AI capabilities to your embedded applications or experience cutting-edge generative AI models on an affordable computer. LLMs, VLMs, and vision transformers (ViTs) are transformative AI architectures driving innovation across domains. Foundation LLMs excel at generalized language processing and generating human-like text, enabling natural human/machine interfaces when coupled with streaming speech recognition and synthesis pipelines such as NVIDIA Riva. Having advanced rapidly to compete with larger models through techniques like pruning and distillation, small language models (SLMs) are optimized for edge use cases and are generally considered to be <7B parameters in size. Today, it’s common for open LLMs and SLMs to have been trained for agentic tool use and function calling with structured I/O, which connects LLM parsing and reasoning to real-world systems. Support for LLM fine-tuning and memory-efficient LoRAs has also improved across many ML frameworks, including NVIDIA NeMo, enabling well-understood recipes for the alignment and specialization of SLMs in application-specific domains. ViTs harness the power of Transformers for vision by tokenizing patches of pixels into embeddings. They have achieved state-of-the-art performance across fundamental tasks: Feature extraction Classification Detection Segmentation Pose estimation They also extend to higher-dimensional modalities such as 3D point clouds and video with NVIDIA Cosmos Tokenizers. ViTs have prompted creative new hybrid models that blend bespoke vision capabilities with open-vocabulary language support and dynamic runtime queries against a wide variety of subject matter and object classes, without needing additional fine-tuning. VLMs such as VILA fuse ViTs and LLMs across visual and textual modalities, empowering models to understand and generate detailed scene descriptions, detect objects of interest, and extract text with OCR. They can apply multimodal reasoning in response to user queries about images or video sequences. Having undergone minification efforts similar to SLMs, VLMs are increasingly useful for edge applications when applied to the remote monitoring of camera streams with flexible event-based alerts and summarization, such as in the reference VLM Workflow in Jetson Platform Services. Together, these technologies underpin advances in generative AI, multimodal systems, and their real-world applications. The new NVIDIA Jetson Orin Nano Super delivers substantial performance increases on a wide variety of LLMs, SLMs, ViTs, and VLMs and significantly lowers the barrier to entry for gaining hands-on experience with the latest Transformer models as a gateway to physical AI. We benchmarked some popular LLMs, VLMs, and ViTs below to showcase the speedup the Jetson Orin Nano Super Developer Kit provides over its predecessor. Figure 1. LLM performance boost on Jetson Orin Nano Super Developer Kit Model Jetson Orin Nano (original) Jetson Orin Nano Super Perf Gain (X) Llama 3.1 8B 14 19.14 1.37 Llama 3.2 3B 27.7 43.07 1.55 Qwen2.5 7B 14.2 21.75 1.53 Gemma 2 2B 21.5 34.97 1.63 Gemma 2 9B 7.2 9.21 1.28 Phi 3.5 3B 24.7 38.1 1.54 SmolLM2 41 64.5 1.57 Table 2.
Benchmark performance in tokens\sec for popular LLMs * LLM generation performance (tokens per second) with INT4 quantization using MLC API. Figure 2. VLM performance boost on NVIDIA Jetson Orin Nano Super Developer Kit Model Jetson Orin Nano (original) Jetson Orin Nano Super Perf Gain (X) VILA 1.5 3B 0.7 1.06 1.51 VILA 1.5 8B 0.574 0.83 1.45 LLAVA 1.6 7B 0.412 0.57 1.38 Qwen2 VL 2B 2.8 4.4 1.57 InternVL2.5 4B 2.5 5.1 2.04 PaliGemma2 3B 13.7 21.6 1.58 SmolVLM 2B 8.1 12.9 1.59 Table 3. Benchmark performance in tokens\sec for popular VLMs *All VILA and LLAVA models were run with INT4 precision using MLC while the rest of the models were run in FP4 precision with Hugging Face Transformers. Figure 3. ViTs performance boost on NVIDIA Jetson Orin Nano Super Developer Kit Model Jetson Orin Nano (original) Jetson Orin Nano Super Perf Gain (X) clip-vit-base-patch32 196 314 1.60 clip-vit-base-patch16 95 161 1.69 DINOv2-base-patch14 75 126 1.68 SAM2 base 4.42 6.34 1.43 Grounding DINO 4.11 6.23 1.52 vit-base-patch16-224 98 158 1.61 vit-base-patch32-224 171 273 1.60 Table 4. Benchmark performance in frames\sec for popular Vision Transformers *All ViT models were run with FP16 precision using NVIDIA TensorRT (measurements are in FPS). Getting started with Jetson Orin Nano Super Developer Kit To enjoy the boosted performance, download the SD Card image from the JetPack SDK page and follow the Getting Started Guide . NVIDIA released a SD card image based on JetPack 6.1 with support for boosted performance. You can also install JetPack 6.1 with boosted performance for Jetson Orin Nano Developer Kit using SDK Manager. Make sure that you have updated the SDK Manager and select JetPack 6.1 (rev. 1) while installing. When you’re up and running with JetPack, change the power mode using the following command to unlock the super performance. Mode 2 is MAXN mode, which brings the super performance. sudo nvpmodel -m 2 You can also change the power mode using the Power Mode Selector on the right side of the Ubuntu desktop’s top bar. Figure 4. Power mode selection tool Experience generative AI on Jetson Orin Nano Super Developer Kit NVIDIA offers a range of tutorials and pre-built containers in the Jetson AI Lab for exploring generative AI on the Jetson Orin Nano Developer Kit. If you’re interested in robotics, be sure to explore the LeRobot tutorial. For those looking to create a generative AI chatbot, there’s a dedicated tutorial for building one. Hugging Face LeRobot NVIDIA has partnered with Hugging Face to accelerate robotic research on the LeRobot open AI platform. You can run HuggingFace LeRobot on Jetson Orin Nano Super Developer Kit , which runs generative AI models for predicting actions for a particular task from visual inputs and prior trajectories. Figure 5. Hugging Face LeRobot running on Jetson Orin Nano Developer Kit Generative AI–driven chatbot with Ollama Run a generative AI chatbot on Jetson Orin Nano Super Developer Kit. This chatbot features Ollama with Open WebUI , a widely used, open-source, chatbot server interface that connects to locally running LLMs. It uses retrieval-augmented generation (RAG) to further enhance the user experience and capabilities. Figure 6. Generative AI chatbot running on Jetson Orin Nano Super Developer Kit Advancing generative AI with Jetson AI Lab The Jetson AI Lab is the hub for discovering and experimenting with the latest generative AI technologies optimized for edge devices. 
By fostering an openly collaborative and community-driven environment, NVIDIA works alongside developers and partners to advance open-source edge AI and robot learning. With our comprehensive support for popular machine learning frameworks and optimized inferencing microservices on Jetson devices, you can quickly build and deploy the latest research innovations and models on your Jetson computers to keep up with the rapid pace of innovation. Figure 7 shows examples of the NanoOWL open-vocabulary real-time object detection ViT. Figure 7. Real-time object detection vision example Figure 8 shows the NanoDB multimodal vector database with interactive txt2img and img2img similarity search. Figure 8. Image similarity search examples Jetson AI Lab offers almost 50 easy-to-follow tutorials and prebuilt containers that help developers of any experience level quickly get started with on-device LLMs, SLMs, and multimodal VLMs, along with variants for VLAs, diffusion policies, and speech models. They’re all deployed locally with the same optimized inferencing infrastructure used in the cloud. The collaborative, community-driven tutorials and resources of the Jetson AI Lab significantly reduce the barriers to entry for deploying advanced generative AI to the edge. All Jetson Orin Nano series and Jetson Orin NX series modules get a super performance boost NVIDIA is also refreshing the performance across the Jetson Orin Nano series and Jetson Orin NX series. From small AI cameras to large autonomous machines, every edge device needs the compute capability to run generative AI models. With the same module hardware, you can take advantage of up to 1.7x higher performance across the different modules. Orin Nano series: Improvements to the frequencies and performance of the GPU, CPU, and memory. Orin NX series: Improvements across the GPU and DLA. Table 5. Jetson Orin Nano Super and Orin NX Super module specifications Support and documentation for boosted performance for Jetson Orin Nano and Orin NX production modules will be released in the first half of January with JetPack 6.1.1. Due to the growing customer demand for Orin, NVIDIA also recently announced the extension of the product lifecycle of Jetson Orin through 2032. With this super performance boost, the Orin Nano series and Orin NX series are the ideal platforms for both current and future models. Jumpstart your generative AI development today The Jetson Orin Nano Super Developer Kit is your ultimate platform for leading the way in generative AI development for edge computing. Now is the perfect time to get started and join the vibrant and diverse community of advanced developers and researchers working together with NVIDIA to address real-world challenges with physical AI. For existing Jetson Orin Nano Developer Kit users, upgrade your JetPack SDK to unlock boosted performance today. Stay up to date by subscribing to our newsletter, and follow NVIDIA Robotics on LinkedIn, Instagram, X, and Facebook. For more information, explore our documentation or join the Robotics community on our developer forums, Discord, and YouTube channels.
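As a concrete complement to the benchmark and chatbot sections above, the hypothetical sketch below queries a locally running Ollama server and derives a rough tokens-per-second figure. It assumes Ollama's /api/generate endpoint on the default port and that the named model has already been pulled; treat the number as indicative rather than a calibrated benchmark.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint (assumes the service is running)
payload = {
    "model": "llama3.2",  # assumption: pulled beforehand with `ollama pull llama3.2`
    "prompt": "Explain edge AI in two sentences.",
    "stream": False,
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

print(result["response"])
# Ollama reports the generated token count and generation time (in nanoseconds),
# which gives a rough local-throughput estimate.
tokens = result.get("eval_count", 0)
seconds = result.get("eval_duration", 0) / 1e9
if seconds > 0:
    print(f"~{tokens / seconds:.1f} tokens/sec")
```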
https://developer.nvidia.com/zh-cn/blog/nvidia-jetson-orin-nano-developer-kit-gets-a-super-boost/
NVIDIA Jetson Orin Nano 开发者套件得到“超级”提升
生成式 AI 格局正在迅速发展,新的 大语言模型 (LLMs) 、视觉语言模型 (VLMs) 和视觉语言动作 (VLAs) 模型每天都在涌现。为了在这个变革时代站在前沿,开发者需要一个足够强大的平台,通过使用 CUDA 的优化推理和开放 ML 框架,将最新模型从云端无缝部署到边缘。 为了支持机器人和多模态智能体中生成式 AI 工作负载的新兴应用,NVIDIA 正在通过软件更新来更新 NVIDIA Jetson Orin Nano 开发者套件 ,以 249 美元的惊人新价格将性能提升高达 1.7 倍。为表彰 Jetson Orin Nano 开发者套件在性能和可访问性方面的出色提升,我们将 Jetson Orin Nano 开发者套件更名为 NVIDIA Jetson Orin Nano Super Developer Kit。 只需更新软件,即可将 Jetson Orin Nano 开发者套件升级到 Jetson Orin Nano 超级开发者套件。 在本文中,了解开发者套件的新功能,以及如何无缝利用提升的性能。 全新 Jetson Orin Nano 超级开发者套件 Jetson Orin Nano 超级开发者套件 现在提供以下改进: 生成式 AI 模型性能提升 1.7 倍。 67 Sparse TOPs 的数量显著增加,较之前的 40 Sparse TOPs 有了明显提升 102 GB/s 的显存带宽,与之前的 65 GB/s 显存带宽相比,实现了重大飞跃。 CPU 主频为 1.7 GHz,高于 1.5 GHz。 新降价 249 美元,低于 499 美元。 视频 1、隆重推出 NVIDIA Jetson Orin Nano Super:世界上最具经济实惠的生成式 AI 计算机 在相同的硬件架构下,这种性能提升是通过增加 GPU、内存和 CPU 时钟的新电源模式实现的。之前的所有 Jetson Orin Nano 开发者套件都可以通过升级到最新版本的 JetPack 来使用新的功率模式。 借助这种性能提升,Jetson Orin Nano 超级开发者套件可提供高达 70% 的生成式 AI 性能,使您能够运行新兴的基于 Transformer 的新型模型。从较小的模型到具有多达 8B 参数的模型 (例如 Llama-3.1-8B 模型),这款紧凑而强大的系统可以轻松处理各种 LLM、VLM 和 Vision Transformers (ViTs)。 Jetson 支持与其他平台相同的 ML 框架和优化的推理基础设施,例如 Hugging Face Transformers、Llama.cpp、vLLM、MLC、NVIDIA TensorRT-LLM 等。这使其成为在云、边缘和 PC 之间快速调整和部署可扩展解决方案的理想选择。 JETSON Orin Nano Super Developer Kit 配置 NVIDIA Jetson Orin Nano 开发者套件 (原件) NVIDIA Jetson Orin Nano 超级开发者套件 GPU NVIDIA Ampere 架构,1,024 个 CUDA Cores,32 个 Tensor Cores,635 MHz NVIDIA Ampere 架构,1,024 个 CUDA Cores,32 个 Tensor Cores, 1,020 MHz AI 性能 40 INT8 TOPS (Sparse) 20 INT8 TOPS (Dense) 10 FP16 TFLOPs 67 TOPS (稀疏) 33 TOPS (密集) 17 FP16 TFLOPS CPU 6 核 Arm Cortex-A78AE v8.2 64 位 CPU 1.5 GHz 6 核 Arm Cortex-A78AE v8.2 64 位 CPU 1.7 GHz 显存 8GB 128 位 LPDDR5 68 GB/秒 8GB 128 位 LPDDR5 102 GB/秒 模组功率 7 瓦|15 瓦 7 瓦|15 瓦| 25 瓦 表 1、Jetson Orin Nano 超级开发者套件配置对比 可运行各种 LLM、VLM 和 ViT 如果您想将生成式 AI 功能引入嵌入式应用程序或在经济实惠的计算机上体验先进的生成式 AI 模型,NVIDIA Jetson Orin Nano 超级开发者套件可提供颠覆性性能。 LLMs、VLMs 和视觉 Transformers (ViTs) 是推动跨领域创新的变革性 AI 架构。Foundation LLMs 擅长通用语言处理和生成类似人类的文本,在与流式语音识别和合成流程 (例如 NVIDIA Riva ) 结合使用时,可实现自然的人类/机器界面。 小语言模型 (SLMs) 通过 剪枝和蒸馏 等技术取得了快速发展,可以与更大的模型竞争,并针对边缘用例进行了优化,一般来说,参数大小在 7 亿以下。 如今,开放式 LLM 和 SLM 已通过结构化 I/O (将 LLM 解析和推理连接到真实系统) 接受代理工具使用和函数调用训练,这已成为常态。 许多 ML 框架 (包括 NVIDIA NeMo ) 也改进了对 LLM 微调和内存高效型 LoRAs 的支持,实现了在应用特定领域中对 SLM 进行对齐和专门化的易于理解的方法。ViTs 通过将像素片标记为嵌入,利用 Transformers 的强大功能实现视觉效果。它们在以下基本任务中实现了出色的性能: 特征提取 分类 检测 分割 姿态估计 它们还扩展到更高维的模式,例如 3D 点云和采用 NVIDIA Cosmos Tokenizers 的视频。ViTs 提示创意新型混合模型,该模型将定制视觉功能、开放词汇语言支持以及针对各种主题和对象类别的动态运行时查询融合在一起,无需额外微调。VLMs,例如 VILA ,融合 ViTs 和 LLMs 跨视觉和文本模式,使模型能够理解和生成详细的场景描述、检测感兴趣的物体并使用 OCR 提取文本。它们可以应用多模态推理来回应用户对图像或视频序列的查询。 在经历了与 SLMs 类似的精简工作后,VLMs 在应用于摄像头流的远程监控时(例如在 Jetson Platform Services 中的 参考 VLM Workflow 中)具有灵活的基于事件的警报和汇总功能,因此对于边缘应用的用途越来越大。 这些技术共同支撑着生成式 AI 和多模态系统以及现实世界应用的进步。新的 NVIDIA Jetson Orin Nano Super 可显著提升各种 LLM、SLM、ViT 和 VLM 的性能,并显著降低入门门槛,以便获得使用最新的 Transformer 模型作为物理 AI 门户的实践经验。我们在下方对一些热门的 LLM、VLM 和 ViT 进行了基准测试,并展示了 Jetson Orin Nano Super 开发者套件相较于前代产品所提供的加速。 图 1、Jetson Orin Nano 超级开发者套件的 LLM 性能提升 模型 Jetson Orin Nano (原件) Jetson Orin Nano Super 性能增益 (X) Llama 3.1 8B 14 19.14 1.37 Llama 3.2 3B 27.7 43.07 1.55 Qwen2.5 7B 14.2 21.75 1.53 Gemma 2 2B 21.5% 34.97 1.63 Gemma 2 9B 7.2 9.21 1.28 Phi 3.5 3B 24.7 38.1 1.54 SmolLM2 41 64.5% 1.57 表 2、使用 MLC API 进行 INT4 量化,对热门 LLM 生成性能(每秒令牌数)进行基准测试 。 图 2、NVIDIA Jetson Orin Nano 超级开发者套件的 VLM 性能提升 模型 Jetson Orin Nano (原图) Jetson Orin Nano Super 性能增益 (X) VILA 1.5 3B 0.7 1.06 1.51 VILA 1.5 8B 0.574 0.83 1.45 LLAVA 1.6 7B 0.412 0.57 1.38 Qwen2 VL 2B 2.8 4.4 1.57 InterVL2.5 4B 2.5 5.1 2.04 PaliGemma2 3B 13.7 21.6% 1.58 SmolVLM 2B 8.1 12.9 1.59 表 3、对热门 VLM 进行性能基准测试 (令牌/秒) 所有 VILA 和 
LLAVA 模型均使用 MLC 以 INT4 精度运行,而其余模型则使用 Hugging Face Transformers 以 FP4 精度运行。 图 3、NVIDIA Jetson Orin Nano 超级开发者套件上的 ViTs 性能提升 模型 Jetson Orin Nano (原图) Jetson Orin Nano Super 性能增益 (X) clip-vit-base-patch32 196 314 1.60 clip-vit-base-patch16 95% 163 1.69 DINOv2-base-patch-14 75 126 1.68 SAM2 base 4.42 6.34 1.43 Grounding DINO 4.11 6.23 1.52 vit-base-patch16-224 98 158 1.61 vit-base-patch32-224 174 273 1.60 表 4、主流 Vision Transformer 的基准性能 (以帧 秒为单位) 所有 ViT 模型均使用 NVIDIA TensorRT 以 FP16 精度运行 (测量以 FPS 为单位)。 Jetson Orin Nano Super Developer Kit 入门 要享受提升的性能,请从 JetPack SDK 页面 下载 SD 卡镜像,并按照 Getting Started Guide 操作 。 NVIDIA 基于 JetPack 6.1 发布了 SD 卡镜像,并支持提升性能。您还可以使用 SDK Manager 安装 JetPack 6.1,并提升 Jetson Orin Nano Developer Kit 的性能。请确保您已更新 SDK Manager 并选择 JetPack 6.1 (rev. 1) 安装时。 启动并运行 JetPack 后,请使用以下命令更改电源模式,以解锁超强性能。模式 2 是 MAXN 模式,可带来超强性能。 sudo nvpmodel -m 2 您还可以使用 Ubuntu 桌面顶部栏右侧的 Power Mode Selector 更改电源模式。 图 4、电源模式选择工具 在 Jetson Orin Nano 超级开发者套件上体验生成式 AI NVIDIA 在 Jetson AI 实验室中提供一系列教程和预构建容器,用于探索 Jetson Orin Nano 开发者套件中的生成式 AI。如果您对机器人感兴趣,请务必查看 LeRobot 教程。对于希望创建生成式 AI 聊天机器人的用户,我们提供了一个专门的教程来构建生成式 AI 聊天机器人。 Hugging Face 乐机器人 NVIDIA 与 Hugging Face 合作 ,在 LeRobot 开放式 AI 平台上加速机器人研究。您可以在 Jetson Orin Nano 超级开发者套件上运行 HuggingFace LeRobot,该套件运行生成式 AI 模型,用于根据视觉输入和先前的轨迹预测特定任务的动作。 图 5、Hugging Face LeRobot 运行在 Jetson Orin Nano 开发者套件上 OLlama 的生成式 AI 驱动聊天机器人 在 Jetson Orin Nano 超级开发者套件上运行生成式 AI 聊天机器人。此聊天机器人采用 Open WebUI 的 Ollama ,以 Open WebUI 的形式呈现,一个广泛使用的开源聊天机器人服务器接口,可连接到本地运行的 LLM。它使用检索增强生成 (RAG) 进一步增强用户体验和功能。 图 6、在 Jetson Orin Nano 超级开发者套件上运行的生成式 AI 聊天机器人 借助 Jetson AI 实验室推进生成式 AI 发展 Jetson AI Lab 是发现和试验针对边缘设备优化的最新生成式 AI 技术的中心。 通过营造公开协作和社区驱动的环境,NVIDIA 与开发者和合作伙伴携手推进开源边缘 AI 和机器人学习。我们全面支持 Jetson 设备上的热门机器学习框架和优化的推理微服务,因此您可以在 Jetson 计算机上快速构建和部署最新的研究创新和模型,以跟上创新的步伐。 图 7 显示了 NanoOWL 开放词汇实时物体检测 ViT 示例。 图 7. 实时物体检测视觉示例 图 8 显示了包含交互式 txt2img 和 img2img 相似性搜索的 NanoDB 多模态向量数据库。 图 8、图像相似度搜索示例 Jetson AI Lab 提供近 50 个易于学习的教程和预构建的容器,可帮助任何经验水平的开发者快速开始使用设备上的 LLM、SLM 和多模态 VLM,以及 VLA、扩散策略和语音模型的变体。它们均在本地部署,并与云端使用的优化推理基础架构相同。 Jetson AI Lab 的协作式社区驱动教程和资源 可显著降低在边缘部署高级生成式 AI 的门槛。 所有 Jetson Orin Nano 系列和 Jetson Orin NX 系列模组均可实现超强性能提升 NVIDIA 还更新了 Jetson Orin Nano 系列和 Jetson Orin NX 系列的性能。从小型 AI 摄像头到大型自主机器,每台边缘设备都需要具备运行生成式 AI 模型所需的计算能力。 借助相同的模组硬件,您可以充分利用不同模组性能提升高达 1.7 倍的优势。 Orin Nano 系列: 改进 GPU、CPU 和内存的频率和性能。 Orin NX series: 跨 GPU 和 DLA 的改进。 表 5、Jetson Orin Nano Super 和 Orin NX Super 模组规格 有关提升 Jetson Orin Nano 和 Orin NX 生产模组性能的支持和文档将于 1 月上半月发布,并支持 JetPack 6.1.1。 鉴于客户对 Orin 的需求日益增长,NVIDIA 最近还宣布 将 Jetson Orin 的产品生命周期延长至 2032 年。凭借这种超强性能提升,Orin Nano 系列和 Orin NX 系列成为当前和未来模型的理想平台。 立即启动您的生成式 AI 开发者 Jetson Orin Nano 超级开发者套件 是引领边缘计算生成式 AI 开发的终极平台。现在正是时候,您可以加入充满活力且多元化的高级开发者和研究人员社区,与 NVIDIA 携手合作,利用物理 AI 应对现实世界的挑战。 对于现有的 Jetson Orin Nano 开发者套件用户,请立即升级 JetPack SDK 以解锁性能提升。 订阅我们的 时事通讯 ,及时了解相关信息, 并在 LinkedIn、Instagram、X 和 Facebook 上关注 NVIDIA Robotics 。 如需了解更多信息 , 请浏览我们的文档 ,或通过我们的 开发者论坛 、Discord 和 YouTube 频道加入 NVIDIA Robotics 社区 。
https://developer.nvidia.com/blog/nvidia-jetpack-6-1-boosts-performance-and-security-through-camera-stack-optimizations-and-introduction-of-firmware-tpm/
NVIDIA JetPack 6.1 Boosts Performance and Security through Camera Stack Optimizations and Introduction of Firmware TPM
NVIDIA JetPack has continuously evolved to offer cutting-edge software tailored to the growing needs of edge AI and robotic developers . With each release, JetPack has enhanced its performance, introduced new features, and optimized existing tools to deliver increased value to its users. This means that your existing Jetson Orin -based products experience performance optimizations by upgrading to the latest version of JetPack. JetPack 6, released in Q3 of 2023, marked a significant milestone in this journey. It introduced powerful features like Bring Your Own Kernel , providing flexibility to bring any LTS Linux Kernel, and expanded support for a wider range of Linux distributions, providing greater flexibility for different development environments. Its modular design also enabled the ability to easily upgrade to the latest JetPack compute stack without upgrading Jetson Linux . Along with these advances, the performance improvements, including enhanced hardware acceleration, ensure that Jetson platforms operate at peak efficiency. JetPack 6.1 features In July 2024, NVIDIA released NVIDIA JetPack 6.1 , furthering the commitment to continuous improvements in the JetPack stack. One of the key improvements is the enhanced Argus camera stack, which significantly reduces CPU usage by up to 40% for camera applications. Another new feature is the introduction of firmware-based Trusted Platform Module (fTPM) support, offering edge-to-cloud security with strong device attestation and secure storage to store certificates and keys, all without the need for additional discrete TPM hardware. Other highlights include updates to the AI compute stack, over-the-air (OTA) support with secure boot, and improvements in multimedia and networking. This post discusses the improved camera stack and the new fTPM feature. Figure 1. New features with JetPack 6.1 Enhanced camera stack : An enhanced Argus library reduces CPU usage by up to 40% for cameras using the Argus library, by switching to a more efficient Tegra hardware–based sync method. Enhanced security : JetPack 6.1 introduces Firmware-based Trusted Platform Module (fTPM) support, enhancing security through secure boot, cryptographic key management, and platform integrity without extra hardware, thus reducing costs and space requirements. Upgraded stack : JetPack 6.1 packs the latest and upgraded compute stack. This compute stack can also be used with JetPack 6.0 BSP. CUDA 12.6 TensorRT 10.3 cuDNN 9.3 VPI 3.2 DLA 3.1 DLFW 24.0 OTA improvements : Image-based OTA support for NVIDIA Jetson devices with UEFI Secure Boot enabled. Enhanced camera stack The JetPack camera stack provides a complete software and driver solution for robust camera support on NVIDIA Jetson platforms. It integrates low-level drivers, multimedia APIs, and advanced image-processing tools to enable efficient video capture, processing, and streaming. Supporting various camera interfaces like MIPI CSI, USB, and Ethernet, the stack leverages NVIDIA hardware accelerators for tasks such as scaling, de-noising, and encoding. The Argus Camera API , built on the NVIDIA Camera Core, offers an accessible interface, enabling advanced image capture, sensor control, and video processing while ensuring high performance and customization. This makes it ideal for computer vision and AI applications. JetPack 6.1 introduces a more efficient implementation of the Argus Camera API, reducing CPU usage by up to 40% for CSI camera capture through Argus. 
This optimization significantly frees up processing power, enabling additional tasks and improving the overall performance of applications using CSI cameras. Figure 2 shows the CPU performance improvement with the default Argus API implementation in JetPack 6.0 compared to the enhanced Argus API implementation in JetPack 6.1 for 12 CSI cameras used simultaneously. AR0234 and dual-IMX274–based cameras were used at 1920x1200p, 30 fps for verification. Figure 2. Camera stack in JetPack 6.1 CPU utilization The enhanced camera stack also contains multiple stability and image quality improvements. Introducing firmware-based Trusted Platform Module JetPack has consistently enhanced security offerings by integrating advanced features with each new release, ensuring robust protection across the NVIDIA Jetson platform. Foundational security features help provide a robust and comprehensive security framework for Jetson-based applications, protecting against a wide range of threats and ensuring the confidentiality, integrity, and protection of sensitive data. These features include the following: Hardware root of trust (RoT) Secure boot Trusted execution environment Disk encryption Memory encryption Rollback protection Protection from physical attacks JetPack 6.1 enhances the security capabilities of the JetPack platform by introducing firmware-based Trusted Platform Module (fTPM) , a software-based implementation of the TPM standard . fTPM delivers multiple advantages, such as secure key management without the need for extra hardware, simplifying system design and reducing costs. It supports trusted boot and remote attestation, ensuring device integrity and defense against tampering. By securing cryptographic operations, fTPM strengthens data protection and enables secure communication. For Jetson users, fTPM offers the following key benefits: Stronger security: The fTPM feature significantly strengthens the security of cryptographic operations by creating a dedicated, isolated environment within the trusted execution environment (TEE). This ensures that sensitive processes, such as key generation and encryption, are safeguarded from external tampering or malicious software attacks. Data protection: It securely stores critical information like cryptographic keys, digital certificates, and passwords, preventing unauthorized access. This ensures that sensitive data remains confidential and is only accessible to authorized components. Attestation: fTPM provides cryptographic proofs, enabling the system to demonstrate the integrity and security of both the hardware and software to external entities. This attestation capability ensures that the system’s current state can be verified and trusted by third parties, such as remote servers or other devices. Using fTPM, Jetson Orin devices now have enhanced protection against unauthorized access and tampering, making them more secure for AI, robotics, and edge computing applications. For more information, see Jetson firmware-based TPM . Distribution partnerships NVIDIA Linux distribution partners have also made significant progress in supporting their offerings on Jetson. Here are a few noteworthy updates: Canonical now has a fully certified Ubuntu environment with long-term support right out of the box for Jetson Orin modules. Red Hat Device Edge with Red Hat Enterprise Linux 9.4 on Jetson Orin is now in Tech preview . Wind River Linux is now powering Jetson Orin-based products in robotics and autonomy. 
RedHawk Linux is also supporting mission-critical applications across various industries, while the meta-tegra community has been instrumental in assisting Jetson-based products using Yocto. The NVIDIA collaboration with Linux distribution partners is gaining traction, enabling us to offer the ideal Linux distribution options for all development and deployment needs. JetPack branch roadmap A JetPack branch typically remains in the active state for about two years, during which it receives regular updates, including new features, CVE fixes, and PCN additions. After this period, the branch transitions to the sustaining state. In the sustaining state, it continues to receive CVE fixes and PCN updates but no longer incorporates new features. JetPack 6.1 is the final active release in the JetPack 6 series. After this, the JetPack 6 branch transitions to the sustaining support phase, and continues to receive CVE patches and PCN updates. Figure 3 shows the life cycle for JetPack 5 and JetPack 6. Figure 3. JetPack branch roadmap for JetPack 5 and JetPack 6 Jetson modules product lifecycle update With increasing demand for the Jetson Xavier and Jetson Orin product family,  NVIDIA has extended the product life cycle for the following products: The NVIDIA Jetson Xavier NX 16GB module will be extended from the initially planned Q1 2026 to Q1 2028. All NVIDIA Jetson Orin commercial modules including AGX Orin, Orin NX, and Orin Nano products will be extended from the initially planned Q1 2030 to Q1 2032. For more information, see Jetson Product Lifecycle . Summary NVIDIA JetPack 6.1 offers substantial enhancements for developers using the Jetson Orin platform. Key features include an upgraded camera stack, resulting in up to a 40% performance boost for camera-based AI applications. The introduction of firmware-based TPM enhances security and device attestation, leading to more secure deployments, along with several other minor improvements. Jetson Orin users can upgrade to JetPack 6.1 to get the most out of these advancements and unlock new possibilities in AI and robotics. For more information, see the following resources: JetPack SDK JetPack 6.1 Release Notes How to Install and Configure JetPack SDK
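As a practical footnote to the camera stack section above, CSI cameras on Jetson are commonly accessed from Python through a GStreamer pipeline built on the Argus-backed nvarguscamerasrc element. The sketch below shows that general pattern only; it is not a new JetPack 6.1 API, the sensor ID, resolution, and frame rate are assumptions, and it requires OpenCV built with GStreamer support.

```python
import cv2

def csi_pipeline(sensor_id=0, width=1920, height=1080, fps=30):
    # nvarguscamerasrc captures through the Argus camera stack; nvvidconv copies
    # frames out of NVMM memory so OpenCV can consume them.
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, framerate={fps}/1 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink drop=1"
    )

cap = cv2.VideoCapture(csi_pipeline(), cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("Could not open CSI camera; check the sensor connection and GStreamer support in OpenCV")

frames = 0
while frames < 300:  # grab roughly 10 seconds of video at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1

cap.release()
print(f"Captured {frames} frames")
```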
https://developer.nvidia.com/zh-cn/blog/nvidia-jetpack-6-1-boosts-performance-and-security-through-camera-stack-optimizations-and-introduction-of-firmware-tpm/
NVIDIA JetPack 6.1 通过摄像头堆栈优化和固件 TPM 实现性能和安全性提升
NVIDIA JetPack 不断演进,为满足 边缘 AI 和机器人开发者 的不断增长需求提供最新的软件。每个版本中,JetPack 都会增强性能,引入新功能,并优化现有工具,为用户提供更大的价值。这意味着,您现有的基于 Jetson Orin 的产品可以通过升级到最新版本的 JetPack 体验到性能优化。 JetPack 6 于 2023 年第三季度发布,标志着这一进程中的一个重要里程碑。它引入了强大的功能,如 自带内核 (Bring Your Own Kernel),提供了灵活性,可以引入任何 LTS Linux 内核,并扩展了对更广泛的 Linux 发行版的支持,为不同的开发环境提供了更大的灵活性。 它采用模块化设计,无需升级 Jetson Linux ,即可轻松升级到最新的 JetPack 计算堆栈。伴随这些进步,性能改进(包括增强的硬件加速)可确保 Jetson 平台以峰值效率运行。 JetPack 6.1 功能 2024 年 7 月,NVIDIA 发布了 NVIDIA JetPack 6.1 ,进一步致力于持续改进 JetPack 堆栈。 其中一项关键改进是增强的 Argus 摄像头堆栈,可将摄像头应用的 CPU 占用率显著降低高达 40%。 另一项新功能是引入基于固件的可信平台模块(Trusted Platform Module,TPM)支持,通过强大的设备认证和安全存储来存储证书和密钥,提供边缘到云的安全性,而无需额外的独立 TPM 硬件。 其他亮点包括 AI 计算堆栈更新、支持安全启动的无线 (OTA) 支持,以及多媒体和网络的改进。 本文将讨论改进的摄像头堆栈和新的 fTPM 功能。 图 1、JetPack 6.1 的新功能 增强型摄像头堆栈 :通过切换到更高效的基于硬件的 Tegra 同步方法,增强型 Argus 库可将使用 Argus 库的摄像头的 CPU 占用率降低高达 40%。 增强安全性 :JetPack 6.1 引入了基于固件的可信平台模块(TPM)支持,无需额外硬件即可通过安全启动、加密密钥管理和平台完整性来增强安全性,从而降低成本和空间需求。 升级堆栈 : JetPack 6.1 包含最新升级的计算堆栈。此计算堆栈还可用于 JetPack 6.0 BSP。 CUDA 12.6 TensorRT 10.3 cuDNN 9.3 VPI 3.2 DLA 3.1 DLFW 24.0 OTA 改进 :在启用 UEFI 安全启动的情况下,为 NVIDIA Jetson 设备提供基于图像的 OTA 支持。 增强的摄像头堆栈 JetPack 摄像头堆栈为 NVIDIA Jetson 平台提供了完整的软件和驱动解决方案,实现了强大的摄像头支持。它集成了低级驱动、多媒体 API 和高级图像处理工具,实现了高效的视频捕获、处理和流式传输。该堆栈支持各种摄像头接口,如 MIPI CSI、USB 和以太网,并利用 NVIDIA 硬件加速器执行缩放、去噪和编码等任务。 Argus Camera API 基于 NVIDIA Camera Core 构建,提供可访问的界面,可实现先进的图像捕获、传感器控制和视频处理,同时确保高性能和自定义。这使其成为计算机视觉和人工智能应用的理想之选。 JetPack 6.1 引入了更高效的 Argus Camera API 实现,通过 Argus 将 CSI 摄像头捕获的 CPU 占用率降低高达 40%。这种优化可显著释放处理能力,实现额外任务,并提高使用 CSI 摄像头的应用程序的整体性能。 图 2 显示了 JetPack 6.0 中默认 Argus API 实现与 JetPack 6.1 中同时使用 12 个 CSI 摄像头的增强 Argus API 实现相比,CPU 性能的提升情况。使用基于 AR0234 和双路 IMX274 的摄像头,分辨率为 1920x1200p,30 fps 进行验证。 图 2、JetPack 6.1 中的摄像头堆栈 CPU 利用率 增强的摄像头堆栈还包含多项稳定性和图像质量改进。 推出基于固件的可信平台模块 JetPack 通过在每个新版本中集成高级功能来不断增强安全产品,确保 NVIDIA Jetson 平台上提供强大的保护。 基础安全功能有助于为基于 Jetson 的应用提供稳健且全面的安全框架,抵御各种威胁,并确保机密性、完整性和对敏感数据的保护。这些功能包括: 硬件信任根 (RoT) 安全启动 可信执行环境 磁盘加密 内存加密 回滚保护 免受物理攻击 JetPack 6.1 通过引入 基于固件的可信平台模块(Trusted Platform Module,TPM) 来增强 JetPack 平台的安全功能,fTPM 是 TPM 标准 的基于软件的实现。 fTPM 具有多种优势,例如无需额外硬件的安全密钥管理、简化系统设计和降低成本。它支持可信启动和远程证明,确保设备的完整性并防止篡改。通过保护加密操作,fTPM 加强数据保护并实现安全通信。 对于 Jetson 用户,fTPM 提供以下主要优势: 更高的安全性: fTPM 功能通过在可信执行环境(TEE)中创建专用的隔离环境,显著增强了加密操作的安全性。这确保了敏感进程(如密钥生成和加密)免受外部篡改或恶意软件攻击。 数据保护: 它可以安全存储加密密钥、数字证书和密码等关键信息,防止未经授权的访问。这确保敏感数据保持机密性,并且只有授权的组件才能访问。 认证: fTPM 提供加密证明,使系统能够向外部实体展示硬件和软件的完整性和安全性。这种认证功能确保系统的当前状态可以被远程服务器或其他设备等第三方验证和信任。 借助 fTPM,Jetson Orin 设备现在可以增强对未经授权的访问和篡改的保护,使其在人工智能、机器人开发和边缘计算应用中更加安全。 有关更多信息,请参阅 基于 Jetson 固件的 TPM 。 分销合作伙伴关系 NVIDIA Linux 发行合作伙伴在支持 Jetson 产品方面也取得了重大进展。以下是一些值得注意的更新: Canonical 现已拥有 经过全面认证的 Ubuntu 环境 ,并为 Jetson Orin 模组提供开箱即用的长期支持。 Jetson Orin 上搭载 Red Hat Enterprise Linux 9.4 的红帽设备边缘现已提供 技术预览 。 Wind River Linux 正在为 基于 Jetson Orin 的机器人和自主产品提供支持。 RedHawk Linux 还支持各行各业的任务关键型应用程序,同时 meta-tegra 社区在帮助 Jetson 产品使用 Yocto 方面发挥了重要作用。NVIDIA 与 Linux 发行合作伙伴的合作日益密切,这使我们能够针对所有开发和部署需求提供理想的 Linux 发行版选项。 JetPack 分支路线图 JetPack 分支通常会保持大约两年的活动状态,在此期间会定期更新,包括新功能、CVE 修复和 PCN 添加。在此期间之后,分支过渡到维持状态。在维持状态下,它将继续接收 CVE 修复和 PCN 更新,但不再包含新功能。 JetPack 6.1 是 JetPack 6 系列的最后一个活跃版本。在此之后,JetPack 6 分支将过渡到持续支持阶段,并继续接收 CVE 补丁和 PCN 更新。图 3 显示了 JetPack 5 和 JetPack 6 的生命周期。 图 3、适用于 JetPack 5 和 JetPack 6 的 JetPack 分支路线图。 Jetson 模组产品生命周期更新 随着对 Jetson Xavier 和 Jetson Orin 产品系列的需求不断增加,NVIDIA 已经延长了以下产品的产品生命周期: NVIDIA Jetson Xavier NX 16GB 模组将从最初计划的 2026 年第一季度延期到 2028 年第一季度。 所有 NVIDIA Jetson Orin 商用模组(包括 AGX Orin、Orin NX 和 Orin Nano 产品)将从最初计划的 2030 年第一季度延长至 2032 年第一季度。 有关更多信息,请参阅 Jetson 产品生命周期 。 总结 NVIDIA JetPack 6.1 为使用 Jetson Orin 平台的开发者提供了大量增强功能。主要功能包括升级的摄像头堆栈,可为基于摄像头的 AI 应用带来高达 40% 的性能提升。基于固件的 TPM 
的引入可增强安全性和设备认证,从而带来更安全的部署,以及其他一些小改进。 Jetson Orin 用户可以 升级到 JetPack 6.1 ,以充分利用这些进步,并在 AI 和机器人领域发掘新的可能性。 有关更多信息,请参阅以下资源: JetPack SDK JetPack 6.1 版本说明 如何安装和配置 JetPack SDK
https://developer.nvidia.com/blog/nvidia-doca-2-9-enhances-ai-and-cloud-computing-infrastructure-with-new-performance-and-security-features/
NVIDIA DOCA 2.9 Enhances AI and Cloud Computing Infrastructure with New Performance and Security Features
NVIDIA DOCA enhances the capabilities of NVIDIA networking platforms by providing a comprehensive software framework for developers to leverage hardware acceleration, boosting performance, security, and efficiency. Its ecosystem of APIs, libraries, and tools streamlines development for data center infrastructure, enabling workload offloading, acceleration, and isolation to support modern, efficient data centers. Today, DOCA is used by many major CSPs and NVIDIA Cloud Partners (NCPs) and provides a standardized platform to rapidly develop and deploy innovative solutions. This approach accelerates time to market, reduces costs, and enables partners to focus on their core strengths while using NVIDIA hardware acceleration. The open ecosystem fostered by DOCA promotes collaboration and interoperability, creating a diverse array of complementary solutions. Partners can explore emerging markets such as AI-driven networking, advanced security, and high-performance storage, positioning themselves at the forefront of data center innovation. The latest release of DOCA 2.9 marks a major advancement in AI compute fabrics and cloud computing infrastructure. This extensive update introduces many new features and improvements to help transform the high-performance computing and networking landscape. Figure 1. DOCA 2.9 architecture Optimizing AI networks with an enhanced east-west compute fabric The highlights in the DOCA 2.9 release include improved congestion control and a new telemetry library essential for optimizing network traffic within the data center—providing better performance, efficiency, visibility, and control. Spectrum-X 1.2 reference architecture support The Spectrum-X (SPC-X) 1.2 reference architecture received several enhancements, targeting east-west Ethernet AI cloud environments. This update supports massive scale-out capabilities, accommodating up to 128,000 GPUs in a single fabric. The architecture uses the powerful combination of NVIDIA BlueField-3 SuperNICs and NVIDIA Spectrum-4 switches, connected to NVIDIA DGX H100 and NVIDIA HGX H100 platforms, to deliver unprecedented performance and efficiency for AI workloads. DOCA 2.9 also improves telemetry capabilities and the congestion control algorithm for Spectrum-X. This allows for more granular, real-time monitoring of network performance, and improved topology detection, which is crucial for optimizing AI workloads at scale and distances. DOCA telemetry library New to this release, the DOCA telemetry library introduces high-frequency sampling capabilities, advancing network monitoring for AI-driven environments. This update enables counter readings at sub-100 microsecond intervals, a sizable leap from the previous 0.5-1-second frequency. Key features include new APIs for specifying counters, intervals, and frequency, along with support for multiple performance counters such as RX/TX bytes, ports, congestion notifications, and PCIe latency. These enhancements cater to critical use cases like high-frequency telemetry (HFT) for cluster-wide anomaly detection and local performance analysis for application profiling. North-south cloud computing infrastructure is enhancing connectivity and security The north-south improvements in DOCA 2.9 focus on enhancing the connectivity between cloud resources and external networks. DOCA Flow DOCA 2.9 introduces an exciting new feature to DOCA Flow: the ‘tune’ performance analysis tool. 
Currently in the alpha stage, this tool is seamlessly integrated into the DOCA-Flow delivery package, offering users unprecedented insights into their network flow configurations. The ‘tune’ tool provides a visual representation of configured pipelines, enabling users to gain a clear understanding of their flow structures. This visualization capability enables administrators and developers to quickly identify and optimize flow configurations. OVS-DOCA OVS-DOCA is generally available, bringing with it local mirroring capabilities, and adding a step forward in software-defined networking for NVIDIA BlueField DPUs. This long-term support (LTS) release offers users a stable, high-performance alternative to legacy OVS solutions, providing better efficiency and expanded features for modern networking environments, using DPDK or kernel datapath. The GA release introduces key enhancements that elevate the capabilities of DOCA, including a major performance boost for the connection tracking (CT) feature through the DOCA Flow API. Users can expect a 100% improvement in connections per second (CPS) and up to a 50% increase in packets per second (PPS). Scalability and throughput have been enhanced with support for up to three NICs, for more flexible and powerful networking configurations. DOCA host-based networking 2.4 DOCA host-based networking (HBN) continues to evolve and version 2.4 brings numerous enhancements to controller-less VPC networking for bare-metal-as-a-service environments. Building on the foundation of BGP EVPN, DOCA HBN 2.4 introduces impressive scalability improvements, supporting up to 8,000 VTEPs and 80,000 Type-5 routes. The latest release enhances ECMP routing with kernel next-hop groups and OVS-DOCA failover, improving network resilience and performance. A key addition is the Stateful SNAT+PAT for overlay gateways, enabling private tenant IPs to access external networks through a shared public IP address. DOCA FireFly The enhanced DOCA Firefly service brings advanced time synchronization capabilities to NVIDIA DPUs through hardware acceleration. This update introduces two notable features: Synchronous Ethernet (SyncE) support and Data Transfer Service (DTS) integration. SyncE provides high-precision frequency synchronization between network devices, crucial for telecommunications networks, particularly in mobile infrastructure. DTS support enables PTP information transmission over the telemetry channel, allowing for continuous network time service monitoring. NVIDIA Rivermax SDK The Rivermax SDK received enhancements focused on reducing latency, minimizing CPU usage, and maximizing bandwidth and GPU utilization for data-intensive applications. A key addition is support for Internet Protocol Media Experience (IPMX), the emerging open standard for AV over IP in professional audiovisual environments. Underpinned by Rivermax, DOCA 2.9 also supports NVIDIA Holoscan for Media; a certified platform tailored for the media and entertainment industry. This feature optimizes the handling of uncompressed and compressed video streams, streamlining I/O operations for high-performance media processing. NVIDIA DOCA App Shield Enhancements to the DOCA App Shield library boost its capabilities in host monitoring and threat detection. A key addition is the inclusion of pre-generated OS profiles, streamlining the setup process for various operating systems. 
For Linux environments, App Shield now offers advanced container monitoring features, enabling security teams to keep a vigilant eye on containerized workloads. The service has been expanded to list network connections and provide detailed information on network connections per process, offering deeper insights into potential security threats. DOCA SNAP virtio-fs The beta version of DOCA SNAP virtio-fs is a service for secured and accelerated file system storage that leverages the power of NVIDIA BlueField-3 DPUs. This solution exposes local file system semantics to the host using the in-box virtio-fs driver, while running remote file system storage logic directly on the DPU. The beta is delivered as a public NGC service container. It’s enabled with the NFS Linux kernel file system, integrated into the BlueField-3 kernel. Developers can create custom file system stacks based on SPDK FSDEV, offering flexibility and performance optimization. This solution enables cloud-scale distributed file system storage in AI compute servers, providing a secured environment with isolation and policy enforcement, while accelerating performance and offloading hypervisor tasks. Open Virtual Network Bare-Metal Tenant Isolation DOCA 2.9 includes a new orchestration service, enhancing tenant isolation in software-defined networking (SDN) environments. The Open Virtual Network (OVN) Bare-Metal tenant isolation feature secures north-south traffic in multi-tenant environments, ensuring AI workloads stay secure and separate, even in dense compute clusters. Built on upstream OVN, this service provides streamlined, robust APIs for tenant isolation, along with an Ansible playbook for effortless deployment on BlueField DPUs. The key innovation lies in offloading and accelerating SDN-based tenant isolation on BlueField DPUs, improving both speed and efficiency by isolating specific processes. This centralized orchestration enables easy changes to isolation settings using API calls, providing more control over network management. This makes it perfect for AI clouds and factories that want to create multi-tenant clouds using SDN. Device management and development tools DOCA 2.9 streamlines and simplifies the device management process necessary for large-scale deployments. This release also provides developers with new optimization and analysis tools that offer improved insights into application and datapath performance. DOCA Management Service (DMS) DOCA Management Service (DMS) also moved to GA status, offering enhanced capabilities for managing BlueField DPUs and SuperNICs. This update introduces the ability to manage multiple devices through a single API endpoint, streamlining operations in complex, multi-device environments. A key improvement is the support for configuration ‘persistency’ across node reboots, ensuring that device settings remain intact during system restarts. This is complemented by new bulk import/export features for device configurations, enabling efficient management of large-scale deployments. DOCA data path accelerator Enhancements to the data path accelerator (DPA) toolkit offer developers more powerful performance optimization and analysis tools. A standout feature is the integration of DPA performance counters with a new Nsight tool, providing deeper insights into application performance. A major addition is the DOCA-DPA-Comms library, currently in beta.
This library simplifies the implementation of DPA-based datapaths, offering a higher level of abstraction for developers. It’s available for both BlueField-3 DPUs and the upcoming NVIDIA ConnectX-8 SuperNIC, ensuring broad compatibility across the NVIDIA advanced networking hardware portfolio. Platform and DOCA packages for streamlined deployment and support Improving the user experience is central to the evolution of DOCA. This release includes many features intended to simplify and improve deployment, such as the DOCA-Host profile DOCA-ROCE, which caters to environments that require RDMA over Converged Ethernet capabilities. NVIDIA also introduced PLDM firmware updates for BlueField-3. This beta release enables seamless firmware updates using standard PLDM over MCTP over PCIe, for servers to operate normally until activation. This zero-trust feature supports both NIC and DPU modes and eliminates the need for DPU-BMC 1GbE connectivity. In addition, this release marks the final Long Term Support (LTS) standalone release of MLNX_OFED which is now available as the host profile ‘DOCA-OFED’. Transitioning from the MLNX_OFED suite of drivers and tools for InfiniBand and Ethernet solutions to DOCA-OFED enables a unified, scalable, and programmable networking stack integrated within the DOCA framework. Learn more about the MLNX_OFED to DOCA-OFED transition . Learn more NVIDIA DOCA 2.9 marks significant advancements in both AI compute fabric and cloud computing infrastructure. Download NVIDIA DOCA to begin your development journey with all the benefits DOCA has to offer.
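To ground the high-frequency telemetry idea in something runnable, the sketch below polls a NIC counter exposed through sysfs and converts deltas into a throughput estimate. It is not the DOCA telemetry library API: the counter path is an assumption (RDMA-capable NICs typically expose port counters under /sys/class/infiniband/), and user-space polling from Python is orders of magnitude coarser than the sub-100-microsecond sampling described above.

```python
import time
from pathlib import Path

# Assumed counter path for an RDMA-capable NIC; adjust the device and port for your system.
COUNTER = Path("/sys/class/infiniband/mlx5_0/ports/1/counters/port_rcv_data")
INTERVAL_S = 0.01  # 10 ms polling interval, far coarser than hardware-assisted sampling

def read_counter(path: Path) -> int:
    return int(path.read_text().strip())

prev = read_counter(COUNTER)
prev_t = time.perf_counter()
for _ in range(100):
    time.sleep(INTERVAL_S)
    cur = read_counter(COUNTER)
    now = time.perf_counter()
    # port_rcv_data is reported in 4-byte words on InfiniBand-class counters.
    rate_mbps = (cur - prev) * 4 * 8 / (now - prev_t) / 1e6
    print(f"rx ~ {rate_mbps:8.2f} Mbit/s")
    prev, prev_t = cur, now
```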
https://developer.nvidia.com/zh-cn/blog/nvidia-doca-2-9-enhances-ai-and-cloud-computing-infrastructure-with-new-performance-and-security-features/
NVIDIA DOCA 2.9 借助新性能和安全功能强化人工智能和云计算基础设施
NVIDIA DOCA 通过为开发者提供全面的软件框架来利用硬件加速来增强 NVIDIA 网络平台的功能,从而提高性能、安全性和效率。它的 API、库和工具生态系统可简化数据中心基础设施的开发,实现工作负载卸载、加速和隔离,以支持现代、高效的数据中心。 如今,许多主要的 CSP 和 NVIDIA 云合作伙伴(NCP)都在使用 DOCA,并为快速开发和部署创新解决方案提供了标准化平台。这种方法可缩短上市时间、降低成本,并使合作伙伴能够在使用 NVIDIA 硬件加速的同时专注于自身的核心优势。 DOCA 培育的开放生态系统促进了协作和互操作性,创造了各种互补解决方案。合作伙伴可以探索 AI 驱动的网络、高级安全性和高性能存储等新兴市场,将自己定位在数据中心创新前沿。 DOCA 2.9 的最新版本标志着 AI 计算结构和云计算基础设施的重大进步。这次全面更新引入了许多新功能和改进,有助于改变高性能计算和网络格局。 图 1、DOCA 2.9 架构 利用增强的东西向计算结构优化人工智能网络 DOCA 2.9 版本的亮点包括改进的拥塞控制和新的遥测库,这对于优化数据中心内的网络流量至关重要,从而提供更好的性能、效率、可见性和控制。 支持 Spectrum-X 1.2 参考架构 Spectrum-X (SPC-X) 1.2 参考架构针对东西向以太网 AI 云环境进行了多项增强。该更新支持大规模横向扩展能力,可在单个网络中容纳多达 128,000 个 GPU。该架构使用 NVIDIA BlueField-3 SuperNIC 和 NVIDIA Spectrum-4 交换机的强大组合,连接到 NVIDIA DGX H100 和 NVIDIA HGX H100 平台,为 AI 工作负载提供出色的性能和效率。 DOCA 2.9 还改进了遥测能力和 Spectrum-X 的拥塞控制算法。这有助于更精细、更实时地监控网络性能,并改进拓扑检测,这对于大规模和远程优化 AI 工作负载至关重要。 DOCA 拥塞控制 DOCA 2.9 提升了高性能计算和 AI 工作负载的拥塞控制算法,即 NVIDIA 网络拥塞控制 (NVNCC)。 NVNCC Gen2 和 InfiniBand 拥塞控制 IBCC 的全面推出标志着一个重要的里程碑。IBCC 专门针对 InfiniBand 上的人工智能工作负载进行了优化,而 NVNCC 通过改进的拓扑检测功能增强了 Spectrum-X 拥塞控制算法,现在支持长距离的 RoCE。 DOCA 遥测库 作为此版本的新成员,DOCA 遥测库引入了高频采样功能,推进了人工智能驱动环境的网络监控。此更新可实现低于 100 微秒的计数器读数间隔,与之前的 0.5-1 秒频率相比,实现了巨大飞跃。 主要功能包括用于指定计数器、时间间隔和频率的新 API,以及对多个性能计数器(例如 RX/TX 字节、端口、拥塞通知和 PCIe 延迟)的支持。这些增强功能可满足关键用例的需求,例如用于集群范围异常检测的高频遥测(HFT)和用于应用程序分析的本地性能分析。 北-南云计算基础设施正在增强连接性和安全性 DOCA 2.9 中的南北向改进侧重于增强云资源与外部网络之间的连接性。 DOCA Flow DOCA 2.9 为 DOCA Flow 引入了一项令人兴奋的新功能:‘tune’性能分析工具。目前处于 alpha 阶段,此工具已无缝集成到 DOCA-Flow 交付包中,为用户提供有关其网络流配置的深入见解。 `tune`工具可直观呈现已配置的流水线,使用户能够清楚地了解其流结构。这项可视化功能使管理员和开发者能够快速识别和优化流配置。 OVS-DOCA OVS-DOCA 现已正式推出,带来了本地镜像功能,并在 NVIDIA BlueField DPU 软件定义网络方面向前迈出了一步。此长期支持(LTS)版本为用户提供了传统 OVS 解决方案的稳定、高性能替代方案,使用 DPDK 或内核数据路径为现代网络环境提供更高的效率和扩展功能。 通用版本引入了一些关键的增强功能,可提升 DOCA 的功能,包括通过 DOCA Flow API 大幅提升连接追踪(CT)功能的性能。用户每秒连接数(CPS)有望提升 100%,每秒数据包数(PPS)有望提升高达 50%。最多可支持三个 NIC,可实现更灵活、更强大的网络配置,从而增强可扩展性和吞吐量。 基于 DOCA 主机的网络 2.4 DOCA 基于主机的网络 (HBN) 不断发展,版本 2.4 为裸机即服务环境的无控制器 VPC 网络带来了许多增强功能。DOCA HBN 2.4 以 BGP EVPN 为基础,引入了令人印象深刻的可扩展性改进,支持高达 8,000 个 VTEP 和 80,000 个 Type-5 路由。 最新版本通过内核 Next-Hop 组和 OVS-DOCA 故障转移增强了 ECMP 路由,提高了网络弹性和性能。其中一个关键的新增功能是 Overlay 网关的有状态 SNAT+PAT,使私有租户 IP 能够通过共享公共 IP 地址访问外部网络。 DOCA FireFly 增强型 DOCA Firefly 服务通过硬件加速为 NVIDIA 数据处理单元(DPU)带来先进的时间同步功能。此更新引入了两项显著特性:同步以太网(SyncE)支持和数据传输服务(DTS)集成。 SyncE 可在网络设备之间实现高精度频率同步,对于电信网络(尤其是移动基础设施)至关重要。DTS 支持支持通过遥测通道传输 PTP 信息,从而实现持续的网络时间服务监控。 NVIDIA Rivermax SDK Rivermax SDK 获得了一些增强功能,这些功能侧重于降低延迟、最大限度地降低 CPU 占用率,以及最大限度地提高数据密集型应用的带宽和 GPU 利用率。一个关键的补充是支持 Internet Protocol Media Experience (IPMX),这是专业视听环境中 AV over IP 的新兴开放标准。 在 Rivermax 的支持下,DOCA 2.9 还支持 NVIDIA Holoscan for Media;一个专为媒体和娱乐行业定制的认证平台。该功能优化了未压缩和压缩视频流的处理,简化了用于高性能媒体处理的 I/O 操作。 NVIDIA DOCA App Shield DOCA App Shield 库的增强功能提高了其在主机监控和威胁检测方面的能力。主要增强包括预先生成的操作系统配置文件,使各种操作系统的设置过程更加简化。 对于 Linux 环境,App Shield 现在提供高级容器监控功能,使安全团队能够密切关注容器化工作负载。该服务已得到扩展,可列出网络连接,并提供有关每个进程的网络连接详细信息,从而更深入地了解潜在的安全威胁。 DOCA SNAP 虚拟化 DOCA SNAP virtio-fs 的测试版是一项利用 NVIDIA BlueField-3 DPU 的强大功能的安全加速文件系统存储服务。此解决方案使用内置的 virtio-fs 驱动程序向主机公开本地文件系统语义,同时直接在 DPU 上运行远程文件系统存储逻辑。 该版本还引入了 SNAP Virtio-fs 的测试版,作为公共 NGC 服务容器提供。它通过集成到 BlueField-3 内核中的 NFS Linux 内核文件系统启用。开发者可以基于 SPDK FSDEV 创建自定义文件系统堆栈,从而实现灵活性和性能优化。 此解决方案可在 AI 计算服务器中实现云规模分布式文件系统存储,提供具有隔离和策略执行功能的安全环境,同时加速性能并卸载虚拟机管理程序任务。 开放虚拟网络裸机租户隔离 DOCA 2.9 包括一项新的编排服务,增强软件定义网络(SDN)环境中的租户隔离。Open Virtual Network(OVN)裸机租户隔离功能保护多租户环境中的南北流量,确保 AI 工作负载保持安全和独立,即使在密集计算集群中也是如此。 此服务基于上游 OVN 构建,提供用于租户隔离的简化、健壮的 API,以及用于在 BlueField DPUs 上轻松部署的 Ansible 手册。关键创新在于卸载和加速基于 SDN 的租户 BlueField DPUs,通过隔离特定进程来提高速度和效率。这种中央组织允许使用 API 调用轻松更改隔离设置,从而更好地控制网络管理。这使得它非常适合希望使用 SDN 创建多租户云的 AI 云和工厂。 设备管理和开发工具 DOCA 2.9 
简化了大规模部署所需的大型语言模型设备管理流程。此版本还为开发者提供了新的优化和分析工具,可帮助他们更好地了解应用程序和数据路径性能。 DOCA 管理服务 (DMS) DOCA 管理服务(DMS)也升级为通用状态,为管理 BlueField DPUs 和 SuperNICs 提供增强功能。此更新引入了通过单个 API 端点管理多台设备的能力,从而简化了在复杂的多设备环境中的操作。 一项关键改进是在节点重启期间支持配置“持久性”,从而确保设备设置在系统重启期间保持不变。这种改进还伴随着新的批量导入/导出功能,用于设备配置的高效管理,特别是在大规模部署中。 DOCA 数据路径加速器 数据路径加速器(DPA)工具包的增强功能为开发者提供了更强大的性能优化和分析工具。其中一项突出功能是将 DPA 性能计数器与新的 Nsight 工具相集成,从而更深入地了解应用程序性能。 DOCA-DPA-Comms 库是一个重要的补充,目前处于测试阶段。该库简化了基于 DPA 的数据路径的实现,为开发者提供了更高级别的抽象化。它可用于 BlueField-3 DPUs 和即将推出的 NVIDIA ConnectX-8 SuperNIC,确保 NVIDIA 高级网络硬件产品组合的广泛兼容性。 用于简化部署和支持的平台和 DOCA 软件包 改善用户体验是 DOCA 发展的核心。本次发布包括许多旨在简化和改进部署的功能,例如 DOCA-Host 配置文件 DOCA-ROCE,它迎合了需要 RDMA over Converged Ethernet 功能的环境。 NVIDIA 还针对 BlueField-3 推出了 PLDM 固件更新。该测试版支持使用标准 PLDM over MCTP over PCIe 实现无缝固件更新,使服务器在激活之前能够正常运行。这一零信任功能支持 NIC 和 DPU 模式,并且无需 DPU-BMC 1GbE 连接。 此外,此版本标志着 MLNX_OFED 的最终长期支持(LTS)独立版本,该版本现已作为主机配置文件‘DOCA-OFED’提供。 从用于 InfiniBand 和以太网解决方案的 MLNX_OFED 驱动和工具套件过渡到 DOCA-OFED,可实现集成在 DOCA 框架内的统一、可扩展和可编程网络堆栈。详细了解 MLNX_OFED 向 DOCA-OFED 过渡 。 了解详情 NVIDIA DOCA 2.9 标志着 AI 计算结构和云计算基础设施的显著进步。 下载 NVIDIA DOCA 开始您的开发之旅,享受 DOCA 提供的所有优势。
https://developer.nvidia.com/blog/ai-powered-devices-track-howls-to-save-wolves/
AI-Powered Devices Track Howls to Save Wolves
A new cell-phone-sized device—which can be deployed in vast, remote areas—is using AI to identify and geolocate wildlife to help conservationists track endangered species, including wolves around Yellowstone National Park. The battery-powered devices—dubbed GrizCams—are designed by a small Montana startup, Grizzly Systems. Together with biologists, they're deploying a constellation of the devices across the Greater Yellowstone ecosystem to record audio and video of when and where wolves or wolf packs howl. Once fully deployed, the data can help scientists and conservationists better understand wolf behavior and create new strategies for deterring wolves from attacking livestock. Conservationists retrieve audio data from SD cards on remote recorders every few months. That data is fed into and analyzed by AI models trained on terabytes of recordings of howling wolves. The model—a convolutional neural network—converts the audio into a spectrogram, then analyzes it to identify different aspects of a wolf's howl and geolocate where the sounds originated. Grizzly Systems trained the model using NVIDIA A100 Tensor Core GPUs in the Azure cloud and the PyTorch framework running NVIDIA CUDA-X libraries . For inferencing, they use NVIDIA Triton Inference Server and ONNX Runtime for model optimization, with an NVIDIA RTX 4090 for on-prem storage of sensitive data and local inference. Video 1. A wolf pack recorded in 2023 in Yellowstone National Park vocalizes in chorus and asynchronously Grizzly Systems CEO Jeff Reed, PhD, highlighted how the system monitors large tracts of land 24 hours a day, every day of the year. The devices can help perennially under-resourced wildlife managers and state and federal agencies monitor lands that often lack personnel. The AI model can identify varied pitches and intonations of wolf vocalizations, which can carry more than six kilometers from where they originate. Knowing where a pack moves by tracking its howls can help conservationists identify the pack's territorial boundaries. While the model can't yet identify individual wolves from their howls, Reed said future iterations of the technology will have that capability. Figure 1. GrizCam collects sounds or video, runs through an on-device thin model layer, and analyzes data on a cloud-based LLM Today, the GrizCams make up one part of a larger conservation effort aimed at balancing competing interests in the land. These include the small but growing wolf population in Montana, which needs wild prey for food; the billion-dollar Yellowstone eco-tourism economy, which relies upon healthy wildlife populations; and the ranchers who need to protect their livestock, and whose land offers critical habitat for wildlife. "Wolves, grizzlies, elk can be a hassle to a rancher because they might kill their livestock, or tear down their fences," said Reed, who, before starting Grizzly Systems three years ago, spent his career working in the tech industry. "On the flip side, those ranches also provide critical habitat for wildlife on private lands around Yellowstone. "If our devices can detect a lone wolf coming through a ranchland because we have AI on it, then we can play back the sound of guardian dogs barking, or a gunshot, or a large territorial wolf pack, which can "encourage" that wolf to move out of that area. 
But that requires vigilance throughout the day and the night—and nobody is sitting outside 24/7—which is where AI comes in.” Another way AI is helping conservationists is by streamlining the data collection process. The remote recorders—which can also be deployed with video capabilities—run a very thin-layer AI on-device, which weeds out most motion that would otherwise trigger false-positive recordings. The recorders can ignore wind rustling through grass or trees, or bright light reflecting off snow—two common stimuli that trigger false-positive recordings on remote devices. As a result, the GrizCam’s batteries last longer and require less servicing by wildlife managers and landowners. AI is also useful to conservationists as it quickly sifts through terabytes of recorded data to quickly identify and flag relevant audio or visual signatures. While the on-device AI cuts down on unwanted recordings, it nevertheless records sounds and imagery of biological activity—including birds, elk, or bears moving across terrain and making noises. “These acoustic recorders are gathering data with AI, they’re recording 24-7, every day for a year across 50 or so recorders,” said Reed. “With AI, we can crunch through the data, go through and identify wolves or other endangered species if they’re there, and then work with conservationists to say, ‘okay, we gotta go protect this area and do some additional conservation over there.’” Grizzly Systems plans to continue its close collaboration with conservationists. It also foresees its rugged edge devices’ relevance for a variety of industrial use cases, including remote surveillance. Reed points out that 97% of the Earth’s surface lacks access to an electrical outlet. A rugged recorder can monitor oil and gas rigs, as well as remote electrical transformers, which, in very rural areas, can attract vandals who take them offline. “AI is a great example of how, if we can get it right, with battery life and ruggedness, we can monitor illegal activity that hurts us all,” Reed said. “Poaching, illegal wildlife trafficking, illegal logging or mining in the Amazon— this is activity that ends up hurting the vast majority of people and the planet–and which technology can help prevent.” Read more about the partnership between Grizzly Systems and Yellowstone National Park. Check out additional reporting on wolf conservation and decoding wolf verbalizations .
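To make the audio pipeline described above concrete, here is a minimal sketch of a spectrogram-plus-CNN howl classifier in PyTorch. The file name, class labels, and network size are hypothetical placeholders; this illustrates the general approach, not Grizzly Systems' actual model or training setup.

```python
import torch
import torch.nn as nn
import torchaudio

# Convert a mono waveform into a log-mel spectrogram, the input format
# typically fed to convolutional audio classifiers.
mel = torchaudio.transforms.MelSpectrogram(sample_rate=16_000, n_mels=64)
to_db = torchaudio.transforms.AmplitudeToDB()

# A deliberately small CNN with two hypothetical output classes:
# 0 = background noise, 1 = wolf howl.
classifier = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),
)

waveform, sr = torchaudio.load("clip.wav")               # placeholder file name
waveform = torchaudio.functional.resample(waveform, sr, 16_000)
spec = to_db(mel(waveform.mean(dim=0, keepdim=True)))    # (1, n_mels, frames)
logits = classifier(spec.unsqueeze(0))                   # add batch dimension
print(logits.softmax(dim=-1))
```

In the deployed system, geolocation additionally combines detections across multiple time-synchronized recorders, which is outside the scope of this sketch.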
https://developer.nvidia.com/zh-cn/blog/ai-powered-devices-track-howls-to-save-wolves/
AI 驱动的设备追踪嚎叫以拯救狼群
一种可部署在广大偏远地区的手机大小的新型设备正在使用人工智能识别和地理定位野生动物,以帮助保护主义者追踪濒危物种,包括黄石国家公园周围的狼群。 这款名为 GrizCams 的电池供电设备由蒙大拿州的一家小型初创公司 Grizzly Systems 设计。他们与生物学家合作,在整个大黄石生态系统中部署一系列设备,以录制有关狼或狼群叫的时间和地点的音频和视频。 一旦完全部署,这些数据可以帮助科学家和环保人士更好地了解狼的行为,并制定新的策略来阻止狼攻击牲畜。 环保人士每隔几个月就会在远程录音机上从 SD 卡中检索音频数据。这些数据由使用数 TB 的狼嚎叫数据训练的 AI 模型输入和分析。该模型是一种卷积神经网络,可将音频转换为频谱图,然后分析数据,识别狼嚎叫的不同方面,并对声音的来源进行地理定位。 Grizzly Systems 使用 Azure 云中的 NVIDIA A100 Tensor Core GPU 和运行 NVIDIA CUDA-X 库 的 PyTorch 框架训练模型。在推理方面,他们使用 NVIDIA Triton 推理服务器 和 ONNX Runtime 进行模型优化,并使用 NVIDIA RTX 4090 在本地存储敏感数据和进行本地推理。 视频 1.2023 年录制的黄石国家公园狼群以合唱异步发声 Grizzly Systems 首席执行官杰夫·里德(Jeff Reed)博士强调了该系统如何全年 24 小时监控大片土地。这些设备可以帮助长期资源不足的野生动物管理者以及州和联邦机构监控往往缺少人员的土地。 AI 模型可以识别狼发声的不同音高和音调,这些音高和音调距离发声地可能超过 6 公里。通过跟踪狼的叫声了解狼群的移动位置,可以帮助环保人士识别狼群的领地界限。 虽然该模型还无法从狼嚎声中识别出单个狼,但 Reed 表示,未来的技术迭代将具备这种能力。 图 1.GrizCam 收集声音或视频,在设备上的薄模型层中运行,并在基于云的 LLM 上分析数据 如今,GrizCams 参与了更大规模的保护工作,旨在平衡土地上相互竞争的利益。 这些群体包括:蒙大拿州的狼群虽然很小,但不断增长,需要依靠野生猎物为食;价值数十亿美元的黄石生态旅游经济,依赖于健康的野生动物数量;以及需要保护牲畜的牧场主,他们的土地为野生动物提供了重要的栖息地。 Reed 说:“狼、灰熊、鹿可能会给牧场主带来麻烦,因为它们可能会杀死牲畜,或拆掉围栏。”在三年前创办 Grizzly Systems 之前,Reed 一直在科技行业工作。“另一方面,这些牧场也为黄石周围私人土地上的野生动物提供了重要的栖息地。 “如果我们的设备因为搭载了 AI 技术而能够检测到一只孤狼穿过牧场,那么我们就可以播放护卫犬的狂吠声、枪声或大型领地狼群的声音,这可能会“鼓励”那只狼离开该区域。但这需要日日夜夜保持警觉,并且没有人全天候坐在室外,这是 AI 的用武之地。” AI 帮助环保人士的另一種方式是简化数据收集流程。 这些远程录像机还可以通过视频功能部署,它在设备上运行非常薄的 AI 层,从而淘汰了原本会触发误报录像的大多数动作。录像机可以忽略穿过草地或树木的风沙声,或从雪地反射的强光——这两种常见的刺激因素会在远程设备上触发误报录像。 因此,GrizCam 的电池续航时间更长,并且野生动物管理员和土地所有者需要的维护量减少。 AI 对环保人士也很有用,因为它可以快速筛选数 TB 的记录数据,快速识别和标记相关的音频或视频签名。 虽然设备上的 AI 会减少不必要的录音,但它会记录生物活动的声音和图像,包括在地形上移动并制造噪音的鸟、麋鹿或熊 “这些声学记录器使用人工智能收集数据,它们每天 24-7 小时记录 50 个左右的记录器,”Reed 说。“借助人工智能,我们可以处理数据,浏览并识别狼或其他濒危物种(如果它们在那里),然后与环保人士合作,说‘好,我们必须保护这个地区,并在那里做一些额外的保护工作。’” Grizzly Systems 计划继续与环保人士密切合作,并预计其坚固耐用的边缘设备适用于各种工业用例,包括远程监控。 Reed 指出,地球表面的 97% 地方没有电源插座。坚固耐用的记录仪可以监控油气钻井平台以及远程电力转换器,在非常偏远的地区,这些设施可能会吸引破坏者将其关闭。 Reed 说:“AI 是一个很好的例子,说明在电池续航时间长、坚固耐用的情况下,如果我们能够做到正确,我们就可以监控伤害我们所有人的非法活动。在亚马逊河上的盗猎、非法野生动物走私、非法采伐或采矿 – 这种活动最终会伤害到绝大多数人和地球,而技术可以帮助预防。” 详细了解 Grizzly Systems 与黄石国家公园 之间的合作 。 查看有关 狼保护 和 解码狼语 的其他报道。
https://developer.nvidia.com/blog/ai-accurately-forecasts-extreme-weather-up-to-23-days-ahead/
AI Accurately Forecasts Extreme Weather Up to 23 Days Ahead
New research from the University of Washington is refining AI weather models using deep learning for more accurate predictions and longer-term forecasts. The study, published in Geophysical Research Letters, shows how adjusting initial atmospheric data enables advanced AI models to extend current forecast limits. As extreme weather becomes increasingly severe and frequent due to climate change, giving governments, businesses, the public, and emergency responders more time to prepare for natural disasters such as floods, heatwaves, or hurricanes could help reduce loss of life and property. "If a perfect weather model is given slightly imperfect initial conditions, the error compounds over time and results in an inaccurate forecast," said lead author Trent Vonich, a PhD candidate at the University of Washington. "This is especially true when modeling a chaotic system such as the Earth's atmosphere. There has been great focus recently on making better models, while somewhat ignoring the fact that a perfect model is only half the problem. Machine learning models help us address this because they are fully differentiable end-to-end, allowing us to capture nonlinear interactions between inputs and outputs—something legacy techniques cannot do." While state-of-the-art AI weather forecasting systems, such as Google's GraphCast and Huawei's Pangu-Weather, reliably predict upcoming weather up to 10 days ahead, they're limited by the accuracy of the initial data fed into them. These models were trained on the massive ERA5 reanalysis dataset containing petabytes of information. The dataset captures hourly temperature, wind speed, humidity, air pressure, precipitation, and cloud cover across a global grid at 37 pressure levels. It includes historical weather conditions, dating back to 1979, and near-real-time data. The researchers focused on refining the initial atmospheric variables leading up to the June 2021 Pacific Northwest Heat Wave to improve forecast accuracy for this extreme event. They applied nonlinear optimization to the initial conditions using the GPU-accelerated JAX framework. According to Vonich, it takes only 20 minutes to perform 100 initial-condition updates on an NVIDIA A100 Tensor Core GPU. Figure 1. Comparison of atmospheric patterns and temperature anomalies on June 30, 2021, for ERA5 reanalysis and GraphCast 10-day forecasts using control, globally optimized, and regionally optimized initial conditions from 20 June 2021 The researchers tested their framework's accuracy using atmospheric data captured during the 2021 Pacific Northwest heatwave, which was excluded from the original training dataset. The optimized data reduced 10-day forecast errors by 90%, successfully predicting the intensity and timing of the heatwave. It also more than doubled the prediction window, improving on unoptimized forecasts as far as 23 days in advance. "This research may show that more accurate weather observations and measurements may be just as important as developing better models," Vonich said. "If this technique can be used to identify systematic biases in the initial conditions, it could have an immediate impact on improving operational forecasts. Plus, more lead time enables greater preparation for communities. Aviation, shipping, and countless other industries rely on accurate weather forecasts, too. Improvements can translate to an economic benefit for them as well." Read the full news story on Eos. 
Catch up on the study Predictability Limit of the 2021 Pacific Northwest Heatwave From Deep-Learning Sensitivity Analysis .
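To make the optimization idea concrete, the following sketch shows the general pattern of refining initial conditions by gradient descent through a differentiable forecast in JAX. The forecast function here is a made-up toy operator standing in for a real model such as GraphCast, and the array size, step count, and learning rate are arbitrary; only the optimization pattern reflects the study.

```python
import jax
import jax.numpy as jnp

def forecast(x0, steps=40):
    """Toy stand-in for a differentiable forecast model (not GraphCast)."""
    def step(x, _):
        return 0.99 * x + 0.01 * jnp.roll(x, 1), None
    x_final, _ = jax.lax.scan(step, x0, None, length=steps)
    return x_final

def loss(x0, verification):
    """Mean squared error between the forecast and the verifying state."""
    return jnp.mean((forecast(x0) - verification) ** 2)

grad_fn = jax.jit(jax.grad(loss))

# Synthetic "truth" and a slightly perturbed first guess of the initial state.
key_a, key_b = jax.random.split(jax.random.PRNGKey(0))
true_x0 = jax.random.normal(key_a, (256,))
verification = forecast(true_x0)
x0 = true_x0 + 0.1 * jax.random.normal(key_b, (256,))

# Gradient descent on the initial conditions themselves (about 100 updates,
# the same order of magnitude reported in the article).
for _ in range(100):
    x0 = x0 - 50.0 * grad_fn(x0, verification)   # step size chosen for this toy problem

print(float(loss(x0, verification)))  # the forecast error shrinks as x0 is refined
```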
https://developer.nvidia.com/zh-cn/blog/ai-accurately-forecasts-extreme-weather-up-to-23-days-ahead/
AI 提前 23 天实现极端天气高精度预测
华盛顿大学的新研究正在使用深度学习完善 AI 天气模型,以实现更准确的预测和更长期的预测。这项研究发表在 《地球物理研究通讯》 上,展示了调整初始大气数据如何使先进的 AI 模型能够扩展当前的预测极限。随着极端天气因气候变化而变得越来越严重和频繁,让政府、企业、公众和应急响应人员有更多的时间为洪水、热浪或飓风等自然灾害做好准备,有助于减少生命和财产的损失。 华盛顿大学博士生 Trent Vonich 说:“如果为完美的天气模型提供略微不完善的初始条件,误差会随着时间的推移而加剧,并导致预测不准确。”“在对地球大气等混沌系统进行建模时尤其如此。最近人们非常关注制作更好的模型,同时在某种程度上忽略了一个事实,即完美的模型只是问题的一半。机器学习模型帮助我们解决了这个问题,因为它们是完全可微的端到端模型,允许我们捕获输入和输出之间的非线性交互——这是传统技术无法做到的。” 虽然 Google 的 GraphCast 和 华为的 Pangu-Weather 等先进的人工智能天气预报系统能够可靠地预测未来 10 天的天气,但它们受到输入系统的初始数据准确性的限制。 这些模型基于包含 PB 级信息的大型 ERA5 再分析数据集进行训练。该数据集捕获 37 个大气压级别的全球电网中的小时温度、风速、湿度、气压、降水量和云量。它包括可追溯到 1979 年的历史天气条件和近乎实时的数据。 研究人员专注于细化 2021 年 6 月太平洋西北热波之前的初始大气变量,以提高这一极端事件的准确性。他们使用 GPU 加速的 JAX 框架应用非线性优化来优化数据。 Vonich 表示,在 NVIDIA A100 Tensor Core GPU 上执行 100 个初始条件更新只要 20 分钟。 图 1. 2021 年 6 月 30 日大气模式和温度异常的比较,用于 ERA5 再分析和 GraphCast 使用控制、全球优化和区域优化的初始条件从 2021 年 6 月 20 日起进行的 10 天预测。 研究人员使用在 2021 年太平洋西北热浪期间捕获的大气数据(这些数据不包含在原始训练数据集中)测试了他们的框架的准确性。经过优化的数据将 10 天预测误差减少了 90%,成功预测了热浪的强度和时间。它还将预测窗口期提高了一倍以上,使未经优化的预测提前了 23 天得到改善。 “这项研究可能表明,更准确的天气观测和测量可能与开发更好的模型同样重要,”Vonich 说。“如果这项技术可用于识别初始条件中的系统偏差,它可能会对改进运营预测产生直接影响。此外,更多的准备时间可以为社区提供更好的准备。航空、航运和无数其他行业也依赖于准确的天气预报。改进也可以为他们带来经济效益。” 阅读完整的新闻案例 Eos 。 通过深度学习灵敏度分析了解 2021 年太平洋西北热浪的可预测性极限 。
https://developer.nvidia.com/blog/llm-model-pruning-and-knowledge-distillation-with-nvidia-nemo-framework/
LLM Model Pruning and Knowledge Distillation with NVIDIA NeMo Framework
Model pruning and knowledge distillation are powerful, cost-effective strategies for obtaining smaller language models from a larger initial model. Pruning: Either drop layers (depth-pruning) or drop neurons, attention heads, and embedding channels (width-pruning). Knowledge distillation: Transfer knowledge from a large teacher model to a smaller student model, with the goal of creating a more efficient, smaller model that is faster and less resource-intensive to run. The How to Prune and Distill Llama-3.1 8B to an NVIDIA Llama-3.1-Minitron 4B Model post discussed best practices for compressing large language models (LLMs) by combining depth, width, attention, and MLP pruning with knowledge distillation–based retraining. In this post, we provide a walk-through tutorial of the pruning and distillation pipeline in the NVIDIA NeMo framework on a simple dataset. This tutorial uses Meta-Llama-3.1-8B as the teacher model, with 4B being the target model size. We also visualize and discuss the training results. Overview This tutorial focuses on creating a simple pipeline that can prepare the dataset, fine-tune the teacher on the WikiText-103-v1 dataset, and then prune and distill the model to create the 4B model. The WikiText-103-v1 dataset is a collection of over 100M tokens extracted from a set of verified 'Good' and 'Featured' articles on Wikipedia. It is publicly available on Hugging Face. In this tutorial, you define a pruning and distillation pipeline that involves the following high-level steps (Figure 1). Figure 1. Steps from obtaining the dataset to creating the distilled 4B model Preparation: Download the dataset and convert to JSONL. Preprocess by tokenizing the dataset. Fine-tune the teacher model on the dataset. Depth-prune the fine-tuned teacher model. The depth-pruned model is the starting point for the student network. Width-prune the fine-tuned teacher model. The width-pruned model is the starting point for the student network. Distill knowledge from teacher to student by using the 8B model as the teacher and the 4B pruned model as the student. To access the Jupyter notebooks for this tutorial, see the /NVIDIA/NeMo GitHub repo. Prerequisites You need access to at least eight NVIDIA GPUs with 80 GB of memory each, for example, eight H100-80GB or A100-80GB GPUs, and a Docker-enabled environment. Follow the instructions in the project's README file to install the NeMo framework, download the Meta-Llama-3.1-8B teacher model, and obtain your Hugging Face access token. 
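Before downloading anything, it can help to confirm that the environment actually exposes eight GPUs with roughly 80 GB each. A quick sanity check, assuming PyTorch is available inside the container, might look like this:

```python
import torch

assert torch.cuda.is_available(), "CUDA devices are required for this tutorial"
gpus = torch.cuda.device_count()
print(f"Visible GPUs: {gpus}")
for i in range(gpus):
    props = torch.cuda.get_device_properties(i)
    # Each GPU should report roughly 80 GB for the 8B teacher workflow.
    print(f"  GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GiB")
```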
Download the dataset Download the WikiText-103-v1 dataset and convert the train, test, and validation splits into JSONL files using the following code or by running the introduction notebook : # Split into train, test and val files import json import os from datasets import load_dataset # Load the WikiText-103 dataset dataset = load_dataset("wikitext", "wikitext-103-v1") # Define the destination folder data_folder = 'wikitext-data' os.makedirs(data_folder, exist_ok=True) # Define file paths and destination paths file_paths = { 'train': os.path.join(data_folder, 'wikitext-train.jsonl'), 'validation': os.path.join(data_folder, 'wikitext-val.jsonl'), 'test': os.path.join(data_folder, 'wikitext-test.jsonl') } # Function to save dataset split to a JSONL file def save_to_jsonl(file_path, data): with open(file_path, 'w') as file: for item in data: file.write(json.dumps(item) + '\n') # Define splits splits = ["train", "validation", "test"] # Save splits to JSONL files and calculate their sizes for split in splits: if split in dataset: save_to_jsonl(file_paths[split], dataset[split]) else: print(f"Split {split} not found in the dataset.") Prepare the dataset The pruning and the distillation scripts requires the data files to be preprocessed by tokenizing them using the meta-llama/Meta-Llama-3.1-8B tokenizer model to convert the data into a memory-map format. This can be done with the preprocessing script, preprocess_data_for_megatron.py in the NeMo framework. Run the following script on the train split to prepare the dataset for pruning and distillation: !python /opt/NeMo/scripts/nlp_language_modeling/preprocess_data_for_megatron.py \ --input="./wikitext-data/wikitext-train.jsonl" \ --tokenizer-library='huggingface' \ --tokenizer-type='meta-llama/Meta-Llama-3.1-8B' \ --output-prefix=wikitext_tokenized_train \ --append-eod \ --workers=32 Run the script on the test and the validation split as well. The data preparation notebook contains all the scripts to create the tokenized wikitext_tokenized_{train/val/test}_text_document.{idx/bin} files that can be used to fine-tune the teacher model. Fine-tune the teacher model on the dataset With the prepared dataset, perform a light fine-tuning procedure on the unpruned teacher model. This section shows the usage of the scripts rather than focusing on performance, so the fine-tuning setup is run with GLOBAL_BATCH_SIZE set to 128 and STEPS set to 30 to ensure a low training time. Figure 2. Teacher fine-tuning Run the megatron_gpt_pretraining.py script to correct the distribution shift across the original dataset on which the model was trained. Without correcting for the distribution shift, the teacher provides suboptimal guidance on the dataset when being distilled. 
%%bash export CUDA_DEVICE_MAX_CONNECTIONS=1 # Set path(s) if different: MODEL="/workspace/llama-3_1-8b-nemo_v1.0/llama3_1_8b.nemo" # Can change these to accommodate resources: TENSOR_PARALLEL_SIZE=8 NODES=1 MICRO_BATCH_SIZE=4 # Don't change the following: EXPERIMENT_DIR="distill_trainings" EXPERIMENT_NAME="megatron_llama_ft" DATA_TRAIN='wikitext_tokenized_train_text_document' DATA_VAL='wikitext_tokenized_test_text_document' DATA_TEST='wikitext_tokenized_val_text_document' STEPS=30 GLOBAL_BATCH_SIZE=128 LOG_INTERVAL=1 VAL_INTERVAL=10 NUM_VAL_BATCHES=5 LR=1e-4 MIN_LR=1e-5 WARMUP_STEPS=2 cmd="torchrun --nproc-per-node=${TENSOR_PARALLEL_SIZE}" ${cmd} /opt/NeMo/examples/nlp/language_modeling/megatron_gpt_pretraining.py \ --config-path /opt/NeMo/examples/nlp/language_modeling/conf/ \ --config-name megatron_llama_distill.yaml \ \ name=${EXPERIMENT_NAME} \ \ exp_manager.exp_dir=${EXPERIMENT_DIR} \ exp_manager.checkpoint_callback_params.save_top_k=1 \ exp_manager.checkpoint_callback_params.save_nemo_on_train_end=True \ \ trainer.max_steps=${STEPS} \ trainer.log_every_n_steps=${LOG_INTERVAL} \ Running the script or executing the teacher fine-tuning notebook creates a fine-tuned teacher model. Prune the fine-tuned teacher model to create a student You can use two methods to prune the fine-tuned teacher model: depth-pruning and width-pruning. Per the technical report, you can see that width-pruning generally outperforms depth-pruning in accuracy, but at the cost of increased inference latency. Choose to perform depth-pruning, width-pruning, or both methods based on these considerations. Figure 3. Pruning the fine-tuned teacher model Depth-prune the fine-tuned teacher model to create a student In the first method, you depth-prune the model. To go from an 8B to a 4B model, prune the last 16 layers (layers 16-31). Run the megatron_gpt_drop_layers.py script to depth-prune the fine-tuned teacher model: !python -m torch.distributed.launch --nproc_per_node=8 \ /opt/NeMo/examples/nlp/language_modeling/megatron_gpt_drop_layers.py \ --path_to_nemo "./distill_trainings/megatron_llama_ft/checkpoints/megatron_llama_ft.nemo" \ --path_to_save "/workspace/4b_depth_pruned_model.nemo" \ --tensor_model_parallel_size 8 \ --pipeline_model_parallel_size 1 \ --gpus_per_node 8 \ --drop_layers 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 Running this script or executing the depth pruning notebook results in the creation of a smaller checkpoint with the last 16 layers removed: 4b_depth_pruned_model.nemo. Width-prune the fine-tuned teacher model to create a student In the second method, you width-prune the model. To go from an 8B to a 4B model, prune the model by reducing the MLP intermediate dimension and hidden size and by retaining the attention headcount and number of layers. 
Run the megatron_gpt_prune.py script to width-prune the fine-tuned teacher model: !torchrun --nproc-per-node=8 /opt/NeMo/examples/nlp/language_modeling/megatron_gpt_prune.py \ model.restore_from_path="./distill_trainings/megatron_llama_ft/checkpoints/megatron_llama_ft.nemo" \ model.tensor_model_parallel_size=1 \ model.pipeline_model_parallel_size=8 \ +model.dist_ckpt_load_strictness=log_all \ inference.batch_size=64 \ trainer.num_nodes=1 \ trainer.precision=bf16 \ trainer.devices=8 \ prune.ffn_hidden_size=9216 \ prune.num_attention_heads=null \ prune.num_query_groups=null \ prune.hidden_size=3072 \ export.save_path="/workspace/4b_width_pruned_model.nemo" Running this script or executing the width pruning notebook results in the creation of a smaller width-pruned checkpoint: 4b_width_pruned_model.nemo . Distill knowledge from teacher into the student The distillation process uses the fine-tuned model (8B) as the teacher model and the pruned model as the student model (4B) to distill to a smaller 4B model. Only logit loss is currently available in NeMo. Figure 4. Distillation workflow In this section, you distill the knowledge from the teacher model into both student models and compare them: Distill the knowledge from the fine-tuned teacher to the depth-pruned student Distill the knowledge from the fine-tuned teacher to the width-pruned student Distill knowledge from the fine-tuned teacher into the depth-pruned student Run the megatron_gpt_distillation.py script to distill knowledge from the teacher to the depth-pruned student model. %%bash export CUDA_DEVICE_MAX_CONNECTIONS=1 # Can change these to accommodate resources: TENSOR_PARALLEL_SIZE=8 NODES=1 MICRO_BATCH_SIZE=4 # Don't change the following: EXPERIMENT_DIR="distill_trainings" EXPERIMENT_NAME="megatron_llama_distill_depth_pruned_student" TEACHER="${EXPERIMENT_DIR}/megatron_llama_ft/checkpoints/megatron_llama_ft.nemo" STUDENT="/workspace/4b_depth_pruned_model.nemo" FINAL_MODEL_PATH="${EXPERIMENT_DIR}/${EXPERIMENT_NAME}/checkpoints/depth_pruned_distilled_4b_model.nemo" DATA_TRAIN='wikitext_tokenized_train_text_document' DATA_VAL='wikitext_tokenized_test_text_document' DATA_TEST='wikitext_tokenized_val_text_document' STEPS=30 GLOBAL_BATCH_SIZE=128 LOG_INTERVAL=1 VAL_INTERVAL=10 NUM_VAL_BATCHES=5 LR=1e-4 MIN_LR=1e-5 WARMUP_STEPS=2 cmd="torchrun --nproc-per-node=${TENSOR_PARALLEL_SIZE}" ${cmd} /opt/NeMo/examples/nlp/language_modeling/megatron_gpt_distillation.py \ name=${EXPERIMENT_NAME} \ \ exp_manager.exp_dir=${EXPERIMENT_DIR} \ exp_manager.checkpoint_callback_params.save_top_k=1 \ \ trainer.max_steps=${STEPS} \ trainer.log_every_n_steps=${LOG_INTERVAL} \ trainer.val_check_interval=${VAL_INTERVAL} \ trainer.limit_val_batches=${NUM_VAL_BATCHES} \ +trainer.num_sanity_val_steps=0 \ \ trainer.precision=bf16 \ trainer.devices=${TENSOR_PARALLEL_SIZE} \ trainer.num_nodes=${NODES} \ \ "model.data.data_prefix={train:[1.0,$DATA_TRAIN],validation:[$DATA_VAL],test:[$DATA_TEST]}" \ \ model.restore_from_path=${STUDENT} \ model.kd_teacher_restore_from_path=${TEACHER} \ model.nemo_path=${FINAL_MODEL_PATH} \ \ model.tensor_model_parallel_size=${TENSOR_PARALLEL_SIZE} \ model.sequence_parallel=True \ model.micro_batch_size=${MICRO_BATCH_SIZE} \ model.global_batch_size=${GLOBAL_BATCH_SIZE} \ \ model.optim.name=distributed_fused_adam \ model.optim.lr=${LR} \ model.optim.sched.min_lr=${MIN_LR} \ model.optim.sched.warmup_steps=${WARMUP_STEPS} Running this script or the distilling depth-pruned student notebook results in the creation of a distilled model: 
depth_pruned_distilled_4b_model.nemo . Distill knowledge from the fine-tuned teacher into the width-pruned student Run the megatron_gpt_distillation.py script to distill knowledge from the teacher to the width-pruned student model. Change the student model ( STUDENT ) and the directory where the distilled model will be saved ( FINAL_MODEL_PATH ) before running the script. Running the distilling width-pruned student notebook results in the creation of a distilled model, width_pruned_distilled_4b_model.nemo . Display the validation loss Run the following code command or execute the results notebook to visualize the validation loss. Modify the path to the checkpoint before running the code example: %load_ext tensorboard %tensorboard --logdir "distill_trainings/megatron_llama_distill/" --port=6007 You can see the validation loss after running the distillation script over a STEPS value of 30, with the depth-pruned student and the width-pruned student in Figures 5 and 6, respectively. Figure 5. Depth-pruned validation loss over 30 steps Figure 6. Width-pruned validation loss over 30 steps To configure this pipeline for your use case, run the scripts on a multi-node cluster with larger GLOBAL_BATCH_SIZE, STEPS and VAL_INTERVAL values to see an improvement in the validation loss. Figures 7 and 8 show the validation loss decreasing when you run the training step in the distillation script over a STEPS value of 880 and a GLOBAL_BATCH_SIZE value of 2048 with the depth-pruned and width-pruned students, respectively. Figure 7. Depth-pruned validation loss over 880 steps (with GBS=2048) Figure 8. Width-pruned validation loss over 880 steps (with GBS=2048) Conclusion Pruning and distillation represent a significant advancement in the field of language model optimization. The ability to create smaller, more efficient models like the Llama-3.1-Minitron-4B in resource-constrained environments while preserving performance and without sacrificing substantial accuracy is a game changer for the AI industry. The Mistral-NeMo-Minitron-8B model is developed using this approach and outperforms the Llama-3.1-8B model on a variety of benchmarks . This approach reduces computational costs and energy consumption at inference time and also democratizes access to advanced NLP capabilities. This could revolutionize real-world applications in mobile devices, edge computing, and constrained resource settings. As these techniques continue to evolve, you can expect to see even more compact yet powerful language models, further expanding the reach of this technology across various industries. For more information, see the following resources: Jupyter notebooks with pruning and distillation recipes Compact Language Models via Pruning and Knowledge Distillation research paper LLM Pruning and Distillation in Practice: The Minitron Approach with a discussion on performance metrics How to Prune and Distill Llama-3.1 8B to an NVIDIA Llama-3.1-Minitron 4B Model post introducing good practices surrounding pruning and distillation techniques Mistral-NeMo-Minitron 8B Model Delivers Unparalleled Accuracy post showing the performance benchmarks of the Mistral-NeMo-Minitron-8B model
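As a conceptual companion to the distillation runs in this tutorial, the snippet below shows what a logit-based distillation loss looks like in plain PyTorch: a temperature-scaled KL divergence between teacher and student logits. This illustrates the general technique rather than NeMo's internal implementation, and the temperature and tensor shapes are arbitrary.

```python
import torch
import torch.nn.functional as F

def logit_distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student distributions."""
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(s, t, reduction="batchmean") * temperature**2

# Toy shapes: batch of 4 sequences, 8 tokens each, a 32k-entry vocabulary.
student_logits = torch.randn(4, 8, 32_000, requires_grad=True)
teacher_logits = torch.randn(4, 8, 32_000)

loss = logit_distillation_loss(student_logits, teacher_logits)
loss.backward()
print(loss.item())
```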
https://developer.nvidia.com/zh-cn/blog/llm-model-pruning-and-knowledge-distillation-with-nvidia-nemo-framework/
使用 NVIDIA NeMo 框架进行 LLM 模型剪枝和知识蒸馏
模型剪枝和知识蒸馏是功能强大且经济高效的策略,用于从最初较大的同级获得较小的语言模型。 剪枝:丢弃图层 (深度剪枝) 或丢弃神经元、注意力头和嵌入通道 (宽度剪枝)。 知识蒸馏: 将知识从大型教师模型转移到较小的学生模型,目标是创建更高效、更小、运行速度更快且资源密集型更低的模型。 在一篇“ 如何剪枝和蒸馏 Llama-3.1 8B ”博文中,讨论了使用 大语言模型(LLM) 的最佳实践,该模型将深度、宽度、注意力和 MLP 剪枝与基于蒸馏的知识重新训练相结合。 在本文中,我们提供了一个关于 NVIDIA NeMo 框架中基于简单数据集的剪枝和蒸馏工作流的演练教程。本教程使用 Meta-Llama-3.1-8B 作为教师模型,目标模型大小为 4B。我们还会可视化并讨论训练结果。 概述 本教程重点介绍如何创建一个简单的工作流,用于准备数据集,针对 WikiText-103-v1 数据集对教师进行微调,然后对模型进行剪枝和蒸馏以创建 4B 模型。WikiText-103-v1 数据集包含从维基百科上一系列经过验证的“良好”和“精选”文章中提取的逾 100M 个令牌。它已在 Hugging Face 上公开发布。 在本教程中,您将定义涉及以下高级步骤的剪枝和蒸馏工作流 (图 1)。 图 1. 从获取数据集到创建蒸 4B 模型的步骤 准备工作: 下载数据集并转换为 JSONL。 通过对数据集进行标记化预处理。 在数据集上微调教师模型。 深度剪枝微调的教师模型。深度剪枝模型是学生网络的起点。 Width-prune 经过微调的教师模型。宽度剪枝模型是学生网络的起点。 通过将 8B 模型用作教师,将 4B 剪枝模型用作学生,将知识从教师提炼给学生。 要访问本教程中的 Jupyter 笔记本,请参阅 /NVIDIA/NeMo GitHub 存储库。 预备知识 您需要访问至少 8 个 NVIDIA GPUs(单个显存为 80 GB),例如 8 个 H100-80GB 或 A100-80GB GPUs,以及一个支持 Docker 的环境。 按照项目的 README 文件 中的说明安装 NeMo 框架,下载 Meta-Llama-3.1-8B Instruct 模型,并获取 Hugging Face 访问令牌的访问权限。 下载数据集 下载 WikiText-103-v1 数据集,并使用以下代码或运行 introduction notebook ,将训练、测试和验证拆分转换为 JSONL 文件: # Split into train, test and val files import json import os from datasets import load_dataset # Load the WikiText-103 dataset dataset = load_dataset("wikitext", "wikitext-103-v1") # Define the destination folder data_folder = 'wikitext-data' os.makedirs(data_folder, exist_ok=True) # Define file paths and destination paths file_paths = { 'train': os.path.join(data_folder, 'wikitext-train.jsonl'), 'validation': os.path.join(data_folder, 'wikitext-val.jsonl'), 'test': os.path.join(data_folder, 'wikitext-test.jsonl') } # Function to save dataset split to a JSONL file def save_to_jsonl(file_path, data): with open(file_path, 'w') as file: for item in data: file.write(json.dumps(item) + '\n') # Define splits splits = ["train", "validation", "test"] # Save splits to JSONL files and calculate their sizes for split in splits: if split in dataset: save_to_jsonl(file_paths[split], dataset[split]) else: print(f"Split {split} not found in the dataset.") 准备数据集 剪枝和蒸馏脚本需要通过使用 meta-llama/Meta-Llama-3.1-8B 标记器模型对数据文件进行标记化来预处理数据文件,从而将数据转换为内存映射格式。这可以通过 NeMo 框架中的预处理脚本 preprocess_data_for_megatron.py 完成。 在 train split 中运行以下脚本,以准备用于剪枝和蒸馏的数据集: !python /opt/NeMo/scripts/nlp_language_modeling/preprocess_data_for_megatron.py \ --input="./wikitext-data/wikitext-train.jsonl" \ --tokenizer-library='huggingface' \ --tokenizer-type='meta-llama/Meta-Llama-3.1-8B' \ --output-prefix=wikitext_tokenized_train \ --append-eod \ --workers=32 对测试和验证拆分运行脚本。 数据准备 notebook 包含用于创建可用于微调 teacher model 的标记化 wikitext_tokenized_{train/val/test}_text_document.{idx/bin} 文件的所有脚本。 在数据集上微调教师模型 使用准备好的数据集,对未剪枝的教师模型执行微调过程。本节展示了脚本的用法,而非性能,因此运行微调设置时,将 GLOBAL_BATCH_SIZE 设置为 128,将 STEPS 设置为 30,以确保缩短训练时间。 图 2、教师微调 运行 megatron_gpt_pretraining.py 脚本,以修正用于训练模型的原始数据集的分布偏移。在不修正分布偏移的情况下,教师会在提取数据集时提供次优指导。 %%bash export CUDA_DEVICE_MAX_CONNECTIONS=1 # Set path(s) if different: MODEL="/workspace/llama-3_1-8b-nemo_v1.0/llama3_1_8b.nemo" # Can change these to accommodate resources: TENSOR_PARALLEL_SIZE=8 NODES=1 MICRO_BATCH_SIZE=4 # Don't change the following: EXPERIMENT_DIR="distill_trainings" EXPERIMENT_NAME="megatron_llama_ft" DATA_TRAIN='wikitext_tokenized_train_text_document' DATA_VAL='wikitext_tokenized_test_text_document' DATA_TEST='wikitext_tokenized_val_text_document' STEPS=30 GLOBAL_BATCH_SIZE=128 LOG_INTERVAL=1 VAL_INTERVAL=10 NUM_VAL_BATCHES=5 LR=1e-4 MIN_LR=1e-5 WARMUP_STEPS=2 cmd="torchrun --nproc-per-node=${TENSOR_PARALLEL_SIZE}" ${cmd} /opt/NeMo/examples/nlp/language_modeling/megatron_gpt_pretraining.py \ 
--config-path /opt/NeMo/examples/nlp/language_modeling/conf/ \ --config-name megatron_llama_distill.yaml \ \ name=${EXPERIMENT_NAME} \ \ exp_manager.exp_dir=${EXPERIMENT_DIR} \ exp_manager.checkpoint_callback_params.save_top_k=1 \ exp_manager.checkpoint_callback_params.save_nemo_on_train_end=True \ \ trainer.max_steps=${STEPS} \ trainer.log_every_n_steps=${LOG_INTERVAL} \ 运行脚本或执行 教师微调 notebook 可创建经过微调的教师模型。 剪枝经过微调的教师模型以创建学生模型 您可以使用两种方法来剪枝经过微调的教师模型:depth-pruning 和 width-pruning。 从 技术报告 中可以看到, 宽度剪枝 的准确性通常优于 深度剪枝 ,但代价是增加推理延迟。根据这些考虑因素,选择执行深度剪枝、宽度剪枝或这两种方法。 图 3、剪枝经过微调的教师模型 对经过微调的教师模型进行深度剪枝,以创建一个学生模型 在第一种方法中,您可以对模型进行深度剪枝。要从 8B 到 4B 模型,请剪枝最后 16 层 (第 16 至 31 层)。运行 megatron_gpt_drop_layers.py 脚本以深度微调经过调优的教师模型 : !python -m torch.distributed.launch --nproc_per_node=8 \ /opt/NeMo/examples/nlp/language_modeling/megatron_gpt_drop_layers.py \ --path_to_nemo "./distill_trainings/megatron_llama_ft/checkpoints/megatron_llama_ft.nemo" \ --path_to_save "/workspace/4b_depth_pruned_model.nemo" \ --tensor_model_parallel_size 8 \ --pipeline_model_parallel_size 1 \ --gpus_per_node 8 \ --drop_layers 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 运行此脚本或执行 深度剪枝 notebook 会导致创建较小的检查点,并删除最后 16 层: 4b_depth_pruned_model.nemo 。 Width-prune 经过微调的教师模型,以创建一个学生模型 在第二种方法中,您可以调整模型的宽度。要从 8B 模型升级到 4B 模型,请通过减少 MLP 中间维度和隐藏大小以及重新训练注意力头数和层数来剪枝模型。 运行 megatron_gpt_prune.py 脚本,以调整经过微调的教师模型的宽度: !torchrun --nproc-per-node=8 /opt/NeMo/examples/nlp/language_modeling/megatron_gpt_prune.py \ model.restore_from_path="./distill_trainings/megatron_llama_ft/checkpoints/megatron_llama_ft.nemo" \ model.tensor_model_parallel_size=1 \ model.pipeline_model_parallel_size=8 \ +model.dist_ckpt_load_strictness=log_all \ inference.batch_size=64 \ trainer.num_nodes=1 \ trainer.precision=bf16 \ trainer.devices=8 \ prune.ffn_hidden_size=9216 \ prune.num_attention_heads=null \ prune.num_query_groups=null \ prune.hidden_size=3072 \ export.save_path="/workspace/4b_width_pruned_model.nemo" 运行此脚本或执行宽度剪枝 notebook 会导致创建较小的宽度剪枝检查点: 4b_width_pruned_model.nemo 。 蒸馏知识从教师转化为学生模型 蒸馏过程将微调模型 (8B) 用作教师模型,将剪枝模型用作学生模型 (4B),将蒸馏用作较小的 4B 模型。目前 NeMo 中只提供 logit 损失函数。 图 4. 
蒸馏工作流程 在本节中,您将教师模型中的知识分为两个学生模型,并进行比较: 蒸馏从微调教师到深度剪枝学生的知识 蒸馏从微调教师到宽度剪枝学生的知识 蒸馏知识,从经过 fine-tuned 的教师到经过 depth-pruned 的学生模型 运行 megatron_gpt_distillation.py 脚本,将蒸馏知识从教师扩展到深度剪枝学生模型。 %%bash export CUDA_DEVICE_MAX_CONNECTIONS=1 # Can change these to accommodate resources: TENSOR_PARALLEL_SIZE=8 NODES=1 MICRO_BATCH_SIZE=4 # Don't change the following: EXPERIMENT_DIR="distill_trainings" EXPERIMENT_NAME="megatron_llama_distill_depth_pruned_student" TEACHER="${EXPERIMENT_DIR}/megatron_llama_ft/checkpoints/megatron_llama_ft.nemo" STUDENT="/workspace/4b_depth_pruned_model.nemo" FINAL_MODEL_PATH="${EXPERIMENT_DIR}/${EXPERIMENT_NAME}/checkpoints/depth_pruned_distilled_4b_model.nemo" DATA_TRAIN='wikitext_tokenized_train_text_document' DATA_VAL='wikitext_tokenized_test_text_document' DATA_TEST='wikitext_tokenized_val_text_document' STEPS=30 GLOBAL_BATCH_SIZE=128 LOG_INTERVAL=1 VAL_INTERVAL=10 NUM_VAL_BATCHES=5 LR=1e-4 MIN_LR=1e-5 WARMUP_STEPS=2 cmd="torchrun --nproc-per-node=${TENSOR_PARALLEL_SIZE}" ${cmd} /opt/NeMo/examples/nlp/language_modeling/megatron_gpt_distillation.py \ name=${EXPERIMENT_NAME} \ \ exp_manager.exp_dir=${EXPERIMENT_DIR} \ exp_manager.checkpoint_callback_params.save_top_k=1 \ \ trainer.max_steps=${STEPS} \ trainer.log_every_n_steps=${LOG_INTERVAL} \ trainer.val_check_interval=${VAL_INTERVAL} \ trainer.limit_val_batches=${NUM_VAL_BATCHES} \ +trainer.num_sanity_val_steps=0 \ \ trainer.precision=bf16 \ trainer.devices=${TENSOR_PARALLEL_SIZE} \ trainer.num_nodes=${NODES} \ \ "model.data.data_prefix={train:[1.0,$DATA_TRAIN],validation:[$DATA_VAL],test:[$DATA_TEST]}" \ \ model.restore_from_path=${STUDENT} \ model.kd_teacher_restore_from_path=${TEACHER} \ model.nemo_path=${FINAL_MODEL_PATH} \ \ model.tensor_model_parallel_size=${TENSOR_PARALLEL_SIZE} \ model.sequence_parallel=True \ model.micro_batch_size=${MICRO_BATCH_SIZE} \ model.global_batch_size=${GLOBAL_BATCH_SIZE} \ \ model.optim.name=distributed_fused_adam \ model.optim.lr=${LR} \ model.optim.sched.min_lr=${MIN_LR} \ model.optim.sched.warmup_steps=${WARMUP_STEPS} 运行此脚本或经过 深度剪枝的提炼学生 notebook 会创建一个提炼模型: depth_pruned_distilled_4b_model.nemo 。 蒸馏知识,从经过微调的教师到宽度剪枝的学生模型 运行 megatron_gpt_distillation.py 脚本 ,将蒸馏知识从教师传递到宽度剪枝的学生模型。在运行脚本之前,更改学生模型 ( STUDENT ) 和蒸馏模型的保存目录 ( FINAL_MODEL_PATH )。 运行经宽度剪枝的提炼学生 notebook 会生成提炼模型 width_pruned_distilled_4b_model.nemo 。 显示验证损失 运行以下代码命令或执行 结果 notebook 以可视化验证损失。在运行代码示例之前,请修改检查点的路径: %load_ext tensorboard %tensorboard --logdir "distill_trainings/megatron_llama_distill/" --port=6007 当在 STEPS 值为 30 的情况下运行蒸馏脚本时,您可以看到验证损失,图 5 和图 6 分别为深度剪枝学生和宽度剪枝学生。 图 5. Depth-pruned 验证损失超过 30 步 图 6、超过 30 步的宽度剪枝验证损失 要为您的用例配置此管道,请在具有更大 GLOBAL_BATCH_SIZE, STEPS 和 VAL_INTERVAL 值的多节点集群上运行脚本,以确保验证损失得到改善。 图 7 和图 8 显示,当您在蒸馏脚本中运行训练步骤时,在分别使用深度剪枝和宽度剪枝学生的情况下, STEPS 值为 880 和 GLOBAL_BATCH_SIZE 值为 2048 时,验证损失会减少。 图 7、深度剪枝验证损失超过 880 步 (使用 GBS=2048 时) 图 8、宽度剪枝验证损失超过 880 步 (使用 GBS=2048 时) 结束语 剪枝和蒸馏代表了语言模型优化领域的重大进步。能够在资源受限的环境中创建更小、更高效的模型 (如 Llama-3.1-Minitron-4B),同时保持性能且不牺牲大量准确性,这是 AI 行业的游戏规则变革。 Mistral-NeMo-Minitron-8B 模型 是使用这种方法开发的,在各种基准测试中表现优于 Llama-3.1-8B 模型。 这种方法降低了推理时的计算成本和能耗,还普及了对高级 NLP 功能的使用。这可能会彻底改变移动设备、边缘计算和受限资源设置中的真实应用。随着这些技术的不断发展,您预计会看到更紧凑但强大的语言模型,进一步扩展这项技术的覆盖范围到各行各业。 有关更多信息,请参阅以下资源: 支持剪枝和蒸馏 recipes 的 Jupyter notebooks 通过剪枝和知识构建紧凑语言模型蒸馏 Compact Language Models via Pruning and Knowledge Distillation 研究论文 LLM 剪枝和蒸馏的实际应用:Minitron 方法与性能指标的讨论 如何剪枝和蒸馏 Llama-3.1 8B 到 NVIDIA Llama-3.1-Minitron 4B 模型的帖子 ,介绍了围绕剪枝和蒸馏技术的良好实践 Mistral-NeMo-Minitron 8B 模型 在展示 Mistral-NeMo-Minitron 8B 模型的性能基准测试时,可提供无与伦比的准确性
https://developer.nvidia.com/blog/nvidia-dgx-cloud-introduces-ready-to-use-templates-to-benchmark-ai-platform-performance/
NVIDIA DGX Cloud Introduces Ready-To-Use Templates to Benchmark AI Platform Performance
In the rapidly evolving landscape of AI systems and workloads, achieving optimal model training performance extends far beyond chip speed. It requires a comprehensive evaluation of the entire stack, from compute to networking to model framework. Navigating the complexities of AI system performance can be difficult. There are many application changes that you can make, from precision to parallelism, but they currently require significant effort and specialized knowledge to implement effectively. NVIDIA DGX Cloud Benchmarking Recipes offer guidance for boosting training performance by sharing what good performance looks like per workload and best practices for how to get there. For more information, see NVIDIA DGX Cloud Benchmarking Recipes . Evaluating an AI system holistically DGX Cloud Benchmarking Recipes are an end-to-end benchmarking suite that can both measure performance in real-world environments and identify optimization opportunities in AI training workloads. These benchmarking recipes are crafted to evaluate performance for real-world AI applications, factoring in the complete AI stack. Chip-centric metrics, such as peak floating-point operations per second (FLOPS) and bandwidth, can be inadequate in estimating end-to-end performance. Traditionally, peak FLOPS has often been used to compare platforms, but FLOPS is only one of the many components that affect the end-to-end application performance. In practice, the training time for an AI model is a function of many other components, such as network, software, firmware, and underlying infrastructure. For example, the high-bandwidth NVIDIA NVLink network fabric enables scaling parallelism strategies, such as tensor parallelism, beyond the traditional single-server, 8-GPU limit. With NVIDIA Grace class systems, the NVLink networking layer enables you to achieve higher FLOPS in real-world applications, bridging the gap between theoretical and practical performance. Evaluating AI platforms solely through FLOPS can result in inaccurate estimations of total training time and the associated costs, without accounting for the rest of the platform. For modern AI workloads such as fine-tuning Llama 3.1 models, it’s more accurate to use benchmarks that measure end-to-end performance across the entire system, providing a holistic view of how a platform will perform in actual usage scenarios. Figure 1. Performance impacts in cloud-based AI development Infrastructure factors affecting performance include the following: Server hardware designs Operating systems Virtualization layers Software stacks Network architectures Storage implementations AI workload factors affecting performance include the following: Compute-to-communication ratio Model scaling factors Batch size Precision format Data loading strategies Tuning workloads to optimal performance Beyond the job execution aspect of benchmarking, NVIDIA DGX Cloud Benchmarking Recipes are also playbooks for optimizing popular models and workloads. These recipes provide workload-specific strategies to maximize performance for popular models such as Llama 3.1, Grok, and Mixtral. 
Workload | Type | Description | Container Version | Dataset | Max Scale (#GPUs) | DTYPE
Nemotron4 | Training | 15B and 340B benchmarks | 24.09 | Synthetic | 2048 | FP8, BF16
Nemo Megatron | Training | 175B benchmarks | 24.05 | Pile | 2048 | FP8, BF16
Llama 3.1 | Training | 8B, 70B, and 405B benchmarks | 24.09 | Pile | 2304 | FP8, BF16
PaXML | Training | 5B and 175B benchmarks | 24.03.04 | Synthetic | 2048 | FP8, BF16
Maxtext | Training | Llama2 70B benchmarks | 2024.12.09 | Synthetic | 2048 | FP8, BF16
Grok1 | Training | Grok1 314B benchmarks | 24.09 | Synthetic | 2048 | FP8, BF16
Llama 2 | Fine Tuning | Hugging Face 70B benchmarks | 24.02 | HF Llama2 | 512 | BF16
Mistral | Fine Tuning | Hugging Face 7B benchmarks | 24.02 | HF Mistral | 256 | BF16
Table 1. Available workloads in the DGX Cloud Benchmarking Recipes catalog In Table 1, workloads include both training and fine tuning and support both FP8 and BF16 where possible. Each training workload has a different fingerprint of how it exercises the platform. A basic question you might ask about a workload's fingerprint is, "How much does compute time overlap with communication or networking time?" Some models may be more compute-bound and some are more communication-bound, depending on the choice of parallelism and hyperparameters such as sequence length and batch size. Scaling behavior also varies between models as the number of GPUs is increased, and it depends on the type of scaling, weak or strong. For each workload and cluster scale, you must tune your model and system to achieve optimal performance. On the model side, this may involve adjusting the parallelism strategy, batch sizes, precision formats, and data loading strategies, among other configurations. On the system side, make sure that the workload makes maximal use of NVLink high bandwidth (for example, for tensor and context parallelism) and confirm that the scale-out fabric is not a blocker for the corresponding networking collectives (for example, for pipeline or expert parallelism). The latter requires a fabric that provides low transport latency (RDMA) and effective congestion management and adaptive routing as found in the reference NVIDIA Spectrum-X and InfiniBand networking architectures. For efficient scaling of AI workloads, using these technologies is essential as they help mitigate the impact of jitter, ensuring consistent performance and reliability. Using FP8 DGX Cloud Benchmarking Recipes provide optimized configurations and tuning recommendations specifically for FP8 workloads, helping you achieve optimal performance with this precision format. For example, the recipe for Llama 3.1 70B training includes FP8 settings that have been carefully tested and optimized for DGX Cloud platforms. Understanding what constitutes good performance for a given AI workload can be complex. DGX Cloud Benchmarking Recipes provide a range of baseline performance results for various popular models, enabling you to set realistic expectations and goals for your own implementations. These baselines include metrics such as model FLOPS utilization (MFU), which measures how efficiently a model uses the available compute resources. You can see how MFU and throughput compare for popular models. By comparing your results to these benchmarks, you can gauge the effectiveness of optimizations and identify areas for improvement. DeepSeek-R1 is a 671B-parameter model that runs on one NVIDIA H200 GPU node. Its high compute utilization shows how holistic optimization of compute, networking, and parallelism strategies can push delivered performance closer to theoretical limits. 
Systematic benchmarking enables direct comparisons, helping teams collaboratively optimize models and platforms to maximize the value of GPU systems. Finally, these per-workload performance optimizations also show the need for further research and discussion around application tuning. For example, the recommended usage of the parallelization strategies varies across combinations of workloads and platforms. Get started with DGX Cloud Benchmarking Recipes The recipes for benchmarking platform performance are hosted in NVIDIA’s public registry, NGC Catalog. For more information about the latest release of recipes, see DGX Cloud Benchmarking 24.11.1 . Within each workload recipe, you can access the following: Containerized benchmarks for reproducibility across environments Scripts that generate synthetic data as needed Performance metrics collection and reporting (to stdout ) Configuration best practices for that workload per platform Performance data from the NVIDIA reference architecture for comparison The recipes require Slurm cluster management. Support for Kubernetes is currently in development. To use the DGX Cloud Benchmarking Recipes, download the recipe that best matches your workload and execute the cluster setup and benchmarking scripts. Keep moving the platform performance goalposts In today’s AI landscape, achieving optimal performance requires looking beyond individual components to understand how entire systems work together. While raw GPU capabilities matter, full optimization comes from carefully tuning every layer of the stack, from hardware and software configuration to workload-specific parameters. At NVIDIA, we use benchmark recipes to continuously refine every layer of the technology stack, from hardware interconnects such as NVIDIA NVLink and NVLink Switch to software libraries such as NVIDIA TensorRT-LLM, enabling substantial performance gains over time. For example, accelerated computing performance has increased 3.4x in MLPerf Inference on NVIDIA H100 GPUs in just one year through continuous improvements in software developments alone. These ongoing optimizations enable organizations to run more complex models, reduce infrastructure requirements, and improve efficiency, driving further innovation. These benchmarking recipes enable your team to: Optimize AI workloads for specific environments, including for FP8. Assess how close a cluster’s performance is to NVIDIA’s observed performance. Identify performance bottlenecks in your current setups. Training large models can take weeks or months and cost millions in compute resources, so modest performance improvements can translate into substantial time and cost savings. By using continually evolving performance optimizations and workload-specific recipes from NVIDIA, your organization can maximize AI infrastructure investments and focus engineering efforts on innovation rather than infrastructure tuning. For more information, see DGX Cloud Benchmarking Recipes .
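Because model FLOPS utilization (MFU) is the headline metric in these baselines, and one stated goal is assessing how close a cluster comes to NVIDIA's observed performance, a back-of-envelope MFU estimate is often useful. The sketch below uses the common "6 x parameters x tokens" approximation for dense transformer training FLOPs; the throughput and per-GPU peak values plugged in at the bottom are illustrative placeholders, not measured DGX Cloud results.

```python
def training_mfu(params: float, tokens_per_second: float,
                 num_gpus: int, peak_flops_per_gpu: float) -> float:
    """Approximate model FLOPS utilization for dense transformer training.

    Uses the standard ~6 * N * D rule of thumb for FLOPs per trained token
    (forward plus backward), ignoring activation recomputation and attention terms.
    """
    achieved_flops = 6.0 * params * tokens_per_second
    peak_flops = num_gpus * peak_flops_per_gpu
    return achieved_flops / peak_flops

# Illustrative numbers only: an 8B-parameter model, a made-up cluster throughput,
# and a nominal 1e15 FLOP/s per-GPU peak.
mfu = training_mfu(params=8e9, tokens_per_second=1.2e6,
                   num_gpus=128, peak_flops_per_gpu=1e15)
print(f"MFU ~ {mfu:.1%}")
```

Comparing such an estimate against the published recipe baselines gives a quick sense of how much headroom remains before deeper tuning is worthwhile.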
https://developer.nvidia.com/zh-cn/blog/nvidia-dgx-cloud-introduces-ready-to-use-templates-to-benchmark-ai-platform-performance/
NVIDIA DGX 云推出即用型模板,更好地评估 AI 平台性能
在快速发展的 AI 系统和工作负载环境中,实现出色的模型训练性能远远超出芯片速度。这需要对整个堆栈进行全面评估,从计算到网络,再到模型框架。 了解复杂的 AI 系统性能可能比较困难。从精度到并行,您可以对许多应用程序进行更改,但目前需要付出大量努力和掌握专业知识才能有效实施这些更改。 NVIDIA DGX Cloud Benchmarking Recipes 为提高训练性能提供了指导,通过分享每个工作负载的良好性能以及实现方法的最佳实践。 如需了解更多信息,请参阅 NVIDIA DGX Cloud Benchmarking Recipes 。 全面评估 AI 系统 DGX 云 Benchmarking Recipes 是一款端到端基准测试套件,既可以衡量真实环境中的性能,也可以识别 AI 训练工作负载中的优化机会。这些基准测试方法经过精心设计,可评估真实 AI 应用的性能,并将其纳入完整的 AI 堆栈。 以芯片为中心的指标 (例如每秒浮点运算峰值 (FLOPS) 和带宽峰值) 可能不足以估算端到端性能。传统上,峰值 FLOPS 通常用于比较平台,但 FLOPS 只是影响端到端应用程序性能的众多组件之一。 在实践中,AI 模型的训练时间是许多其他组件(例如网络、软件、固件和底层基础设施)的函数。 例如,高带宽 NVIDIA NVLink 网络结构支持扩展并行策略(例如张量并行),从而超越传统的单服务器 8-GPU 限制。借助 NVIDIA Grace 类系统,NVLink 网络层可让您在实际应用中实现更高的 FLOPS,从而缩小理论性能和实际性能之间的差距。 仅通过 FLOPS 评估 AI 平台会导致对总训练时间和相关成本的估计不准确,而不考虑平台的其余部分。对于现代 AI 工作负载 (例如微调 Llama 3.1 模型),使用基准来衡量整个系统的端到端性能更为准确,从而全面了解平台在实际使用场景中的表现。 图 1. 基于云的 AI 开发对性能的影响 影响性能的基础架构因素包括: 服务器硬件设计 操作系统 虚拟化层 软件堆栈 网络架构 存储实现 影响性能的 AI 工作负载因素包括: 计算与通信比 模型扩展系数 批量大小 精度格式 数据加载策略 调整工作负载以获得最佳性能 除了基准测试的作业执行方面,NVIDIA DGX Cloud Benchmarking Recipes 也是用于优化热门模型和工作负载的手册。这些方法提供了特定于工作负载的策略,可更大限度地提高 Llama 3.1、Grok 和 Mixtral 等热门模型的性能。 工作负载 类型 说明 容器版本 数据集 最大规模 (GPU 数量) DTYPE Nemotron4 训练 150B 和 340B 基准测试 24.09 合成 2048 FP8、BF16 Nemo Megatron 训练 1750 亿次基准测试 24.05 堆栈 2048 FP8、BF16 Llama 3.1 训练 8B、70B 和 405B 基准测试 24.09 堆栈 2304 FP8、BF16 PaXML 训练 50 亿和 1750 亿项基准测试 2004 年 3 月 24 日 合成 2048 FP8、BF16 Maxtext 训练 Llama2 70B 基准测试 2024.12.09 合成 2048 FP8、BF16 Grok1 训练 Grok1 114B 基准测试 24.09 合成 2048 FP8、BF16 Llama 2 微调 Hugging Face 700B 基准测试 24.02 HF Llama2 512 BF16 Mistral 微调 Hugging Face 70B 基准测试 24.02 高频 Mistral 256 BF16 表 1、DGX Cloud Benchmarking Recipes 目录中的可用工作负载 在表 1 中,工作负载包括训练和微调,并尽可能支持 FP8 和 BF16。 每个训练工作负载对于其如何练习平台都有不同的指纹识别。关于工作负载的指纹识别,您可能会问一个基本问题:“计算时间与通信或网络时间的重叠程度如何?” 根据并行性和超参数(如序列长度和批量大小)的选择,一些模型的计算受限程度可能更高,而一些模型的通信受限程度可能更高。扩展行为也会随着 GPU 数量的增加以及扩展类型(弱或强)的增加而在模型之间发生变化。 对于每种工作负载和集群规模,您必须调整模型和系统以获得最佳性能。 在模型方面,这可能涉及调整并行策略、批量大小、精度格式和数据加载策略以及其他配置。在系统端,请确保工作负载充分利用 NVLink 高带宽(例如,用于张量和上下文并行),并确认横向扩展结构不是相应网络集合的阻塞器(例如,用于 pipeline 或 expert 并行)。 后者需要能够提供低传输延迟(RDMA)、有效的拥塞管理和动态路由的网络,如参考 NVIDIA SpectrumX 和 InfiniBand 网络架构中所示。为了高效扩展 AI 工作负载,使用这些技术至关重要,因为它们有助于减轻抖动的影响,确保一致的性能和可靠性。 使用 FP8 DGX 云 Benchmarking Recipes 专为 FP8 工作负载提供优化的配置和调优建议,可帮助您利用这种精度格式实现出色性能。例如,Llama 3.1 70B 训练方法包括针对 DGX 云平台仔细测试和优化的 FP8 设置。 了解给定 AI 工作负载的良好性能的构成要素可能非常复杂。DGX 云基准测试方法为各种热门模型提供一系列基准性能结果,使您能够为自己的实现设定现实的期望和目标。 这些基准包括模型 FLOPS 利用率(MFU)等指标,用于衡量模型使用可用计算资源的效率。您可以看到热门模型的 MFU 和吞吐量比较情况。通过将结果与这些基准进行比较,您可以衡量优化的有效性并找出需要改进的地方。 DeepSeek-R1 是在一个 NVIDIA H200 GPU 节点上运行的 671B 参数模型。它的高计算利用率表明,计算、网络和并行策略的整体优化如何将交付的性能推向理论极限。系统化的基准测试可实现直接比较,帮助团队协作优化模型和平台,从而更大限度地提高 GPU 系统的价值。 最后,这些针对每个工作负载的性能优化还表明,需要围绕应用程序调优开展进一步的研究和讨论。例如,PyTorch 和 pandas 等框架的并行化策略的推荐用法因工作负载和平台的组合而异。 开始使用 DGX Cloud 基准测试方法 平台性能基准测试方法托管在 NVIDIA 的公共注册表 NGC Catalog 中。有关最新版本 recipes 的更多信息,请参阅 DGX Cloud Benchmarking 24.11.1 。 在每个工作负载 recipe 中,您可以访问以下内容: 针对跨环境再现性的容器化基准测试 根据需要生成合成数据的 Scripts 性能指标收集和报告 (发送至 stdout ) 针对每个平台的工作负载的配置最佳实践 用于比较的 NVIDIA 参考架构的性能数据 方法需要 Slurm 集群管理。对 Kubernetes 的支持目前正在开发中。要使用 DGX Cloud Benchmarking Recipes,下载最适合您工作负载的方法,并执行集群设置和基准测试脚本。 继续推进平台性能目标 在当今的 AI 环境中,实现出色性能需要超越单个组件的视野,以了解整个系统如何协同工作。虽然原始 GPU 功能很重要,但完整的优化来自仔细调整堆栈的每一层,从硬件和软件配置到特定工作负载的参数。 在 NVIDIA,我们使用基准测试方法来不断优化技术堆栈的每一层,从 NVIDIA NVLink 和 NVLink Switch 等硬件互连到 NVIDIA TensorRT-LLM 等软件库,从而随着时间的推移实现显著的性能提升。 例如,仅通过持续改进软件开发, NVIDIA H100 GPU 上的 MLPerf 推理在短短一年内就将加速计算性能提高了 3.4 倍 。这些持续优化使组织能够运行更复杂的模型、降低基础架构需求、提高效率,从而推动进一步创新。 这些基准测试方法使您的团队能够: 针对特定环境 (包括 FP8) 优化 AI 工作负载。 评估集群性能与 NVIDIA 观察到的性能的接近程度。 识别当前设置中的性能瓶颈。 训练大型模型可能需要数周或数月时间,并且会耗费数百万计算资源,因此适度的性能改进可以转化为大幅节省时间和成本。通过使用 NVIDIA 
不断改进的性能优化和特定于工作负载的方法,您的组织可以更大限度地投资 AI 基础架构,并将工程工作重点放在创新上,而不是基础架构调优。 有关更多信息,请参阅 DGX Cloud 基准测试方法 。
https://developer.nvidia.com/blog/nvidia-grace-cpu-integrates-with-the-arm-software-ecosystem/
NVIDIA Grace CPU Integrates with the Arm Software Ecosystem
The NVIDIA Grace CPU is transforming data center design by offering a new level of power-efficient performance. Built specifically for data center scale, the Grace CPU is designed to handle demanding workloads while consuming less power. NVIDIA believes in the benefit of leveraging GPUs to accelerate every workload. However, not all workloads are accelerated. This is especially true for those workloads involving complex, branchy code such as graph analytics, commonly used in popular use cases like fraud detection, operational optimization, and social network analysis. As data centers face increasing power constraints, it’s crucial to accelerate as many workloads as possible and run the rest on the most efficient compute possible. The Grace CPU is optimized to handle both accelerated and CPU-only tasks, delivering up to 2x the performance at the same power as conventional CPUs. The Grace CPU features 72 high-performance and energy-efficient Arm Neoverse V2 cores, connected by the NVIDIA Scalable Coherency Fabric (SCF). This high-bandwidth fabric ensures smooth data flow between CPU cores, cache, memory, and system I/O, providing up to 3.2 TB/s of bisection bandwidth—double that of traditional CPUs. The Grace CPU also uses high-speed LPDDR5X memory with server-class reliability, delivering up to 500 GB/s of memory bandwidth while consuming just one-fifth the energy of traditional DDR memory. In this post, we wanted to share how the Grace CPU builds on the existing Arm ecosystem while taking advantage of the vast array of NVIDIA software and tools. Standard software infrastructure The Grace CPU was designed to be a balanced general-purpose CPU and to work just like any other CPU. The workflow for getting software to run on the Grace CPU is the same workflow that you’d use on any x86 CPU. Standard Linux distros (Ubuntu, RHEL, SLES, and so on) and any multi-platform, open-source compiler (GCC, LLVM, and so on) all support the Grace CPU. The majority of open source software today already supports Arm, and thus is supported on  the Grace CPU. Similarly, any software optimizations and porting done on the Grace CPU also work on the rest of the Arm Neoverse software ecosystem. NVIDIA continues to work with developers and partners in the Arm ecosystem and is committed to ensure that open-source compilers, libraries, frameworks, tools, and applications fully leverage Arm Neoverse-based CPUs, like the Grace CPU. Many cloud-native and commercial ISV applications already provide optimized executables for Arm. The Arm Developer Hub provides a showcase of selected software packages for AI, cloud, data center, 5G, networking, and edge. This hub also provides guidance on how to migrate applications to Arm. This ecosystem is enabled by Arm standards, such as the Arm Server Base System Architecture (SBSA) and the Base Boot Requirements (BBR) of the Arm SystemReady Certification Program. NVIDIA software supports the Arm ecosystem Arm has invested for decades in the software ecosystem. You can innovate and know that the software not only just works but is optimized for Arm. The NVIDIA software ecosystem also takes advantage of decades of work in accelerated computing and has now been optimized for Arm: The NVIDIA HPC SDK and every CUDA component have Arm-native installers and containers . The NVIDIA container ecosystem of NVIDIA NIM microservices and NGC provides deep learning, machine learning, and HPC containers optimized for Arm. 
NVIDIA NIM enhances inference performance, enabling high-throughput and low-latency AI at scale. NVIDIA is also expanding its software ecosystem for Arm CPUs. NVIDIA previously launched a new suite of high performance math libraries for Arm CPUs called NVIDIA Performance Libraries (NVPL) . These libraries implement standard APIs, making their adoption an easy drop-in replacement from x86 at the linking stage. Similarly, math libraries such as the Arm’s Performance Library (ArmPL) are also tuned to maximize the performance of the Grace CPU in addition to any other Arm CPU. For example, Arm has shared how ArmPL Sparse can be used in a similar fashion to x86 . ArmPL has similar APIs to those of the x86 math libraries, which means that developing a wrapper may require nothing more than just a few API changes in the code. NVIDIA is an active participant in the open-source software communities like those for GCC and LLVM compilers. If you don’t want to wait for these regular releases and want to build code that performs optimally on the Grace CPU, the latest optimizations are also made available through the Clang distribution . Seamlessly moving your software to Arm The Arm software ecosystem is large and growing, with hundreds of open source projects and commercial ISVs already supporting the Arm architecture. If your application is not yet supported, you may need to just recompile the source code. There are a variety of tools available to help you do so: NVIDIA LaunchPad has a module on porting to Arm that you can try for yourself. For vector intrinsics, such as AVX, there are tools that simplify conversion. SIMD Everywhere and the A Demonstration of AI and HPC Applications for NVIDIA Grace CPU GTC session walk through several examples. The Arm Neoverse Migration Overview has detailed training. For more information about application porting and optimization, see the NVIDIA Grace Performance Tuning Guide . It includes instructions for setting up and optimizing performance on the Grace CPU. It also provides high-level developer guidance on Arm SIMD programming, the Arm memory model, and other details. Use this guide to help you realize the best possible performance for your particular NVIDIA Grace system. Figure 1. Running software on the Grace CPU uses the same process to optimize as for any other CPU Summary The NVIDIA Grace CPU is designed for the modern data center with 72 high-performance Arm Neoverse V2 cores, an NVIDIA-designed high-bandwidth SCF to maximize performance and high-bandwidth low-power memory. It can deliver up to 2x the performance in the same power envelope as leading traditional x86 CPUs. The NVIDIA Grace CPU is a standards-based Arm SBSA design that works just like any other CPU and is fully compatible with the broad Arm software ecosystem. For more information about software and system setup, see NVIDIA Grace CPU .
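For many applications, the porting story really is "the same code, rebuilt or reinstalled for aarch64." As a small illustration, the script below, which assumes NumPy was installed from aarch64 wheels or built against a tuned BLAS such as NVPL or ArmPL, runs unchanged on Grace and x86 hosts; only the reported architecture and the measured throughput differ.

```python
import platform
import time
import numpy as np

# Reports 'aarch64' on an NVIDIA Grace system and 'x86_64' on x86 hosts;
# the rest of the script is identical on both.
print("architecture:", platform.machine())

n = 4096
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b   # dispatched to whatever BLAS NumPy was built against
elapsed = time.perf_counter() - start

gflops = 2 * n**3 / elapsed / 1e9
print(f"{n}x{n} GEMM: {elapsed:.2f} s, ~{gflops:.0f} GFLOP/s")
```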
https://developer.nvidia.com/zh-cn/blog/nvidia-grace-cpu-integrates-with-the-arm-software-ecosystem/
NVIDIA Grace CPU 与 Arm 软件生态系统集成
NVIDIA Grace CPU 将节能高效的性能提升到了新的高度,正在改变数据中心的设计。Grace CPU 专为数据中心规模打造,旨在处理要求严苛的工作负载,同时降低功耗。 NVIDIA 相信利用 GPU 加速各种工作负载的好处。但是,并非所有工作负载都得到加速。对于涉及复杂的分支代码 (例如图形分析) 的工作负载尤其如此,这些代码通常用于欺诈检测、运营优化和社交网络分析等热门用例。 随着数据中心面临越来越大的功率限制,加速尽可能多的工作负载并在尽可能高效的计算上运行其余工作负载变得至关重要。Grace CPU 经过优化,可处理加速任务和仅使用 CPU 的任务,在功耗与传统 CPU 相同的情况下提供高达 2 倍的性能。 Grace CPU 配备 72 个高性能、高能效的 Arm Neoverse V2 核心,由 NVIDIA Scalable Coherency Fabric (SCF) 连接。这种高带宽结构可确保在 CPU 核心、缓存、内存和系统 I/O 之间实现流畅的数据流,并提供高达 3.2 TB/s 的对分带宽,是传统 CPU 带宽的两倍。 Grace CPU 还采用具有服务器级可靠性的高速 LPDDR5X 内存,提供高达 500 GB/s 的内存带宽,而能耗仅为传统 DDR 内存的五分之一。 在本文中,我们想分享 Grace CPU 如何在现有 Arm 生态系统的基础上构建,同时利用大量 NVIDIA 软件和工具。 标准软件基础架构 Grace CPU 被设计为平衡型通用 CPU,与其他 CPU 一样工作。在 Grace CPU 上运行软件的工作流与在任何 x86 CPU 上使用的工作流相同。标准 Linux 发行版 (Ubuntu、RHEL、SLES 等) 和任何多平台开源编译器 (GCC、LLVM 等) 都支持 Grace CPU。 如今,大多数开源软件已经支持 Arm,因此 Grace CPU 也支持这些软件。同样,在 Grace CPU 上完成的任何软件优化和移植也适用于 Arm Neoverse 软件生态系统的其余部分。 NVIDIA 将继续与 Arm 生态系统中的开发者和合作伙伴合作,并致力于确保开源编译器、库、框架、工具和应用充分利用基于 Arm Neoverse 的 CPU (例如 Grace CPU)。 许多云原生和商用 ISV 应用已经为 Arm 提供了优化的可执行文件。Arm 开发者中心展示了适用于 AI、云、数据中心、5G、网络和边缘的精选软件包。该中心还提供了有关 如何将应用程序迁移到 Arm 的指导 。 此生态系统由 Arm 标准提供支持,例如 Arm Server Base System Architecture(SBSA)和 Arm SystemReady 认证计划的 Base Boot Requirements(BBR)。 NVIDIA 软件支持 Arm 生态系统 数十年来,Arm 一直在软件生态系统方面进行投资。您可以不断创新,并且知道该软件不仅有效,还针对 Arm 进行了优化:NVIDIA 软件生态系统还利用数十年来在加速计算方面的工作成果,现已针对 Arm 进行了优化: NVIDIA HPC SDK 和每个 CUDA 组件都具有 Arm 原生安装程序和 容器 。 NVIDIA 容器生态系统的 NVIDIA NIM 微服务和 NGC 提供深度学习、机器学习和 HPC 容器,优化为 Arm。NVIDIA NIM 可增强推理性能,实现高吞吐量和低延迟的大规模 AI。 NVIDIA 还在扩展面向 Arm CPU 的软件生态系统。NVIDIA 之前为 Arm CPU 推出了一套新的高性能数学库,称为 NVIDIA Performance Libraries (NVPL) 。这些库实现了标准 API,因此在链接阶段采用这些 API 可以轻松替代 x86, 同样, Arm 的性能库(ArmPL) 等数学库也进行了调整,以更大限度地提高 Grace CPU 以及任何其他 Arm CPU 的性能。例如,Arm 分享了如何以与 x86 类似的方式使用 ArmPL Sparse。ArmPL 具有与 x86 数学库类似的 API ,这意味着开发包装器只需要更改代码中的几个 API 即可。 NVIDIA 积极参与开源软件社区,例如 GCC 和 LLVM 编译器社区。如果您不想等待这些常规版本的发布,并且想要构建在 Grace CPU 上以最佳性能执行的代码,您还可以通过 Clang 发行版获得最新的优化。 将您的软件无缝迁移到 Arm Arm 软件生态系统规模庞大且不断发展,已有数百个开源项目和商业 ISV 支持 Arm 架构。如果应用程序尚不受支持,您可能需要重新编译源代码。有多种工具可帮助您实现这一目标: NVIDIA LaunchPad 提供了一个移植到 Arm 的模块,您可以自行试用。 对于 AVX 等向量内部函数,有一些工具可以简化转换。 SIMD Everywhere 和 NVIDIA Grace CPU 的 AI 和 HPC 应用演示 GTC 会议将介绍几个示例。 Arm Neoverse 迁移概述提供了详细的训练。 有关应用程序移植和优化的更多信息,请参阅 NVIDIA Grace Performance Tuning Guide 。其中包含有关在 NVIDIA Grace CPU 上设置和优化性能的说明。它还为开发者提供有关 Arm SIMD 编程、Arm 内存模型和其他详细信息的高级指导。使用本指南帮助您实现特定 NVIDIA Grace 系统的最佳性能。 图 1、在 Grace CPU 上运行软件会使用与任何其他 CPU 相同的流程进行优化 总结 NVIDIA Grace CPU 专为现代数据中心而设计,拥有 72 个高性能 Arm Neoverse V2 核心,这是 NVIDIA 设计的高带宽 SCF,可更大限度地提高性能和高带宽低功耗内存。与领先的传统 x86 CPU 相比,它可以在相同的功率范围内提供高达 2 倍的性能。 NVIDIA Grace CPU 是基于标准的 Arm SBSA 设计,其工作方式与任何其他 CPU 相同,并且与广泛的 Arm 软件生态系统完全兼容。 有关软件和系统设置的更多信息,请参阅 NVIDIA Grace CPU 。
https://developer.nvidia.com/blog/openai-triton-on-nvidia-blackwell-boosts-ai-performance-and-programmability/
OpenAI Triton on NVIDIA Blackwell Boosts AI Performance and Programmability
Matrix multiplication and attention mechanisms are the computational backbone of modern AI workloads. While libraries like NVIDIA cuDNN provide highly optimized implementations, and frameworks such as CUTLASS offer deep customization, many developers and researchers need a middle ground that combines performance with programmability. The open-source Triton compiler on the NVIDIA Blackwell architecture addresses this need by exposing Blackwell’s advanced features through an intuitive programming model. As a result of NVIDIA’s ongoing collaboration with OpenAI, the Triton compiler now supports the NVIDIA Blackwell architecture. This ensures that developers and researchers can use the latest and greatest features from Blackwell architecture easily from the comfort of a Python-based compiler such as Triton. Performance advances on NVIDIA Blackwell The NVIDIA Blackwell architecture introduces substantial improvements in both raw computing power and architectural innovations. NVIDIA’s collaboration with OpenAI has focused on leveraging these capabilities transparently through Triton’s compiler infrastructure, particularly in two key areas: Matrix multiplications including flash attention New precision formats Matrix multiplications The NVIDIA Blackwell architecture adds a brand-new Tensor Core designed from the ground up for improved throughput and energy efficiency . By extending Triton’s Matrix Multiply-Accumulate (MMA) pipelining machinery, we’ve enabled automatic exploitation of NVIDIA Blackwell’s new Tensor Cores. This required careful analysis of memory access patterns and sophisticated compiler transformations to ensure correct and efficient compute / data-movement overlap. The result is exceptional performance for both FP8 and FP16 GEMM operations out of the box, with these optimizations automatically applying to any kernel using Triton’s `tl.dot` primitive. Overall, Triton manages to achieve near-optimal performance, comparable to library implementations across several critical use cases. Figure 1.  Performance improvements with Triton on NVIDIA Blackwell Figure 1 shows that Triton optimizations on NVIDIA Blackwell architecture bring hardware performance improvements to users in both FP16 and FP8 in this K sweep analysis for a typical generative AI size of GEMM kernel, as provided in the Triton tutorials. Flash attention Flash attention, a crucial primitive in modern transformer architectures, sees significant speedups on NVIDIA Blackwell through Triton, with up to 1.5x for FP16 attention over the NVIDIA Hopper GPU architecture. While we continue to optimize absolute performance through ongoing compiler enhancements on FP8 and other precisions, the current work helps customers readily transition to NVIDIA Blackwell on Day 0 for existing products. Another important aspect to note here is the ability to deliver this performance gain “for free” with existing Triton flash attention implementations, requiring no code changes. Figure 2. Large performance gains for more complex workloads Figure 2 shows that more complex workloads, such as the flash attention kernel as provided in the Triton tutorials, again show the large performance gains of the NVIDIA Blackwell architecture when unlocked with Triton compiler improvements. Some improvements from this work have improved NVIDIA Hopper attention performance as well and don’t show up in this data. 
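To make the tl.dot point above concrete, here is a minimal single-tile GEMM sketch written against the public Triton API. It is illustrative rather than taken from the Triton tutorials: the tile shapes are assumptions chosen so the whole problem fits in one block, and kernels written this way pick up the Tensor Core pipelining described above without source changes.
import torch
import triton
import triton.language as tl

@triton.jit
def single_tile_matmul(a_ptr, b_ptr, c_ptr,
                       M: tl.constexpr, N: tl.constexpr, K: tl.constexpr):
    # One (M, K) tile of A and one (K, N) tile of B, row-major and contiguous.
    offs_m = tl.arange(0, M)
    offs_n = tl.arange(0, N)
    offs_k = tl.arange(0, K)
    a = tl.load(a_ptr + offs_m[:, None] * K + offs_k[None, :])
    b = tl.load(b_ptr + offs_k[:, None] * N + offs_n[None, :])
    # tl.dot is the MMA primitive that the compiler maps onto Tensor Cores.
    c = tl.dot(a, b)
    tl.store(c_ptr + offs_m[:, None] * N + offs_n[None, :], c)

a = torch.randn(64, 32, device="cuda", dtype=torch.float16)
b = torch.randn(32, 64, device="cuda", dtype=torch.float16)
c = torch.empty(64, 64, device="cuda", dtype=torch.float32)
single_tile_matmul[(1,)](a, b, c, M=64, N=64, K=32)
print(torch.allclose(c, a.float() @ b.float(), atol=1e-2))
A production kernel would tile over a grid and pipeline loads over K, as in the Triton GEMM tutorial; the point here is only that the tl.dot call is the hook the Blackwell optimizations attach to.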
New precision formats NVIDIA Blackwell introduces revolutionary block-scaled floating point formats, including the Open Compute Project's microscaling formats, which Triton now unlocks for NVIDIA Blackwell-powered hardware acceleration. These formats provide higher average precision at higher performance than the non-native block-scaling techniques frequently emulated in LLM inference projects today. For OCP format support, MXFP8 GEMMs on Triton showcase exceptional performance, similar to the FP8 GEMM performance shown earlier in this post, while natively allowing for scaling in the Tensor Core. Similarly, MXFP4 provides a new operating point in the precision-performance trade-off space while offering double the hardware-accelerated performance of FP8 and MXFP8 GEMMs. To learn more about the new block-scaled floating point support, take a look at the new Triton tutorial dedicated to this functionality. Areas of improvement going forward The layout and packing of sub-byte datatype formats like MXFP4 still require care by the end user. We look forward to working with the community to improve the ergonomics for kernel authors and to enable seamless framework integrations. The matrix multiplication kernels referenced earlier still achieve relatively low utilization across all data types when GEMM_K is small. This can be mitigated through manual sub-tiling in the kernel itself, which has been implemented in the GEMM tutorials as an example. It will eventually be addressed transparently in the compiler through automatic warp specialization. More information Philippe Tillet, the creator of Triton, and NVIDIA will be diving into the details of this NVIDIA Blackwell work and the resulting performance at the NVIDIA GTC conference on March 17. Register to attend GTC 2025 virtually or attend live. This release establishes a powerful foundation for NVIDIA Blackwell support in Triton, but it's just the beginning. Here's how you can help shape what's next: Explore the complete implementation in the /triton-lang/triton GitHub repo. Join the vibrant GPU MODE Discord community. Dive into our growing collection of examples and tutorials. Start building with Triton on NVIDIA Blackwell today and unlock the full potential of NVIDIA's latest architecture while maintaining complete control over your development. Have ideas or encountered issues? Contact our NVIDIA product manager Matthew Nicely by tagging him on GitHub.
https://developer.nvidia.com/zh-cn/blog/openai-triton-on-nvidia-blackwell-boosts-ai-performance-and-programmability/
NVIDIA Blackwell 上的 OpenAI Triton 提升 AI 性能和可编程性
矩阵乘法和注意力机制是现代 AI 工作负载的计算支柱。虽然库如 NVIDIA cuDNN 提供高度优化的实现,而框架如 CUTLASS 提供深度自定义,但许多开发者和研究人员都需要将性能与可编程性结合起来的中间点。 NVIDIA Blackwell 架构 上的 开源 Triton 编译器 通过直观的编程模型展示 Blackwell 的高级功能来满足这一需求。 得益于 NVIDIA 与 OpenAI 的持续合作,Triton 编译器现已支持 NVIDIA Blackwell 架构。这可确保开发者和研究人员可以通过基于 Python 的编译器 (例如 Triton) 轻松使用 Blackwell 架构中最新的出色功能。 NVIDIA Blackwell 的性能提升 NVIDIA Blackwell 架构在原始计算能力和架构创新方面实现了显著提升。NVIDIA 与 OpenAI 的合作重点是通过 Triton 的编译器基础设施透明地利用这些功能,特别是在两个关键领域: 矩阵乘法,包括 flash attention 全新精度格式 矩阵乘法 NVIDIA Blackwell 架构添加了全新的 Tensor Core,该核心经过全新设计,可提高吞吐量和 能效 。 通过扩展 Triton 的矩阵乘积累加(MMA)流水线机制,我们可以自动利用 NVIDIA Blackwell 的全新 Tensor Cores。这需要仔细分析内存访问模式和复杂的编译器转换,以确保正确高效的计算/数据移动重叠。 因此,开箱即用的 FP8 和 FP16 GEMM 运算性能卓越,这些优化可自动应用到使用 Triton 的 `tl.dot` 基元的任何内核。总体而言,Triton 设法实现了近乎最佳的性能,可与多个关键用例中的库实现相媲美。 图 1. 在 NVIDIA Blackwell 上使用 Triton 提升性能 图 1 显示,在此针对 GEMM 内核典型生成式 AI 大小的 K 扫描分析中,NVIDIA Blackwell 架构上的 Triton 优化为 FP16 和 FP8 用户带来了硬件性能提升,正如 Triton 教程中所提供的那样。 闪光注意力 Flash Attention 是现代 Transformer 架构中的一个关键基元,通过 Triton 在 NVIDIA Blackwell 上显著加速,与 NVIDIA Hopper GPU 架构相比,FP16 注意力提升高达 1.5 倍。 虽然我们会继续通过 FP8 和其他精度的持续编译器增强来优化绝对性能,但目前的工作可以帮助客户在现有产品发布当天就轻松过渡到 NVIDIA Blackwell。 这里要注意的另一个重要方面是,能够通过现有的 Triton 闪存注意力实现“免费”提供这种性能提升,而无需更改代码。 图 2、大幅提升性能,适用于更复杂的工作负载 图 2 显示更复杂的工作负载 (例如 Triton 教程中提供的 flash attention kernel) 再次显示,在使用 Triton 编译器改进进行解锁时,NVIDIA Blackwell 架构可实现巨大的性能提升。这项工作的一些改进也提高了 NVIDIA Hopper 的注意力表现,并且不会显示在此数据中。 全新精度格式 NVIDIA Blackwell 引入了革命性的块级浮点格式,包括 Open Computing Project 的 microscaling 格式 ,Triton 现在为 NVIDIA Blackwell 提供硬件加速功能。 相较于目前 LLM 推理项目中常见的非原生块扩展技术,这些格式可提供更高的平均精度和更高的性能。 对于 OCP 格式支持,Triton 上的 MXFP8 GEMMs 展示了非凡的性能,与本文前面展示的 FP8 GEMMs 加速性能相似,同时原生允许在 Tensor Core 中进行扩展。 同样,MXFP4 在精度性能权衡空间中提供了一个新的操作点,同时提供的硬件加速性能是 FP8 和 MXFP8 GEMMs 的两倍。 如需详细了解新的块级浮点支持,请参阅专门介绍此功能的 新 Triton 教程 。 未来的改进领域 最终用户仍需小心处理 MXFP4 等子字节数据类型格式的布局和打包。我们期待与社区合作,为内核作者改进人体工程学,并实现无缝框架集成。 在 GEMM_K 很小的情况下,所有数据类型中先前引用的矩阵乘法内核的利用率仍然相对较低。这可以通过内核本身中的手动子平铺来缓解,并已在 GEMM 教程中作为示例实施。最终将通过自动 warp-specialization 在编译器中以透明方式解决此问题。 更多信息 在 3 月 17 日举行的 NVIDIA GTC 大会上,Triton 的创建者 Phillippe Tillet 和 NVIDIA 将深入探讨 NVIDIA Blackwell 的这项工作的细节以及由此产生的性能。 报名参加 GTC 2025 线上大会或现场参会。 此版本为 NVIDIA Blackwell 在 Triton 中的支持奠定了坚实基础,但这还只是开始。以下是您如何帮助塑造未来: 在 /triton-lang/triton GitHub 存储库中探索完整的实现。 加入充满活力的 GPU MODE Discord 社区。 深入了解我们不断增加的 示例和教程 。 立即开始在 NVIDIA Blackwell 上使用 Triton 进行构建,充分发挥 NVIDIA 最新架构的潜力,同时完全掌控您的开发。 是否有想法或遇到问题?请在 Github 上为我们的 NVIDIA 产品经理 Matthew Nicely 添加标签,联系他。
https://developer.nvidia.com/blog/ai-for-climate-energy-and-ecosystem-resilience-at-nvidia-gtc-2025/
AI for Climate, Energy, and Ecosystem Resilience at NVIDIA GTC 2025
From mitigating climate change to improving disaster response and environmental monitoring, AI is reshaping how we tackle critical global challenges. Advancements in fast, high-resolution climate forecasting, real-time monitoring, and digital twins are equipping scientists, policy-makers, and industry leaders with data-driven tools to understand, plan for, and respond to a warming planet. At NVIDIA GTC 2025 , running March 17–21, thought leaders, scientists, developers, and innovators will highlight how AI is helping shape a more sustainable and resilient future. The following sessions showcase AI’s role in climate forecasting, disaster mitigation, and conservation efforts, helping communities adapt to an increasingly unpredictable world. Strengthening extreme weather predictions and disaster response As global warming intensifies, and extreme weather events become more severe and frequent, communities need faster and more precise natural disaster predictions and response strategies. AI is improving flood, wildfire, and hurricane modeling, enabling earlier warnings and more effective mitigation efforts. Using large-scale weather simulations, satellite data analysis, and real-time predictive insights, AI is helping emergency responders and decision-makers minimize damage, improve disaster resilience, and keep communities safe. Sessions Transform Natural Catastrophe Risk Simulations with Advanced Computational Tools AXA, AWS, and NVIDIA use Earth-2 simulations to model extreme weather events at unprecedented scale and precision. These tools help insurers, urban planners, and governments prepare for disasters by improving risk estimation and resilience planning, reducing the financial and societal impact of natural disasters. Boosting Earth System Model Outputs Using Exascale Climate Emulators Researchers at KAUST and Saint Louis University use exascale climate emulators powered by NVIDIA GPUs to accelerate and refine earth system model outputs. Achieving ultra-high spatial resolution (3.5 km), these models enable more accurate storm and climate simulations, improving extreme weather predictions, and helping emergency responders, insurers, and policymakers improve disaster response planning and climate resilience efforts. Harnessing AI for Advanced Flood Risk Modelling and Mitigation Strategies JBA Risk Management uses AI-driven weather models, including Spherical Fourier Neural Operators, to simulate storm seasons with greater accuracy. Using NVIDIA NIM, these models improve flood risk assessment, disaster response planning, and infrastructure investment decisions, all while reducing computational costs. Near-Real-Time Fire Detection Leveraging Edge AI in Space: Transforming Thermal Earth Observation with NVIDIA Wildfires require rapid response to minimize devastation. OroraTech’s use of NVIDIA Jetson technology onboard CubeSats delivers fire detection within 60 seconds, providing first responders with critical real-time data to deploy resources more effectively. Autonomous Systems and Remote Sensing for Better Earth Data Autonomous aircraft equipped with AI are revolutionizing environmental monitoring by collecting high-resolution data at scale. MIT researchers showcase how these low-cost, long-range systems gather critical data for precision agriculture, disaster response, and environmental assessments, providing actionable insights at scale. 
Boosting climate forecasting for energy and infrastructure planning Accurate, long-term climate forecasting is essential for guiding infrastructure investments, optimizing energy grids, and enhancing sustainability planning. AI-powered models make climate projections faster and more precise, guiding solar energy integration, climate-resilient infrastructure development, and sustainability strategies. These high-resolution, data-driven insights help city planners and decision-makers anticipate evolving conditions for a more resilient future. Sessions The Role of AI and Accelerated Computing in Understanding and Mitigating Urban Climate Change Researchers are using AI, digital twins, and accelerated computing to model rising temperatures, air pollution, and extreme weather in cities. This session explores how generative AI and machine learning analyze satellite data, IoT sensors, and social insights to create high-resolution simulations of urban heat islands and pollution patterns. Experts will discuss how these insights help guide climate-resilient infrastructure, energy efficiency, and targeted adaptation strategies while addressing challenges like computational efficiency and equitable access to AI-driven solutions. Enhancing Photovoltaic Power Predictions with High-Resolution Weather Forecasting from NVIDIA Earth-2 NVIDIA Earth-2 is revolutionizing solar energy forecasting with ultra-precise, AI-driven weather predictions. In collaboration with GCL and Peking University, researchers show how time series models and high-resolution weather data improve photovoltaic power forecasts, enhancing grid stability, and improving renewable energy planning for power providers and policymakers. Applying AI Weather Models with NVIDIA Earth-2 NVIDIA Earth-2 AI-powered forecasting models generate high-resolution weather predictions at a fraction of the cost and time of traditional numerical models. This training lab explores how AI-driven downscaling techniques improve forecasting accuracy for industries such as energy and agriculture, providing more accurate forecasting and better decision-making across critical sectors. Advancing AI-driven environmental monitoring and conservation AI is transforming environmental monitoring, conservation, and ecosystem management with advanced digital twin technology and autonomous systems. From high-resolution coral reef modeling to large-scale ecosystem assessments, these innovations provide scientists and conservationists with critical insights to guide conservation strategies and protect biodiversity. Session Exploring Earth’s Oceans: Using Digital Twins to Drive Digital Ocean Collaboration Oceans regulate climate and support biodiversity, but their complexity makes them challenging to study. MITRE uses NVIDIA Omniverse to create digital twins of ocean systems, enabling real-time simulations and predictive modeling. These tools foster collaboration among scientists, policymakers, and educators to improve marine resource management, drive conservation efforts, and bolster climate resilience. In-person posters Photo-Realistic 3D Digital Twin to Enhance Understanding of the Great Barrier Reef ​​AI-powered 3D digital twins are advancing how researchers model and monitor coral reef ecosystems. Using Reef-NeRF and Reef-3DGS, scientists can create highly detailed reconstructions to track coral health, measure structural changes, and assess the impacts of climate change. 
These tools provide conservationists and policymakers with critical data to inform reef recovery strategies and improve long-term conservation efforts Mangrove Simulation Predicts Carbon Sequestration Solutions Mangrove forests are a key solution to carbon capture and climate mitigation, but effective restoration requires precise monitoring and management. ID Water Co., Ltd. is using AI-powered irrigation automation and GPU-driven carbon sink modeling to improve mangrove reforestation efforts. These models improve survival rates, optimize carbon sequestration, and address verification challenges, making large-scale restoration more feasible and impactful. Revolutionizing Antarctic Flora Monitoring with AI and Drones AI-powered drones and hyperspectral imaging are enabling high-precision mapping of Antarctic vegetation. Using NVIDIA GPUs, researchers can detect moss and lichen with over 99% accuracy, providing key insights into climate-driven ecosystem changes while reducing the need for invasive field surveys in this fragile ecosystem. Join our global community of developers, scientists, business leaders, and innovators at NVIDIA GTC 2025 to discover how AI drives solutions to our most complex challenges. From NVIDIA CEO Jensen Huang’s must-see keynote to over 900 sessions, 300+ exhibits, hands-on technical training, and exclusive networking events, GTC offers a firsthand look at AI’s real-world impact. The session catalog is open—start building your agenda today.
https://developer.nvidia.com/zh-cn/blog/ai-for-climate-energy-and-ecosystem-resilience-at-nvidia-gtc-2025/
NVIDIA GTC 2025 上的人工智能促进气候、能源和生态系统复原力
从减缓气候变化到改进灾害响应和环境监测,AI 正在重塑我们应对重大全球挑战的方式。快速、高分辨率的气候预报、实时监控和数字孪生技术的进步为科学家、政策制定者和行业领导者提供了数据驱动的工具,帮助他们了解、规划和应对一个变暖的星球。 在 3 月 17 日至 21 日举行的 NVIDIA GTC 2025 大会上,思想领袖、科学家、开发者和创新者将重点介绍 AI 如何帮助塑造更具可持续性和韧性的未来。以下会议展示了 AI 在气候预测、灾难缓解和保护工作中发挥的作用,帮助社区适应日益不可预测的世界。 加强极端天气预测和灾害响应 随着全球变暖加剧,极端天气事件变得更加严重和频繁,社区需要更快、更精确的自然灾害预测和响应策略。AI 正在改进洪水、野火和飓风建模,从而实现更早的警报和更有效的缓解措施。借助大规模天气模拟、卫星数据分析和实时预测性见解,AI 正在帮助应急响应人员和决策者尽可能减少损失、提高抗灾能力,并确保社区安全。 会议 借助高级计算工具转变自然灾害风险模拟 AXA、AWS 和 NVIDIA 使用 Earth-2 模拟以前所未有的规模和精度对极端天气事件进行建模。这些工具通过改进风险估计和恢复能力规划,减少自然灾害的金融和社会影响,帮助保险公司、城市规划人员和政府做好灾害准备。 使用百亿亿级 (Exascale) 气候模拟器提升地球系统模型的输出 KAUST 和圣路易斯大学的研究人员使用由 NVIDIA GPUs 提供支持的百亿亿级 (Exascale) 气候模拟器来加速和优化地球系统模型的输出。这些模型可实现超高的空间分辨率 (3.5 公里),从而能够更准确地模拟风暴和气候,改进极端天气预测,并帮助应急响应人员、保险公司和政策制定者改进灾害应对规划和气候弹性工作。 将 AI 用于高级洪水风险建模和缓解策略 JBA Risk Management 使用 AI 驱动的天气模型 (包括 Spherical Fourier Neural Operators) 更准确地模拟风暴季。借助 NVIDIA NIM,这些模型可改善洪水风险评估、灾害应对规划和基础设施投资决策,同时降低计算成本。 在太空中利用边缘 AI 进行近乎实时的火灾检测:借助 NVIDIA 改变热地球观测方式 野火需要快速响应,以尽可能减少破坏。OroraTech 在 CubeSats 上使用 NVIDIA Jetson 技术,可在 60 秒内完成火灾检测,从而为急救人员提供关键的实时数据,以便更有效地部署资源。 利用自主系统和遥感获取更好的地球数据 配备 AI 的自主飞机正在大规模收集高分辨率数据,从而彻底改变环境监测。麻省理工学院的研究人员展示了这些低成本的远程系统如何为精准农业、灾害响应和环境评估收集关键数据,并大规模提供可行的见解。 提升气候预测能力以加强能源和基础设施规划 准确的长期气候预测对于指导基础设施投资、优化电网和增强可持续发展规划至关重要。AI 驱动的模型能够更快、更精确地进行气候预测,为太阳能集成、气候弹性基础设施开发和可持续发展策略提供指导。这些由数据驱动的高分辨率见解可帮助城市规划人员和决策者预测不断变化的条件,打造更具弹性的未来。 会议 AI 和加速计算在了解和减缓城市气候变化方面的作用 研究人员正在利用 AI、数字孪生和加速计算对城市中的气温升高、空气污染和极端天气进行建模。此会议将探讨生成式 AI 和机器学习如何分析卫星数据、物联网传感器和社会见解,以创建城市热岛和污染模式的高分辨率模拟。专家们将讨论这些见解如何帮助指导适应气候变化的基础设施、能效和有针对性的适应战略,同时应对计算效率和公平获取 AI 驱动的解决方案等挑战。 借助 NVIDIA Earth-2 的高分辨率天气预报增强太阳能发电预测 NVIDIA Earth-2 通过 AI 驱动的超精确天气预测,正在彻底改变太阳能预测。研究人员与 GCL 和北京大学合作,展示了时间序列模型和高分辨率天气数据如何改善太阳能发电预测、增强电网稳定性,以及如何改善电力供应商和政策制定者的可再生能源规划。 将 AI 天气模型与 NVIDIA Earth-2AI 驱动的预测模型结合使用 ,生成高分辨率天气预测,所需的成本和时间远低于传统数值模型。此训练实验室将探讨 AI 驱动的降比例技术如何提高能源和农业等行业的预测准确性,从而为关键领域提供更准确的预测和更好的决策。 推进 AI 驱动的环境监测和保护 AI 正在利用先进的数字孪生技术和自主系统,改变环境监测、保护和生态系统管理。从高分辨率珊瑚礁建模到大规模生态系统评估,这些创新为科学家和自然保护主义者提供了重要见解,以指导保护策略和保护生物多样性。 会议 探索地球的海洋:使用数字孪生推动数字海洋协作海洋调节气候并支持生物多样性 ,但其复杂性使研究这些海洋具有挑战性。MITRE 使用 NVIDIA Omniverse 创建海洋系统的数字孪生,实现实时模拟和预测建模。这些工具促进了科学家、政策制定者和教育工作者之间的协作,以改善海洋资源管理、推动保护工作,并增强气候恢复能力。 线下海报 逼真的 3D 数字孪生增强对大堡礁的理解 AI 驱动的 3D 数字孪生正在推进研究人员建模和监测珊瑚礁生态系统的方式。借助 Reef-NeRF 和 Reef-3DGS,科学家可以创建高度精细的重建模型,以追踪珊瑚健康状况、测量结构变化并评估气候变化的影响。这些工具为环保人士和政策制定者提供关键数据,以便制定珊瑚礁恢复策略并改进长期保护工作 Mangrove Simulation 预测碳封存解决方案 红树林是碳捕获和气候减缓的关键解决方案,但有效的恢复需要精确的监控和管理。ID Water Co.,Ltd.正在使用由 AI 提供动力支持的喷洒自动化和 GPU 驱动的碳汇建模来改进红树林再造工作。这些模型可提高存活率、优化碳封存并解决验证难题,从而提高大规模修复的可行性和成效。 借助 AI 和无人机革新南极植物监测 AI 赋能的无人机和高光谱成像技术可实现对南极植被的高精度绘图。借助 NVIDIA GPU,研究人员可以以超过 99%的准确率检测和,从而对气候驱动的生态系统变化提供关键见解,同时减少在这个脆弱的生态系统中进行侵入性实地调查的需求。 在 NVIDIA GTC 2025 大会上,加入由开发者、科学家、业务领袖和创新者组成的全球社区,了解 AI 如何为我们面临的复杂挑战提供解决方案。 从 NVIDIA 首席执行官 Jensen Huang 不容错过的主题演讲 ,到 900 多场会议、300 多场展览、实操技术培训和独家交流活动,GTC 让您亲身体验 AI 对现实世界的影响。 会议目录 现已开放,请立即开始构建您的议程。
https://developer.nvidia.com/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/
Automating GPU Kernel Generation with DeepSeek-R1 and Inference Time Scaling
As AI models extend their capabilities to solve more sophisticated challenges, a new scaling law known as test-time scaling or inference-time scaling is emerging. Also known as AI reasoning or long thinking, this technique improves model performance by allocating additional computational resources during inference to evaluate multiple possible outcomes and then select the best one. This enables AI to strategize and systematically solve complex problems in a similar fashion to how humans dissect complex problems and solve them individually to arrive at a final solution. In this post, we talk about an experiment done by NVIDIA engineers who used one of the newest open-source models, the DeepSeek-R1 model, together with additional computing power during inference to solve a complex problem. The experiment was to automatically generate GPU attention kernels that were numerically correct and optimized for different flavors of attention, without any explicit programming. In some cases, the results turned out to be better than the optimized kernels developed by skilled engineers. The need for optimized attention kernels and associated challenges Attention is a key concept that revolutionized the development of the large language model (LLM). It's a powerful mechanism that enables AI models to focus selectively on the most relevant parts of the input when performing tasks. By focusing on important information, the attention operation helps models make better predictions and find hidden patterns in the data. The computational complexity of the attention operation grows quadratically with the input sequence length. This motivates the need for an optimized lower-level implementation (that is, a GPU kernel), both to prevent runtime errors that arise from simple implementations (for example, out-of-memory errors) and for computational efficiency. There are multiple variants of attention (causal, relative positional embeddings, alibi, and so on), and engineers often must use a combination of these variants for a given task. Multi-modal models (for example, vision transformers) introduce an additional layer of challenges, as they require specialized attention mechanisms (such as Spatial Neighborhood Attention) for maintaining the spatio-temporal information often encountered in computer vision, video generation models, and so on. Figure 1. Neighborhood attention on 2D inputs Creating an optimized GPU kernel for attention takes a lot of skill and time, even for experienced software engineers. Recent LLMs like DeepSeek-R1 have shown a lot of promise in code generation tasks, but they still face challenges when creating optimized code on the first try. This makes it necessary to use other strategies at inference time to generate optimized code. The following prompt is sample user input for a relative positional embeddings attention kernel.
Please write a GPU attention kernel to support relative position encodings. Implement the relative positional encoding on the fly within the kernel. The complete code should be returned, including the necessary modifications.
Use the following function to compute the relative positional encoding:
def relative_positional(score, b, h, q_idx, kv_idx):
    return score + (q_idx - kv_idx)
When implementing the kernel, keep in mind that a constant scaling factor 1.44269504 should be applied to the relative positional encoding due to qk_scale = sm_scale * 1.44269504.
The PyTorch reference does not need to scale the relative positional encoding, but in the GPU kernel, use:
qk = qk * qk_scale + rel_pos * 1.44269504
Please provide the complete updated kernel code that incorporates these changes, ensuring that the relative positional encoding is applied efficiently within the kernel operations.
LLMs can occasionally produce hallucinated code or mix syntax from different languages or frameworks, causing immediate code errors or inefficiencies. Computing the optimal GPU thread mapping is also a non-trivial and challenging task, often requiring iterative refinement to achieve a correct and efficient kernel. Inference-time scaling for generating optimized GPU kernels To get the best results with optimized attention kernels, NVIDIA engineers created a new workflow that includes a special verifier along with the DeepSeek-R1 model during inference, operating in a closed-loop fashion for a predetermined duration. Figure 2. Inference-time scaling with DeepSeek-R1 on the NVIDIA Hopper platform The workflow is first initialized by a manual prompt, and the DeepSeek-R1 model generates the GPU code (that is, the kernel) in the first pass. The verifier runs on an NVIDIA H100 GPU. It analyzes the generated kernel and creates new prompts that are provided as input to the DeepSeek-R1 model. This closed-loop approach improves the code generation process by guiding it in a different way each time. The team found that letting this process continue for 15 minutes resulted in an improved attention kernel. Figure 3. Performance of automatically generated optimized attention kernels with flex attention This workflow produced numerically correct kernels for 100% of Level-1 problems and 96% of Level-2 problems, as tested by Stanford's KernelBench benchmark. The Level-1 solving rate in KernelBench refers to the numerical correctness metric used to evaluate the ability of LLMs to generate efficient GPU kernels for specific computational tasks. This test is part of a series of challenges to test the latest LLMs' abilities in GPU programming. Figure 4 shows how the inference-time budget affects the agent's solving rate. Allocating more than 10 minutes per problem in the Level-1 category enables the workflow to produce numerically correct code for most of the 100 problems. Figure 4. Inference-time scaling results in optimized GPU kernels Optimized GPU kernels on DeepSeek-R1 These results show how you can use the latest DeepSeek-R1 model to generate better GPU kernels by using more computing power during inference time. This is still a new research area with early results on a promising approach that automatically generates effective attention kernels. While we are off to a good start, more work is needed to consistently generate better results for a wider variety of problems. We're excited about the recent developments in DeepSeek-R1 and its potential. For more information or to get started, see the DeepSeek-R1 NIM microservice, now available on build.nvidia.com.
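The closed-loop workflow described above can be summarized in a few lines of Python. This is a hypothetical sketch of the pattern, not NVIDIA's actual harness: generate_kernel and verify_kernel are placeholder functions standing in for the DeepSeek-R1 call and the H100-based verifier, and the report fields are illustrative.
import time

def generate_kernel(prompt):
    # Placeholder: call the LLM (for example, DeepSeek-R1 through an inference endpoint).
    raise NotImplementedError

def verify_kernel(kernel_src):
    # Placeholder: compile the kernel, check numerical correctness, and benchmark it on a GPU.
    raise NotImplementedError

def refine_attention_kernel(initial_prompt, time_budget_s=15 * 60):
    # Closed loop: generate a kernel, verify it, and fold the verifier's
    # feedback into the next prompt until the time budget is spent.
    prompt = initial_prompt
    best = None
    deadline = time.monotonic() + time_budget_s
    while time.monotonic() < deadline:
        kernel_src = generate_kernel(prompt)
        report = verify_kernel(kernel_src)  # e.g. {"correct": bool, "latency": float, "feedback": str}
        if report["correct"] and (best is None or report["latency"] < best["latency"]):
            best = {"source": kernel_src, "latency": report["latency"]}
        prompt = initial_prompt + "\n\nFeedback on the previous attempt:\n" + report["feedback"]
    return best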
https://developer.nvidia.com/zh-cn/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/
使用 DeepSeek-R1 和推理时间缩放实现自动化 GPU 内核生成
随着 AI 模型扩展其功能以解决更复杂的挑战,一种称为“ 测试时扩展 ”或“ 推理时扩展 ”的新扩展法则正在出现。该技术也称为 AI 推理 或 长时思考 技术,通过在推理过程中分配额外的计算资源来评估多种可能的结果,然后选择最佳结果 (neural network),从而提高模型性能。这使得 AI 能够以类似于人类剖析复杂问题并单独解决这些问题以达成最终解决方案的方式,制定战略并系统化地解决复杂问题。 在本文中,我们将介绍 NVIDIA 工程师完成的一项实验,他们在推理过程中使用最新的开源模型之一 DeepSeek-R1 模型以及额外的计算能力来解决复杂的问题。该实验旨在自动生成 GPU 注意力内核,这些内核在数值上是正确的,并针对不同的注意力类型进行了优化,而无需任何显式编程。 事实证明,在某些情况下,最终结果优于由技术精湛的工程师开发的优化内核。 优化注意力内核的需求和相关挑战 注意力是一个关键概念,彻底改变了大语言模型(LLM)的发展。这是一种功能强大的机制,可让 AI 模型在执行任务时,有选择地专注于与输入内容最相关的部分。通过聚焦于重要信息,注意力运算可帮助模型做出更好的预测,并找到数据中隐藏的模式。 相对于输入序列长度,注意力运算的计算复杂性呈正交增长。这促使我们需要开发经过优化的低级实现 (即 GPU 内核),以防止简单实现产生的运行时错误 (例如内存不足的错误),并提高计算效率。 注意力有多种变体 (因果关系、相对位置嵌入、不在场证明等),工程师通常必须将这些变体的组合用于给定任务。 多模态模型 (例如,vision transformers) 带来了额外的一层挑战,因为它们需要专门的注意力机制 (Spatial Neighborhood Attention) 来维护计算机视觉、视频生成模型等领域中经常遇到的时空信息。 图 1. 邻域注意力在 2D 输入上的应用 创建经优化的 GPU 内核以供关注需要大量技能和时间,即使对于经验丰富的软件工程师而言也是如此。 最近的 LLMs(如 DeepSeek-R1)在代码生成任务方面表现出了很大的前景,但在第一次尝试创建优化代码时,它们仍然面临挑战。这使得有必要在推理时使用其他策略来生成优化的代码。 以下提示是用户输入相对位置嵌入注意力核函数的示例。 Please write a GPU attention kernel to support relative position encodings. Implement the relative positional encoding on the fly within the kernel. The complete code should be returned, including the necessary modifications. Use the following function to compute the relative positional encoding: def relative_positional(score, b, h, q_idx, kv_idx): return score + (q_idx - kv_idx) When implementing the kernel, keep in mind that a constant scaling factor 1.44269504 should be applied to the relative positional encoding due to qk_scale = sm_scale * 1.44269504. The PyTorch reference does not need to scale the relative positional encoding, but in the GPU kernel, use: qk = qk * qk_scale + rel_pos * 1.44269504 Please provide the complete updated kernel code that incorporates these changes, ensuring that the relative positional encoding is applied efficiently within the kernel operations. LLM 偶尔会产生来自不同语言或框架的幻影代码或混合语法,从而立即导致代码错误或效率低下。计算最佳 GPU 线程映射也并非易事,也是一项具有挑战性的任务,通常需要进行迭代优化才能实现正确高效的内核。 用于生成经过优化的 GPU 内核的推理时间扩展 为了通过优化的注意力内核获得最佳结果,NVIDIA 工程师创建了一个新的工作流程,其中包括一个特殊的验证器以及 DeepSeek-R1 模型,在预先设定的时间内以闭环方式进行推理。 图 2、在 NVIDIA Hopper 平台上使用 DeepSeek-R1 实现推理时间扩展 该工作流程首先通过手动提示进行初始化,然后 DeepSeek-R1 模型会在首次通道中生成 GPU 代码(即核函数)。该验证器在 NVIDIA H100 GPU 上运行。它会分析生成的核函数并创建新的提示,这些提示以 input 的形式提供给 DeepSeek-R1 模型。 这种闭环方法每次都以不同的方式指导代码生成过程,从而改进代码生成过程。该团队发现,让此过程持续 15 分钟可以改进注意力核函数。 图 3、具有 Flex Attention 的自动生成优化注意力内核的性能 此工作流程为 100%的 1 级问题和 96%的 2 级问题生成了数值正确的内核,测试对象为 斯坦福大学的 KernelBench 基准测试。* KernelBench 中的 1 级求解率是指用于评估 LLM 为特定计算任务生成高效 GPU 内核的能力的数字正确指标。本次测试属于一系列挑战,旨在测试最新 LLM 在 GPU 编程中的能力。 图 4 显示了推理时间预算如何影响智能体的求解率。在 Level-1 类别中为每个问题分配超过 10 分钟的时间,使工作流程能够为 100 个问题中的大多数生成正确的数字代码。 图 4、在优化的 GPU 内核中实现推理 – 时间扩展 DeepSeek-R1 上经过优化的 GPU 内核 这些结果展示了如何使用最新的 DeepSeek-R1 模型,通过在推理期间使用更强大的计算能力来提供更好的 GPU 内核。这仍然是一个新的研究领域,在自动生成有效注意力内核的前景良好的方法方面取得了早期成果。 虽然我们已经有了一个良好的开端,但我们需要做更多的工作,以便为更广泛的问题持续提供更好的结果。我们对 DeepSeek-R1 的最新进展及其潜力感到兴奋。 如需了解更多信息或入门,请参阅 DeepSeek-R1 NIM 微服务 (现已在 build.nvidia.com 上提供)。
https://developer.nvidia.com/blog/ai-foundation-model-enhances-cancer-diagnosis-and-tailors-treatment/
AI Foundation Model Enhances Cancer Diagnosis and Tailors Treatment
A new study and AI model from researchers at Stanford University is streamlining cancer diagnostics, treatment planning, and prognosis prediction. Named MUSK (Multimodal transformer with Unified maSKed modeling), the research aims to advance precision oncology, tailoring treatment plans to each patient based on their unique medical data. “Multimodal foundation models are a new frontier in medical AI research,” said Ruijiang LI , an associate professor of radiation oncology and study senior author. “Recently, vision–language foundation models have been developed for medicine, particularly in the field of pathology. However, existing studies use off-the-shelf foundation models that require paired image–text data for pretraining. Despite extensive efforts that led to the curation of 1M pathology image–text pairs, it’s still insufficient to fully capture the diversity of the entire disease spectrum.” Oncologists rely on many data sources when considering a patient’s condition and planning optimal treatments. However, integrating and interpreting complex medical data remains difficult for doctors and AI models. The study , recently published in Nature , highlights how MUSK could help doctors make more accurate and informed decisions while also solving this long-standing challenge in medical AI. Using deep learning, MUSK processes clinical text data (such as doctor’s notes) and pathology images (like histology slides), to identify patterns that may not be immediately obvious to doctors, leading to better clinical insights. To do so, it uses a two-step multimodal transformer model. First, it learns from large amounts of unpaired data, pulling features from the text and images that are useful. Then it finetunes its understanding of the data by linking paired image-text data, which helps it recognize different types of cancer, predict biomarkers, and suggest effective treatment options. The researchers pretrained the AI model on one of the biggest datasets in the field, using 50M pathology images from 11,577 patients with 33 tumor types and 1B pathology-related text data. According to Jinxi Xiang , study lead author and postdoctoral scholar in radiation physics, the pretraining was conducted over 10 days using 64 NVIDIA V100 Tensor Core GPUs across eight nodes, enabling MUSK to process vast amounts of pathology images and clinical text efficiently. A secondary pretraining phase and ablation studies used NVIDIA A100 80 gb Tensor Core GPUs . The researchers also used NVIDIA RTX A6000 GPUs for evaluating downstream tasks. The framework was accelerated with NVIDIA CUDA and NVIDIA cuDNN libraries, for optimized performance. When tested on 23 pathology benchmarks, MUSK outperformed existing AI models in several key areas. It excelled at matching pathology images with correlating medical text, making it more effective in gathering relevant patient information. It also interpreted pathology-related questions, such as identifying a cancerous area or predicting biomarker presence with 73% accuracy. Figure 1. An example of the visual question-answering MUSK can perform It improved detection and classification for cancer subtypes including breast, lung, and colorectal cancer by up to 10%, which could help with early diagnosis and treatment planning. It also detected ‌breast cancer biomarkers with an AUC (a measure of model accuracy) of 83%. 
Additionally, MUSK reliably predicted cancer survival outcomes 75% of the time, and which lung and gastro-esophageal cancers would respond to immunotherapy with 77% accuracy. This is a significant improvement over standard clinical biomarkers with an accuracy of only 60-65%. “One striking finding is that AI models that integrate multi-modal data consistently outperform those based on imaging or text data alone, highlighting the power of a multimodal approach,” Li said. “The true value of MUSK lies in its ability to leverage large-scale unpaired image and text data for pretraining, which is a substantial increase over existing models that require paired data.” A core strength of the research is that it can adapt across different clinical settings with little training. This could improve efficiency in oncology workflows and help doctors diagnose cancer faster while tailoring treatments for better patient outcomes. Their future work will focus on validating the model in multi-institution cohorts of patients from diverse populations and for high-stakes applications such as treatment decision-making. The researchers note that prospective validation in clinical trials will be required for regulatory approval. “We are also working on an extension of the MUSK approach to digital pathology to other types of data such as radiology images and genomic data,” said Li. The researchers’ work, including installation instructions, model weights, evaluation code, and sample data is available on GitHub .
https://developer.nvidia.com/zh-cn/blog/ai-foundation-model-enhances-cancer-diagnosis-and-tailors-treatment/
AI 基础模型增强癌症诊断并实现个性化治疗
斯坦福大学研究人员的一项新研究和 AI 模型正在简化癌症诊断、治疗规划和预后预测。这项名为 MUSK (Multimodal transformer with Unified maSKed modeling) 的研究旨在提高精准肿瘤学,根据每位患者独特的医疗数据为其定制治疗计划。 “多模态基础模型是医学 AI 研究的新领域,”放射肿瘤学副教授兼研究高级作者 Ruijiang LI 说。“最近,我们为医学领域开发了视觉语言基础模型,尤其是在病理学领域。但是,现有研究使用的现有基础模型需要配对的图像 – 文本数据进行预训练。尽管我们付出了大量努力,最终打造出 1M 病理图像文本对,但它仍然不足以完全捕捉整个疾病谱系的多样性。” 在考虑患者状况和规划最佳治疗方案时,肿瘤科医生依靠多种数据源。然而,医生和 AI 模型仍然难以集成和解释复杂的医疗数据。该研究最近发表在 Nature 杂志上,重点介绍了 MUSK 如何帮助医生做出更准确、更明智的决定,同时解决医学 AI 领域的长期挑战。 借助深度学习,MUSK 处理临床文本数据(如医生的笔记)和病理学图像(如组织学幻灯片),以识别医生可能无法立即发现的模式,从而获得更好的临床见解。 为此,它使用了两步多模态 transformer 模型。首先,它从大量未配对的数据中学习,从有用的文本和图像中提取特征。然后,它通过关联配对的图像-文本数据来微调对数据的理解,这有助于识别不同类型的癌症、预测生物标志物,并提出有效的治疗方案。 研究人员基于该领域最大的数据集之一预训练了 AI 模型,使用了来自 11,577 名患者的 50M 病理学图像,其中有 33 种肿瘤类型和 1B 病理学相关文本数据。 据辐射物理学研究主要作者兼博士后学者 Jinxi Xiang 称,预训练在 8 个节点上使用 64 个 NVIDIA V100 Tensor Core GPUs 进行了 10 天以上,使 MUSK 能够高效处理大量病理学图像和临床文本。二级预训练阶段和消融研究使用 NVIDIA A100 80GB Tensor Core GPUs 。研究人员还使用 NVIDIA RTX A6000 GPUs 评估下游任务。该框架通过 NVIDIA CUDA 和 NVIDIA cuDNN 库进行加速,以优化性能。 在 23 项病理学基准测试中,MUSK 在多个关键领域的表现优于现有 AI 模型。它擅长将病理学图像与相关的医学文本进行匹配,从而更有效地收集相关的患者信息。它还能解读与病理学相关的问题,例如识别癌变区域或预测生物标志物的存在,准确率高达 73%。 图 1. 例如,视觉问答 MUSK 可以执行 它将乳腺癌、肺癌和结直肠癌等癌症亚型的检测和分类能力提高了 10%,这有助于早期诊断和治疗规划。它还检测到乳腺癌生物标志物,AUC(用于衡量模型准确性的指标)为 83%。 此外,MUSK 有 75%的时间能够可靠预测癌症生存期结果,以及哪些肺癌和胃食道癌会对免疫治疗做出反应,准确率为 77%。与准确率仅为 60-65%的标准临床生物标志物相比,这是一个显著的改进。 “一个惊人的发现是,集成多模态数据的 AI 模型的性能始终优于仅基于图像或文本数据的 AI 模型,这凸显了多模态方法的强大功能,”Li 说。“MUSK 的真正价值在于它能够利用大规模的未配对图像和文本数据进行预训练,与需要配对数据的现有模型相比,这是一个巨大的提升。” 这项研究的一个核心优势是,它可以在几乎没有训练的情况下适应不同的临床环境。这可以提高肿瘤学工作流程的效率,并帮助医生更快地诊断癌症,同时定制治疗方案以改善患者的治疗效果。 他们未来的工作重点将是在来自不同人群的多机构患者群体中验证该模型,以及用于治疗决策等高风险应用。研究人员指出,临床试验中的前瞻性验证需要获得监管机构的批准。 “我们还致力于将 MUSK 方法扩展到数字病理学,包括放射学图像和基因组数据等其他类型的数据,”Li 说。 研究人员的工作(包括安装说明、模型权重、评估代码和样本数据) 可在 GitHub 上获取。
https://developer.nvidia.com/blog/cuda-toolkit-12-8-delivers-nvidia-blackwell-support/
CUDA Toolkit Now Available for NVIDIA Blackwell
The latest release of the CUDA Toolkit , version 12.8, continues to push accelerated computing performance in data sciences, AI, scientific computing, and computer graphics and simulation, using the latest NVIDIA CPUs and GPUs. This post highlights some of the new features and enhancements included with this release: NVIDIA Blackwell architecture support CUDA Graphs conditional nodes enhancements Blackwell CUTLASS kernels for large language models (LLMs) NVIDIA Nsight Developer Tools updates Math libraries updates cudaStreamGetDevice Compiler updates Accelerated Python updates Feature-complete architectures NVIDIA Blackwell architecture support CUDA Toolkit 12.8 is the first version of the Toolkit to support the NVIDIA Blackwell architecture across the entire suite of Developer Tools including performance tools and profilers, libraries, and compilers. Built with 208 billion transistors—more than 2.5x the number of transistors in NVIDIA Hopper GPUs—Blackwell is the largest GPU ever built. Key Blackwell capabilities supported include: Second-generation Transformer Engine through custom Tensor Core technology: Accelerates inference and training for LLMs and mixture-of-experts (MoE) models. Decompression: Accelerates performance on data analytics and data science pipelines using the latest compression formats such as LZ4, Snappy, and Deflate. Network interconnect: NVLink and NVLink Switches accelerate inter-GPU communications performance for trillion-parameter and multitrillion-parameter AI models. To learn more about the leading innovations in Blackwell, see the NVIDIA Blackwell Architecture Technical Brief . 2x faster CUDA Graphs with runtime kernel selection for lower latency inference With Blackwell, CUDA Graphs APIs continue to be the most efficient way to launch repeated invocations of sequences of GPU operations. CUDA Toolkit 12.8 introduces more enhancements to CUDA Graphs, including additional conditional node types. In many applications, having dynamic control over the execution of work in CUDA Graphs can increase performance and flexibility of graph launches. For example, an algorithm that involves iterating over a series of operations many times until the result converges below a certain threshold can now run wholly on the GPU without needing CPU control management, reducing overhead by as much as 2x. CUDA Toolkit 12.8 improves APIs for runtime control of conditional graph nodes. Conditional nodes contain segments of a graph that can execute, or be skipped, based on a condition to evaluate as the graph is running. Such segments can be evaluated once (an IF node), or repeatedly in a loop (a WHILE node). CUDA 12.8 adds support for two new types of conditional graph nodes: IF/ELSE combined nodes and SWITCH nodes. With the Blackwell architecture, we’ve improved LLM performance to benefit all reasoning models, including DeepSeek-R1. CUDA Graphs enhanced SWITCH and IF/ELSE support delivers 2x more performance for runtime kernel selection versus going back to the CPU for launch decision-making. Training : By reducing CPU dependency for kernel selection, training workloads sustain even more GPU Tensor Core throughput, resulting in higher Model FLOPs Utilization (MFU). This improves performance using the same GPU infrastructure, reducing time and cost to train. Inference : For next-generation reasoning models that make use of test-time compute, a high token generation rate is critical as each inference request can generate a vast number of tokens per query. 
CUDA 12.8 new stream API enables fewer calls back to the host CPU, reducing the time between one kernel finishing and the next one starting, increasing token generation rate. This results in more tokens generated in fixed time budget, helping models reason more and increasing intelligence. To learn more, see Dynamic Control Flow in CUDA Graphs with Conditional Nodes . Blackwell CUTLASS kernels for LLMs CUTLASS , since its 2017 debut, has been instrumental for researchers and developers implementing high-performance CUDA kernels on NVIDIA GPUs. By providing developers with comprehensive tools to design custom operations, such as GEMMs and Convolutions, CUTLASS has been critical for the development of hardware-aware algorithms, powering breakthroughs like FlashAttention that helped spark modern AI. With the release of CUTLASS 3.8—which supports CUDA 12.8—NVIDIA is extending support to the Blackwell architecture, enabling developers to harness next-generation Tensor Cores with support for all new data types. This includes new narrow precision MX formats and the NVIDIA-developed FP4 format, which increase compute throughput. Figure 1 shows CUTLASS can achieve up to 98% relative peak performance for Tensor Core operations. Figure 1. Blackwell CUTLASS GEMM performance relative to expected peak, delivering up to 98% of Blackwell peak performance For DeepSeek-V3 and DeepSeek-R1, grouped GEMMs make up a large portion of the MoE compute required during inference. These operations enable different matrix sizes, scaling factors, and fusions to be grouped and parallelized in a single persistent-kernel launch. With CUTLASS, on Blackwell with FP4, Grouped GEMM kernel performance increases by up to 5x over H200 with FP16. Figure 2. CUTLASS Grouped GEMM performance for MoE inference used in DeepSeek delivers up to 5x more performance on Blackwell compared to Hopper at various precisions NVIDIA Nsight Developer Tools NVIDIA Nsight Compute 2025.1 is the first official release with support for the Blackwell architecture. Updates include visualization of Blackwell Hardware Tensor Memory in the memory chart as well as Tensor Core performance data. Figure 3. Tensor Memory traffic in the Nsight Compute memory chart It also comes with several improvements to the increasingly popular range profiling feature. Users can now collect source-level metrics, including Instructions Executed and memory access information, inside profiled ranges. This update also enables Guided Analysis rules evaluation for ranges. This built-in expertise for identifying performance issues is a key component of NVIDIA Nsight Compute. This release reports kernel stack sizes and adds custom tooltips to help users understand their workload performance. This release of Compute Sanitizer, an automatic correctness checking tool, adds support for Python call stacks to accurately locate kernel correctness issues when kernels are launched through Python applications. Additionally, new Tensor Core MMA guardrails for Blackwell can report errors related to Tensor Core programming. These are enabled by adding the PTXAS flag -g-tmem-access-check when compiling programs. Examples of common errors include access to unallocated tensor memory, invalid addresses, and invalid allocator usage. Math libraries updates With CUDA Toolkit 12.8, we have several new library enhancements that leverage the new Blackwell architecture and help accelerate applications in AI, data sciences, graphics and simulation, and high-performance scientific computing. 
New features cuBLAS APIs were extended to support microscaled 4-bit and 8-bit floating point mixed-precision tensor core accelerated matrix multiplication for compute capability 10.0 (Blackwell) and higher. Introduced initial support for CUDA in Graphics (CIG) on Windows x64 for NVIDIA Ampere GPU architecture and Blackwell GeForce-class GPUs. CIG contexts are now autodetected, and cuBLAS selects kernels that comply with CIG shared memory usage limits. cuSOLVER now supports zsytrf/zsytrs, a complex symmetric direct solver without pivoting. nvJPEG now provides support for the Tegra architecture. NPP now provides support for the DRIVE Thor architecture. cudaStreamGetDevice Applications often use CUDA streams to provide ordered access to GPU resources. An instance of a CUDA stream is associated with a fixed CUDA device. In applications that address multiple devices, there are scenarios where getting a handle to the underlying device for a given stream is useful to tailor the application to device characteristics. Previously, the CUDA API did not provide a mechanism for retrieving the device associated with a CUDA stream; developers had to track this themselves. The addition of the cudaStreamGetDevice CUDA API to retrieve the device associated with a CUDA stream can simplify applications. Compiler updates New compiler updates include the following: The CUDA Toolkit 12.8 release introduces support for GCC 14 as a host-side compiler. The default high-level optimizer is now based on LLVM 18 for the Blackwell architecture. nvdisasm now supports emitting JSON formatted SASS disassembly. Accelerated Python updates The following two beta releases are now available for Python users: CUDA Python has released an early prototype of a new idiomatic object model called cuda.core and moved the CUDA binding to a submodule, cuda.bindings . For more information, see the documentation in the NVIDIA/cuda-python GitHub repo. CUDA Core Compute Libraries (CCCL) has released early prototypes of Python for parallel and cooperative algorithms, enabling you to use thread-level parallelism with user-defined types and functions from pure Python code. Learn more about CCCL . Additionally, the CuPy team is releasing a new version with Blackwell patches validated for general availability. Feature-complete architectures With the CUDA Toolkit 12.8 release, we now consider the Maxwell, Pascal, and Volta architectures to be feature-complete and support for them will be frozen in an upcoming release. This means that, in future releases, no new features will be added to the driver to enable new CUDA Toolkit functionality supporting Maxwell, Pascal, and Volta architectures. End users will be able to run existing software stacks and applications on Maxwell, Pascal, and Volta architectures using the supported upcoming LTS driver branch through its lifecycle. Starting with release 12.8, developers running offline compilers targeting these architectures will output a warning message when using nvcc , nvrtc , and nvjitlink . In the next major CUDA Toolkit release, offline compilation support for the Maxwell, Pascal, and Volta architectures will be removed from the compilers. The upcoming LTS driver for production application execution and JIT compilation of Maxwell, Pascal, and Volta applications will be supported for the normal 3-year LTS support window. For more details, read the CUDA Toolkit 12.8 Release Notes . Summary The CUDA Toolkit 12.8 release provides full feature support for the NVIDIA Blackwell architecture. 
This release continues to provide enhanced support for the newest NVIDIA GPUs, accelerated libraries, compilers, and Developer Tools, whether you’re developing applications in C++ or Python. Want more information? Check out the CUDA documentation , browse the latest NVIDIA Deep Learning Institute (DLI) offerings, and visit the NGC catalog . Ask questions and join the conversation in the CUDA Developer Forums. Acknowledgments Thanks to the following NVIDIA contributors: Stephen Jones, Jackson Marusarz, Becca Zandstein, Andy Terrel, Ashraf Eassa, Matt Nicely, and Mridula Prakash.
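Returning to the cudaStreamGetDevice addition described above, here is a hypothetical sketch of how it could be called from Python through the cuda.bindings submodule mentioned in the Accelerated Python updates. It assumes the 12.8 bindings expose the call as runtime.cudaStreamGetDevice with the usual (error, value) return convention; treat the exact name and signature as an assumption and check the cuda-python documentation for your version.
from cuda.bindings import runtime

def device_of(stream):
    # Assumed binding for the new CUDA 12.8 runtime API: returns the device
    # associated with a stream instead of the application tracking it manually.
    err, device = runtime.cudaStreamGetDevice(stream)
    if err != runtime.cudaError_t.cudaSuccess:
        raise RuntimeError(f"cudaStreamGetDevice failed: {err}")
    return device

err, stream = runtime.cudaStreamCreate()
print(f"stream created on device {device_of(stream)}")
runtime.cudaStreamDestroy(stream)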
https://developer.nvidia.com/zh-cn/blog/cuda-toolkit-12-8-delivers-nvidia-blackwell-support/
CUDA 工具包现已支持 NVIDIA Blackwell 架构
CUDA 工具包 的最新版本 (版本 12.8) 使用最新的 NVIDIA CPU 和 GPU,持续提升数据科学、AI、科学计算以及计算机图形和模拟领域的加速计算性能。本文重点介绍了此版本包含的一些新功能和增强功能: NVIDIA Blackwell 架构支持 CUDA 图形处理条件节点增强功能 用于大语言模型(LLMs)的 Blackwell CUTLASS 内核 NVIDIA Nsight 开发者工具更新 数学库更新 cudaStreamGetDevice 编译器更新 加速 Python 更新 功能齐全的架构 NVIDIA Blackwell 架构支持 CUDA 工具包 12.8 是该工具包的第一个版本,在整个开发者工具套件 (包括性能工具和分析器、库和编译器) 中支持 NVIDIA Blackwell 架构。Blackwell 由 208 亿个晶体管构建而成,是 NVIDIA Hopper GPU 中晶体管数量的 2.5 倍以上,是迄今为止最大的 GPU。 Blackwell 支持的主要功能包括:Key Blackwell 采用自定义 Tensor Core 技术的第二代 Transformer 引擎:加速 LLM 和 mixture-of-experts (MoE) 模型的推理和训练。 解压缩: 使用 LZ4、Snappy 和 Deflate 等最新压缩格式,加速数据分析和数据科学工作流的性能。 网络互连:NVLink 和 NVLink Switches 加速万亿参数和数万亿参数 AI 模型的 GPU 间通信性能。 如需详细了解 NVIDIA Blackwell 的领先创新,请参阅 NVIDIA Blackwell 架构技术概览。 使用运行时核选择将 CUDA Graphs 速度提升 2 倍,从而降低延迟推理 借助 Blackwell,CUDA Graphs APIs 仍然是启动 GPU 操作序列重复调用的最高效方式。CUDA Toolkit 12.8 为 CUDA Graphs 引入了更多增强功能,包括其他 条件节点类型 。 在许多应用程序中,对 CUDA Graphs 中工作的执行进行动态控制可以提高图形启动的性能和灵活性。例如,一种算法需要多次迭代一系列运算,直到结果收到某个值以下,现在这种算法无需进行 CPU 控制管理即可完全在 GPU 上运行,从而将开销降低高达 2 倍。CUDA Toolkit 12.8 改进了用于条件图形节点运行时控制的 API。 条件节点包含图形的片段,这些片段可以在图形运行时根据要评估的条件执行或跳过。此类片段可以评估一次 (IF 节点),也可以在循环中重复评估 (WHILE 节点)。CUDA 12.8 增加了对两种新型条件图形节点的支持:IF/ELSE 组合节点和 SWITCH 节点。 借助 Blackwell 架构,我们改进了 LLM 性能,使包括 DeepSeek-R1 在内的所有推理模型受益。与返回 CPU 进行启动决策相比,CUDA Graphs 增强的 SWITCH 和 IF/ELSE 支持可将运行时内核选择的性能提高 2 倍。 训练:通过减少内核选择对 CPU 的依赖,训练工作负载可维持更多的 GPU Tensor Core 吞吐量,从而提高模型 FLOPS 利用率(MFU)。这提高了使用相同的 GPU 基础架构的性能,减少了训练时间和成本。 推理:对于使用测试时计算的新一代推理模型 ,高令牌生成速率至关重要,因为每个推理请求都可以在每个查询中生成大量令牌。CUDA 12.8 新流 API 可减少对主机 CPU 的调用,从而缩短一次内核处理与下一次启动之间的时间,从而提高令牌生成率。这会在固定时间预算内生成更多 token,帮助模型推理更多并提高智能。 如需了解详情, 请参阅使用条件节点的 CUDA 图形中的动态控制流。 适用于 LLMs 的 Blackwell CUTLASS 内核 自 2017 年首次推出以来, CUTLASS 一直在推动研究人员和开发者在 NVIDIA GPUs 上实施高性能 CUDA 核函数。通过为开发者提供全面的工具来设计自定义操作 (例如 GEMMs 和 Convolutions),CUTLASS 在开发硬件感知算法方面发挥了至关重要的作用,推动了 FlashAttention 等帮助激发现代 AI 的突破。 随着支持 CUDA 12.8 的 CUTLASS 3.8 的发布,NVIDIA 将扩展对 Blackwell 架构的支持,使开发者能够利用新一代 Tensor Core 来支持所有新的数据类型。这包括新的窄精度 MX 格式和 NVIDIA 开发的 FP4 格式,可提高计算吞吐量。图 1 显示,对于 Tensor Core 运算,CUTLASS 可实现高达 98% 的相对峰值性能。 图 1. 
Blackwell CUTLASS GEMM 性能相对于预期峰值,可提供高达 98% 的 Blackwell 峰值性能 对于 DeepSeek-V3 和 DeepSeek-R1,分组的 GEMM 在推理期间所需的 MoE 计算中占很大比例。这些运算支持在单个持久性核函数启动中对不同的矩阵大小、缩放系数和融合进行分组和并行化。借助 CUTLASS,在 Blackwell 以 FP4,Grouped GEMM 内核性能增加高达 5 倍,相比使用 FP16 的 H200。 图 2、与 Hopper 相比,DeepSeek 中使用的用于 MoE 推理的 CUTLASS 分组 GEMM 性能在 Blackwell 上在各种精度下的性能提升高达 5 倍 NVIDIA Nsight 开发者工具 NVIDIA Nsight Compute 2025.1 是首个支持 Blackwell 架构的官方版本。更新包括显存图表中 Blackwell 硬件 Tensor 内存的可视化,以及 Tensor Core 性能数据。 图 3、Nsight Compute 内存图中的 Tensor 内存流量 它还对日益流行的范围分析功能进行了多项改进。用户现在可以在已分析的范围内收集源级指标,包括已执行指令和内存访问信息。此更新还启用了针对范围的引导分析规则评估。这种用于识别性能问题的内置专业知识是 NVIDIA Nsight Compute 的关键组件。此版本报告了内核堆栈大小,并添加了自定义工具提示,以帮助用户了解其工作负载性能。 此版本的 Compute Sanitizer 是一款自动正确性检查工具,增加了对 Python 调用堆栈的支持,可在通过 Python 应用启动内核时准确定位内核正确性问题。此外,用于 Blackwell 的新 Tensor Core MMA 护栏可以报告与 Tensor Core 编程相关的错误。在编译程序时,可以通过添加 PTXAS 标志 -g-tmem-access-check 来启用这些功能。常见错误的示例包括访问未分配的 tensor 内存、无效的地址以及使用无效的分配器。 数学库更新 借助 CUDA 工具包 12.8,我们获得了一些新的增强功能库,这些增强功能利用了新的 Blackwell 架构,并有助于加速 AI、数据科学、图形和仿真以及高性能科学计算领域的应用程序。 新功能 cuBLAS API 经过扩展,支持微缩 4 位和 8 位浮点混合精度张量核心加速矩阵乘法,可实现 10.0(Blackwell)及更高版本的计算能力。 为 Windows x64 上的 NVIDIA Ampere GPU 架构和 Blackwell GeForce 级 GPU 引入了对 CUDA in Graphics (CIG) 的初步支持。现在,系统会自动检测 CIG 上下文,并且 cuBLAS 会选择符合 CIG 共享内存使用限制的内核。 cuSOLVER 现在支持 zsytrf/zsytrs,这是一款无需旋转的复杂对称直接求解器。 nvJPEG 现在支持 Tegra 架构。 NPP 现在为 DRIVE Thor 架构提供支持。 cudaStreamGetDevice 应用程序通常使用 CUDA 流提供对 GPU 资源的有序访问。CUDA 流实例与固定的 CUDA 设备相关联。在用于处理多台设备的应用中,在某些情况下,为给定流获取底层设备的句柄有助于根据设备特性定制应用。 以前,CUDA API 没有提供检索与 CUDA 流关联的设备的机制;开发者必须自行追踪。添加 cudaStreamGetDevice CUDA API 以检索与 CUDA 流关联的设备,可以简化应用。 编译器更新 新的编译器更新包括以下内容: CUDA 工具包 12.8 版本引入了对作为主机端编译器的 GCC 14 的支持。 现在,Blackwell 架构的默认高级优化器基于 LLVM 18 nvdisasm 现在支持发射 JSON 格式的 SASS 反汇编。 加速 Python 更新 以下两个测试版现已面向 Python 用户提供: CUDA Python 已发布名为 cuda.core 的新惯用对象模型的早期原型,并将 CUDA 绑定移至子模块 cuda.bindings 。有关更多信息,请参阅 NVIDIA/cuda-python GitHub 存储库中的文档。 CUDA 核心计算库 ( CCCL ) 已发布用于并行和协作算法的早期 Python 原型,使您能够使用线程级并行性以及来自纯 Python 代码的用户定义类型和函数。详细了解 CCCL。 此外,CuPy 团队还将发布新版本,其中的 Blackwell 补丁经过验证,现已全面推出。 功能齐全的架构 在 CUDA 工具包 12.8 版本中,我们现在认为 Maxwell、Pascal 和 Volta 架构功能齐全,并且即将发布的版本将冻结对这些架构的支持。 这意味着,在未来的版本中,不会向驱动添加任何新功能来启用支持 Maxwell、Pascal 和 Volta 架构的新 CUDA 工具包功能。最终用户将能够在其生命周期中使用受支持的即将推出的 LTS 驱动分支,在 Maxwell、Pascal 和 Volta 架构上运行现有的软件堆栈和应用。 从版本 12.8 开始,开发者在运行针对这些架构的离线编译器时,将在使用 nvcc 、 nvrtc 和 nvjitlink 时输出警告消息。 在下一个主要 CUDA 工具包版本中,将从编译器中删除对 Maxwell、Pascal 和 Volta 架构的离线编译支持。即将推出的用于生产应用程序执行的 LTS 驱动以及 Maxwell、Pascal 和 Volta 应用程序的 JIT 编译将在正常的 3 年期 LTS 支持窗口期内获得支持。 如需了解更多详情,请参阅 CUDA Toolkit 12.8 版本说明 。 总结 CUDA 工具包 12.8 版本为 NVIDIA Blackwell 架构提供完整的功能支持。无论您是使用 C++ 还是 Python 开发应用程序,此版本都将继续为最新的 NVIDIA GPU、加速库、编译器和开发者工具提供增强支持。 想要了解更多信息?查看 CUDA 文档 ,浏览最新的 NVIDIA Deep Learning Institute (DLI) 产品 ,并访问 NGC 目录 。在 CUDA Developer Forums 中提出问题并加入对话。 致谢 感谢以下 NVIDIA 贡献者:Stephen Jones、Jackson Marusarz、Becca Zandstein、Andy Terrel、Ashraf Eassa、Matt Nicely 和 Mridula Prakash。
https://developer.nvidia.com/blog/high-performance-remote-io-with-nvidia-kvikio/
High-Performance Remote IO With NVIDIA KvikIO
Workloads processing large amounts of data, especially those running on the cloud, will often use an object storage service (S3, Google Cloud Storage, Azure Blob Storage, etc.) as the data source. Object storage services can store and serve massive amounts of data, but getting the best performance can require tailoring your workload to how remote object stores behave. This post is for RAPIDS users who want to read or write data to object storage as quickly as possible so that IO doesn’t bottleneck your workload. Some of your knowledge about how local file systems behave translates to remote object stores, but they are fundamentally different. Probably the biggest difference between the two, at least for data analysis workloads, is that read and write operations on object storage have higher and more variable latency . Every storage service has their own set of best practices and performance guidelines ( AWS , Azure ). Here, we’ll give some general guidelines that are focused on data analysis workloads. Location Placing your compute nodes near the storage service (ideally, in the same cloud region) will give you the fastest and most reliable network between the machines running your workload and the machines serving the data. And, at the end of the day, the transfer will be limited by the speed of light so minimizing the physical distance doesn’t hurt. File format “Cloud-native” file formats have been developed to work well with object storage. These file formats typically provide fast, easy access to metadata (which includes both high-level information like the column names or data types, and lower-level information like where in the file specific data subsets are located). Apache Parquet , Zarr , and Cloud Optimized GeoTIFF are some examples of cloud-native file formats for various types of data. Because object storage services typically support range requests , clients (like cuDF ) can read the metadata and then download just the data you actually need. For example, cuDF can read just a few columns out of a Parquet file with many columns, or a Zarr client can read a single chunk out of a large n-dimensional array. These reads are done in just a few HTTP requests, and without needing to download a bunch of extraneous data that just gets filtered out. File size Because every read operation requires (at least) one HTTP request, we’d prefer to amortize the overhead from each HTTP request over a reasonably large number of bytes. If you control the data-writing process, you’ll want to ensure that the files are large enough for your downstream processing tasks to get good performance. The optimal value depends on your workload, but somewhere in the dozens to low-hundreds of MBs is common for parquet files (see below for some specific examples). That said, you’ll need to be careful with how file size interacts with the next tool in our kit: concurrency. Concurrency Using concurrency to download multiple blobs (or multiple pieces of a single blob) at the same time is essential to getting good performance out of a remote storage service. Since it’s a remote service, your process is going to spend some time (perhaps a lot of time) waiting around doing nothing. This waiting spans the time between when the HTTP request is sent and the response received. During this time, we wait for the network to carry the request, the storage service to process it and send the response, and the network to carry the (possibly large) response. 
While parts of that request/response cycle scale with the amount of data involved, other parts are just fixed overhead. Object storage services are designed to handle many concurrent requests. We can combine that with the fact that each request involves some time waiting around doing nothing, and make many concurrent requests to raise our overall throughput. In Python, this would typically be done using a thread pool: pool = concurrent.futures.ThreadPoolExecutor() futures = pool.map(request_chunk, chunks) Or with asyncio: tasks = [request_chunk_async(chunk) for chunk in chunks] await asyncio.gather(*tasks) We're able to have a lot of reads waiting around doing nothing at the same time, which improves our throughput. Because each thread/task is mostly doing nothing, it's OK to have more threads/tasks than your machine has cores. Given enough concurrent requests, you will eventually saturate your storage service, which has some requests-per-second and bandwidth targets it tries to meet. But those targets are high; you'll typically need many machines to saturate the storage service, and you should achieve very high throughput. Libraries Everything above applies to essentially any library doing remote IO from an object storage service. In the RAPIDS context, NVIDIA KvikIO is notable because it automatically chunks large requests into multiple smaller ones and makes those requests concurrently, it can read efficiently into host or device memory (especially if GPU Direct Storage is enabled), and it's fast. As mentioned in the RAPIDS 24.12 release announcement, KvikIO can achieve impressive throughput when reading from S3. Let's take a look at some benchmarks to see how it does. Benchmarks When you read a file, KvikIO splits that read into smaller reads of kvikio.defaults.task_size bytes. It makes those read requests in parallel using a thread pool with kvikio.defaults.num_threads workers. These can be controlled using the environment variables KVIKIO_TASK_SIZE and KVIKIO_NTHREADS, or through Python with: with kvikio.defaults.set_num_threads(num_threads), kvikio.defaults.set_task_size(size): ... See Runtime Settings for more. This chart shows the throughput, in megabits per second, of reading a 1 GB blob from S3 to a g4dn EC2 instance in the same region for various sizes of the thread pool (higher is better). Figure 1. From a benchmark reading a 1 GB file from S3 to a g4dn.xlarge EC2 instance, which has a published bandwidth of up to 25 Gbps. This is the throughput of kvikio.RemoteFile.read for various values of kvikio.defaults.num_threads and a task size of 16 MiB. Throughput increases as we add more threads and parallelize the reads, up to a point. With fewer than four threads, throughput is lower and the file takes longer to read. More threads (64, 128, 256) achieve higher throughput by parallelizing the requests to the storage service, which serves them in parallel. There are diminishing and even negative returns as we hit the limits of the storage service, network, or other bottlenecks in our system. With remote IO, each thread spends a relatively long time idle waiting for the response, so a higher number of threads (relative to your number of cores) might be appropriate for your workload. We see that the throughput is highest between 64 and 128 threads in this case. As shown in the next figure, the task size also affects the maximum throughput. Figure 2. From a benchmark reading a 1 GB file from S3 to a g4dn.xlarge EC2 instance, which has a published bandwidth of up to 25 Gbps.
This shows a heatmap of the throughput of kvikio.RemoteFile.read. The horizontal axis shows various task sizes, while the vertical axis shows various thread counts. As long as the task size isn't too small (around or below 4 MiB) or too large (around or above 128 MiB), we get around 10 Gbps of throughput. With too small a task size, the overhead of making many HTTP requests reduces throughput. With too large a task size, we don't get enough concurrency to maximize throughput. KvikIO achieves higher throughput on this workload when compared with boto3, the AWS SDK for Python, even when boto3 is used in a thread pool to execute requests concurrently. Figure 3. From a benchmark reading a 1 GB file from S3 to a g4dn.xlarge EC2 instance, which has a published bandwidth of up to 25 Gbps. The KvikIO benchmark used 64 threads and a 16 MiB task size. The boto3 benchmark used a ThreadPool to read many 4 MB chunks in parallel, which a parameter search showed to be the fastest chunk size for boto3. As a slightly more realistic workload, though still one focused solely on IO, we compare the performance of reading a batch of 360 Parquet files, each about 128 MB. This was run on an AWS g4dn.12xlarge instance, which has 4 NVIDIA T4 GPUs and 48 vCPUs. Figure 4. From a benchmark reading a Parquet dataset from S3 to a g4dn.12xlarge EC2 instance, which has a published bandwidth of up to 50 Gbps. The dataset had 360 Apache Parquet files of about 128 MB each, for a total of about 46 GB. The Dask cluster had 4 workers. These results use cuDF 25.04, which will include an optimization to read Parquet footers in parallel. With KvikIO enabled, the four Dask worker processes are able to collectively achieve almost 20 Gbps of throughput from S3 to this single node. Conclusion As RAPIDS accelerates other parts of your workload, IO can become a bottleneck. If you're using object storage and are tired of waiting around for your data to load, try out some of the recommendations from this post. Let us know how things work with KvikIO on GitHub. You can also join over 3,500 members on the RAPIDS Slack community to talk GPU-accelerated data processing.
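If you want to see the pattern KvikIO automates for you in isolation, the thread-pool snippet from the Concurrency section above can be completed with a small helper that fetches one byte range per request. The sketch below is illustrative only: the object URL, the chunk size, and the request_chunk helper are assumptions for demonstration, and it uses plain HTTP range requests with the requests library rather than any KvikIO API.

import concurrent.futures
import requests

URL = "https://example-bucket.s3.amazonaws.com/data/blob.bin"  # placeholder object URL
TASK_SIZE = 16 * 1024 * 1024  # 16 MiB per request, mirroring the task size used in the benchmarks above

def request_chunk(byte_range):
    start, stop = byte_range
    # A range request fetches only part of the object; the byte offsets are inclusive.
    resp = requests.get(URL, headers={"Range": f"bytes={start}-{stop - 1}"})
    resp.raise_for_status()
    return start, resp.content

def read_blob(total_size, num_threads=64):
    # Split the object into fixed-size tasks, then fetch them concurrently with a thread pool.
    chunks = [(start, min(start + TASK_SIZE, total_size)) for start in range(0, total_size, TASK_SIZE)]
    buf = bytearray(total_size)
    with concurrent.futures.ThreadPoolExecutor(max_workers=num_threads) as pool:
        for start, data in pool.map(request_chunk, chunks):
            buf[start:start + len(data)] = data
    return bytes(buf)

KvikIO performs this same chunk-and-parallelize work internally (and can read into device memory directly), so in practice you would tune KVIKIO_NTHREADS and KVIKIO_TASK_SIZE rather than writing this loop yourself.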
https://developer.nvidia.com/zh-cn/blog/high-performance-remote-io-with-nvidia-kvikio/
借助 NVIDIA KvikIO 实现高性能远程 IO
处理大量数据的工作负载 (尤其是在云端运行的工作负载) 通常会使用对象存储服务 (S3、Google Cloud Storage、Azure Blob Storage 等) 作为数据源。对象存储服务可以存储和提供海量数据,但要想获得最佳性能,可能需要根据远程对象存储的行为方式调整工作负载。本文适用于希望尽快将数据读或写到对象存储,以便 IO 不会限制工作负载的 RAPIDS 用户。 您对本地文件系统行为方式的一些了解可转换为远程对象存储,但它们本质上是不同的。这两者之间的最大区别 (至少对于数据分析工作负载而言) 可能在于,对象存储上的读取和写入操作具有越来越高的可变延迟。每个存储服务 (AWS、Azure) 都有自己的一套最佳实践和性能指南。在这里,我们将提供一些专注于数据分析工作负载的一般指南。 地址 将计算节点放置在存储服务附近 (理想情况下,应位于同一云区域),可在运行工作负载的计算机和为数据提供服务的计算机之间提供速度最快、最可靠的网络。在一天结束时,传输将受到光速的限制,因此最大限度地减少物理距离不会造成伤害。 文件格式 “云原生”文件格式的开发能够很好地与对象存储配合使用。这些文件格式通常可让用户快速轻松地访问元数据 (元数据包括列名称或数据类型等高级信息,以及文件特定数据子集所在位置等低级信息)。 Apache Parquet 、 Zarr 和 Cloud Optimized GeoTIFF 是适用于各种类型数据的云原生文件格式的一些示例。 由于对象存储服务通常支持范围请求,因此客户端 (如 cuDF ) 可以读取元数据,然后只下载您实际需要的数据。例如,cuDF 只能从包含多列的 Parquet 文件中读取几列数据,或者 Zarr 客户端可以从大型 n 维数组中读取单个 chunk。这些读取只需通过几次 HTTP 请求即可完成,而且无需下载一堆刚刚被过滤掉的不相干数据。 文件大小 由于每个读取操作都需要 (至少) 一个 HTTP 请求,因此我们倾向于在合理数量的字节数上分担每个 HTTP 请求的用度。如果您控制数据写入过程,则需要确保文件足够大,以便下游处理任务获得良好性能。最佳值取决于您的工作负载,但 parquet 文件的大小通常介于数十 MB 到数百 MB 之间 (请参阅下文,了解一些特定示例)。 也就是说,您需要注意文件大小与 Kit 中的下一个工具:并发的交互方式。 并发 使用并发同时下载多个 blobs (或单个 blob 的多个部分) 对于从远程存储服务中获得良好性能至关重要。由于这是一项远程服务,您的流程将花费一些时间 (可能会花费大量时间) 四处等待,不执行任何操作。此等待时间为 HTTP 请求被发送到响应被接收之间的时间。在此期间,我们会等待网络执行请求,等待存储服务处理并发送响应,等待网络执行响应 (可能较大)。虽然该请求/响应周期的一部分会随所涉及的数据量而扩展,但其他部分只是固定的开销。 对象存储服务旨在处理许多并发请求。我们可以将这一点与每个请求都涉及一些时间来等待不执行任何操作的事实相结合,以发出许多并发请求来提高整体吞吐量。在 Python 中,这通常使用线程池完成: pool = concurrent.futures.ThreadPoolExecutor() futures = pool.map(request_chunk, chunks) 或使用 异步 : tasks = [request_chunk_async(chunk) for chunk in chunks] await asyncio.gather(*tasks) 我们能够让大量读取 同时 不执行任何操作,从而提高吞吐量。由于每个线程/任务通常不执行任何任务,因此拥有比计算机核心数更多的线程/任务也是可以的。如果并发请求数量足够多,您最终会使存储服务饱和,而存储服务试图满足一些每秒请求数和带宽目标数。但这些目标很高;您通常需要多台机器使存储服务饱和,并且应该实现非常高的吞吐量。 库 上述内容基本上适用于从对象存储服务执行远程 IO 的任何库。在 RAPIDS 环境中, NVIDIA KvikIO 值得注意,因为 它会自动将大型请求分块为多个较小的请求,并并发发出这些请求。 它可以高效读取主机或设备内存,尤其是启用 GPU Direct Storage 时。 速度很快。 正如 RADIDS 24.12 发布公告中提到的那样,从 S3 读取数据时,KvikIO 可以实现惊人的吞吐量。我们来看看一些基准测试,看看效果如何。 基准测试 当您读取文件时,KvikIO 会将读取的文件拆分成较小的 kvikio.defaults.task_size 字节读取。它使用具有 kvikio.defaults.num_threads 工作线程的线程池并行执行这些读取请求。可以使用环境变量 KVIKIO_TASK_SIZE 和 KVIKIO_NTHREADS 控制这些内容,也可以通过 Python 使用: with kvikio.defaults.set_num_threads(num_threads), kvikio.defaults.set_task_size(size): ... 
详情请参阅 Runtime Settings 。 此图表显示了在同一区域内,针对不同大小的线程池,从 S3 到 g4dn EC2 实例读取 1 GB Blob 的吞吐量 (以 Mbps 为单位) (越高越好)。 图 1、从 S3 读取 1 GB 文件的基准测试,到具有高达 25 Gbps 已发布带宽的 g4dn.xlarge EC2 实例。这是 kvikio.RemoteFile.read 的吞吐量,适用于各种值的 kvikio.defaults.num _threads 和 16 MiB 的任务。随着我们添加更多线程并对读取进行并行化,吞吐量会增加到一定程度。 线程越少 (少于 4 个),吞吐量越低,读取文件的时间越长。更多线程 (64、128、256) 通过将请求并行化到以并行方式提供服务的存储服务,实现更高的吞吐量。当我们遇到系统中存储服务、网络或其他瓶颈的限制时,会出现递减甚至负回报的情况。 借助远程 IO,每个线程都会在相对较长的时间内等待响应,因此对于您的工作负载,可能适合使用更多线程 (相对于核心数量而言)。我们看到,在本例中,吞吐量最高,介于 64 到 128 个线程之间。 如下图所示,任务大小也会影响最大吞吐量。 图 2、从 S3 读取 1 GB 文件的基准测试,到具有高达 25 Gbps 已发布带宽的 g4dn.xlarge EC2 实例 。这显示了 kvikio.RemoteFile.read 吞吐量的热图。水平轴显示各种任务大小的吞吐量,而垂直轴显示各种线程数量。 只要任务大小不是太小(大约或低于 4 MiB)或太大(大约或超过 128 MiB),吞吐量就会达到 10 Gbps 左右。由于任务规模过小,发出许多 HTTP 请求会降低吞吐量。由于任务规模过大,我们无法获得足够的并发能力来最大限度地提高吞吐量。 与 boto3 (适用于 Python 的 AWS SDK) 相比,即使在线程池中使用 boto3 并发执行请求,KvikIO 也能实现更高的吞吐量。 图 3、从从 S3 读取 1 GB 的基准测试,到具有高达 25 Gbps 已发布带宽的 g4dn.xlarge EC2 实例。KvikIO 基准测试使用 64 个线程和 16 MiB 任务大小。Boto3 基准测试使用 ThreadPool 并行读取许多 4 MB 字节的块,而参数搜索表明,对于 Boto3 而言,这是最快的块大小。 对于略为逼真的工作负载 (尽管仍然仅有一个工作负载专注于 IO),我们比较了读取一批 360 个 parquet 文件 (每个文件约 128 MB) 的性能。这在 AWS g4dn.12xlarge 实例上运行,该实例包含 4 个 NVIDIA T4 GPU 和 48 个 vCPUs。 图 4、从读取 S3 中的 Parquet 数据集的基准测试,到具有高达 50 Gbps 已发布带宽的 g4dn.12xlarge EC2 实例。该数据集包含 360 个 Apache Parquet 文件,每个文件约 128 MB,总计约 46 GB。Dask 集群有 4 个工作者。这些结果使用 cuDF 25.04,其中包括并行读取 Parquet 文件页脚的优化。 启用 KvikIO 后,四个 Dask 工作进程能够共同实现从 S3 到此单个节点的近 20 Gbps 吞吐量。 结束语 随着 RAPIDS 加速工作负载的其他部分,IO 可能会成为瓶颈。如果您使用的是对象存储,并且已经疲于等待数据加载,请尝试本博文中的一些建议。让我们了解如何在 Github 上使用 KvikIO。您还可以与 RAPIDS Slack 社区的 3,500 多名成员一起讨论 GPU 加速的数据处理。
https://developer.nvidia.com/blog/latest-multimodal-addition-to-microsoft-phi-slms-trained-on-nvidia-gpus/
Latest Multimodal Addition to Microsoft Phi SLMs Trained on NVIDIA GPUs
Large language models (LLMs) have permeated every industry and changed the potential of technology. However, due to their massive size, they are not practical for the resource constraints that many companies currently face. The rise of small language models (SLMs) bridges quality and cost by creating models with a smaller resource footprint. SLMs are a subset of language models that tend to focus on specific domains and are built with simpler neural architectures. As models evolve to mimic how humans perceive the world around them, they must accept multiple forms of multimodal data. Microsoft has announced a new generation of open SLMs in the Phi family, with two new additions: Phi-4-mini and Phi-4-multimodal. Phi-4-multimodal is the first multimodal model to join the family, accepting text, audio, and image data inputs. These models are small enough for on-device deployment. This release builds on top of the December 2024 research-only release of the Phi-4 14B parameter SLM and enables commercial use for the two new smaller models. The new models are available on Azure AI Foundry, Microsoft's cloud AI platform for designing, customizing, and managing AI applications and agents. You can test out each member of the Phi family through the NVIDIA API Catalog, which is the first sandbox environment to support every modality and tool-calling for Phi-4-multimodal. Use the preview NIM microservice to integrate the model into your applications today. Why invest in SLMs? SLMs enable generative AI capabilities in memory- and compute-constrained environments. For example, SLMs can be deployed directly on smartphones and other consumer-grade devices. On-device deployment can facilitate privacy and compliance for use cases that must adhere to regulatory requirements. Other benefits of SLMs include lower latency due to inherently faster inference compared to an LLM of similar quality. SLMs do tend to perform better on specialized tasks correlated to their training data. However, to supplement generalization and adaptability to different tasks, you can use retrieval-augmented generation (RAG) or native function calling to build performant agentic systems. Phi-4-multimodal Phi-4-multimodal has 5.6B parameters and accepts audio, image, and text inputs for reasoning. This enables it to support use cases such as automated speech recognition (ASR), multimodal summarization, translation, OCR, and visual reasoning. This model was trained on 512 NVIDIA A100-80GB GPUs over 21 days. Figure 1 shows how you can preview your image data and ask Phi-4-multimodal visual QA questions in the NVIDIA API Catalog. You can also see how to adjust parameters such as token limits, temperature, and sampling values, and you can generate sample code in Python, JavaScript, and Bash to help you integrate the model more easily into your applications. Figure 1. Visual QA demo in NVIDIA API Catalog You can also demo tool calling with a set of prebuilt agents. Figure 2 shows a tool that retrieves live weather data. Figure 2. Tool-calling demo in NVIDIA API Catalog Phi-4-mini Phi-4-mini is a text-only, dense, decoder-only Transformer model with 3.8B parameters that is optimized for chat. It includes a long-form context window of 128K tokens. This model was trained on 1024 NVIDIA A100 80GB GPUs over 14 days. For both models, the training data is intentionally focused on high-quality educational data and code, which results in a textbook-like quality to the models. Text, speech, and vision benchmark data can be found in the model cards.
Advancing community models NVIDIA is an active contributor to the open-source ecosystem and has released several hundred projects under open-source licenses. NVIDIA is committed to optimizing community software and open models such as Phi, which promotes AI transparency and lets users broadly share work in AI safety and resilience. Using the NVIDIA NeMo platform, these open models can be customized on proprietary data to be highly tuned and efficient for diverse AI workflows across any industry. NVIDIA and Microsoft have a long-standing partnership that includes several collaborations driving innovation on GPUs on Azure, integrations and optimizations for PC developers using NVIDIA RTX GPUs, and research spanning generative AI to healthcare and life sciences. Get started today Bring your data and try out Phi-4 on the NVIDIA-accelerated platform at build.nvidia.com/microsoft. In the first multimodal sandbox for Phi-4-multimodal, you can try out text, image, and audio inputs as well as sample tool calling to see how this model will work for you in production.
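For programmatic access, the NVIDIA API Catalog endpoints are OpenAI-API compatible, so a first call can be as small as the sketch below. Treat it as a starting point rather than a definitive recipe: the model identifier string is an assumption for illustration and should be confirmed against the Phi-4 listing on build.nvidia.com, and NVIDIA_API_KEY is a placeholder for your own key.

from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # NVIDIA API Catalog endpoint (OpenAI-compatible)
    api_key="NVIDIA_API_KEY",  # placeholder: substitute your actual API key
)

# The model ID below is assumed for illustration; check the exact ID on the model's API Catalog page.
completion = client.chat.completions.create(
    model="microsoft/phi-4-multimodal-instruct",
    messages=[{"role": "user", "content": "Summarize the key differences between Phi-4-mini and Phi-4-multimodal."}],
    temperature=0.2,
    max_tokens=256,
)
print(completion.choices[0].message.content)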
https://developer.nvidia.com/zh-cn/blog/latest-multimodal-addition-to-microsoft-phi-slms-trained-on-nvidia-gpus/
在 NVIDIA GPU 上训练的 Microsoft Phi SLM 的多模态最新进展
大语言模型(LLMs)已渗透到各行各业,并改变了技术潜力。但是,由于规模庞大,它们对于许多公司目前面临的资源限制来说并不切实际。 小语言模型 (SLMs)的兴起通过创建资源占用更小的模型,将质量和成本联系起来。SLMs 是语言模型的一个子集,这些模型倾向于专注于特定领域,并使用更简单的神经架构构建。随着模型的发展模仿人类感知周围环境的方式,模型必须接受多种形式的多模态数据。 Microsoft 宣布在 Phi 系列中 推出新一代开放式 SLM ,并新增两项功能: Phi-4-mini Phi-4-multimodal Phi-4-multimodal 是第一个加入该系列的多模态模型,接受文本、音频和图像数据输入。 这些模型足够小,可以在设备上部署。此版本基于 2024 年 12 月发布的 Phi-4 14B 参数 SLM 的研究版本构建而成,可用于两个新的较小模型的商业用途。 这些新模型可在 Microsoft 的云 AI 平台 Azure AI Foundry 上使用,用于设计、定制和管理 AI 应用和代理。 您可以通过 NVIDIA API Catalog 测试 Phi 系列的每个成员,这是第一个支持 Phi-4 多模态 的每种模式和工具调用的沙盒环境。立即使用预览 NIM 微服务将模型集成到您的应用中。 为何投资 SLM? SLMs 可在内存和计算受限环境中实现生成式 AI 功能。例如,SLMs 可以直接部署在智能手机和多台消费级设备上。对于必须遵守监管要求的用例,设备端部署可以促进隐私和合规性。 SLM 的其他优势包括降低延迟,因为与质量相似的 LLM 相比,其本身的推理速度更快。SLM 在处理与其训练数据相关的专业任务时往往表现得更好。但是,为了补充对不同任务的泛化和适应性,您可以使用检索增强生成(RAG)或原生函数调用来构建高性能代理系统。 Phi-4-multimodal Phi-4-multimodal 具有 5.6B 个参数,接受音频、图像和文本推理。这使其能够支持自动语音识别 (ASR)、多模态摘要、翻译、OCR 和视觉推理等用例。该模型在 512 个 NVIDIA A100-80GB GPUs 上进行了为期 21 天的训练。 事实证明,该模型在 ASR 方面表现出色,因为它在 Huggingface OpenASR 排行榜上排名第一 ,单词错误率为 6.14%。 词错误率 (WER) 是量化语音识别性能的常用计算方法。WER 计算不正确转录的单词 (替换、插入和删除) 与正确文本相比所占的百分比。 图 1 展示了如何在 NVIDIA API Catalog 中预览图像数据并询问 Phi-4 多模态视觉问答。您还可以了解如何调整参数,例如令牌限制、温度和采样值。您可以使用 Python、JavaScript 和 Bash 生成示例代码,以帮助您更轻松地将模型集成到应用中。 图 1、NVIDIA API Catalog 中的可视化问答演示 您还可以使用一组预构建代理演示工具调用。图 2 显示了用于检索实时天气数据的工具。 图 2、NVIDIA API Catalog 中的工具调用演示 Phi-4-mini Phi-4-mini 是一个仅文本、密集、仅解码器的 Transformer 模型,具有 3.8B 个参数,并针对聊天进行了优化。它包含一个包含 128K 个令牌的长形式上下文窗口。该模型在 1024 个 NVIDIA A100 80GB GPUs 上进行了为期 14 天的训练。 对于这两个模型,训练数据有意地集中在高质量的教育数据和代码上,从而使模型获得类似于教科书的质量。您可以在模型卡中找到文本、语音和视觉基准测试数据。 推进社区模式 NVIDIA 是开源生态系统的积极贡献者,已根据开源许可发布了数百个项目。NVIDIA 致力于优化社区软件和 open-source licenses 中的项目,如 Phi,它促进了 AI 透明度,并让用户广泛分享在 AI 安全性和弹性方面的工作。 借助 NVIDIA NeMo 平台,这些开放模型可以根据专有数据进行定制,以便针对各行各业的各种 AI 工作流进行高度调整并提高效率。 NVIDIA 和 Microsoft 有着长期的合作伙伴关系,其中包括推动 Azure 上 GPU 创新的多项合作、为使用 NVIDIA RTX GPU 的 PC 开发者提供的集成和优化,等等,包括从生成式 AI 到医疗健康和生命科学的研究。 立即开始使用 请访问 build.nvidia.com/microsoft ,带上您的数据并在 NVIDIA 加速平台上试用 Phi-4。 在 Phi-4 多模态的第一个多模态沙盒中,您可以尝试使用文本、图像、音频以及示例工具调用,以了解此模型在生产环境中的工作原理。
https://developer.nvidia.com/blog/building-a-simple-vlm-based-multimodal-information-retrieval-system-with-nvidia-nim/
Building a Simple VLM-Based Multimodal Information Retrieval System with NVIDIA NIM
In today’s data-driven world, the ability to retrieve accurate information from even modest amounts of data is vital for developers seeking streamlined, effective solutions for quick deployments, prototyping, or experimentation. One of the key challenges in information retrieval is managing the diverse modalities in unstructured datasets, including text, PDFs, images, tables, audio, video, and so on. Multimodal AI models address this challenge by simultaneously processing multiple data modalities, generating cohesive and comprehensive output in different forms. NVIDIA NIM microservices simplify the secure and reliable deployment of AI foundation models for language, computer vision , speech, biology, and more. NIM microservices can be deployed on NVIDIA-accelerated infrastructure anywhere and expose industry-standard APIs for fast integration with applications and popular AI development frameworks, including LangChain and LlamaIndex. This post helps you get started with building a vision language model (VLM) based, multimodal, information retrieval system capable of answering complex queries involving text, images, and tables. We walk you through deploying an application using LangGraph, the state-of-the-art llama-3.2-90b-vision-instruct VLM, the optimized mistral-small-24B-instruct large language model (LLM), and NVIDIA NIM for deployment. This method of building simple information retrieval systems offers several advantages over traditional ones. The latest VLM NIM microservice enables enhanced contextual understanding by processing lengthy, complex visual documents without sacrificing coherence. The integration of LangChain’s tool calling enables the system to create tools, dynamically select and use external tools, and improve the precision of data extraction and interpretation from various sources. This system is good for enterprise applications because it generates structured outputs, ensuring consistency and reliability in responses. For more information about the implementation steps of this system, see the /NVIDIA/GenerativeAIExamples GitHub repo. A simple HTML multimodal retrieval pipeline The system consists of the following pipelines: Document ingestion and preprocessing: Runs a VLM on the images and translates them into text. Question-answering: Enables the user to ask questions of the system. Both pipelines integrate NVIDIA NIM and LangGraph to process and understand text, images, complex visualizations, and tables effectively. Data ingestion and preprocessing pipeline This stage parses documents to process text, images, and tables separately. Tables are first converted into images, and images are processed by the NVIDIA-hosted NIM microservice API endpoint for the llama-3.2-90b-vision-instruct VLM to generate descriptive text. Next, in the document reconstruction step, the descriptive text is merged with the original text of the document, then summarized by an LLM with long context modeling capability. In this implementation, llama-3.2-90b-vision-instruct is also used as the LLM, although other LLMs such as mistral-small-24b-instruct can also be deployed. Finally, the complete text, summaries, images, and their descriptions are stored in a NoSQL database, along with unique document identifiers. Figure 1. 
Data ingestion and preprocessing pipeline LLMs with long context modeling can process entire documents without fragmentation, enhancing comprehension of the document in a single pass, and capturing relationships and nuances across longer spans of text, leading to more accurate information retrieval. In contrast, traditional models may handle inputs of up to a few thousand tokens, requiring lengthy documents to be split into smaller chunks to fit within the model’s context window. This chunking process can disrupt coherence and context, making it more difficult to accurately retrieve and rank relevant information. However, long context modeling presents challenges related to scalability and cost, which must be considered when trading off with higher accuracy. QA pipeline All document summaries and their identifiers are compiled into a large prompt. When a query is sent, a LLM with long context modeling (mistral-small-24b-instruct in this case) processes the question, evaluates the relevance of each summary to the query, and returns the identifiers of the most relevant documents. Figure 2. Question-answering pipeline Next, the most relevant documents are fed into an LLM with long context (mistral-small-24b-instruct). The model generates an answer to the query based on the textual content. If the model identifies that an image may contain pertinent information based on its descriptive text, an additional step is triggered: the original image and the user’s question are sent to the VLM (llama-3.2-90b-vision-instruct), which can provide an answer based on the actual visual content. Finally, the system combines both textual and visual insights to deliver a comprehensive answer. Structured outputs ensure that the data returned by the model conforms to a predefined format, making it easier to extract specific information and perform subsequent operations. In contrast, unstructured or variable outputs can introduce ambiguities and difficulties in parsing the model’s responses, hindering automation and integration with other systems. Generating structured data from models typically requires carefully designed prompts to guide the model into responding in a particular format, such as JSON. However, ensuring consistent adherence to this structure can be challenging due to the models’ natural tendency to generate free-form text. NVIDIA NIM now natively supports capabilities for generating structured outputs . This means that you can rely on built-in functionalities to ensure that the model’s responses are consistently formatted, reducing the need for complex prompt engineering. Integrating NVIDIA NIM with LangGraph NVIDIA NIM offers seamless compatibility with popular frameworks and the latest AI models for your applications. The implementation of the pipeline integrates NVIDIA NIM with LangGraph , a framework to build agentic applications to determine the control flow, which has been widely adopted by the developer community. To orchestrate the workflow of this pipeline, the graph mainly consists of two nodes: Assistant node: Serves as an agent responsible for managing the logic and decision-making process. It interacts with the user’s inputs and invokes the necessary tools. Tools node: A collection of tools that perform specific tasks required by the assistant. Figure 3. Use LangGraph to build an agent for the pipeline Assistant node The assistant node is a primary agent that operates according to the workflow outlined in Figure 3. 
The code of the main agent can be found in the /NVIDIA/GenerativeAIExamples GitHub repo. Here are the agent inputs: Collection_name : The set of documents on which to search. Question : The user’s question. document_id : (Optional) If provided, the agent skips the document ranking phase. This is the agent process: Document selection : If document_id is not provided, the agent invokes the find_best_document_id tool, which identifies the most relevant document for the user’s question within the specified collection. Question answering : With document_id , the agent uses the query_document tool. This tool attempts to answer the question using the LLM (mistral-small-24b-instruct) based on the text and image descriptions within the document. Image analysis (if necessary): If the query_document tool indicates that the answer might be in an image (by returning an image_hash value), the agent invokes the query_image tool. This tool retrieves the actual image and uses a VLM to analyze the image and find the answer. Tools node We implemented three key tools for the agent to perform its tasks. Find_best_document_id : Identify the most relevant document for the user’s question when document_id is not provided. For more information, see the /NVIDIA/GenerativeAIExamples GitHub repo. query_document : Search for an answer within the specified document. If the answer may be in an image, it provides details to query the image. For more information, see the /NVIDIA/GenerativeAIExamples GitHub repo. query_image : Analyze the actual image using a VLM when the answer might be within the image content. For more information, see the /NVIDIA/GenerativeAIExamples . Binding external tools with models Tool calling is a feature that enables language models to integrate and interact with external tools or functions based on the prompts that they receive. This mechanism enables a model to decide which tools to use and how to use them to accomplish specific tasks. Tool binding empowers models to extend their capabilities dynamically, selecting appropriate tools during execution to provide more accurate, context-aware responses. Binding external tools is particularly crucial in agentic frameworks, where agents must choose the appropriate tools and provide the necessary arguments to perform tasks effectively. The benefits of binding external tools include the following: Extended capabilities : Models can perform complex operations such as calculations, data retrieval, or API calls, which go beyond mere text generation. Dynamic tool selection : The model can assess in real time which tools are most suitable for the task, improving efficiency and relevance. Seamless integration : NVIDIA NIM supports the integration of external tools, such as LangChain and LangGraph, with open community models such as Llama 3.3. You can adopt these advanced features without making significant changes to your existing systems. In this implementation, use LangChain’s @tool decorator to create three tools, then use the .bind_tools method to bind the tools with models. Defining structured outputs with Pydantic By defining the output schema with Pydantic and guiding an LLM NIM microservice such as mistral-small-24b-instruct through precise prompts, you ensure that the responses are consistent, reliable, and easily consumable by other components within the system. This approach is essential when integrating the LLM into automated workflows and agent-based frameworks such as LangGraph. 
Define the structure The process begins by defining the structure of the output that you expect from the LLM using Pydantic. This guarantees that the data returned by the model is consistent and can be easily parsed for downstream processing . from typing import List, Optional from pydantic import BaseModel, Field class Document(BaseModel): """ Represents a document with an identifier and its summary. """ id: str = Field(..., description="Hash identifier of the document") summary: str = Field(..., description="The summary of the document as is") class BestDocuments(BaseModel): """ Contains a list of the best documents to answer the question and their summaries. """ documents: List[Document] = Field(..., description="List of best documents") class Answer(BaseModel): """ Represents the answer to the user's question. """ answer: str = Field(..., description="Answer to the question posed by the user") Next, instruct the LLM to generate outputs that align with the defined Pydantic structures. This is achieved by incorporating specific instructions within the prompt and using LangChain’s with_structured_output method. Define the prompt The prompt_document_expert contains detailed instructions for the LLM, specifying the expected input format (Markdown with document summaries) and the required output format (JSON matching the BestDocuments schema). from langchain.chat_models import ChatNVIDIA from langchain.prompts import ChatPromptTemplate # Initialize the LLM with desired parameters llm = ChatNVIDIA(model="mistralai/mistral-small-24b-instruct ", temperature=0, max_tokens=3000) # Define the prompt template for the document expert prompt_document_expert = ChatPromptTemplate.from_messages( [ ( "system", f""" # Extract Best Document Identifier from list of summaries, based on a question coming from the user. You are an expert in getting insights of a document, based on its summaries and you are able to figure the best matches to the question in terms of the summary of the document. Provide no more than 3 of these documents. ## Format of the Input - The input is a markdown file containing second level headers (##) with the chapter index in the form ## Document <document_id> where document_id is an integer pointing to the index of the document. After the document heading there is the summary of the document which is relevant to understand the content of the document. ## Format of the output - The output is going to be the list of the best documents indices and a few of the corresponding summaries that help to answer the question coming from the user. ## Content - Here is the input you can work on: {{documents_context}} """, ), ( "human", "Can you tell me what are the most relevant document ids for this question: {question}" ), ("human", "Tip: Make sure to answer in the correct format"), ] ) Prepare context The get_context function prepares the input data by retrieving document summaries and formatting them appropriately. def get_context(input_data: dict) -> dict: collection_name = input_data.get("collection_name") question = input_data.get("question") documents_context = get_document_summaries_markdown(collection_name) # print(context) return {"documents_context": documents_context, "collection_name": collection_name, "question": question} Bind the structured output The llm.with_structured_output(BestDocuments) method instructs the LLM to produce output conforming to the BestDocuments Pydantic model. 
This method internally handles the parsing and validation of the LLM’s response, ensuring that the output matches the expected structure. LangChain’s with_structured_output method simplifies the process of binding the model to produce structured outputs. It abstracts the complexity of parsing and validating the LLM’s responses, enabling you to focus on defining the desired output structure and the prompt instructions. Finally, create a chain to process the input and generate the structured output: chain_document_expert = ( RunnableLambda(get_context) | prompt_document_expert | llm.with_structured_output(BestDocuments) | (lambda x: x.dict()) ) End-to-end tool in action To get started with the multimodal retrieval system, clone the /NVIDIA/GenerativeAIExamples GitHub repo and follow the Quick Start guide to set up the service. When it’s up and running, open your web browser and navigate to http://localhost:7860 to access the system through the Gradio user interface. For example, explore how the system processes queries on the NVIDIA Technical Blog. Ask a question about a bar chart showing the NVIDIA H100 GPU performance from one of the posts. The Select Question field is for evaluation purposes, with the Ground Truth Answer field value provided by a human. Figure 4. Agent multi-document evaluation This system generates an accurate answer based on the bar chart and also displays the relevant image for reference, such as the chart showing RetinaNet achieving 54%. This ensures precise answers while enabling users to visually verify the referenced data. Figure 5. Agent result with source graph for verification Video 1. How to Insert HTML Documents into a Multimodal Retriever Collection Using NVIDIA NIM Video 2. How to Search Text and Images Within a Multimodal Retriever Collection Using NVIDIA NIM Challenges and solutions As data volumes increase, so does the complexity of processing and retrieving relevant information. Handling large datasets efficiently is essential to maintaining performance and ensuring user satisfaction. In this information retrieval system, the sheer amount of document summaries can exceed the context window of even long-context models, making it challenging to process all summaries in a single prompt. Processing large volumes of data also demands considerable computational resources, which can result in higher costs and increased latency. Optimizing resource utilization is crucial to delivering fast and accurate responses while minimizing unnecessary expenses. Hierarchical document reranking solution To address scalability challenges, we implemented a hierarchical approach in the initial document reranking phase. Instead of processing all document summaries simultaneously, we divided them into manageable batches that fit within the model’s context window. The process involves multiple stages: Batch processing : Summaries are grouped into batches that the model can handle without exceeding the prompt size limitations. Intermediate reranking : The model evaluates each batch separately, ranking the documents within each group. Selection of top candidates : The most relevant documents from each batch are selected to proceed to the next stage. Final reranking : The top candidates from all batches are combined and re-evaluated to identify the most relevant document. Considering both scalability and cost concerns, this hierarchical approach ensures that all documents are considered without exceeding the model’s capacity. 
It not only improves scalability, but also boosts efficiency by narrowing down the candidate documents systematically until the most relevant one is identified. Future prospects with smaller models Using language models, especially those with long-context capabilities, involves processing a large number of tokens, which can incur significant costs. Each token processed adds to the overall expense, making cost management a critical consideration when deploying these systems at scale. The concern about cost is indeed valid. However, the landscape of language models is rapidly evolving, with smaller models becoming increasingly capable and efficient. As these advancements continue, smaller models may offer similar performance at a fraction of the cost. Conclusion This post discussed the implementation of a simple multimodal information retrieval pipeline that uses NVIDIA NIM and LangGraph. The pipeline offers several advantages over existing information retrieval methods: enhanced comprehension of documents, a multimodal model to extract information from images, tables, and text, seamless integration of external tools, and generation of consistent and structured output. Using NVIDIA NIM and LangGraph, you can build on this work and customize it to suit specific needs. To get started, you can find the source code in the /NVIDIA/GenerativeAIExamples GitHub repo. NVIDIA NIM also offers access to more models optimized for NVIDIA GPUs. You can explore NVIDIA NeMo, a scalable generative AI framework designed for researchers and PyTorch developers working on LLMs, multimodal models, and more. If you are working with a large corpus of enterprise data and are looking to develop enterprise-ready, real-time multilingual and cross-lingual information retrieval systems to generate context-aware responses, learn more about NVIDIA NeMo Retriever.
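As a quick reference for the tool-binding step described in the "Binding external tools with models" section, a minimal sketch of the @tool plus .bind_tools pattern is shown below. The tool bodies here are placeholders, ChatNVIDIA is assumed to come from the langchain-nvidia-ai-endpoints package, and whether a given model supports tool calling should be checked on its API Catalog page; the actual tools used by the reference implementation live in the /NVIDIA/GenerativeAIExamples repo.

from langchain_core.tools import tool
from langchain_nvidia_ai_endpoints import ChatNVIDIA

@tool
def find_best_document_id(collection_name: str, question: str) -> str:
    """Return the identifier of the document most relevant to the question."""
    # Placeholder body: the real tool ranks document summaries with the LLM.
    return "doc-123"

@tool
def query_document(document_id: str, question: str) -> str:
    """Answer the question from the text and image descriptions of one document."""
    # Placeholder body: the real tool prompts the LLM over the reconstructed document.
    return "answer or image_hash"

llm = ChatNVIDIA(model="mistralai/mistral-small-24b-instruct", temperature=0)
# bind_tools exposes the tool schemas to the model so it can decide which tool to call and with what arguments.
llm_with_tools = llm.bind_tools([find_best_document_id, query_document])

response = llm_with_tools.invoke("Which document discusses H100 inference throughput?")
print(response.tool_calls)  # the tool calls the model chose to make, if any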
https://developer.nvidia.com/zh-cn/blog/building-a-simple-vlm-based-multimodal-information-retrieval-system-with-nvidia-nim/
使用 NVIDIA NIM 构建基于 VLM 的简单多模态信息检索系统
在当今数据驱动的世界中,即使是从少量数据中检索准确信息的能力,对于寻求精简、有效的快速部署、原型设计或实验解决方案的开发者来说也至关重要。信息检索领域的主要挑战之一是管理非结构化数据集中的各种模式,包括文本、PDF、图像、表格、音频、视频等。 多模态 AI 模型通过同时处理多个数据模式来应对这一挑战,以不同的形式生成连贯一致的全面输出。 NVIDIA NIM 微服务可简化 AI 基础模型 在语言、 计算机视觉 、语音、生物学等领域的安全可靠部署。 NIM 微服务可随时随地部署在 NVIDIA 加速基础设施上,并提供行业标准 API,以快速集成应用和热门 AI 开发框架 (包括 LangChain 和 LlamaIndex)。 本文将帮助您着手构建基于 视觉语言模型 (VLM)的多模态信息检索系统,该系统能够回答涉及文本、图像和表格的复杂查询。我们将引导您使用 LangGraph 部署应用程序、先进的 llama-3.2-90b-vision-instruct VLM、经过优化的 mistral-small-24B-instruct 大语言模型(LLM),以及用于部署的 NVIDIA NIM。 与传统方法相比,这种构建简单信息检索系统的方法具有许多优势。最新的 VLM NIM 微服务可在不牺牲一致性的情况下处理冗长而复杂的视觉文档,从而增强上下文理解。通过集成 LangChain 的工具调用 ,系统能够创建工具,动态选择和使用外部工具,并提高从各种来源提取和解释数据的精度。 此系统适用于企业应用,因为它生成结构化输出,确保响应的一致性和可靠性。有关此系统的实施步骤的更多信息,请参阅 /NVIDIA/GenerativeAIExamples Github 仓库。 简单的 HTML 多模态检索工作流 该系统由以下管道组成: 文档提取和预处理:在图像上运行 VLM 并将其转换为文本。 问答:允许用户提出系统问题。 这两个工作流均集成了 NVIDIA NIM 和 LangGraph,可有效处理和理解文本、图像、复杂的可视化效果和表格。 数据提取和预处理 pipeline 此阶段会解析文档,分别处理文本、图像和表格。首先将表格转换为图像,然后由 NVIDIA 托管的 NIM 微服务 API 端点为 llama-3.2-90b-vision-instruct VLM 处理图像,以生成描述性文本。 接下来,在文档重建步骤中,描述性文本将与文档的原始文本合并,然后由具有长上下文建模功能的 LLM 进行汇总。在此实施中,还可将 Llama-3.2-90b-vision-instruct 用作 LLM,不过也可部署其他 LLM(例如 mistral-small-24b-instruct)。 最后,完整的文本、摘要、图像及其说明将存储在 NoSQL 数据库中,以及唯一的文档标识符。 图 1. 数据提取和预处理管道 采用长上下文建模的 LLMs 可以处理整个文档,而不会出现碎片,从而在单个通道中增强对文档的理解,并捕获更长的文本跨度中的关系和细微差别,从而实现更准确的信息检索。 相比之下,传统模型可能会处理多达数千个 tokens 的输入,因此需要将冗长的文档拆分成较小的块,以适应模型的上下文窗口。这种分块过程会破坏一致性和上下文,使准确检索相关信息并对其进行排名变得更加困难。 但是,长上下文建模会带来与可扩展性和成本相关的挑战,在以更高的准确性进行权衡时必须考虑这些挑战。 QA 管道 所有文档摘要及其标识符都会编译成一个大型提示。发送查询时,使用长上下文建模(本例中为 mistral-small-24b-instruct)的 LLM 会处理问题,评估每个摘要与查询的相关性,并返回最相关文档的标识符。 图 2、问题回答管道 接下来,将最相关的文档输入到具有长上下文 (mistral-small-24b-instruct) 的 LLM 中。模型会根据文本内容生成查询答案。如果模型根据描述性文本识别出图像可能包含相关信息,则会触发另一个步骤:将原始图像和用户的问题发送至 VLM (llama-3.2-90b-vision-instruct),VLM 可以根据实际视觉内容提供答案。 最后,该系统将文本和视觉见解相结合,以提供全面的答案。 结构化输出可确保模型返回的数据符合预定义的格式,从而更轻松地提取特定信息并执行后续运算。相比之下,非结构化或可变输出会在解析模型的响应时引入模糊性和困难,从而阻碍自动化以及与其他系统的集成。 从模型生成结构化数据通常需要精心设计的提示,以指导模型以特定格式(例如 JSON)做出响应。但是,由于模型自然倾向于生成自由格式文本,因此确保一致性遵循此结构可能具有挑战性。 NVIDIA NIM 现在原生支持生成结构化输出的功能。这意味着,您可以依靠内置功能来确保模型的响应格式保持一致,从而减少对复杂提示工程的需求。 将 NVIDIA NIM 与 LangChain 集成 NVIDIA NIM 为您的应用提供与热门框架和最新 AI 模型的无缝兼容性。该流程的实施将 NVIDIA NIM 与 LangChain 相集成。LangChain 是一个用于构建代理应用以确定控制流的框架,已被开发者社区广泛采用。为编排此工作流的工作流,该图形主要由两个节点组成: 助理节点:充当负责管理逻辑和决策过程的代理。它与用户的输入进行交互,并调用必要的工具。 工具节点:用于执行助手所需特定任务的工具集合。 图 3、使用 LangGraph 为管道构建代理 助理节点 助手节点是根据图 3 中概述的工作流程运行的主代理。主代理的代码可在 /NVIDIA/GenerativeAIExamples GitHub repo 中找到。 智能体输入如下: Collection_name :要在其上搜索的文档集。 Question :用户的问题。 document_id :(可选) 如果提供,代理会跳过文档排名阶段。 这是智能体流程: 文档选择 :如果未提供 document_id ,代理会调用 find_best_document_id 工具,该工具可在指定集合中识别与用户问题最相关的文档。 问题回答:对于 document_id ,代理使用 query_document 工具。此工具会尝试使用 LLM (mistral-small-24b-instruct) 根据文档中的文本和图像描述来回答此问题。 图像分析 (如有必要):如果 query_document 工具表明答案可能在图像中 (通过返回 image_hash 值),代理会调用 query_image 工具。此工具会检索实际图像,并使用 VLM 分析图像并寻找答案。 工具节点 我们为智能体实施了三个关键工具来执行任务。 Find_best_document_id :在未提供 document_id 时,确定与用户问题最相关的文档。有关更多信息,请参阅 /NVIDIA/GenerativeAIExamples Github 存储库。 query_document :在指定文档中搜索答案。如果答案可能在图像中,则会提供查询图像所需的详细信息。有关更多信息,请参阅 /NVIDIA/GenerativeAIExamples GitHub 存储库。 query_image :当答案可能在图像内容中时,使用 VLM 分析实际图像。有关更多信息,请参阅/ NVIDIA/GenerativeAIExamples 。 将外部工具与模型绑定 工具调用是一项功能,可让语言模型根据收到的提示集成外部工具或函数并与之交互。此机制使模型能够决定使用哪些工具以及如何使用这些工具来完成特定任务。 工具绑定使模型能够动态扩展其功能,在执行期间选择合适的工具,以提供更准确的上下文感知响应。 绑定外部工具在代理框架中尤为重要,在这种框架中,代理必须选择合适的工具并提供有效执行任务所需的参数。绑定外部工具的优势包括: 扩展功能 :模型可以执行计算、数据检索或 API 调用等复杂操作,而不仅仅是文本生成。 动态工具选择 :模型可以实时评估哪些工具最适合任务,从而提高效率和相关性。 无缝集成:NVIDIA NIM 支持将 LangChain 和 LangGraph 等外部工具与 Llama 3.3 等开放式社区模型集成。您可以采用这些高级功能,而无需对现有系统进行重大更改。 在此实现中,使用 LangChain 的 @tool 装饰器创建三个工具,然后使用 .bind_tools 
方法将这些工具与模型绑定。 使用 PyTorch 定义结构化输出 通过使用 Pydantic 定义输出模式,并通过精确的提示引导 LLM NIM 微服务 (例如 mistral-small-24b-instruct) ,您可以确保响应一致、可靠,并且易于被系统中的其他组件使用。当将 LLM 集成到自动化工作流和基于代理的框架 (例如 LangChain) 时,这种方法至关重要。 定义结构 首先,使用 Pydantic 定义 LLM 的预期输出结构。这可确保模型返回的数据保持一致,并可轻松解析以进行下游处理。 from typing import List, Optional from pydantic import BaseModel, Field class Document(BaseModel): """ Represents a document with an identifier and its summary. """ id: str = Field(..., description="Hash identifier of the document") summary: str = Field(..., description="The summary of the document as is") class BestDocuments(BaseModel): """ Contains a list of the best documents to answer the question and their summaries. """ documents: List[Document] = Field(..., description="List of best documents") class Answer(BaseModel): """ Represents the answer to the user's question. """ answer: str = Field(..., description="Answer to the question posed by the user") 接下来,指示 LLM 生成与定义的 Pydantic 结构保持一致的输出。这是通过在提示符中加入特定指令并使用 LangChain 的 with_structured_output 方法实现的。 定义提示 prompt_document_expert 包含 LLM 的详细说明,可指定预期的输入格式 (带有文档摘要的 Markdown) 和所需的输出格式 (与 BestDocuments 架构匹配的 JSON)。 from langchain.chat_models import ChatNVIDIA from langchain.prompts import ChatPromptTemplate # Initialize the LLM with desired parameters llm = ChatNVIDIA(model="mistralai/mistral-small-24b-instruct ", temperature=0, max_tokens=3000) # Define the prompt template for the document expert prompt_document_expert = ChatPromptTemplate.from_messages( [ ( "system", f""" # Extract Best Document Identifier from list of summaries, based on a question coming from the user. You are an expert in getting insights of a document, based on its summaries and you are able to figure the best matches to the question in terms of the summary of the document. Provide no more than 3 of these documents. ## Format of the Input - The input is a markdown file containing second level headers (##) with the chapter index in the form ## Document <document_id> where document_id is an integer pointing to the index of the document. After the document heading there is the summary of the document which is relevant to understand the content of the document. ## Format of the output - The output is going to be the list of the best documents indices and a few of the corresponding summaries that help to answer the question coming from the user. 
## Content - Here is the input you can work on: {{documents_context}} """, ), ( "human", "Can you tell me what are the most relevant document ids for this question: {question}" ), ("human", "Tip: Make sure to answer in the correct format"), ] ) 准备上下文 get_context 函数通过检索文档摘要并对其进行适当格式化来准备输入数据。 def get_context(input_data: dict) -> dict: collection_name = input_data.get("collection_name") question = input_data.get("question") documents_context = get_document_summaries_markdown(collection_name) # print(context) return {"documents_context": documents_context, "collection_name": collection_name, "question": question} 绑定结构化输出 llm.with_structured_output(BestDocuments) 方法指示 LLM 生成符合 BestDocuments Pydantic 模型的输出。此方法在内部处理 LLM 响应的解析和验证,确保输出与预期结构相匹配。 LangChain 的 with_structured_output 方法简化了绑定模型以生成结构化输出的过程。它抽象化了解析和验证 LLM 响应的复杂性,使您能够专注于定义所需的输出结构和提示指令。 最后,创建一个链来处理输入并生成结构化输出: chain_document_expert = ( RunnableLambda(get_context) | prompt_document_expert | llm.with_structured_output(BestDocuments) | (lambda x: x.dict()) ) 端到端工具的实际应用 要开始使用多模态检索系统,请克隆 /NVIDIA/GenerativeAIExamples GitHub 存储库,然后按照快速入门指南设置服务。在服务启动并运行时,打开 Web 浏览器并导航至 http://localhost:7860 ,通过 Gradio 用户界面访问系统。 例如,在 NVIDIA 技术博客上探索系统如何处理查询。在其中一篇博文中,您可以询问有关显示 NVIDIA H100 GPU 性能的条形图的问题。“ Select Question ” 字段用于评估,真值答案字段值由人类提供。 图 4、Agent 多文档评估 该系统会根据条形图生成准确的答案,并显示相关图像以供参考,例如图表显示 RetinaNet 达到了 54%。这可确保准确的答案,同时使用户能够以直观方式验证引用数据。 图 5、Agent 结果与用于验证的源图形 视频1. 如何使用 NVIDIA NIM 将 HTML 文档插入多模态检索器集合 视频2. 如何使用 NVIDIA NIM 在多模态检索器集合中搜索文本和图像 挑战和解决方案 随着数据量的增加,处理和检索相关信息的复杂性也随之增加。高效处理大型数据集对于保持性能和确保用户满意度至关重要。在此信息检索系统中,文档摘要的数量甚至可能超过长上下文模型的上下文窗口,这使得在单个提示中处理所有摘要具有挑战性。 处理大量数据还需要大量计算资源,这可能会导致成本增加和延迟增加。优化资源利用率对于提供快速准确的响应,同时最大限度地减少不必要的支出至关重要。 分层文档重新排序解决方案 为应对可扩展性挑战,我们在初始文档重新排序阶段实施了分层方法。我们不会同时处理所有文档摘要,而是将其分为可管理的批量,以适应模型的上下文窗口。此过程涉及多个阶段: 批量处理 :将摘要分组为模型可以处理的批量,且不会超过提示大小限制。 中级重新排序 :模型分别评估每个批次,对每个组中的文档进行排序。 选择最优秀的候选文档 :从每个批次中选择最相关的文档,以进入下一阶段。 最终重新排名 :系统会对所有批次中排名靠前的候选文档进行合并和重新评估,以确定相关性最高的文档。 考虑到可扩展性和成本问题,这种分层方法可确保在不超出模型容量的情况下考虑所有文档。它不仅提高了可扩展性,而且还通过系统缩小候选文档的范围来提高效率,直到识别出最相关的文档。 小型模型的未来前景 使用语言模型,尤其是具有长上下文功能的语言模型,涉及处理大量 token,而这可能会产生巨大的成本。处理的每个 token 都会增加总支出,因此在大规模部署这些系统时,成本管理是一个重要考虑因素。 对成本的担心确实是站得住脚的。然而,语言模型的格局正在迅速演变,小型模型的功能和效率也在不断提升。随着这些进步的继续,这些较小的模型可能以远低于成本提供相似的性能。 结束语 本文讨论了如何使用 NVIDIA NIM 和 LangChain 实现简单的多模态信息检索工作流。与现有的信息检索方法相比,Pipeline 具有以下优势: 增强对文档的理解 用于从图像、表格和文本中提取信息的多模态模型 无缝集成外部工具 生成一致的结构化输出 借助 NVIDIA NIM 和 LangGraph,您可以在此基础上进行构建并对其进行定制,以满足特定需求。首先,您可以在 /NVIDIA/GenerativeAIExamples GitHub repo 中找到源代码。 NVIDIA NIM 还支持访问更多针对 NVIDIA GPU 优化的模型。您可以探索 NVIDIA NeMo ,这是一个可扩展的生成式 AI 框架,专为研究 LLM、多模态模型等的研究人员和 PyTorch 开发者而设计。 如果您正在处理大型企业数据语料库,并希望开发企业就绪的实时多语种和跨语言信息检索系统来生成上下文感知响应,请详细了解 NVIDIA NeMo Retriever 。
https://developer.nvidia.com/blog/optimizing-qwen2-5-coder-throughput-with-nvidia-tensorrt-llm-lookahead-decoding/
Optimizing Qwen2.5-Coder Throughput with NVIDIA TensorRT-LLM Lookahead Decoding
Large language models (LLMs) that specialize in coding have been steadily adopted into developer workflows. From pair programming to self-improving AI agents, these models assist developers with various tasks, including enhancing code, fixing bugs, generating tests, and writing documentation. To promote the development of open-source LLMs, the Qwen team recently released Qwen2.5-Coder, a family of advanced LLMs for code generation, reasoning, and fixing across popular programming languages. This post explores the benefits of inference optimizations for Qwen2.5-Coder models supported in NVIDIA TensorRT-LLM, and the ease of deployment with NVIDIA NIM. Qwen2.5-Coder models The Qwen2.5-Coder models have achieved state-of-the-art performance across popular academic benchmarks. NVIDIA TensorRT-LLM has optimized three popular models from the Qwen2.5-Coder family (the 1.5B, 7B, and 32B versions) for high throughput and low latency. TensorRT-LLM is a library for fast, efficient LLM inference and includes optimizations such as dynamic inflight batching, KV caching, KV cache reuse, and several speculative decoding techniques, among others. These optimizations help deliver performance improvements for the Qwen2.5-Coder models on popular programming languages such as Python, C++, Java, Bash, JavaScript, TypeScript, and Go, reaching a wider range of developers. This post explores the lookahead decoding optimization and the performance boost it helps achieve. Without any additional training or need for additional draft models, developers can leverage the TensorRT-LLM high-level API to speed up Qwen2.5-Coder inference for multiline code completion. Lookahead decoding Lookahead decoding is a speculative decoding technique that addresses the slow autoregressive nature of LLMs. Each autoregressive decoding step only generates one token at a time, not leveraging the massive parallel processing power of NVIDIA GPUs, leading to low GPU utilization and lower throughput. We've previously discussed the throughput boost achievable with draft target speculative decoding, and here we discuss the benefits of the TensorRT-LLM lookahead decoding implementation, using the Qwen2.5-Coder models as an example. Unlike the single-token generation in autoregressive decoding, lookahead decoding generates multiple tokens simultaneously, making better use of the parallel processing capabilities of the GPU and spending computation (FLOPs) to reduce latency. Moreover, lookahead decoding doesn't require the separate draft model that draft target speculative decoding needs. Each decoding step is divided into two parallel branches: the lookahead branch and the verification branch. Using the Jacobi iteration method, a classic nonlinear systems solver, the lookahead branch performs parallel decoding for future tokens by generating n-grams. The verification branch selects and verifies the promising n-gram candidates generated by the lookahead branch. The lookahead algorithm is configured using three key parameters: window size (W), n-gram size (N), and verification set size (G). Window size (W): Represents the lookahead window size, which determines how many future tokens the algorithm attempts to predict in each step. A larger window size enables the model to look further, helping generate more tokens in a single pass. This effectively improves throughput performance while utilizing GPU computation FLOPs efficiently.
N-gram size (N): Represents the size of the n-grams used in the lookahead process. For example, a 5-gram is a contiguous sequence of 5 future tokens. Together with the window size, it creates a fixed-sized, 2D window for the lookahead branch to generate n-grams from the Jacobi iteration trajectory. Verification set size (G): Represents the maximum number of speculations or candidate n-grams that the algorithm considers in each step for verification. It balances the trade-off between computation efficiency and exploring more possibilities. Figure 1. Lookahead decoding workflow with (W, N, G) = (5, 3, 2). Image credit: Break the Sequential Dependency of LLM Inference Using Lookahead Decoding Lookahead performance greatly depends on the base model, hardware, batch size, sequence length, and the dataset. It is recommended to profile various configurations to find the best (W, N, G) configuration given the setup. Optimal (W, N, G) tuple configuration enables lookahead decoding to deliver improved throughput performance without the need for any additional training, fine-tuning or draft models. Through our experiments on (W, N, G) configuration values sweep, we achieve 3.6x and 1.6x throughput speedups for Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models, respectively. These speedups are measured in throughput (tokens/second) compared to baseline (no lookahead speculative decoding) on NVIDIA H100 Tensor Core GPUs , as shown in Figure 2. Figure 2. Qwen2.5-Coder models throughput boost on NVIDIA DGX H100 with TensorRT-LLM lookahead decoding Data measured on 01/30/2025. Inference throughput (output tokens/second) speedups of Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models. DGX H100, TP=1 | (W, N, G) = (8, 8, 8) | Qwen2.5-Coder 7B Instruct, TP=2 | (W, N, G) = (15, 15, 15) | Qwen2.5-Coder-32B-Instruct, batch size=1, TensorRT-LLM version 0.15.0​. Similar throughput speedups are achieved on NVIDIA H200 Tensor Core GPUs . With their higher memory bandwidth, they also help raise the baseline throughput performance leading to slightly lower speedups as compared to H100 GPUs (Figure 3). Figure 3. Qwen2.5-Coder models throughput boost on NVIDIA DGX H200 with TensorRT-LLM lookahead decoding Data measured on 01/30/2025. Inference throughput (output tokens/second) speedups of Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models. DGX H200, TP=1 | (W, N, G) = (8, 8, 8) | Qwen2.5-Coder 7B Instruct, TP=2 | (W, N, G) = (15, 15, 15) | Qwen2.5-Coder 32B Instruct, batch size=1, TensorRT-LLM version 0.15.0​. Steps to run lookahead decoding with TensorRT-LLM To reproduce these performance gains using lookahead speculative decoding within TensorRT-LLM, follow the steps below. # Install TensorRT-LLM. (Commands below are for Linux. Refer to TensorRT-LLM docs for Windows) sudo apt-get -y install libopenmpi-dev && pip3 install --upgrade setuptools && pip3 install tensorrt_llm --extra-index-url https://pypi.nvidia.com Then run lookahead decoding in TensorRT-LLM using the high-level API. # Command for Qwen2.5-Coder-7B-Instruct from tensorrt_llm import LLM, SamplingParams from tensorrt_llm.llmapi import (LLM, BuildConfig, KvCacheConfig, LookaheadDecodingConfig, SamplingParams) def main(): """The end user can customize the build configuration with the build_config class. # Max draft length is based on (W,N,G) values and calculated as: (W + G -1) * (N-1) + ( N<=1 ? 
0: N-2)""" build_config = BuildConfig(max_batch_size = 128, max_input_len = 2048, max_seq_len = 4096, max_num_tokens = 16384, max_draft_len = 111) build_config.plugin_config.reduce_fusion = True build_config.plugin_config.use_paged_context_fmha = True build_config.plugin_config.multiple_profiles = True # The configuration for lookahead decoding lookahead_config = LookaheadDecodingConfig(max_window_size=8, max_ngram_size=8, max_verification_set_size=8) kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.4) llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct", kv_cache_config=kv_cache_config, build_config=build_config, speculative_config=lookahead_config) prompt = """Write a C++ program to find the nth Fibonacci number using recursion. Now we define a sequence of numbers in which each number is the sum of the three preceding ones. The first three numbers are 0, -1, -1. Write a program to find the nth number.""" sampling_params = SamplingParams(lookahead_config=lookahead_config) output = llm.generate(prompt, sampling_params=sampling_params) print(output) if __name__ == '__main__': main() Summary Lookahead speculative decoding enables throughput boost on LLMs without any additional training, fine-tuning, or draft models. We presented benchmarked performance improvements on Qwen2.5-Coder models. Visit build.nvidia.com to try the Qwen2.5-Coder models optimized with NVIDIA TensorRT-LLM for free. Qwen2.5-Coder models optimized with TensorRT-LLM have also been packaged as downloadable NVIDIA NIM microservices for ease of deployment. Acknowledgments We would like to thank Liwei Ma, Fanrong Li, Nikita Korobov, and Martin Marciniszyn Mehringer  for their efforts in supporting this post.
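For reference, the max_draft_len value in the build configuration above follows directly from the (W, N, G) formula quoted in the code comment. The small helper below is not part of TensorRT-LLM; it is only a sketch that makes the arithmetic explicit so you can adapt the configuration to other (W, N, G) choices.

def lookahead_max_draft_len(w: int, n: int, g: int) -> int:
    # (W + G - 1) * (N - 1) + (0 if N <= 1 else N - 2), as in the comment in the example above
    return (w + g - 1) * (n - 1) + (0 if n <= 1 else n - 2)

print(lookahead_max_draft_len(8, 8, 8))    # 111, the max_draft_len used with (W, N, G) = (8, 8, 8)
print(lookahead_max_draft_len(15, 15, 15)) # 419, what the formula gives for a (15, 15, 15) configuration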
https://developer.nvidia.com/zh-cn/blog/optimizing-qwen2-5-coder-throughput-with-nvidia-tensorrt-llm-lookahead-decoding/
使用 NVIDIA TensorRT-LLM 前瞻性解码优化 Qwen2.5-Coder 吞吐量
专注于编码的大语言模型 (LLM) 已稳步应用于开发者工作流程。从结对编程到自我改进的 AI 智能体,这些模型可帮助开发者完成各种任务,包括增强代码、修复错误、生成测试和编写文档。为促进开源 LLM 的开发,Qwen 团队最近发布了 Qwen2.5-Coder,这是一系列先进的 LLM,用于跨热门编程语言的代码生成、推理和修复。本文将探讨针对 NVIDIA TensorRT-LLM 支持的 Qwen2.5-Coder 模型进行推理优化的优势,以及借助 NVIDIA NIM 轻松部署以提升变革潜力和编码效率的好处。

Qwen2.5-Coder 模型
Qwen2.5-Coder 模型在热门的学术基准测试中取得了出色的性能。NVIDIA TensorRT-LLM 已对 Qwen2.5-Coder 系列的三种热门模型 (1.5B、7B 和 32B 版本) 进行优化,以实现高吞吐量和低延迟。TensorRT-LLM 是一个用于快速、高效 LLM 推理的库,包含动态批处理 (in-flight batching)、KV 缓存、KV 缓存重复使用和几种预测性解码技术等优化功能。这些优化有助于提高 Qwen2.5-Coder 模型在 Python、C++、Java、Bash、Javascript、TypeScript 和 Go 等热门编程语言中的性能,从而使更多开发者受益。本文将探讨前瞻性解码 (lookahead decoding) 优化及其带来的性能提升。开发者无需进行任何额外训练,也无需额外的草稿模型,即可利用 TensorRT-LLM 高级 API 加速 Qwen2.5-Coder 推理,以生成多行自动代码补全。

前瞻性解码
前瞻性解码是一种预测性解码技术,可解决 LLM 自回归生成缓慢的问题。每个自回归解码步骤一次仅生成一个 token,无法利用 NVIDIA GPU 强大的并行处理能力,导致 GPU 利用率低、吞吐量低。我们之前讨论过草稿目标 (draft-target) 预测性解码可实现的吞吐量提升;在这里,我们以 Qwen2.5-Coder 模型为例,讨论利用 TensorRT-LLM 前瞻性解码所能实现的优势。

与自回归解码中的单令牌生成不同,前瞻性解码可同时生成多个令牌,充分利用 GPU 的并行处理能力,以计算量 (FLOPs) 换取更低的延迟。此外,与草稿目标预测性解码不同,前瞻性解码不需要单独的草稿模型。

每个解码步骤分为两个并行分支,即前瞻 (lookahead) 分支和验证分支。前瞻分支使用经典的非线性系统求解器 Jacobi 迭代法,通过生成 n-gram 对未来的 token 执行并行解码。验证分支选择并验证由前瞻分支生成的有前景的 n-gram 候选项。

前瞻性算法使用三个关键参数进行配置:窗口大小 (W)、n-gram 大小 (N) 和验证集大小 (G)。
窗口大小 (W):表示前瞻性窗口大小,它决定了算法在每个步骤中尝试预测的未来令牌数量。窗口大小越大,模型的视野越广,一次传递就能生成更多 token。这可有效提高吞吐量性能,同时高效利用 GPU 计算 FLOPs。
N-gram 大小 (N):表示前瞻性流程中使用的 n-gram 的大小。例如,5-gram 是由 5 个未来令牌组成的连续序列。它与窗口大小一起为前瞻分支创建了一个大小固定的 2D 窗口,以便从 Jacobi 迭代轨迹生成 n-gram。
验证集大小 (G):表示算法在每个验证步骤中考虑的推测或候选 n-gram 的最大数量。它平衡了计算效率与探索更多可能性之间的权衡。

图 1、使用 (W,N,G) = (5,3,2) 的前瞻性解码工作流程。图片来源: Break the Sequential Dependency of LLM Inference Using Lookahead Decoding

前瞻性解码的性能在很大程度上取决于基础模型、硬件、批量大小、序列长度和数据集。建议分析各种配置,以找到给定设置下的最佳 (W,N,G) 配置。使用最佳 (W,N,G) 元组配置的前瞻性解码无需任何额外训练、微调或草稿模型,即可提供更高的吞吐量性能。

通过对 (W,N,G) 配置值进行扫描实验,我们分别为 Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型实现了 3.6 倍和 1.6 倍的吞吐量加速。这些加速是通过 NVIDIA H100 Tensor Core GPU 上的吞吐量 (tokens/second) 与基线 (无前瞻性预测解码) 的比较进行测量的,如图 2 所示。

图 2、借助 TensorRT-LLM 前瞻性解码,Qwen2.5-Coder 模型可提升 NVIDIA DGX H100 上的吞吐量
数据测量日期:2025 年 1 月 30 日。Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型的推理吞吐量(输出令牌/秒)加速。DGX H100,TP=1 | (W,N,G)= (8,8,8)| Qwen2.5-Coder 7B Instruct,TP=2 | (W,N,G)= (15,15,15)| Qwen2.5-Coder-32B-Instruct,批量大小=1,TensorRT-LLM 版本 0.15.0。

NVIDIA H200 Tensor Core GPU 也实现了类似的吞吐量加速。凭借更高的显存带宽,H200 还能提高基线吞吐量性能,因此相对于基线的加速比略低于 H100 GPU (图 3)。

图 3、Qwen2.5-Coder 模型在 NVIDIA DGX H200 上通过 TensorRT-LLM 前瞻性解码实现吞吐量提升
数据测量日期:2025 年 1 月 30 日。Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型的推理吞吐量(输出令牌/秒)加速。DGX H200,TP=1 | (W,N,G)= (8,8,8)| Qwen2.5-Coder 7B Instruct,TP=2 | (W,N,G)= (15,15,15)| Qwen2.5-Coder 32B Instruct,批量大小=1,TensorRT-LLM 版本 0.15.0。

使用 TensorRT-LLM 运行前瞻性解码的步骤
要在 TensorRT-LLM 中使用前瞻性解码重现这些性能提升,请执行以下步骤。

# Install TensorRT-LLM. (Commands below are for Linux. Refer to TensorRT-LLM docs for Windows)
sudo apt-get -y install libopenmpi-dev && pip3 install --upgrade setuptools && pip3 install tensorrt_llm --extra-index-url https://pypi.nvidia.com

然后,使用高级 API 在 TensorRT-LLM 中运行前瞻性解码。

# Command for Qwen2.5-Coder-7B-Instruct
from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.llmapi import (LLM, BuildConfig, KvCacheConfig,
                                 LookaheadDecodingConfig, SamplingParams)

def main():
    """The end user can customize the build configuration with the build_config class.
    Max draft length is based on (W,N,G) values and calculated as:
    (W + G - 1) * (N - 1) + (N <= 1 ? 0 : N - 2)"""
    build_config = BuildConfig(max_batch_size=128,
                               max_input_len=2048,
                               max_seq_len=4096,
                               max_num_tokens=16384,
                               max_draft_len=111)
    build_config.plugin_config.reduce_fusion = True
    build_config.plugin_config.use_paged_context_fmha = True
    build_config.plugin_config.multiple_profiles = True

    # The configuration for lookahead decoding
    lookahead_config = LookaheadDecodingConfig(max_window_size=8,
                                               max_ngram_size=8,
                                               max_verification_set_size=8)
    kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.4)

    llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct",
              kv_cache_config=kv_cache_config,
              build_config=build_config,
              speculative_config=lookahead_config)

    prompt = """Write a C++ program to find the nth Fibonacci number using recursion. Now we define a sequence of numbers in which each number is the sum of the three preceding ones. The first three numbers are 0, -1, -1. Write a program to find the nth number."""

    sampling_params = SamplingParams(lookahead_config=lookahead_config)

    output = llm.generate(prompt, sampling_params=sampling_params)
    print(output)

if __name__ == '__main__':
    main()

总结
前瞻性解码可提高 LLM 的吞吐量,而无需任何额外训练、微调或草稿模型。我们展示了 Qwen2.5-Coder 模型的基准性能改进。访问 build.nvidia.com,免费试用通过 NVIDIA TensorRT-LLM 优化的 Qwen2.5-Coder 模型。为便于部署,我们还将通过 TensorRT-LLM 优化的 Qwen2.5-Coder 模型打包为可下载的 NVIDIA NIM 微服务。

致谢
在此,我们要感谢马立伟、李凡融、Nikita Korobov 和 Martin Marciniszyn Mehringer 为支持这篇博文所付出的努力。
https://developer.nvidia.com/blog/ai-for-climate-energy-and-ecosystem-resilience-at-nvidia-gtc-2025/
AI for Climate, Energy, and Ecosystem Resilience at NVIDIA GTC 2025
From mitigating climate change to improving disaster response and environmental monitoring, AI is reshaping how we tackle critical global challenges. Advancements in fast, high-resolution climate forecasting, real-time monitoring, and digital twins are equipping scientists, policy-makers, and industry leaders with data-driven tools to understand, plan for, and respond to a warming planet. At NVIDIA GTC 2025 , running March 17–21, thought leaders, scientists, developers, and innovators will highlight how AI is helping shape a more sustainable and resilient future. The following sessions showcase AI’s role in climate forecasting, disaster mitigation, and conservation efforts, helping communities adapt to an increasingly unpredictable world. Strengthening extreme weather predictions and disaster response As global warming intensifies, and extreme weather events become more severe and frequent, communities need faster and more precise natural disaster predictions and response strategies. AI is improving flood, wildfire, and hurricane modeling, enabling earlier warnings and more effective mitigation efforts. Using large-scale weather simulations, satellite data analysis, and real-time predictive insights, AI is helping emergency responders and decision-makers minimize damage, improve disaster resilience, and keep communities safe. Sessions Transform Natural Catastrophe Risk Simulations with Advanced Computational Tools AXA, AWS, and NVIDIA use Earth-2 simulations to model extreme weather events at unprecedented scale and precision. These tools help insurers, urban planners, and governments prepare for disasters by improving risk estimation and resilience planning, reducing the financial and societal impact of natural disasters. Boosting Earth System Model Outputs Using Exascale Climate Emulators Researchers at KAUST and Saint Louis University use exascale climate emulators powered by NVIDIA GPUs to accelerate and refine earth system model outputs. Achieving ultra-high spatial resolution (3.5 km), these models enable more accurate storm and climate simulations, improving extreme weather predictions, and helping emergency responders, insurers, and policymakers improve disaster response planning and climate resilience efforts. Harnessing AI for Advanced Flood Risk Modelling and Mitigation Strategies JBA Risk Management uses AI-driven weather models, including Spherical Fourier Neural Operators, to simulate storm seasons with greater accuracy. Using NVIDIA NIM, these models improve flood risk assessment, disaster response planning, and infrastructure investment decisions, all while reducing computational costs. Near-Real-Time Fire Detection Leveraging Edge AI in Space: Transforming Thermal Earth Observation with NVIDIA Wildfires require rapid response to minimize devastation. OroraTech’s use of NVIDIA Jetson technology onboard CubeSats delivers fire detection within 60 seconds, providing first responders with critical real-time data to deploy resources more effectively. Autonomous Systems and Remote Sensing for Better Earth Data Autonomous aircraft equipped with AI are revolutionizing environmental monitoring by collecting high-resolution data at scale. MIT researchers showcase how these low-cost, long-range systems gather critical data for precision agriculture, disaster response, and environmental assessments, providing actionable insights at scale. 
Boosting climate forecasting for energy and infrastructure planning Accurate, long-term climate forecasting is essential for guiding infrastructure investments, optimizing energy grids, and enhancing sustainability planning. AI-powered models make climate projections faster and more precise, guiding solar energy integration, climate-resilient infrastructure development, and sustainability strategies. These high-resolution, data-driven insights help city planners and decision-makers anticipate evolving conditions for a more resilient future. Sessions The Role of AI and Accelerated Computing in Understanding and Mitigating Urban Climate Change Researchers are using AI, digital twins, and accelerated computing to model rising temperatures, air pollution, and extreme weather in cities. This session explores how generative AI and machine learning analyze satellite data, IoT sensors, and social insights to create high-resolution simulations of urban heat islands and pollution patterns. Experts will discuss how these insights help guide climate-resilient infrastructure, energy efficiency, and targeted adaptation strategies while addressing challenges like computational efficiency and equitable access to AI-driven solutions. Enhancing Photovoltaic Power Predictions with High-Resolution Weather Forecasting from NVIDIA Earth-2 NVIDIA Earth-2 is revolutionizing solar energy forecasting with ultra-precise, AI-driven weather predictions. In collaboration with GCL and Peking University, researchers show how time series models and high-resolution weather data improve photovoltaic power forecasts, enhancing grid stability, and improving renewable energy planning for power providers and policymakers. Applying AI Weather Models with NVIDIA Earth-2 NVIDIA Earth-2 AI-powered forecasting models generate high-resolution weather predictions at a fraction of the cost and time of traditional numerical models. This training lab explores how AI-driven downscaling techniques improve forecasting accuracy for industries such as energy and agriculture, providing more accurate forecasting and better decision-making across critical sectors. Advancing AI-driven environmental monitoring and conservation AI is transforming environmental monitoring, conservation, and ecosystem management with advanced digital twin technology and autonomous systems. From high-resolution coral reef modeling to large-scale ecosystem assessments, these innovations provide scientists and conservationists with critical insights to guide conservation strategies and protect biodiversity. Session Exploring Earth’s Oceans: Using Digital Twins to Drive Digital Ocean Collaboration Oceans regulate climate and support biodiversity, but their complexity makes them challenging to study. MITRE uses NVIDIA Omniverse to create digital twins of ocean systems, enabling real-time simulations and predictive modeling. These tools foster collaboration among scientists, policymakers, and educators to improve marine resource management, drive conservation efforts, and bolster climate resilience. In-person posters Photo-Realistic 3D Digital Twin to Enhance Understanding of the Great Barrier Reef ​​AI-powered 3D digital twins are advancing how researchers model and monitor coral reef ecosystems. Using Reef-NeRF and Reef-3DGS, scientists can create highly detailed reconstructions to track coral health, measure structural changes, and assess the impacts of climate change. 
These tools provide conservationists and policymakers with critical data to inform reef recovery strategies and improve long-term conservation efforts Mangrove Simulation Predicts Carbon Sequestration Solutions Mangrove forests are a key solution to carbon capture and climate mitigation, but effective restoration requires precise monitoring and management. ID Water Co., Ltd. is using AI-powered irrigation automation and GPU-driven carbon sink modeling to improve mangrove reforestation efforts. These models improve survival rates, optimize carbon sequestration, and address verification challenges, making large-scale restoration more feasible and impactful. Revolutionizing Antarctic Flora Monitoring with AI and Drones AI-powered drones and hyperspectral imaging are enabling high-precision mapping of Antarctic vegetation. Using NVIDIA GPUs, researchers can detect moss and lichen with over 99% accuracy, providing key insights into climate-driven ecosystem changes while reducing the need for invasive field surveys in this fragile ecosystem. Join our global community of developers, scientists, business leaders, and innovators at NVIDIA GTC 2025 to discover how AI drives solutions to our most complex challenges. From NVIDIA CEO Jensen Huang’s must-see keynote to over 900 sessions, 300+ exhibits, hands-on technical training, and exclusive networking events, GTC offers a firsthand look at AI’s real-world impact. The session catalog is open—start building your agenda today.
https://developer.nvidia.com/zh-cn/blog/ai-for-climate-energy-and-ecosystem-resilience-at-nvidia-gtc-2025/
NVIDIA GTC 2025 上的人工智能促进气候、能源和生态系统复原力
从减缓气候变化到改进灾害响应和环境监测,AI 正在重塑我们应对重大全球挑战的方式。快速、高分辨率的气候预报、实时监控和数字孪生技术的进步为科学家、政策制定者和行业领导者提供了数据驱动的工具,帮助他们了解、规划和应对一个变暖的星球。 在 3 月 17 日至 21 日举行的 NVIDIA GTC 2025 大会上,思想领袖、科学家、开发者和创新者将重点介绍 AI 如何帮助塑造更具可持续性和韧性的未来。以下会议展示了 AI 在气候预测、灾难缓解和保护工作中发挥的作用,帮助社区适应日益不可预测的世界。 加强极端天气预测和灾害响应 随着全球变暖加剧,极端天气事件变得更加严重和频繁,社区需要更快、更精确的自然灾害预测和响应策略。AI 正在改进洪水、野火和飓风建模,从而实现更早的警报和更有效的缓解措施。借助大规模天气模拟、卫星数据分析和实时预测性见解,AI 正在帮助应急响应人员和决策者尽可能减少损失、提高抗灾能力,并确保社区安全。 会议 借助高级计算工具转变自然灾害风险模拟 AXA、AWS 和 NVIDIA 使用 Earth-2 模拟以前所未有的规模和精度对极端天气事件进行建模。这些工具通过改进风险估计和恢复能力规划,减少自然灾害的金融和社会影响,帮助保险公司、城市规划人员和政府做好灾害准备。 使用百亿亿级 (Exascale) 气候模拟器提升地球系统模型的输出 KAUST 和圣路易斯大学的研究人员使用由 NVIDIA GPUs 提供支持的百亿亿级 (Exascale) 气候模拟器来加速和优化地球系统模型的输出。这些模型可实现超高的空间分辨率 (3.5 公里),从而能够更准确地模拟风暴和气候,改进极端天气预测,并帮助应急响应人员、保险公司和政策制定者改进灾害应对规划和气候弹性工作。 将 AI 用于高级洪水风险建模和缓解策略 JBA Risk Management 使用 AI 驱动的天气模型 (包括 Spherical Fourier Neural Operators) 更准确地模拟风暴季。借助 NVIDIA NIM,这些模型可改善洪水风险评估、灾害应对规划和基础设施投资决策,同时降低计算成本。 在太空中利用边缘 AI 进行近乎实时的火灾检测:借助 NVIDIA 改变热地球观测方式 野火需要快速响应,以尽可能减少破坏。OroraTech 在 CubeSats 上使用 NVIDIA Jetson 技术,可在 60 秒内完成火灾检测,从而为急救人员提供关键的实时数据,以便更有效地部署资源。 利用自主系统和遥感获取更好的地球数据 配备 AI 的自主飞机正在大规模收集高分辨率数据,从而彻底改变环境监测。麻省理工学院的研究人员展示了这些低成本的远程系统如何为精准农业、灾害响应和环境评估收集关键数据,并大规模提供可行的见解。 提升气候预测能力以加强能源和基础设施规划 准确的长期气候预测对于指导基础设施投资、优化电网和增强可持续发展规划至关重要。AI 驱动的模型能够更快、更精确地进行气候预测,为太阳能集成、气候弹性基础设施开发和可持续发展策略提供指导。这些由数据驱动的高分辨率见解可帮助城市规划人员和决策者预测不断变化的条件,打造更具弹性的未来。 会议 AI 和加速计算在了解和减缓城市气候变化方面的作用 研究人员正在利用 AI、数字孪生和加速计算对城市中的气温升高、空气污染和极端天气进行建模。此会议将探讨生成式 AI 和机器学习如何分析卫星数据、物联网传感器和社会见解,以创建城市热岛和污染模式的高分辨率模拟。专家们将讨论这些见解如何帮助指导适应气候变化的基础设施、能效和有针对性的适应战略,同时应对计算效率和公平获取 AI 驱动的解决方案等挑战。 借助 NVIDIA Earth-2 的高分辨率天气预报增强太阳能发电预测 NVIDIA Earth-2 通过 AI 驱动的超精确天气预测,正在彻底改变太阳能预测。研究人员与 GCL 和北京大学合作,展示了时间序列模型和高分辨率天气数据如何改善太阳能发电预测、增强电网稳定性,以及如何改善电力供应商和政策制定者的可再生能源规划。 将 AI 天气模型与 NVIDIA Earth-2AI 驱动的预测模型结合使用 ,生成高分辨率天气预测,所需的成本和时间远低于传统数值模型。此训练实验室将探讨 AI 驱动的降比例技术如何提高能源和农业等行业的预测准确性,从而为关键领域提供更准确的预测和更好的决策。 推进 AI 驱动的环境监测和保护 AI 正在利用先进的数字孪生技术和自主系统,改变环境监测、保护和生态系统管理。从高分辨率珊瑚礁建模到大规模生态系统评估,这些创新为科学家和自然保护主义者提供了重要见解,以指导保护策略和保护生物多样性。 会议 探索地球的海洋:使用数字孪生推动数字海洋协作海洋调节气候并支持生物多样性 ,但其复杂性使研究这些海洋具有挑战性。MITRE 使用 NVIDIA Omniverse 创建海洋系统的数字孪生,实现实时模拟和预测建模。这些工具促进了科学家、政策制定者和教育工作者之间的协作,以改善海洋资源管理、推动保护工作,并增强气候恢复能力。 线下海报 逼真的 3D 数字孪生增强对大堡礁的理解 AI 驱动的 3D 数字孪生正在推进研究人员建模和监测珊瑚礁生态系统的方式。借助 Reef-NeRF 和 Reef-3DGS,科学家可以创建高度精细的重建模型,以追踪珊瑚健康状况、测量结构变化并评估气候变化的影响。这些工具为环保人士和政策制定者提供关键数据,以便制定珊瑚礁恢复策略并改进长期保护工作 Mangrove Simulation 预测碳封存解决方案 红树林是碳捕获和气候减缓的关键解决方案,但有效的恢复需要精确的监控和管理。ID Water Co.,Ltd.正在使用由 AI 提供动力支持的喷洒自动化和 GPU 驱动的碳汇建模来改进红树林再造工作。这些模型可提高存活率、优化碳封存并解决验证难题,从而提高大规模修复的可行性和成效。 借助 AI 和无人机革新南极植物监测 AI 赋能的无人机和高光谱成像技术可实现对南极植被的高精度绘图。借助 NVIDIA GPU,研究人员可以以超过 99%的准确率检测和,从而对气候驱动的生态系统变化提供关键见解,同时减少在这个脆弱的生态系统中进行侵入性实地调查的需求。 在 NVIDIA GTC 2025 大会上,加入由开发者、科学家、业务领袖和创新者组成的全球社区,了解 AI 如何为我们面临的复杂挑战提供解决方案。 从 NVIDIA 首席执行官 Jensen Huang 不容错过的主题演讲 ,到 900 多场会议、300 多场展览、实操技术培训和独家交流活动,GTC 让您亲身体验 AI 对现实世界的影响。 会议目录 现已开放,请立即开始构建您的议程。
https://developer.nvidia.com/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/
Automating GPU Kernel Generation with DeepSeek-R1 and Inference Time Scaling
As AI models extend their capabilities to solve more sophisticated challenges, a new scaling law known as test-time scaling or inference-time scaling is emerging. Also known as AI reasoning or long-thinking , this technique improves model performance by allocating additional computational resources during inference to evaluate multiple possible outcomes and then selecting the best one, neural network. This enables AI to strategize and systematically solve complex problems in a similar fashion to how humans dissect complex problems and solve them individually to arrive at a final solution. In this post, we talk about an experiment done by NVIDIA engineers who used one of the newest open-source models, the DeepSeek-R1 model, together with additional computing power during inference to solve a complex problem. The experiment was to automatically generate GPU attention kernels that were numerically correct and optimized for different flavors of attention without any explicit programming. The results turned out to be better than the optimized kernels developed by skilled engineers in some cases. The need for optimized attention kernels and associated challenges Attention is a key concept that revolutionized the development of the large language model (LLM). It’s a powerful mechanism that enables AI models to focus selectively on the most relevant parts of input when performing tasks. By focusing on important information, the attention operation helps the models make better predictions and find hidden patterns in the data. The computational complexity of the attention operation grows quadratically in relation to the input sequence length. This motivates the need for developing an optimized lower-level implementation (that is, a GPU kernel) to prevent runtime errors arising from simple implementations (for example, out-of-memory errors) and for computational efficiency purposes. There are multiple variants of attention (causal, relative positional embeddings, alibi, and so on) and often engineers must use a combination of these variants for a given task. ‌ Multi-modal models (for example, vision transformers) introduce an additional layer of challenges as they require specialized attention mechanisms (Spatial Neighborhood Attention) for maintaining spatio-temporal information often encountered in computer vision, video generation models, and so on. Figure 1. Neighborhood attention on 2D inputs Creating an optimized GPU kernel for attention takes a lot of skill and time, even for experienced software engineers. ‌ Recent LLMs like DeepSeek-R1 have shown a lot of promise in code generation tasks, but they still face challenges creating optimized code on the first try. This makes it necessary to use other strategies at inference time to generate optimized code. The following prompt is sample user input for a relative positional embeddings attention kernel. Please write a GPU attention kernel to support relative position encodings. Implement the relative positional encoding on the fly within the kernel. The complete code should be returned, including the necessary modifications. Use the following function to compute the relative positional encoding: def relative_positional(score, b, h, q_idx, kv_idx):     return score + (q_idx - kv_idx) When implementing the kernel, keep in mind that a constant scaling factor 1.44269504 should be applied to the relative positional encoding due to qk_scale = sm_scale * 1.44269504. 
The PyTorch reference does not need to scale the relative positional encoding, but in the GPU kernel, use: qk = qk * qk_scale + rel_pos * 1.44269504 Please provide the complete updated kernel code that incorporates these changes, ensuring that the relative positional encoding is applied efficiently within the kernel operations. LLMs can occasionally produce hallucinated code or mix syntax from different languages or frameworks, causing immediate code errors or inefficiencies. Computing the optimal GPU thread mapping is also non-trivial and a challenging task, often requiring iterative refinement to achieve a correct and efficient kernel. Inference-time scaling for generating optimized GPU Kernels To get the best results with optimized attention kernels, NVIDIA engineers created a new workflow that includes a special verifier along with the DeepSeek-R1 model during inference in a closed-loop fashion for a predetermined duration. Figure 2. Inference-time scaling with DeepSeek-R1 on the NVIDIA Hopper platform The workflow is first initialized by a manual prompt and the DeepSeek-R1 model generates the GPU code (that is, the kernel) in the first pass. The verifier runs on an NVIDIA H100 GPU. It analyzes the generated kernel and creates new prompts that are provided as ‌input to the DeepSeek-R1 model. This closed-loop approach makes the code generation process better by guiding it in a different way each time. The team found that by letting this process continue for 15 minutes resulted in an improved attention kernel. Figure 3. Performance of automatically generated optimized attention kernels with flex attention This workflow produced numerically correct kernels for 100% of Level-1 problems and 96% of Level-2 problems, as tested by Stanford’s KernelBench benchmark. ‌ The Level-1 solving rate in KernelBench refers to the numerical correct metric used to evaluate the ability of LLMs to generate efficient GPU kernels for specific computational tasks. This test is part of a series of challenges to test the latest LLMs’ abilities in GPU programming. Figure 4 shows how the inference-time budget affects the agent’s solving rate. Allocating more than 10 minutes per problem in the Level-1 category enables the workflow to produce numerical correct code for most of the 100 problems. Figure 4. Inference-time scaling results in optimized GPU kernels Optimized GPU kernels on DeepSeek-R1 These results show how you can use the latest DeepSeek-R1 model to give better GPU kernels by using more computing power during inference time. This is still a new research area with early results on a promising approach that automatically generates effective attention kernels. While we are off to a good start, more work is needed to generate better results consistently for a wider variety of problems. We’re excited about the recent developments in DeepSeek-R1 and its potential. For more information or to get started, see the DeepSeek-R1 NIM microservice , now available on build.nvidia.com .
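The closed-loop workflow described above, a generator model plus a verifier iterating under a fixed time budget, can be summarized in a short control-loop sketch. Everything below is an illustration only: the generate_kernel, verify_kernel, and build_feedback_prompt callables are hypothetical placeholders for the DeepSeek-R1 endpoint, the checks the verifier runs on the H100 GPU, and the prompt-construction step; none of them are a published NVIDIA API.

# Illustration of the closed-loop inference-time-scaling workflow: generate a
# kernel, verify it, fold the verifier's feedback into the next prompt, and keep
# the best verified candidate within a fixed time budget (15 minutes in the post).
import time
from typing import Callable, Optional, Tuple

def refine_kernel(initial_prompt: str,
                  generate_kernel: Callable[[str], str],             # placeholder for the DeepSeek-R1 call
                  verify_kernel: Callable[[str], Tuple[bool, str]],  # placeholder: returns (is_correct, feedback)
                  build_feedback_prompt: Callable[[str, str, str], str],
                  budget_seconds: float = 15 * 60) -> Optional[str]:
    deadline = time.monotonic() + budget_seconds
    prompt = initial_prompt
    best_kernel = None

    while time.monotonic() < deadline:
        candidate = generate_kernel(prompt)              # model proposes a kernel
        is_correct, feedback = verify_kernel(candidate)  # verifier analyzes/runs it on the GPU
        if is_correct:
            best_kernel = candidate                      # keep the latest numerically correct kernel
        # Pass or fail, the verifier's analysis steers the next generation attempt
        prompt = build_feedback_prompt(initial_prompt, candidate, feedback)

    return best_kernel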
https://developer.nvidia.com/zh-cn/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/
使用 DeepSeek-R1 和推理时间缩放实现自动化 GPU 内核生成
随着 AI 模型扩展其功能以解决更复杂的挑战,一种称为“ 测试时扩展 ”或“ 推理时扩展 ”的新扩展法则正在出现。该技术也称为 AI 推理 或 长时思考 技术,通过在推理过程中分配额外的计算资源来评估多种可能的结果,然后选择最佳结果 (neural network),从而提高模型性能。这使得 AI 能够以类似于人类剖析复杂问题并单独解决这些问题以达成最终解决方案的方式,制定战略并系统化地解决复杂问题。 在本文中,我们将介绍 NVIDIA 工程师完成的一项实验,他们在推理过程中使用最新的开源模型之一 DeepSeek-R1 模型以及额外的计算能力来解决复杂的问题。该实验旨在自动生成 GPU 注意力内核,这些内核在数值上是正确的,并针对不同的注意力类型进行了优化,而无需任何显式编程。 事实证明,在某些情况下,最终结果优于由技术精湛的工程师开发的优化内核。 优化注意力内核的需求和相关挑战 注意力是一个关键概念,彻底改变了大语言模型(LLM)的发展。这是一种功能强大的机制,可让 AI 模型在执行任务时,有选择地专注于与输入内容最相关的部分。通过聚焦于重要信息,注意力运算可帮助模型做出更好的预测,并找到数据中隐藏的模式。 相对于输入序列长度,注意力运算的计算复杂性呈正交增长。这促使我们需要开发经过优化的低级实现 (即 GPU 内核),以防止简单实现产生的运行时错误 (例如内存不足的错误),并提高计算效率。 注意力有多种变体 (因果关系、相对位置嵌入、不在场证明等),工程师通常必须将这些变体的组合用于给定任务。 多模态模型 (例如,vision transformers) 带来了额外的一层挑战,因为它们需要专门的注意力机制 (Spatial Neighborhood Attention) 来维护计算机视觉、视频生成模型等领域中经常遇到的时空信息。 图 1. 邻域注意力在 2D 输入上的应用 创建经优化的 GPU 内核以供关注需要大量技能和时间,即使对于经验丰富的软件工程师而言也是如此。 最近的 LLMs(如 DeepSeek-R1)在代码生成任务方面表现出了很大的前景,但在第一次尝试创建优化代码时,它们仍然面临挑战。这使得有必要在推理时使用其他策略来生成优化的代码。 以下提示是用户输入相对位置嵌入注意力核函数的示例。 Please write a GPU attention kernel to support relative position encodings. Implement the relative positional encoding on the fly within the kernel. The complete code should be returned, including the necessary modifications. Use the following function to compute the relative positional encoding: def relative_positional(score, b, h, q_idx, kv_idx): return score + (q_idx - kv_idx) When implementing the kernel, keep in mind that a constant scaling factor 1.44269504 should be applied to the relative positional encoding due to qk_scale = sm_scale * 1.44269504. The PyTorch reference does not need to scale the relative positional encoding, but in the GPU kernel, use: qk = qk * qk_scale + rel_pos * 1.44269504 Please provide the complete updated kernel code that incorporates these changes, ensuring that the relative positional encoding is applied efficiently within the kernel operations. LLM 偶尔会产生来自不同语言或框架的幻影代码或混合语法,从而立即导致代码错误或效率低下。计算最佳 GPU 线程映射也并非易事,也是一项具有挑战性的任务,通常需要进行迭代优化才能实现正确高效的内核。 用于生成经过优化的 GPU 内核的推理时间扩展 为了通过优化的注意力内核获得最佳结果,NVIDIA 工程师创建了一个新的工作流程,其中包括一个特殊的验证器以及 DeepSeek-R1 模型,在预先设定的时间内以闭环方式进行推理。 图 2、在 NVIDIA Hopper 平台上使用 DeepSeek-R1 实现推理时间扩展 该工作流程首先通过手动提示进行初始化,然后 DeepSeek-R1 模型会在首次通道中生成 GPU 代码(即核函数)。该验证器在 NVIDIA H100 GPU 上运行。它会分析生成的核函数并创建新的提示,这些提示以 input 的形式提供给 DeepSeek-R1 模型。 这种闭环方法每次都以不同的方式指导代码生成过程,从而改进代码生成过程。该团队发现,让此过程持续 15 分钟可以改进注意力核函数。 图 3、具有 Flex Attention 的自动生成优化注意力内核的性能 此工作流程为 100%的 1 级问题和 96%的 2 级问题生成了数值正确的内核,测试对象为 斯坦福大学的 KernelBench 基准测试。* KernelBench 中的 1 级求解率是指用于评估 LLM 为特定计算任务生成高效 GPU 内核的能力的数字正确指标。本次测试属于一系列挑战,旨在测试最新 LLM 在 GPU 编程中的能力。 图 4 显示了推理时间预算如何影响智能体的求解率。在 Level-1 类别中为每个问题分配超过 10 分钟的时间,使工作流程能够为 100 个问题中的大多数生成正确的数字代码。 图 4、在优化的 GPU 内核中实现推理 – 时间扩展 DeepSeek-R1 上经过优化的 GPU 内核 这些结果展示了如何使用最新的 DeepSeek-R1 模型,通过在推理期间使用更强大的计算能力来提供更好的 GPU 内核。这仍然是一个新的研究领域,在自动生成有效注意力内核的前景良好的方法方面取得了早期成果。 虽然我们已经有了一个良好的开端,但我们需要做更多的工作,以便为更广泛的问题持续提供更好的结果。我们对 DeepSeek-R1 的最新进展及其潜力感到兴奋。 如需了解更多信息或入门,请参阅 DeepSeek-R1 NIM 微服务 (现已在 build.nvidia.com 上提供)。
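正文提到验证器会检查生成的注意力核函数在数值上是否正确 (KernelBench 的 1 级求解率也以数值正确为标准)。下面是一个示意性的数值正确性检查草图:用一个朴素的 PyTorch 缩放点积注意力作为参考实现,在随机输入上与候选核函数的输出做容差比较。其中 candidate_attention 为假设的占位函数,张量形状与容差均为示意取值,并非文中验证器的真实实现。

# 示意性草图:用 PyTorch 参考实现对候选注意力核函数做数值正确性检查。
# candidate_attention 为假设的占位函数;张量形状与容差均为示意取值。
import torch

def reference_attention(q, k, v):
    # 朴素的缩放点积注意力,作为“标准答案”
    scale = q.shape[-1] ** -0.5
    scores = (q @ k.transpose(-2, -1)) * scale
    return torch.softmax(scores, dim=-1) @ v

def is_numerically_correct(candidate_attention, atol=1e-2, rtol=1e-2, trials=5):
    torch.manual_seed(0)
    for _ in range(trials):
        q, k, v = (torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
                   for _ in range(3))
        expected = reference_attention(q.float(), k.float(), v.float())
        actual = candidate_attention(q, k, v).float()
        if not torch.allclose(actual, expected, atol=atol, rtol=rtol):
            return False
    return True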
https://developer.nvidia.com/blog/ai-foundation-model-enhances-cancer-diagnosis-and-tailors-treatment/
AI Foundation Model Enhances Cancer Diagnosis and Tailors Treatment
A new study and AI model from researchers at Stanford University is streamlining cancer diagnostics, treatment planning, and prognosis prediction. Named MUSK (Multimodal transformer with Unified maSKed modeling), the research aims to advance precision oncology, tailoring treatment plans to each patient based on their unique medical data. “Multimodal foundation models are a new frontier in medical AI research,” said Ruijiang LI , an associate professor of radiation oncology and study senior author. “Recently, vision–language foundation models have been developed for medicine, particularly in the field of pathology. However, existing studies use off-the-shelf foundation models that require paired image–text data for pretraining. Despite extensive efforts that led to the curation of 1M pathology image–text pairs, it’s still insufficient to fully capture the diversity of the entire disease spectrum.” Oncologists rely on many data sources when considering a patient’s condition and planning optimal treatments. However, integrating and interpreting complex medical data remains difficult for doctors and AI models. The study , recently published in Nature , highlights how MUSK could help doctors make more accurate and informed decisions while also solving this long-standing challenge in medical AI. Using deep learning, MUSK processes clinical text data (such as doctor’s notes) and pathology images (like histology slides), to identify patterns that may not be immediately obvious to doctors, leading to better clinical insights. To do so, it uses a two-step multimodal transformer model. First, it learns from large amounts of unpaired data, pulling features from the text and images that are useful. Then it finetunes its understanding of the data by linking paired image-text data, which helps it recognize different types of cancer, predict biomarkers, and suggest effective treatment options. The researchers pretrained the AI model on one of the biggest datasets in the field, using 50M pathology images from 11,577 patients with 33 tumor types and 1B pathology-related text data. According to Jinxi Xiang , study lead author and postdoctoral scholar in radiation physics, the pretraining was conducted over 10 days using 64 NVIDIA V100 Tensor Core GPUs across eight nodes, enabling MUSK to process vast amounts of pathology images and clinical text efficiently. A secondary pretraining phase and ablation studies used NVIDIA A100 80 gb Tensor Core GPUs . The researchers also used NVIDIA RTX A6000 GPUs for evaluating downstream tasks. The framework was accelerated with NVIDIA CUDA and NVIDIA cuDNN libraries, for optimized performance. When tested on 23 pathology benchmarks, MUSK outperformed existing AI models in several key areas. It excelled at matching pathology images with correlating medical text, making it more effective in gathering relevant patient information. It also interpreted pathology-related questions, such as identifying a cancerous area or predicting biomarker presence with 73% accuracy. Figure 1. An example of the visual question-answering MUSK can perform It improved detection and classification for cancer subtypes including breast, lung, and colorectal cancer by up to 10%, which could help with early diagnosis and treatment planning. It also detected ‌breast cancer biomarkers with an AUC (a measure of model accuracy) of 83%. 
Additionally, MUSK reliably predicted cancer survival outcomes 75% of the time, and which lung and gastro-esophageal cancers would respond to immunotherapy with 77% accuracy. This is a significant improvement over standard clinical biomarkers with an accuracy of only 60-65%. “One striking finding is that AI models that integrate multi-modal data consistently outperform those based on imaging or text data alone, highlighting the power of a multimodal approach,” Li said. “The true value of MUSK lies in its ability to leverage large-scale unpaired image and text data for pretraining, which is a substantial increase over existing models that require paired data.” A core strength of the research is that it can adapt across different clinical settings with little training. This could improve efficiency in oncology workflows and help doctors diagnose cancer faster while tailoring treatments for better patient outcomes. Their future work will focus on validating the model in multi-institution cohorts of patients from diverse populations and for high-stakes applications such as treatment decision-making. The researchers note that prospective validation in clinical trials will be required for regulatory approval. “We are also working on an extension of the MUSK approach to digital pathology to other types of data such as radiology images and genomic data,” said Li. The researchers’ work, including installation instructions, model weights, evaluation code, and sample data is available on GitHub .
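For readers who want a concrete picture of the two-step recipe described above, here is a deliberately simplified sketch: a masked-reconstruction step on unpaired data, followed by a contrastive step that aligns paired image-text embeddings. It is not the authors' MUSK implementation; the encoders, masking scheme, tensor shapes, and losses are placeholder assumptions chosen only to illustrate the structure.

# Simplified conceptual sketch of the two-step pretraining described above.
# The encoders and data handling are placeholders, not the MUSK codebase.
import torch
import torch.nn.functional as F

def masked_pretrain_step(encoder, features, mask_ratio=0.3):
    # Step 1: learn from unpaired data by hiding part of the input and reconstructing it.
    corrupted = features.clone()
    mask = torch.rand(features.shape[:-1], device=features.device) < mask_ratio
    corrupted[mask] = 0.0
    reconstruction = encoder(corrupted)
    return F.mse_loss(reconstruction[mask], features[mask])

def contrastive_align_step(image_encoder, text_encoder, images, texts, temperature=0.07):
    # Step 2: link paired image-text data so matching pairs land close together.
    img = F.normalize(image_encoder(images), dim=-1)   # [batch, dim]
    txt = F.normalize(text_encoder(texts), dim=-1)     # [batch, dim]
    logits = img @ txt.t() / temperature               # pairwise similarity matrix
    targets = torch.arange(img.shape[0], device=img.device)  # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))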
https://developer.nvidia.com/zh-cn/blog/ai-foundation-model-enhances-cancer-diagnosis-and-tailors-treatment/
AI 基础模型增强癌症诊断并实现个性化治疗
斯坦福大学研究人员的一项新研究和 AI 模型正在简化癌症诊断、治疗规划和预后预测。这项名为 MUSK (Multimodal transformer with Unified maSKed modeling) 的研究旨在提高精准肿瘤学,根据每位患者独特的医疗数据为其定制治疗计划。 “多模态基础模型是医学 AI 研究的新领域,”放射肿瘤学副教授兼研究高级作者 Ruijiang LI 说。“最近,我们为医学领域开发了视觉语言基础模型,尤其是在病理学领域。但是,现有研究使用的现有基础模型需要配对的图像 – 文本数据进行预训练。尽管我们付出了大量努力,最终打造出 1M 病理图像文本对,但它仍然不足以完全捕捉整个疾病谱系的多样性。” 在考虑患者状况和规划最佳治疗方案时,肿瘤科医生依靠多种数据源。然而,医生和 AI 模型仍然难以集成和解释复杂的医疗数据。该研究最近发表在 Nature 杂志上,重点介绍了 MUSK 如何帮助医生做出更准确、更明智的决定,同时解决医学 AI 领域的长期挑战。 借助深度学习,MUSK 处理临床文本数据(如医生的笔记)和病理学图像(如组织学幻灯片),以识别医生可能无法立即发现的模式,从而获得更好的临床见解。 为此,它使用了两步多模态 transformer 模型。首先,它从大量未配对的数据中学习,从有用的文本和图像中提取特征。然后,它通过关联配对的图像-文本数据来微调对数据的理解,这有助于识别不同类型的癌症、预测生物标志物,并提出有效的治疗方案。 研究人员基于该领域最大的数据集之一预训练了 AI 模型,使用了来自 11,577 名患者的 50M 病理学图像,其中有 33 种肿瘤类型和 1B 病理学相关文本数据。 据辐射物理学研究主要作者兼博士后学者 Jinxi Xiang 称,预训练在 8 个节点上使用 64 个 NVIDIA V100 Tensor Core GPUs 进行了 10 天以上,使 MUSK 能够高效处理大量病理学图像和临床文本。二级预训练阶段和消融研究使用 NVIDIA A100 80GB Tensor Core GPUs 。研究人员还使用 NVIDIA RTX A6000 GPUs 评估下游任务。该框架通过 NVIDIA CUDA 和 NVIDIA cuDNN 库进行加速,以优化性能。 在 23 项病理学基准测试中,MUSK 在多个关键领域的表现优于现有 AI 模型。它擅长将病理学图像与相关的医学文本进行匹配,从而更有效地收集相关的患者信息。它还能解读与病理学相关的问题,例如识别癌变区域或预测生物标志物的存在,准确率高达 73%。 图 1. 例如,视觉问答 MUSK 可以执行 它将乳腺癌、肺癌和结直肠癌等癌症亚型的检测和分类能力提高了 10%,这有助于早期诊断和治疗规划。它还检测到乳腺癌生物标志物,AUC(用于衡量模型准确性的指标)为 83%。 此外,MUSK 有 75%的时间能够可靠预测癌症生存期结果,以及哪些肺癌和胃食道癌会对免疫治疗做出反应,准确率为 77%。与准确率仅为 60-65%的标准临床生物标志物相比,这是一个显著的改进。 “一个惊人的发现是,集成多模态数据的 AI 模型的性能始终优于仅基于图像或文本数据的 AI 模型,这凸显了多模态方法的强大功能,”Li 说。“MUSK 的真正价值在于它能够利用大规模的未配对图像和文本数据进行预训练,与需要配对数据的现有模型相比,这是一个巨大的提升。” 这项研究的一个核心优势是,它可以在几乎没有训练的情况下适应不同的临床环境。这可以提高肿瘤学工作流程的效率,并帮助医生更快地诊断癌症,同时定制治疗方案以改善患者的治疗效果。 他们未来的工作重点将是在来自不同人群的多机构患者群体中验证该模型,以及用于治疗决策等高风险应用。研究人员指出,临床试验中的前瞻性验证需要获得监管机构的批准。 “我们还致力于将 MUSK 方法扩展到数字病理学,包括放射学图像和基因组数据等其他类型的数据,”Li 说。 研究人员的工作(包括安装说明、模型权重、评估代码和样本数据) 可在 GitHub 上获取。
https://developer.nvidia.com/blog/cuda-toolkit-12-8-delivers-nvidia-blackwell-support/
CUDA Toolkit Now Available for NVIDIA Blackwell
The latest release of the CUDA Toolkit , version 12.8, continues to push accelerated computing performance in data sciences, AI, scientific computing, and computer graphics and simulation, using the latest NVIDIA CPUs and GPUs. This post highlights some of the new features and enhancements included with this release: NVIDIA Blackwell architecture support CUDA Graphs conditional nodes enhancements Blackwell CUTLASS kernels for large language models (LLMs) NVIDIA Nsight Developer Tools updates Math libraries updates cudaStreamGetDevice Compiler updates Accelerated Python updates Feature-complete architectures NVIDIA Blackwell architecture support CUDA Toolkit 12.8 is the first version of the Toolkit to support the NVIDIA Blackwell architecture across the entire suite of Developer Tools including performance tools and profilers, libraries, and compilers. Built with 208 billion transistors—more than 2.5x the number of transistors in NVIDIA Hopper GPUs—Blackwell is the largest GPU ever built. Key Blackwell capabilities supported include: Second-generation Transformer Engine through custom Tensor Core technology: Accelerates inference and training for LLMs and mixture-of-experts (MoE) models. Decompression: Accelerates performance on data analytics and data science pipelines using the latest compression formats such as LZ4, Snappy, and Deflate. Network interconnect: NVLink and NVLink Switches accelerate inter-GPU communications performance for trillion-parameter and multitrillion-parameter AI models. To learn more about the leading innovations in Blackwell, see the NVIDIA Blackwell Architecture Technical Brief . 2x faster CUDA Graphs with runtime kernel selection for lower latency inference With Blackwell, CUDA Graphs APIs continue to be the most efficient way to launch repeated invocations of sequences of GPU operations. CUDA Toolkit 12.8 introduces more enhancements to CUDA Graphs, including additional conditional node types. In many applications, having dynamic control over the execution of work in CUDA Graphs can increase performance and flexibility of graph launches. For example, an algorithm that involves iterating over a series of operations many times until the result converges below a certain threshold can now run wholly on the GPU without needing CPU control management, reducing overhead by as much as 2x. CUDA Toolkit 12.8 improves APIs for runtime control of conditional graph nodes. Conditional nodes contain segments of a graph that can execute, or be skipped, based on a condition to evaluate as the graph is running. Such segments can be evaluated once (an IF node), or repeatedly in a loop (a WHILE node). CUDA 12.8 adds support for two new types of conditional graph nodes: IF/ELSE combined nodes and SWITCH nodes. With the Blackwell architecture, we’ve improved LLM performance to benefit all reasoning models, including DeepSeek-R1. CUDA Graphs enhanced SWITCH and IF/ELSE support delivers 2x more performance for runtime kernel selection versus going back to the CPU for launch decision-making. Training : By reducing CPU dependency for kernel selection, training workloads sustain even more GPU Tensor Core throughput, resulting in higher Model FLOPs Utilization (MFU). This improves performance using the same GPU infrastructure, reducing time and cost to train. Inference : For next-generation reasoning models that make use of test-time compute, a high token generation rate is critical as each inference request can generate a vast number of tokens per query. 
CUDA 12.8 new stream API enables fewer calls back to the host CPU, reducing the time between one kernel finishing and the next one starting, increasing token generation rate. This results in more tokens generated in fixed time budget, helping models reason more and increasing intelligence. To learn more, see Dynamic Control Flow in CUDA Graphs with Conditional Nodes . Blackwell CUTLASS kernels for LLMs CUTLASS , since its 2017 debut, has been instrumental for researchers and developers implementing high-performance CUDA kernels on NVIDIA GPUs. By providing developers with comprehensive tools to design custom operations, such as GEMMs and Convolutions, CUTLASS has been critical for the development of hardware-aware algorithms, powering breakthroughs like FlashAttention that helped spark modern AI. With the release of CUTLASS 3.8—which supports CUDA 12.8—NVIDIA is extending support to the Blackwell architecture, enabling developers to harness next-generation Tensor Cores with support for all new data types. This includes new narrow precision MX formats and the NVIDIA-developed FP4 format, which increase compute throughput. Figure 1 shows CUTLASS can achieve up to 98% relative peak performance for Tensor Core operations. Figure 1. Blackwell CUTLASS GEMM performance relative to expected peak, delivering up to 98% of Blackwell peak performance For DeepSeek-V3 and DeepSeek-R1, grouped GEMMs make up a large portion of the MoE compute required during inference. These operations enable different matrix sizes, scaling factors, and fusions to be grouped and parallelized in a single persistent-kernel launch. With CUTLASS, on Blackwell with FP4, Grouped GEMM kernel performance increases by up to 5x over H200 with FP16. Figure 2. CUTLASS Grouped GEMM performance for MoE inference used in DeepSeek delivers up to 5x more performance on Blackwell compared to Hopper at various precisions NVIDIA Nsight Developer Tools NVIDIA Nsight Compute 2025.1 is the first official release with support for the Blackwell architecture. Updates include visualization of Blackwell Hardware Tensor Memory in the memory chart as well as Tensor Core performance data. Figure 3. Tensor Memory traffic in the Nsight Compute memory chart It also comes with several improvements to the increasingly popular range profiling feature. Users can now collect source-level metrics, including Instructions Executed and memory access information, inside profiled ranges. This update also enables Guided Analysis rules evaluation for ranges. This built-in expertise for identifying performance issues is a key component of NVIDIA Nsight Compute. This release reports kernel stack sizes and adds custom tooltips to help users understand their workload performance. This release of Compute Sanitizer, an automatic correctness checking tool, adds support for Python call stacks to accurately locate kernel correctness issues when kernels are launched through Python applications. Additionally, new Tensor Core MMA guardrails for Blackwell can report errors related to Tensor Core programming. These are enabled by adding the PTXAS flag -g-tmem-access-check when compiling programs. Examples of common errors include access to unallocated tensor memory, invalid addresses, and invalid allocator usage. Math libraries updates With CUDA Toolkit 12.8, we have several new library enhancements that leverage the new Blackwell architecture and help accelerate applications in AI, data sciences, graphics and simulation, and high-performance scientific computing. 
New features cuBLAS APIs were extended to support microscaled 4-bit and 8-bit floating point mixed-precision tensor core accelerated matrix multiplication for compute capability 10.0 (Blackwell) and higher. Introduced initial support for CUDA in Graphics (CIG) on Windows x64 for NVIDIA Ampere GPU architecture and Blackwell GeForce-class GPUs. CIG contexts are now autodetected, and cuBLAS selects kernels that comply with CIG shared memory usage limits. cuSOLVER now supports zsytrf/zsytrs, a complex symmetric direct solver without pivoting. nvJPEG now provides support for the Tegra architecture. NPP now provides support for the DRIVE Thor architecture. cudaStreamGetDevice Applications often use CUDA streams to provide ordered access to GPU resources. An instance of a CUDA stream is associated with a fixed CUDA device. In applications that address multiple devices, there are scenarios where getting a handle to the underlying device for a given stream is useful to tailor the application to device characteristics. Previously, the CUDA API did not provide a mechanism for retrieving the device associated with a CUDA stream; developers had to track this themselves. The addition of the cudaStreamGetDevice CUDA API to retrieve the device associated with a CUDA stream can simplify applications. Compiler updates New compiler updates include the following: The CUDA Toolkit 12.8 release introduces support for GCC 14 as a host-side compiler. The default high-level optimizer is now based on LLVM 18 for the Blackwell architecture. nvdisasm now supports emitting JSON formatted SASS disassembly. Accelerated Python updates The following two beta releases are now available for Python users: CUDA Python has released an early prototype of a new idiomatic object model called cuda.core and moved the CUDA binding to a submodule, cuda.bindings . For more information, see the documentation in the NVIDIA/cuda-python GitHub repo. CUDA Core Compute Libraries (CCCL) has released early prototypes of Python for parallel and cooperative algorithms, enabling you to use thread-level parallelism with user-defined types and functions from pure Python code. Learn more about CCCL . Additionally, the CuPy team is releasing a new version with Blackwell patches validated for general availability. Feature-complete architectures With the CUDA Toolkit 12.8 release, we now consider the Maxwell, Pascal, and Volta architectures to be feature-complete and support for them will be frozen in an upcoming release. This means that, in future releases, no new features will be added to the driver to enable new CUDA Toolkit functionality supporting Maxwell, Pascal, and Volta architectures. End users will be able to run existing software stacks and applications on Maxwell, Pascal, and Volta architectures using the supported upcoming LTS driver branch through its lifecycle. Starting with release 12.8, developers running offline compilers targeting these architectures will output a warning message when using nvcc , nvrtc , and nvjitlink . In the next major CUDA Toolkit release, offline compilation support for the Maxwell, Pascal, and Volta architectures will be removed from the compilers. The upcoming LTS driver for production application execution and JIT compilation of Maxwell, Pascal, and Volta applications will be supported for the normal 3-year LTS support window. For more details, read the CUDA Toolkit 12.8 Release Notes . Summary The CUDA Toolkit 12.8 release provides full feature support for the NVIDIA Blackwell architecture. 
This release continues to provide enhanced support for the newest NVIDIA GPUs, accelerated libraries, compilers, and Developer Tools, whether you’re developing applications in C++ or Python. Want more information? Check out the CUDA documentation , browse the latest NVIDIA Deep Learning Institute (DLI) offerings, and visit the NGC catalog . Ask questions and join the conversation in the CUDA Developer Forums. Acknowledgments Thanks to the following NVIDIA contributors: Stephen Jones, Jackson Marusarz, Becca Zandstein, Andy Terrel, Ashraf Eassa, Matt Nicely, and Mridula Prakash.
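As a small illustration of the cudaStreamGetDevice addition discussed above, the sketch below queries the device behind a stream from Python through the cuda-python runtime bindings. It assumes the 12.8 bindings expose the new API as cudart.cudaStreamGetDevice(stream) with the usual (error, value) return convention; treat that as an assumption and check the cuda-python documentation for your version.

# Sketch: look up the device associated with a CUDA stream (new in CUDA 12.8).
# Assumes the auto-generated cuda-python binding cudart.cudaStreamGetDevice exists
# and follows the usual (err, value) convention; verify against your bindings.
from cuda import cudart

def device_of_stream(stream) -> int:
    err, device = cudart.cudaStreamGetDevice(stream)
    if err != cudart.cudaError_t.cudaSuccess:
        raise RuntimeError(f"cudaStreamGetDevice failed: {err}")
    return device

if __name__ == "__main__":
    err, stream = cudart.cudaStreamCreate()
    assert err == cudart.cudaError_t.cudaSuccess
    print("Stream is associated with device:", device_of_stream(stream))
    cudart.cudaStreamDestroy(stream)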
https://developer.nvidia.com/zh-cn/blog/cuda-toolkit-12-8-delivers-nvidia-blackwell-support/
CUDA 工具包现已支持 NVIDIA Blackwell 架构
CUDA 工具包 的最新版本 (版本 12.8) 使用最新的 NVIDIA CPU 和 GPU,持续提升数据科学、AI、科学计算以及计算机图形和模拟领域的加速计算性能。本文重点介绍了此版本包含的一些新功能和增强功能: NVIDIA Blackwell 架构支持 CUDA 图形处理条件节点增强功能 用于大语言模型(LLMs)的 Blackwell CUTLASS 内核 NVIDIA Nsight 开发者工具更新 数学库更新 cudaStreamGetDevice 编译器更新 加速 Python 更新 功能齐全的架构 NVIDIA Blackwell 架构支持 CUDA 工具包 12.8 是该工具包的第一个版本,在整个开发者工具套件 (包括性能工具和分析器、库和编译器) 中支持 NVIDIA Blackwell 架构。Blackwell 由 208 亿个晶体管构建而成,是 NVIDIA Hopper GPU 中晶体管数量的 2.5 倍以上,是迄今为止最大的 GPU。 Blackwell 支持的主要功能包括:Key Blackwell 采用自定义 Tensor Core 技术的第二代 Transformer 引擎:加速 LLM 和 mixture-of-experts (MoE) 模型的推理和训练。 解压缩: 使用 LZ4、Snappy 和 Deflate 等最新压缩格式,加速数据分析和数据科学工作流的性能。 网络互连:NVLink 和 NVLink Switches 加速万亿参数和数万亿参数 AI 模型的 GPU 间通信性能。 如需详细了解 NVIDIA Blackwell 的领先创新,请参阅 NVIDIA Blackwell 架构技术概览。 使用运行时核选择将 CUDA Graphs 速度提升 2 倍,从而降低延迟推理 借助 Blackwell,CUDA Graphs APIs 仍然是启动 GPU 操作序列重复调用的最高效方式。CUDA Toolkit 12.8 为 CUDA Graphs 引入了更多增强功能,包括其他 条件节点类型 。 在许多应用程序中,对 CUDA Graphs 中工作的执行进行动态控制可以提高图形启动的性能和灵活性。例如,一种算法需要多次迭代一系列运算,直到结果收到某个值以下,现在这种算法无需进行 CPU 控制管理即可完全在 GPU 上运行,从而将开销降低高达 2 倍。CUDA Toolkit 12.8 改进了用于条件图形节点运行时控制的 API。 条件节点包含图形的片段,这些片段可以在图形运行时根据要评估的条件执行或跳过。此类片段可以评估一次 (IF 节点),也可以在循环中重复评估 (WHILE 节点)。CUDA 12.8 增加了对两种新型条件图形节点的支持:IF/ELSE 组合节点和 SWITCH 节点。 借助 Blackwell 架构,我们改进了 LLM 性能,使包括 DeepSeek-R1 在内的所有推理模型受益。与返回 CPU 进行启动决策相比,CUDA Graphs 增强的 SWITCH 和 IF/ELSE 支持可将运行时内核选择的性能提高 2 倍。 训练:通过减少内核选择对 CPU 的依赖,训练工作负载可维持更多的 GPU Tensor Core 吞吐量,从而提高模型 FLOPS 利用率(MFU)。这提高了使用相同的 GPU 基础架构的性能,减少了训练时间和成本。 推理:对于使用测试时计算的新一代推理模型 ,高令牌生成速率至关重要,因为每个推理请求都可以在每个查询中生成大量令牌。CUDA 12.8 新流 API 可减少对主机 CPU 的调用,从而缩短一次内核处理与下一次启动之间的时间,从而提高令牌生成率。这会在固定时间预算内生成更多 token,帮助模型推理更多并提高智能。 如需了解详情, 请参阅使用条件节点的 CUDA 图形中的动态控制流。 适用于 LLMs 的 Blackwell CUTLASS 内核 自 2017 年首次推出以来, CUTLASS 一直在推动研究人员和开发者在 NVIDIA GPUs 上实施高性能 CUDA 核函数。通过为开发者提供全面的工具来设计自定义操作 (例如 GEMMs 和 Convolutions),CUTLASS 在开发硬件感知算法方面发挥了至关重要的作用,推动了 FlashAttention 等帮助激发现代 AI 的突破。 随着支持 CUDA 12.8 的 CUTLASS 3.8 的发布,NVIDIA 将扩展对 Blackwell 架构的支持,使开发者能够利用新一代 Tensor Core 来支持所有新的数据类型。这包括新的窄精度 MX 格式和 NVIDIA 开发的 FP4 格式,可提高计算吞吐量。图 1 显示,对于 Tensor Core 运算,CUTLASS 可实现高达 98% 的相对峰值性能。 图 1. 
Blackwell CUTLASS GEMM 性能相对于预期峰值,可提供高达 98% 的 Blackwell 峰值性能 对于 DeepSeek-V3 和 DeepSeek-R1,分组的 GEMM 在推理期间所需的 MoE 计算中占很大比例。这些运算支持在单个持久性核函数启动中对不同的矩阵大小、缩放系数和融合进行分组和并行化。借助 CUTLASS,在 Blackwell 以 FP4,Grouped GEMM 内核性能增加高达 5 倍,相比使用 FP16 的 H200。 图 2、与 Hopper 相比,DeepSeek 中使用的用于 MoE 推理的 CUTLASS 分组 GEMM 性能在 Blackwell 上在各种精度下的性能提升高达 5 倍 NVIDIA Nsight 开发者工具 NVIDIA Nsight Compute 2025.1 是首个支持 Blackwell 架构的官方版本。更新包括显存图表中 Blackwell 硬件 Tensor 内存的可视化,以及 Tensor Core 性能数据。 图 3、Nsight Compute 内存图中的 Tensor 内存流量 它还对日益流行的范围分析功能进行了多项改进。用户现在可以在已分析的范围内收集源级指标,包括已执行指令和内存访问信息。此更新还启用了针对范围的引导分析规则评估。这种用于识别性能问题的内置专业知识是 NVIDIA Nsight Compute 的关键组件。此版本报告了内核堆栈大小,并添加了自定义工具提示,以帮助用户了解其工作负载性能。 此版本的 Compute Sanitizer 是一款自动正确性检查工具,增加了对 Python 调用堆栈的支持,可在通过 Python 应用启动内核时准确定位内核正确性问题。此外,用于 Blackwell 的新 Tensor Core MMA 护栏可以报告与 Tensor Core 编程相关的错误。在编译程序时,可以通过添加 PTXAS 标志 -g-tmem-access-check 来启用这些功能。常见错误的示例包括访问未分配的 tensor 内存、无效的地址以及使用无效的分配器。 数学库更新 借助 CUDA 工具包 12.8,我们获得了一些新的增强功能库,这些增强功能利用了新的 Blackwell 架构,并有助于加速 AI、数据科学、图形和仿真以及高性能科学计算领域的应用程序。 新功能 cuBLAS API 经过扩展,支持微缩 4 位和 8 位浮点混合精度张量核心加速矩阵乘法,可实现 10.0(Blackwell)及更高版本的计算能力。 为 Windows x64 上的 NVIDIA Ampere GPU 架构和 Blackwell GeForce 级 GPU 引入了对 CUDA in Graphics (CIG) 的初步支持。现在,系统会自动检测 CIG 上下文,并且 cuBLAS 会选择符合 CIG 共享内存使用限制的内核。 cuSOLVER 现在支持 zsytrf/zsytrs,这是一款无需旋转的复杂对称直接求解器。 nvJPEG 现在支持 Tegra 架构。 NPP 现在为 DRIVE Thor 架构提供支持。 cudaStreamGetDevice 应用程序通常使用 CUDA 流提供对 GPU 资源的有序访问。CUDA 流实例与固定的 CUDA 设备相关联。在用于处理多台设备的应用中,在某些情况下,为给定流获取底层设备的句柄有助于根据设备特性定制应用。 以前,CUDA API 没有提供检索与 CUDA 流关联的设备的机制;开发者必须自行追踪。添加 cudaStreamGetDevice CUDA API 以检索与 CUDA 流关联的设备,可以简化应用。 编译器更新 新的编译器更新包括以下内容: CUDA 工具包 12.8 版本引入了对作为主机端编译器的 GCC 14 的支持。 现在,Blackwell 架构的默认高级优化器基于 LLVM 18 nvdisasm 现在支持发射 JSON 格式的 SASS 反汇编。 加速 Python 更新 以下两个测试版现已面向 Python 用户提供: CUDA Python 已发布名为 cuda.core 的新惯用对象模型的早期原型,并将 CUDA 绑定移至子模块 cuda.bindings 。有关更多信息,请参阅 NVIDIA/cuda-python GitHub 存储库中的文档。 CUDA 核心计算库 ( CCCL ) 已发布用于并行和协作算法的早期 Python 原型,使您能够使用线程级并行性以及来自纯 Python 代码的用户定义类型和函数。详细了解 CCCL。 此外,CuPy 团队还将发布新版本,其中的 Blackwell 补丁经过验证,现已全面推出。 功能齐全的架构 在 CUDA 工具包 12.8 版本中,我们现在认为 Maxwell、Pascal 和 Volta 架构功能齐全,并且即将发布的版本将冻结对这些架构的支持。 这意味着,在未来的版本中,不会向驱动添加任何新功能来启用支持 Maxwell、Pascal 和 Volta 架构的新 CUDA 工具包功能。最终用户将能够在其生命周期中使用受支持的即将推出的 LTS 驱动分支,在 Maxwell、Pascal 和 Volta 架构上运行现有的软件堆栈和应用。 从版本 12.8 开始,开发者在运行针对这些架构的离线编译器时,将在使用 nvcc 、 nvrtc 和 nvjitlink 时输出警告消息。 在下一个主要 CUDA 工具包版本中,将从编译器中删除对 Maxwell、Pascal 和 Volta 架构的离线编译支持。即将推出的用于生产应用程序执行的 LTS 驱动以及 Maxwell、Pascal 和 Volta 应用程序的 JIT 编译将在正常的 3 年期 LTS 支持窗口期内获得支持。 如需了解更多详情,请参阅 CUDA Toolkit 12.8 版本说明 。 总结 CUDA 工具包 12.8 版本为 NVIDIA Blackwell 架构提供完整的功能支持。无论您是使用 C++ 还是 Python 开发应用程序,此版本都将继续为最新的 NVIDIA GPU、加速库、编译器和开发者工具提供增强支持。 想要了解更多信息?查看 CUDA 文档 ,浏览最新的 NVIDIA Deep Learning Institute (DLI) 产品 ,并访问 NGC 目录 。在 CUDA Developer Forums 中提出问题并加入对话。 致谢 感谢以下 NVIDIA 贡献者:Stephen Jones、Jackson Marusarz、Becca Zandstein、Andy Terrel、Ashraf Eassa、Matt Nicely 和 Mridula Prakash。
https://developer.nvidia.com/blog/high-performance-remote-io-with-nvidia-kvikio/
High-Performance Remote IO With NVIDIA KvikIO
Workloads processing large amounts of data, especially those running on the cloud, will often use an object storage service (S3, Google Cloud Storage, Azure Blob Storage, etc.) as the data source. Object storage services can store and serve massive amounts of data, but getting the best performance can require tailoring your workload to how remote object stores behave. This post is for RAPIDS users who want to read or write data to object storage as quickly as possible so that IO doesn’t bottleneck your workload. Some of your knowledge about how local file systems behave translates to remote object stores, but they are fundamentally different. Probably the biggest difference between the two, at least for data analysis workloads, is that read and write operations on object storage have higher and more variable latency . Every storage service has their own set of best practices and performance guidelines ( AWS , Azure ). Here, we’ll give some general guidelines that are focused on data analysis workloads. Location Placing your compute nodes near the storage service (ideally, in the same cloud region) will give you the fastest and most reliable network between the machines running your workload and the machines serving the data. And, at the end of the day, the transfer will be limited by the speed of light so minimizing the physical distance doesn’t hurt. File format “Cloud-native” file formats have been developed to work well with object storage. These file formats typically provide fast, easy access to metadata (which includes both high-level information like the column names or data types, and lower-level information like where in the file specific data subsets are located). Apache Parquet , Zarr , and Cloud Optimized GeoTIFF are some examples of cloud-native file formats for various types of data. Because object storage services typically support range requests , clients (like cuDF ) can read the metadata and then download just the data you actually need. For example, cuDF can read just a few columns out of a Parquet file with many columns, or a Zarr client can read a single chunk out of a large n-dimensional array. These reads are done in just a few HTTP requests, and without needing to download a bunch of extraneous data that just gets filtered out. File size Because every read operation requires (at least) one HTTP request, we’d prefer to amortize the overhead from each HTTP request over a reasonably large number of bytes. If you control the data-writing process, you’ll want to ensure that the files are large enough for your downstream processing tasks to get good performance. The optimal value depends on your workload, but somewhere in the dozens to low-hundreds of MBs is common for parquet files (see below for some specific examples). That said, you’ll need to be careful with how file size interacts with the next tool in our kit: concurrency. Concurrency Using concurrency to download multiple blobs (or multiple pieces of a single blob) at the same time is essential to getting good performance out of a remote storage service. Since it’s a remote service, your process is going to spend some time (perhaps a lot of time) waiting around doing nothing. This waiting spans the time between when the HTTP request is sent and the response received. During this time, we wait for the network to carry the request, the storage service to process it and send the response, and the network to carry the (possibly large) response. 
While parts of that request/response cycle scale with the amount of data involved, other parts are just fixed overhead. Object storage services are designed to handle many concurrent requests. We can combine that with the fact that each request involves some time waiting around doing nothing, to make many concurrent requests to raise our overall throughput. In Python, this would typically be done using a thread pool : pool = concurrent.futures.ThreadPoolExecutor() futures = pool.map(request_chunk, chunks) Or with asyncio : tasks = [request_chunk_async(chunk) for chunk in chunks] await asyncio.gather(*tasks) We’re able to have a lot of reads waiting around doing nothing at the same time , which improves our throughput. Because each thread/task is mostly doing nothing, it’s ok to have more threads/tasks than your machine has cores. Given enough concurrent requests you will eventually saturate your storage service, which has some requests per second and bandwidth targets it tries to meet. But those targets are high; you’ll typically need many machines to saturate the storage service and should achieve very high throughput. Libraries Everything above applies to essentially any library doing remote IO from an object storage service. In the RAPIDS context, NVIDIA KvikIO is notable because It automatically chunks large requests into multiple smaller ones and makes those requests concurrently. It can read efficiently into host or device memory, especially if GPU Direct Storage is enabled. It’s fast. As mentioned in the RADIDS 24.12 release announcement , KvikIO can achieve impressive throughput when reading from S3. Let’s take a look at some benchmarks to see how it does. Benchmarks When you read a file, KvikIO splits that read into smaller reads of kvikio.defaults.task_size bytes. It makes those read requests in parallel using a thread pool with kvikio.defaults.num_threads workers. These can be controlled using the environment variables KVIKIO_TASK_SIZE and KVIKIO_NTHREADS , or through Python with: with kvikio.defaults.set_num_threads(num_threads), kvikio.defaults.set_task_size(size): ... See Runtime Settings for more. This chart shows the throughput, in megabits per second, of reading a 1 GB blob from S3 to a g4dn EC2 instance in the same region for various sizes of the thread pool (higher is better). Figure 1. From a benchmark reading a 1 GB file from S3 to a g4dn.xlarge EC2 instance, which has a published bandwidth of up to 25 Gbps. This the throughput of kvikio.RemoteFile.read for various values of kvikio.defaults.num _threads and a task size of 16 MiB. Throughput increases as we add more threads and parallelize the reads, up to a point. Fewer threads (less than four) achieve lower throughput and take longer to read the file. More threads (64, 128, 256) achieve higher throughput by parallelizing the requests to the storage service, which serves them in parallel. There are diminishing and even negative returns as we hit the limits of the storage service, network, or other bottlenecks in our system. With remote IO, each thread spends a relatively long time idle waiting for the response, so a higher number of threads (relative to your number of cores) might be appropriate for your workload. We see that the throughput is highest between 64 to 128 threads in this case. As shown in the next figure, the task size also affects the maximum throughput. Figure 2. From a benchmark reading a 1 GB file from S3 to a g4dn.xlarge EC2 instance, which has a published bandwidth of up to 25 Gbps. 
This shows a heatmap of the throughput of kvikio.RemoteFile.read . The horizontal axis shows throughput for various task sizes, while the vertical axis shows various thread counts. As long as the task size isn’t too small (around or below 4 MiB) or too large (around or above 128 MiB), then we get around 10 Gbps of throughput. With too small of a task size, the overhead of making many HTTP requests reduces throughput. With too large of a task size, we don’t get enough concurrency to maximize throughput. KvikIO achieves higher throughput on this workload when compared with boto3 , the AWS SDK for Python, even when boto3 is used in a thread pool to execute requests concurrently. Figure 3. From a benchmark reading a 1 GB from S3 to a g4dn.xlarge EC2 instance, which has a published bandwidth of up to 25 Gbps. The KvikIO benchmark used 64 threads and 16 MiB task size. The Boto3 benchmark used a ThreadPool to read many byte 4 MB chunks in parallel, which a parameter search showed to be the fastest chunk size for boto3. As a slightly more realistic workload, though still just one focused solely on IO, we compare the performance reading a batch of 360 parquet files, each about 128 MB. This was run on an AWS g4dn.12xlarge instance , which has 4 NVIDIA T4 GPUs and 48 vCPUs. Figure 4. From a benchmark reading a parquet data set from S3 to a g4dn.12xlarge EC2 instance, which has a published bandwidth of up to 50 Gbps. The dataset had 360 Apache Parquet files of about 128 MB each, for a total of about 46 GB. The Dask cluster had 4 workers. These results use cuDF 25.04 which will include an optimization to read parquet footers in parallel. With KvikIO enabled, the four Dask worker processes are able to collectively achieve almost 20 Gbps of throughput from S3 to this single node. Conclusion As RAPIDS accelerates other parts of your workload, IO can become a bottleneck. If you’re using object storage and are tired of waiting around for your data to load, try out some of the recommendations from this post. Let us know how things work with KvikIO on GitHub . You can also join over 3,500 members on the RAPIDS Slack community to talk GPU-accelerated data processing.
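To put the thread-count and task-size knobs above into a single runnable picture, here is a minimal sketch that reads one S3 object into GPU memory with KvikIO and prints a rough throughput number. The kvikio.defaults context managers and RemoteFile.read are the APIs shown in this post; the RemoteFile.open_s3(bucket, key) constructor, the nbytes property, and the bucket/key names are assumptions to check against the KvikIO documentation for your version.

# Sketch: read an S3 object into device memory with KvikIO and report rough throughput.
# The open_s3 constructor, nbytes property, and bucket/key names are assumptions;
# set_num_threads/set_task_size and RemoteFile.read are as shown in this post.
import time

import cupy as cp
import kvikio

BUCKET, KEY = "my-bucket", "data/blob-1gb.bin"     # hypothetical object
NUM_THREADS = 64                                   # values that did well in the benchmarks above
TASK_SIZE = 16 * 1024 * 1024                       # 16 MiB tasks

with kvikio.defaults.set_num_threads(NUM_THREADS), kvikio.defaults.set_task_size(TASK_SIZE):
    remote_file = kvikio.RemoteFile.open_s3(BUCKET, KEY)
    buf = cp.empty(remote_file.nbytes, dtype=cp.uint8)   # device buffer to read into

    start = time.perf_counter()
    remote_file.read(buf)                                # chunked and threaded under the hood
    elapsed = time.perf_counter() - start

print(f"~{remote_file.nbytes * 8 / elapsed / 1e9:.1f} Gbps")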
https://developer.nvidia.com/zh-cn/blog/high-performance-remote-io-with-nvidia-kvikio/
借助 NVIDIA KvikIO 实现高性能远程 IO
处理大量数据的工作负载 (尤其是在云端运行的工作负载) 通常会使用对象存储服务 (S3、Google Cloud Storage、Azure Blob Storage 等) 作为数据源。对象存储服务可以存储和提供海量数据,但要想获得最佳性能,可能需要根据远程对象存储的行为方式调整工作负载。本文适用于希望尽快将数据读或写到对象存储,以便 IO 不会限制工作负载的 RAPIDS 用户。 您对本地文件系统行为方式的一些了解可转换为远程对象存储,但它们本质上是不同的。这两者之间的最大区别 (至少对于数据分析工作负载而言) 可能在于,对象存储上的读取和写入操作具有越来越高的可变延迟。每个存储服务 (AWS、Azure) 都有自己的一套最佳实践和性能指南。在这里,我们将提供一些专注于数据分析工作负载的一般指南。 地址 将计算节点放置在存储服务附近 (理想情况下,应位于同一云区域),可在运行工作负载的计算机和为数据提供服务的计算机之间提供速度最快、最可靠的网络。在一天结束时,传输将受到光速的限制,因此最大限度地减少物理距离不会造成伤害。 文件格式 “云原生”文件格式的开发能够很好地与对象存储配合使用。这些文件格式通常可让用户快速轻松地访问元数据 (元数据包括列名称或数据类型等高级信息,以及文件特定数据子集所在位置等低级信息)。 Apache Parquet 、 Zarr 和 Cloud Optimized GeoTIFF 是适用于各种类型数据的云原生文件格式的一些示例。 由于对象存储服务通常支持范围请求,因此客户端 (如 cuDF ) 可以读取元数据,然后只下载您实际需要的数据。例如,cuDF 只能从包含多列的 Parquet 文件中读取几列数据,或者 Zarr 客户端可以从大型 n 维数组中读取单个 chunk。这些读取只需通过几次 HTTP 请求即可完成,而且无需下载一堆刚刚被过滤掉的不相干数据。 文件大小 由于每个读取操作都需要 (至少) 一个 HTTP 请求,因此我们倾向于在合理数量的字节数上分担每个 HTTP 请求的用度。如果您控制数据写入过程,则需要确保文件足够大,以便下游处理任务获得良好性能。最佳值取决于您的工作负载,但 parquet 文件的大小通常介于数十 MB 到数百 MB 之间 (请参阅下文,了解一些特定示例)。 也就是说,您需要注意文件大小与 Kit 中的下一个工具:并发的交互方式。 并发 使用并发同时下载多个 blobs (或单个 blob 的多个部分) 对于从远程存储服务中获得良好性能至关重要。由于这是一项远程服务,您的流程将花费一些时间 (可能会花费大量时间) 四处等待,不执行任何操作。此等待时间为 HTTP 请求被发送到响应被接收之间的时间。在此期间,我们会等待网络执行请求,等待存储服务处理并发送响应,等待网络执行响应 (可能较大)。虽然该请求/响应周期的一部分会随所涉及的数据量而扩展,但其他部分只是固定的开销。 对象存储服务旨在处理许多并发请求。我们可以将这一点与每个请求都涉及一些时间来等待不执行任何操作的事实相结合,以发出许多并发请求来提高整体吞吐量。在 Python 中,这通常使用线程池完成: pool = concurrent.futures.ThreadPoolExecutor() futures = pool.map(request_chunk, chunks) 或使用 异步 : tasks = [request_chunk_async(chunk) for chunk in chunks] await asyncio.gather(*tasks) 我们能够让大量读取 同时 不执行任何操作,从而提高吞吐量。由于每个线程/任务通常不执行任何任务,因此拥有比计算机核心数更多的线程/任务也是可以的。如果并发请求数量足够多,您最终会使存储服务饱和,而存储服务试图满足一些每秒请求数和带宽目标数。但这些目标很高;您通常需要多台机器使存储服务饱和,并且应该实现非常高的吞吐量。 库 上述内容基本上适用于从对象存储服务执行远程 IO 的任何库。在 RAPIDS 环境中, NVIDIA KvikIO 值得注意,因为 它会自动将大型请求分块为多个较小的请求,并并发发出这些请求。 它可以高效读取主机或设备内存,尤其是启用 GPU Direct Storage 时。 速度很快。 正如 RADIDS 24.12 发布公告中提到的那样,从 S3 读取数据时,KvikIO 可以实现惊人的吞吐量。我们来看看一些基准测试,看看效果如何。 基准测试 当您读取文件时,KvikIO 会将读取的文件拆分成较小的 kvikio.defaults.task_size 字节读取。它使用具有 kvikio.defaults.num_threads 工作线程的线程池并行执行这些读取请求。可以使用环境变量 KVIKIO_TASK_SIZE 和 KVIKIO_NTHREADS 控制这些内容,也可以通过 Python 使用: with kvikio.defaults.set_num_threads(num_threads), kvikio.defaults.set_task_size(size): ... 
详情请参阅 Runtime Settings 。 此图表显示了在同一区域内,针对不同大小的线程池,从 S3 到 g4dn EC2 实例读取 1 GB Blob 的吞吐量 (以 Mbps 为单位) (越高越好)。 图 1、从 S3 读取 1 GB 文件的基准测试,到具有高达 25 Gbps 已发布带宽的 g4dn.xlarge EC2 实例。这是 kvikio.RemoteFile.read 的吞吐量,适用于各种值的 kvikio.defaults.num _threads 和 16 MiB 的任务。随着我们添加更多线程并对读取进行并行化,吞吐量会增加到一定程度。 线程越少 (少于 4 个),吞吐量越低,读取文件的时间越长。更多线程 (64、128、256) 通过将请求并行化到以并行方式提供服务的存储服务,实现更高的吞吐量。当我们遇到系统中存储服务、网络或其他瓶颈的限制时,会出现递减甚至负回报的情况。 借助远程 IO,每个线程都会在相对较长的时间内等待响应,因此对于您的工作负载,可能适合使用更多线程 (相对于核心数量而言)。我们看到,在本例中,吞吐量最高,介于 64 到 128 个线程之间。 如下图所示,任务大小也会影响最大吞吐量。 图 2、从 S3 读取 1 GB 文件的基准测试,到具有高达 25 Gbps 已发布带宽的 g4dn.xlarge EC2 实例 。这显示了 kvikio.RemoteFile.read 吞吐量的热图。水平轴显示各种任务大小的吞吐量,而垂直轴显示各种线程数量。 只要任务大小不是太小(大约或低于 4 MiB)或太大(大约或超过 128 MiB),吞吐量就会达到 10 Gbps 左右。由于任务规模过小,发出许多 HTTP 请求会降低吞吐量。由于任务规模过大,我们无法获得足够的并发能力来最大限度地提高吞吐量。 与 boto3 (适用于 Python 的 AWS SDK) 相比,即使在线程池中使用 boto3 并发执行请求,KvikIO 也能实现更高的吞吐量。 图 3、从从 S3 读取 1 GB 的基准测试,到具有高达 25 Gbps 已发布带宽的 g4dn.xlarge EC2 实例。KvikIO 基准测试使用 64 个线程和 16 MiB 任务大小。Boto3 基准测试使用 ThreadPool 并行读取许多 4 MB 字节的块,而参数搜索表明,对于 Boto3 而言,这是最快的块大小。 对于略为逼真的工作负载 (尽管仍然仅有一个工作负载专注于 IO),我们比较了读取一批 360 个 parquet 文件 (每个文件约 128 MB) 的性能。这在 AWS g4dn.12xlarge 实例上运行,该实例包含 4 个 NVIDIA T4 GPU 和 48 个 vCPUs。 图 4、从读取 S3 中的 Parquet 数据集的基准测试,到具有高达 50 Gbps 已发布带宽的 g4dn.12xlarge EC2 实例。该数据集包含 360 个 Apache Parquet 文件,每个文件约 128 MB,总计约 46 GB。Dask 集群有 4 个工作者。这些结果使用 cuDF 25.04,其中包括并行读取 Parquet 文件页脚的优化。 启用 KvikIO 后,四个 Dask 工作进程能够共同实现从 S3 到此单个节点的近 20 Gbps 吞吐量。 结束语 随着 RAPIDS 加速工作负载的其他部分,IO 可能会成为瓶颈。如果您使用的是对象存储,并且已经疲于等待数据加载,请尝试本博文中的一些建议。让我们了解如何在 Github 上使用 KvikIO。您还可以与 RAPIDS Slack 社区的 3,500 多名成员一起讨论 GPU 加速的数据处理。
https://developer.nvidia.com/blog/latest-multimodal-addition-to-microsoft-phi-slms-trained-on-nvidia-gpus/
Latest Multimodal Addition to Microsoft Phi SLMs Trained on NVIDIA GPUs
Large language models (LLMs) have permeated every industry and changed the potential of technology. However, due to their massive size, they are not practical for the resource constraints that many companies currently face. The rise of small language models (SLMs) bridges quality and cost by creating models with a smaller resource footprint. SLMs are a subset of language models that tend to focus on specific domains and are built with simpler neural architectures. As models evolve to mimic how humans perceive the world around them, they must also accept multiple forms of multimodal data. Microsoft announces the new generation of open SLMs in the Phi family with two new additions: Phi-4-mini Phi-4-multimodal Phi-4-multimodal is the first multimodal model to join the family that accepts text, audio, and image data inputs. These models are small enough for on-device deployment. This release builds on top of the December 2024 research-only release of the Phi-4 14B parameter SLM and enables commercial use for the two new smaller models. The new models are available on Azure AI Foundry, Microsoft's cloud AI platform for designing, customizing, and managing AI applications and agents. You can test out each member of the Phi family through the NVIDIA API Catalog, which is the first sandbox environment to support each modality and tool-calling for Phi-4-multimodal. Use the preview NIM microservice to integrate the model into your applications today. Why invest in SLMs? SLMs enable generative AI capabilities in memory- and compute-constrained environments. For example, SLMs can be deployed directly on smartphones and several consumer-grade devices. On-device deployment can facilitate privacy and compliance for use cases that must adhere to regulatory requirements. Other benefits of SLMs include lower latency due to inherently faster inference compared to an LLM of similar quality. SLMs do tend to perform better on specialized tasks correlated to their training data. However, to supplement generalization and adaptability to different tasks, you can use retrieval-augmented generation (RAG) or native function calling to build performant agentic systems. Phi-4-multimodal Phi-4-multimodal has 5.6B parameters and accepts audio, image, and text inputs for reasoning. This enables it to support use cases such as automatic speech recognition (ASR), multimodal summarization, translation, OCR, and visual reasoning. This model was trained on 512 NVIDIA A100-80GB GPUs over 21 days. Figure 1 shows how you can preview your image data and ask Phi-4-multimodal visual QA questions in the NVIDIA API Catalog. You can also see how to adjust parameters such as token limits, temperature, and sampling values. You can generate sample code in Python, JavaScript, and Bash to help you integrate the model more easily into your applications. Figure 1. Visual QA demo in NVIDIA API Catalog You can also demo tool calling with a set of prebuilt agents. Figure 2 shows a tool that retrieves live weather data. Figure 2. Tool-calling demo in NVIDIA API Catalog Phi-4-mini Phi-4-mini is a text-only, dense, decoder-only Transformer model with 3.8B parameters that is optimized for chat. It includes a long context window of 128K tokens. This model was trained on 1024 NVIDIA A100 80GB GPUs over 14 days. For both models, the training data is intentionally focused on high-quality educational data and code, which gives the models a textbook-like quality. Text, speech, and vision benchmark data can be found in the model cards.
Advancing community models NVIDIA is an active contributor to the open-source ecosystem and has released several hundred projects under open-source licenses. NVIDIA is committed to optimizing community software and open models such as Phi which promotes AI transparency and lets users broadly share work in AI safety and resilience. Using the NVIDIA NeMo platform , these open models can be customized on proprietary data to be highly tuned and efficient for diverse AI workflows across any industry. NVIDIA and Microsoft have a long standing partnership which includes several collaborations driving innovation on GPUs on Azure, integrations and optimizations for PC developers using NVIDIA RTX GPUs, and many more, including research spanning generative AI to healthcare and life sciences. Get started today Bring your data and try out Phi-4 on the NVIDIA-accelerated platform at build.nvidia.com/microsoft . On the first multi-modal sandbox for Phi-4-multimodal, you can try out text, image, and audio as well as sample tool calling to see how this model will work for you in production.
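As a quick way to exercise the hosted endpoint described above, here is a minimal sketch that calls Phi-4-multimodal through the NVIDIA API Catalog's OpenAI-compatible API with a text-only prompt. The base URL follows the standard build.nvidia.com integration pattern; the exact model identifier shown is an assumption to confirm in the catalog listing, and NVIDIA_API_KEY is your own key.

import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",   # NVIDIA API Catalog endpoint
    api_key=os.environ["NVIDIA_API_KEY"],
)

completion = client.chat.completions.create(
    model="microsoft/phi-4-multimodal-instruct",      # assumed model id; check the catalog
    messages=[{"role": "user",
               "content": "Summarize the trade-offs between SLMs and LLMs in two sentences."}],
    temperature=0.2,
    max_tokens=256,
)
print(completion.choices[0].message.content)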
https://developer.nvidia.com/zh-cn/blog/latest-multimodal-addition-to-microsoft-phi-slms-trained-on-nvidia-gpus/
在 NVIDIA GPU 上训练的 Microsoft Phi SLM 的多模态最新进展
大语言模型(LLMs)已渗透到各行各业,并改变了技术潜力。但是,由于规模庞大,它们对于许多公司目前面临的资源限制来说并不切实际。 小语言模型 (SLMs)的兴起通过创建资源占用更小的模型,将质量和成本联系起来。SLMs 是语言模型的一个子集,这些模型倾向于专注于特定领域,并使用更简单的神经架构构建。随着模型的发展模仿人类感知周围环境的方式,模型必须接受多种形式的多模态数据。 Microsoft 宣布在 Phi 系列中 推出新一代开放式 SLM ,并新增两项功能: Phi-4-mini Phi-4-multimodal Phi-4-multimodal 是第一个加入该系列的多模态模型,接受文本、音频和图像数据输入。 这些模型足够小,可以在设备上部署。此版本基于 2024 年 12 月发布的 Phi-4 14B 参数 SLM 的研究版本构建而成,可用于两个新的较小模型的商业用途。 这些新模型可在 Microsoft 的云 AI 平台 Azure AI Foundry 上使用,用于设计、定制和管理 AI 应用和代理。 您可以通过 NVIDIA API Catalog 测试 Phi 系列的每个成员,这是第一个支持 Phi-4 多模态 的每种模式和工具调用的沙盒环境。立即使用预览 NIM 微服务将模型集成到您的应用中。 为何投资 SLM? SLMs 可在内存和计算受限环境中实现生成式 AI 功能。例如,SLMs 可以直接部署在智能手机和多台消费级设备上。对于必须遵守监管要求的用例,设备端部署可以促进隐私和合规性。 SLM 的其他优势包括降低延迟,因为与质量相似的 LLM 相比,其本身的推理速度更快。SLM 在处理与其训练数据相关的专业任务时往往表现得更好。但是,为了补充对不同任务的泛化和适应性,您可以使用检索增强生成(RAG)或原生函数调用来构建高性能代理系统。 Phi-4-multimodal Phi-4-multimodal 具有 5.6B 个参数,接受音频、图像和文本推理。这使其能够支持自动语音识别 (ASR)、多模态摘要、翻译、OCR 和视觉推理等用例。该模型在 512 个 NVIDIA A100-80GB GPUs 上进行了为期 21 天的训练。 事实证明,该模型在 ASR 方面表现出色,因为它在 Huggingface OpenASR 排行榜上排名第一 ,单词错误率为 6.14%。 词错误率 (WER) 是量化语音识别性能的常用计算方法。WER 计算不正确转录的单词 (替换、插入和删除) 与正确文本相比所占的百分比。 图 1 展示了如何在 NVIDIA API Catalog 中预览图像数据并询问 Phi-4 多模态视觉问答。您还可以了解如何调整参数,例如令牌限制、温度和采样值。您可以使用 Python、JavaScript 和 Bash 生成示例代码,以帮助您更轻松地将模型集成到应用中。 图 1、NVIDIA API Catalog 中的可视化问答演示 您还可以使用一组预构建代理演示工具调用。图 2 显示了用于检索实时天气数据的工具。 图 2、NVIDIA API Catalog 中的工具调用演示 Phi-4-mini Phi-4-mini 是一个仅文本、密集、仅解码器的 Transformer 模型,具有 3.8B 个参数,并针对聊天进行了优化。它包含一个包含 128K 个令牌的长形式上下文窗口。该模型在 1024 个 NVIDIA A100 80GB GPUs 上进行了为期 14 天的训练。 对于这两个模型,训练数据有意地集中在高质量的教育数据和代码上,从而使模型获得类似于教科书的质量。您可以在模型卡中找到文本、语音和视觉基准测试数据。 推进社区模式 NVIDIA 是开源生态系统的积极贡献者,已根据开源许可发布了数百个项目。NVIDIA 致力于优化社区软件和 open-source licenses 中的项目,如 Phi,它促进了 AI 透明度,并让用户广泛分享在 AI 安全性和弹性方面的工作。 借助 NVIDIA NeMo 平台,这些开放模型可以根据专有数据进行定制,以便针对各行各业的各种 AI 工作流进行高度调整并提高效率。 NVIDIA 和 Microsoft 有着长期的合作伙伴关系,其中包括推动 Azure 上 GPU 创新的多项合作、为使用 NVIDIA RTX GPU 的 PC 开发者提供的集成和优化,等等,包括从生成式 AI 到医疗健康和生命科学的研究。 立即开始使用 请访问 build.nvidia.com/microsoft ,带上您的数据并在 NVIDIA 加速平台上试用 Phi-4。 在 Phi-4 多模态的第一个多模态沙盒中,您可以尝试使用文本、图像、音频以及示例工具调用,以了解此模型在生产环境中的工作原理。
https://developer.nvidia.com/blog/building-a-simple-vlm-based-multimodal-information-retrieval-system-with-nvidia-nim/
Building a Simple VLM-Based Multimodal Information Retrieval System with NVIDIA NIM
In today’s data-driven world, the ability to retrieve accurate information from even modest amounts of data is vital for developers seeking streamlined, effective solutions for quick deployments, prototyping, or experimentation. One of the key challenges in information retrieval is managing the diverse modalities in unstructured datasets, including text, PDFs, images, tables, audio, video, and so on. Multimodal AI models address this challenge by simultaneously processing multiple data modalities, generating cohesive and comprehensive output in different forms. NVIDIA NIM microservices simplify the secure and reliable deployment of AI foundation models for language, computer vision , speech, biology, and more. NIM microservices can be deployed on NVIDIA-accelerated infrastructure anywhere and expose industry-standard APIs for fast integration with applications and popular AI development frameworks, including LangChain and LlamaIndex. This post helps you get started with building a vision language model (VLM) based, multimodal, information retrieval system capable of answering complex queries involving text, images, and tables. We walk you through deploying an application using LangGraph, the state-of-the-art llama-3.2-90b-vision-instruct VLM, the optimized mistral-small-24B-instruct large language model (LLM), and NVIDIA NIM for deployment. This method of building simple information retrieval systems offers several advantages over traditional ones. The latest VLM NIM microservice enables enhanced contextual understanding by processing lengthy, complex visual documents without sacrificing coherence. The integration of LangChain’s tool calling enables the system to create tools, dynamically select and use external tools, and improve the precision of data extraction and interpretation from various sources. This system is good for enterprise applications because it generates structured outputs, ensuring consistency and reliability in responses. For more information about the implementation steps of this system, see the /NVIDIA/GenerativeAIExamples GitHub repo. A simple HTML multimodal retrieval pipeline The system consists of the following pipelines: Document ingestion and preprocessing: Runs a VLM on the images and translates them into text. Question-answering: Enables the user to ask questions of the system. Both pipelines integrate NVIDIA NIM and LangGraph to process and understand text, images, complex visualizations, and tables effectively. Data ingestion and preprocessing pipeline This stage parses documents to process text, images, and tables separately. Tables are first converted into images, and images are processed by the NVIDIA-hosted NIM microservice API endpoint for the llama-3.2-90b-vision-instruct VLM to generate descriptive text. Next, in the document reconstruction step, the descriptive text is merged with the original text of the document, then summarized by an LLM with long context modeling capability. In this implementation, llama-3.2-90b-vision-instruct is also used as the LLM, although other LLMs such as mistral-small-24b-instruct can also be deployed. Finally, the complete text, summaries, images, and their descriptions are stored in a NoSQL database, along with unique document identifiers. Figure 1. 
Data ingestion and preprocessing pipeline LLMs with long context modeling can process entire documents without fragmentation, enhancing comprehension of the document in a single pass, and capturing relationships and nuances across longer spans of text, leading to more accurate information retrieval. In contrast, traditional models may handle inputs of up to a few thousand tokens, requiring lengthy documents to be split into smaller chunks to fit within the model’s context window. This chunking process can disrupt coherence and context, making it more difficult to accurately retrieve and rank relevant information. However, long context modeling presents challenges related to scalability and cost, which must be considered when trading off with higher accuracy. QA pipeline All document summaries and their identifiers are compiled into a large prompt. When a query is sent, a LLM with long context modeling (mistral-small-24b-instruct in this case) processes the question, evaluates the relevance of each summary to the query, and returns the identifiers of the most relevant documents. Figure 2. Question-answering pipeline Next, the most relevant documents are fed into an LLM with long context (mistral-small-24b-instruct). The model generates an answer to the query based on the textual content. If the model identifies that an image may contain pertinent information based on its descriptive text, an additional step is triggered: the original image and the user’s question are sent to the VLM (llama-3.2-90b-vision-instruct), which can provide an answer based on the actual visual content. Finally, the system combines both textual and visual insights to deliver a comprehensive answer. Structured outputs ensure that the data returned by the model conforms to a predefined format, making it easier to extract specific information and perform subsequent operations. In contrast, unstructured or variable outputs can introduce ambiguities and difficulties in parsing the model’s responses, hindering automation and integration with other systems. Generating structured data from models typically requires carefully designed prompts to guide the model into responding in a particular format, such as JSON. However, ensuring consistent adherence to this structure can be challenging due to the models’ natural tendency to generate free-form text. NVIDIA NIM now natively supports capabilities for generating structured outputs . This means that you can rely on built-in functionalities to ensure that the model’s responses are consistently formatted, reducing the need for complex prompt engineering. Integrating NVIDIA NIM with LangGraph NVIDIA NIM offers seamless compatibility with popular frameworks and the latest AI models for your applications. The implementation of the pipeline integrates NVIDIA NIM with LangGraph , a framework to build agentic applications to determine the control flow, which has been widely adopted by the developer community. To orchestrate the workflow of this pipeline, the graph mainly consists of two nodes: Assistant node: Serves as an agent responsible for managing the logic and decision-making process. It interacts with the user’s inputs and invokes the necessary tools. Tools node: A collection of tools that perform specific tasks required by the assistant. Figure 3. Use LangGraph to build an agent for the pipeline Assistant node The assistant node is a primary agent that operates according to the workflow outlined in Figure 3. 
The code of the main agent can be found in the /NVIDIA/GenerativeAIExamples GitHub repo. Here are the agent inputs: Collection_name : The set of documents on which to search. Question : The user’s question. document_id : (Optional) If provided, the agent skips the document ranking phase. This is the agent process: Document selection : If document_id is not provided, the agent invokes the find_best_document_id tool, which identifies the most relevant document for the user’s question within the specified collection. Question answering : With document_id , the agent uses the query_document tool. This tool attempts to answer the question using the LLM (mistral-small-24b-instruct) based on the text and image descriptions within the document. Image analysis (if necessary): If the query_document tool indicates that the answer might be in an image (by returning an image_hash value), the agent invokes the query_image tool. This tool retrieves the actual image and uses a VLM to analyze the image and find the answer. Tools node We implemented three key tools for the agent to perform its tasks. Find_best_document_id : Identify the most relevant document for the user’s question when document_id is not provided. For more information, see the /NVIDIA/GenerativeAIExamples GitHub repo. query_document : Search for an answer within the specified document. If the answer may be in an image, it provides details to query the image. For more information, see the /NVIDIA/GenerativeAIExamples GitHub repo. query_image : Analyze the actual image using a VLM when the answer might be within the image content. For more information, see the /NVIDIA/GenerativeAIExamples . Binding external tools with models Tool calling is a feature that enables language models to integrate and interact with external tools or functions based on the prompts that they receive. This mechanism enables a model to decide which tools to use and how to use them to accomplish specific tasks. Tool binding empowers models to extend their capabilities dynamically, selecting appropriate tools during execution to provide more accurate, context-aware responses. Binding external tools is particularly crucial in agentic frameworks, where agents must choose the appropriate tools and provide the necessary arguments to perform tasks effectively. The benefits of binding external tools include the following: Extended capabilities : Models can perform complex operations such as calculations, data retrieval, or API calls, which go beyond mere text generation. Dynamic tool selection : The model can assess in real time which tools are most suitable for the task, improving efficiency and relevance. Seamless integration : NVIDIA NIM supports the integration of external tools, such as LangChain and LangGraph, with open community models such as Llama 3.3. You can adopt these advanced features without making significant changes to your existing systems. In this implementation, use LangChain’s @tool decorator to create three tools, then use the .bind_tools method to bind the tools with models. Defining structured outputs with Pydantic By defining the output schema with Pydantic and guiding an LLM NIM microservice such as mistral-small-24b-instruct through precise prompts, you ensure that the responses are consistent, reliable, and easily consumable by other components within the system. This approach is essential when integrating the LLM into automated workflows and agent-based frameworks such as LangGraph. 
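Before the structured-output walkthrough that follows, here is a minimal sketch of the tool creation and binding step described above. The tool bodies are placeholders for the real implementations in the /NVIDIA/GenerativeAIExamples repo, and the ChatNVIDIA import path shown (langchain_nvidia_ai_endpoints) may differ depending on your LangChain setup.

from langchain_core.tools import tool
from langchain_nvidia_ai_endpoints import ChatNVIDIA  # import path depends on your LangChain version

@tool
def find_best_document_id(collection_name: str, question: str) -> str:
    """Return the identifier of the document most relevant to the question."""
    return "doc-42"  # placeholder body

@tool
def query_document(document_id: str, question: str) -> str:
    """Answer the question from the document's text and image descriptions."""
    return "answer or image_hash"  # placeholder body

@tool
def query_image(image_hash: str, question: str) -> str:
    """Answer the question from the actual image by calling a VLM."""
    return "answer from the image"  # placeholder body

llm = ChatNVIDIA(model="mistralai/mistral-small-24b-instruct", temperature=0)
llm_with_tools = llm.bind_tools([find_best_document_id, query_document, query_image])

# The bound model can now emit tool calls for a LangGraph tools node to execute.
response = llm_with_tools.invoke("Which document covers H100 inference throughput?")
print(response.tool_calls)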
Define the structure The process begins by defining the structure of the output that you expect from the LLM using Pydantic. This guarantees that the data returned by the model is consistent and can be easily parsed for downstream processing . from typing import List, Optional from pydantic import BaseModel, Field class Document(BaseModel): """ Represents a document with an identifier and its summary. """ id: str = Field(..., description="Hash identifier of the document") summary: str = Field(..., description="The summary of the document as is") class BestDocuments(BaseModel): """ Contains a list of the best documents to answer the question and their summaries. """ documents: List[Document] = Field(..., description="List of best documents") class Answer(BaseModel): """ Represents the answer to the user's question. """ answer: str = Field(..., description="Answer to the question posed by the user") Next, instruct the LLM to generate outputs that align with the defined Pydantic structures. This is achieved by incorporating specific instructions within the prompt and using LangChain’s with_structured_output method. Define the prompt The prompt_document_expert contains detailed instructions for the LLM, specifying the expected input format (Markdown with document summaries) and the required output format (JSON matching the BestDocuments schema). from langchain.chat_models import ChatNVIDIA from langchain.prompts import ChatPromptTemplate # Initialize the LLM with desired parameters llm = ChatNVIDIA(model="mistralai/mistral-small-24b-instruct ", temperature=0, max_tokens=3000) # Define the prompt template for the document expert prompt_document_expert = ChatPromptTemplate.from_messages( [ ( "system", f""" # Extract Best Document Identifier from list of summaries, based on a question coming from the user. You are an expert in getting insights of a document, based on its summaries and you are able to figure the best matches to the question in terms of the summary of the document. Provide no more than 3 of these documents. ## Format of the Input - The input is a markdown file containing second level headers (##) with the chapter index in the form ## Document <document_id> where document_id is an integer pointing to the index of the document. After the document heading there is the summary of the document which is relevant to understand the content of the document. ## Format of the output - The output is going to be the list of the best documents indices and a few of the corresponding summaries that help to answer the question coming from the user. ## Content - Here is the input you can work on: {{documents_context}} """, ), ( "human", "Can you tell me what are the most relevant document ids for this question: {question}" ), ("human", "Tip: Make sure to answer in the correct format"), ] ) Prepare context The get_context function prepares the input data by retrieving document summaries and formatting them appropriately. def get_context(input_data: dict) -> dict: collection_name = input_data.get("collection_name") question = input_data.get("question") documents_context = get_document_summaries_markdown(collection_name) # print(context) return {"documents_context": documents_context, "collection_name": collection_name, "question": question} Bind the structured output The llm.with_structured_output(BestDocuments) method instructs the LLM to produce output conforming to the BestDocuments Pydantic model. 
This method internally handles the parsing and validation of the LLM’s response, ensuring that the output matches the expected structure. LangChain’s with_structured_output method simplifies the process of binding the model to produce structured outputs. It abstracts the complexity of parsing and validating the LLM’s responses, enabling you to focus on defining the desired output structure and the prompt instructions. Finally, create a chain to process the input and generate the structured output: chain_document_expert = ( RunnableLambda(get_context) | prompt_document_expert | llm.with_structured_output(BestDocuments) | (lambda x: x.dict()) ) End-to-end tool in action To get started with the multimodal retrieval system, clone the /NVIDIA/GenerativeAIExamples GitHub repo and follow the Quick Start guide to set up the service. When it’s up and running, open your web browser and navigate to http://localhost:7860 to access the system through the Gradio user interface. For example, explore how the system processes queries on the NVIDIA Technical Blog. Ask a question about a bar chart showing the NVIDIA H100 GPU performance from one of the posts. The Select Question field is for evaluation purposes, with the Ground Truth Answer field value provided by a human. Figure 4. Agent multi-document evaluation This system generates an accurate answer based on the bar chart and also displays the relevant image for reference, such as the chart showing RetinaNet achieving 54%. This ensures precise answers while enabling users to visually verify the referenced data. Figure 5. Agent result with source graph for verification Video 1. How to Insert HTML Documents into a Multimodal Retriever Collection Using NVIDIA NIM Video 2. How to Search Text and Images Within a Multimodal Retriever Collection Using NVIDIA NIM Challenges and solutions As data volumes increase, so does the complexity of processing and retrieving relevant information. Handling large datasets efficiently is essential to maintaining performance and ensuring user satisfaction. In this information retrieval system, the sheer amount of document summaries can exceed the context window of even long-context models, making it challenging to process all summaries in a single prompt. Processing large volumes of data also demands considerable computational resources, which can result in higher costs and increased latency. Optimizing resource utilization is crucial to delivering fast and accurate responses while minimizing unnecessary expenses. Hierarchical document reranking solution To address scalability challenges, we implemented a hierarchical approach in the initial document reranking phase. Instead of processing all document summaries simultaneously, we divided them into manageable batches that fit within the model’s context window. The process involves multiple stages: Batch processing : Summaries are grouped into batches that the model can handle without exceeding the prompt size limitations. Intermediate reranking : The model evaluates each batch separately, ranking the documents within each group. Selection of top candidates : The most relevant documents from each batch are selected to proceed to the next stage. Final reranking : The top candidates from all batches are combined and re-evaluated to identify the most relevant document. Considering both scalability and cost concerns, this hierarchical approach ensures that all documents are considered without exceeding the model’s capacity. 
It not only improves scalability, but also boosts efficiency by narrowing down the candidate documents systematically until the most relevant one is identified. Future prospects with smaller models Using language models, especially those with long-context capabilities, involves processing a large number of tokens, which can incur significant costs. Each token processed adds to the overall expense, making cost management a critical consideration when deploying these systems at scale. The concern about cost is indeed valid. However, the landscape of language models is rapidly evolving, with smaller models becoming increasingly capable and efficient. As these advancements continue, these smaller models may offer similar performance at a fraction of the cost. Conclusion This post discussed the implementation of a simple multimodal information retrieval pipeline that uses NVIDIA NIM and LangGraph. The pipeline offers several advantages over existing information retrieval methods: Enhanced comprehension of documents A multimodal model to extract information from images, tables, and text Seamless integration of external tools Generation of consistent and structured output Using NVIDIA NIM and LangGraph, you can build on this work and customize it to suit specific needs. To get started, you can find source code in the /NVIDIA/GenerativeAIExamples GitHub repo. NVIDIA NIM also offers access to more models optimized for NVIDIA GPUs. You can explore NVIDIA NeMo , a scalable generative AI framework designed for researchers and PyTorch developers working on LLMs, multimodal models, and more. If you are working with a large corpora of enterprise data and are looking to develop enterprise-ready, real-time multilingual and cross-lingual information retrieval systems to generate context-aware responses, learn more about NVIDIA NeMo Retriever .
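To make the hierarchical reranking stage discussed above concrete, here is a minimal sketch of the batch-then-merge loop. It reuses the prompt_document_expert, llm, and BestDocuments names from this post; batch_summaries, to_markdown, and the batch size are hypothetical simplifications of the repo's actual helpers.

def batch_summaries(summaries, batch_size=50):
    """Split document summaries into groups small enough for the model's context window."""
    return [summaries[i:i + batch_size] for i in range(0, len(summaries), batch_size)]

def rank_batch(batch, question):
    """One reranking pass: return up to three candidate documents for this batch."""
    chain = prompt_document_expert | llm.with_structured_output(BestDocuments)
    result = chain.invoke({"documents_context": to_markdown(batch), "question": question})
    return result.documents

def hierarchical_rerank(summaries, question):
    # Stage 1: rank each batch independently and collect its top candidates.
    finalists = [doc for batch in batch_summaries(summaries)
                 for doc in rank_batch(batch, question)]
    # Stage 2: a final pass over the combined finalists picks the most relevant documents.
    return rank_batch(finalists, question)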
https://developer.nvidia.com/zh-cn/blog/building-a-simple-vlm-based-multimodal-information-retrieval-system-with-nvidia-nim/
使用 NVIDIA NIM 构建基于 VLM 的简单多模态信息检索系统
在当今数据驱动的世界中,即使是从少量数据中检索准确信息的能力,对于寻求精简、有效的快速部署、原型设计或实验解决方案的开发者来说也至关重要。信息检索领域的主要挑战之一是管理非结构化数据集中的各种模式,包括文本、PDF、图像、表格、音频、视频等。 多模态 AI 模型通过同时处理多个数据模式来应对这一挑战,以不同的形式生成连贯一致的全面输出。 NVIDIA NIM 微服务可简化 AI 基础模型 在语言、 计算机视觉 、语音、生物学等领域的安全可靠部署。 NIM 微服务可随时随地部署在 NVIDIA 加速基础设施上,并提供行业标准 API,以快速集成应用和热门 AI 开发框架 (包括 LangChain 和 LlamaIndex)。 本文将帮助您着手构建基于 视觉语言模型 (VLM)的多模态信息检索系统,该系统能够回答涉及文本、图像和表格的复杂查询。我们将引导您使用 LangGraph 部署应用程序、先进的 llama-3.2-90b-vision-instruct VLM、经过优化的 mistral-small-24B-instruct 大语言模型(LLM),以及用于部署的 NVIDIA NIM。 与传统方法相比,这种构建简单信息检索系统的方法具有许多优势。最新的 VLM NIM 微服务可在不牺牲一致性的情况下处理冗长而复杂的视觉文档,从而增强上下文理解。通过集成 LangChain 的工具调用 ,系统能够创建工具,动态选择和使用外部工具,并提高从各种来源提取和解释数据的精度。 此系统适用于企业应用,因为它生成结构化输出,确保响应的一致性和可靠性。有关此系统的实施步骤的更多信息,请参阅 /NVIDIA/GenerativeAIExamples Github 仓库。 简单的 HTML 多模态检索工作流 该系统由以下管道组成: 文档提取和预处理:在图像上运行 VLM 并将其转换为文本。 问答:允许用户提出系统问题。 这两个工作流均集成了 NVIDIA NIM 和 LangGraph,可有效处理和理解文本、图像、复杂的可视化效果和表格。 数据提取和预处理 pipeline 此阶段会解析文档,分别处理文本、图像和表格。首先将表格转换为图像,然后由 NVIDIA 托管的 NIM 微服务 API 端点为 llama-3.2-90b-vision-instruct VLM 处理图像,以生成描述性文本。 接下来,在文档重建步骤中,描述性文本将与文档的原始文本合并,然后由具有长上下文建模功能的 LLM 进行汇总。在此实施中,还可将 Llama-3.2-90b-vision-instruct 用作 LLM,不过也可部署其他 LLM(例如 mistral-small-24b-instruct)。 最后,完整的文本、摘要、图像及其说明将存储在 NoSQL 数据库中,以及唯一的文档标识符。 图 1. 数据提取和预处理管道 采用长上下文建模的 LLMs 可以处理整个文档,而不会出现碎片,从而在单个通道中增强对文档的理解,并捕获更长的文本跨度中的关系和细微差别,从而实现更准确的信息检索。 相比之下,传统模型可能会处理多达数千个 tokens 的输入,因此需要将冗长的文档拆分成较小的块,以适应模型的上下文窗口。这种分块过程会破坏一致性和上下文,使准确检索相关信息并对其进行排名变得更加困难。 但是,长上下文建模会带来与可扩展性和成本相关的挑战,在以更高的准确性进行权衡时必须考虑这些挑战。 QA 管道 所有文档摘要及其标识符都会编译成一个大型提示。发送查询时,使用长上下文建模(本例中为 mistral-small-24b-instruct)的 LLM 会处理问题,评估每个摘要与查询的相关性,并返回最相关文档的标识符。 图 2、问题回答管道 接下来,将最相关的文档输入到具有长上下文 (mistral-small-24b-instruct) 的 LLM 中。模型会根据文本内容生成查询答案。如果模型根据描述性文本识别出图像可能包含相关信息,则会触发另一个步骤:将原始图像和用户的问题发送至 VLM (llama-3.2-90b-vision-instruct),VLM 可以根据实际视觉内容提供答案。 最后,该系统将文本和视觉见解相结合,以提供全面的答案。 结构化输出可确保模型返回的数据符合预定义的格式,从而更轻松地提取特定信息并执行后续运算。相比之下,非结构化或可变输出会在解析模型的响应时引入模糊性和困难,从而阻碍自动化以及与其他系统的集成。 从模型生成结构化数据通常需要精心设计的提示,以指导模型以特定格式(例如 JSON)做出响应。但是,由于模型自然倾向于生成自由格式文本,因此确保一致性遵循此结构可能具有挑战性。 NVIDIA NIM 现在原生支持生成结构化输出的功能。这意味着,您可以依靠内置功能来确保模型的响应格式保持一致,从而减少对复杂提示工程的需求。 将 NVIDIA NIM 与 LangChain 集成 NVIDIA NIM 为您的应用提供与热门框架和最新 AI 模型的无缝兼容性。该流程的实施将 NVIDIA NIM 与 LangChain 相集成。LangChain 是一个用于构建代理应用以确定控制流的框架,已被开发者社区广泛采用。为编排此工作流的工作流,该图形主要由两个节点组成: 助理节点:充当负责管理逻辑和决策过程的代理。它与用户的输入进行交互,并调用必要的工具。 工具节点:用于执行助手所需特定任务的工具集合。 图 3、使用 LangGraph 为管道构建代理 助理节点 助手节点是根据图 3 中概述的工作流程运行的主代理。主代理的代码可在 /NVIDIA/GenerativeAIExamples GitHub repo 中找到。 智能体输入如下: Collection_name :要在其上搜索的文档集。 Question :用户的问题。 document_id :(可选) 如果提供,代理会跳过文档排名阶段。 这是智能体流程: 文档选择 :如果未提供 document_id ,代理会调用 find_best_document_id 工具,该工具可在指定集合中识别与用户问题最相关的文档。 问题回答:对于 document_id ,代理使用 query_document 工具。此工具会尝试使用 LLM (mistral-small-24b-instruct) 根据文档中的文本和图像描述来回答此问题。 图像分析 (如有必要):如果 query_document 工具表明答案可能在图像中 (通过返回 image_hash 值),代理会调用 query_image 工具。此工具会检索实际图像,并使用 VLM 分析图像并寻找答案。 工具节点 我们为智能体实施了三个关键工具来执行任务。 Find_best_document_id :在未提供 document_id 时,确定与用户问题最相关的文档。有关更多信息,请参阅 /NVIDIA/GenerativeAIExamples Github 存储库。 query_document :在指定文档中搜索答案。如果答案可能在图像中,则会提供查询图像所需的详细信息。有关更多信息,请参阅 /NVIDIA/GenerativeAIExamples GitHub 存储库。 query_image :当答案可能在图像内容中时,使用 VLM 分析实际图像。有关更多信息,请参阅/ NVIDIA/GenerativeAIExamples 。 将外部工具与模型绑定 工具调用是一项功能,可让语言模型根据收到的提示集成外部工具或函数并与之交互。此机制使模型能够决定使用哪些工具以及如何使用这些工具来完成特定任务。 工具绑定使模型能够动态扩展其功能,在执行期间选择合适的工具,以提供更准确的上下文感知响应。 绑定外部工具在代理框架中尤为重要,在这种框架中,代理必须选择合适的工具并提供有效执行任务所需的参数。绑定外部工具的优势包括: 扩展功能 :模型可以执行计算、数据检索或 API 调用等复杂操作,而不仅仅是文本生成。 动态工具选择 :模型可以实时评估哪些工具最适合任务,从而提高效率和相关性。 无缝集成:NVIDIA NIM 支持将 LangChain 和 LangGraph 等外部工具与 Llama 3.3 等开放式社区模型集成。您可以采用这些高级功能,而无需对现有系统进行重大更改。 在此实现中,使用 LangChain 的 @tool 装饰器创建三个工具,然后使用 .bind_tools 
方法将这些工具与模型绑定。 使用 PyTorch 定义结构化输出 通过使用 Pydantic 定义输出模式,并通过精确的提示引导 LLM NIM 微服务 (例如 mistral-small-24b-instruct) ,您可以确保响应一致、可靠,并且易于被系统中的其他组件使用。当将 LLM 集成到自动化工作流和基于代理的框架 (例如 LangChain) 时,这种方法至关重要。 定义结构 首先,使用 Pydantic 定义 LLM 的预期输出结构。这可确保模型返回的数据保持一致,并可轻松解析以进行下游处理。 from typing import List, Optional from pydantic import BaseModel, Field class Document(BaseModel): """ Represents a document with an identifier and its summary. """ id: str = Field(..., description="Hash identifier of the document") summary: str = Field(..., description="The summary of the document as is") class BestDocuments(BaseModel): """ Contains a list of the best documents to answer the question and their summaries. """ documents: List[Document] = Field(..., description="List of best documents") class Answer(BaseModel): """ Represents the answer to the user's question. """ answer: str = Field(..., description="Answer to the question posed by the user") 接下来,指示 LLM 生成与定义的 Pydantic 结构保持一致的输出。这是通过在提示符中加入特定指令并使用 LangChain 的 with_structured_output 方法实现的。 定义提示 prompt_document_expert 包含 LLM 的详细说明,可指定预期的输入格式 (带有文档摘要的 Markdown) 和所需的输出格式 (与 BestDocuments 架构匹配的 JSON)。 from langchain.chat_models import ChatNVIDIA from langchain.prompts import ChatPromptTemplate # Initialize the LLM with desired parameters llm = ChatNVIDIA(model="mistralai/mistral-small-24b-instruct ", temperature=0, max_tokens=3000) # Define the prompt template for the document expert prompt_document_expert = ChatPromptTemplate.from_messages( [ ( "system", f""" # Extract Best Document Identifier from list of summaries, based on a question coming from the user. You are an expert in getting insights of a document, based on its summaries and you are able to figure the best matches to the question in terms of the summary of the document. Provide no more than 3 of these documents. ## Format of the Input - The input is a markdown file containing second level headers (##) with the chapter index in the form ## Document <document_id> where document_id is an integer pointing to the index of the document. After the document heading there is the summary of the document which is relevant to understand the content of the document. ## Format of the output - The output is going to be the list of the best documents indices and a few of the corresponding summaries that help to answer the question coming from the user. 
## Content - Here is the input you can work on: {{documents_context}} """, ), ( "human", "Can you tell me what are the most relevant document ids for this question: {question}" ), ("human", "Tip: Make sure to answer in the correct format"), ] ) 准备上下文 get_context 函数通过检索文档摘要并对其进行适当格式化来准备输入数据。 def get_context(input_data: dict) -> dict: collection_name = input_data.get("collection_name") question = input_data.get("question") documents_context = get_document_summaries_markdown(collection_name) # print(context) return {"documents_context": documents_context, "collection_name": collection_name, "question": question} 绑定结构化输出 llm.with_structured_output(BestDocuments) 方法指示 LLM 生成符合 BestDocuments Pydantic 模型的输出。此方法在内部处理 LLM 响应的解析和验证,确保输出与预期结构相匹配。 LangChain 的 with_structured_output 方法简化了绑定模型以生成结构化输出的过程。它抽象化了解析和验证 LLM 响应的复杂性,使您能够专注于定义所需的输出结构和提示指令。 最后,创建一个链来处理输入并生成结构化输出: chain_document_expert = ( RunnableLambda(get_context) | prompt_document_expert | llm.with_structured_output(BestDocuments) | (lambda x: x.dict()) ) 端到端工具的实际应用 要开始使用多模态检索系统,请克隆 /NVIDIA/GenerativeAIExamples GitHub 存储库,然后按照快速入门指南设置服务。在服务启动并运行时,打开 Web 浏览器并导航至 http://localhost:7860 ,通过 Gradio 用户界面访问系统。 例如,在 NVIDIA 技术博客上探索系统如何处理查询。在其中一篇博文中,您可以询问有关显示 NVIDIA H100 GPU 性能的条形图的问题。“ Select Question ” 字段用于评估,真值答案字段值由人类提供。 图 4、Agent 多文档评估 该系统会根据条形图生成准确的答案,并显示相关图像以供参考,例如图表显示 RetinaNet 达到了 54%。这可确保准确的答案,同时使用户能够以直观方式验证引用数据。 图 5、Agent 结果与用于验证的源图形 视频1. 如何使用 NVIDIA NIM 将 HTML 文档插入多模态检索器集合 视频2. 如何使用 NVIDIA NIM 在多模态检索器集合中搜索文本和图像 挑战和解决方案 随着数据量的增加,处理和检索相关信息的复杂性也随之增加。高效处理大型数据集对于保持性能和确保用户满意度至关重要。在此信息检索系统中,文档摘要的数量甚至可能超过长上下文模型的上下文窗口,这使得在单个提示中处理所有摘要具有挑战性。 处理大量数据还需要大量计算资源,这可能会导致成本增加和延迟增加。优化资源利用率对于提供快速准确的响应,同时最大限度地减少不必要的支出至关重要。 分层文档重新排序解决方案 为应对可扩展性挑战,我们在初始文档重新排序阶段实施了分层方法。我们不会同时处理所有文档摘要,而是将其分为可管理的批量,以适应模型的上下文窗口。此过程涉及多个阶段: 批量处理 :将摘要分组为模型可以处理的批量,且不会超过提示大小限制。 中级重新排序 :模型分别评估每个批次,对每个组中的文档进行排序。 选择最优秀的候选文档 :从每个批次中选择最相关的文档,以进入下一阶段。 最终重新排名 :系统会对所有批次中排名靠前的候选文档进行合并和重新评估,以确定相关性最高的文档。 考虑到可扩展性和成本问题,这种分层方法可确保在不超出模型容量的情况下考虑所有文档。它不仅提高了可扩展性,而且还通过系统缩小候选文档的范围来提高效率,直到识别出最相关的文档。 小型模型的未来前景 使用语言模型,尤其是具有长上下文功能的语言模型,涉及处理大量 token,而这可能会产生巨大的成本。处理的每个 token 都会增加总支出,因此在大规模部署这些系统时,成本管理是一个重要考虑因素。 对成本的担心确实是站得住脚的。然而,语言模型的格局正在迅速演变,小型模型的功能和效率也在不断提升。随着这些进步的继续,这些较小的模型可能以远低于成本提供相似的性能。 结束语 本文讨论了如何使用 NVIDIA NIM 和 LangChain 实现简单的多模态信息检索工作流。与现有的信息检索方法相比,Pipeline 具有以下优势: 增强对文档的理解 用于从图像、表格和文本中提取信息的多模态模型 无缝集成外部工具 生成一致的结构化输出 借助 NVIDIA NIM 和 LangGraph,您可以在此基础上进行构建并对其进行定制,以满足特定需求。首先,您可以在 /NVIDIA/GenerativeAIExamples GitHub repo 中找到源代码。 NVIDIA NIM 还支持访问更多针对 NVIDIA GPU 优化的模型。您可以探索 NVIDIA NeMo ,这是一个可扩展的生成式 AI 框架,专为研究 LLM、多模态模型等的研究人员和 PyTorch 开发者而设计。 如果您正在处理大型企业数据语料库,并希望开发企业就绪的实时多语种和跨语言信息检索系统来生成上下文感知响应,请详细了解 NVIDIA NeMo Retriever 。
https://developer.nvidia.com/blog/optimizing-qwen2-5-coder-throughput-with-nvidia-tensorrt-llm-lookahead-decoding/
Optimizing Qwen2.5-Coder Throughput with NVIDIA TensorRT-LLM Lookahead Decoding
Large language models (LLMs) that specialize in coding have been steadily adopted into developer workflows. From pair programming to self-improving AI agents, these models assist developers with various tasks, including enhancing code, fixing bugs, generating tests, and writing documentation. To promote the development of open-source LLMs, the Qwen team recently released Qwen2.5-Coder, a family of advanced LLMs for code generation, reasoning, and fixing across popular programming languages. This post explores the benefits of inference optimizations for Qwen2.5-Coder models supported in NVIDIA TensorRT-LLM, and the ease of deployment with NVIDIA NIM, unlocking transformative potential and coding efficiency. Qwen2.5-Coder models The Qwen2.5-Coder models have achieved state-of-the-art performance across popular academic benchmarks. NVIDIA TensorRT-LLM has optimized three popular models from the Qwen2.5-Coder family—the 1.5B, 7B, and 32B versions—for high throughput and low latency. TensorRT-LLM is a library for fast, efficient LLM inference and includes optimizations such as dynamic inflight batching, KV caching, KV cache reuse, and several speculative decoding techniques, among others. These optimizations help deliver performance improvements for the Qwen2.5-Coder models on popular programming languages such as Python, C++, Java, Bash, JavaScript, TypeScript, and Go, reaching a wider range of developers. This post explores the lookahead decoding optimization and the performance boost it helps achieve. Without any additional training or the need for additional draft models, developers can use the TensorRT-LLM high-level API to speed up Qwen2.5-Coder inference for multiline code completion. Lookahead decoding Lookahead decoding is a speculative decoding technique that addresses the slow autoregressive nature of LLMs. Each autoregressive decoding step generates only one token at a time, not leveraging the massive parallel processing power of NVIDIA GPUs, leading to low GPU utilization and lower throughput. We've previously discussed the throughput boost achievable with draft target speculative decoding, and here we discuss the benefits of the TensorRT-LLM lookahead decoding implementation, using the Qwen2.5-Coder models as an example. Unlike the single-token generation in autoregressive decoding, lookahead decoding generates multiple tokens simultaneously, more fully utilizing the parallel processing capabilities of the GPU and trading computation (FLOPs) for latency reduction. Moreover, lookahead decoding doesn't require the separate draft model that draft target speculative decoding needs. Each decoding step is divided into two parallel branches, the lookahead branch and the verification branch. Using the Jacobi iteration method, a classic nonlinear systems solver, the lookahead branch performs parallel decoding for future tokens by generating n-grams. The verification branch selects and verifies the promising n-gram candidates generated by the lookahead branch. The lookahead algorithm is configured using three key parameters: window size (W), n-gram size (N), and verification set size (G). Window size (W): Represents the lookahead window size, which determines how many future tokens the algorithm attempts to predict in each step. A larger window size enables the model to look further ahead, helping generate more tokens in a single pass. This effectively improves throughput performance while utilizing GPU computation FLOPs efficiently.
N-gram size (N): Represents the size of the n-grams used in the lookahead process. For example, a 5-gram is a contiguous sequence of 5 future tokens. Together with the window size, it creates a fixed-sized, 2D window for the lookahead branch to generate n-grams from the Jacobi iteration trajectory. Verification set size (G): Represents the maximum number of speculations or candidate n-grams that the algorithm considers in each step for verification. It balances the trade-off between computation efficiency and exploring more possibilities. Figure 1. Lookahead decoding workflow with (W, N, G) = (5, 3, 2). Image credit: Break the Sequential Dependency of LLM Inference Using Lookahead Decoding Lookahead performance greatly depends on the base model, hardware, batch size, sequence length, and the dataset. It is recommended to profile various configurations to find the best (W, N, G) configuration given the setup. Optimal (W, N, G) tuple configuration enables lookahead decoding to deliver improved throughput performance without the need for any additional training, fine-tuning or draft models. Through our experiments on (W, N, G) configuration values sweep, we achieve 3.6x and 1.6x throughput speedups for Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models, respectively. These speedups are measured in throughput (tokens/second) compared to baseline (no lookahead speculative decoding) on NVIDIA H100 Tensor Core GPUs , as shown in Figure 2. Figure 2. Qwen2.5-Coder models throughput boost on NVIDIA DGX H100 with TensorRT-LLM lookahead decoding Data measured on 01/30/2025. Inference throughput (output tokens/second) speedups of Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models. DGX H100, TP=1 | (W, N, G) = (8, 8, 8) | Qwen2.5-Coder 7B Instruct, TP=2 | (W, N, G) = (15, 15, 15) | Qwen2.5-Coder-32B-Instruct, batch size=1, TensorRT-LLM version 0.15.0​. Similar throughput speedups are achieved on NVIDIA H200 Tensor Core GPUs . With their higher memory bandwidth, they also help raise the baseline throughput performance leading to slightly lower speedups as compared to H100 GPUs (Figure 3). Figure 3. Qwen2.5-Coder models throughput boost on NVIDIA DGX H200 with TensorRT-LLM lookahead decoding Data measured on 01/30/2025. Inference throughput (output tokens/second) speedups of Qwen2.5-Coder 7B Instruct and Qwen2.5-Coder 32B Instruct models. DGX H200, TP=1 | (W, N, G) = (8, 8, 8) | Qwen2.5-Coder 7B Instruct, TP=2 | (W, N, G) = (15, 15, 15) | Qwen2.5-Coder 32B Instruct, batch size=1, TensorRT-LLM version 0.15.0​. Steps to run lookahead decoding with TensorRT-LLM To reproduce these performance gains using lookahead speculative decoding within TensorRT-LLM, follow the steps below. # Install TensorRT-LLM. (Commands below are for Linux. Refer to TensorRT-LLM docs for Windows) sudo apt-get -y install libopenmpi-dev && pip3 install --upgrade setuptools && pip3 install tensorrt_llm --extra-index-url https://pypi.nvidia.com Then run lookahead decoding in TensorRT-LLM using the high-level API. # Command for Qwen2.5-Coder-7B-Instruct from tensorrt_llm import LLM, SamplingParams from tensorrt_llm.llmapi import (LLM, BuildConfig, KvCacheConfig, LookaheadDecodingConfig, SamplingParams) def main(): """The end user can customize the build configuration with the build_config class. # Max draft length is based on (W,N,G) values and calculated as: (W + G -1) * (N-1) + ( N<=1 ? 
0: N-2)""" build_config = BuildConfig(max_batch_size = 128, max_input_len = 2048, max_seq_len = 4096, max_num_tokens = 16384, max_draft_len = 111) build_config.plugin_config.reduce_fusion = True build_config.plugin_config.use_paged_context_fmha = True build_config.plugin_config.multiple_profiles = True # The configuration for lookahead decoding lookahead_config = LookaheadDecodingConfig(max_window_size=8, max_ngram_size=8, max_verification_set_size=8) kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.4) llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct", kv_cache_config=kv_cache_config, build_config=build_config, speculative_config=lookahead_config) prompt = """Write a C++ program to find the nth Fibonacci number using recursion. Now we define a sequence of numbers in which each number is the sum of the three preceding ones. The first three numbers are 0, -1, -1. Write a program to find the nth number.""" sampling_params = SamplingParams(lookahead_config=lookahead_config) output = llm.generate(prompt, sampling_params=sampling_params) print(output) if __name__ == '__main__': main() Summary Lookahead speculative decoding enables throughput boost on LLMs without any additional training, fine-tuning, or draft models. We presented benchmarked performance improvements on Qwen2.5-Coder models. Visit build.nvidia.com to try the Qwen2.5-Coder models optimized with NVIDIA TensorRT-LLM for free. Qwen2.5-Coder models optimized with TensorRT-LLM have also been packaged as downloadable NVIDIA NIM microservices for ease of deployment. Acknowledgments We would like to thank Liwei Ma, Fanrong Li, Nikita Korobov, and Martin Marciniszyn Mehringer  for their efforts in supporting this post.
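As a sanity check on the build configuration above, the max_draft_len value follows directly from the (W, N, G) formula quoted in the code comment; a quick calculation reproduces the 111 used for the (8, 8, 8) lookahead configuration.

def max_draft_len(w: int, n: int, g: int) -> int:
    # Formula from the sample's code comment: (W + G - 1) * (N - 1) + (0 if N <= 1 else N - 2)
    return (w + g - 1) * (n - 1) + (0 if n <= 1 else n - 2)

print(max_draft_len(8, 8, 8))     # 111, matching max_draft_len in the 7B example above
print(max_draft_len(15, 15, 15))  # 419, the budget a (15, 15, 15) configuration would need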
https://developer.nvidia.com/zh-cn/blog/optimizing-qwen2-5-coder-throughput-with-nvidia-tensorrt-llm-lookahead-decoding/
使用 NVIDIA TensorRT-LLM 前瞻性解码优化 Qwen2.5-Coder 吞吐量
专注于编码的 大语言模型(LLMs) 已稳步应用于开发者工作流程。从配对编程到自我改进的 AI 智能体 ,这些模型可帮助开发者完成各种任务,包括增强代码、修复错误、生成测试和编写文档。 为促进开源 LLM 的开发,Qwen 团队最近发布了 Qwen2.5-Coder,这是一系列先进的 LLM,用于跨热门编程语言的代码生成、推理和修复。本文将探讨针对 NVIDIA TensorRT-LLM 支持 的 Qwen2.5-Coder 模型进行推理优化的优势,以及借助 NVIDIA NIM 轻松部署以提升变革潜力和编码效率的好处。 Qwen2.5-Coder 模型 Qwen2.5-Coder 模型在热门的学术基准测试中取得了出色的性能。 NVIDIA TensorRT-LLM 已对 Qwen2.5-Coder 系列的三种热门模型 (1.5B、7B 和 32B 版本) 进行优化,以实现高吞吐量和低延迟。TensorRT-LLM 是一个用于快速、高效 LLM 推理的库,包含动态机上 批处理 、 KV 缓存 、 KV 缓存重复使用 和几种预测性解码技术等优化功能。 这些优化有助于提高 Qwen2.5-Coder 模型在 Python、C++、Java、Bash、Javascript、TypeScript 和 Go 等热门编程语言中的性能,从而使更多开发者受益。本文将探讨 lookahead decoding 优化的前瞻性及其有助于实现的性能提升。开发者无需进行任何额外训练,也无需额外的草图模型,即可利用 TensorRT-LLM 高级 API 加速 Qwen2.5-Coder 推理,以生成多行自动代码完成。 解码前景展望 解码前瞻是一种预测性解码技术,可解决 LLMs 缓慢自回归的问题。每个自回归解码步骤一次仅生成一个 token,无法利用 NVIDIA GPUs 强大的并行处理能力,导致 GPU 利用率低、吞吐量低。我们之前讨论过通过草稿目标预测解码可以实现的吞吐量提升,在这里,我们讨论了以 Qwen2.5-Coder 模型为例,利用 TensorRT-LLM lookahead decoding 实现的优势。 与自回归解码中的单令牌生成不同,前瞻性解码可同时生成多个令牌,充分利用 GPU 的并行处理能力,利用计算(FLOPs)降低延迟。此外,对于草稿目标预测性解码,前瞻性解码不需要使用单独的草稿模型。 每个解码步骤分为两个并行分支,即 lookahead 分支和验证分支。通过使用经典的非线性系统求解器 Jacobi 迭代法 ,lookahead 分支通过生成 n-grams 来对未来的 tokens 执行并行解码。验证分支选择并验证由 lookahead 分支生成的有前景的 n-gram 候选项。 前瞻性算法使用三个关键参数进行配置:窗口大小(W),n-gram 大小(N)和验证集大小(G)。 窗口大小 (W):表示前瞻性窗口大小,它决定了算法在每个步骤中尝试预测的未来令牌数量。窗口大小越大,模型的视野越广,一次传递就能生成更多 token。这可有效提高吞吐量性能,同时高效利用 GPU 计算 FLOPs。 N-gram size (N):表示前瞻性流程中使用的 N – gram 的大小。例如,5-gram 是由 5 个未来令牌组成的连续序列。它与窗口大小一起为前瞻性分支创建了一个大小固定的 2D 窗口,以便从 Jacobi 迭代轨迹生成 n-gram。 验证集大小 (G):表示算法在每个验证步骤中考虑的推测或候选 n-gram 的最大数量。它平衡了计算效率与探索更多可能性之间的权衡。 图 1、使用 (W,N,G) = (5,3,2) 展望解码工作流程。图片来源: Break the Sequential Dependency of LLM Inference Using Lookahead Decoding 未来的性能很大程度上取决于基础模型、硬件、批量大小、序列长度和数据集。建议分析各种配置,以找到给定设置的最佳 (W,N,G) 配置。最佳 (W,N,G) 元组配置支持 lookahead 解码前瞻性,无需任何其他训练、fine-tuning 或 draft 模型,即可提供更高的吞吐量性能。 通过对 (W,N,G) 配置值扫描的实验,我们分别为 Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型实现了 3.6 倍和 1.6 倍的吞吐量加速。这些加速是通过 NVIDIA H100 Tensor Core GPUs 上的吞吐量 (tokens/second) 与基线 (无 lookahead speculative decoding) 的比较进行测量的,如 Figure 2 所示。 图 2、借助 TensorRT-LLM 超前解码,Qwen2.5-Coder 模型可提升 NVIDIA DGX H100 上的吞吐量 数据测量日期:2025 年 1 月 30 日。Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型的推理吞吐量(输出令牌/秒)加速。DGX H100,TP=1 | (W,N,G)= (8,8,8)| Qwen2.5-Coder 7B Instruct,TP=2 | (W,N,G)= (15,15,15)| Qwen2.5-Coder-32B-Instruct,批量大小=1,TensorRT-LLM 版本 0.15.0。 NVIDIA H200 Tensor Core GPU 也实现了类似的吞吐量加速。凭借更高的显存带宽,它们还有助于提高基准吞吐量性能,从而使速度略低于 H100 GPU (图 3)。 图 3、Qwen2.5-Coder 模型在 NVIDIA DGX H200 上通过 TensorRT-LLM 超前解码实现吞吐量提升 数据测量日期:2025 年 1 月 30 日。Qwen2.5-Coder 7B Instruct 和 Qwen2.5-Coder 32B Instruct 模型的推理吞吐量(输出令牌/秒)加速。DGX H200,TP=1 | (W,N,G)= (8,8,8)| Qwen2.5-Coder 7B Instruct,TP=2 | (W,N,G)= (15,15,15)| Qwen2.5-Coder 32B Instruct,批量大小=1,TensorRT-LLM 版本 0.15.0。 使用 TensorRT-LLM 进行解码的前瞻性运行步骤 要在 TensorRT-LLM 中使用预测性解码重现这些性能提升,请执行以下步骤。 # Install TensorRT-LLM. (Commands below are for Linux. Refer to TensorRT-LLM docs for Windows) sudo apt-get -y install libopenmpi-dev && pip3 install --upgrade setuptools && pip3 install tensorrt_llm --extra-index-url https://pypi.nvidia.com 然后,使用高级 API 在 TensorRT-LLM 中运行 lookahead decoding。 # Command for Qwen2.5-Coder-7B-Instruct from tensorrt_llm import LLM, SamplingParams from tensorrt_llm.llmapi import (LLM, BuildConfig, KvCacheConfig, LookaheadDecodingConfig, SamplingParams) def main(): """The end user can customize the build configuration with the build_config class. # Max draft length is based on (W,N,G) values and calculated as: (W + G -1) * (N-1) + ( N<=1 ? 
0: N-2)""" build_config = BuildConfig(max_batch_size = 128, max_input_len = 2048, max_seq_len = 4096, max_num_tokens = 16384, max_draft_len = 111) build_config.plugin_config.reduce_fusion = True build_config.plugin_config.use_paged_context_fmha = True build_config.plugin_config.multiple_profiles = True # The configuration for lookahead decoding lookahead_config = LookaheadDecodingConfig(max_window_size=8, max_ngram_size=8, max_verification_set_size=8) kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.4) llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct", kv_cache_config=kv_cache_config, build_config=build_config, speculative_config=lookahead_config) prompt = """Write a C++ program to find the nth Fibonacci number using recursion. Now we define a sequence of numbers in which each number is the sum of the three preceding ones. The first three numbers are 0, -1, -1. Write a program to find the nth number.""" sampling_params = SamplingParams(lookahead_config=lookahead_config) output = llm.generate(prompt, sampling_params=sampling_params) print(output) if __name__ == '__main__': main() 总结 前瞻性预测解码可提高 LLMs 的吞吐量,而无需任何其他训练、微调或草稿模型。我们展示了 Qwen2.5-Coder 模型的基准性能改进。 访问 build.nvidia.com,免费试用通过 NVIDIA TensorRT-LLM 优化的 Qwen2.5-Coder 模型。 为便于部署, 我们还将通过 TensorRT-LLM 优化的 Qwen2.5-Coder 模型打包为可下载的 NVIDIA NIM 微服务。 致谢 在此, 我们要感谢马立伟、李凡融、Nikita Korobov 和 Martin Marciniszyn Mehringer 为支持这篇博文所付出的努力 。
https://developer.nvidia.com/blog/optimize-ai-inference-performance-with-nvidia-full-stack-solutions/
Optimize AI Inference Performance with NVIDIA Full-Stack Solutions
The explosion of AI-driven applications has placed unprecedented demands on both developers, who must balance delivering cutting-edge performance with managing operational complexity and cost, and AI infrastructure. NVIDIA is empowering developers with full-stack innovations—spanning chips, systems, and software—that redefine what’s possible in AI inference , making it faster, more efficient, and more scalable than ever before. Easily deploy high-throughput, low-latency inference Six years ago, NVIDIA set out to create an AI inference server specifically designed for developers building high-throughput, latency-critical production applications. At the time, many developers were grappling with custom, framework-specific servers that increased complexity, drove up operational costs, and struggled to meet stringent service-level agreements for latency and throughput. To address this, NVIDIA developed the NVIDIA Triton Inference Server , an open-source platform capable of serving models from any AI framework. By consolidating framework-specific inference servers, Triton streamlined AI inference deployment and increased AI prediction capacity. This approach has made Triton one of the most widely adopted NVIDIA open-source projects , now used by hundreds of leading organizations to deploy production AI models efficiently. In addition to Triton, NVIDIA offers a broad ecosystem of AI inference solutions. For developers seeking powerful, customizable tools, NVIDIA TensorRT provides a high-performance deep learning inference library with APIs that enable fine-grained optimizations. NVIDIA NIM microservices provide a flexible framework for deploying AI models across the cloud, data centers, or workstations. Optimizations for AI inference workloads Inference is a full-stack problem today, requiring high-performance infrastructure and efficient software to make effective use of that infrastructure. In addition, inference workloads continue to become more challenging, as model sizes continue to grow and latency constraints tighten, all while the number of users leveraging these AI services also continues to increase. And with the introduction of inference time scaling, a new paradigm for scaling model intelligence, more compute is being applied during inference to enhance model performance. These trends mean that it’s important to continue advancing delivered inference performance, even on the same underlying hardware platform. By combining established methods like model parallelism, mixed-precision training, pruning, quantization, and data preprocessing optimization with cutting-edge advancements in inference technologies, developers can achieve remarkable gains in speed, scalability, and cost-effectiveness. The TensorRT-LLM library incorporates many state-of-the-art features that accelerate inference performance for large language models (LLMs) , which are outlined below. Prefill and KV cache optimizations Key-value (KV) cache early reuse : By reusing system prompts across users, the KV Cache Early Reuse feature accelerates time-to-first-token (TTFT) by up to 5x. Flexible KV block sizing and efficient eviction protocols ensure seamless memory management, enabling faster response times even in multi-user environments. Chunked prefill : For smarter deployment, chunked prefill divides the prefill phase into smaller tasks, enhancing GPU utilization and reducing latency. This innovation simplifies deployment and ensures consistent performance, even with fluctuating user demands. 
Supercharging multiturn interactions : The NVIDIA GH200 Superchip architecture enables efficient KV cache offloading, improving TTFT by up to 2x in multiturn interactions with Llama models while maintaining high throughput. Decoding optimization Multiblock attention for long sequences : Addressing the challenge of long input sequences, TensorRT-LLM multiblock attention maximizes GPU utilization by distributing tasks across streaming multiprocessors (SMs). This technique improves system throughput by more than 3x, enabling support for larger context lengths without additional hardware costs. Speculative decoding for accelerated throughput : Leveraging a smaller draft model alongside a larger target model, speculative decoding enables up to a 3.6x improvement in inference throughput. This approach ensures high-speed, high-accuracy generation of model outputs, streamlining workflows for large-scale AI applications. Speculative decoding with Medusa: The Medusa speculative decoding algorithm is available as part of TensorRT-LLM optimizations. By predicting multiple subsequent tokens simultaneously, Medusa boosts throughput for Llama 3.1 models by up to 1.9x on the NVIDIA HGX H200 platform. This innovation enables faster responses for applications that rely on LLMs, such as customer support and content creation. Multi-GPU inference MultiShot communication protocol : Traditional Ring AllReduce operations can become a bottleneck in multi-GPU scenarios. TensorRT-LLM MultiShot, powered by NVSwitch , reduces communication steps to just two, irrespective of GPU count. This innovation boosts AllReduce speeds by up to 3x, making low-latency inference scalable and efficient. Pipeline parallelism for high-concurrency efficiency : Parallelism techniques require that GPUs be able to transfer data quickly and efficiently, necessitating a robust GPU-to-GPU interconnect fabric for maximum performance. Pipeline parallelism on NVIDIA H200 Tensor Core GPUs achieved a 1.5x throughput increase for Llama 3.1 405B and demonstrated their versatility with a 1.2x speedup for Llama 2 70B in MLPerf Inference benchmarks. MLPerf Inference is a suite of industry-standard inference performance benchmarks developed by the MLCommons consortium. Large NVLink domains: The NVIDIA GH200 NVL32 system, powered by 32 NVIDIA GH200 Grace Hopper Superchips connected using the NVLink Switch system, and with TensorRT-LLM improvements, delivers up to 3x faster TTFT for Llama models. With up to 127 petaflops of AI compute, this next-generation architecture sets the stage for unprecedented real-time responsiveness in AI applications. Quantization and lower-precision compute NVIDIA TensorRT Model Optimizer for precision and performance: The NVIDIA custom FP8 quantization recipe in the NVIDIA TensorRT Model Optimizer delivers up to 1.44x higher throughput without sacrificing accuracy. These optimizations enable more cost-effective deployment by reducing latency and hardware requirements for demanding workloads. End-to-end full-stack optimization: NVIDIA TensorRT libraries and FP8 Tensor Core innovations ensure high performance across a wide range of devices, from data center GPUs to edge systems. NVIDIA has optimized the Llama 3.2 collection of models for great performance, demonstrating how full-stack software can adaptively unlock efficiency across diverse AI deployment environments. 
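As one example of putting these optimizations to work, the sketch below shows how KV cache block reuse might be enabled through the TensorRT-LLM LLM API so that a shared system prompt is prefilled once and reused across requests. The LLM, SamplingParams, and KvCacheConfig names appear in the TensorRT-LLM example elsewhere in this document; the enable_block_reuse flag and the model choice are assumptions to verify against your installed TensorRT-LLM version.

from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.llmapi import KvCacheConfig

kv_cache_config = KvCacheConfig(
    enable_block_reuse=True,        # assumed flag: reuse KV blocks for shared prefixes
    free_gpu_memory_fraction=0.5,   # leave headroom for activations
)

llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct",  # example checkpoint; any supported model works
          kv_cache_config=kv_cache_config)

system_prompt = "You are a concise assistant for a developer support desk.\n"
outputs = llm.generate(
    [system_prompt + "How do I profile a CUDA kernel?",
     system_prompt + "What does a KV cache store?"],
    SamplingParams(max_tokens=64),
)
for out in outputs:
    print(out.outputs[0].text)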
With these features, as well as many others within Triton and TensorRT-LLM, developers can now deploy LLMs that are not only faster and more efficient but also capable of handling a wider range of tasks and user demands. This opens new opportunities for businesses to enhance customer service, automate complex processes, and gain deeper insights from their data. Evaluating inference performance Delivering world-class inference performance takes a full technology stack—chips, systems, and software—all contributing to boosting throughput, reducing energy consumption per token, and minimizing costs. One key measure of inference performance is MLPerf Inference. The benchmark measures inference throughput under standardized conditions, with results subject to extensive peer review. The benchmark is regularly updated to reflect new advances in AI, ensuring that organizations can rely on these results to evaluate platform performance. In the latest round of MLPerf Inference, NVIDIA Blackwell made its debut , delivering up to 4x more performance than the NVIDIA H100 Tensor Core GPU on the Llama 2 70B benchmark. This achievement was the result of the many architectural innovations at the heart of the Blackwell GPU, including the second-generation Transformer Engine with FP4 Tensor Cores and ultrafast HBM3e GPU memory that delivers 8 TB/s of memory bandwidth per GPU. In addition, many aspects of the NVIDIA software stack, including NVIDIA TensorRT-LLM, were re-engineered to make use of new capabilities in Blackwell, such as support for FP4 precision, while continuing to meet the rigorous accuracy target of the benchmark. The NVIDIA H200 Tensor Core GPU, available now from server makers and cloud service providers, also achieved outstanding results on every benchmark in the data center category. This includes the newly added Mixtral 8x7B mixture-of-experts (MoE) LLM, as well as the Llama 2 70B LLM and Stable Diffusion XL text-to-image tests. As a result of continued software improvements, the Hopper architecture delivered up to 27% more inference performance compared to the prior round. NVIDIA Triton Inference Server, running on a system with eight H200 GPUs, achieved virtually identical performance compared to the NVIDIA bare-metal submission on the Llama 2 70B benchmark in MLPerf Inference v4.1. This shows that enterprises no longer need to choose between a feature-rich, production-grade AI inference server and peak throughput performance—both can be achieved simultaneously with NVIDIA Triton. The future of AI inference: Emerging trends and technologies The landscape of AI inference is rapidly evolving, driven by a series of groundbreaking advancements and emerging technologies. Models continue to get smarter, as increases in compute at data center scale enable pretraining larger models. The introduction of sparse mixture-of-experts model architectures, such as GPT-MoE 1.8T, will also help boost model intelligence while improving compute efficiency. These larger models, whether dense or sparse, will require that GPUs individually become much more capable. The NVIDIA Blackwell architecture is set to fuel next-generation generative AI inference. Each Blackwell GPU features a second-generation Transformer Engine and fifth-generation Tensor Cores utilizing FP4. Lower-precision data formats help to increase computational throughput and reduce memory requirements.
To ensure they can deliver significant performance benefits while maintaining high accuracy, an incredible amount of software craftsmanship is needed. At the same time, to serve the most demanding models at brisk, real-time rates, many of the most capable GPUs will need to work in concert to generate responses. The NVIDIA GB200 NVL72 rack-scale solution creates a 72-GPU NVLink domain that acts as a single massive GPU. For GPT-MoE 1.8T real-time inference, it provides up to a 30x improvement in throughput compared to the prior generation Hopper GPU. In addition, the emergence of a new scaling law—test-time compute—is providing yet another way to improve response quality and accuracy for even more complex tasks. This new paradigm, first introduced with the OpenAI o1 model, enables models to “reason” by generating many intermediate tokens before outputting the final result. Reasoning models are particularly helpful in domains such as complex mathematics and generating computer code. This new paradigm is set to fuel a new wave of breakthroughs requiring more computational performance during inference time. The path to artificial general intelligence will rely on continued breakthroughs in data center compute performance. Pretraining, post-training, and test-time scaling all depend on state-of-the-art infrastructure running expertly crafted software. The NVIDIA platform is evolving rapidly, with a brisk one-year innovation rhythm, to enable the ecosystem to continue pushing the frontiers of AI. Get started Check out How to Get Started with AI Inference , learn more about the NVIDIA AI Inference platform, and stay informed about the latest AI inference performance updates. Watch a demo on how to quickly deploy NVIDIA NIM microservices or read A Simple Guide to Deploying Generative AI with NVIDIA NIM . Optimizations from TensorRT, TensorRT-LLM, and TensorRT Model Optimizer libraries are combined and available through production-ready deployments using NVIDIA NIM microservices.
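As a closing illustration of the test-time compute idea described above, the toy sketch below spends extra inference-time compute by sampling several candidate completions and keeping the one a verifier scores highest. The generate and score functions are hypothetical stand-ins for a sampled LLM and a reward or verifier model; only the control flow is meaningful here.

import random
from typing import Callable, List

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 8) -> str:
    """Spend more inference-time compute to improve answer quality.

    Samples n candidate completions and returns the one the verifier
    scores highest, a minimal form of test-time scaling.
    """
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))

# Hypothetical stand-ins for a sampled LLM and a verifier/reward model.
def toy_generate(prompt: str) -> str:
    return f"{prompt} -> answer {random.randint(0, 9)}"

def toy_score(prompt: str, completion: str) -> float:
    return -abs(int(completion.split()[-1]) - 7)   # prefers answers close to 7

random.seed(0)
print(best_of_n("2+5", toy_generate, toy_score, n=8))

Reasoning models push this much further by generating long chains of intermediate tokens before answering, which is exactly why test-time scaling increases the compute demanded per query.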
https://developer.nvidia.com/zh-cn/blog/optimize-ai-inference-performance-with-nvidia-full-stack-solutions/
借助 NVIDIA 全栈解决方案提升 AI 推理性能
AI 驱动的应用的爆炸式发展对开发者提出了前所未有的要求,他们必须在提供先进的性能与管理运营复杂性和成本以及 AI 基础设施之间取得平衡。 NVIDIA 正在为开发者提供涵盖芯片、系统和软件的全栈创新,重新定义 AI 推理 的可能性,使其比以往更快、更高效、更具可扩展性。 轻松部署高吞吐量、低延迟推理 六年前,NVIDIA 着手打造 AI 推理服务器,专为构建高吞吐量、延迟关键型生产应用的开发者而设计。当时,许多开发者都在努力使用定制的、特定于框架的服务器,这些服务器增加了复杂性,增加了运营成本,并且难以满足严格的服务水平协议(service-level agreements)关于延迟和吞吐量的要求。 为解决这一问题,NVIDIA 开发了 NVIDIA Triton Inference Server ,这是一个开源平台,能够为来自任何 AI 框架的模型提供服务。通过整合特定于框架的推理服务器,Triton 简化了 AI 推理部署,并提高了 AI 预测能力。这种方法使 Triton 成为广泛采用的 NVIDIA 开源项目之一,现已被数百家领先的组织用于高效部署生产级 AI 模型。 除 Triton 外,NVIDIA 还提供广泛的 AI 推理解决方案生态系统。对于寻求功能强大的可定制工具的开发者, NVIDIA TensorRT 提供了一个高性能深度学习推理库,其 API 可实现细粒度优化。 NVIDIA NIM 微服务提供了一个灵活的框架,用于在云端、数据中心或工作站中部署 AI 模型。 针对 AI 推理工作负载进行优化 推理是当今的全栈问题,需要高性能基础架构和高效软件来有效利用该基础架构。此外,随着模型大小不断增长和延迟限制日益严格,推理工作负载的挑战性也越来越高,同时利用这些 AI 服务的用户数量也在不断增加。随着推理时间扩展(一种扩展模型智能的新范式)的引入,推理过程中应用了更多的计算来增强模型性能。 这些趋势意味着,即使在相同的底层硬件平台上,继续提高交付的推理性能也很重要。通过将模型并行、混合精度训练、剪枝、量化和数据预处理优化等成熟方法与推理技术的前沿进步相结合,开发者可以在速度、可扩展性和成本效益方面实现显著提升。 TensorRT-LLM 库包含许多先进功能,可加速 大语言模型(LLMs) 的推理性能,如下所述。 预填充和 KV 缓存优化 键值 (KV) 缓存提早复用 :通过在不同用户中重复使用系统提示,KV 缓存提早复用功能可将首个令牌 (TTFT) 的时间缩短高达 5 倍。灵活的 KV 块大小和高效的驱逐协议可确保无缝管理内存,即使在多用户环境中也能缩短响应时间。 分块预填充 :为实现更智能的部署,分块预填充可将预填充阶段划分为较小的任务,从而提高 GPU 利用率并降低延迟。这项创新可简化部署,并确保一致的性能,即使在用户需求波动的情况下也是如此。 强效助力多圈交互 :NVIDIA GH200 超级芯片架构可实现高效的 KV 缓存卸载,在与 Llama 模型进行多圈交互时,将 TTFT 性能提升高达 2 倍,同时保持高吞吐量。 解码优化 长序列的 多块注意力 :TensorRT-LLM 多块注意力通过在流多处理器 (SM) 中分配任务,更大限度地提高 GPU 利用率,从而解决长输入序列的挑战。此技术可将系统吞吐量提高 3 倍以上,从而在不增加硬件成本的情况下支持更大的上下文长度。 用于加速吞吐量的推理吞吐量:通过利用较小的草稿模型和较大的目标模型,推理吞吐量可将推理吞吐量提升高达 3.6 倍。这种方法可确保高速、高精度地生成模型输出,简化大规模 AI 应用的工作流。 使用 Medusa 进行推理解码 :Medusa 推理解码算法可作为 TensorRT-LLM 优化的一部分提供。通过同时预测多个后续令牌,Medusa 在 NVIDIA HGX H200 平台上将 Llama 3.1 模型的吞吐量提高了 1.9 倍。这项创新可加快客户支持和内容创建等依赖 LLM 的应用的响应速度。 多 GPU 推理 MultiShot 通信协议 :传统的 Ring AllReduce 操作可能会成为多 GPU 场景中的瓶颈。TensorRT-LLM MultiShot 由 NVSwitch 提供支持,无论 GPU 数量如何,都可以将通信步骤减少到两个。这项创新将 AllReduce 速度提升高达 3 倍,使低延迟推理具有可扩展性并十分高效。 实现高并发效率的工作流并行:并行技术要求 GPU 能够快速高效地传输数据,因此需要强大的 GPU 到 GPU 互连结构来实现出色性能。 NVIDIA H200 Tensor Core GPU 上的工作流并行将 Llama 3.1 405B 的吞吐量提高了 1.5 倍,并在 MLPerf Inference 基准测试中证明了其通用性,将 Llama 2 70B 的速度提高了 1.2 倍。MLPerf Inference 是一套行业标准推理性能基准测试,由 MLCommons 联盟开发。 大型 NVLink 域 :NVIDIA GH200 NVL32 系统由通过 NVLink Switch 系统连接的 32 个 NVIDIA GH200 Grace Hopper 超级芯片提供支持,并进行了 TensorRT-LLM 改进,可为 Llama 模型提供高达 3 倍的 TTFT 速度。凭借高达 127 Petaflops 的 AI 计算能力,此新一代架构为 AI 应用实现出色的实时响应速度奠定了基础。 量化和低精度计算 用于提高精度和性能的 NVIDIA TensorRT 模型优化器 :NVIDIA TensorRT 模型优化器中的 NVIDIA 定制 FP8 量化方法可在不牺牲准确性的情况下将吞吐量提高 1.44 倍。这些优化可降低高要求工作负载的延迟和硬件需求,从而实现更具成本效益的部署。 端到端全栈优化 :NVIDIA TensorRT 库和 FP8 Tensor Core 创新技术可确保从数据中心 GPU 到边缘系统等各种设备实现高性能。NVIDIA 优化了 Llama 3.2 模型集合,以实现出色性能,展示了全栈软件如何在不同的 AI 部署环境中灵活释放效率。 借助这些功能以及 Triton 和 TensorRT-LLM 中的许多其他功能,开发者现在可以部署更快速、更高效的 LLM,并且能够处理更广泛的任务和用户需求。这为企业增强客户服务、实现复杂流程自动化以及从数据中获得更深入见解带来了新机遇。 评估推理性能 实现出色的推理性能需要完整的技术堆栈(芯片、系统和软件),所有这些都有助于提高吞吐量、降低每个令牌的能耗并更大限度地降低成本。 MLPerf Inference 是衡量推理性能的一个关键指标。该基准测试用于测量标准化条件下的推理吞吐量,并对结果进行广泛的同行评审。基准测试会定期更新,以反映 AI 领域的新进展,确保企业组织可以依靠这些结果来评估平台性能。 在最新一轮 MLPerf Inference 中, NVIDIA Blackwell 首次亮相 ,在 Llama 2 70B 基准测试中,其性能比 NVIDIA H100 Tensor Core GPU 高 4 倍。这一成就得益于 Blackwell GPU 核心的众多架构创新,包括采用 FP4 Tensor Cores 的第二代 Transformer Engine 和可为每个 GPU 提供 8 TB/s 的 HBM3e GPU 内存带宽。 此外,对 NVIDIA 软件堆栈的许多方面 (包括 NVIDIA TensorRT-LLM) 进行了重新设计,以利用 Blackwell 中的新功能 (例如对 FP4 精度的支持),同时继续满足基准测试的严格准确性目标。 服务器制造商和云服务提供商现已推出的 NVIDIA H200 Tensor Core GPU 在数据中心类别的每项基准测试中都取得了出色的成绩。其中包括新增的 Mixtral 8x7B 多专家模型 (MoE) LLM,以及 Llama 2 70B LLM 和 Stable Diffusion XL 文本转图像测试。得益于软件的持续改进,Hopper 架构可提供高达 27% 的推理性能。 与 MLPerf Inference v4.1 中 Llama 2 70B 基准测试中的 NVIDIA 裸机提交相比 ,在配备 8 个 H200 GPU 的系统上运行的 NVIDIA Triton Inference Server 
实现了几乎相同的性能。这表明企业不再需要在功能丰富的生产级 AI 推理服务器和峰值吞吐量性能之间做出选择,而 NVIDIA Triton 可以同时实现这两种性能。 AI 推理的未来:新兴趋势和技术 在一系列突破性进展和新兴技术的推动下,AI 推理的格局正在迅速发展。随着数据中心规模的计算能力增加,模型将继续变得更加智能。引入稀疏的多专家模型架构 (例如 GPT-MoE 1.8T) 也将有助于提高模型智能,同时提高计算效率。这些更大型的模型,无论是密集模型还是稀疏模型,都需要 GPU 单独变得更加强大。NVIDIA Blackwell 架构将为新一代生成式 AI 推理提供动力支持。 每个 Blackwell GPU 均配备第二代 Transformer Engine 和第五代 Tensor Cores,利用 FP4。低精度数据格式有助于提高计算吞吐量并降低内存需求。为了确保它们能够在保持高精度的同时提供显著的性能优势,我们需要大量的软件技术。 与此同时,为了以快速、实时的速率为要求严苛的模型提供服务,许多功能非常强大的 GPU 需要协同工作以生成响应。 NVIDIA GB200 NVL72 机架级解决方案创建了一个 72-GPU NVLink 域,可充当单个大型 GPU。对于 GPT-MoE 1.8T 实时推理,与上一代 Hopper GPU 相比,其吞吐量提高了 30 倍。 此外,新的扩展定律(测试时计算) 的出现为提高更复杂任务的响应质量和准确性提供了另一种方法。这种新范式首先在 OpenAI o1 模型中引入,使模型能够在输出最终结果之前通过生成许多中间令牌来“推理”。推理模型在复杂数学和生成计算机代码等领域尤为有用。这种新范式将起新一轮突破浪潮,需要在推理期间实现更高的计算性能。 通往人工通用智能的道路将依赖于数据中心计算性能的持续突破。预训练、后训练和测试时扩展都依赖于运行专家精心编写的软件的最先进的基础架构。NVIDIA 平台发展迅速,一年内创新节奏轻快,使生态系统能够继续推动人工智能的前沿发展。 开始使用 查看如何开始使用 AI 推理 ,了解更多关于 NVIDIA AI 推理平台 的信息,并随时了解 最新的 AI 推理性能更新 。 观看演示,了解如何快速部署 NVIDIA NIM 微服务,或阅读《使用 NVIDIA NIM 部署生成式 AI 的简单指南》。TensorRT、TensorRT-LLM 和 TensorRT Model Optimizer 库中的优化经过组合,可通过使用 NVIDIA NIM 微服务的生产就绪型部署获得。
https://developer.nvidia.com/blog/nvidia-tensorrt-llm-now-supports-recurrent-drafting-for-optimizing-llm-inference/
NVIDIA TensorRT-LLM Now Supports Recurrent Drafting for Optimizing LLM Inference
Recurrent drafting (referred to as ReDrafter) is a novel speculative decoding technique developed and open-sourced by Apple for large language model (LLM) inference, now available with NVIDIA TensorRT-LLM . ReDrafter helps developers significantly boost LLM workload performance on NVIDIA GPUs. NVIDIA TensorRT-LLM is a library for optimizing LLM inference. It provides an easy-to-use Python API to define LLMs and build NVIDIA TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. Optimizations include custom attention kernels, inflight batching, paged KV caching, quantization (FP8, INT4 AWQ, INT8 SmoothQuant), and much more. Speculative decoding is a technique that accelerates LLM inference by generating multiple tokens in parallel. It uses smaller “draft” modules to predict future tokens, which are then verified by the main model. This method maintains output quality while significantly reducing response times, especially during low-traffic periods, by better utilizing available resources for low-latency inference. ReDrafter employs recurrent neural network (RNN)-based sampling, referred to as drafting, combined with tree-style attention previously used in other techniques like Medusa, to predict and verify draft tokens from multiple possible paths for better accuracy and to potentially accept more than one token in each iteration of the decoder. NVIDIA collaborated with Apple to add support for this technique in TensorRT-LLM, making it accessible to the broader developer community. The integration of ReDrafter into TensorRT-LLM expanded its reach, unlocked new optimization potential, and improved on previous methods such as Medusa. For Medusa, path acceptance and token sampling happen in the TensorRT-LLM runtime, introducing some overhead inside the engine for processing all possible future paths without knowing the accepted path, most of which are ultimately discarded. To reduce such overhead, ReDrafter requires the token validation and acceptance of the best path before drafting future tokens for the next iteration. To minimize overhead further, TensorRT-LLM has been updated to incorporate the drafting and validation logic inside a single engine, rather than relying on the runtime or separate engines. This approach gives TensorRT-LLM kernel selection and scheduling more freedom to optimize the network for maximum performance. To better illustrate the ReDrafter improvements, Figure 1 highlights the key differences between its implementation and that of Medusa in TensorRT-LLM. Most of the components related to speculative decoding are handled in-engine for ReDrafter, which significantly simplifies the runtime changes needed for ReDrafter. Figure 1. Comparison of Medusa (left) and ReDrafter (right) implementations in NVIDIA TensorRT-LLM The following sections delve into some of the changes that help enable ReDrafter in TensorRT-LLM. Inflight-batching compatible engine Inflight-batching (IFB) is a strategy that significantly improves throughput by batching context-phase and generation-phase requests. Speculative decoding, coupled with IFB, introduces more complexity to the pipeline, as context-phase requests need to be handled differently than generation-phase requests, which require draft token validation. Since ReDrafter moves the validation logic inside the model definition, the engine needs that logic as well during validation.
Similar to the attention plugin, the batch is split into two smaller batches: one for context requests and another for generation requests. Each smaller batch then enters its computational workflow, and at the end they are combined back into a single batch for drafting. Figure 2. ReDrafter’s computational workflow for inflight-batching compatible TensorRT-LLM engine Note that this approach requires that all operators on either path support empty tensors, which could occur if a batch consists of all context requests or all generation requests. This capability adds flexibility to TensorRT-LLM APIs, enabling the definition of more complicated models in the future. Implementing in-engine validation and drafting To validate and draft inside the engine, TensorRT-LLM is updated with support for numerous new operations so that PyTorch code can be easily translated into a definition of the TensorRT-LLM model. The following PyTorch code excerpt is Apple’s PyTorch implementation of ReDrafter. The TensorRT-LLM implementation is almost a straightforward line-by-line mapping of the PyTorch version.

PyTorch

def unpack(
    packed_tensor: torch.Tensor,
    unpacker: torch.Tensor,
) -> torch.Tensor:
    assert len(packed_tensor.shape) == 3
    last_dim_size = packed_tensor.shape[2]
    batch_size, beam_width, beam_length = unpacker.shape
    unpacked_data_indices = unpacker.view(
        batch_size, beam_width * beam_length, 1).expand(
        -1, -1, last_dim_size
    )
    unpacked_tensor = torch.gather(
        packed_tensor, 1, unpacked_data_indices).reshape(
        batch_size, beam_width, beam_length, -1
    )
    return unpacked_tensor

TensorRT-LLM

def _unpack_beams(
    x: Tensor,
    indices: Tensor,
    num_beams: int,
    beam_length: int
) -> Tensor:
    assert x.rank() == 3
    d0 = shape(x, 0, INT_DTYPE_STR)
    dl = shape(x, -1, INT_DTYPE_STR)
    indices = view(
        indices, [-1, num_beams * beam_length, 1], False)
    res_shape = concat([d0, num_beams, beam_length, dl])
    res = view(gather_nd(x, indices), res_shape, False)
    return res

This, of course, is a very simple example. For a more complex example, see the beam search implementation. With the new functionalities added for ReDrafter, it might be possible to improve the Medusa implementation in TensorRT-LLM to further increase its performance. ReDrafter performance in TensorRT-LLM As benchmarked by Apple, ReDrafter with TensorRT-LLM can provide up to 2.7x throughput improvements on NVIDIA H100 GPUs with TP8 over the base LLM. Note that the performance improvement of any speculative decoding technique can be heavily impacted by many factors, including: GPU utilization: Speculative decoding is commonly used for low-traffic scenarios, where GPU resources are typically underutilized due to small batch sizes. Average acceptance rate: The latency of each decoding step is increased since speculative decoding must perform extra computation, a significant portion of which is ultimately wasted after validation. As a result, to see any performance benefits from speculative decoding, the average acceptance rate must be high enough to pay for that extra latency. This is affected by the number of beams, their lengths, and the quality of the beam search itself (which is impacted by the training data). Task: It is easier to predict future tokens for some tasks (code completion, for example), which leads to a higher acceptance rate, and thus improved performance.
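To make the acceptance-rate trade-off concrete, the following back-of-the-envelope model estimates the speedup of a draft-based scheme from the per-token acceptance probability, the number of draft tokens per step, and an assumed relative cost of drafting. It deliberately ignores beam interactions, batching effects, and kernel overlap, so it is a rough planning aid rather than a model of the actual ReDrafter scheduler.

def expected_accepted_tokens(accept_prob: float, draft_len: int) -> float:
    """Expected tokens committed per target step when each draft token is
    accepted independently with probability accept_prob and acceptance stops
    at the first rejection (the rejected position is replaced by the target
    model's own token, hence the +1)."""
    expected_draft = sum(accept_prob ** k for k in range(1, draft_len + 1))
    return expected_draft + 1.0

def estimated_speedup(accept_prob: float, draft_len: int,
                      draft_cost_ratio: float = 0.1) -> float:
    """Speedup versus plain autoregressive decoding.

    draft_cost_ratio is the cost of producing one draft token relative to one
    target-model step (an assumption; real values depend on the draft head)."""
    tokens_per_step = expected_accepted_tokens(accept_prob, draft_len)
    step_cost = 1.0 + draft_cost_ratio * draft_len   # one target pass plus drafting
    return tokens_per_step / step_cost

for p in (0.5, 0.7, 0.9):
    print(f"accept_prob={p:.1f}: "
          f"{estimated_speedup(p, draft_len=4):.2f}x with 4 draft tokens, "
          f"{estimated_speedup(p, draft_len=8):.2f}x with 8")

Plugging in different values shows the effect described above: with a low acceptance rate, longer drafts add cost without adding accepted tokens, while tasks with predictable continuations (such as code completion) justify more aggressive drafting.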
Summary This collaboration between NVIDIA and Apple has made TensorRT-LLM more powerful and more flexible, enabling the LLM community to innovate more sophisticated models and easily deploy them with TensorRT-LLM to achieve unparalleled performance on NVIDIA GPUs. These new features open exciting possibilities, and we eagerly anticipate the next generation of advanced models from the community that leverage TensorRT-LLM capabilities, driving further improvements in LLM workloads. Explore NVIDIA TensorRT-LLM to unlock the full potential of your models on NVIDIA GPUs.
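For readers curious about the tree-style attention mentioned earlier, the self-contained sketch below shows how several draft beams that share prefixes can be packed into a single token list with an ancestor-only attention mask, so that all candidate paths are verified in one forward pass. It is a conceptual illustration, not the TensorRT-LLM or ReDrafter kernel implementation.

from typing import Dict, List, Tuple

def build_tree_attention(beams: List[List[str]]) -> Tuple[List[str], List[List[int]]]:
    """Pack draft beams that share prefixes into a prefix tree.

    Returns the deduplicated token list (one entry per tree node) and a square
    mask where mask[i][j] == 1 iff node j is node i itself or one of its
    ancestors, that is, the only positions token i may attend to.
    """
    tokens: List[str] = []
    parents: List[int] = []
    node_of_prefix: Dict[Tuple[str, ...], int] = {}

    for beam in beams:
        parent = -1                      # -1 means "attached to the prompt"
        for depth in range(1, len(beam) + 1):
            prefix = tuple(beam[:depth])
            if prefix not in node_of_prefix:
                node_of_prefix[prefix] = len(tokens)
                tokens.append(beam[depth - 1])
                parents.append(parent)
            parent = node_of_prefix[prefix]

    n = len(tokens)
    mask = [[0] * n for _ in range(n)]
    for i in range(n):
        j = i
        while j != -1:                   # walk up to the root
            mask[i][j] = 1
            j = parents[j]
    return tokens, mask

# Three candidate continuations sharing the prefix "the".
beams = [["the", "cat", "sat"], ["the", "cat", "ran"], ["the", "dog"]]
tokens, mask = build_tree_attention(beams)
print(tokens)          # ['the', 'cat', 'sat', 'ran', 'dog'], 5 nodes instead of 8 tokens
for row in mask:
    print(row)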
https://developer.nvidia.com/zh-cn/blog/nvidia-tensorrt-llm-now-supports-recurrent-drafting-for-optimizing-llm-inference/
NVIDIA TensorRT-LLM 现支持 Recurrent Drafting,实现 LLM 推理优化
Recurrent Drafting (简称 ReDrafter) 是苹果公司为大语言模型 (LLM) 推理开发并开源的一种新型推测解码技术,该技术现在可与 NVIDIA TensorRT-LLM 一起使用。ReDrafter 帮助开发者大幅提升了 NVIDIA GPU 上的 LLM 工作负载性能。 NVIDIA TensorRT-LLM 是一个 LLM 推理优化库,提供了一个易于使用的 Python API 来定义 LLM 和构建 NVIDIA TensorRT 引擎,这些引擎具有顶尖的优化功能,可在 GPU 上高效执行推理。优化功能包括自定义 Attention Kernel、Inflight Batching、Paged KV Caching、量化技术 (FP8、INT4 AWQ、INT8 SmoothQuant) 等。 推测解码 (Speculative decoding) 是一种通过并行生成多个 token 来加速 LLM 推理的技术。它使用较小的“draft”模块预测未来的 token,然后由主模型进行验证。该方法通过更好地利用可用资源实现低延迟推理,在保持输出质量的同时大大缩短了响应时间,尤其是在低流量时段。 ReDrafter 运用基于循环神经网络 (RNN) 的采样 (称为 Drafting ) 并结合之前在 Medusa 等其他技术中使用的树状注意力,预测和验证来自多个可能路径的 draft token 以提高准确性,并在解码器的每次迭代中接受一个以上 token。NVIDIA 与苹果公司合作,在 TensorRT-LLM 中添加了对该技术的支持,使更加广泛的开发者社区能够使用该技术。 ReDrafter 与 TensorRT-LLM 的集成扩大了该技术的覆盖范围,解锁了新的优化潜力,并改进了 Medusa 等先前的方法。Medusa 的路径接受和 token 采样发生在 TensorRT-LLM 运行时,需要在接受路径未知的情况下处理所有可能的未来路径,而且其中大部分路径最终都会被丢弃,这就给引擎内部带来了一些开销。为了减少这种开销,ReDrafter 要求在 drafting 下一次迭代的未来 token 之前,先验证 token 并接受最佳路径。 为了进一步减少开销,TensorRT-LLM 更新后在单个引擎中整合了 drafting 和验证逻辑,不再依赖运行时或单独的引擎。这种方法为 TensorRT-LLM 内核选择和调度提供了更大的自由度,通过优化网络实现了性能的最大化。 为了更好地说明 ReDrafter 的改进,图 1 展示了 TensorRT-LLM 中 ReDrafter 实现与 Medusa 实现的主要区别。大多数与推测解码相关的组件都在 ReDrafter 的引擎内完成,这大大简化了 ReDrafter 所需的运行时更改。 图 1. NVIDIA TensorRT-LLM 中 Medusa(左)和 ReDrafter(右)实现的比较 下面将深入探讨有助于在 TensorRT-LLM 中启用 ReDrafter 的一些变化。 兼容 Inflight-batching 批处理的引擎 Inflight-batching (IFB) 是一种通过批量处理上下文阶段和生成阶段请求,来显著提高吞吐量的策略。鉴于上下文阶段请求与生成阶段请求的处理方式不同(生成阶段请求需要 draft token 验证),因此结合 IFB 的推测解码会给管线带来更大的复杂性。ReDrafter 将验证逻辑移至模型定义内部,因此引擎在验证过程中也需要该逻辑。与注意力插件类似,该批处理被分成两个较小的批处理:一个用于上下文请求,另一个用于生成请求。然后,每个较小的批处理进入计算工作流,最后再合并成一个批处理进行 drafting 流程。 图 2. ReDrafter 兼容 TensorRT-LLM 引擎的 Inflight-batching 批处理计算工作流 请注意,这种方法要求任一路径上的所有运算符都支持空张量。如果一个批处理由所有上下文请求或所有生成请求组成,就可能出现空张量。该功能增加了 TensorRT-LLM API 的灵活性,使未来定义更复杂的模型成为可能。 实现引擎内验证和 Drafting 为了在引擎内进行验证和 draft,TensorRT-LLM 更新时加入了对许多新操作的支持,这样 PyTorch 代码就可以轻松地转化成一个 TensorRT-LLM 模型的定义。 以下 PyTorch 代码摘录是苹果公司的 PyTorch 实现的 ReDrafter 。TensorRT-LLM 实现几乎就是 PyTorch 版本的直接逐行映射。 PyTorch def unpack( packed_tensor: torch.Tensor, unpacker: torch.Tensor, ) -> torch.Tensor: assert len(packed_tensor.shape) == 3 last_dim_size = packed_tensor.shape[2] batch_size, beam_width, beam_length = unpacker.shape unpacked_data_indices = unpacker.view( batch_size, beam_width * beam_length, 1).expand( -1, -1, last_dim_size ) unpacked_tensor = torch.gather( packed_tensor, 1, unpacked_data_indices).reshape( batch_size, beam_width, beam_length, -1 ) return unpacked_tensor TensorRT-LLM def _unpack_beams( x: Tensor, indices: Tensor, num_beams: int, beam_length: int ) -> Tensor: assert x.rank() == 3 d0 = shape(x, 0, INT_DTYPE_STR) dl = shape(x, -1, INT_DTYPE_STR) indices = view( indices, [-1, num_beams * beam_length, 1], False) res_shape = concat([d0, num_beams, beam_length, dl]) res = view(gather_nd(x, indices), res_shape, False) return res 当然,这只是一个非常简单的例子。如要了解更复杂的示例,请参见 束搜索实现 。借助为 ReDrafter 添加的新功能,就可以改进 TensorRT-LLM 中的 Medusa 实现,从而进一步提高其性能。 ReDrafter 在 TensorRT-LLM 中的性能 根据 苹果公司的基准测试 ,在采用 TP8(Tensor Parallelism with 8 GPUs,8 卡 GPU 张量并行) 的 NVIDIA GPU 上使用 TensorRT-LLM 的 ReDrafter 最多可将吞吐量提高至基础 LLM 的 2.7 倍。 请注意,任何推测解码技术的性能提升幅度都会受到诸多因素的大幅影响,包括: GPU 利用率: 推测解码通常用于低流量场景,由于批量较小,GPU 资源的利用率通常较低。 平均接受率: 由于推测解码必须执行额外的计算,而其中很大一部分计算最终会在验证后被浪费,因此每个解码步骤的延迟都会增加。所以要想通过推测解码获得任何性能上的优势,平均接受率必须高到足以弥补增加的延迟。这受到束数量、束长度和束搜索本身质量(受训练数据影响)的影响。 任务: 在某些任务(例如代码完成)中预测未来的 token 更容易,使得接受率更高,性能也会因此而提升。 总结 NVIDIA 与苹果公司的合作让 TensorRT-LLM 变得更加强大和灵活,使 LLM 社区能够创造出更加复杂的模型并通过 TensorRT-LLM 轻松部署,从而在 NVIDIA GPU 上实现无与伦比的性能。这些新特性带来了令人兴奋的可能性,我们热切期待着社区使用 TensorRT-LLM 功能开发出新一代先进模型,进一步改进 LLM 工作负载。 探索 NVIDIA 
TensorRT-LLM ,在 NVIDIA GPU 上充分释放模型潜能。
https://developer.nvidia.com/blog/boost-llama-3-3-70b-inference-throughput-3x-with-nvidia-tensorrt-llm-speculative-decoding/
Boost Llama 3.3 70B Inference Throughput 3x with NVIDIA TensorRT-LLM Speculative Decoding
Meta’s Llama collection of open large language models (LLMs) continues to grow with the recent addition of Llama 3.3 70B , a text-only instruction-tuned model. Llama 3.3 provides enhanced performance relative to the older Llama 3.1 70B model and can even match the capabilities of the larger, more computationally expensive Llama 3.1 405B model on several tasks including math, reasoning, coding, and multilingual support. NVIDIA TensorRT-LLM , a powerful inference engine that delivers state-of-the-art performance on the latest LLMs, incorporates many optimizations to deliver outstanding Llama 3.3 70B inference throughput. These include in-flight batching , KV caching , custom FP8 quantization , speculative decoding, and more for fast, cost-efficient LLM serving. With in-flight batching activated by default as a runtime configuration parameter, TensorRT-LLM supports batching multiple different requests at the same time for higher serving throughput. By interleaving requests in context and generation phases, in-flight batching reduces latency and improves GPU utilization by executing new requests while older requests are still in flight. Finished requests are evicted from the batch, making room for the next set of requests. Caching the key-value elements of previous tokens avoids the expensive recomputation of these tensors in the generation phase for the next set of tokens. The computational savings effectively lead to higher throughput. However, the KV cache grows linearly in size with the number of batched requests and sequence context lengths, leading to higher memory requirements. TensorRT-LLM KV caching addresses these challenges through several optimizations, including support for paged KV cache , quantized KV cache , circular buffer KV cache , and KV cache reuse . Each of these optimizations addresses the challenging balance between growing memory size and avoiding unnecessary and expensive recomputation. Speculative decoding is a popular technique for faster and cost-effective LLM inference with built-in verification for the quality of output generation. It’s based on the premise that generating multiple sequences of future (draft) tokens is more efficient than processing a single token at a time in autoregressive decoding, an inherently time-consuming process. The target model determines how many of these draft tokens to accept, which is far more efficient than having to generate one token per iteration. TensorRT-LLM supports a growing list of speculative decoding techniques including draft target , Medusa , Eagle , and lookahead decoding , among others. Figure 1. NVIDIA TensorRT optimizations for high-performance deep learning inference In this post, we show how the NVIDIA HGX H200 platform with NVLink and NVSwitch, as well as TensorRT-LLM, achieve great performance when running the latest Llama 3.3 70B model. We describe the step-by-step setup to get speculative decoding working for Llama 3.3 70B with TensorRT-LLM. For more information, including other optimizations, different models, and multi-GPU execution, see the full list of TensorRT-LLM examples . Achieving throughput speedups with draft target speculative decoding Table 1 and Figure 2 highlight the throughput (output tokens/second) speedups of draft models of various sizes versus no draft model (that is, no speculative decoding) with the Llama 3.3 70B target model.
Throughput Performance – Output Tokens/Second, One NVIDIA H200 Tensor Core GPU

Draft | Target Models                         Tokens/sec    Speedup (with versus without draft model)
Llama 3.2 1B | Llama 3.3 70B                  181.74        3.55x
Llama 3.2 3B | Llama 3.3 70B                  161.53        3.16x
Llama 3.1 8B | Llama 3.3 70B                  134.38        2.63x
Llama 3.3 70B (without draft model)           51.14         N/A

Table 1. Throughput performance using one NVIDIA H200 Tensor Core GPU with TensorRT-LLM internal measurements. Data measured on December 11, 2024. Output tokens/second is inclusive of time to generate the first token – tok/s = total generated tokens / total latency. DGX H200, TP1, FP8, batch size=1, TensorRT Model Optimizer version 0.21, TensorRT-LLM version 0.15.0.

Figure 2. Throughput speedups using speculative decoding with Llama 3.3 70B target model

We provide the steps to reproduce these performance gains using draft target speculative decoding within TensorRT-LLM.

# Download the following model checkpoints from Hugging Face and store them in a directory for easy access through the setup process.
git lfs install

# Download target models
git clone https://huggingface.co/meta-llama/Meta-Llama-3.3-70B-Instruct

# Download draft models
git clone https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct
git clone https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct
git clone https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct

After the model checkpoints have been downloaded, install TensorRT-LLM.

# Obtain and start the basic docker image environment (optional).
docker run --rm --ipc=host --runtime=nvidia --gpus all --entrypoint /bin/bash -it nvidia/cuda:12.5.1-devel-ubuntu22.04

# Install dependencies, TensorRT-LLM requires Python 3.10
apt-get update && apt-get -y install python3.10 python3-pip openmpi-bin libopenmpi-dev git git-lfs

# Fetch the library
git clone -b v0.15.0 https://github.com/NVIDIA/TensorRT-LLM.git
cd TensorRT-LLM

# Install the latest version (corresponding to the main branch) of TensorRT-LLM.
pip3 install tensorrt_llm -U --extra-index-url https://pypi.nvidia.com

# Check installation
python3 -c "import tensorrt_llm"

Next, compile the downloaded model checkpoints into draft and target TensorRT engines. These engines are optimized to run inference with best accuracy and highest throughput.

cd examples

# Steps to build target and draft models in FP8 precision on 1 H200
# Create FP8 checkpoints
python3 quantization/quantize.py --model_dir <path to draft model repo> --dtype float16 --qformat fp8 --kv_cache_dtype fp8 --output_dir /ckpt-draft --calib_size 512 --tp_size 1

python3 quantization/quantize.py \
    --model_dir=<path to target model repo> \
    --output_dir=./ckpt-target-70b \
    --dtype=float16 --qformat fp8 --kv_cache_dtype fp8 \
    --calib_size 512 --tp_size 1

# Build draft and target engines
# Important flags for the engine build process:
# --use_paged_context_fmha=enable must be specified since we need KVcache reuse for the draft/target model.
# --speculative_decoding_mode=draft_tokens_external and --max_draft_len must be specified for target model.
trtllm-build \
    --checkpoint_dir ./ckpt-draft \
    --output_dir=./draft-engine \
    --gpt_attention_plugin float16 \
    --workers 1 \
    --gemm_plugin=fp8 \
    --use_paged_context_fmha=enable \
    --multiple_profiles enable \
    --max_batch_size=32 \
    --max_seq_len=131072

trtllm-build \
    --checkpoint_dir=./ckpt-target-70b \
    --output_dir=./target-engine \
    --gpt_attention_plugin float16 \
    --workers 1 \
    --gemm_plugin=fp8 \
    --use_paged_context_fmha=enable \
    --multiple_profiles enable \
    --max_batch_size=32 \
    --max_seq_len=131072 \
    --low_latency_gemm_plugin fp8 \
    --speculative_decoding_mode=draft_tokens_external \
    --max_draft_len 10

Finally, run speculative decoding in TensorRT-LLM.

#Run decoding
# Important flags to set during the run process:
#--draft_engine_dir and --engine_dir must be specified for the draft and target engines.
#--draft_target_model_config is corresponding to the configuration of Draft-Target-Model. As an example, [4,[0],[1],False] means draft_len=4, device of draft model is GPU0, device of target model is GPU1, and use tokens rather than logits to accept.
# Only CPP session (using executor as low-level API) is supported, while Python session (--use_py_session) is not supported.
# Run with Llama 3.3 70B target model
mpirun -n 1 --allow-run-as-root python3 ./run.py \
    --tokenizer_dir <path to draft model repo> \
    --draft_engine_dir ./draft-engine \
    --engine_dir ./target-engine \
    --draft_target_model_config = "[10,[0,1,2,3,4,5,6,7],[0,1,2,3,4,5,6,7], False]" \
    --kv_cache_free_gpu_memory_fraction=0.35 \
    --max_output_len=1024 \
    --kv_cache_enable_block_reuse \
    --input_text="<|begin_of_text|><|start_header_id|>user<|end_header_id|>\nA 3-digit integer contains one of each of the digits 1,3 and 5. What is the probability that the integer is divisible by 5.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"

# Following is the LLM-generated output:
Output [Text 0 Beam 0]: "## Step 1: Determine the total number of 3-digit integers that can be formed using the digits 1, 3, and 5.
There are 3! = 6 ways to arrange the digits 1, 3, and 5 to form different 3-digit integers.

## Step 2: Identify the condition for a number to be divisible by 5.
A number is divisible by 5 if its last digit is either 0 or 5.

## Step 3: Determine the number of arrangements where 5 is the last digit.
Since the digit 5 must be the last digit for the number to be divisible by 5, we fix the last position with 5. The remaining two positions can be filled with the digits 1 and 3 in 2! = 2 ways.

## Step 4: Calculate the probability that the integer is divisible by 5.
The probability is the number of favorable outcomes (arrangements where 5 is the last digit) divided by the total number of possible outcomes (total arrangements of the digits 1, 3, and 5).

## Step 5: Calculate the probability.
Probability = (Number of favorable outcomes) / (Total number of outcomes) = 2 / 6 = 1/3.
The final answer is: $\boxed{\frac{1}{3}}$"

To benchmark throughput performance without speculative decoding, follow the steps below:

# Run throughput benchmark for the 70B model without the draft model
trtllm-build --checkpoint_dir ./ckpt-target-70b --output_dir /data/70B-TRT/ --gpt_attention_plugin float16 --workers 1 --max_batch_size 32 --max_seq_len 131072 --use_fused_mlp enable --reduce_fusion enable --use_paged_context_fmha enable --multiple_profiles enable --gemm_plugin fp8

python3 /app/tensorrt_llm/benchmarks/cpp/prepare_dataset.py --output token-norm-dist.json --tokenizer /llama-3_3-70b/ token-norm-dist --num-requests 1000 --input-mean 500 --input-stdev 0 --output-mean 200 --output-stdev 0 > /tmp/synthetic.txt

trtllm-bench --model <path to target model repo> latency --engine_dir /data/70b-TRT/ --dataset /tmp/synthetic.txt

Summary NVIDIA collaborates with Meta for the creation, optimization, and acceleration of the world’s leading open models. NVIDIA supports Llama as part of our commitment to grow open community AI models and software to enable users to customize and address their own unique workloads. NVIDIA is involved with several open-source projects through partnering with developers, maintainers, and foundations. NVIDIA TensorRT-LLM provides several features for optimizing and efficiently running LLMs of different model architectures. These optimizations lead to significant speedups on the same hardware, enable fewer resources to serve the same workload, reduce energy costs, and improve total cost of ownership. Available through production-ready deployments using NVIDIA NIM microservices, these TensorRT optimizations accelerate the deployment of your generative AI applications across NVIDIA-accelerated infrastructure anywhere, including cloud, data center, and workstations.
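To see what the draft-target scheme configured above actually does at each step, here is a minimal, model-agnostic sketch of greedy draft-target speculative decoding in plain Python. The draft_model and target_model functions are hypothetical stand-ins (they just continue an arithmetic sequence); the point is the propose-verify-accept control flow, not the models themselves.

from typing import Callable, List

Token = int

def speculative_decode(prompt: List[Token],
                       draft_next: Callable[[List[Token]], Token],
                       target_next: Callable[[List[Token]], Token],
                       draft_len: int = 4,
                       max_new_tokens: int = 16) -> List[Token]:
    """Greedy draft-target speculative decoding, reduced to its control flow.

    The draft model proposes draft_len tokens; the target model checks them
    one position at a time and keeps the longest matching prefix, then
    contributes one token of its own (the correction at the first mismatch,
    or a bonus token if everything matched).
    """
    seq = list(prompt)
    # Note: each iteration may add up to draft_len + 1 tokens, so the loop
    # can slightly overshoot max_new_tokens; a real runtime would truncate.
    while len(seq) - len(prompt) < max_new_tokens:
        # 1) Draft phase: the cheap model proposes a block of tokens.
        draft = []
        ctx = list(seq)
        for _ in range(draft_len):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # 2) Verify phase: the target model accepts the longest agreeing prefix.
        accepted = 0
        for i, t in enumerate(draft):
            if target_next(seq + draft[:i]) == t:
                accepted += 1
            else:
                break
        seq += draft[:accepted]
        seq.append(target_next(seq))      # correction or bonus token
    return seq

# Hypothetical stand-ins: the "models" just continue an arithmetic pattern,
# with the draft model making an occasional mistake.
def target_model(ctx: List[Token]) -> Token:
    return ctx[-1] + 1

def draft_model(ctx: List[Token]) -> Token:
    nxt = ctx[-1] + 1
    return nxt if nxt % 7 else nxt + 1    # wrong on multiples of 7

print(speculative_decode([0], draft_model, target_model))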
https://developer.nvidia.com/zh-cn/blog/boost-llama-3-3-70b-inference-throughput-3x-with-nvidia-tensorrt-llm-speculative-decoding/
借助 NVIDIA TensorRT-LLM 预测解码,将 Llama 3.3 的推理吞吐量提升 3 倍
随着近期新增的 Llama 3.3 70B (一种纯文本指令调整模型),Meta 的开放 大语言模型 (LLMs) 集合将继续增长。Llama 3.3 相对于较旧的 Llama 3.1 70B 模型提供了增强的性能,甚至可以在数学、推理、编码和多语种支持等多项任务中匹配更大、计算成本更高的 Llama 3.1 405B 模型的功能。 NVIDIA TensorRT-LLM 是一款功能强大的推理引擎,可在最新的 LLM 上提供先进的性能,并整合了许多优化措施,可提供出色的 Llama 3.3 70B 推理吞吐量。其中包括在飞批处理、 KV 缓存 、 自定义 FP8 量化 、推测解码等,可实现快速、经济高效的 LLM 服务。 动态批处理默认作为运行时配置参数激活,TensorRT-LLM 支持同时批处理多个不同的请求,从而提高服务吞吐量。通过在上下文和生成阶段交错处理请求,动态批处理可在旧请求仍在运行时执行新请求,从而减少延迟并提高 GPU 利用率。已完成的请求将从批处理中移除,从而为下一组请求腾出空间。 缓存先前令牌的键值元素的值可避免在为下一组令牌生成阶段对这些张量进行昂贵的重新计算。节省的计算量可有效提高吞吐量。但是,随着批量请求的数量和序列上下文长度的增加,KV 缓存的大小呈线性增长,从而导致内存需求增加。 TensorRT-LLM KV 缓存通过多种优化解决了这些挑战,包括对分页 KV 缓存、 量化 KV 缓存 、 循环缓冲区 KV 缓存 和 KV 缓存重复使用 的支持。每项优化都解决了在增大内存大小与避免不必要且昂贵的重新计算之间实现富有挑战性的平衡这一问题。预测解码是一种热门技术,可通过内置验证来生成输出的质量,从而实现更快且经济高效的 LLM 推理。其前提是,在自回归解码过程中,生成多个未来 (草稿) 令牌序列比处理单个令牌更高效。目标模型决定接受这些草稿令牌的数量,这比每次迭代生成一个令牌的效率要高得多。TensorRT-LLM 支持越来越多的预测性解码技术,包括 草稿目标 、 Medusa 、 Eagle 和前瞻性解码等。 图 1、适用于高性能深度学习推理的 NVIDIA TensorRT 优化 在本文中,我们将展示搭载 NVLink 和 NVSwitch 的 NVIDIA HGX H200 平台以及 TensorRT-LLM 如何在运行最新的 Llama 3.3 70B 模型时实现出色性能。我们介绍了使用 TensorRT-LLM 对 Llama 3.3 70B 进行解码预测的分步设置。有关其他优化、不同模型和多 GPU 执行的更多信息,请参阅 TensorRT-LLM 示例 的完整列表。 通过草稿目标预测解码实现吞吐量加速 表 1 和图 2 突出显示了无草稿模型 (即无预测解码) 与使用 Llama 3.3 70B 目标模型的各种规模的草稿模型之间的吞吐量 (输出令牌/秒) 加速。 吞吐量性能 – 输出 Tokens/秒 One NVIDIA H200 Tensor Core GPU 草稿|目标 模型 Llama 3.2 1B|Llama 3.3 70B Llama 3.2 3B|Llama 3.3 70B Llama 3.1 8B|Llama 3.3 70B Llama 3.3 70B (无草稿模型) 令牌/秒 191.74 151.53 134.38 51.14 加速 (有与无草稿模型对比) 3.55 倍 3.16 倍 2.63 倍 不适用 表 1、使用一个 NVIDIA H200 Tensor Core GPU 和 TensorRT-LLM 内部测量的吞吐量性能 数据测量于 2024 年 12 月 11 日。输出 tokens/second 包含生成第一个 token 的时间 – tok/s = 总生成 tokens / 总延迟。DGX H200、TP1、FP8,批量大小 = 1,TensorRT Model Optimizer 版本 0.21,TensorRT-LLM 版本 0.15.0。 图 2、使用 Llama 3.3 70B 目标模型的预测解码提高吞吐量 我们提供了在 TensorRT-LLM 中使用草稿目标预测解码重现这些性能提升的步骤。 # Download the following model checkpoints from Hugging Face and store them in a directory for easy access through the setup process. git lfs install # Download target models git clone https://huggingface.co/meta-llama/Meta-Llama-3.3-70B-Instruct # Download draft models git clone https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct git clone https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct git clone https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct 下载模型检查点后,安装 TensorRT-LLM。 # Obtain and start the basic docker image environment (optional). docker run --rm --ipc=host --runtime=nvidia --gpus all --entrypoint /bin/bash -it nvidia/cuda:12.5.1-devel-ubuntu22.04 # Install dependencies, TensorRT-LLM requires Python 3.10 apt-get update && apt-get -y install python3.10 python3-pip openmpi-bin libopenmpi-dev git git-lfs # Fetch the library git clone -b v0.15.0 https://github.com/NVIDIA/TensorRT-LLM.git cd TensorRT-LLM # Install the latest version (corresponding to the main branch) of TensorRT-LLM. 
pip3 install tensorrt_llm -U --extra-index-url https://pypi.nvidia.com # Check installation python3 -c "import tensorrt_llm" 接下来,将下载的模型检查点编译到草稿和目标 TensorRT 引擎中。这些引擎经过优化,能够以最佳的准确性和最高的吞吐量运行推理。 cd examples # Steps to build target and draft models in FP8 precision on 1 H200 # Create FP8 checkpoints python3 quantization/quantize.py --model_dir <path to draft model repo> --dtype float16 --qformat fp8 --kv_cache_dtype fp8 --output_dir /ckpt-draft --calib_size 512 --tp_size 1 python3 quantization/quantize.py \ --model_dir=<path to target model repo> \ --output_dir=./ckpt-target-70b \ --dtype=float16 --qformat fp8 --kv_cache_dtype fp8 \ --calib_size 512 --tp_size 1 # Build draft and target engines # Important flags for the engine build process: # --use_paged_context_fmha=enable must be specified since we need KVcache reuse for the draft/target model. # --speculative_decoding_mode=draft_tokens_external and --max_draft_len must be specified for target model. trtllm-build \ --checkpoint_dir ./ckpt-draft \ --output_dir=./draft-engine \ --gpt_attention_plugin float16 \ --workers 1 \ --gemm_plugin=fp8 \ --use_paged_context_fmha=enable \ --multiple_profiles enable \ --max_batch_size=32 \ --max_seq_len=131072 trtllm-build \ --checkpoint_dir=./ckpt-target-70b \ --output_dir=./target-engine \ --gpt_attention_plugin float16 \ --workers 1 \ --gemm_plugin=fp8 \ --use_paged_context_fmha=enable \ --multiple_profiles enable \ --max_batch_size=32 \ --max_seq_len=131072 \ --low_latency_gemm_plugin fp8 \ --speculative_decoding_mode=draft_tokens_external \ --max_draft_len 10 最后,在 TensorRT-LLM 中运行投机性解码。 #Run decoding # Important flags to set during the run process: #--draft_engine_dir and --engine_dir must be specified for the draft and target engines. #--draft_target_model_config is corresponding to the configuration of Draft-Target-Model. As an example, [4,[0],[1],False] means draft_len=4, device of draft model is GPU0, device of target model is GPU1, and use tokens rather than logits to accept. # Only CPP session (using executor as low-level API) is supported, while Python session (--use_py_session) is not supported. # Run with Llama 3.3 70B target model mpirun -n 1 --allow-run-as-root python3 ./run.py \ --tokenizer_dir <path to draft model repo> \ --draft_engine_dir ./draft-engine \ --engine_dir ./target-engine \ --draft_target_model_config = "[10,[0,1,2,3,4,5,6,7],[0,1,2,3,4,5,6,7], False]" \ --kv_cache_free_gpu_memory_fraction=0.35 \ --max_output_len=1024 \ --kv_cache_enable_block_reuse \ --input_text="<|begin_of_text|><|start_header_id|>user<|end_header_id|>\nA 3-digit integer contains one of each of the digits 1,3 and 5. What is the probability that the integer is divisible by 5.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n" # Following is the LLM-generated output: Output [Text 0 Beam 0]: "## Step 1: Determine the total number of 3-digit integers that can be formed using the digits 1, 3, and 5. There are 3! = 6 ways to arrange the digits 1, 3, and 5 to form different 3-digit integers. ## Step 2: Identify the condition for a number to be divisible by 5. A number is divisible by 5 if its last digit is either 0 or 5. ## Step 3: Determine the number of arrangements where 5 is the last digit. Since the digit 5 must be the last digit for the number to be divisible by 5, we fix the last position with 5. The remaining two positions can be filled with the digits 1 and 3 in 2! = 2 ways. ## Step 4: Calculate the probability that the integer is divisible by 5. 
The probability is the number of favorable outcomes (arrangements where 5 is the last digit) divided by the total number of possible outcomes (total arrangements of the digits 1, 3, and 5). ## Step 5: Calculate the probability. Probability = (Number of favorable outcomes) / (Total number of outcomes) = 2 / 6 = 1/3. The final answer is: $\boxed{\frac{1}{3}}$" 要在不使用 speculative decoding 的情况下对吞吐量性能进行基准测试,请执行以下步骤: # Run throughput benchmark for the 70B model without the draft model trtllm-build --checkpoint_dir ./ckpt-target-70b --output_dir /data/70B-TRT/ --gpt_attention_plugin float16 --workers 1 --max_batch_size 32 --max_seq_len 131072 --use_fused_mlp enable --reduce_fusion enable --use_paged_context_fmha enable --multiple_profiles enable --gemm_plugin fp8 python3 /app/tensorrt_llm/benchmarks/cpp/prepare_dataset.py --output token-norm-dist.json --tokenizer /llama-3_3-70b/ token-norm-dist --num-requests 1000 --input-mean 500 --input-stdev 0 --output-mean 200 --output-stdev 0 > /tmp/synthetic.txt trtllm-bench --model <path to target model repo> latency --engine_dir /data/70b-TRT/ --dataset /tmp/synthetic.txt 总结 NVIDIA 与 Meta 合作创建、优化和加速世界领先的开放模型。NVIDIA 支持 Llama,履行我们发展开放社区 AI 模型和软件的承诺,使用户能够自定义和处理自己独特的工作负载。NVIDIA 通过与开发者、维护人员和基金会合作参与了多个开源项目。 NVIDIA TensorRT-LLM 提供多种功能,用于优化和高效运行不同模型架构的 LLM。这些优化可显著加快相同硬件的运行速度,减少资源以处理相同工作负载,降低能源成本,并提高总拥有成本。这些 TensorRT 优化通过使用 NVIDIA NIM 微服务的生产就绪型部署提供,可随时随地在 NVIDIA 加速的基础设施 (包括云、数据中心和工作站) 中加速生成式 AI 应用的部署。
https://developer.nvidia.com/blog/spotlight-perplexity-ai-serves-400-million-search-queries-a-month-using-nvidia-inference-stack/
Spotlight: Perplexity AI Serves 400 Million Search Queries a Month Using NVIDIA Inference Stack
The demand for AI-enabled services continues to grow rapidly, placing increasing pressure on IT and infrastructure teams. These teams are tasked with provisioning the necessary hardware and software to meet that demand while simultaneously balancing cost efficiency with optimal user experience. This challenge was faced by the inference team at Perplexity AI, an AI-powered search engine that handles more than 435 million queries each month. Each query represents multiple AI inference requests. To meet this demand, the Perplexity inference team turned to NVIDIA H100 Tensor Core GPUs , NVIDIA Triton Inference Server , and NVIDIA TensorRT-LLM for cost-effective large language model (LLM) deployment. This post details some of the deployment best practices and TCO savings based on their hands-on experience. Serving multiple AI models simultaneously To support its extensive user base and serve a wide range of requests—spanning search, summarization, and question answering, among others—the inference team at Perplexity serves over 20 AI models simultaneously. This includes different variations of the popular open source Llama 3.1 models like 8B, 70B, and 405B. To match each user request with the appropriate model, the company relies on smaller classifier models that help determine user intent. User tasks detected by the classifiers, like text completion, are then routed to specific models deployed on GPU pods. Each pod consists of one or more NVIDIA H100 GPUs and is managed by an NVIDIA Triton Inference Server instance. The pods operate under strict service-level agreements (SLAs) for both cost efficiency and user interactivity. Figure 1. Perplexity AI deployed LLM production on a massive scale using NVIDIA HGX H100 four-GPU and eight-GPU systems featuring dynamic scalability, model parallelism, and load balancing capabilities To accommodate the large Perplexity user base and fluctuating traffic throughout the day, the pods are hosted within a Kubernetes cluster. They feature a front-end scheduler built in-house that routes traffic to the appropriate pod based on their load and usage, ensuring that the SLAs are consistently met. The scheduling algorithm used by the front-end scheduler can affect inter-token latency, particularly in improving the worst percentile of performance (Figure 2). The team at Perplexity constantly looks for new scheduler optimizations, including how to better account for sequence length variations across requests. Figure 2. Inter-token latency distribution for round-robin, least requests, and power of two random choices load balancing strategies during an internal Perplexity benchmark at a moderate request rate of 16 QPS Triton Inference Server is a critical component of Perplexity’s deployment architecture. It serves optimized models across various backends, batches incoming user requests, and provides GPU utilization metrics to the scheduler. This supports scaling up or down the number of deployments and GPUs based on the amount of the inference requests. For a detailed guide on how to deploy NVIDIA Triton with Kubernetes, see Scaling LLMs with NVIDIA Triton and NVIDIA TensorRT-LLM Using Kubernetes . Meeting strict service-level agreements To define the right SLAs for the company’s diverse use cases, Perplexity’s inference team conducts comprehensive A/B testing, evaluating different configurations and their impact on user experience. Their goal is to maximize GPU utilization while consistently meeting the target SLA for each specific use case. 
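One way to picture that goal is as a batch-size search under a latency budget: choose the largest batch size, and therefore the highest GPU utilization and throughput, whose measured latency still satisfies the SLA. The sketch below is a generic illustration with made-up latency numbers, not Perplexity's tuning methodology.

from typing import Dict, Optional

def largest_batch_within_sla(latency_ms_by_batch: Dict[int, float],
                             sla_ms: float) -> Optional[int]:
    """Return the largest batch size whose measured latency meets the SLA.

    Bigger batches improve GPU utilization and tokens/sec but push latency up,
    so the best operating point is the largest batch that still fits the budget.
    """
    feasible = [b for b, lat in latency_ms_by_batch.items() if lat <= sla_ms]
    return max(feasible) if feasible else None

# Hypothetical benchmark results (batch size -> p99 inter-token latency in ms).
measured = {1: 18.0, 2: 21.0, 4: 27.0, 8: 38.0, 16: 61.0}

for sla in (25.0, 40.0, 70.0):
    print(f"SLA {sla:.0f} ms -> batch size {largest_batch_within_sla(measured, sla)}")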
By improving batching while meeting target SLAs, inference serving cost is optimized. For smaller models, such as embedding models under 1 billion parameters used in real-time retrieval, the focus is on achieving the lowest possible latency. These are typically hidden from the user and are part of a broader workflow. As a result, configurations for these queries typically have low batch sizes. Given the smaller memory footprints of these models, the team runs multiple models concurrently on the NVIDIA H100 GPU to maintain high resource utilization. For user-facing models such as Llama 8B, 70B, and 405B, which have a greater impact on user experience and deployment costs, the team conducts a deeper performance analysis and evaluates key metrics such as time to first token, tokens per second per user, and cost per million queries. To optimize performance while controlling costs, Perplexity parallelizes their deployment of these models across multiple GPUs. Due to the strict SLAs, the team opted to increase tensor parallelism to four and eight GPUs, which they found yields lower serving costs for very latency-sensitive requests within a fixed GPU budget. Data or pipeline parallelism was useful for maximizing throughput in less latency-sensitive settings. The Perplexity team now uses TensorRT-LLM in combination with proprietary LLM runtimes built with optimized CUDA kernels to successfully serve the Llama-based models within their strict SLAs at minimal costs. Figure 3. Sharding the Llama 8B model using tensor parallelism across four NVIDIA Hopper GPUs reduces Perplexity’s relative cost per million token by up to 3x for latency-sensitive requests Ultimately, the Perplexity inference team’s decision to host models depends on their ability to serve these models at a lower cost while still meeting their strict SLAs, compared to using third-party LLM provider APIs. For example, the team estimated that they were able to save approximately $1 million annually by serving models that power their Related-Questions feature on cloud-hosted NVIDIA GPUs. The Related-Questions feature offers Perplexity users suggested follow-up questions to facilitate deeper dives after a search query. Delivering new levels of performance The inference team at Perplexity adopts a comprehensive, full-stack approach to their road map, consistently optimizing and enhancing every layer of the stack—from applications and use cases to inference serving middleware and hardware accelerators. In terms of inference serving middleware, the team is actively collaborating with the NVIDIA Triton engineering team to deploy disaggregating serving , a groundbreaking technique that separates the prefill and decode inference phases of an LLM workflow onto separate NVIDIA GPUs. This technique significantly boosts overall system throughput while meeting SLAs, translating to lower cost per token. Additionally, this technique gives Perplexity the flexibility to use different NVIDIA GPU products for each inference phase given its specific hardware resource requirements. The Perplexity team understands that optimizing the software stack can only drive performance improvements to a certain extent. To deliver new levels of performance, hardware innovations are crucial. This is why they are eager to assess the NVIDIA Blackwell platform . 
NVIDIA Blackwell delivers significant performance leaps enabled by numerous technology innovations, including the second-generation Transformer Engine with support for the FP4 data format, fifth-generation NVLink and NVSwitch enabling significantly larger NVLink domains, and more. Collectively these innovations deliver 30x improvement in inference performance for trillion parameter LLMs . Get started NVIDIA Triton Inference Server and NVIDIA TensorRT are open-source projects available on GitHub. They are also available as Docker containers that can be pulled from NVIDIA NGC . They are part of NVIDIA AI Enterprise , which offers enterprise-grade security, stability, and support. Enterprises seeking the fastest time to value can use NVIDIA NIM , a set of easy-to-use microservices for accelerated inference on a wide range of AI models, including open-source community and NVIDIA AI Foundation models. To learn more, check out these resources: From Zero to Millions: Scaling Large Language Model Inference with TensorRT-LLM Perplexity Enhances Model Performance for AI-Powered Search Engines with NVIDIA NeMo Accelerating Large Language Model Inference with NVIDIA in the Cloud
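Returning to the scheduler comparison in Figure 2, the "power of two random choices" policy is simple to sketch: the router samples two pods at random and sends the request to the one with fewer outstanding requests, avoiding both the blind spots of round-robin and the cost of polling every pod. The snippet below is a generic illustration of that policy, not Perplexity's in-house front-end scheduler.

import random
from typing import List

class PowerOfTwoChoicesRouter:
    """Route each request to the less-loaded of two randomly sampled pods."""

    def __init__(self, num_pods: int, seed: int = 0):
        self.outstanding: List[int] = [0] * num_pods   # in-flight requests per pod
        self._rng = random.Random(seed)

    def route(self) -> int:
        a, b = self._rng.sample(range(len(self.outstanding)), 2)
        chosen = a if self.outstanding[a] <= self.outstanding[b] else b
        self.outstanding[chosen] += 1
        return chosen

    def complete(self, pod: int) -> None:
        self.outstanding[pod] -= 1

# Route a burst of 10,000 requests (none completing) across 8 pods and check
# how evenly the load spreads with only two load probes per request.
router = PowerOfTwoChoicesRouter(num_pods=8)
for _ in range(10_000):
    router.route()
print("load spread across pods:", min(router.outstanding), "-", max(router.outstanding))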
https://developer.nvidia.com/zh-cn/blog/spotlight-perplexity-ai-serves-400-million-search-queries-a-month-using-nvidia-inference-stack/
聚焦:Perplexity AI 利用 NVIDIA 推理栈每月服务 4 亿个搜索请求
对 AI 赋能服务的需求持续快速增长,这给 IT 和基础设施团队带来了越来越大的压力。这些团队的任务是配置必要的硬件和软件来满足这一需求,同时平衡成本效益和最佳用户体验。Perplexity AI 的推理团队面临着这一挑战。Perplexity AI 是一款由 AI 驱动的搜索引擎,每月可处理超过 435 million 个查询。每个查询代表多个 AI 推理请求。 为了满足这一需求,Perplexity 推理团队转而使用 NVIDIA H100 Tensor Core GPUs 、 NVIDIA Triton Inference Server 和 NVIDIA TensorRT-LLM 来进行经济高效的 大语言模型(LLM) 部署。本文根据他们的实操经验,详细介绍了一些部署最佳实践和节省的 TCO。 同时为多个 AI 模型提供服务 为了支持其广泛的用户群并满足搜索、总结和问答等方面的各种请求,Perplexity 的推理团队同时为 20 多个 AI 模型提供服务。其中包括热门开源 Llama 3.1 模型的不同变体,如 8B、70B 和 405B。 为了将每个用户请求与适当的模型相匹配,该公司依靠较小的分类器模型来帮助确定用户意图。然后,分类器检测到的用户任务 (例如文本补全) 会被路由到部署在 GPU 节点上的特定模型。每个节点由一个或多个 NVIDIA H100 GPU 组成,并由 NVIDIA Triton Inference Server 实例管理。这些节点遵循严格的服务级别协议 (SLA) 运行,以实现成本效益和用户交互。 图 1. Perplexity AI 使用 NVIDIA HGX H100 4-GPU 和 8-GPU 系统大规模部署 LLM 生产,具有动态可扩展性、模型并行性和负载均衡功能 为了适应庞大的 Perplexity 用户群和全天波动的流量,Pod 托管在 Kubernetes 集群内。它们具有内置的前端调度程序,可根据负载和使用情况将流量路由到适当的 Pod,从而确保始终满足 SLA。 前端调度程序使用的调度算法会影响令牌间延迟,尤其是在改善最差性能百分位方面 (Figure 2)。Perplexity 团队不断寻找新的调度器优化,包括如何更好地考虑不同请求之间的序列长度变化。 图 2、在 16 QPS 的中等请求速率下,在内部 Perplexity 基准测试中,循环、最小请求的令牌间延迟分配,以及两种随机选择负载均衡策略的能力 Triton Inference Server 是 Perplexity 部署架构的关键组件。它跨各种后端提供优化模型、批量处理传入用户请求,并向调度程序提供 GPU 利用率指标。这支持根据推理请求的数量增加或减少部署和 GPU 的数量。 有关如何使用 Kubernetes 部署 NVIDIA Triton 的详细指南,请参阅 使用 Kubernetes 扩展 LLMs with NVIDIA Triton 和 NVIDIA TensorRT-LLM 。 满足严格的服务级别协议 为了为公司的各种用例定义合适的 SLA,Perplexity 的推理团队进行了全面的 A/B 测试,评估不同的配置及其对用户体验的影响。他们的目标是最大限度地提高 GPU 利用率,同时始终满足每个特定用例的目标 SLA。通过在满足目标 SLA 的同时改进 batching,推理服务成本得以优化。 对于较小的模型,例如嵌入实时检索中使用的 10 亿个参数以下的模型,重点在于尽可能降低延迟。这些通常对用户隐藏,并且是更广泛工作流程的一部分。因此,这些查询的配置通常具有较低的批量大小。鉴于这些模型的内存占用空间较小,该团队在 NVIDIA H100 GPU 上同时运行多个模型,以保持高资源利用率。 对于面向用户的模型 (例如 Llama 8B、70B 和 405B),这些模型对用户体验和部署成本的影响更大,因此该团队会进行更深入的性能分析,并评估关键指标,例如时间到第一个 token、每位用户每秒 token 数以及每百万次查询的成本。 为了在控制成本的同时优化性能,Perplexity 跨多个 GPU 并行部署这些模型。由于严格的服务水平协议 (SLA),该团队选择将张量并行性增加到 4 个和 8 个 GPU,他们发现这样可以在固定的 GPU 预算内降低对延迟非常敏感的请求的服务成本。数据或 pipeline 并行有助于在延迟敏感度较低的设置中更大限度地提高吞吐量。 Perplexity 团队现在将 TensorRT-LLM 与使用优化的 CUDA 内核构建的专有 LLM 运行时相结合,以更低的成本在严格的 SLA 中成功为基于 Llama 的模型提供服务。 图 3、在四个 NVIDIA Hopper GPU 上使用张量并行机制对 Llama 8B 模型进行分片,可将 Perplexity 每百万个令牌的相对成本降低高达 3 倍,以处理延迟敏感型请求 最终,Perplexity 推理团队决定托管模型,取决于他们能否以较低的成本为这些模型提供服务,同时仍满足严格的服务水平协议 (SLA),与使用第三方 LLM 提供商 API 相比。例如,该团队估计,与向第三方 LLM API 服务提供商发送相同的请求量相比,通过在云托管的 NVIDIA GPUs 上提供支持其 Related-Questions 功能的模型,他们每年能够节省约 $1 million。“Related-Questions” 功能为 Perplexity 用户提供后续问题建议,以便在搜索查询后进行更深入的研究。 图 4、与使用第三方 LLM API 服务相比,Perplexity 使用其部署架构在云托管的 NVIDIA GPU 上提供 Related-Questions AI 模型,每年可节省约 100 万美元 提供更高水平的性能 Perplexity 的推理团队对其路线图采用全面的全栈方法,不断优化和增强堆栈的每一层 (从应用和用例到为中间件和硬件加速器提供服务的推理)。 在推理服务中间件方面,该团队正在积极与 NVIDIA Triton 工程团队合作,部署 分解服务 ,这是一项突破性技术,可将 LLM 工作流的预填充和解码推理阶段分离到单独的 NVIDIA GPU。此技术可显著提高整体系统吞吐量,同时满足 SLA,从而降低每个令牌的成本。此外,鉴于 Perplexity 的特定硬件资源需求,此技术还可让 Perplexity 在每个推理阶段灵活使用不同的 NVIDIA GPU 产品。 Perplexity 团队明白,优化软件堆栈只能在一定程度上推动性能提升。为了提供更高级别的性能,硬件创新至关重要。因此,他们迫切希望评估 NVIDIA Blackwell 平台 。 NVIDIA Blackwell 通过众多技术创新实现了 显著的性能飞跃 ,包括支持 FP4 数据格式的第二代 Transformer Engine、支持更大 NVLink 域的第五代 NVLink 和 NVSwitch 等。总体而言,这些创新可将 万亿级参数 LLM 的推理性能提升 30 倍。 开始使用 NVIDIA Triton Inference Server 和 NVIDIA TensorRT 是 GitHub 上的开源项目。它们还可用作 Docker 容器,可从 NVIDIA NGC 中提取。它们是 NVIDIA AI Enterprise 的一部分,可提供企业级安全性、稳定性和支持。寻求快速实现价值的企业可以使用 NVIDIA NIM ,这是一套易于使用的微服务,可加速各种 AI 模型(包括开源社区和 NVIDIA AI Foundation 模型)的推理。 如需了解详情,请查看以下资源: 从 0 扩展到数百万:使用 TensorRT-LLM 扩展大型语言模型推理 Perplexity 借助 NVIDIA NeMo 增强 AI 驱动的搜索引擎的模型性能 借助 NVIDIA 在云端加速大语言模型推理
https://developer.nvidia.com/blog/llama-3-2-full-stack-optimizations-unlock-high-performance-on-nvidia-gpus/
Llama 3.2 Full-Stack Optimizations Unlock High Performance on NVIDIA GPUs
Meta recently released its Llama 3.2 series of vision language models (VLMs), which come in 11B parameter and 90B parameter variants. These models are multimodal, supporting both text and image inputs. In addition, Meta has launched text-only small language model (SLM) variants of Llama 3.2 with 1B and 3B parameters. NVIDIA has optimized the Llama 3.2 collection of models for great performance and cost-efficient serving across millions of GPUs worldwide – from our most powerful data center and cloud GPUs to local NVIDIA RTX workstations and even low-power edge devices with NVIDIA Jetson . Llama 3.2 VLMs support long context lengths of up to 128K text tokens as well as a single image input at a resolution of  1120 x 1120 pixels. To enable low latency responses for great user experiences, while also providing high throughput for cost-efficient serving of these models, the NVIDIA platform is optimized at every layer of the technology stack. Similarly, the Llama 3.2 SLMs have been optimized to run well on the millions of NVIDIA RTX PCs and workstations worldwide. They have also been quantized to allow for local deployment on edge devices with NVIDIA Jetson. For more information, see Deploying Accelerated Llama 3.2 from the Edge to the Cloud . This post describes the full-stack optimizations that enable high throughput and low latency serving of Llama 3.2 models. Accelerating Llama 3.2 AI inference throughput The Llama 3.2 11B and Llama 3.2 90B models include a vision encoder with a text decoder. The encoder is optimized for high-performance inference using the NVIDIA TensorRT library and the text decoder is optimized using the NVIDIA TensorRT-LLM library. The visual information from the vision encoder is fused into the Llama text decoder with a cross-attention mechanism that is supported in TensorRT-LLM. This enables the Llama 3.2 VLMs to efficiently generate text by taking into account visual reasoning and understanding in context with the text input. TensorRT supports the vision encoder in the BF16 data format. TensorRT-LLM supports the text decoder in both FP16 and BF16 formats. The official recipe released by Meta uses a BF16 text decoder. This is also used for our baseline performance measurements. To boost performance further, NVIDIA developed a custom FP8 post-training quantization (PTQ) recipe, leveraging the fourth-generation FP8 Tensor Cores that are part of the NVIDIA Hopper architecture . This recipe, available through the TensorRT Model Optimizer library , enables higher Llama 3.2 throughput and lower latency while delivering the same accuracy across numerous benchmarks including ScienceQA , OCRBench , TextVQA , and MMMU . This means that developers can now run the model more cost-effectively. Optimizations from TensorRT, TensorRT-LLM, and TensorRT Model Optimizer libraries are combined and available through production-ready deployments using NVIDIA NIM microservices. Delivering high throughput and low latency Table 1 shows maximum throughput performance, representing offline use cases, across a range of input and output sequence lengths and single input image with maximum supported resolution of 1120 x 1120 pixels. Using a system based on the NVIDIA HGX H200 platform, we run the Llama 3.2 90B model on eight NVIDIA H200 Tensor Core GPUs , each with 141 GB of fast HBM3e memory, connected through NVLink and NVLink Switch , providing 900 GB/s of GPU-to-GPU bandwidth between the GPUs. 
Maximum Throughput Performance – Output Tokens/Second, Eight NVIDIA H200 Tensor Core GPUs

Input | Output Sequence Lengths | Image Size      BF16 Encoder with FP8 Decoder
8,000 | 2,000 | 1120×1120                          2,646
20,000 | 2,000 | 1120×1120                         1,417
60,000 | 2,000 | 1120×1120                         480

Table 1. Maximum throughput performance with NVIDIA internal measurements. TensorRT optimized NIM for VLMs version 1.1.0 recipe. NIM server restarted between each ISL/OSL configuration to set an optimal KV cache split. Data measured on 11/14/2024. Output tokens/second is inclusive of time to generate the first token – tok/s = total generated tokens / total latency. DGX H200, normalized to 8 GPUs (by taking the TP profile maximizing throughput per GPU, and multiplying that value by 8 to simulate a replica-parallel setup), batch size tuned for maximum node throughput, TensorRT Model Optimizer version 0.21 (pre-release), TensorRT-LLM version 0.16.0.dev, TensorRT version 10.4.0.

Table 2 shows minimum latency performance using the same input and output sequence lengths and input image size.

Minimum Latency Performance – Output Tokens/Second, Eight NVIDIA H200 Tensor Core GPUs

Input | Output Sequence Lengths | Image Size      BF16 Encoder with FP8 Decoder
8,000 | 2,000 | 1120×1120                          64
20,000 | 2,000 | 1120×1120                         63
60,000 | 2,000 | 1120×1120                         55

Table 2. Minimum latency performance with NVIDIA internal measurements. TensorRT optimized NIM for VLMs version 1.1.0 recipe. Data measured on 11/4/2024. Output tokens/second is inclusive of time to generate the first token – tok/s = total generated tokens / total latency. DGX H200, TP8, batch size = 1, TensorRT Model Optimizer version 0.21 (prerelease), TensorRT-LLM version 0.16.0.dev, TensorRT version 10.4.0.

As these results show, NVIDIA H200 GPUs with TensorRT-optimized software deliver exceptional performance on the Llama 3.2 90B VLM, in both latency-optimized and throughput-optimized scenarios. Throughput performance of GeForce RTX 4090 with ONNX Runtime on NVIDIA RTX For Windows deployments, NVIDIA has optimized Llama 3.2 SLMs to work efficiently using the ONNX Runtime Generative API , with a DirectML backend. Performance measurements are made using the model checkpoint available on the NGC catalog. The checkpoint is a quantized version of the Llama 3.2 3B Instruct model, quantized to AWQ INT4 using AutoAWQ and converted to ONNX using the ONNX Runtime Generative API.

Maximum Throughput Performance – Output Tokens/Second, NVIDIA GeForce RTX 4090 GPUs

Input | Output Sequence Lengths      Onnx-GenAI Runtime with DirectML, BS=1      Onnx-GenAI Runtime with DirectML, BS=4
100 | 100                            253                                         615
2,000 | 100                          203                                         374
4,000 | 100                          165                                         251

Table 3. Maximum throughput performance with NVIDIA internal measurements. ONNX Runtime Generative API with DirectML data measured on 10/07/2024. Output tokens/second is inclusive of time to generate the first token – tok/s = total generated tokens / total latency. GeForce RTX 4090 GPU.

Better performance on Llama 3.2 across platforms With the NVIDIA accelerated computing platform, you can build models and supercharge your applications with the most performant Llama 3.2 models on any platform—from the data center and cloud to local workstations. Enterprises seeking the fastest time to value can use NVIDIA NIM , part of the NVIDIA AI Enterprise software platform, which offers NVIDIA TensorRT optimized inference on Llama 3.2 and other models from NVIDIA and its partner ecosystem.
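For a rough sense of why the AWQ INT4 checkpoint described above is practical on a single GeForce RTX 4090, the arithmetic below estimates weight-only memory footprints at different precisions. The parameter count and group size are assumptions for a 3B-class model, and the estimate ignores activations, the KV cache, and runtime buffers.

def weight_memory_gb(num_params: float, bits_per_weight: float,
                     group_size: int = 0, scale_bits: int = 16) -> float:
    """Approximate weight-only memory footprint in GB.

    For group-wise quantization (such as INT4 AWQ), each group of weights
    also stores a higher-precision scale, which adds a small overhead.
    """
    bits = num_params * bits_per_weight
    if group_size:
        bits += (num_params / group_size) * scale_bits
    return bits / 8 / 1e9

N = 3.2e9   # assumed parameter count for a "3B-class" model
print(f"FP32 : {weight_memory_gb(N, 32):.1f} GB")
print(f"BF16 : {weight_memory_gb(N, 16):.1f} GB")
print(f"INT4 (assumed group size 128, FP16 scales): {weight_memory_gb(N, 4, group_size=128):.1f} GB")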
Acknowledgments We would like to thank George Yuan, Alex Settle, and Chenjie Luo for their efforts in supporting this post.
https://developer.nvidia.com/zh-cn/blog/llama-3-2-full-stack-optimizations-unlock-high-performance-on-nvidia-gpus/
Llama 3.2 全栈优化释放 NVIDIA GPU 的高性能
Meta 最近发布了 Llama 3.2 系列视觉语言模型(VLM),其中包含 11B 参数和 90B 参数变体。这些模型是多模态模型,支持文本和图像输入。此外,Meta 还推出了 Llama 3.2 的纯文本 小语言模型(SLM) 变体,具有 1B 和 3B 参数。NVIDIA 已对 Llama 3.2 模型集合进行优化,以便在全球数百万个 GPU 上提供出色的性能和经济高效的服务 – 从我们功能强大的数据中心和云 GPU 到本地 NVIDIA RTX 工作站 ,甚至是使用 NVIDIA Jetson 的低功耗边缘设备。 Llama 3.2 VLM 支持长达 128K 的文本标记,以及分辨率为 1120 x 1120 像素的单张图像输入。为了实现低延迟响应,提供出色的用户体验,同时提供高吞吐量以经济高效地提供这些模型,NVIDIA 平台在技术堆栈的每一层都进行了优化。 同样,Llama 3.2 SLM 已经过优化,可在全球数百万 NVIDIA RTX PC 和工作站上良好运行。它们还经过量化,允许在边缘设备上使用 NVIDIA Jetson 进行本地部署。有关更多信息,请参阅 将加速的 Llama 3.2 从边缘部署到云端 。 本文将介绍如何通过全栈优化实现 Llama 3.2 模型的高吞吐量和低延迟服务。 加速 Llama 3.2 AI 推理吞吐量 Llama 3.2 11B 和 Llama 3.2 90B 模型包括一个带有文本解码器的视觉编码器。编码器使用 NVIDIA TensorRT 库 针对高性能推理进行了优化,文本解码器使用 NVIDIA TensorRT-LLM 库 进行了优化。 通过 TensorRT-LLM 中支持的交叉注意力机制,视觉编码器的视觉信息融合到 Llama 文本解码器中。这使得 Llama 3.2 VLM 能够高效生成文本,在文本输入的上下文中考虑视觉推理和理解。 TensorRT 支持视觉编码器在 BF16 数据格式下运行。TensorRT-LLM 支持文本解码器在 FP16 和 BF16 格式下运行。Meta 发布的官方 recipe 使用了 BF16 文本解码器,这也用于我们的基准性能测量。为了进一步提高性能,NVIDIA 利用 NVIDIA Hopper 架构中的 第四代 FP8 Tensor Core 开发了自定义的 FP8 后训练量化(PTQ)recipe。 此方法通过 TensorRT Model Optimizer 库 提供,可实现更高的 Llama 3.2 吞吐量和更低的延迟,同时在众多基准测试(包括 ScienceQA 、 OCRBench 、 TextVQA 和 MMMU )中提供相同的准确性。这意味着开发者现在可以更经济高效地运行模型。 TensorRT、TensorRT-LLM 和 TensorRT Model Optimizer 库中的优化经过组合,可通过使用 NVIDIA NIM 微服务的生产就绪型部署获得。 提供高吞吐量和低延迟 表 1 显示了在一系列输入和输出序列长度以及单输入图像 (最大支持分辨率为 1120 x 1120 像素) 中的最大吞吐量性能 (代表离线用例)。我们使用基于 NVIDIA HGX H200 平台的系统,在 8 个 NVIDIA H200 Tensor Core GPU 上运行 Llama 3.2 90B 模型,每个 GPU 均具有 141 GB 的快速 HBM3e 内存,通过 NVLink 和 NVLink Switch 连接,可在 GPU 之间提供 900 GB/s 的 GPU 到 GPU 带宽。 最大吞吐量性能 – 每秒输出令牌数 八个 NVIDIA H200 Tensor Core GPU 输入|输出序列长度|图像大小 8000|2000|1120*1120 20000|2000|1120*1120 60000|2000|1120*1120 带 FP8 解码器的 BF16 编码器 2646 1417 480 表 1. 通过 NVIDIA 内部测量获得的最大吞吐量性能 TensorRT 针对 VLM 版本 1.1.0 recipe 优化的 NIM。在每个 ISL/OSL 配置之间重新启动 NIM 服务器,以设置最佳 KV 缓存分割。数据测量时间:11 月 14 日,2024 年。输出 tokens/second 包含生成第一个 token 的时间 – tok/s = total generated tokens / total latency。DGX H200,标准化为 8 个 GPUs (通过采用 TP 配置文件最大限度地提高每个 GPU 的吞吐量,并将该值乘以 8 以模拟副本并行设置),针对最大节点吞吐量调整了批量大小,TensorRT Model Optimizer 版本 0.21 (预发布),TensorRT-LLM 版本 0.16.0.dev,TensorRT 版本 10.4.0。 表 2 显示了使用相同的输入和输出序列长度以及输入图像大小时的最小延迟性能。 最低延迟性能 – 输出令牌/秒 ( 八个 NVIDIA H200 Tensor Core GPUs ) 输入|输出序列长度|图像大小 8000|2000|1120*1120 20000|2000|1120*1120 60000|2000|1120*1120 带 FP8 解码器的 BF16 编码器 64 63 55 表 2、NVIDIA 内部测量的最低延迟性能 TensorRT 针对 VLM 版本 1.1.0 recipe 优化的 NIM。数据测量时间:2024 年 11 月 4 日。输出 tokens/second 包含生成第一个 token 的时间 – tok/s = 生成的 tokens 总数/总延迟。DGX H200、TP8,批量大小 = 1,TensorRT Model Optimizer 版本 0.21 (预发布),TensorRT-LLM 版本 0.16.0.dev,TensorRT 版本 10.4.0 以上。 这些结果表明, 搭载 TensorRT 优化软件的 NVIDIA H200 GPU 可在 Llama 3.2 90B VLM 上提供出色的性能,适用于延迟优化和吞吐量优化的场景。 采用 ONNX Runtime 的 GeForce RTX 4090 在 NVIDIA RTX 上的吞吐量性能 对于 Windows 部署,NVIDIA 已对 Llama 3.2 SLM 进行优化,以便使用带有 DirectML 后端的 ONNX Runtime Generative API 高效工作。使用 NGC 目录上提供的 模型检查点 进行性能测量。检查点是 Llama 3.2 3B Instruct 模型的量化版本,使用 AutoAWQ 量化为 AWQ INT4 ,并使用 ONNX Runtime Generative API 转换为 ONNX。 最大吞吐量性能 – 输出令牌/秒 NVIDIA GeForce RTX 4090 GPU 输入|输出序列长度 100|100 2000|100 4000|100 采用 DirectML 的 Onnx-GenAI 运行时,BS = 1 253 203 165 支持 DirectML 的 Onnx-GenAI 运行时,BS = 4 615 374 251 表 3、通过 NVIDIA 内部测量获得的最大吞吐量性能 使用在 2024 年 10 月 7 日测量的 DirectML 数据的 ONNX Runtime Generative API。输出 token/s 包含生成第一个 token 的时间 – tok/s = 总生成 token 数 / 总延迟。GeForce RTX 4090 GPU。 跨平台在 Llama 3.2 上获得更好的性能 借助 NVIDIA 加速计算平台,您可以在任何平台 (从数据中心、云到本地工作站) 上构建模型并使用性能超强的 Llama 3.2 模型增强您的应用。 NVIDIA NIM 是 NVIDIA AI Enterprise 软件平台的一部分,可为 Llama 3.2 和 NVIDIA 及其合作伙伴生态系统的其他模型提供经 NVIDIA TensorRT 优化的推理。 致谢 我们要感谢 George Yuan、Alex Settle 和 Chenjie Luo 为支持这篇博文而付出的努力。
https://developer.nvidia.com/blog/streamlining-ai-inference-performance-and-deployment-with-nvidia-tensorrt-llm-chunked-prefill/
Streamlining AI Inference Performance and Deployment with NVIDIA TensorRT-LLM Chunked Prefill
In this blog post, we take a closer look at chunked prefill, a feature of NVIDIA TensorRT-LLM that increases GPU utilization and simplifies the deployment experience for developers. This builds on our previous post discussing how advanced KV cache optimization features in TensorRT-LLM improve performance up to 5x in use cases that require system prefills. Challenges with traditional prefill and decode inference approaches When a user submits a request to a model, it goes through two distinct computational phases: prefill and decode. Each phase uses GPU compute resources differently. During the prefill phase, the system processes all input tokens to compute the KV cache, which is then used to generate the first token of the output. This phase is computationally demanding and can effectively use a GPU’s vast parallel compute resources. In the decode phase, the system generates output tokens individually, updating the intermediate states from the prefill phase with each new token. Since the heavy computational work for intermediate state calculations is done in the prefill phase, the decode phase primarily involves processing only the newly generated token. As a result, this phase is less computationally intensive. TensorRT-LLM supports in-flight batching where multiple requests are handled in parallel, enabling prefill and decode stage processing simultaneously. While more efficient than phased batching, in which decode requests are paused to process prefill requests, this approach can introduce latency as the decode phases are delayed until the prefill requests are completed. The top portion of Figure 1 illustrates this concept. Figure 1. A simplified representation of the chunked prefill process, showing how it can increase GPU utilization and accelerate query completion Balancing prefill and decode phases with chunked prefill With TensorRT-LLM chunked prefill, the tokens are divided into smaller units, or chunks, for faster processing. This prevents the prefill phase from becoming a bottleneck, enables more parallelization with decode phase tokens, and increases GPU utilization. The bottom portion of Figure 1 illustrates this concept. Additionally, using TensorRT-LLM chunked prefill enables GPU systems to handle longer contexts and higher concurrency levels. Since memory usage depends on the number of tokens processed during each iteration, using prefill chunks decouples memory consumption from the context length of incoming requests. The system can then process larger context lengths without increasing memory demands. Simplifying TensorRT-LLM engine creation with dynamic chunk sizing Figure 1 shows how ‌chunk size can impact user interactivity and overall system throughput, requiring a trade-off based on the desired user experience and available GPU resources. Using a large chunk size lowers the number of iterations required to process prefill sequences, reducing time to first token (TTFT). However, it also increases the time taken to complete the decode phase of ongoing requests, increasing query completion time and reducing output tokens per second (TPS). Finding the right balance can sometimes be time-consuming for developers. TensorRT-LLM deploys dynamic chunk sizing, providing ideal recommendations for the user based on GPU utilization metrics. An additional advantage of dynamic prefill chunk sizing is the simplification of the TensorRT-LLM engine build process. 
In the past, developers had to supply a maximum input sequence length from which activation buffers were computed during engine build time. To ensure the activation buffers could handle the most complex user queries, developers were required to set the maximum sequence length expected for their use case. This approach provisioned GPU resources for outlier and worst-case scenarios, leading to inefficient memory usage. With dynamic prefill chunk sizing, developers no longer need to specify the maximum input length manually. Instead, activation buffer sizes are automatically determined by the chunk size, which is configured by TensorRT-LLM. This eliminates manual configuration and leads to more efficient memory usage during execution. Getting started with TensorRT-LLM chunked prefill Using TensorRT-LLM chunked prefill significantly improves both system performance and utilization. Breaking down the traditionally sequential prefill phase into smaller, more manageable chunks enables better parallelization with the decode phase, reducing bottlenecks and accelerating query completion. The dynamic adjustment of chunk size also simplifies the TensorRT-LLM engine configuration process, eliminating the need for developers to manually set activation buffer sizes, which previously resulted in inefficient memory usage. Dynamic chunk sizing optimizes memory use and ensures that the system adapts seamlessly to varying LLM user demands. Check out our GitHub documentation to start using TensorRT-LLM chunked prefill. Learn more about NVIDIA AI Inference solutions and stay up-to-date with the latest AI inference performance updates.
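As a starting point, chunked prefill can be switched on through the high-level TensorRT-LLM Python API. The sketch below is illustrative rather than definitive: the option name shown here (enable_chunked_prefill) and related knobs have moved between releases (older versions expose enable_chunked_context on the executor configuration), and the checkpoint is a placeholder.

```python
# Sketch: enabling chunked prefill with the TensorRT-LLM high-level LLM API.
# Option names are version-dependent; consult the docs for the release you run.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder checkpoint
    enable_chunked_prefill=True,               # split long prefills into smaller chunks
    max_num_tokens=8192,                       # assumed per-iteration token budget (the chunk budget)
)

prompts = ["Summarize the benefits of splitting long prompt prefills into chunks."]
for output in llm.generate(prompts, SamplingParams(max_tokens=128)):
    print(output.outputs[0].text)
```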
https://developer.nvidia.com/zh-cn/blog/streamlining-ai-inference-performance-and-deployment-with-nvidia-tensorrt-llm-chunked-prefill/
借助 NVIDIA TensorRT-LLM 分块预填充提高 AI 推理效率和简化部署
在本 博文 中,我们详细介绍了分块预填充,这是 NVIDIA TensorRT-LLM 的一项功能,可提高 GPU 利用率并简化开发者的部署体验。本文基于之前的博文,讨论了 TensorRT-LLM 中的高级 KV 缓存优化功能如何在需要系统预填充的用例中将性能提升高达 5 倍。 传统预填充和解码推理方法面临的挑战 当用户向模型提交请求时,它将经历两个不同的计算阶段:预填充和解码。每个阶段使用 GPU 计算资源的方式各不相同。 在预填充阶段,系统会处理所有输入令牌以计算 KV 缓存,然后使用该缓存生成输出的第一个令牌。此阶段的计算要求很高,可以有效利用 GPU 庞大的并行计算资源。 在解码阶段,系统会单独生成输出 tokens,使用每个新 tokens 更新预填充阶段的中间状态。由于中间状态计算的繁重计算工作是在预填充阶段完成的,因此解码阶段主要涉及仅处理新生成的 tokens。因此,此阶段的计算密集度较低。 TensorRT-LLM 支持动态批处理,允许并行处理多个请求,使得预填充和解码阶段可以同时进行。虽然这种方法比暂停解码请求以处理预填充请求的分阶段批处理更高效,但由于解码阶段延迟到预填充请求完成,因此可能会导致延迟。图 1 的顶部说明了这一概念。 图 1. 分块预填充过程的简化表示,展示了如何提高 GPU 利用率并加速查询完成。 使用分块预填充平衡预填充和解码阶段 借助 TensorRT-LLM 分块预填充,令牌被分成较小的单元或分块,以加快处理速度。这可以防止预填充阶段成为瓶颈,使解码阶段的令牌实现更多的并行化,并提高 GPU 的利用率。图 1 的底部部分说明了这一概念。 此外,使用 TensorRT-LLM 分块预填充使 GPU 系统能够处理更长的上下文和更高的并发级别。由于内存使用量取决于每次迭代期间处理的令牌数量,因此使用预填充块可将内存消耗量与传入请求的上下文长度解耦。然后,系统可以在不增加内存需求的情况下处理更大的上下文长度。 通过动态数据块大小调整简化 TensorRT-LLM 引擎创建 图 1 显示了块大小百分比对用户交互性和整体系统吞吐量的影响,这需要根据所需的用户体验和可用 GPU 资源进行权衡。使用大块大小可以减少处理预填充序列所需的迭代次数,从而缩短第一个令牌(TTFT)的时间。 但是,这也会增加完成正在进行的请求的解码阶段所需的时间,从而增加查询完成时间并降低每秒输出令牌数(TPS)。对于开发者而言,找到合适的平衡有时非常耗时。TensorRT-LLM 部署动态块大小,根据 GPU 利用率指标为用户提供理想建议。 动态预填充块大小调整的另一个优势是简化了 TensorRT-LLM 引擎构建流程。过去,开发者必须提供最大输入序列长度,以便在引擎构建期间计算激活缓冲区。为确保激活缓冲区能够处理最复杂的用户查询,开发者需要为其用例设置预期的最大序列长度。这种方法针对异常和最差情况优化了 GPU 资源,导致内存使用效率低下。 借助动态预填充块大小调整,开发者无需再手动指定最大输入长度。相反,激活缓冲区大小由块大小(由 TensorRT-LLM 配置)自动确定。这消除了手动配置,并提高了执行期间的内存使用效率。 TensorRT-LLM 分块预填充入门 使用 TensorRT-LLM 分块预填充可显著提高系统性能和利用率。将传统的连续预填充阶段分解为更小、更易于管理的块,可通过解码阶段实现更好的并行化,从而减少瓶颈并加速查询完成。 块大小的动态调整还简化了 TensorRT-LLM 引擎配置流程,无需开发者手动设置激活缓冲区大小,而这在之前会导致内存使用效率低下。动态块大小优化内存使用,并确保系统能够无缝适应不同 LLM 用户的需求。 查看 我们的 GitHub 文档 ,开始使用 TensorRT-LLM 分块预填充。详细了解 NVIDIA AI 推理解决方案 ,并随时了解最新的 AI 推理性能更新 。
https://developer.nvidia.com/blog/5x-faster-time-to-first-token-with-nvidia-tensorrt-llm-kv-cache-early-reuse/
5x Faster Time to First Token with NVIDIA TensorRT-LLM KV Cache Early Reuse
In our previous blog post, we demonstrated how reusing the key-value (KV) cache by offloading it to CPU memory can accelerate time to first token (TTFT) by up to 14x on x86-based NVIDIA H100 Tensor Core GPUs and 28x on the NVIDIA GH200 Superchip. In this post, we shed light on KV cache reuse techniques and best practices that can drive even further TTFT speedups. Introduction to KV cache LLMs are rapidly being adopted for many tasks, including question-answering and code generation. To generate a response, these models begin by converting the user’s prompt into tokens, which are then transformed into dense vectors. Extensive dot-product operations follow to mathematically model the relationships between the tokens and build a contextual understanding of the user input. The computational cost of generating this contextual understanding increases quadratically with the length of the input sequence. This resource-intensive process generates keys and values, which are cached to avoid recomputation when generating subsequent tokens. Reusing the KV cache reduces the computational load and time needed to generate additional tokens—leading to a faster and more efficient user experience. When reusing the KV cache, careful attention must be given to how long it remains in memory, which components to evict first when memory is full, and when it can be reused for new incoming prompts. Optimizing these factors can lead to incremental performance improvements in KV cache reuse. NVIDIA TensorRT-LLM offers three key features that specifically address these areas. Early KV cache reuse Traditional reuse algorithms require the entire KV cache computation to be completed before any portions of it can be reused with new user prompts. In scenarios such as enterprise chatbots, where system prompts—predefined instructions added to user queries—are essential to direct the LLM’s responses in line with enterprise guidelines, this method can be inefficient. When a surge of users interacts with the chatbot simultaneously, each user would require a separate computation of the system prompt KV cache. With TensorRT-LLM, we can instead reuse the system prompt as it is being generated in real time, enabling it to be shared across all users during the burst, rather than recalculating it for each user. This can significantly accelerate inference by up to 5x for use cases requiring system prompts. Figure 1. TensorRT-LLM KV cache reuse can speed up TTFT by up to 5x Flexible KV cache block sizing In reuse implementations, only entire cache memory blocks can be allocated for reuse. For example, if the cache memory block size is 64 tokens and the KV cache is 80 tokens, only 64 tokens will be stored for reuse, while the remaining 16 tokens will need to be recomputed. However, if the memory block size is reduced to 16 tokens, all 80 tokens can be stored across five memory blocks, eliminating the need for re-computation. This effect is most pronounced when the input sequences are short. For long input sequences, larger blocks can be more beneficial. Clearly, the more granular the control you have over the KV cache, the better you can optimize it for your specific use case. TensorRT-LLM provides fine-grained control over KV cache memory blocks, giving developers the ability to divide them into smaller blocks of anywhere from 64 down to 2 tokens. This optimizes the usage of allocated memory, increases reuse rates, and improves TTFT. 
When running Llama 70B on NVIDIA H100 Tensor Core GPUs, we can speed up TTFT by up to 7% in multi-user environments by reducing the KV cache block size from 64 tokens to 8 tokens. Figure 2. Impact of changing KV cache block size on inference speedup Efficient KV cache eviction protocols Partitioning the KV cache into smaller blocks and evicting unused ones can be effective for memory optimization, but it introduces dependency complexities. When a specific block is used to generate a response and the result is stored as a new block, it can form a tree-like structure of dependencies. Over time, the counters tracking the usage of the source blocks (the branches) may become stale as the dependent nodes (the leaves) are reused. Evicting the source block then requires the eviction of all dependent blocks, which would require recalculation of the KV cache for new user prompts, increasing TTFT. To address this challenge, TensorRT-LLM includes intelligent eviction algorithms that can trace the dependent nodes from their source nodes and evict dependent nodes first, even if they have more recent reuse counters. This ensures more efficient memory management while preventing unnecessary evictions of dependent blocks. Figure 3. A logical representation of the KV cache eviction algorithm, showing how it can reduce the number of evicted blocks and increase the likelihood of reuse Getting started with TensorRT-LLM KV cache reuse Generating the KV cache during inference requires a lot of compute and memory resources. Using it efficiently is critical to improving model responsiveness, accelerating inference, and increasing system throughput. TensorRT-LLM provides advanced reuse features for developers looking to further optimize TTFT response times for peak performance. To start using TensorRT-LLM KV cache reuse, check out our GitHub documentation.
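To experiment with the features above, block reuse is exposed through the KV cache configuration of the high-level TensorRT-LLM API, and block granularity is typically a build-time setting (tokens per block). The snippet below is a sketch under those assumptions: parameter names vary by release, the tokens_per_block location shown here is an assumption, and the checkpoint and prompts are placeholders.

```python
# Sketch: turning on KV cache block reuse and shrinking the block size in TensorRT-LLM.
# Exact parameter names and locations vary by release; treat this as illustrative.
from tensorrt_llm import LLM, SamplingParams, BuildConfig
from tensorrt_llm.llmapi import KvCacheConfig

kv_cache_config = KvCacheConfig(
    enable_block_reuse=True,       # let matching prefixes (for example, system prompts) be reused
    free_gpu_memory_fraction=0.9,  # share of free GPU memory handed to the KV cache
)

build_config = BuildConfig()
# Smaller blocks raise the chance that a partially matching prefix can be reused.
build_config.plugin_config.tokens_per_block = 8  # assumed knob; the default is larger (commonly 64)

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder checkpoint
    kv_cache_config=kv_cache_config,
    build_config=build_config,
)

system_prompt = "You are a support assistant for ExampleCorp. Answer concisely.\n"
questions = ["How do I reset my password?", "How do I update my billing address?"]
# Both requests share the system prompt, so its KV cache blocks can be reused across them.
for output in llm.generate([system_prompt + q for q in questions], SamplingParams(max_tokens=64)):
    print(output.outputs[0].text)
```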
https://developer.nvidia.com/zh-cn/blog/5x-faster-time-to-first-token-with-nvidia-tensorrt-llm-kv-cache-early-reuse/
NVIDIA TensorRT-LLM KV 缓存早期重用实现首个令牌速度 5 倍提升
在上一篇 博客文章 中,我们展示了通过将键值 (KV) 缓存卸载到 CPU 内存来重复使用该缓存如何在基于 x86 的 NVIDIA H100 Tensor Core GPU 上将首次生成令牌 (Time To First Token, TTFT) 的速度提升高达 14 倍,在 NVIDIA GH200 Superchip 上可将其提升高达 28 倍。在本文中,我们介绍了可以进一步推动 TTFT 加速的 KV 缓存重复使用技术和最佳实践。 KV 缓存简介 许多任务(包括问答和代码生成)都迅速采用了 LLM 模型。为了生成响应,这些模型首先将用户的提示符转换为 tokens,然后将其转换为 dense vectors。随后会进行大量的 dot-product operations,以数学方式对 tokens 之间的关系建模,并构建对用户输入的上下文理解。生成这种上下文理解的计算成本会随着输入序列的长度而二次增加。 此资源密集型流程会生成键和值,这些键和值会被缓存,以避免在生成后续 token 时进行重新计算。重复使用 KV 缓存可减少生成额外 token 所需的计算负载和时间,实现更快、更高效的用户体验。 重复使用 KV 缓存时,必须仔细注意其在内存中的剩余时间、在内存已满时首先移除哪些组件,以及何时可重复用于新的传入提示。优化这些因素可能会带来 KV 缓存重复使用的性能增益。NVIDIA TensorRT-LLM 提供了三个关键功能,专门解决了这些问题。 早期 KV 缓存复用 传统的复用算法需要先完成整个 KV 缓存的计算,然后才能根据新用户的提示重复使用其任何部分。在企业聊天机器人等场景中,系统提示(添加到用户查询中的预定义指令)对引导 LLM 根据企业指南做出响应至关重要,这种方法可能效率低下。 当大量用户同时与聊天机器人交互时,每个用户都需要单独计算系统提示 KV 缓存。借助 TensorRT-LLM,我们可以重新使用实时生成的系统提示,使其在突发期间在所有用户之间共享,而不是为每个用户重新计算。这可以显著加快推理速度,最高可达 5 倍。 图 1.TensorRT-LLM KV 缓存重复利用可将 TTFT 速度提升高达 5 倍。 灵活调整 KV 缓存块大小 在重用实现中,只能分配整个缓存内存块以供重用。例如,如果缓存内存块大小为 64 个令牌,KV 缓存为 80 个令牌,则仅存储 64 个令牌以供重用,而其余 16 个令牌则需要重新计算。但是,如果将内存块大小减少到 16 个令牌,则可在五个内存块中存储全部 64 个令牌,从而消除重新计算的需求。 当输入序列较短时,这种效果最为明显。对于长输入序列,更大的块会更有好处。显然,您对 KV 缓存的控制越细致,就越能针对特定用例进行优化。 TensorRT-LLM 可对 KV 缓存内存块进行细粒度控制,使开发者能够将这些块分割成 64 到 2 个令牌之间的较小块。这可优化已分配内存的使用,提高复用率并改善 TTFT。在 NVIDIA H100 Tensor Core GPU 上运行 LLAMA70B 时,我们可以通过将 KV 缓存块的大小从 64 个令牌减少到 8 个令牌,在多用户环境中将 TTFT 速度提升高达 7%。 图 2. 更改 KV 缓存块大小对推理加速的影响 高效的 KV 缓存回收协议 将 KV 缓存分割成较小的块并移除未使用的块对于内存优化可能有效,但这会引入依赖项复杂性。当使用特定块生成响应并将结果存储为新块时,它会形成树状的依赖结构。 随着时间的推移,跟踪源块(分支)使用情况的计数器可能会随着依赖节点(叶子)的重复使用而过时。然后,移除源块需要移除所有依赖块,这将需要重新计算 KV 缓存以用于新用户提示,从而增加 TTFT。 为应对这一挑战,TensorRT-LLM 包含智能驱逐算法,该算法可以追踪源节点中的依赖节点,并首先驱逐依赖节点,即使这些节点具有较新的重复使用计数器。这可以确保更高效的内存管理,同时防止不必要的依赖块驱逐。 图 3.KV 缓存驱逐算法的逻辑表示展示了如何减少被驱逐的块的数量,从而增加重复使用的可能性。 TensorRT-LLM KV 缓存重用入门 在推理过程中生成 KV 缓存需要大量的计算和内存资源。高效使用 KV 缓存对于改善模型响应、加速推理和提高系统吞吐量至关重要。TensorRT-LLM 为希望进一步优化 TTFT 响应时间以实现峰值性能的开发者提供了高级复用功能。 要开始使用 TensorRT-LLM KV 缓存,请参阅 我们的 GitHub 文档 。
https://developer.nvidia.com/blog/3x-faster-allreduce-with-nvswitch-and-tensorrt-llm-multishot/
3x Faster AllReduce with NVSwitch and TensorRT-LLM MultiShot
Deploying generative AI workloads in production environments where user numbers can fluctuate from hundreds to hundreds of thousands – and where input sequence lengths differ with each request – poses unique challenges. To achieve low latency inference in these environments, multi-GPU setups are a must – irrespective of the GPU generation or its memory capacity. To enhance inference performance in production-grade setups, we’re excited to introduce TensorRT-LLM MultiShot, a new multi-GPU communication protocol that leverages the NVIDIA NVLink Switch to significantly increase communication speeds by up to 3x. This blog outlines this new feature and how it helps developers and solution architects address the limitations of traditional multi-GPU communication methods. Challenges with traditional AllReduce algorithms For low latency inference, multi-GPU is critical, regardless of the memory capacity of a single GPU. However, at low concurrency, the time GPUs spend exchanging data can outweigh the time spent on compute. For optimal performance, an efficient AllReduce operation – a collective operation that combines partial results from each participating GPU – is critical. Traditional approaches use ring-based algorithms, where the partial values are passed around a ring of GPUs. Each GPU contributes its values and passes the result to its neighbor. After N-1 steps, where N is the number of GPUs working together, the last GPU holds the complete summed value, and a second pass of N-1 steps over the ring is required to propagate that sum to the rest, for a total of 2N-2 steps. The Ring approach makes efficient use of available GPU-to-GPU bandwidth per communication step, but as the number of GPUs increases, so does the number of steps. This increases latency, as all GPUs need to stay synchronized at every step of the ring. This synchronization adds significant latency overhead and can make it difficult to meet more stringent latency targets. The Ring AllReduce algorithm is described below: Ring Algorithm: GPU-1 → GPU-2 → … → GPU-N → GPU-1 → GPU-2 → … → GPU-(N-1) 2N-2 steps, with full tensor send/recv each step Latency: 2N-2 communication steps. (N: # of GPUs) Traffic: (4N-4)/N tensor bytes of send/recvs Addressing AllReduce communication challenges with TensorRT-LLM MultiShot TensorRT-LLM MultiShot is a new algorithm that reduces the O(N) latency of Ring AllReduce by up to 3x by leveraging multicast in NVSwitch. Multicast is a hardware acceleration feature in NVSwitch that allows a GPU to send data once and have that data sent simultaneously to all other GPUs, minimizing the number of communication steps to two inter-GPU synchronizations while remaining bandwidth efficient. Without NVSwitch, this would take N times the communication bandwidth. TensorRT-LLM MultiShot separates the AllReduce into a ReduceScatter operation followed by an AllGather operation (for more detailed descriptions of collective operations, see this documentation). Each GPU is responsible for accumulating only a portion of the result tensor. The first step (or “shot”) involves each GPU sending the different slices of the tensor to the respective GPU responsible for accumulating that slice of the tensor. After accumulating locally, each GPU now has the correct sum accumulators for its unique slice of the output. In the second step (or “shot”), each GPU broadcasts the result slice to all other GPUs using the NVSwitch multicast capability. 
This minimizes the per-GPU bandwidth required, as the NVSwitch itself performs the data amplification; each GPU sends 1/N of the data and receives the full result tensor in one step. The entire operation takes only two communication steps, regardless of the number of GPUs performing tensor parallel inference. TensorRT-LLM MultiShot Algorithm: each GPU sends its slices, computes its slice sum, and broadcasts the result in a single multicast operation. Latency: 2 communication steps (regardless of the number of GPUs) Traffic: 2 tensor bytes of send/recv (regardless of the number of GPUs) Why this matters Since this algorithm requires only two communication steps rather than 2N-2 (where N is the number of GPUs), MultiShot can be nearly 3x faster than Ring AllReduce. The benefits of this algorithm are particularly evident with smaller message sizes and high parallelism – the scenario where minimum latency is required for a great user experience. This can be used either to reduce minimum latency or to increase throughput at a given latency. In scenarios with more aggressive latency thresholds, this can lead to super-linear scaling with the number of GPUs. Figure 1. With TensorRT-LLM MultiShot, AllReduce latency is reduced by up to 3x. Achieving optimal inference performance requires careful workload analysis and a deep understanding of performance bottlenecks. By gaining that understanding – both through internal engineering work and through close collaboration with external developers and researchers – we can quickly and frequently optimize many aspects of our platform to deliver great performance for users. As we continue to identify and implement new performance optimizations – some may be extensive, others narrower in scope – we will provide regular updates on these optimizations, sharing both the technical motivation and the quantified benefits.
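MultiShot itself lives inside TensorRT-LLM and relies on NVSwitch multicast, but the algebra it builds on (AllReduce expressed as a ReduceScatter followed by an AllGather) can be illustrated with standard collectives. The sketch below shows only that two-phase decomposition, with no multicast acceleration, and is meant to be launched with torchrun with one process per GPU.

```python
# Conceptual sketch: AllReduce decomposed into ReduceScatter + AllGather, the two-phase
# structure TensorRT-LLM MultiShot builds on. The real implementation additionally uses
# NVSwitch multicast; this version uses standard NCCL collectives only.
# Launch with: torchrun --nproc_per_node=<num_gpus> two_shot_allreduce.py
import os
import torch
import torch.distributed as dist

def two_shot_allreduce(x: torch.Tensor) -> torch.Tensor:
    # Assumes the tensor length is divisible by the number of GPUs.
    world_size = dist.get_world_size()
    # Shot 1 (ReduceScatter): each rank ends up with the summed values for its 1/N slice.
    slices = list(x.chunk(world_size))
    my_slice = torch.empty_like(slices[dist.get_rank()])
    dist.reduce_scatter(my_slice, slices, op=dist.ReduceOp.SUM)
    # Shot 2 (AllGather): every rank shares its summed slice so all ranks hold the full result.
    gathered = [torch.empty_like(my_slice) for _ in range(world_size)]
    dist.all_gather(gathered, my_slice)
    return torch.cat(gathered)

if __name__ == "__main__":
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    x = torch.ones(1024, device="cuda") * (dist.get_rank() + 1)
    y = two_shot_allreduce(x)
    # Every element should equal 1 + 2 + ... + world_size on every rank.
    print(f"rank {dist.get_rank()}: {y[0].item()}")
    dist.destroy_process_group()
```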
https://developer.nvidia.com/zh-cn/blog/3x-faster-allreduce-with-nvswitch-and-tensorrt-llm-multishot/
NVSwitch 和 TensorRT-LLM MultiShot 共同加速 AllReduce 速度达 3 倍
在用户数量可能在数百到数十万之间波动,且输入序列长度随每个请求而变化的生产环境中,部署生成式 AI 工作负载会面临独特的挑战。要在这些环境中实现低延迟推理,无论 GPU 生成方式或显存容量如何,多 GPU 设置都是必需的。为了提高生产级设置中的推理性能,我们很高兴推出 TensorRT-LLM Multi-shot,这是一种新的多 GPU 通信协议,利用 NVIDIA NVLink Switch 可将通信速度大幅提升高达 3 倍。本博客概述了这一新功能,以及它如何帮助开发者和解决方案架构师克服传统多 GPU 通信方法的限制。 传统 AllReduce 算法面临的挑战 对于低延迟推理,无论单个 GPU 的显存容量如何,多 GPU 都至关重要。但是,在低并发情况下,GPU 花在交换数据上的时间可能超过花在计算上的时间。为了获得最佳性能, 高效的 AllReduce 操作 –结合每个参与其中的 GPU 的部分结果的集合操作–至关重要。 传统方法使用基于环的算法,其中部分值在环形的 GPU 之间传递。每个 GPU 都贡献其值并将结果传递给其邻居。该过程重复 2N-2 次,其中 N 是协同工作的 GPU 数量,在该过程结束时,每个 GPU 都具有相同的总和值。需要对环进行第二次传递,以将总和值从最后一个 GPU 传播到其他 GPU。 Ring 方法可在每个通信步骤中高效利用可用的 GPU 到 GPU 带宽,但随着 GPU 数量的增加,步骤数也会增加。这会增加延迟,因为所有 GPU 都需要在 Ring 的每个步骤中保持同步。这些同步延迟会显著增加延迟开销,并可能导致难以满足更严格的延迟目标。 Ring AllReduce 算法描述如下: 环形算法:GPU-1 → GPU-2 → … → GPU-N → GPU-1 → GPU-2 → … → GPU-(N-1) 2N-2 步长,每步具有完整的 Tensor send/recv 延迟:2N-2 通信步骤。(N:GPU 的数量) 流量:(4N-4)/N 张量的 send/recv 字节数 使用 TensorRT-LLM MultiShot 应对 AllReduce 通信挑战 TensorRT-LLM MultiShot 是一种新算法,可利用 NVSwitch 中的组播,将 Ring AllReduce 的 O(N) 延迟最多降低 3 倍。组播是 NVSwitch 中的硬件加速功能,允许一个 GPU 发送数据一次,并将该数据同时发送到所有其他 GPU,从而将通信步骤的数量减少到两个 GPU 间的同步,同时保持带宽效率。如果没有 NVSwitch,这将占用 N 倍的通信带宽。 TensorRT-LLM Multishot 将 AllReduce 分离为 ReduceScatter 操作,然后是 AllGather 操作(有关集合操作的更多详细说明,请参阅 此文档 )。 每个 GPU 仅负责累积结果张量的一部分。 第一步(或“shot”)涉及每个 GPU 将张量的不同切片发送到负责累积该张量切片的相应 GPU。 在本地累加后,每个 GPU 现在都有正确的和累加器,用于其独特的输出切片。 在第二步 (或“shot”) 中,每个 GPU 使用 NVSwitch 组播功能将结果切片广播到所有其他 GPU。这可最大限度地减少 NVSwitch 本身执行数据放大所需的每个 GPU 带宽;每个 GPU 一步发送 1/N 数据并接收完整的结果张量。 无论执行张量并行推理的 GPU 数量如何,整个操作仅需两次通信步骤。 TensorRT-LLM MultiShot 算法:GPU_N 发送切片、计算切片和、在单个组播运算中广播结果。 延迟:2 个通信步骤(与 GPU 数量无关) 流量:2 张量字节的 send/recv(与 GPU 数量无关) 为何如此重要 由于此算法只需要两个通信步骤,而不是 2N-2 (其中 N 表示 GPU 数量),因此 MultiShot 的速度几乎是 Ring AllReduce 的 3 倍。这种算法的优势在消息大小较小且并行度高的情况下尤为明显,而这正是需要最低延迟以获得出色的用户体验的场景。 这可用于降低最小延迟,或在给定延迟下提高吞吐量。在具有更激进的延迟阈值的场景中,这可能会导致 GPU 数量的超线性扩展。 图 1. 借助 TensorRT-LLM MultiShot,AllReduce 延迟最多可降低 3 倍。 实现最佳推理性能需要仔细的工作负载分析和对性能瓶颈的深入了解。通过内部工程工作以及与外部开发者和研究人员的密切合作,我们可以快速、频繁地优化平台的许多方面,为用户提供出色的性能。 随着我们继续识别和实施新的性能优化(一些可能是广泛的,另一些可能范围较窄),我们将定期提供有关这些优化的更新,提供技术动机和量化效益。
https://developer.nvidia.com/blog/accelerating-scientific-literature-reviews-with-nvidia-nim-microservices-for-llms/
Accelerating Scientific Literature Reviews with NVIDIA NIM Microservices for LLMs
A well-crafted systematic review is often the initial step for researchers exploring a scientific field. For scientists new to this field, it provides a structured overview of the domain. For experts, it refines their understanding and sparks new ideas. In 2024 alone, 218,650 review articles were indexed in the Web of Science database, highlighting the importance of these resources in research. Completing a systematic review significantly enhances a researcher’s knowledge base and their academic impact. However, traditional review writing requires collecting, reading, and summarizing large volumes of academic articles on a specific topic. Due to the time-consuming nature of this manual exercise, the scope of processed literature is often confined to dozens or a few hundred articles. Interdisciplinary content—frequently outside the researcher’s area of expertise—adds another layer of complexity. These challenges make it increasingly difficult to create comprehensive, reliable, and impactful systematic reviews. The advent of large language models (LLMs) offers a groundbreaking solution, enabling the rapid extraction and synthesis of information from extensive literature. Participating in the Generative AI Codefest Australia provided a unique opportunity to explore this idea with support from NVIDIA AI experts to leverage NVIDIA NIM microservices for accelerating literature reviews. This enabled the rapid testing and fine-tuning of several state-of-the-art LLMs for our literature analysis process. Testing the potential of LLMs for processing papers As a research group specializing in physiological ecology within the ARC Special Research Initiative Securing Antarctica’s Environmental Future (SAEF) , we embarked on writing a review of the literature on the global responses of non-vascular plants, such as moss or lichen, to wind. However, we quickly faced a challenge: many relevant articles on wind-plant interactions failed to explicitly mention these key words in their titles or abstracts, which are typically used as primary filters during literature screening. A comprehensive analysis of the topic required manually reading the full text of each article—a highly time-consuming process. We decided to explore the potential of using LLMs to extract content specifically related to wind-plant interactions from the articles. To achieve this, we implemented a simple Q&A application based on the LlaMa 3.1 8B Instruct NIM microservice (Figure 1). This enabled us to get an initial prototype quickly. This first prototype, processing the papers sequentially, was extremely useful to craft and optimize the prompts to extract key information from each article. Figure 1. Initial simple paper processing pipeline using an LLM NIM microservice at its core To validate the accuracy of extracted information, we initially manually validated the results. When no significant errors were found in the test dataset, we identified opportunities to further enhance the efficiency of key information extraction using LLMs (Figure 2). These include converting the papers from a PDF format to structured JSON; extracting images, tables, and charts; and using parallel processing to speed up the processing of papers. Figure 2. 
Enhanced paper processing pipeline incorporating NV-ingest and parallel processing of papers Enhancing the performance of LLMs for more efficient information extraction By using NVIDIA NIM microservices for LLMs and nv-ingest , we deployed LLMs and a data ingestion pipeline in our local environment with eight NVIDIA A100 80-GB GPUs. We also fine-tuned the models using low-rank adaptation (LoRA) to improve the accuracy of information extraction from the papers. We compiled a dataset of over 2K scientific articles related to the targeted research domain, sourced from the Web of Science and Scopus databases. Over a week during Generative AI Codefest, we focused on experimenting with various strategies to optimize the efficiency and accuracy of key information extraction from these articles. Best-performing model To determine the best-performing model, we tested a range of instruction-based and general-purpose LLMs from the NVIDIA API Catalog on a set of randomly selected articles. Each model was assessed for its accuracy and comprehensiveness in information extraction. Ultimately, we determined that Llama-3.1-8B-instruct was the most suitable for our needs. Processing speed We developed a Q&A module using streamlit to answer user-defined research-specific questions. To further improve processing speed, we implemented parallel processing of the prompts sent to the LLM engine and used KV-caching, which significantly accelerated the computation time by a factor of 6x when using 16 threads. Extraction content types We used nv-ingest to extract content from the original PDFs, including text, figures, tables, and charts, into structured JSON files. This enabled information extraction beyond text content, enabling a more comprehensive context for answering the questions. Using JSON files instead of the original PDF files during inference also had a significant effect on lowering the processing time, by an additional factor of 4.25x. Results Thanks to these improvements, we significantly reduced the time required to extract information from our database of papers, with a total speedup of 25.25x compared to our initial implementation. Processing the entirety of our database now takes less than 30 minutes using two A100 80-GB GPUs and 16 threads. Compared to the traditional approach of manually reading and analyzing an entire article, which typically takes about one hour, this optimized workflow achieved a time savings of over 99% (Figure 3). Figure 3. Required time for information extraction and categorization In addition to information extraction, we also investigated automated article classification. By fine-tuning Llama-3.1-8b-Instruct with a LoRA adapter on a sample of papers manually annotated, we successfully automated the classification process, demonstrating its effectiveness in organizing complex datasets of scientific papers. The results indicated that each article required only 2 seconds for classification, compared to the 300+ seconds required on average for a manual classification by an experienced reader (Figure 3). Future directions We’re currently refining our workflow to further accelerate the processing. We’re also improving our user interface to provide easy access to more locally deployed LLMs and enhance accessibility by other researchers (Figure 4). We plan to implement the NVIDIA AI Blueprint for multimodal PDF data extraction to identify the most relevant articles for each research question and interact with those papers. 
Beyond technical improvements, we are aiming to organize extracted key information for each question and generate visualizations (such as maps with location of the experiments mentioned in the papers) to accelerate further the writing of the systematic review. Figure 4. Prototype interface undergoing in-house testing at SAEF Summary Our work at the Generative AI Codefest demonstrated the transformative potential of AI in accelerating systematic literature reviews. With NVIDIA NIM, we quickly moved from an idea to a working solution that significantly improves the process of information extraction from scientific papers. This experience highlights how AI can streamline research workflows, enabling faster and more comprehensive insights. LLMs have the potential to facilitate interdisciplinary research, empowering scientists to explore complex, multi-domain research fields more effectively. Moving forward, we aim to refine these methods and tools, ensuring that they are accessible and scalable for future research across diverse topics.
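For readers who want to try a similar setup, the core pattern (one targeted question per ingested paper, fanned out across worker threads to a locally deployed Llama 3.1 8B Instruct NIM) can be sketched as follows. This is not the SAEF team's code: the endpoint, model name, file layout, assumed nv-ingest output structure, and prompt wording are all placeholders.

```python
# Sketch: asking a targeted question about each ingested paper in parallel against a local
# Llama 3.1 8B Instruct NIM (OpenAI-compatible API). Endpoint, paths, prompts, and the
# assumed ingestion JSON structure are illustrative placeholders.
import json
import pathlib
from concurrent.futures import ThreadPoolExecutor, as_completed
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")  # placeholder NIM endpoint

def ask_about_paper(paper_text: str) -> str:
    question = (
        "From the article below, summarize any reported effects of wind on non-vascular "
        "plants such as mosses or lichens. If wind is not discussed, answer 'not discussed'.\n\n"
        "ARTICLE:\n" + paper_text[:12000]  # crude truncation to stay within the context window
    )
    response = client.chat.completions.create(
        model="meta/llama-3.1-8b-instruct",
        messages=[{"role": "user", "content": question}],
        temperature=0.2,
        max_tokens=512,
    )
    return response.choices[0].message.content

def process_paper(path: pathlib.Path) -> tuple[str, str]:
    doc = json.loads(path.read_text())
    # Concatenate text segments from the ingestion output (structure assumed here).
    text = " ".join(seg.get("text", "") for seg in doc if isinstance(seg, dict))
    return path.name, ask_about_paper(text)

papers = sorted(pathlib.Path("papers_json").glob("*.json"))  # placeholder output folder
results = {}
with ThreadPoolExecutor(max_workers=16) as pool:  # mirrors the 16-thread setup described above
    futures = [pool.submit(process_paper, p) for p in papers]
    for fut in as_completed(futures):
        name, answer = fut.result()
        results[name] = answer

print(f"Processed {len(results)} papers")
```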
https://developer.nvidia.com/zh-cn/blog/accelerating-scientific-literature-reviews-with-nvidia-nim-microservices-for-llms/
借助适用于 LLM 的 NVIDIA NIM 微服务加速科学文献审查
精心设计的系统审查通常是研究人员探索科学领域的第一步。对于刚刚接触该领域的科学家,它提供了该领域的结构化概述。对于专家而言,它可以细化他们的理解并激发新的想法。在 2024 年, Web of Science 数据库收录了 218,650 篇评测文章,凸显了这些资源在研究中的重要性。 完成系统审查可显著增强研究人员的知识库及其学术影响。然而,传统的评论文章需要收集、阅读和总结有关特定主题的大量学术文章。由于这项人工练习非常耗时,处理过的文献的范围通常局限于几十篇或几百篇文章。跨学科内容 (通常超出研究人员的专业领域) 又增加了一层复杂性。 这些挑战使得创建全面、可靠且有影响力的系统审查变得越来越困难。 大语言模型 (LLMs) 的出现提供了一种突破性的解决方案,能够从大量文献中快速提取和合成信息。参加 Generative AI Codefest Australia 提供了一个独特的机会,让我们在 NVIDIA AI 专家的支持下探索这一想法,利用 NVIDIA NIM 微服务加速文献审查。这为我们的文献分析流程快速测试和微调了多个先进的 LLMs。 测试 LLMs 在处理论文方面的潜力 作为 ARC 特殊研究计划 (ARC Special Research Initiative Securing Antarctica’s Environmental Future (SAEF)) 中的一个专门研究生理生态学的研究小组,我们着手撰写了一篇关于苔藓或地衣等非血管性植物对风的全球反应的文献回顾。 然而,我们很快就面临了一个挑战:许多关于风力电厂相互作用的相关文章都未能在标题或摘要中明确提及这些关键词,而在文献筛选过程中,这些关键词通常用作主要的过滤器。要对主题进行全面分析,需要手动阅读每篇文章的完整文本,这一过程非常耗时。 我们决定探索使用 LLMs 从文章中提取与风电场交互特别相关的内容的可能性。为此,我们 基于 LlaMa 3.1 8B Instruct NIM 微服务 实施了一个简单的问答应用(图 1)。这使我们能够快速获得初始原型。 第一个原型按顺序处理论文,在制作和优化提示以从每篇文章中提取关键信息方面非常有用。 图 1、以 LLM NIM 微服务为核心的初始简单论文处理流程 为了验证提取信息的准确性,我们首先手动验证结果。当测试数据集中未发现重大错误时,我们确定了使用 LLM 进一步提高关键信息提取效率的机会 (图 2)。其中包括将论文从 PDF 格式转换为结构化 JSON;提取图像、表格和图表;以及使用并行处理来加速论文处理速度。 图 2、增强型论文处理流程,整合 NV-ingest 和论文的并行处理 增强 LLMs 的性能,实现更高效的信息提取 通过将 NVIDIA NIM 微服务用于 LLM 和 nv-ingest ,我们在本地环境中部署了 LLM 和数据提取管道,并配备了 8 个 NVIDIA A100 80-GB GPU。我们还使用低等级适应性(LoRA)对模型进行了微调,以提高从论文中提取信息的准确性。 我们编译了一个数据集,其中包含与目标研究领域相关的 2000 多篇科学文章,这些文章均源自 Web of Science 和 Scopus 数据库。在 Generative AI Codefest 期间,我们在一周内专注于尝试各种策略,以优化从这些文章中提取关键信息的效率和准确性。 性能最佳的模型 为了确定性能最佳的模型,我们在一组随机选择的文章中测试了 NVIDIA API Catalog 中的一系列基于指令的和通用的 LLM。评估了每个模型在信息提取方面的准确性和全面性。 最终,我们确定 Llama-3.1-8B-instruct 最适合我们的需求。 处理速度 我们开发了一个问答模块,使用 Streamlit 回答用户定义的研究特定问题。 为进一步提高处理速度,我们对发送到 LLM 引擎的提示实施了并行处理,并使用了 KV 缓存,这使得使用 16 个线程时的计算时间显著缩短了 6 倍。 提取内容类型 我们使用 nv-ingest 将原始 PDF 中的内容 (包括文本、图形、表格和图表) 提取到结构化 JSON 文件中。这使得信息提取功能超越了文本内容,从而为回答问题提供更全面的环境。 在推理过程中使用 JSON 文件而非原始 PDF 文件也对将处理时间额外缩短 4.25 倍产生了重大影响。 成果 得益于这些改进,我们显著缩短了从论文数据库中提取信息所需的时间,与初始实施相比,总速度提高了 25.25 倍。 现在,使用两个 A100 80-GB GPU 和 16 个线程处理整个数据库只需不到 30 分钟。 与手动阅读和分析整篇文章的传统方法 (通常需要大约 1 小时) 相比,这种经过优化的工作流程可节省 99% 以上的时间 (图 3)。 图 3. 信息提取和分类所需的时间 除了信息提取,我们还研究了自动化文章分类。通过在手动标注的论文样本上使用 LoRA 适配器微调 Llama-3.1-8b-Instruct,我们成功实现了分类过程的自动化,证明了其在组织复杂的科学论文数据集方面的有效性。 结果表明,与经验丰富的读者手动分类平均所需的 300 多秒相比,每篇文章只需 2 秒即可完成分类 (图 3)。 未来方向 我们目前正在改进工作流程,以进一步加速处理。我们还在改进用户界面,以便轻松访问更多本地部署的 LLM,并增强其他研究人员的可访问性(图 4)。 我们计划实施用于多模态 PDF 数据提取的 NVIDIA AI Blueprint,以确定与每个研究问题最相关的文章,并与这些论文进行交互。 除了技术改进,我们的目标是组织提取的每个问题的关键信息,并生成可视化效果 (例如,包含论文中提到的实验位置的地图),以进一步加速系统审查的编写。 图 4、原型接口正在 SAEF 进行内部测试 总结 我们在生成式 AI Codefest 上的工作展示了 AI 在加速系统文献审查方面的变革潜力。借助 NVIDIA NIM,我们迅速从想法转变为可行的解决方案,从而显著改善从科学论文中提取信息的过程。 此体验凸显了 AI 如何简化研究工作流程,从而更快获得更全面的见解。LLMs 具有促进跨学科研究的潜力,可助力科学家更有效地探索复杂的多领域研究领域。 展望未来,我们的目标是改进这些方法和工具,确保其可访问性和可扩展性,以供未来跨各种主题的研究使用。
https://developer.nvidia.com/blog/vision-language-model-prompt-engineering-guide-for-image-and-video-understanding/
Vision Language Model Prompt Engineering Guide for Image and Video Understanding
Vision language models (VLMs) are evolving at a breakneck speed. In 2020, the first VLMs revolutionized the generative AI landscape by bringing visual understanding to large language models (LLMs) through the use of a vision encoder . These initial VLMs were limited in their abilities, only able to understand text and single image inputs. Fast-forward a few years and VLMs are now capable of understanding multi-image and video inputs to perform advanced vision-language tasks such as visual question-answering (VQA), captioning, search, and summarization. Figure 1. Suite of model customization tools in NVIDIA NeMo VLM accuracy can be improved for specific use cases by tuning the prompts and model weights. Fine-tuning the model weights can be done with efficient techniques such as PEFT but still requires sufficient data and compute. However, prompt engineering and in context learning can also be used to improve the output quality and is as simple as adjusting the text input to the model at runtime. This post walks you through the evolution of VLMs and how to effectively prompt a VLM for single image, multi-image, and video understanding use cases. For more information about VLMs and visual AI agents, register for the upcoming Vision for All: Unlocking Video Analytics with AI Agents webinar. Single-image understanding From a single image, a VLM is capable of identifying, describing, classifying, and reasoning over what is in the image. Some VLMs have advanced capabilities such as providing bounding box coordinates for certain objects, and translating text in the image. This works best for use cases where an image needs to be classified, captioned or described in text. Single-image capability can also be used to detect basic events occurring in a livestream. This can be accomplished by sampling one frame at a time from a livestream and providing it to the VLM along with a prompt describing what to look for. This works especially well when the types of events to detect are large, clearly visible, and slow-moving. The VLM can also be prompted to output in a structured format such as JSON so that the response can easily be parsed and sent to a database or a notification service. For example, we provided Llama-3.2-90b-vision-instruct with the following prompt and image (Figure 2). Figure 2. Input image to the VLM for question and answering Prompt: Is there a fire truck? Is there a fire? Are there firefighters? Output the answer to each question in JSON format. VLM response: [ { "question": "Is there a fire truck?", "answer": "True" },    { "question": "Is there a fire?", "answer": "True" },    { "question": "Are there firefighters?", "answer": "True" }  ] For each of the questions, the VLM was able to accurately respond and output it in a structured format to be used for downstream tasks. While this method enables the VLM to operate on video input by sampling one frame at a time, the VLM itself does not understand the full context of the video, limiting it to simple use cases. Sampling one frame at a time does not give it the temporal understanding needed to detect actions or trends that occur over time. This can be improved with multi-image and video VLMs. Multi-image understanding Many VLMs can accept multiple images as input allowing the model to compare, contrast, and learn from input images. VLMs generally have good zero-shot capability across a wide range of use cases. 
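Before going further into multi-image prompting, here is a rough sketch of how the single-image, JSON-formatted question above could be sent to an OpenAI-compatible VLM endpoint (for example, a locally deployed vision NIM). The endpoint URL, model name, and the image_url convention are assumptions; some deployments instead expect the image embedded in the prompt text, so check the request format of your endpoint.

```python
# Sketch: single-image visual question answering with a structured JSON answer, sent to an
# OpenAI-compatible VLM endpoint. Endpoint, model id, and the image_url convention are
# assumptions; adjust to your deployment's request format.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")  # placeholder endpoint

with open("scene.jpg", "rb") as f:  # placeholder image file
    image_b64 = base64.b64encode(f.read()).decode()

prompt = (
    "Is there a fire truck? Is there a fire? Are there firefighters? "
    "Output the answer to each question in JSON format."
)

response = client.chat.completions.create(
    model="meta/llama-3.2-90b-vision-instruct",  # placeholder model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
    max_tokens=256,
)

# Because the prompt requests JSON, the reply can usually be parsed directly downstream.
print(response.choices[0].message.content)
```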
However, when trying to obtain more precise answers from a VLM on domain-specific use cases, VLMs can struggle because they lack the full context required to accurately respond to the user prompts. To better understand this, we used a VLM to estimate the stock level of products. This is an important retail use case as it could autonomously monitor and alert employees when products are running low in a store to avoid lost revenues from stock-outs. We provided a single image of a table (Figure 3) that should be fully stocked with several snack products to Cosmos Nemotron 34B and asked it to estimate the stock level. Figure 3. Single-image VLM input for retail stock level detection Prompt: Estimate the stock level of the snack table on a scale of 0–100. VLM response: The snack table appears to be well-stocked, with a variety of snacks and treats. The shelves are filled with an assortment of items, including chips, cookies, and other snacks. The table is also neatly organized, with each item placed in a designated spot. Based on the visual, I would rate the stock level at around 80-90, indicating that the table is well-stocked and ready to cater to a crowd. The model rates it as 80–90% stocked even though it’s quite clear to the human eye that the stock level is low. From the model’s perspective, it does not know what the table looked like beforehand when looking at a single image. Using the Cosmos Nemotron 34B multi-image capability, we provided another image of the snack table at a 100% stock level (Figure 4), giving  the model extra context to produce a more accurate estimate. Figure 4. In-context learning input to VLM for retail stock level detection Prompt: First compare and contrast the stock level of the two images. Then generate an estimate for each image of the stock level on a scale of 0–100%. VLM response: The first image shows a well-stocked display with a variety of snacks, while the second image shows a more sparse display with only a few items. Based on the visual comparison, I estimate that the first image has a stock level of around 80-90%, while the second image has a stock level of around 20-30%. By providing the additional image as a reference, the VLM now estimates the stock level of the second image at 20–30%, which is significantly more accurate. Multi-image input can also be used in more complex multimodal RAG pipelines where example images are dynamically added to the prompt based on the user’s input. Multi-image VLMs can also be used for basic video understanding by sampling a small set of frames from a video and providing it to the VLM. However, many VLMs that accept multi-image input have limited context windows (10–20 frames) and may not have been trained on video data. To truly understand video input, we recommend using a VLM trained on video-specific datasets and a long context window. Video understanding The latest VLMs have full video understanding capabilities with large context lengths, frame compression and video training data. This enables VLMs to ingest longer videos and have improved temporal understanding: the ability to understand actions and trends over time. The first example showed how to use a single frame from a livestream to detect a fire but that same method wouldn’t be able to tell you if the fire is growing or getting smaller because it lacks the temporal context. A VLM with long context and video understanding processes many frames across time to understand if the fire is growing or not. 
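The frame-sampling approach mentioned above can be sketched as follows before moving on to the natively video-capable Cosmos Nemotron examples below. The endpoint, model id, and request format are assumptions, and the OpenCV-based sampling is just one simple way to pick evenly spaced frames.

```python
# Sketch: sampling a handful of evenly spaced frames from a video and sending them as a
# multi-image prompt to an OpenAI-compatible VLM endpoint. Endpoint and model id are assumed.
import base64
import cv2
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")  # placeholder endpoint

def sample_frames(video_path: str, num_frames: int = 8) -> list[str]:
    """Return evenly spaced frames encoded as base64 JPEG data URLs."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    urls = []
    for i in range(num_frames):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i * total / num_frames))
        ok, frame = cap.read()
        if not ok:
            break
        ok, buf = cv2.imencode(".jpg", frame)
        if ok:
            urls.append("data:image/jpeg;base64," + base64.b64encode(buf).decode())
    cap.release()
    return urls

content = [{"type": "text", "text": "Describe what happens across these frames, in order."}]
content += [
    {"type": "image_url", "image_url": {"url": u}} for u in sample_frames("livestream_clip.mp4")
]

response = client.chat.completions.create(
    model="nvidia/cosmos-nemotron-34b",  # placeholder model id
    messages=[{"role": "user", "content": content}],
    max_tokens=300,
)
print(response.choices[0].message.content)
```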
Here are some examples from the Cosmos Nemotron model families. Sequential visual understanding VLMs with sequential understanding relies on processing frames in a sequence, enabling them to capture the progression of actions and events over time. Unlike multi-image models, which process a limited number of frames without explicit temporal connections, sequential video models incorporate temporal dynamics by analyzing frames in an orderly sequence. This is helpful in situations where you want to capture the causality of events in a video, as it enables the model to track the progression of actions over time and establish cause-effect relationships. The next few examples are based on the following video where a worker drops a box in a warehouse. We discuss a few prompts and responses. Figure 5. Input warehouse video to the VLM for scene understanding Prompt: What happened in this video? VLM response: The video shows a warehouse worker in a yellow safety vest and hard hat performing various tasks. The worker is seen walking through the warehouse, carrying boxes, and interacting with the shelving units. The warehouse has green metal shelving with labels like "C", "D", "E", and "F" on the aisles. The worker appears to be in the process of organizing or retrieving items from the shelves. At one point, they're seen bending down to pick up a box from the floor. The worker's movements suggest they're actively engaged in their duties, possibly sorting, stocking, or preparing items for shipment. The warehouse environment is well-lit, with fluorescent lighting illuminating the space. The worker's safety gear, including the reflective vest and hard hat, indicates adherence to safety protocols in this industrial setting. Overall, the video captures a typical day in the life of a warehouse worker, showcasing the physical nature of the job and the organized layout of the storage facility. The model gave a detailed response with specifics about the environment such as aisle labels and lighting conditions but failed to correctly identify that the worker dropped a box. Now try to ask this question directly. Prompt: Did the worker drop any box? VLM response: Yes, the worker did drop a box. In the video, we can see a box lying on the floor in front of the worker. This suggests that the worker has accidentally dropped the box while handling it. The VLM is now able to deduce that the worker might have dropped the box. This suggests that generic prompts might not capture all the nuances correctly unless you direct the VLM to look for something specific. Modify the prompt a little. Prompt: Elaborate on the visual and narrative elements of this warehouse video in detail. Highlight all anomalies or hazards if present. VLM response: The video captures a warehouse with tall metal shelving units labeled with letters 'C', 'D', 'E', and 'F'. The shelves are stocked with various items, including boxes and packages. The floor is concrete, and the lighting is artificial, typical of an indoor industrial setting. A worker wearing a high-visibility vest, a hard hat, and dark pants is seen walking down the aisle between the shelves. The worker is carrying a cardboard box and then drops the box on the floor. The model is now capturing all the important details of the scene along with important events, such as workers dropping a box. The detailed response is useful if you want to perform Q&A with the video later. 
VLMs with video understanding are also able to address directional questions, as they can understand actions over time. Prompt: From which direction did the worker enter the scene? VLM response: The worker entered the scene from the right side. However, these models often lack precise temporal localization, which can limit their ability to pinpoint when specific events occur. Prompt: When did the worker drop the box? VLM response: The worker dropped the box in the second image. Temporal localization VLMs incorporating Localization Interpretable Temporal Attention (LITA) or similar temporal localization techniques elevate video understanding by explicitly learning when and where critical events occur. The models achieve this by introducing temporal attention mechanisms that focus on key segments of a video, aligning visual and textual modalities more effectively over time. This helps answer the “when” questions in the videos. Prompt: When did the worker drop the box? VLM response: 0:04-0:06 The worker dropped the box between 4s and 6s. During this time, the worker is seen dropping the box on the ground. The optimal prompt format for a VLM depends on the model’s architecture and the nature of the caption pairs used during training. Different training datasets influence how a VLM interprets the prompt. Conclusion This post walked through how VLMs have evolved from supporting only single-image input to being capable of complex temporal reasoning on long video inputs. To get started with VLMs, visit build.nvidia.com and try out some of the prompts shown in this post. For technical questions, see the Visual AI Agent forum . VLMs can be used to build various visual AI agents. Explore one of the sample vision AI workflows on the /NVIDIA/metropolis-nim-workflows GitHub repo with no GPU required. To build a video analytics AI agent, try the AI blueprint for video search and summarization . For more information about VLMs and visual AI agents, register for the upcoming Vision for All: Unlocking Video Analytics with AI Agents webinar. For more information about LLM prompting, see An Introduction to Large Language Models: Prompt Engineering and P-Tuning .
https://developer.nvidia.com/zh-cn/blog/vision-language-model-prompt-engineering-guide-for-image-and-video-understanding/
图像和视频理解的视觉语言模型提示工程实践指南
视觉语言模型 (VLMs) 正在以极快的速度发展。2020 年,首批 VLMs 通过使用视觉编码器将视觉理解引入大语言模型 (LLMs) ,彻底改变了生成式 AI 格局。这些初始 VLMs 能力有限,只能理解文本和单张图像输入。 几年后,VLM 现在能够理解多图像和视频输入,以执行高级视觉语言任务,例如视觉问答 (VQA)、字幕、搜索和摘要。 图 1、NVIDIA NeMo 中的模型自定义工具套件 通过调整提示和模型权重,可以提高特定用例的 VLM 准确性。可以使用 PEFT 等高效技术微调模型权重,但仍需要足够的数据和计算。 但是, prompt engineering 和上下文学习也可用于提高输出质量,并且与在运行时调整模型的文本输入一样简单。 本文将为您介绍 VLM 的演变,以及如何有效地提示 VLM 以用于了解单张图像、多张图像和视频的用例。如需了解有关 VLM 和视觉 AI 智能体的更多信息,请注册参加即将举办的 Vision for All:Unlocking Video Analytics with AI Agents 在线研讨会 。 单图像理解 从单张图像来看,VLM 能够识别、描述、分类和推理图像中的内容。一些 VLM 具有高级功能,例如为某些对象提供边界框坐标,以及翻译图像中的文本。这最适合需要对图像进行分类、添加字幕或以文本进行描述的用例。 单图像功能还可用于检测直播中发生的基本事件。这可以通过以下方式实现:从直播中一次采样一帧,并将其提供给 VLM,同时提供描述要查找的内容的提示。当要检测的事件类型较大、清晰可见且移动缓慢时,此方法的效果尤为出色。 系统还可以提示 VLM 以 JSON 等结构化格式输出,以便轻松解析响应并将其发送至数据库或通知服务。 例如,我们为 Llama-3.2-90b-vision-instruct 提供了以下提示和图像(图 2)。 图 2、将图像输入 VLM 以进行问答 提示:是否有消防车?是否有火灾?有消防人员吗?输出每个问题的答案,以 JSON 格式。 VLM 响应: [ { "question": "Is there a fire truck?", "answer": "True" }, { "question": "Is there a fire?", "answer": "True" }, { "question": "Are there firefighters?", "answer": "True" } ] 对于每个问题,VLM 能够准确回答并以结构化格式输出,以便用于下游任务。 虽然此方法允许 VLM 通过一次采样一帧来操作视频输入,但 VLM 本身无法理解视频的完整上下文,将其局限于简单的用例。一次对一帧进行采样无法为其提供检测随时间变化的动作或趋势所需的时间理解。这可以通过多图像和视频 VLM 得到改善。 多图像理解 许多 VLM 可以接受多张图像作为输入,允许模型比较、对比并从输入图像中学习。 VLM 通常在各种用例中都具有良好的零样本击能力。但是,在尝试从 VLM 获取有关特定领域用例的更精确答案时,VLM 可能会遇到困难,因为它们缺乏准确响应用户提示所需的完整上下文。 为了更好地理解这一点,我们使用 VLM 来估计产品的库存水平。这是一个重要的零售用例,因为它可以在商店中的产品运行不足时自动监控和提醒员工,以避免因缺货而损失的收入。 我们向 Cosmos Nemotron 34B 提供了一张表格 (图 3) 的单一图像,该表格应与多种零食产品一起充分库存,并要求其估算库存水平。 图 3. 用于零售库存水平检测的单图像 VLM 输入 提示:按 0 – 100 的比例估算零食表的库存量。 VLM 响应: The snack table appears to be well-stocked, with a variety of snacks and treats. The shelves are filled with an assortment of items, including chips, cookies, and other snacks. The table is also neatly organized, with each item placed in a designated spot. Based on the visual, I would rate the stock level at around 80-90, indicating that the table is well-stocked and ready to cater to a crowd. 该模型将其评为 80–90% 的库存,即使肉眼很清楚库存水平很低。从模型的角度来看,它不知道在查看单个图像之前表格的样子。 通过使用 Cosmos Nemotron 34B 多图像功能,我们提供了另一张 100%库存水平的零食表图像 (图 4),为模型提供了额外的背景信息,以便生成更准确的估计。 图 4. 为 VLM 提供情境学习输入,用于零售库存水平检测 提示:首先比较两张图像的库存水平。然后,按 0 – 100% 的比例对库存量的每张图像生成估计值。 VLM 响应: The first image shows a well-stocked display with a variety of snacks, while the second image shows a more sparse display with only a few items. Based on the visual comparison, I estimate that the first image has a stock level of around 80-90%, while the second image has a stock level of around 20-30%. 现在,通过提供额外的图像作为参考,VLM 可将第二张图像的库存量估算为 20–30%,准确度明显更高。 多图像输入还可用于更复杂的多模态 RAG 工作流,其中会根据用户的输入将示例图像动态添加到提示中。多图像 VLM 还可以通过从视频中采样一小部分帧并将其提供给 VLM 来用于基本的视频理解。 但是,许多接受多图像输入的 VLM 的上下文窗口有限(10–20 帧),并且可能未基于视频数据进行训练。为了真正理解视频输入,我们建议使用基于视频特定数据集训练的 VLM 和长上下文窗口。 视频理解 最新的 VLM 具有完整的视频理解功能,具有大上下文长度、帧压缩和视频训练数据。这使得 VLM 能够提取更长的视频,并提高时间理解能力:能够理解一段时间内的行动和趋势。 第一个示例展示了如何使用直播中的单帧来检测火灾,但由于缺乏时间背景,相同的方法无法告诉您火灾是在增大还是变小。 具有长上下文和视频理解的 VLM 会随着时间的推移处理多个帧,以了解火势是否在增长。以下是 Cosmos Nemotron 模型系列的一些示例。 顺序视觉理解 具有顺序理解的 VLMs 依赖于按序列处理帧,使其能够捕捉动作和事件随时间推移的进展。与多图像模型相比,它们在处理有限数量的帧时没有显式时间连接,而顺序视频模型则通过按顺序分析帧来整合时间动力学。 这在您想要在视频中捕捉事件因果关系的情况下非常有用,因为它使模型能够跟踪动作在一段时间内的进展情况,并建立因果关系。 接下来的几个示例基于以下视频,展示了一名工人在仓库中丢弃箱子的过程。我们将讨论一些提示和响应。 图 5、将仓库视频输入到 VLM 以进行场景理解 提示:本视频中发生了什么? VLM 响应: The video shows a warehouse worker in a yellow safety vest and hard hat performing various tasks. The worker is seen walking through the warehouse, carrying boxes, and interacting with the shelving units. The warehouse has green metal shelving with labels like "C", "D", "E", and "F" on the aisles. 
The worker appears to be in the process of organizing or retrieving items from the shelves. At one point, they're seen bending down to pick up a box from the floor. The worker's movements suggest they're actively engaged in their duties, possibly sorting, stocking, or preparing items for shipment. The warehouse environment is well-lit, with fluorescent lighting illuminating the space. The worker's safety gear, including the reflective vest and hard hat, indicates adherence to safety protocols in this industrial setting. Overall, the video captures a typical day in the life of a warehouse worker, showcasing the physical nature of the job and the organized layout of the storage facility. 该模型提供了详细的环境细节 (例如通道标签和照明条件),但无法正确识别工人掉落的箱子。现在,尝试直接问这个问题。Prompt:工作人员是否丢弃了任何箱子? VLM 响应: Yes, the worker did drop a box. In the video, we can see a box lying on the floor in front of the worker. This suggests that the worker has accidentally dropped the box while handling it. 现在,VLM 可以推断出工作者可能丢弃了箱子。这表明,除非您指示 VLM 查找特定内容,否则通用提示可能无法正确捕捉所有细微差别。稍微修改一下提示。 提示:详细介绍此仓库视频的视觉和叙事元素。突出显示所有异常或危险(如果存在)。 VLM 响应: The video captures a warehouse with tall metal shelving units labeled with letters 'C', 'D', 'E', and 'F'. The shelves are stocked with various items, including boxes and packages. The floor is concrete, and the lighting is artificial, typical of an indoor industrial setting. A worker wearing a high-visibility vest, a hard hat, and dark pants is seen walking down the aisle between the shelves. The worker is carrying a cardboard box and then drops the box on the floor. 现在,该模型可以捕捉场景的所有重要细节以及重要事件,例如工人丢箱。如果您想稍后在视频中执行 Q&A,详细的回答非常有用。具有视频理解功能的 VLM 还能够解决定向问题,因为它们可以理解随时间推移的动作。 提示:工作者是从哪个方向进入场景的? VLM 响应: The worker entered the scene from the right side. 但是,这些模型通常缺乏精确的时间定位,这可能会限制其确定特定事件何时发生的能力。 提示:工人是什么时候丢弃盒子的? VLM 响应: The worker dropped the box in the second image. 时间定位 结合 Localization Interpretable Temporal Attention (LITA) 或类似时间定位技术的 VLM 通过显式学习关键事件发生的时间和地点来提高视频理解能力。这些模型通过引入专注于视频关键片段的时间注意力机制,随着时间的推移更有效地调整视觉和文本模式来实现这一目标。这有助于回答视频中的“何时”问题。 提示: 工人是什么时候丢弃盒子的? VLM 响应: 0:04-0:06 The worker dropped the box between 4s and 6s. During this time, the worker is seen dropping the box on the ground. VLM 的最佳提示格式取决于模型的架构和训练期间使用的描述对的性质。不同的训练数据集会影响 VLM 解释提示的方式。 结束语 本文介绍了 VLM 如何从仅支持单图像输入发展为能够对长视频输入进行复杂的时间推理。要开始使用 VLM,请访问 build.nvidia.com 并尝试本博文中显示的一些提示。有关技术问题,请参阅 Visual AI Agent 论坛 。 VLM 可用于构建各种视觉 AI 智能体。无需 GPU,即可在 /NVIDIA/metropolis-nim-workflows GitHub 存储库上探索视觉 AI 工作流示例之一。要构建视频分析 AI 智能体,请试用 NVIDIA AI Blueprint 中的视频搜索和摘要蓝图。 如需了解有关 VLM 和视觉 AI 智能体的更多信息,请注册参加即将举办的 Vision for All:Unlocking Video Analytics with AI Agents 网络研讨会。 有关 LLM 提示的更多信息,请参阅《 An Introduction to Large Language Models: Prompt Engineering and P-Tuning 》。
https://developer.nvidia.com/blog/ai-for-climate-energy-and-ecosystem-resilience-at-nvidia-gtc-2025/
AI for Climate, Energy, and Ecosystem Resilience at NVIDIA GTC 2025
From mitigating climate change to improving disaster response and environmental monitoring, AI is reshaping how we tackle critical global challenges. Advancements in fast, high-resolution climate forecasting, real-time monitoring, and digital twins are equipping scientists, policy-makers, and industry leaders with data-driven tools to understand, plan for, and respond to a warming planet. At NVIDIA GTC 2025 , running March 17–21, thought leaders, scientists, developers, and innovators will highlight how AI is helping shape a more sustainable and resilient future. The following sessions showcase AI’s role in climate forecasting, disaster mitigation, and conservation efforts, helping communities adapt to an increasingly unpredictable world. Strengthening extreme weather predictions and disaster response As global warming intensifies, and extreme weather events become more severe and frequent, communities need faster and more precise natural disaster predictions and response strategies. AI is improving flood, wildfire, and hurricane modeling, enabling earlier warnings and more effective mitigation efforts. Using large-scale weather simulations, satellite data analysis, and real-time predictive insights, AI is helping emergency responders and decision-makers minimize damage, improve disaster resilience, and keep communities safe. Sessions Transform Natural Catastrophe Risk Simulations with Advanced Computational Tools AXA, AWS, and NVIDIA use Earth-2 simulations to model extreme weather events at unprecedented scale and precision. These tools help insurers, urban planners, and governments prepare for disasters by improving risk estimation and resilience planning, reducing the financial and societal impact of natural disasters. Boosting Earth System Model Outputs Using Exascale Climate Emulators Researchers at KAUST and Saint Louis University use exascale climate emulators powered by NVIDIA GPUs to accelerate and refine earth system model outputs. Achieving ultra-high spatial resolution (3.5 km), these models enable more accurate storm and climate simulations, improving extreme weather predictions, and helping emergency responders, insurers, and policymakers improve disaster response planning and climate resilience efforts. Harnessing AI for Advanced Flood Risk Modelling and Mitigation Strategies JBA Risk Management uses AI-driven weather models, including Spherical Fourier Neural Operators, to simulate storm seasons with greater accuracy. Using NVIDIA NIM, these models improve flood risk assessment, disaster response planning, and infrastructure investment decisions, all while reducing computational costs. Near-Real-Time Fire Detection Leveraging Edge AI in Space: Transforming Thermal Earth Observation with NVIDIA Wildfires require rapid response to minimize devastation. OroraTech’s use of NVIDIA Jetson technology onboard CubeSats delivers fire detection within 60 seconds, providing first responders with critical real-time data to deploy resources more effectively. Autonomous Systems and Remote Sensing for Better Earth Data Autonomous aircraft equipped with AI are revolutionizing environmental monitoring by collecting high-resolution data at scale. MIT researchers showcase how these low-cost, long-range systems gather critical data for precision agriculture, disaster response, and environmental assessments, providing actionable insights at scale. 
Boosting climate forecasting for energy and infrastructure planning Accurate, long-term climate forecasting is essential for guiding infrastructure investments, optimizing energy grids, and enhancing sustainability planning. AI-powered models make climate projections faster and more precise, guiding solar energy integration, climate-resilient infrastructure development, and sustainability strategies. These high-resolution, data-driven insights help city planners and decision-makers anticipate evolving conditions for a more resilient future. Sessions The Role of AI and Accelerated Computing in Understanding and Mitigating Urban Climate Change Researchers are using AI, digital twins, and accelerated computing to model rising temperatures, air pollution, and extreme weather in cities. This session explores how generative AI and machine learning analyze satellite data, IoT sensors, and social insights to create high-resolution simulations of urban heat islands and pollution patterns. Experts will discuss how these insights help guide climate-resilient infrastructure, energy efficiency, and targeted adaptation strategies while addressing challenges like computational efficiency and equitable access to AI-driven solutions. Enhancing Photovoltaic Power Predictions with High-Resolution Weather Forecasting from NVIDIA Earth-2 NVIDIA Earth-2 is revolutionizing solar energy forecasting with ultra-precise, AI-driven weather predictions. In collaboration with GCL and Peking University, researchers show how time series models and high-resolution weather data improve photovoltaic power forecasts, enhancing grid stability, and improving renewable energy planning for power providers and policymakers. Applying AI Weather Models with NVIDIA Earth-2 NVIDIA Earth-2 AI-powered forecasting models generate high-resolution weather predictions at a fraction of the cost and time of traditional numerical models. This training lab explores how AI-driven downscaling techniques improve forecasting accuracy for industries such as energy and agriculture, providing more accurate forecasting and better decision-making across critical sectors. Advancing AI-driven environmental monitoring and conservation AI is transforming environmental monitoring, conservation, and ecosystem management with advanced digital twin technology and autonomous systems. From high-resolution coral reef modeling to large-scale ecosystem assessments, these innovations provide scientists and conservationists with critical insights to guide conservation strategies and protect biodiversity. Session Exploring Earth’s Oceans: Using Digital Twins to Drive Digital Ocean Collaboration Oceans regulate climate and support biodiversity, but their complexity makes them challenging to study. MITRE uses NVIDIA Omniverse to create digital twins of ocean systems, enabling real-time simulations and predictive modeling. These tools foster collaboration among scientists, policymakers, and educators to improve marine resource management, drive conservation efforts, and bolster climate resilience. In-person posters Photo-Realistic 3D Digital Twin to Enhance Understanding of the Great Barrier Reef ​​AI-powered 3D digital twins are advancing how researchers model and monitor coral reef ecosystems. Using Reef-NeRF and Reef-3DGS, scientists can create highly detailed reconstructions to track coral health, measure structural changes, and assess the impacts of climate change. 
These tools provide conservationists and policymakers with critical data to inform reef recovery strategies and improve long-term conservation efforts. Mangrove Simulation Predicts Carbon Sequestration Solutions Mangrove forests are a key solution for carbon capture and climate mitigation, but effective restoration requires precise monitoring and management. ID Water Co., Ltd. is using AI-powered irrigation automation and GPU-driven carbon sink modeling to improve mangrove reforestation efforts. These models improve survival rates, optimize carbon sequestration, and address verification challenges, making large-scale restoration more feasible and impactful. Revolutionizing Antarctic Flora Monitoring with AI and Drones AI-powered drones and hyperspectral imaging are enabling high-precision mapping of Antarctic vegetation. Using NVIDIA GPUs, researchers can detect moss and lichen with over 99% accuracy, providing key insights into climate-driven ecosystem changes while reducing the need for invasive field surveys in this fragile ecosystem. Join our global community of developers, scientists, business leaders, and innovators at NVIDIA GTC 2025 to discover how AI drives solutions to our most complex challenges. From NVIDIA CEO Jensen Huang’s must-see keynote to over 900 sessions, 300+ exhibits, hands-on technical training, and exclusive networking events, GTC offers a firsthand look at AI’s real-world impact. The session catalog is open—start building your agenda today.
https://developer.nvidia.com/zh-cn/blog/ai-for-climate-energy-and-ecosystem-resilience-at-nvidia-gtc-2025/
NVIDIA GTC 2025 上的人工智能促进气候、能源和生态系统复原力
从减缓气候变化到改进灾害响应和环境监测,AI 正在重塑我们应对重大全球挑战的方式。快速、高分辨率的气候预报、实时监控和数字孪生技术的进步为科学家、政策制定者和行业领导者提供了数据驱动的工具,帮助他们了解、规划和应对一个变暖的星球。 在 3 月 17 日至 21 日举行的 NVIDIA GTC 2025 大会上,思想领袖、科学家、开发者和创新者将重点介绍 AI 如何帮助塑造更具可持续性和韧性的未来。以下会议展示了 AI 在气候预测、灾难缓解和保护工作中发挥的作用,帮助社区适应日益不可预测的世界。 加强极端天气预测和灾害响应 随着全球变暖加剧,极端天气事件变得更加严重和频繁,社区需要更快、更精确的自然灾害预测和响应策略。AI 正在改进洪水、野火和飓风建模,从而实现更早的警报和更有效的缓解措施。借助大规模天气模拟、卫星数据分析和实时预测性见解,AI 正在帮助应急响应人员和决策者尽可能减少损失、提高抗灾能力,并确保社区安全。 会议 借助高级计算工具转变自然灾害风险模拟 AXA、AWS 和 NVIDIA 使用 Earth-2 模拟以前所未有的规模和精度对极端天气事件进行建模。这些工具通过改进风险估计和恢复能力规划,减少自然灾害的金融和社会影响,帮助保险公司、城市规划人员和政府做好灾害准备。 使用百亿亿级 (Exascale) 气候模拟器提升地球系统模型的输出 KAUST 和圣路易斯大学的研究人员使用由 NVIDIA GPUs 提供支持的百亿亿级 (Exascale) 气候模拟器来加速和优化地球系统模型的输出。这些模型可实现超高的空间分辨率 (3.5 公里),从而能够更准确地模拟风暴和气候,改进极端天气预测,并帮助应急响应人员、保险公司和政策制定者改进灾害应对规划和气候弹性工作。 将 AI 用于高级洪水风险建模和缓解策略 JBA Risk Management 使用 AI 驱动的天气模型 (包括 Spherical Fourier Neural Operators) 更准确地模拟风暴季。借助 NVIDIA NIM,这些模型可改善洪水风险评估、灾害应对规划和基础设施投资决策,同时降低计算成本。 在太空中利用边缘 AI 进行近乎实时的火灾检测:借助 NVIDIA 改变热地球观测方式 野火需要快速响应,以尽可能减少破坏。OroraTech 在 CubeSats 上使用 NVIDIA Jetson 技术,可在 60 秒内完成火灾检测,从而为急救人员提供关键的实时数据,以便更有效地部署资源。 利用自主系统和遥感获取更好的地球数据 配备 AI 的自主飞机正在大规模收集高分辨率数据,从而彻底改变环境监测。麻省理工学院的研究人员展示了这些低成本的远程系统如何为精准农业、灾害响应和环境评估收集关键数据,并大规模提供可行的见解。 提升气候预测能力以加强能源和基础设施规划 准确的长期气候预测对于指导基础设施投资、优化电网和增强可持续发展规划至关重要。AI 驱动的模型能够更快、更精确地进行气候预测,为太阳能集成、气候弹性基础设施开发和可持续发展策略提供指导。这些由数据驱动的高分辨率见解可帮助城市规划人员和决策者预测不断变化的条件,打造更具弹性的未来。 会议 AI 和加速计算在了解和减缓城市气候变化方面的作用 研究人员正在利用 AI、数字孪生和加速计算对城市中的气温升高、空气污染和极端天气进行建模。此会议将探讨生成式 AI 和机器学习如何分析卫星数据、物联网传感器和社会见解,以创建城市热岛和污染模式的高分辨率模拟。专家们将讨论这些见解如何帮助指导适应气候变化的基础设施、能效和有针对性的适应战略,同时应对计算效率和公平获取 AI 驱动的解决方案等挑战。 借助 NVIDIA Earth-2 的高分辨率天气预报增强太阳能发电预测 NVIDIA Earth-2 通过 AI 驱动的超精确天气预测,正在彻底改变太阳能预测。研究人员与 GCL 和北京大学合作,展示了时间序列模型和高分辨率天气数据如何改善太阳能发电预测、增强电网稳定性,以及如何改善电力供应商和政策制定者的可再生能源规划。 将 AI 天气模型与 NVIDIA Earth-2AI 驱动的预测模型结合使用 ,生成高分辨率天气预测,所需的成本和时间远低于传统数值模型。此训练实验室将探讨 AI 驱动的降比例技术如何提高能源和农业等行业的预测准确性,从而为关键领域提供更准确的预测和更好的决策。 推进 AI 驱动的环境监测和保护 AI 正在利用先进的数字孪生技术和自主系统,改变环境监测、保护和生态系统管理。从高分辨率珊瑚礁建模到大规模生态系统评估,这些创新为科学家和自然保护主义者提供了重要见解,以指导保护策略和保护生物多样性。 会议 探索地球的海洋:使用数字孪生推动数字海洋协作海洋调节气候并支持生物多样性 ,但其复杂性使研究这些海洋具有挑战性。MITRE 使用 NVIDIA Omniverse 创建海洋系统的数字孪生,实现实时模拟和预测建模。这些工具促进了科学家、政策制定者和教育工作者之间的协作,以改善海洋资源管理、推动保护工作,并增强气候恢复能力。 线下海报 逼真的 3D 数字孪生增强对大堡礁的理解 AI 驱动的 3D 数字孪生正在推进研究人员建模和监测珊瑚礁生态系统的方式。借助 Reef-NeRF 和 Reef-3DGS,科学家可以创建高度精细的重建模型,以追踪珊瑚健康状况、测量结构变化并评估气候变化的影响。这些工具为环保人士和政策制定者提供关键数据,以便制定珊瑚礁恢复策略并改进长期保护工作 Mangrove Simulation 预测碳封存解决方案 红树林是碳捕获和气候减缓的关键解决方案,但有效的恢复需要精确的监控和管理。ID Water Co.,Ltd.正在使用由 AI 提供动力支持的喷洒自动化和 GPU 驱动的碳汇建模来改进红树林再造工作。这些模型可提高存活率、优化碳封存并解决验证难题,从而提高大规模修复的可行性和成效。 借助 AI 和无人机革新南极植物监测 AI 赋能的无人机和高光谱成像技术可实现对南极植被的高精度绘图。借助 NVIDIA GPU,研究人员可以以超过 99%的准确率检测和,从而对气候驱动的生态系统变化提供关键见解,同时减少在这个脆弱的生态系统中进行侵入性实地调查的需求。 在 NVIDIA GTC 2025 大会上,加入由开发者、科学家、业务领袖和创新者组成的全球社区,了解 AI 如何为我们面临的复杂挑战提供解决方案。 从 NVIDIA 首席执行官 Jensen Huang 不容错过的主题演讲 ,到 900 多场会议、300 多场展览、实操技术培训和独家交流活动,GTC 让您亲身体验 AI 对现实世界的影响。 会议目录 现已开放,请立即开始构建您的议程。
https://developer.nvidia.com/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/
Automating GPU Kernel Generation with DeepSeek-R1 and Inference Time Scaling
As AI models extend their capabilities to solve more sophisticated challenges, a new scaling law known as test-time scaling or inference-time scaling is emerging. Also known as AI reasoning or long-thinking, this technique improves model performance by allocating additional computational resources during inference to evaluate multiple possible outcomes and then select the best one. This enables AI to strategize and systematically solve complex problems in a similar fashion to how humans dissect complex problems and solve them individually to arrive at a final solution. In this post, we describe an experiment by NVIDIA engineers who used one of the newest open-source models, the DeepSeek-R1 model, together with additional computing power during inference to solve a complex problem. The experiment was to automatically generate GPU attention kernels that were numerically correct and optimized for different flavors of attention, without any explicit programming. In some cases, the results turned out to be better than the optimized kernels developed by skilled engineers. The need for optimized attention kernels and associated challenges Attention is a key concept that revolutionized the development of the large language model (LLM). It’s a powerful mechanism that enables AI models to focus selectively on the most relevant parts of the input when performing tasks. By focusing on important information, the attention operation helps models make better predictions and find hidden patterns in the data. The computational complexity of the attention operation grows quadratically with the input sequence length. This motivates the need for an optimized lower-level implementation (that is, a GPU kernel), both to prevent runtime errors arising from naive implementations (for example, out-of-memory errors) and to improve computational efficiency. There are multiple variants of attention (causal, relative positional embeddings, ALiBi, and so on), and engineers often must use a combination of these variants for a given task. Multi-modal models (for example, vision transformers) introduce an additional layer of challenges, as they require specialized attention mechanisms (such as Spatial Neighborhood Attention) to maintain the spatio-temporal information often encountered in computer vision, video generation models, and so on. Figure 1. Neighborhood attention on 2D inputs Creating an optimized GPU kernel for attention takes a lot of skill and time, even for experienced software engineers. Recent LLMs like DeepSeek-R1 have shown a lot of promise in code generation tasks, but they still face challenges creating optimized code on the first try. This makes it necessary to use other strategies at inference time to generate optimized code. The following prompt is sample user input for a relative positional embeddings attention kernel. Please write a GPU attention kernel to support relative position encodings. Implement the relative positional encoding on the fly within the kernel. The complete code should be returned, including the necessary modifications. Use the following function to compute the relative positional encoding: def relative_positional(score, b, h, q_idx, kv_idx):     return score + (q_idx - kv_idx) When implementing the kernel, keep in mind that a constant scaling factor 1.44269504 should be applied to the relative positional encoding due to qk_scale = sm_scale * 1.44269504.
The PyTorch reference does not need to scale the relative positional encoding, but in the GPU kernel, use: qk = qk * qk_scale + rel_pos * 1.44269504 Please provide the complete updated kernel code that incorporates these changes, ensuring that the relative positional encoding is applied efficiently within the kernel operations. LLMs can occasionally produce hallucinated code or mix syntax from different languages or frameworks, causing immediate code errors or inefficiencies. Computing the optimal GPU thread mapping is also a non-trivial and challenging task, often requiring iterative refinement to achieve a correct and efficient kernel. Inference-time scaling for generating optimized GPU kernels To get the best results with optimized attention kernels, NVIDIA engineers created a new workflow that includes a special verifier along with the DeepSeek-R1 model during inference in a closed-loop fashion for a predetermined duration. Figure 2. Inference-time scaling with DeepSeek-R1 on the NVIDIA Hopper platform The workflow is first initialized by a manual prompt, and the DeepSeek-R1 model generates the GPU code (that is, the kernel) in the first pass. The verifier runs on an NVIDIA H100 GPU. It analyzes the generated kernel and creates new prompts that are provided as input to the DeepSeek-R1 model. This closed-loop approach improves the code generation process by guiding it in a different way each time. The team found that letting this process continue for 15 minutes resulted in an improved attention kernel. Figure 3. Performance of automatically generated optimized attention kernels with flex attention This workflow produced numerically correct kernels for 100% of Level-1 problems and 96% of Level-2 problems, as tested by Stanford’s KernelBench benchmark. The Level-1 solving rate in KernelBench refers to the numerical-correctness metric used to evaluate the ability of LLMs to generate efficient GPU kernels for specific computational tasks. This test is part of a series of challenges to test the latest LLMs’ abilities in GPU programming. Figure 4 shows how the inference-time budget affects the agent’s solving rate. Allocating more than 10 minutes per problem in the Level-1 category enables the workflow to produce numerically correct code for most of the 100 problems. Figure 4. Inference-time scaling results in optimized GPU kernels Optimized GPU kernels on DeepSeek-R1 These results show how you can use the latest DeepSeek-R1 model to generate better GPU kernels by using more computing power during inference time. This is still a new research area, with early results on a promising approach that automatically generates effective attention kernels. While we are off to a good start, more work is needed to consistently generate better results for a wider variety of problems. We’re excited about the recent developments in DeepSeek-R1 and its potential. For more information or to get started, see the DeepSeek-R1 NIM microservice, now available on build.nvidia.com.
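To make the closed-loop workflow described above more tangible, the following minimal Python sketch shows the shape of such a driver: generate a candidate kernel, verify it, and feed the verifier's findings back into the next prompt until the time budget runs out. This is an illustrative sketch, not NVIDIA's implementation: the OpenAI-compatible endpoint, the model identifier, and the verify_kernel helper (which would compile the candidate and compare its output against a PyTorch reference such as the relative_positional function in the prompt) are assumptions made for illustration.

import time
from openai import OpenAI  # assumes an OpenAI-compatible endpoint, for example a locally hosted DeepSeek-R1 NIM

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")  # hypothetical endpoint
MODEL = "deepseek-ai/deepseek-r1"  # hypothetical model identifier
TIME_BUDGET_S = 15 * 60  # the post reports roughly 15 minutes per problem

def generate_kernel(prompt: str) -> str:
    # One generation pass: ask the model for a complete attention kernel.
    resp = client.chat.completions.create(model=MODEL, messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content

def verify_kernel(candidate: str) -> tuple[bool, str]:
    # Placeholder for the verifier: in the real workflow this compiles the kernel,
    # runs it on an H100, and checks the output numerically against a PyTorch reference.
    raise NotImplementedError("plug in a compile-and-numerical-check harness here")

def refine(initial_prompt: str) -> str | None:
    prompt, best = initial_prompt, None
    start = time.time()
    while time.time() - start < TIME_BUDGET_S:
        candidate = generate_kernel(prompt)
        passed, feedback = verify_kernel(candidate)
        if passed:
            best = candidate  # keep the latest numerically correct kernel
        # The verifier's analysis becomes part of the next prompt, steering the
        # next generation pass in a different direction each iteration.
        prompt = (initial_prompt + "\n\nPrevious attempt:\n" + candidate
                  + "\n\nVerifier feedback:\n" + feedback)
    return best

The point of the loop is simply that extra inference-time compute is spent on repeated prompting and verification rather than on retraining anything.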
https://developer.nvidia.com/zh-cn/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/
使用 DeepSeek-R1 和推理时间缩放实现自动化 GPU 内核生成
随着 AI 模型扩展其功能以解决更复杂的挑战,一种称为“ 测试时扩展 ”或“ 推理时扩展 ”的新扩展法则正在出现。该技术也称为 AI 推理 或 长时思考 技术,通过在推理过程中分配额外的计算资源来评估多种可能的结果,然后选择最佳结果 (neural network),从而提高模型性能。这使得 AI 能够以类似于人类剖析复杂问题并单独解决这些问题以达成最终解决方案的方式,制定战略并系统化地解决复杂问题。 在本文中,我们将介绍 NVIDIA 工程师完成的一项实验,他们在推理过程中使用最新的开源模型之一 DeepSeek-R1 模型以及额外的计算能力来解决复杂的问题。该实验旨在自动生成 GPU 注意力内核,这些内核在数值上是正确的,并针对不同的注意力类型进行了优化,而无需任何显式编程。 事实证明,在某些情况下,最终结果优于由技术精湛的工程师开发的优化内核。 优化注意力内核的需求和相关挑战 注意力是一个关键概念,彻底改变了大语言模型(LLM)的发展。这是一种功能强大的机制,可让 AI 模型在执行任务时,有选择地专注于与输入内容最相关的部分。通过聚焦于重要信息,注意力运算可帮助模型做出更好的预测,并找到数据中隐藏的模式。 相对于输入序列长度,注意力运算的计算复杂性呈正交增长。这促使我们需要开发经过优化的低级实现 (即 GPU 内核),以防止简单实现产生的运行时错误 (例如内存不足的错误),并提高计算效率。 注意力有多种变体 (因果关系、相对位置嵌入、不在场证明等),工程师通常必须将这些变体的组合用于给定任务。 多模态模型 (例如,vision transformers) 带来了额外的一层挑战,因为它们需要专门的注意力机制 (Spatial Neighborhood Attention) 来维护计算机视觉、视频生成模型等领域中经常遇到的时空信息。 图 1. 邻域注意力在 2D 输入上的应用 创建经优化的 GPU 内核以供关注需要大量技能和时间,即使对于经验丰富的软件工程师而言也是如此。 最近的 LLMs(如 DeepSeek-R1)在代码生成任务方面表现出了很大的前景,但在第一次尝试创建优化代码时,它们仍然面临挑战。这使得有必要在推理时使用其他策略来生成优化的代码。 以下提示是用户输入相对位置嵌入注意力核函数的示例。 Please write a GPU attention kernel to support relative position encodings. Implement the relative positional encoding on the fly within the kernel. The complete code should be returned, including the necessary modifications. Use the following function to compute the relative positional encoding: def relative_positional(score, b, h, q_idx, kv_idx): return score + (q_idx - kv_idx) When implementing the kernel, keep in mind that a constant scaling factor 1.44269504 should be applied to the relative positional encoding due to qk_scale = sm_scale * 1.44269504. The PyTorch reference does not need to scale the relative positional encoding, but in the GPU kernel, use: qk = qk * qk_scale + rel_pos * 1.44269504 Please provide the complete updated kernel code that incorporates these changes, ensuring that the relative positional encoding is applied efficiently within the kernel operations. LLM 偶尔会产生来自不同语言或框架的幻影代码或混合语法,从而立即导致代码错误或效率低下。计算最佳 GPU 线程映射也并非易事,也是一项具有挑战性的任务,通常需要进行迭代优化才能实现正确高效的内核。 用于生成经过优化的 GPU 内核的推理时间扩展 为了通过优化的注意力内核获得最佳结果,NVIDIA 工程师创建了一个新的工作流程,其中包括一个特殊的验证器以及 DeepSeek-R1 模型,在预先设定的时间内以闭环方式进行推理。 图 2、在 NVIDIA Hopper 平台上使用 DeepSeek-R1 实现推理时间扩展 该工作流程首先通过手动提示进行初始化,然后 DeepSeek-R1 模型会在首次通道中生成 GPU 代码(即核函数)。该验证器在 NVIDIA H100 GPU 上运行。它会分析生成的核函数并创建新的提示,这些提示以 input 的形式提供给 DeepSeek-R1 模型。 这种闭环方法每次都以不同的方式指导代码生成过程,从而改进代码生成过程。该团队发现,让此过程持续 15 分钟可以改进注意力核函数。 图 3、具有 Flex Attention 的自动生成优化注意力内核的性能 此工作流程为 100%的 1 级问题和 96%的 2 级问题生成了数值正确的内核,测试对象为 斯坦福大学的 KernelBench 基准测试。* KernelBench 中的 1 级求解率是指用于评估 LLM 为特定计算任务生成高效 GPU 内核的能力的数字正确指标。本次测试属于一系列挑战,旨在测试最新 LLM 在 GPU 编程中的能力。 图 4 显示了推理时间预算如何影响智能体的求解率。在 Level-1 类别中为每个问题分配超过 10 分钟的时间,使工作流程能够为 100 个问题中的大多数生成正确的数字代码。 图 4、在优化的 GPU 内核中实现推理 – 时间扩展 DeepSeek-R1 上经过优化的 GPU 内核 这些结果展示了如何使用最新的 DeepSeek-R1 模型,通过在推理期间使用更强大的计算能力来提供更好的 GPU 内核。这仍然是一个新的研究领域,在自动生成有效注意力内核的前景良好的方法方面取得了早期成果。 虽然我们已经有了一个良好的开端,但我们需要做更多的工作,以便为更广泛的问题持续提供更好的结果。我们对 DeepSeek-R1 的最新进展及其潜力感到兴奋。 如需了解更多信息或入门,请参阅 DeepSeek-R1 NIM 微服务 (现已在 build.nvidia.com 上提供)。
https://developer.nvidia.com/blog/ai-foundation-model-enhances-cancer-diagnosis-and-tailors-treatment/
AI Foundation Model Enhances Cancer Diagnosis and Tailors Treatment
A new study and AI model from researchers at Stanford University are streamlining cancer diagnostics, treatment planning, and prognosis prediction. Named MUSK (Multimodal transformer with Unified maSKed modeling), the research aims to advance precision oncology, tailoring treatment plans to each patient based on their unique medical data. “Multimodal foundation models are a new frontier in medical AI research,” said Ruijiang LI, an associate professor of radiation oncology and study senior author. “Recently, vision–language foundation models have been developed for medicine, particularly in the field of pathology. However, existing studies use off-the-shelf foundation models that require paired image–text data for pretraining. Despite extensive efforts that led to the curation of 1M pathology image–text pairs, it’s still insufficient to fully capture the diversity of the entire disease spectrum.” Oncologists rely on many data sources when considering a patient’s condition and planning optimal treatments. However, integrating and interpreting complex medical data remains difficult for doctors and AI models. The study, recently published in Nature, highlights how MUSK could help doctors make more accurate and informed decisions while also solving this long-standing challenge in medical AI. Using deep learning, MUSK processes clinical text data (such as doctor’s notes) and pathology images (like histology slides) to identify patterns that may not be immediately obvious to doctors, leading to better clinical insights. To do so, it uses a two-step multimodal transformer model. First, it learns from large amounts of unpaired data, pulling useful features from the text and images. Then it fine-tunes its understanding of the data by linking paired image–text data, which helps it recognize different types of cancer, predict biomarkers, and suggest effective treatment options. The researchers pretrained the AI model on one of the biggest datasets in the field, using 50M pathology images from 11,577 patients with 33 tumor types and 1B pathology-related text data. According to Jinxi Xiang, study lead author and postdoctoral scholar in radiation physics, the pretraining was conducted over 10 days using 64 NVIDIA V100 Tensor Core GPUs across eight nodes, enabling MUSK to process vast amounts of pathology images and clinical text efficiently. A secondary pretraining phase and ablation studies used NVIDIA A100 80GB Tensor Core GPUs. The researchers also used NVIDIA RTX A6000 GPUs for evaluating downstream tasks. The framework was accelerated with NVIDIA CUDA and NVIDIA cuDNN libraries for optimized performance. When tested on 23 pathology benchmarks, MUSK outperformed existing AI models in several key areas. It excelled at matching pathology images with the corresponding medical text, making it more effective at gathering relevant patient information. It also answered pathology-related questions, such as identifying a cancerous area or predicting biomarker presence, with 73% accuracy. Figure 1. An example of the visual question-answering MUSK can perform It improved detection and classification for cancer subtypes including breast, lung, and colorectal cancer by up to 10%, which could help with early diagnosis and treatment planning. It also detected breast cancer biomarkers with an AUC (a measure of model accuracy) of 83%.
Additionally, MUSK reliably predicted cancer survival outcomes 75% of the time, and predicted which lung and gastro-esophageal cancers would respond to immunotherapy with 77% accuracy. This is a significant improvement over standard clinical biomarkers, which reach an accuracy of only 60-65%. “One striking finding is that AI models that integrate multi-modal data consistently outperform those based on imaging or text data alone, highlighting the power of a multimodal approach,” Li said. “The true value of MUSK lies in its ability to leverage large-scale unpaired image and text data for pretraining, which is a substantial increase over existing models that require paired data.” A core strength of the model is that it can adapt across different clinical settings with little additional training. This could improve efficiency in oncology workflows and help doctors diagnose cancer faster while tailoring treatments for better patient outcomes. Their future work will focus on validating the model in multi-institution cohorts of patients from diverse populations and for high-stakes applications such as treatment decision-making. The researchers note that prospective validation in clinical trials will be required for regulatory approval. “We are also working on an extension of the MUSK approach to digital pathology to other types of data such as radiology images and genomic data,” said Li. The researchers’ work, including installation instructions, model weights, evaluation code, and sample data, is available on GitHub.
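To make the two-step pretraining recipe described above concrete, here is a deliberately simplified PyTorch-style sketch of the idea: a masked-modeling phase on unpaired images and text, followed by a contrastive alignment phase on paired image–text data. This is a conceptual illustration only, not the MUSK codebase (which is available on GitHub); the encoder objects and their masked-loss methods are hypothetical placeholders.

import torch
import torch.nn.functional as F

def phase1_step(image_encoder, text_encoder, images, token_ids, mask_ratio=0.3):
    # Phase 1: masked modeling on *unpaired* data. Each encoder learns to
    # reconstruct what was hidden from its own modality; no pairing is required.
    img_loss = image_encoder.masked_reconstruction_loss(images, mask_ratio)  # hypothetical method
    txt_loss = text_encoder.masked_token_loss(token_ids, mask_ratio)         # hypothetical method
    return img_loss + txt_loss

def phase2_step(image_encoder, text_encoder, paired_images, paired_token_ids, temperature=0.07):
    # Phase 2: contrastive alignment on *paired* image-text data. Matching pairs
    # are pulled together in a shared embedding space, mismatched pairs pushed apart.
    img_emb = F.normalize(image_encoder(paired_images), dim=-1)
    txt_emb = F.normalize(text_encoder(paired_token_ids), dim=-1)
    logits = img_emb @ txt_emb.T / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))

The practical consequence, as the study emphasizes, is that the expensive first phase can consume large unpaired archives, while only the lighter second phase needs curated image–text pairs.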
https://developer.nvidia.com/zh-cn/blog/ai-foundation-model-enhances-cancer-diagnosis-and-tailors-treatment/
AI 基础模型增强癌症诊断并实现个性化治疗
斯坦福大学研究人员的一项新研究和 AI 模型正在简化癌症诊断、治疗规划和预后预测。这项名为 MUSK (Multimodal transformer with Unified maSKed modeling) 的研究旨在提高精准肿瘤学,根据每位患者独特的医疗数据为其定制治疗计划。 “多模态基础模型是医学 AI 研究的新领域,”放射肿瘤学副教授兼研究高级作者 Ruijiang LI 说。“最近,我们为医学领域开发了视觉语言基础模型,尤其是在病理学领域。但是,现有研究使用的现有基础模型需要配对的图像 – 文本数据进行预训练。尽管我们付出了大量努力,最终打造出 1M 病理图像文本对,但它仍然不足以完全捕捉整个疾病谱系的多样性。” 在考虑患者状况和规划最佳治疗方案时,肿瘤科医生依靠多种数据源。然而,医生和 AI 模型仍然难以集成和解释复杂的医疗数据。该研究最近发表在 Nature 杂志上,重点介绍了 MUSK 如何帮助医生做出更准确、更明智的决定,同时解决医学 AI 领域的长期挑战。 借助深度学习,MUSK 处理临床文本数据(如医生的笔记)和病理学图像(如组织学幻灯片),以识别医生可能无法立即发现的模式,从而获得更好的临床见解。 为此,它使用了两步多模态 transformer 模型。首先,它从大量未配对的数据中学习,从有用的文本和图像中提取特征。然后,它通过关联配对的图像-文本数据来微调对数据的理解,这有助于识别不同类型的癌症、预测生物标志物,并提出有效的治疗方案。 研究人员基于该领域最大的数据集之一预训练了 AI 模型,使用了来自 11,577 名患者的 50M 病理学图像,其中有 33 种肿瘤类型和 1B 病理学相关文本数据。 据辐射物理学研究主要作者兼博士后学者 Jinxi Xiang 称,预训练在 8 个节点上使用 64 个 NVIDIA V100 Tensor Core GPUs 进行了 10 天以上,使 MUSK 能够高效处理大量病理学图像和临床文本。二级预训练阶段和消融研究使用 NVIDIA A100 80GB Tensor Core GPUs 。研究人员还使用 NVIDIA RTX A6000 GPUs 评估下游任务。该框架通过 NVIDIA CUDA 和 NVIDIA cuDNN 库进行加速,以优化性能。 在 23 项病理学基准测试中,MUSK 在多个关键领域的表现优于现有 AI 模型。它擅长将病理学图像与相关的医学文本进行匹配,从而更有效地收集相关的患者信息。它还能解读与病理学相关的问题,例如识别癌变区域或预测生物标志物的存在,准确率高达 73%。 图 1. 例如,视觉问答 MUSK 可以执行 它将乳腺癌、肺癌和结直肠癌等癌症亚型的检测和分类能力提高了 10%,这有助于早期诊断和治疗规划。它还检测到乳腺癌生物标志物,AUC(用于衡量模型准确性的指标)为 83%。 此外,MUSK 有 75%的时间能够可靠预测癌症生存期结果,以及哪些肺癌和胃食道癌会对免疫治疗做出反应,准确率为 77%。与准确率仅为 60-65%的标准临床生物标志物相比,这是一个显著的改进。 “一个惊人的发现是,集成多模态数据的 AI 模型的性能始终优于仅基于图像或文本数据的 AI 模型,这凸显了多模态方法的强大功能,”Li 说。“MUSK 的真正价值在于它能够利用大规模的未配对图像和文本数据进行预训练,与需要配对数据的现有模型相比,这是一个巨大的提升。” 这项研究的一个核心优势是,它可以在几乎没有训练的情况下适应不同的临床环境。这可以提高肿瘤学工作流程的效率,并帮助医生更快地诊断癌症,同时定制治疗方案以改善患者的治疗效果。 他们未来的工作重点将是在来自不同人群的多机构患者群体中验证该模型,以及用于治疗决策等高风险应用。研究人员指出,临床试验中的前瞻性验证需要获得监管机构的批准。 “我们还致力于将 MUSK 方法扩展到数字病理学,包括放射学图像和基因组数据等其他类型的数据,”Li 说。 研究人员的工作(包括安装说明、模型权重、评估代码和样本数据) 可在 GitHub 上获取。
https://developer.nvidia.com/blog/cuda-toolkit-12-8-delivers-nvidia-blackwell-support/
CUDA Toolkit Now Available for NVIDIA Blackwell
The latest release of the CUDA Toolkit , version 12.8, continues to push accelerated computing performance in data sciences, AI, scientific computing, and computer graphics and simulation, using the latest NVIDIA CPUs and GPUs. This post highlights some of the new features and enhancements included with this release: NVIDIA Blackwell architecture support CUDA Graphs conditional nodes enhancements Blackwell CUTLASS kernels for large language models (LLMs) NVIDIA Nsight Developer Tools updates Math libraries updates cudaStreamGetDevice Compiler updates Accelerated Python updates Feature-complete architectures NVIDIA Blackwell architecture support CUDA Toolkit 12.8 is the first version of the Toolkit to support the NVIDIA Blackwell architecture across the entire suite of Developer Tools including performance tools and profilers, libraries, and compilers. Built with 208 billion transistors—more than 2.5x the number of transistors in NVIDIA Hopper GPUs—Blackwell is the largest GPU ever built. Key Blackwell capabilities supported include: Second-generation Transformer Engine through custom Tensor Core technology: Accelerates inference and training for LLMs and mixture-of-experts (MoE) models. Decompression: Accelerates performance on data analytics and data science pipelines using the latest compression formats such as LZ4, Snappy, and Deflate. Network interconnect: NVLink and NVLink Switches accelerate inter-GPU communications performance for trillion-parameter and multitrillion-parameter AI models. To learn more about the leading innovations in Blackwell, see the NVIDIA Blackwell Architecture Technical Brief . 2x faster CUDA Graphs with runtime kernel selection for lower latency inference With Blackwell, CUDA Graphs APIs continue to be the most efficient way to launch repeated invocations of sequences of GPU operations. CUDA Toolkit 12.8 introduces more enhancements to CUDA Graphs, including additional conditional node types. In many applications, having dynamic control over the execution of work in CUDA Graphs can increase performance and flexibility of graph launches. For example, an algorithm that involves iterating over a series of operations many times until the result converges below a certain threshold can now run wholly on the GPU without needing CPU control management, reducing overhead by as much as 2x. CUDA Toolkit 12.8 improves APIs for runtime control of conditional graph nodes. Conditional nodes contain segments of a graph that can execute, or be skipped, based on a condition to evaluate as the graph is running. Such segments can be evaluated once (an IF node), or repeatedly in a loop (a WHILE node). CUDA 12.8 adds support for two new types of conditional graph nodes: IF/ELSE combined nodes and SWITCH nodes. With the Blackwell architecture, we’ve improved LLM performance to benefit all reasoning models, including DeepSeek-R1. CUDA Graphs enhanced SWITCH and IF/ELSE support delivers 2x more performance for runtime kernel selection versus going back to the CPU for launch decision-making. Training : By reducing CPU dependency for kernel selection, training workloads sustain even more GPU Tensor Core throughput, resulting in higher Model FLOPs Utilization (MFU). This improves performance using the same GPU infrastructure, reducing time and cost to train. Inference : For next-generation reasoning models that make use of test-time compute, a high token generation rate is critical as each inference request can generate a vast number of tokens per query. 
CUDA 12.8 new stream API enables fewer calls back to the host CPU, reducing the time between one kernel finishing and the next one starting, increasing token generation rate. This results in more tokens generated in fixed time budget, helping models reason more and increasing intelligence. To learn more, see Dynamic Control Flow in CUDA Graphs with Conditional Nodes . Blackwell CUTLASS kernels for LLMs CUTLASS , since its 2017 debut, has been instrumental for researchers and developers implementing high-performance CUDA kernels on NVIDIA GPUs. By providing developers with comprehensive tools to design custom operations, such as GEMMs and Convolutions, CUTLASS has been critical for the development of hardware-aware algorithms, powering breakthroughs like FlashAttention that helped spark modern AI. With the release of CUTLASS 3.8—which supports CUDA 12.8—NVIDIA is extending support to the Blackwell architecture, enabling developers to harness next-generation Tensor Cores with support for all new data types. This includes new narrow precision MX formats and the NVIDIA-developed FP4 format, which increase compute throughput. Figure 1 shows CUTLASS can achieve up to 98% relative peak performance for Tensor Core operations. Figure 1. Blackwell CUTLASS GEMM performance relative to expected peak, delivering up to 98% of Blackwell peak performance For DeepSeek-V3 and DeepSeek-R1, grouped GEMMs make up a large portion of the MoE compute required during inference. These operations enable different matrix sizes, scaling factors, and fusions to be grouped and parallelized in a single persistent-kernel launch. With CUTLASS, on Blackwell with FP4, Grouped GEMM kernel performance increases by up to 5x over H200 with FP16. Figure 2. CUTLASS Grouped GEMM performance for MoE inference used in DeepSeek delivers up to 5x more performance on Blackwell compared to Hopper at various precisions NVIDIA Nsight Developer Tools NVIDIA Nsight Compute 2025.1 is the first official release with support for the Blackwell architecture. Updates include visualization of Blackwell Hardware Tensor Memory in the memory chart as well as Tensor Core performance data. Figure 3. Tensor Memory traffic in the Nsight Compute memory chart It also comes with several improvements to the increasingly popular range profiling feature. Users can now collect source-level metrics, including Instructions Executed and memory access information, inside profiled ranges. This update also enables Guided Analysis rules evaluation for ranges. This built-in expertise for identifying performance issues is a key component of NVIDIA Nsight Compute. This release reports kernel stack sizes and adds custom tooltips to help users understand their workload performance. This release of Compute Sanitizer, an automatic correctness checking tool, adds support for Python call stacks to accurately locate kernel correctness issues when kernels are launched through Python applications. Additionally, new Tensor Core MMA guardrails for Blackwell can report errors related to Tensor Core programming. These are enabled by adding the PTXAS flag -g-tmem-access-check when compiling programs. Examples of common errors include access to unallocated tensor memory, invalid addresses, and invalid allocator usage. Math libraries updates With CUDA Toolkit 12.8, we have several new library enhancements that leverage the new Blackwell architecture and help accelerate applications in AI, data sciences, graphics and simulation, and high-performance scientific computing. 
New features cuBLAS APIs were extended to support microscaled 4-bit and 8-bit floating point mixed-precision tensor core accelerated matrix multiplication for compute capability 10.0 (Blackwell) and higher. Introduced initial support for CUDA in Graphics (CIG) on Windows x64 for NVIDIA Ampere GPU architecture and Blackwell GeForce-class GPUs. CIG contexts are now autodetected, and cuBLAS selects kernels that comply with CIG shared memory usage limits. cuSOLVER now supports zsytrf/zsytrs, a complex symmetric direct solver without pivoting. nvJPEG now provides support for the Tegra architecture. NPP now provides support for the DRIVE Thor architecture. cudaStreamGetDevice Applications often use CUDA streams to provide ordered access to GPU resources. An instance of a CUDA stream is associated with a fixed CUDA device. In applications that address multiple devices, there are scenarios where getting a handle to the underlying device for a given stream is useful to tailor the application to device characteristics. Previously, the CUDA API did not provide a mechanism for retrieving the device associated with a CUDA stream; developers had to track this themselves. The addition of the cudaStreamGetDevice CUDA API to retrieve the device associated with a CUDA stream can simplify applications. Compiler updates New compiler updates include the following: The CUDA Toolkit 12.8 release introduces support for GCC 14 as a host-side compiler. The default high-level optimizer is now based on LLVM 18 for the Blackwell architecture. nvdisasm now supports emitting JSON formatted SASS disassembly. Accelerated Python updates The following two beta releases are now available for Python users: CUDA Python has released an early prototype of a new idiomatic object model called cuda.core and moved the CUDA binding to a submodule, cuda.bindings . For more information, see the documentation in the NVIDIA/cuda-python GitHub repo. CUDA Core Compute Libraries (CCCL) has released early prototypes of Python for parallel and cooperative algorithms, enabling you to use thread-level parallelism with user-defined types and functions from pure Python code. Learn more about CCCL . Additionally, the CuPy team is releasing a new version with Blackwell patches validated for general availability. Feature-complete architectures With the CUDA Toolkit 12.8 release, we now consider the Maxwell, Pascal, and Volta architectures to be feature-complete and support for them will be frozen in an upcoming release. This means that, in future releases, no new features will be added to the driver to enable new CUDA Toolkit functionality supporting Maxwell, Pascal, and Volta architectures. End users will be able to run existing software stacks and applications on Maxwell, Pascal, and Volta architectures using the supported upcoming LTS driver branch through its lifecycle. Starting with release 12.8, developers running offline compilers targeting these architectures will output a warning message when using nvcc , nvrtc , and nvjitlink . In the next major CUDA Toolkit release, offline compilation support for the Maxwell, Pascal, and Volta architectures will be removed from the compilers. The upcoming LTS driver for production application execution and JIT compilation of Maxwell, Pascal, and Volta applications will be supported for the normal 3-year LTS support window. For more details, read the CUDA Toolkit 12.8 Release Notes . Summary The CUDA Toolkit 12.8 release provides full feature support for the NVIDIA Blackwell architecture. 
This release continues to provide enhanced support for the newest NVIDIA GPUs, accelerated libraries, compilers, and Developer Tools, whether you’re developing applications in C++ or Python. Want more information? Check out the CUDA documentation , browse the latest NVIDIA Deep Learning Institute (DLI) offerings, and visit the NGC catalog . Ask questions and join the conversation in the CUDA Developer Forums. Acknowledgments Thanks to the following NVIDIA contributors: Stephen Jones, Jackson Marusarz, Becca Zandstein, Andy Terrel, Ashraf Eassa, Matt Nicely, and Mridula Prakash.
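As a closing illustration of two items covered above, the new cudaStreamGetDevice API and the cuda.bindings submodule layout, the following minimal Python sketch queries the device behind a stream instead of tracking the mapping by hand. It is a sketch under assumptions rather than official sample code: it assumes the CUDA 12.8-era cuda-python bindings expose cudaStreamGetDevice in cuda.bindings.runtime with the usual (error, value) return-tuple convention; check the NVIDIA/cuda-python documentation for the exact surface.

from cuda.bindings import runtime as cudart  # new submodule layout described above

def check(err, *values):
    # cuda-python runtime calls return a tuple: (error_code, value, ...)
    if err != cudart.cudaError_t.cudaSuccess:
        raise RuntimeError(f"CUDA runtime error: {err}")
    return values[0] if len(values) == 1 else values

stream = check(*cudart.cudaStreamCreate())
device = check(*cudart.cudaStreamGetDevice(stream))  # previously, applications had to track this themselves
print(f"stream is associated with CUDA device {device}")
check(*cudart.cudaStreamDestroy(stream))

In a multi-GPU application this removes a small but error-prone piece of bookkeeping: the stream handle itself now tells you which device any work enqueued on it will run on.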
https://developer.nvidia.com/zh-cn/blog/cuda-toolkit-12-8-delivers-nvidia-blackwell-support/
CUDA 工具包现已支持 NVIDIA Blackwell 架构
CUDA 工具包 的最新版本 (版本 12.8) 使用最新的 NVIDIA CPU 和 GPU,持续提升数据科学、AI、科学计算以及计算机图形和模拟领域的加速计算性能。本文重点介绍了此版本包含的一些新功能和增强功能: NVIDIA Blackwell 架构支持 CUDA 图形处理条件节点增强功能 用于大语言模型(LLMs)的 Blackwell CUTLASS 内核 NVIDIA Nsight 开发者工具更新 数学库更新 cudaStreamGetDevice 编译器更新 加速 Python 更新 功能齐全的架构 NVIDIA Blackwell 架构支持 CUDA 工具包 12.8 是该工具包的第一个版本,在整个开发者工具套件 (包括性能工具和分析器、库和编译器) 中支持 NVIDIA Blackwell 架构。Blackwell 由 208 亿个晶体管构建而成,是 NVIDIA Hopper GPU 中晶体管数量的 2.5 倍以上,是迄今为止最大的 GPU。 Blackwell 支持的主要功能包括:Key Blackwell 采用自定义 Tensor Core 技术的第二代 Transformer 引擎:加速 LLM 和 mixture-of-experts (MoE) 模型的推理和训练。 解压缩: 使用 LZ4、Snappy 和 Deflate 等最新压缩格式,加速数据分析和数据科学工作流的性能。 网络互连:NVLink 和 NVLink Switches 加速万亿参数和数万亿参数 AI 模型的 GPU 间通信性能。 如需详细了解 NVIDIA Blackwell 的领先创新,请参阅 NVIDIA Blackwell 架构技术概览。 使用运行时核选择将 CUDA Graphs 速度提升 2 倍,从而降低延迟推理 借助 Blackwell,CUDA Graphs APIs 仍然是启动 GPU 操作序列重复调用的最高效方式。CUDA Toolkit 12.8 为 CUDA Graphs 引入了更多增强功能,包括其他 条件节点类型 。 在许多应用程序中,对 CUDA Graphs 中工作的执行进行动态控制可以提高图形启动的性能和灵活性。例如,一种算法需要多次迭代一系列运算,直到结果收到某个值以下,现在这种算法无需进行 CPU 控制管理即可完全在 GPU 上运行,从而将开销降低高达 2 倍。CUDA Toolkit 12.8 改进了用于条件图形节点运行时控制的 API。 条件节点包含图形的片段,这些片段可以在图形运行时根据要评估的条件执行或跳过。此类片段可以评估一次 (IF 节点),也可以在循环中重复评估 (WHILE 节点)。CUDA 12.8 增加了对两种新型条件图形节点的支持:IF/ELSE 组合节点和 SWITCH 节点。 借助 Blackwell 架构,我们改进了 LLM 性能,使包括 DeepSeek-R1 在内的所有推理模型受益。与返回 CPU 进行启动决策相比,CUDA Graphs 增强的 SWITCH 和 IF/ELSE 支持可将运行时内核选择的性能提高 2 倍。 训练:通过减少内核选择对 CPU 的依赖,训练工作负载可维持更多的 GPU Tensor Core 吞吐量,从而提高模型 FLOPS 利用率(MFU)。这提高了使用相同的 GPU 基础架构的性能,减少了训练时间和成本。 推理:对于使用测试时计算的新一代推理模型 ,高令牌生成速率至关重要,因为每个推理请求都可以在每个查询中生成大量令牌。CUDA 12.8 新流 API 可减少对主机 CPU 的调用,从而缩短一次内核处理与下一次启动之间的时间,从而提高令牌生成率。这会在固定时间预算内生成更多 token,帮助模型推理更多并提高智能。 如需了解详情, 请参阅使用条件节点的 CUDA 图形中的动态控制流。 适用于 LLMs 的 Blackwell CUTLASS 内核 自 2017 年首次推出以来, CUTLASS 一直在推动研究人员和开发者在 NVIDIA GPUs 上实施高性能 CUDA 核函数。通过为开发者提供全面的工具来设计自定义操作 (例如 GEMMs 和 Convolutions),CUTLASS 在开发硬件感知算法方面发挥了至关重要的作用,推动了 FlashAttention 等帮助激发现代 AI 的突破。 随着支持 CUDA 12.8 的 CUTLASS 3.8 的发布,NVIDIA 将扩展对 Blackwell 架构的支持,使开发者能够利用新一代 Tensor Core 来支持所有新的数据类型。这包括新的窄精度 MX 格式和 NVIDIA 开发的 FP4 格式,可提高计算吞吐量。图 1 显示,对于 Tensor Core 运算,CUTLASS 可实现高达 98% 的相对峰值性能。 图 1. 
Blackwell CUTLASS GEMM 性能相对于预期峰值,可提供高达 98% 的 Blackwell 峰值性能 对于 DeepSeek-V3 和 DeepSeek-R1,分组的 GEMM 在推理期间所需的 MoE 计算中占很大比例。这些运算支持在单个持久性核函数启动中对不同的矩阵大小、缩放系数和融合进行分组和并行化。借助 CUTLASS,在 Blackwell 以 FP4,Grouped GEMM 内核性能增加高达 5 倍,相比使用 FP16 的 H200。 图 2、与 Hopper 相比,DeepSeek 中使用的用于 MoE 推理的 CUTLASS 分组 GEMM 性能在 Blackwell 上在各种精度下的性能提升高达 5 倍 NVIDIA Nsight 开发者工具 NVIDIA Nsight Compute 2025.1 是首个支持 Blackwell 架构的官方版本。更新包括显存图表中 Blackwell 硬件 Tensor 内存的可视化,以及 Tensor Core 性能数据。 图 3、Nsight Compute 内存图中的 Tensor 内存流量 它还对日益流行的范围分析功能进行了多项改进。用户现在可以在已分析的范围内收集源级指标,包括已执行指令和内存访问信息。此更新还启用了针对范围的引导分析规则评估。这种用于识别性能问题的内置专业知识是 NVIDIA Nsight Compute 的关键组件。此版本报告了内核堆栈大小,并添加了自定义工具提示,以帮助用户了解其工作负载性能。 此版本的 Compute Sanitizer 是一款自动正确性检查工具,增加了对 Python 调用堆栈的支持,可在通过 Python 应用启动内核时准确定位内核正确性问题。此外,用于 Blackwell 的新 Tensor Core MMA 护栏可以报告与 Tensor Core 编程相关的错误。在编译程序时,可以通过添加 PTXAS 标志 -g-tmem-access-check 来启用这些功能。常见错误的示例包括访问未分配的 tensor 内存、无效的地址以及使用无效的分配器。 数学库更新 借助 CUDA 工具包 12.8,我们获得了一些新的增强功能库,这些增强功能利用了新的 Blackwell 架构,并有助于加速 AI、数据科学、图形和仿真以及高性能科学计算领域的应用程序。 新功能 cuBLAS API 经过扩展,支持微缩 4 位和 8 位浮点混合精度张量核心加速矩阵乘法,可实现 10.0(Blackwell)及更高版本的计算能力。 为 Windows x64 上的 NVIDIA Ampere GPU 架构和 Blackwell GeForce 级 GPU 引入了对 CUDA in Graphics (CIG) 的初步支持。现在,系统会自动检测 CIG 上下文,并且 cuBLAS 会选择符合 CIG 共享内存使用限制的内核。 cuSOLVER 现在支持 zsytrf/zsytrs,这是一款无需旋转的复杂对称直接求解器。 nvJPEG 现在支持 Tegra 架构。 NPP 现在为 DRIVE Thor 架构提供支持。 cudaStreamGetDevice 应用程序通常使用 CUDA 流提供对 GPU 资源的有序访问。CUDA 流实例与固定的 CUDA 设备相关联。在用于处理多台设备的应用中,在某些情况下,为给定流获取底层设备的句柄有助于根据设备特性定制应用。 以前,CUDA API 没有提供检索与 CUDA 流关联的设备的机制;开发者必须自行追踪。添加 cudaStreamGetDevice CUDA API 以检索与 CUDA 流关联的设备,可以简化应用。 编译器更新 新的编译器更新包括以下内容: CUDA 工具包 12.8 版本引入了对作为主机端编译器的 GCC 14 的支持。 现在,Blackwell 架构的默认高级优化器基于 LLVM 18 nvdisasm 现在支持发射 JSON 格式的 SASS 反汇编。 加速 Python 更新 以下两个测试版现已面向 Python 用户提供: CUDA Python 已发布名为 cuda.core 的新惯用对象模型的早期原型,并将 CUDA 绑定移至子模块 cuda.bindings 。有关更多信息,请参阅 NVIDIA/cuda-python GitHub 存储库中的文档。 CUDA 核心计算库 ( CCCL ) 已发布用于并行和协作算法的早期 Python 原型,使您能够使用线程级并行性以及来自纯 Python 代码的用户定义类型和函数。详细了解 CCCL。 此外,CuPy 团队还将发布新版本,其中的 Blackwell 补丁经过验证,现已全面推出。 功能齐全的架构 在 CUDA 工具包 12.8 版本中,我们现在认为 Maxwell、Pascal 和 Volta 架构功能齐全,并且即将发布的版本将冻结对这些架构的支持。 这意味着,在未来的版本中,不会向驱动添加任何新功能来启用支持 Maxwell、Pascal 和 Volta 架构的新 CUDA 工具包功能。最终用户将能够在其生命周期中使用受支持的即将推出的 LTS 驱动分支,在 Maxwell、Pascal 和 Volta 架构上运行现有的软件堆栈和应用。 从版本 12.8 开始,开发者在运行针对这些架构的离线编译器时,将在使用 nvcc 、 nvrtc 和 nvjitlink 时输出警告消息。 在下一个主要 CUDA 工具包版本中,将从编译器中删除对 Maxwell、Pascal 和 Volta 架构的离线编译支持。即将推出的用于生产应用程序执行的 LTS 驱动以及 Maxwell、Pascal 和 Volta 应用程序的 JIT 编译将在正常的 3 年期 LTS 支持窗口期内获得支持。 如需了解更多详情,请参阅 CUDA Toolkit 12.8 版本说明 。 总结 CUDA 工具包 12.8 版本为 NVIDIA Blackwell 架构提供完整的功能支持。无论您是使用 C++ 还是 Python 开发应用程序,此版本都将继续为最新的 NVIDIA GPU、加速库、编译器和开发者工具提供增强支持。 想要了解更多信息?查看 CUDA 文档 ,浏览最新的 NVIDIA Deep Learning Institute (DLI) 产品 ,并访问 NGC 目录 。在 CUDA Developer Forums 中提出问题并加入对话。 致谢 感谢以下 NVIDIA 贡献者:Stephen Jones、Jackson Marusarz、Becca Zandstein、Andy Terrel、Ashraf Eassa、Matt Nicely 和 Mridula Prakash。