# YOLOv12-N

## Model Description

**YOLOv12-N** is the nano (N) variant of the 12th-generation YOLO (You Only Look Once) real-time object detector. It builds on prior YOLO models with improved backbone/neck architectures, updated training strategies, and optimizations for both high-performance GPUs and edge devices.

## Features

- **Real-time object detection** optimized for low-latency inference.
- **High accuracy** across diverse categories and challenging environments.
- **Lightweight variants** suitable for mobile and embedded deployment.
- **Scalable**: runs from smartphones to multi-GPU servers.
- **Extensible**: fine-tuning supported for domain-specific datasets.

## Use Cases

- Autonomous driving and ADAS (Advanced Driver Assistance Systems)
- Surveillance and security monitoring
- Industrial automation and defect detection
- Retail analytics and inventory monitoring
- Sports analytics and event detection

## Inputs and Outputs

**Input**:

- RGB images or video frames (any resolution; auto-resized during preprocessing).

**Output**:

- Bounding boxes `(x, y, w, h)`
- Class labels
- Confidence scores

---

## How to Use

> ⚠️ **Hardware requirement:** the model currently runs **only on Qualcomm NPUs** (e.g., Snapdragon-powered AI PCs).
> Apple NPU support is planned next.

### 1) Install Nexa-SDK

- Download the SDK and follow the steps under the "Deploy" section of Nexa's model page: [Download Windows arm64 SDK](https://sdk.nexa.ai/model/YOLOv12%E2%80%91N)
- (Other platforms coming soon)

### 2) Get an access token

Create a token in the Model Hub, then log in by passing it to the CLI:

```bash
nexa config set license '<your-access-token>'
```

### 3) Run the model

```bash
nexa infer NexaAI/yolov12-npu
```

---

## License

- Licensed under [AGPL-3.0](https://github.com/ultralytics/ultralytics?tab=AGPL-3.0-1-ov-file#readme) (same as Ultralytics YOLO).

## References

- Original repo: [https://github.com/sunsmarterjie/yolov12](https://github.com/sunsmarterjie/yolov12)
- Ultralytics YOLO family: [https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics)
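
## Appendix: Working with Detections

The exact Python bindings exposed by Nexa-SDK are not documented here, so the sketch below does not call the SDK. It only illustrates the output format listed under "Inputs and Outputs": a hypothetical `Detection` record holding an `(x, y, w, h)` box (assumed to be center-based, as is conventional for YOLO models), a class label, and a confidence score, plus helpers for converting boxes to corner coordinates and filtering by confidence.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    """Hypothetical container mirroring the output fields listed above."""
    x: float      # box center x (assumed center-based, typical for YOLO)
    y: float      # box center y
    w: float      # box width
    h: float      # box height
    label: str    # class label
    score: float  # confidence score


def to_corners(d: Detection) -> tuple[float, float, float, float]:
    """Convert a center-based (x, y, w, h) box to (x1, y1, x2, y2) corners."""
    return (d.x - d.w / 2, d.y - d.h / 2, d.x + d.w / 2, d.y + d.h / 2)


def filter_detections(dets: list[Detection], threshold: float = 0.5) -> list[Detection]:
    """Keep only detections whose confidence score clears the threshold."""
    return [d for d in dets if d.score >= threshold]


if __name__ == "__main__":
    # Illustrative values only; real detections would come from the model.
    detections = [
        Detection(320.0, 240.0, 100.0, 80.0, "person", 0.91),
        Detection(100.0, 120.0, 40.0, 30.0, "dog", 0.32),
    ]
    for d in filter_detections(detections):
        print(d.label, to_corners(d), d.score)
```

In practice, you would populate the `detections` list from whatever structure `nexa infer` (or the SDK bindings) returns for your deployment, and verify against the SDK documentation whether boxes are center-based or top-left-based before converting.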