DeepSignal-4B-V1 (GGUF)

This repository provides a GGUF model file for local inference (e.g., llama.cpp / LM Studio). It is intended for traffic-signal-control analysis and related text-generation workflows. For details, check our repository at AIMSLaboratory/DeepSignal.

Files

  • DeepSignal-4B_V1.F16.gguf
  • config.json
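
If the model is hosted on the Hugging Face Hub, the GGUF file can also be fetched programmatically. The sketch below uses the huggingface_hub package; the repo_id shown is a placeholder assumption, not a confirmed repository name.

```python
# Minimal download sketch. Assumes the huggingface_hub package is installed;
# the repo_id below is a placeholder and must be replaced with the actual Hub ID.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="AIMSLaboratory/DeepSignal-4B-V1",  # placeholder repo ID (assumption)
    filename="DeepSignal-4B_V1.F16.gguf",
)
print(gguf_path)  # local path to the cached GGUF file
```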

Quickstart (llama.cpp)

```bash
llama-cli -m DeepSignal-4B_V1.F16.gguf -p "You are a traffic management expert. You can use your traffic knowledge to solve the traffic signal control task.
Based on the given traffic {scene} and {state}, predict the next signal phase and its duration.
You must answer directly, the format must be: next signal phase: {number}, duration: {seconds} seconds
where the number is the phase index (starting from 0) and the seconds is the duration (usually between 20-90 seconds)."
```

Replace {scene} with a description of the intersection (total number of phases, which phase controls which lanes/directions, the current phase ID, etc.) and {state} with the current traffic state (number of queuing vehicles per lane, throughput vehicles per lane during the current phase, etc.), as in the sketch below.
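
As a rough illustration, the snippet below builds such a prompt and runs it through llama-cpp-python instead of llama-cli. The example scene/state text, the sampling settings, and the output-parsing regex are assumptions for demonstration, not part of the released model.

```python
# Sketch only: assumes the llama-cpp-python package; the scene/state strings and
# parsing logic below are illustrative assumptions, not fixed by this model card.
import re
from llama_cpp import Llama

llm = Llama(model_path="DeepSignal-4B_V1.F16.gguf", n_ctx=4096, verbose=False)

scene = (
    "The intersection has 4 signal phases. Phase 0 serves north-south through "
    "lanes, phase 1 north-south left turns, phase 2 east-west through lanes, "
    "phase 3 east-west left turns. The current phase is 2."
)
state = (
    "Queuing vehicles per lane: [3, 5, 2, 4, 6, 1, 0, 2]. "
    "Throughput vehicles per lane during the current phase: [2, 1, 0, 3, 1, 0, 0, 1]."
)

prompt = (
    "You are a traffic management expert. You can use your traffic knowledge to "
    "solve the traffic signal control task.\n"
    f"Based on the given traffic {scene} and {state}, predict the next signal "
    "phase and its duration.\n"
    "You must answer directly, the format must be: next signal phase: {number}, "
    "duration: {seconds} seconds\n"
    "where the number is the phase index (starting from 0) and the seconds is the "
    "duration (usually between 20-90 seconds)."
)

out = llm(prompt, max_tokens=64, temperature=0.0)
text = out["choices"][0]["text"]

# Expect an answer like "next signal phase: 0, duration: 35 seconds".
match = re.search(r"next signal phase:\s*(\d+),\s*duration:\s*(\d+)\s*seconds", text)
if match:
    phase, duration = int(match.group(1)), int(match.group(2))
    print(f"phase={phase}, duration={duration}s")
else:
    print("Unparsed model output:", text)
```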

Evaluation (Traffic Simulation)

Performance Metrics Comparison by Model

| Model | Avg Saturation | Avg Queue Length | Max Saturation | Max Queue Length | Avg Congestion Index |
|---|---|---|---|---|---|
| Qwen3-30B-A3B | 0.1550 | 5.5000 | 0.1550 | 5.4995 | 0.1500 |
| DeepSignal-4B (Ours) | 0.1580 | 5.5500 | 0.1580 | 5.5498 | 0.1550 |
| LightGPT-8B-Llama3 | 0.1720 | 6.1000 | 0.1720 | 6.1000 | 0.1950 |
| SFT | 0.1780 | 6.2500 | 0.1780 | 6.2500 | 0.2050 |
| Last Round GRPO | 0.1850 | 6.4500 | 0.1850 | 6.4500 | 0.2150 |
| Qwen3-4B | 0.1980 | 7.2000 | 0.1980 | 7.1989 | 0.2450 |
| Max Pressure | 0.2050 | 7.8000 | 0.2049 | 7.7968 | 0.2550 |
| GPT-OSS-20B | 0.2250 | 8.5001 | 0.2250 | 8.4933 | 0.3050 |

Congestion Level Distribution by Model (%)

| Model | Light congestion | Smooth | Very smooth |
|---|---|---|---|
| DeepSignal-4B (Ours) | 0.00 | 12.00 | 88.00 |
| GPT-OSS-20B | 2.00 | 53.33 | 44.67 |
| LightGPT-8B-Llama3 | 0.00 | 21.00 | 79.00 |
| Max Pressure | 0.00 | 36.44 | 63.56 |
| Qwen3-30B-A3B | 0.00 | 10.00 | 90.00 |
| Qwen3-4B | 2.33 | 32.00 | 65.67 |
| Qwen3-4B-SFT | 0.00 | 23.33 | 76.67 |
Model details

  • Format: GGUF, 16-bit (F16)
  • Parameters: ~4B
  • Architecture: qwen3