Papers
arxiv:2509.02563

DynaGuard: A Dynamic Guardrail Model With User-Defined Policies

Published on Sep 2
· Submitted by nsjain on Sep 3
Authors:
,
,
,
,
,
,
,

Abstract

Dynamic guardian models evaluate text based on user-defined policies, offering fast and accurate detection of both static harms and free-form policy violations.

AI-generated summary

Guardian models are used to supervise and moderate the outputs of user-facing chatbots, enforcing guardrails and detecting bad behaviors. Standard guardian models like LlamaGuard detect predefined, static categories of harms. We propose dynamic guardian models that evaluate text based on user-defined policies, making them useful for different application domains that are not addressed by standard guardian models. Our dynamic guardian models can be used for fast detection of policy violations or with chain-of-thought reasoning that articulates and justifies the model outputs. Our dynamic guardian models match static models in detection accuracy for static harm categories while identifying violations of free-form policies with accuracy comparable to frontier reasoning models in a fraction of the time.

Community

Paper author

Check out our interactive demo and give us feedback for improvement!

Demo: https://huggingface.co/spaces/tomg-group-umd/DynaGuard
Project Page: https://taruschirag.github.io/DynaGuard/
Code: https://github.com/montehoover/DynaGuard

Paper author Paper submitter
This comment has been hidden

Sign up or log in to comment

Models citing this paper 3

Datasets citing this paper 1

Spaces citing this paper 1

Collections including this paper 2