Model Summary: `Mozilla/iab-multitask-inference` is a multitask, transformer-based model that jointly learns semantic representations for search queries and web pages, aligned to a shared IAB Content Taxonomy goal space.

It uses a shared RoBERTa encoder with separate task-specific heads and incorporates domain context for web pages. The output embeddings are trained to align with pretrained IAB goal embeddings, allowing both classification and nearest-neighbor retrieval.
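The card does not include usage code, so the following is a minimal sketch of the nearest-neighbor retrieval step, assuming the model emits query embeddings in the same space as the pretrained IAB goal embeddings. The goal labels, embedding dimension, and random vectors here are illustrative placeholders; in practice the query embedding would come from the model's query head rather than being sampled.

```python
import numpy as np

# Illustrative stand-ins only: real IAB goal embeddings would be loaded
# from the pretrained goal-embedding table, not sampled at random.
EMBED_DIM = 8
goal_labels = ["Technology", "Travel", "Sports"]
rng = np.random.default_rng(0)
goal_embeddings = rng.normal(size=(len(goal_labels), EMBED_DIM))

def nearest_goal(query_embedding: np.ndarray) -> str:
    """Return the goal label whose embedding has the highest cosine
    similarity to the query embedding."""
    q = query_embedding / np.linalg.norm(query_embedding)
    g = goal_embeddings / np.linalg.norm(goal_embeddings, axis=1, keepdims=True)
    return goal_labels[int(np.argmax(g @ q))]

# A query embedding near the "Travel" goal vector should retrieve "Travel".
query = goal_embeddings[1] + 0.01 * rng.normal(size=EMBED_DIM)
print(nearest_goal(query))
```

Because both classification and retrieval operate on the same aligned embeddings, classification can be treated as exactly this retrieval step with the goal set as the label set.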

