
Quijada

RaulQF

AI & ML interests

Passionate about the intersection of Artificial Intelligence, Machine Learning, and Media Technology, I specialize in:

Computer Vision – Advanced image and video analysis for content understanding and automation.
Natural Language Processing – Extracting insights from text using cutting-edge NLP models.
Audio & Speech Processing – Enhancing accessibility through speech recognition, transcription, and dubbing solutions.
AI for Media & Broadcasting – Automating workflows for TV companies, from metadata extraction to content monitoring.
Optimization & Deployment – Scalable AI solutions leveraging GPU acceleration and cloud-based architectures.

Recent Activity

updated a model about 5 hours ago
Ugiat/NERCat
published a model about 5 hours ago
Ugiat/NERCat
updated a dataset about 6 hours ago
Ugiat/ner-cat

Organizations

Hugging Face Discord Community, Ugiat Technologies

RaulQF's activity

updated a Space about 6 hours ago
published a dataset about 7 hours ago
published a Space about 7 hours ago
reacted to tomaarsen's post with πŸ”₯ 5 months ago
πŸ“£ Sentence Transformers v3.2.0 is out, marking the biggest release for inference in 2 years! 2 new backends for embedding models: ONNX (+ optimization & quantization) and OpenVINO, allowing for speedups up to 2x-3x AND Static Embeddings for 500x speedups at 10-20% accuracy cost.

1️⃣ ONNX Backend: This backend uses the ONNX Runtime to accelerate model inference on both CPU and GPU, reaching up to 1.4x-3x speedup depending on the precision. We also introduce 2 helper methods for optimizing and quantizing models for (much) faster inference.
2️⃣ OpenVINO Backend: This backend uses Intel's OpenVINO toolkit instead, outperforming ONNX in some situations on CPU.

Usage is as simple as SentenceTransformer("all-MiniLM-L6-v2", backend="onnx"). Does your model not have an ONNX or OpenVINO file yet? No worries - it'll be autoexported for you. Thank me later πŸ˜‰

πŸ”’ Another major new feature is Static Embeddings: think word embeddings like GloVe and word2vec, but modernized. Static Embeddings are bags of token embeddings that are summed together to create text embeddings, allowing for lightning-fast embeddings that don't require any neural networks. They're initialized in one of 2 ways:

1️⃣ via Model2Vec, a new technique for distilling any Sentence Transformer model into static embeddings. Either load a pre-distilled model with from_model2vec, or do the distillation yourself with from_distillation. It'll only take 5 seconds on GPU & 2 minutes on CPU, no dataset needed.
2️⃣ Random initialization. This requires finetuning, but finetuning is extremely quick (e.g. I trained with 3 million pairs in 7 minutes). My final model was 6.6% worse than bge-base-en-v1.5, but 500x faster on CPU.
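The core idea behind Static Embeddings — a text vector built by pooling fixed per-token vectors, with no neural network in the loop — can be sketched in a few lines of plain Python. The tiny vocabulary and 4-dimensional vectors below are toy values for illustration only, not real distilled embeddings:

```python
# Sketch of the Static Embeddings idea: a text embedding is just the
# pooled sum of fixed per-token vectors — no neural network at inference.
# TOY_EMBEDDINGS is a made-up 4-dimensional vocabulary for demonstration.

TOY_EMBEDDINGS = {
    "fast":  [0.9, 0.1, 0.0, 0.2],
    "model": [0.3, 0.8, 0.1, 0.0],
    "slow":  [-0.9, 0.1, 0.0, 0.2],
}

def embed(text: str) -> list[float]:
    """Sum the token vectors, then divide by the token count (mean pooling)."""
    tokens = [t for t in text.lower().split() if t in TOY_EMBEDDINGS]
    if not tokens:
        return [0.0] * 4
    dims = zip(*(TOY_EMBEDDINGS[t] for t in tokens))
    return [sum(d) / len(tokens) for d in dims]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

fast = embed("fast model")
slow = embed("slow model")
```

Because each lookup is a dictionary read plus a sum, this is why the approach runs orders of magnitude faster than a transformer forward pass — the trade-off being that token vectors are context-independent.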

Full release notes: https://github.com/UKPLab/sentence-transformers/releases/tag/v3.2.0
Documentation on Speeding up Inference: https://sbert.net/docs/sentence_transformer/usage/efficiency.html