hassenhamdi

AI & ML interests

None yet

Recent Activity

liked a Space 3 days ago
hf-audio/open_asr_leaderboard
liked a model 3 days ago
SparkAudio/Spark-TTS-0.5B
reacted to singhsidhukuldeep's post with 🧠 15 days ago
O1 Embedder: Transforming Retrieval Models with Reasoning Capabilities

Organizations

ONNX Community · Hugging Face Discord Community · Nerdy Face · HasSensi_org

hassenhamdi's activity

reacted to singhsidhukuldeep's post with 🧠 15 days ago
O1 Embedder: Transforming Retrieval Models with Reasoning Capabilities

Researchers from the University of Science and Technology of China and the Beijing Academy of Artificial Intelligence have developed a novel retrieval model that mimics the slow-thinking capabilities of reasoning-focused LLMs like OpenAI's O1 and DeepSeek's R1.

Unlike traditional embedding models that directly match queries with documents, O1 Embedder first generates thoughtful reflections about the query before performing retrieval. This two-step process significantly improves performance on complex retrieval tasks, especially those requiring intensive reasoning or zero-shot generalization to new domains.
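
A minimal sketch of the think-then-retrieve idea (not the authors' code; the models, prompt, and corpus below are placeholders — O1 Embedder itself uses a single fine-tuned LLM for both steps):

```python
import numpy as np
from transformers import pipeline
from sentence_transformers import SentenceTransformer

# Placeholder models standing in for the paper's fine-tuned LLM.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
embedder = SentenceTransformer("all-MiniLM-L6-v2")

corpus = [
    "Shuffling training data decorrelates consecutive SGD updates.",
    "Self-attention lets transformers model long-range dependencies.",
]
query = "Why does shuffling help SGD converge?"

# Step 1: "think" -- generate a short reflection about the query.
prompt = f"What evidence would answer the question: {query}\nThought:"
thought = generator(prompt, max_new_tokens=64, return_full_text=False)[0]["generated_text"]

# Step 2: embed query + thought, then rank documents by cosine similarity.
q_vec = embedder.encode(query + " " + thought, normalize_embeddings=True)
d_vecs = embedder.encode(corpus, normalize_embeddings=True)
ranking = np.argsort(-(d_vecs @ q_vec))
print([corpus[i] for i in ranking])
```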

The technical implementation is fascinating:

- The model integrates two essential functions: Thinking and Embedding
- It uses an "Exploration-Refinement" data synthesis workflow where initial thoughts are generated by an LLM and refined by a retrieval committee
- A multi-task training method fine-tunes a pre-trained LLM to generate retrieval thoughts via behavior cloning while simultaneously learning embedding capabilities through contrastive learning
- Memory-efficient joint training lets both tasks share encoding results, dramatically increasing the effective batch size (a rough sketch of the joint objective follows this list)
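
Here is a rough PyTorch sketch of that joint objective as I read the description (behavior cloning on thought tokens plus in-batch contrastive learning, sharing one forward pass); the special tokens, batch fields, and shapes are my assumptions, not the paper's code:

```python
import torch
import torch.nn.functional as F

def joint_loss(model, batch, temperature=0.05, alpha=1.0):
    # One shared forward pass over "query [THOUGHT] thought [EMB]" sequences
    # serves both objectives, which is what makes large batches affordable.
    out = model(batch["input_ids"], attention_mask=batch["attn_mask"],
                output_hidden_states=True)

    # 1) Behavior cloning: next-token cross-entropy, masked (label -100)
    #    everywhere except the thought tokens.
    logits = out.logits[:, :-1].reshape(-1, out.logits.size(-1))
    labels = batch["labels"][:, 1:].reshape(-1)
    bc_loss = F.cross_entropy(logits, labels, ignore_index=-100)

    # 2) Contrastive (InfoNCE): the hidden state at the final [EMB] position is
    #    the query embedding; positives are aligned docs, negatives are in-batch.
    q = F.normalize(out.hidden_states[-1][:, -1], dim=-1)   # (B, d)
    d = F.normalize(batch["doc_embs"], dim=-1)              # (B, d)
    sim = q @ d.T / temperature                             # (B, B)
    cl_loss = F.cross_entropy(sim, torch.arange(sim.size(0), device=sim.device))

    return bc_loss + alpha * cl_loss
```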

The results are impressive: O1 Embedder outperforms existing methods across 12 datasets, both in-domain and out-of-domain. For example, it achieves a 3.9% improvement on Natural Questions and a 3.0% gain on HotPotQA over models without thinking capabilities.

This approach represents a significant paradigm shift in retrieval technology, bridging the gap between traditional dense retrieval and the reasoning capabilities of large language models.

What do you think about this approach? Could "thinking before retrieval" transform how we build search systems?
replied to wassemgtk's post 17 days ago

I'd like to see performance on well-known benchmark datasets.

reacted to wassemgtk's post with 🧠 17 days ago
# GESAL: Real-Time Adaptation for LLMs


We're excited to unveil **Graph-Enhanced Singular Adaptive Learning (GESAL)**, a framework that lets LLMs like meta-llama/Llama-3.2-1B adapt in real time using user feedback. Check out the code and white paper on GitHub!

šŸ”— **Code**: [https://github.com/writer/AI-Adaptive-Learning-GESAL](https://github.com/writer/AI-Adaptive-Learning-GESAL)

---

## Why GESAL?

Static LLMs struggle to adapt without heavy retraining. GESAL solves this with:
- **SVF**: Adapts weights via \( W' = U (\Sigma \cdot z) V^T \), training only the singular-value scaling vector \( z \), so very few parameters are learned.
- **Graph Memory**: Stores adaptations in graph nodes for scalability.
- **RL**: Updates \( z \) via the policy-gradient objective \( J(z) = \mathbb{E}[\log \pi_z(y|x)\, r] \), where \( r \) is user feedback (both updates are sketched after this list).
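
Under my reading of those two formulas (the GitHub repo is the authoritative source), an SVF layer and the feedback update might look roughly like this; all names are illustrative:

```python
import torch
import torch.nn as nn

class SVFLinear(nn.Module):
    """Freeze W = U diag(S) V^T; train only z, one scalar per singular value."""
    def __init__(self, weight: torch.Tensor):
        super().__init__()
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.register_buffer("U", U)
        self.register_buffer("S", S)
        self.register_buffer("Vh", Vh)
        self.z = nn.Parameter(torch.ones_like(S))  # the only trainable parameters

    def forward(self, x):
        # W' = U (Sigma * z) V^T, applied as a standard linear map.
        W_adapted = self.U @ torch.diag(self.S * self.z) @ self.Vh
        return x @ W_adapted.T

# REINFORCE-style step for J(z) = E[log pi_z(y|x) * r]:
# ascend r * log-prob of the sampled response (negated, since optimizers minimize).
def feedback_step(response_log_prob, reward, optimizer):
    loss = -reward * response_log_prob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```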

---

## How It Works

Ask "How many Rā€™s in ā€˜strawberryā€™?" If it says "2" and you say "no," GESAL learns to say "3" next time, avoiding repeats.

---

## Try It

Built with Hugging Face's transformers:

```bash
pip install transformers torch numpy
python Adaptive_Learning_(GESAL).py
```

Needs a Hugging Face token for Llama-3.2-1B.

---

## Results

GESAL hits 95% accuracy after 5 rounds of feedback vs. LoRA's 70%. It's efficient (~0.5M parameters) and scalable.
reacted to freddyaboulton's post with 🚀 17 days ago
Getting WebRTC and WebSockets right in Python is very tricky. If you've tried to wrap an LLM in a real-time audio layer, then you know what I'm talking about.

That's where FastRTC comes in! It makes WebRTC and WebSocket streams super easy, with minimal code and overhead.
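
For a taste, here is a minimal echo handler along the lines of the FastRTC quickstart; treat the exact signatures as approximate and check hf.co/fastrtc for the current API:

```python
from fastrtc import ReplyOnPause, Stream

def echo(audio):
    # audio arrives as a (sample_rate, numpy_array) chunk; yield chunks to send back.
    yield audio

# ReplyOnPause waits for the speaker to pause, then invokes the handler.
stream = Stream(handler=ReplyOnPause(echo), modality="audio", mode="send-receive")
stream.ui.launch()  # built-in Gradio UI over WebRTC
```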

Check out our org: hf.co/fastrtc
reacted to KonradSzafer's post with 👀 17 days ago
I've been experimenting with a "Tech Tree" to make ML research more systematic and transparent. It turned out to help me spot hidden interactions between experiments and share progress more easily. I wrote a short blog post with examples and insights! KonradSzafer/tech_tree_blog