Clem πŸ€— PRO (clem)

AI & ML interests

multi-modal, time-series, biology and chemistry

Recent Activity

upvoted a collection about 9 hours ago: Tulu 3 Models
liked a model about 9 hours ago: allenai/Llama-3.1-Tulu-3-8B

Organizations

Hugging Face, Objective Function, Pied Piper, Society & Ethics, Text Generation Inference, testifly, HugGAN Community, Hugging Face Fellows, Gradio-Blocks-Party, HuggingFaceM4, Open-Source AI Meetup, Hugging Face OSS Metrics, Hugging Face Smol Cluster, huggingPartyParis, Unofficial Mistral Community, Journalists on Hugging Face, Major TOM, MLX Community, Miami AI Hub, Social Post Explorers, Paris AI Running Club, Hugging Face for Legal, Hugging Face Party @ PyTorch Conference, Nerdy Face, open/ acc, Bluesky Community

clem's activity

upvoted an article about 5 hours ago: How to deploy and fine-tune DeepSeek models on AWS (10 upvotes)
upvoted an article 1 day ago: Open-R1: a fully open reproduction of DeepSeek-R1 (460 upvotes)
upvoted an article 2 days ago: Welcome to Inference Providers on the Hub πŸ”₯ (171 upvotes)
reacted to fdaudens's post with ❀️ 3 days ago:
Yes, DeepSeek R1's release is impressive. But the real story is what happened in just 7 days after:

- Original release: 8 models, 540K downloads. Just the beginning...

- The community turned those open-weight models into more than 550 new models on Hugging Face. Total downloads? 2.5M, nearly 5x the originals.

The reason? DeepSeek models are open-weight, letting anyone build on top of them. Notably, the community focused on quantized versions for better efficiency and accessibility: they want models that use less memory, run faster, and are more energy-efficient.

When you empower builders, innovation explodes. For everyone. πŸš€

The most popular community model? @bartowski's DeepSeek-R1-Distill-Qwen-32B-GGUF version, with 1M downloads alone.
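
For anyone curious how builders pick up these community quantizations, here is a minimal sketch using huggingface_hub. The repo id is the one named in the post; the GGUF filename is an assumption, so check the repository's files for the exact quant you want.

```python
# Minimal sketch: fetch a community GGUF quantization from the Hub.
# The repo_id comes from the post above; the filename is an assumed
# quant name -- check the repo's "Files and versions" tab for real options.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="bartowski/DeepSeek-R1-Distill-Qwen-32B-GGUF",
    filename="DeepSeek-R1-Distill-Qwen-32B-Q4_K_M.gguf",  # hypothetical filename
)
print(gguf_path)  # local file path, ready for llama.cpp or another GGUF runtime
```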
reacted to mitkox's post with πŸ‘πŸš€ 3 days ago:
llama.cpp is 26.8% faster than ollama.
I upgraded both and, using the same settings, ran the same DeepSeek R1 Distill 1.5B on the same hardware, so it's an apples-to-apples comparison.

Total duration:
llama.cpp 6.85 sec <- 26.8% faster
ollama 8.69 sec

Breakdown by phase:
Model loading
llama.cpp 241 ms <- 2x faster
ollama 553 ms

Prompt processing
llama.cpp 416.04 tokens/s with an eval time of 45.67 ms <- 10x faster
ollama 42.17 tokens/s with an eval time of 498 ms

Token generation
llama.cpp 137.79 tokens/s with an eval time of 6.62 sec <- 13% faster
ollama 122.07 tokens/s with an eval time 7.64 sec

llama.cpp is LLM inference in C/C++; ollama adds abstraction layers and marketing.

Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it.
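
A rough way to reproduce this kind of apples-to-apples comparison is to time the same prompt through both runtimes; the headline number is just the ratio of total durations (8.69 s / 6.85 s β‰ˆ 1.27, i.e. about 26.8% faster). The sketch below is not the post author's exact methodology; the GGUF path and the ollama model tag are assumptions to adapt to your own setup.

```python
# Rough timing sketch (not the original benchmark harness): run one prompt
# through llama.cpp's CLI and through ollama, then compare wall-clock time.
# The local GGUF path and the ollama tag below are assumptions.
import subprocess
import time

PROMPT = "Explain the difference between supervised and unsupervised learning."

def timed(cmd):
    """Run a command to completion and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

llama_cpp_s = timed([
    "llama-cli",                                        # llama.cpp CLI binary
    "-m", "DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M.gguf",  # hypothetical local GGUF file
    "-p", PROMPT,
    "-n", "128",                                        # cap new tokens so runs are comparable
])
ollama_s = timed(["ollama", "run", "deepseek-r1:1.5b", PROMPT])  # assumed model tag

faster = "llama.cpp" if llama_cpp_s < ollama_s else "ollama"
ratio = max(llama_cpp_s, ollama_s) / min(llama_cpp_s, ollama_s)
print(f"llama.cpp: {llama_cpp_s:.2f}s  ollama: {ollama_s:.2f}s")
print(f"{faster} is {(ratio - 1) * 100:.1f}% faster on total wall-clock time")
```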
posted an update 3 days ago:
AI is not a zero-sum game. Open-source AI is the tide that lifts all boats!
reacted to merve's post with πŸ‘€πŸ€—πŸ”₯ 6 days ago:
Oof, what a week! πŸ₯΅ So many things have happened, let's recap! merve/jan-24-releases-6793d610774073328eac67a9

Multimodal πŸ’¬
- We have released SmolVLM, the tiniest VLMs, coming in 256M and 500M, along with their retrieval models ColSmol for multimodal RAG πŸ’— (see the loading sketch at the end of this post)
- UI-TARS is a new family of models by ByteDance to unlock agentic GUI control 🀯, in 2B, 7B, and 72B
- Alibaba DAMO lab released VideoLlama3, new video LMs that come in 2B and 7B
- MiniMaxAI released Minimax-VL-01, whose decoder is based on the MiniMax-Text-01 456B MoE model with long context
- Dataset: Yale released a new benchmark called MMVU
- Dataset: CAIS released Humanity's Last Exam (HLE), a new challenging multimodal benchmark

LLMs πŸ“–
- DeepSeek-R1 & DeepSeek-R1-Zero: gigantic 660B reasoning models by DeepSeek, plus six distilled dense models, on par with o1 and released under an MIT license! 🀯
- Qwen2.5-Math-PRM: new math models by Qwen in 7B and 72B
- NVIDIA released AceMath and AceInstruct, a new family of models along with their datasets (SFT and reward ones too!)

Audio πŸ—£οΈ
- Llasa is a new speech synthesis model based on Llama that comes in 1B, 3B, and 8B
- TangoFlux is a new audio generation model trained from scratch and aligned with CRPO

Image/Video/3D Generation ⏯️
- Flex.1-alpha is a new 8B pre-trained diffusion model by ostris, similar to Flux
- Tencent released Hunyuan3D-2, a new model for 3D asset generation from images
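
To give a flavor of trying one of these releases, here is a minimal sketch of loading SmolVLM with transformers. The checkpoint id is an assumption (check the SmolVLM collection on the Hub for the exact 256M and 500M repo names), and a blank image stands in for a real one.

```python
# Minimal sketch: run a SmolVLM-style checkpoint with transformers.
# The checkpoint id is assumed -- look up the exact repo name on the Hub.
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

checkpoint = "HuggingFaceTB/SmolVLM-256M-Instruct"  # hypothetical repo id
processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForVision2Seq.from_pretrained(checkpoint)

image = Image.new("RGB", (512, 512), "white")  # stand-in; use a real image in practice

# Chat-template input: one image placeholder plus a text question.
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image in one sentence."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

generated = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```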
New activity in deepseek-ai/DeepSeek-R1 6 days ago