AI & ML interests

The Hugging Face Inference Endpoints Images repository allows AI Builders to collaborate and engage in creating awesome inference deployments

Recent Activity

AdinaY 
posted an update about 13 hours ago
🔥 August highlights from Chinese AI community

zh-ai-community/august-2025-china-open-source-highlights-68a2de5630f406edaf320e88

✨ Efficiency leads the month
- At scale: optimizing compute use in massive MoE models, e.g. DeepSeek V3.1
- In small models: lightweight & deployable, e.g. MiniCPM V 4.5, Step Audio 2-mini, Intern S1-mini, Ovis2.5-9B, etc.

✨ Reasoning + Agentic wave 🌊 Not just demos, but real product use cases.
- Meituan, DeepSeek: large-scale models tuned for reasoning & tools
- Qwen, GLM, InternLM: multimodal reasoning + agentic interaction
- CodeAgent, Prover, Baichuan-M2-32B: domain-focused (coding, logic, specialized reasoning)

✨ Open source is exploding across all types of companies!!
- Big tech: Tencent, ByteDance, Xiaomi, Kuaishou, Alibaba/Qwen, Skywork, Ant Group
- Startups: DeepSeek (yes, still a startup!), Zhipu, Baichuan, StepFun, OpenBMB
- New entrants: Meituan, RedNote
- Research labs: Shanghai AI Lab (InternLM, OpenGVLab)

✨ Open source was explicitly mentioned in the State Council’s new guidance on deepening the "AI+" strategy.
- Open-source: support communities, encourage contributions (incl. university credits & recognition), foster new application approaches, and build globally impactful ecosystems 👀

💡 The Chinese community didn’t slow down at all in August 🤯 September, the last month before the Golden Week holiday, may bring even more surprises.

Stay Tuned!
AdinaY 
posted an update about 18 hours ago
Hunyuan-MT-7B 🔥 open translation model released by Tencent Hunyuan

tencent/hunyuan-mt-68b42f76d473f82798882597

✨ Supports 33 languages, including 5 ethnic minority languages in China 👀
✨ Includes a translation ensemble model: Chimera-7B
✨ Full pipeline: pretrain > CPT > SFT > enhancement > ensemble refinement > SOTA performance at similar scale
AdinaY 
posted an update about 18 hours ago
From food delivery to frontier AI 🚀 Meituan, the leading lifestyle platform, just dropped its first open SoTA LLM: LongCat-Flash 🔥

meituan-longcat/LongCat-Flash-Chat

✨ 560B total / ~27B active MoE — MIT license
✨ 128k context length + advanced reasoning
✨ ScMoE design: 100+ TPS inference
✨ Stable large-scale training + strong agentic performance
AdinaY 
posted an update 4 days ago
USO 🎨 Unified customization model released by ByteDance Research

Demo: bytedance-research/USO
Model: bytedance-research/USO
Paper: USO: Unified Style and Subject-Driven Generation via Disentangled and Reward Learning (2508.18966)

✨ Large-scale triplet dataset (content, style, stylized)
✨ Disentangled learning: style alignment + content preservation
✨ Style Reward Learning (SRL) for higher fidelity
✨ USO-Bench: 1st benchmark for style & subject jointly
✨ SOTA results on subject consistency & style similarity
AdinaY 
posted an update 4 days ago
Step-Audio 2 🔥 New end-to-end multimodal LLM for audio & speech, released by StepFun

stepfun-ai/step-audio-2-68b003c3a47b273fffaf67a8

✨ Direct raw audio in, text & speech out, no ASR+LLM+TTS pipeline
✨ High-IQ reasoning: RL + CoT for paralinguistic cues
✨ Multimodal RAG + tool calling
✨ Emotion, timbre, dialect & style control
✨ SOTA on ASR, paralinguistic, speech dialog
AdinaY 
posted an update 7 days ago
🇨🇳 China’s State Council just released its “AI+” Action Plan (2025)

“The State Council’s Guidance on Deepening the Implementation of the ‘AI+’ Strategy”
zh-ai-community/china-ai-policy-research

✨ Goal: By 2035, AI will deeply empower all sectors, reshape productivity & society

✨ Focus on 6 pillars:
>Science & Tech
>Industry
>Consumption
>Public welfare
>Governance
>Global cooperation

✨ Highlights:
>Models: advance theory, efficient training/inference, evaluation system
>Data: high-quality datasets, IP/copyright reform, new incentives
>Compute: boost chips & clusters, improve national network, promote cloud standardization, and ensure inclusive, efficient, green, secure supply.
>Applications: AI-as-a-service, test bases, new standards
>Open-source: support communities, encourage contributions (incl. university credits & recognition), foster new application approaches, and build globally impactful ecosystems 👀
>Talent, policy & safety frameworks to secure sustainable growth
AdinaY 
posted an update 7 days ago
MiniCPM-V 4.5 🚀 New MLLM for image, multi-image & video understanding, running even on your phone, released by OpenBMB

openbmb/MiniCPM-V-4_5

✨ SOTA vision language capability
✨ 96× video token compression > high-FPS & long video reasoning
✨ Switchable fast vs deep thinking modes
✨ Strong OCR, document parsing, supports 30+ languages
AdinaY 
posted an update 7 days ago
InternVL3.5 🔥 New family of multimodal models by Shanghai AI Lab

OpenGVLab/internvl35-68ac87bd52ebe953485927fb

✨ 1B · 2B · 4B · 8B · 14B · 38B | MoE → 20B-A4B · 30B-A3B · 241B-A28B 📄 Apache 2.0
✨ +16% reasoning performance, 4.05× speedup vs InternVL3
✨ Cascade RL (offline + online) : stronger reasoning
✨ ViR: efficient visual token routing
✨ DvD: scalable vision–language deployment
✨ Supports GUI & embodied agency 🤖
AdinaY 
posted an update 12 days ago
Seed-OSS 🔥 The latest open LLM from the ByteDance Seed team

ByteDance-Seed/seed-oss-68a609f4201e788db05b5dcd

✨ 36B - Base & Instruct
✨ Apache 2.0
✨ Native 512K long context
✨ Strong reasoning & agentic intelligence
✨ 2 Base versions: with & without synthetic data
AdinaY 
posted an update 14 days ago
Before my vacation: Qwen releasing.
When I came back: Qwen still releasing.
Respect!!🫡

Meet Qwen Image Edit 🔥 the image editing version of Qwen-Image by @Alibaba_Qwen

Qwen/Qwen-Image-Edit

✨ Apache 2.0
✨ Semantic + Appearance Editing: rotate, restyle, add/remove 🎨
✨ Precise Text Editing → edit CN/EN text, keep style
a-r-r-o-w 
posted an update 29 days ago
As an ML practitioner, you've probably implemented the 3-loop matrix multiplication many times, but the naive implementation is terrible for GPU performance. Modern GPUs achieve peak performance through careful memory access patterns and minimizing scheduling overhead.

In naive matmul (M×K · K×N), the computation happens in tiles, both for the output matrix and for how you read chunks from the input matrices. Each thread-block processes one output tile by loading the corresponding tiles from the inputs (for sum-reduction across the K dimension), performing the computation, then terminating. The GPU launches many thread-blocks and schedules them across the available streaming multiprocessors (SMs). When an SM finishes one tile, it gets assigned a new thread-block for the next uncomputed tile. This way, multiple output tiles are computed in parallel across the SMs, but we pay the cost of launching a thread-block each time a new tile is computed.
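
To make the naive scheme concrete, here is a minimal CUDA sketch (illustrative only, not the code from the gist linked below): one thread-block stages one pair of input tiles in shared memory, accumulates across K, writes its output tile, and terminates. The kernel name and the TILE size are assumptions for the example.

```cuda
// Minimal sketch of the naive tiled kernel: C = A @ B with A (MxK), B (KxN),
// row-major floats. One thread-block computes one TILE x TILE output tile,
// then terminates. TILE and the kernel name are illustrative choices.
#define TILE 32

__global__ void tiled_matmul(const float* A, const float* B, float* C,
                             int M, int N, int K) {
    __shared__ float As[TILE][TILE];  // staged tile of A
    __shared__ float Bs[TILE][TILE];  // staged tile of B

    int row = blockIdx.y * TILE + threadIdx.y;  // output row for this thread
    int col = blockIdx.x * TILE + threadIdx.x;  // output col for this thread
    float acc = 0.0f;

    // Sum-reduction across K, one TILE-wide chunk at a time.
    for (int k0 = 0; k0 < K; k0 += TILE) {
        As[threadIdx.y][threadIdx.x] =
            (row < M && k0 + threadIdx.x < K) ? A[row * K + k0 + threadIdx.x] : 0.0f;
        Bs[threadIdx.y][threadIdx.x] =
            (k0 + threadIdx.y < K && col < N) ? B[(k0 + threadIdx.y) * N + col] : 0.0f;
        __syncthreads();  // both tiles fully loaded before anyone reads them
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();  // all reads done before the tiles are overwritten
    }
    if (row < M && col < N)
        C[row * N + col] = acc;
}

// Launch: one thread-block per output tile, so the grid scales with the output.
//   dim3 block(TILE, TILE);
//   dim3 grid((N + TILE - 1) / TILE, (M + TILE - 1) / TILE);
//   tiled_matmul<<<grid, block>>>(A, B, C, M, N, K);
```

Note how the grid grows with the problem size here: that is exactly where the per-tile launch cost comes from.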

Persistent matmul changes this approach. Instead of launching thread-blocks to compute some output tiles, computing the results on SMs in parallel, and repeating until all output tiles are computed, you launch only as many thread-blocks as you have SMs available (typically 80-132 on modern GPUs). These thread-blocks stay alive until all output tiles are computed, looping through multiple tiles sequentially. Each persistent thread-block may handle multiple output tiles.
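
Under the same assumptions, a sketch of the persistent version: the grid is sized to the SM count, and each resident block walks output tiles in a grid-stride loop, so no new thread-blocks are scheduled after the first wave.

```cuda
// Persistent variant (sketch, same layout assumptions and TILE as above):
// launch only about numSMs thread-blocks; each stays resident and loops
// over output tiles instead of terminating after one.
#ifndef TILE
#define TILE 32
#endif

__global__ void persistent_matmul(const float* A, const float* B, float* C,
                                  int M, int N, int K) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int tilesX = (N + TILE - 1) / TILE;
    int tilesY = (M + TILE - 1) / TILE;
    int numTiles = tilesX * tilesY;

    // Grid-stride loop over output tiles: block b handles tiles b, b + gridDim.x, ...
    for (int t = blockIdx.x; t < numTiles; t += gridDim.x) {
        int row = (t / tilesX) * TILE + threadIdx.y;
        int col = (t % tilesX) * TILE + threadIdx.x;
        float acc = 0.0f;

        // Same per-tile computation as the naive kernel.
        for (int k0 = 0; k0 < K; k0 += TILE) {
            As[threadIdx.y][threadIdx.x] =
                (row < M && k0 + threadIdx.x < K) ? A[row * K + k0 + threadIdx.x] : 0.0f;
            Bs[threadIdx.y][threadIdx.x] =
                (k0 + threadIdx.y < K && col < N) ? B[(k0 + threadIdx.y) * N + col] : 0.0f;
            __syncthreads();
            for (int k = 0; k < TILE; ++k)
                acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
            __syncthreads();
        }
        if (row < M && col < N)
            C[row * N + col] = acc;
    }
}

// Launch: grid sized to the SM count instead of the problem size.
//   int numSMs;
//   cudaDeviceGetAttribute(&numSMs, cudaDevAttrMultiProcessorCount, /*device=*/0);
//   persistent_matmul<<<numSMs, dim3(TILE, TILE)>>>(A, B, C, M, N, K);
```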

The key benefit is reduced thread-block launch latency. This persistence strategy, combined with other optimizations like coalesced memory loads/stores, block-tiling, warp-tiling, warp-specialization, double-buffering, ping-pong scheduling, and other tricks, helps achieve peak performance. More on this in the future!

Code snippet for testing: https://gist.github.com/a-r-r-o-w/28339b442d164084506c0967029968a8

(Bonus: Since I've wanted to learn Manim for a while, this was a great opportunity to make a visualization for Naive VS Persistent matmul. Enjoy ✨)
AdinaY 
posted an update about 1 month ago
🔥 July highlights from Chinese AI community

zh-ai-community/july-2025-open-works-from-the-chinese-community-686586f1a8840797e477ae5a

✨ Another "DeepSeek moment" - Kimi K2 🙌

✨ Qwen goes fully matrixed - Instruct / Thinking / Coder models across 30B - 480B 🤯

✨ The multimodal wave🌊
- GLM-4.1V-Thinking: Image+Text > Text
- Intern-S1: Image+Text > Text
- Wan 2.2: Text+Image > Video
- Skywork-R1V3: Image+Text > Text
- Skywork-UniPic: Text > Image / Image > Text
- Tar-7B: Any-to-Any
- Ming-Lite-Omni-1.5: Any-to-Any
- Step3: Image+Text > Text
- HunyuanWorld-1: Image > 3D
- ThinkSound: Video > Audio
- Neta-Lumina: Text > Image

✨ Tiny & deployable models 🤏
- SmallThinker runs on 1GB RAM

✨ Agentic coding goes mainstream 💻
- Qwen3-Coder: fully spec'd tool calling
- GLM-4.5: browser agents, IDE assistant
- Qwen3 WebDev demo: text-to-frontend code

✨ Domain-Specific & Utility Models/Tools/Datasets
- Science one S1: Scientific model
- Agentar DeepFinance: Finance dataset
- ObjectClear: Interactive Vision Tool
- Qwen3 MT Demo: Machine Translation Tool

✨ Big month not only for models, but for policy too🏛️
- Announced Global Action Plan for AI Governance
- Proposes to set up a World AI Cooperation Organization in Shanghai
- Released International AI Open Source Collaboration Initiative
- Published Risk Assessment Guidelines for Endpoint AI Agents

✨ Big event - WAIC
- 355K offline visitors
- 108 new releases in 4 days
- 145 sessions across key domains

I’ve been tracking things closely, but July’s open-source wave still blew me away. Can’t wait to see what’s coming next! 🚀
AdinaY 
posted an update about 1 month ago
Qwen team did it again!!

They just released Qwen3-Coder-30B-A3B-Instruct on the hub🔥
Qwen/Qwen3-Coder-30B-A3B-Instruct

✨ Apache 2.0
✨ 30B total / 3.3B active (128 experts, top-k = 8)
✨ Native 256K context, extendable to 1M via YaRN
✨ Built for Agentic Coding
hlarcher 
posted an update about 1 month ago
GH200 cooking time 🧑‍🍳🔥!

We just updated GPU-fryer 🍳 to run on the Grace Hopper Superchip (GH200), fully optimized for ARM-based systems!
With this release, we switched to cuBLASLt to support running FP8 benchmarks. You can monitor GPU throttling, TFLOPS outliers, and HBM memory health, and ensure that you get the most out of your hardware setup.
Perfect for stress testing and tuning datacenter GPUs.

Check it out on Github 👉 https://github.com/huggingface/gpu-fryer
AdinaY 
posted an update about 1 month ago
It’s here! After the WAIC announcement, StepFun just dropped Step 3 🔥, their latest multimodal reasoning model, on the hub.

Paper: Step-3 is Large yet Affordable: Model-system Co-design for Cost-effective Decoding (2507.19427)
Model: stepfun-ai/step3

✨ 321B total / 32B active - Apache 2.0
✨ MFA + AFD: cutting decoding cost by up to 70% vs. DeepSeek-V3
✨ 4T image-text pretraining: strong vision–language grounding
✨ Modular, efficient, deployable: runs on just 8×48GB GPUs
jsulz 
posted an update about 1 month ago
We've crossed 1 million repositories backed by Xet storage on Hugging Face! 🚀🚀🚀

You can follow our progress converting the Hub from Git LFS to Xet at jsulz/ready-xet-go

We have a lot of repos left to migrate, which means I have plenty of time to add more animations 🤪
AdinaY 
posted an update about 1 month ago
Qwen3-30B-A3B-Thinking-2507 🔥 the latest step in scaling thinking capabilities from the Alibaba Qwen team.

Qwen/Qwen3-30B-A3B-Thinking-2507-FP8

✨ 30B total / 3B active - Apache 2.0
✨ Native 256K context
✨ SOTA coding, alignment, agentic reasoning