sometimesanotion PRO

AI & ML interests

Agentic LLM services, model merging, finetunes, distillation

Recent Activity

liked a model about 2 hours ago
prithivMLmods/Deep-Fake-Detector-Model
updated a model about 3 hours ago
sometimesanotion/Qwenvergence-14B-v11
updated a model about 6 hours ago
sometimesanotion/Qwenvergence-14B-v9

Organizations

Hugging Face Discord Community

Posts (1)

I've managed a #1 average score of 41.22% among 14B-parameter models on the Open LLM Leaderboard. As of this writing, sometimesanotion/Lamarck-14B-v0.7 is #8 among all models up to 70B parameters.

It took a custom toolchain built around Arcee AI's mergekit to manage the complex merges, gradients, and LoRAs required to make this happen. I really like seeing the features of many quality finetunes come together in one solid generalist model.
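For readers curious what a gradient-weighted merge step looks like, here is a minimal sketch of driving mergekit from Python with a two-model SLERP config. The model names, layer range, and interpolation values are illustrative assumptions, not the actual Lamarck recipe, and it assumes mergekit is installed (pip install mergekit).

# Minimal sketch: one gradient-weighted SLERP merge run through mergekit's CLI.
# Model names, layer range, and interpolation curves are illustrative
# placeholders, NOT the actual Lamarck-14B pipeline.
import subprocess
import textwrap

config_yaml = textwrap.dedent("""\
    # Hypothetical two-model SLERP; a real pipeline chains many steps like this.
    slices:
      - sources:
          - model: Qwen/Qwen2.5-14B-Instruct
            layer_range: [0, 48]
          - model: sometimesanotion/Qwenvergence-14B-v9
            layer_range: [0, 48]
    merge_method: slerp
    base_model: Qwen/Qwen2.5-14B-Instruct
    parameters:
      t:
        # Per-layer gradients: favor the base model in early layers for
        # attention, and the finetune in early layers for the MLPs.
        - filter: self_attn
          value: [0.0, 0.3, 0.5, 0.7, 1.0]
        - filter: mlp
          value: [1.0, 0.7, 0.5, 0.3, 0.0]
        - value: 0.5
    dtype: bfloat16
    """)

with open("merge-config.yaml", "w", encoding="utf-8") as fp:
    fp.write(config_yaml)

# mergekit-yaml is the standard entry point installed by the mergekit package.
subprocess.run(
    ["mergekit-yaml", "merge-config.yaml", "./merged-model", "--cuda"],
    check=True,
)

The gradient lists let different layer depths lean on different parent models, which is the kind of fine-grained control the post refers to; the full toolchain presumably composes several merge methods and LoRA extractions on top of steps like this one.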

Datasets

None public yet