sometimesanotion

AI & ML interests

Agentic LLM services, model merging, finetunes, distillation

Recent Activity

updated a model about 2 hours ago
sometimesanotion/Lamarck-14B-v0.7-Fusion
liked a model about 2 hours ago
mradermacher/Lamarck-14B-v0.7-Fusion-GGUF
liked a model about 2 hours ago
mistralai/Pixtral-12B-Base-2409

Organizations

Hugging Face Discord Community

sometimesanotion's activity

posted an update about 2 hours ago
I'd like to draw your attention to a Lamarck-based experiment which uses Arcee AI's newly published arcee_fusion merge method for three out of its four merges. Yes, just four. This is a simple one, and its recipe is fully open:

sometimesanotion/Lamarck-14B-v0.7-Fusion

It unifies three branches, each built from models that bring Lamarck-14B-v0.7 and Qwenvergence-14B-v12-Prose together. One side features @jpacifico's jpacifico/Chocolatine-2-14B-Instruct-v2.0.3, and the other features @suayptalha's suayptalha/Lamarckvergence-14B paired with my models that were their merge ancestors.

A fusion merge - of a fusion merge and a SLERP of a fusion and an older merge - should demonstrate the new merge method's behavior in interesting ways, especially in the first quarter of the model's layers, where the SLERP has less impact.
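For readers new to the SLERP half of that recipe, here is a minimal sketch of spherical linear interpolation applied to a pair of weight tensors. It is illustrative only - the function name, the flatten-everything approach, and the epsilon are my own simplifications, not mergekit's exact implementation:

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors,
    treating each tensor as a single high-dimensional vector."""
    a, b = v0.flatten().float(), v1.flatten().float()
    cos_theta = torch.dot(a, b) / (a.norm() * b.norm() + eps)
    theta = torch.acos(cos_theta.clamp(-1.0, 1.0))
    if theta.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1 - t) * v0 + t * v1
    s0 = torch.sin((1 - t) * theta) / torch.sin(theta)
    s1 = torch.sin(t * theta) / torch.sin(theta)
    return (s0 * a + s1 * b).reshape(v0.shape).to(v0.dtype)
```

In recipes like this one, the interpolation weight typically varies by layer, which is why the SLERP can matter less in the early layers.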

I welcome you to kick the tires and learn from it. It has prose quality near Qwenvergence v12's - as you'd expect.
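If you'd like to try it, a standard transformers loading snippet should work; the prompt and generation settings below are placeholders of mine, not recommendations from the model card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sometimesanotion/Lamarck-14B-v0.7-Fusion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a chat prompt and generate a short completion.
messages = [{"role": "user", "content": "Write a short paragraph about tidepools."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```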

Thank you, @mradermacher and @MaziyarPanahi, for the first-day quantizations! Your work helped get me started. https://huggingface.co/models?other=base_model:quantized:sometimesanotion/Lamarck-14B-v0.7-Fusion
replied to jjokah's post about 3 hours ago

Right-sizing language models is something I'm really here for. I find it more scalable to have a 1.5B-parameter model front simple questions from a backing RAG source that a larger model gradually works on. Classic information sources and stores can be QA'd, and they don't have such huge energy footprints.
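As a sketch of that pattern - the routing heuristic and every name here are mine, not a published design - a cheap check sends easy lookups to the small model and escalates the rest:

```python
def answer(query: str, retrieve, small_llm, large_llm) -> str:
    """Route a question to a right-sized model.

    retrieve:  callable returning relevant passages from the curated store.
    small_llm: callable wrapping a ~1.5B model; large_llm, a frontier model.
    """
    context = retrieve(query)
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    # Deliberately oversimplified heuristic: short questions with real
    # retrieval hits stay on the small model; everything else escalates.
    if context and len(query.split()) <= 20:
        return small_llm(prompt)
    return large_llm(prompt)
```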

AI will work out better if we give humans, classic code, SLMs, and frontier LLMs the roles they're right-sized for, and ensure data privacy and individual dignity at every stage of the contract.

reacted to jjokah's post with 👍 about 3 hours ago
The past few years have been a blast for artificial intelligence, with large language models (LLMs) stunning everyone with their capabilities and powering everything from chatbots to code assistants. However, not all applications demand the massive size and complexity of LLMs; the computational power they require makes them impractical for many use cases. This is why Small Language Models (SLMs) entered the scene: they make powerful AI more accessible by shrinking model size.

In this article we go through what SLMs are, how they are made small, their benefits and limitations, real-world use cases, and how they can run on mobile and desktop devices.
https://huggingface.co/blog/jjokah/small-language-model
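One common way models are made small enough for consumer hardware is post-training quantization. Below is a minimal sketch using transformers' bitsandbytes integration; the 1.5B model chosen is illustrative, not one named in the post:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit quantization cuts memory use roughly 4x versus fp16 weights,
# at a modest cost in output quality.
model_id = "Qwen/Qwen2.5-1.5B-Instruct"  # illustrative small model
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
```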
New activity in sometimesanotion/Lamarck-14B-v0.7 2 days ago

Excellent model!

#3 opened 18 days ago by nixudos