Friedrich Marty

Smorty100

AI & ML interests

I'm most interested in content rerouting between LLM and VLLM agents for automation possibilities. Using a template for each agent, which is then filled in by another agent's outputs, seems really useful.
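That rerouting idea can be sketched in a few lines: one agent's output is slotted into the next agent's prompt template. This is a minimal illustration only; the template strings, agent roles, and `call_model()` are hypothetical placeholders for whatever LLM client (local or hosted) is actually used.

```python
# Minimal sketch of template-based content rerouting between two agents.
# The templates and call_model() are hypothetical placeholders.

RESEARCH_TEMPLATE = "Summarize these findings for a planner agent:\n{findings}"
PLANNER_TEMPLATE = "Given this summary, list the next actions:\n{summary}"

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM call (e.g. a local GGUF model or an API).
    return f"<model output for: {prompt.splitlines()[0]}>"

def reroute(findings: str) -> str:
    # Agent A's output is slotted into agent B's template.
    summary = call_model(RESEARCH_TEMPLATE.format(findings=findings))
    return call_model(PLANNER_TEMPLATE.format(summary=summary))
```

The same pattern extends to longer chains: each agent only ever sees a template filled with the previous agent's output, never the raw upstream prompt.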

Recent Activity

liked a model about 24 hours ago
google/gemma-3-27b-it
reacted to pidou's post with 😎 3 days ago
testing post

Organizations

None yet

Smorty100's activity

reacted to pidou's post with 😎 3 days ago
testing post
New activity in Qwen/QwQ-32B-GGUF 6 days ago

Create params

#4 opened 7 days ago by jklj077
reacted to onekq's post with πŸ‘ 7 days ago
QwQ-32B is amazing!

It ranks below o1-preview, but beats DeepSeek v3 and all Gemini models.
onekq-ai/WebApp1K-models-leaderboard

Now that we have such a powerful model that fits on a single GPU, can someone finetune a web app model to push the SOTA of my leaderboard? πŸ€—
  • 1 reply
replied to onekq's post 7 days ago

to me it's a bit weird to see QwQ not get the hype it deserves.

it's a crazy good model, can be run LOCALLY on non-business-level GPUs, and actually performs super well even compared to the gigantic V3 model.

Even for smaller businesses, having a completely local and secure LLM solution like this MUST have some value, right?

like - huh? people should be doin backflips, like i am

[gif of a person backflipping]
*flip* *flip* *flip* *flip*

anyway, i go play more with... cloud-hosted free LLMs (codestral 25.01) which probably does collect and train on my data... *sigh*

replied to clem's post 7 days ago

i very much agree.
it really seems like many models just push for that initial completion, which in many cases can't even happen (like with lookup tools).
some models really do just... execute like 12 actions at a time to get them all in one block.

reacted to clem's post with πŸ‘β€οΈ 7 days ago
I was chatting with @peakji, one of the cofounders of Manu AI, who told me he was on Hugging Face (very cool!).

He shared an interesting insight which is that agentic capabilities might be more of an alignment problem rather than a foundational capability issue. Similar to the difference between GPT-3 and InstructGPT, some open-source foundation models are simply trained to 'answer everything in one response regardless of the complexity of the question' - after all, that's the user preference in chatbot use cases. Just a bit of post-training on agentic trajectories can make an immediate and dramatic difference.

As a thank you to the community, he shared 100 invite codes, first-come first-served; just use β€œHUGGINGFACE” to get access!
reacted to WENGSYX's post with πŸ˜” 11 days ago
πŸ”¬ Exciting Research Breakthrough! πŸš€
We've developed a new AI research assistant LLM, trained through RL, that can:
- Generate research ideas from reference literature
- Preview potential research methodologies
- Automatically draft research reports
- Transform experimental results directly into academic papers! πŸ“

See in -> WestlakeNLP/CycleResearcher-12B

Check out our free demo at http://ai-researcher.cn and experience the future of academic research workflows. 🌐

Proud to share that our work has been accepted as a Poster at ICLR 2025! πŸ† #AIResearch #AcademicInnovation #MachineLearning
reacted to nroggendorff's post with ❀️ 13 days ago
We're using RLHF on diffusion models, right? Just making sure..
reacted to singhsidhukuldeep's post with πŸ‘ 13 days ago
Exciting New Tool for Knowledge Graph Extraction from Plain Text!

I just came across a groundbreaking new tool called KGGen that's solving a major challenge in the AI world - the scarcity of high-quality knowledge graph data.

KGGen is an open-source Python package that leverages language models to extract knowledge graphs (KGs) from plain text. What makes it special is its innovative approach to clustering related entities, which significantly reduces sparsity in the extracted KGs.

The technical approach is fascinating:

1. KGGen uses a multi-stage process involving an LLM (GPT-4o in their implementation) to extract entities and relations from source text
2. It aggregates graphs across sources to reduce redundancy
3. Most importantly, it applies iterative LM-based clustering to refine the raw graph

The clustering stage is particularly innovative - it identifies which nodes and edges refer to the same underlying entities or concepts. This normalizes variations in tense, plurality, stemming, and capitalization (e.g., "labors" clustered with "labor").
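The normalization idea behind that clustering stage can be shown with a toy canonicalizer. Note this is an illustration only, not kg-gen's actual API: KGGen uses iterative LM-based clustering, while the crude lowercase-and-deplural rule below merely demonstrates the kind of surface variants ("labors" vs. "labor") that get merged. All names here are hypothetical.

```python
from collections import defaultdict

def canonical(entity: str) -> str:
    # Toy normalizer: lowercase and strip a trailing plural "s".
    # KGGen does this with LM-based clustering, not hard-coded rules.
    e = entity.lower()
    return e[:-1] if e.endswith("s") and len(e) > 3 else e

def cluster_entities(entities):
    # Group surface forms that share the same canonical key,
    # so duplicate nodes collapse into one cluster.
    clusters = defaultdict(set)
    for e in entities:
        clusters[canonical(e)].add(e)
    return dict(clusters)

clusters = cluster_entities(["Labor", "labors", "labor", "Labour"])
# "Labor", "labors", and "labor" fall into one cluster; "Labour"
# stays separate because this toy rule can't catch spelling variants.
```

An LM-based clusterer would also merge "Labour" into the same cluster, which is exactly the gap rule-based normalization leaves and the paper's approach addresses.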

The researchers from Stanford and University of Toronto also introduced MINE (Measure of Information in Nodes and Edges), the first benchmark for evaluating KG extractors. When tested against existing methods like OpenIE and GraphRAG, KGGen outperformed them by up to 18%.

For anyone working with knowledge graphs, RAG systems, or KG embeddings, this tool addresses the fundamental challenge of data scarcity that's been holding back progress in graph-based foundation models.

The package is available via pip install kg-gen, making it accessible to everyone. This could be a game-changer for knowledge graph applications!
reacted to Reality123b's post with πŸ˜” 27 days ago
https://huggingface.co/posts/Reality123b/533143502736808
Since many of you upvoted that post, I'm open-sourcing this on 19th February 2025.

I don't know, but this may be the "smartest AI on earth". i'm not totally sure.
also, i need some kind of help with the UI because i suck at that.