Hamza Shahid Malik

hamzashahid40

AI & ML interests

None yet

Recent Activity

liked a dataset about 13 hours ago
simplescaling/s1K
upvoted an article 11 days ago
NVIDIA Releases 6 Million Multi-Lingual Reasoning Dataset
reacted to MohamedRashad's post with ❤️ 6 months ago
I think we have released the best Arabic model under 25B, at least based on https://huggingface.co/spaces/inceptionai/AraGen-Leaderboard. Yehia = https://huggingface.co/ALLaM-AI/ALLaM-7B-Instruct-preview + GRPO, and it is ranked as the number-one model under the 25B parameter mark. Now, I said "I think", not "I am sure", because this model used the same evaluation metric the AraGen developers use (3C3H) as a reward model to improve its responses, and this raises a question: is this good for users, or is it another kind of overfitting we don't want? I don't know whether it is a good thing or a bad thing, but what I do know is that you can try it here: https://huggingface.co/spaces/Navid-AI/Yehia-7B-preview or download it for your personal experiments from here: https://huggingface.co/spaces/Navid-AI/Yehia-7B-preview. Ramadan Kareem 🌙
Organizations

Navid AI

hamzashahid40's models

None public yet