Nicolai Berk
nicoberk
8 followers · 24 following
https://nicolaiberk.com/
nicolaiberk
nicolaiberk
AI & ML interests
NLP, Political Communication, Media Effects
Recent Activity
liked a model about 6 hours ago: HuggingFaceTB/SmolLM2-1.7B-Instruct
reacted to MoritzLaurer's post with ❤️ 3 days ago:
Prompts are hyperparameters. Every time you test a different prompt on your data, you become less sure that the LLM actually generalizes to unseen data.

Overfitting to a test set may seem like a concern from the boring times when people still fine-tuned models, but it is just as important for "zero-shot" prompting. Using a separate validation split to tune the main hyperparameter of LLMs (the prompt) is just as important as train-val-test splitting for fine-tuning. The only difference is that you no longer have a training dataset, and it feels different because there is no training and no parameter updates. It's easy to trick yourself into believing that an LLM performs well on your task when you have actually overfit the prompt to your data.

Every good "zero-shot" paper should clarify that the authors used a validation split to find their prompt before final testing.
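The workflow the post describes can be sketched in a few lines. This is a minimal illustration, not code from the post: the splitting logic and the `toy_predict` stand-in (a keyword rule in place of a real LLM call) are hypothetical names introduced here for the example. The key point is that `select_prompt` only ever sees the validation split, and the test split is scored exactly once.

```python
# Treat the prompt as a hyperparameter: choose the best prompt on a
# validation split, then report performance once on a held-out test split.
import random


def split(data, val_frac=0.5, seed=0):
    """Shuffle labeled examples and split them into validation and test sets."""
    rng = random.Random(seed)
    data = data[:]
    rng.shuffle(data)
    cut = int(len(data) * val_frac)
    return data[:cut], data[cut:]


def accuracy(prompt, examples, predict):
    """Fraction of examples the model labels correctly under this prompt."""
    correct = sum(predict(prompt, text) == label for text, label in examples)
    return correct / len(examples)


def select_prompt(prompts, val_set, predict):
    """Tune the prompt on validation data only -- never on the test set."""
    return max(prompts, key=lambda p: accuracy(p, val_set, predict))


def toy_predict(prompt, text):
    """Stand-in for an LLM classifier; real code would call a model API here."""
    keyword = "crime" if "crime" in prompt else "migration"
    return keyword in text


data = [
    ("a report on crime rates", True),
    ("a story about migration", False),
    ("crime in the capital", True),
    ("local gardening tips", False),
]
val_set, test_set = split(data)

prompts = ["Does this text mention crime?", "Does this text mention migration?"]
best = select_prompt(prompts, val_set, toy_predict)
final_score = accuracy(best, test_set, toy_predict)  # reported once, at the end
```

Because the test split never influences which prompt wins, `final_score` is an honest estimate of generalization, which is exactly the guarantee the post argues "zero-shot" papers should provide.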
updated a Space about 1 month ago: PubPol/FrenchTutor
nicoberk's models (2)
nicoberk/GermanNewsCrime · Text Classification · 0.1B · Updated Jan 14 · 1
nicoberk/GermanNewsMigration · Text Classification · 0.1B · Updated Jan 14 · 4