Samuel Lima Braz

samuellimabraz

AI & ML interests

None yet

Organizations

Tech4Humans, Hugging Face Discord Community

samuellimabraz's activity

upvoted an article about 9 hours ago
Vision Language Models Explained • 254

upvoted an article 2 days ago
Open-source DeepResearch – Freeing our search agents • 702

upvoted an article 7 days ago
Mastering Tensor Dimensions in Transformers
By not-lain • 42

upvoted an article 8 days ago
KV Caching Explained: Optimizing Transformer Inference Efficiency
By not-lain • 23

upvoted 2 articles 11 days ago
FineWeb2-C: Help Build Better Language Models in Your Language
By davanstrien and 5 others • 18

New activity in tech4humans/signature-detection 14 days ago
posted an update 14 days ago

I wrote an article on Parameter-Efficient Fine-Tuning (PEFT), exploring techniques for efficiently fine-tuning LLMs, their implementations, and variations.

The study is based on the paper "Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning" and the PEFT library integrated with Hugging Face's Transformers.

Article: https://huggingface.co/blog/samuellimabraz/peft-methods
Notebook: https://colab.research.google.com/drive/1B9RsKLMa8SwTxLsxRT8g9OedK10zfBEP?usp=sharing
Collection: samuellimabraz/service-summary-6793ccfe774073328ea9f8df

Analyzed methods (a minimal LoRA sketch follows the list):
- Adapters: Soft Prompts (Prompt Tuning, Prefix Tuning, P-tuning), IA³.
- Reparameterization: LoRA, QLoRA, LoHa, LoKr, X-LoRA, Intrinsic SAID, and initialization variants (PiSSA, OLoRA, rsLoRA, DoRA).
- Selective Tuning: BitFit, DiffPruning, FAR, FishMask.
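
To make the Reparameterization family concrete, here is a minimal sketch of setting up LoRA with the PEFT library. The base model (facebook/opt-350m), target modules, and hyperparameters are illustrative choices for this sketch, not values from the article:

```python
# Minimal LoRA setup with PEFT (illustrative values, not from the article).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

# Hypothetical example model; OPT names its attention projections q_proj/v_proj.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# LoRA reparameterizes each weight update as a low-rank product B @ A,
# so only r * (d_in + d_out) extra parameters are trained per adapted layer.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the update matrices
    lora_alpha=16,                        # scaling factor (alpha / r scales the update)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)

# Wrap the base model; only the LoRA matrices remain trainable.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the trainable fraction of parameters
```

A nice property of the library is that swapping LoraConfig for, e.g., PromptTuningConfig or IA3Config switches to another family from the list without changing the surrounding training loop.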

I'm just starting out in generative AI; I have more experience with computer vision and robotics. Just sharing here 🤗