awacke1 committed
Commit e97884e · verified · 1 Parent(s): 183fff9

Update README.md

Files changed (1)
  1. README.md +10 -22
README.md CHANGED
@@ -11,28 +11,16 @@ license: mit
  short_description: Torch and Transformers Demonstration - SFT NLP and CV ML
  ---

- LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners (Arxiv Link)
- Composable Sparse Fine-Tuning for Cross-Lingual Transfer (Arxiv Link)
- Efficient Fine-Tuning of Compressed Language Models with Learners (Arxiv Link)
- Task Adaptive Parameter Sharing for Multi-Task Learning (Arxiv Link)
- RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture (Arxiv Link)
- Scaling Sparse Fine-Tuning to Large Language Models (Arxiv Link)
- Exploring and Evaluating Personalized Models for Code Generation (Arxiv Link)
- UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory (Arxiv Link)
- Weaver: Foundation Models for Creative Writing (Arxiv Link)
- PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models (Arxiv Link)
- AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (Arxiv Link)
- ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization (Arxiv Link)
- Bit Cipher -- A Simple yet Powerful Word Representation System that Integrates Efficiently with Language Models (Arxiv Link)
- ConES: Concept Embedding Search for Parameter Efficient Tuning Large Vision Language Models (Arxiv Link)
- LeTI: Learning to Generate from Textual Interactions (Arxiv Link)
- Polyhistor: Parameter-Efficient Multi-Task Adaptation for Dense Vision Tasks (Arxiv Link)
- DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models (Arxiv Link)
- SPT: Semi-Parametric Prompt Tuning for Multitask Prompted Learning (Arxiv Link)
- HyperTuning: Toward Adapting Large Language Models without Back-propagation (Arxiv Link)
-
- With torch, transformers, and specialized fine-tuning of small models, we can build to the specification of an input dataset and easily create RAG agents with fine-tuned models using duckduckgo and smolagents. Show state-of-the-art SFT for agentic RAG to help manage models and gain ROI.
+ Deep Research Evaluator:
+ https://huggingface.co/spaces/awacke1/DeepResearchEvaluator
+
+
+ With torch, transformers, and specialized fine-tuning of small models, we can:
+ 1. Build to the specification of an input dataset,
+ 2. Easily create RAG agents with fine-tuned models using duckduckgo and smolagents, and
+ 3. Show state-of-the-art SFT for agentic RAG to help manage models and gain ROI.
+
+
 
  # Detailed Research Paper Summary
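
The added README text describes an agentic-RAG workflow built from duckduckgo search and smolagents. As a minimal sketch of step 2, assuming the Hugging Face smolagents library (its `CodeAgent`, `DuckDuckGoSearchTool`, and `HfApiModel` classes) with an illustrative Hub model id rather than anything this Space actually pins:

```python
# Minimal agentic-RAG sketch: a smolagents CodeAgent that retrieves
# context with DuckDuckGo web search before answering.
# Assumes: pip install smolagents duckduckgo-search
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

# Hosted model that drives the agent's planning (illustrative model id).
model = HfApiModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct")

# The agent writes short Python plans and calls the search tool as needed.
agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)

# run() performs the search, reads results, and composes a grounded answer.
print(agent.run("Summarize recent parameter-efficient fine-tuning methods."))
```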
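For steps 1 and 3, a hedged sketch of supervised fine-tuning (SFT) a small model to an input dataset's specification, assuming TRL's `SFTTrainer`; the dataset and base-model ids below are placeholders, not this repo's actual configuration:

```python
# SFT sketch: specialize a small base model on an instruction dataset
# with TRL, so the tuned checkpoint can back the RAG agent above.
# Assumes: pip install trl datasets transformers
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Any instruction/conversation dataset in a TRL-supported format works.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",  # illustrative small base model
    train_dataset=dataset,
    args=SFTConfig(output_dir="sft-out", max_steps=100),
)
trainer.train()
```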