---
title: TorchTransformers NLP CV SFT
emoji: πŸš€
colorFrom: red
colorTo: gray
sdk: streamlit
sdk_version: 1.43.1
app_file: app.py
pinned: false
license: mit
short_description: Torch and Transformers Demonstration - SFT NLP and CV ML
---

Deep Research Evaluator: https://huggingface.co/spaces/awacke1/DeepResearchEvaluator

With torch, transformers, and specialized fine-tuning of small models, we can:

  1. Build to the specification of an input dataset,
  2. Easily create RAG agents with fine-tuned models using duckduckgo and smolagents (see the sketch below), and
  3. Show state-of-the-art SFT for agentic RAG to help manage models and gain ROI.
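
As a starting point for item 2, here is a minimal sketch of a search-backed agent built with smolagents and a DuckDuckGo tool. The model ID is a placeholder; swap in your own fine-tuned checkpoint.

```python
# Minimal sketch: a DuckDuckGo-backed agent via smolagents.
# "your-org/your-finetuned-model" is a placeholder for a fine-tuned checkpoint.
from smolagents import CodeAgent, DuckDuckGoSearchTool, TransformersModel

model = TransformersModel(model_id="your-org/your-finetuned-model")
agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)

print(agent.run("Summarize recent work on parameter-efficient fine-tuning."))
```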

Detailed Research Paper Summary

πŸ“„ LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners

Authors: Yaqing Wang, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao
Date: 18 May 2022
Word Count (Title): 8 | Word Count (Summary): 219

Links: Abstract | PDF

High Info Terms: list, is, self-training, fine-tuning, parameters, we, few-shot, learning, over, that, prompt-based, fn, use, as, model
ROUGE Score: 6.85%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["list"] --> T2["is"]
    T2["is"] --> T3["self-training"]
    T3["self-training"] --> T4["fine-tuning"]
    T4["fine-tuning"] --> T5["parameters"]
    T5["parameters"] --> T6["we"]
    T6["we"] --> T7["few-shot"]
    T7["few-shot"] --> T8["learning"]
    T8["learning"] --> T9["over"]
    T9["over"] --> T10["that"]
    T10["that"] --> T11["prompt-based"]
    T11["prompt-based"] --> T12["fn"]
    T12["fn"] --> T13["use"]
    T13["use"] --> T14["as"]
    T14["as"] --> T15["model"]

πŸ“„ Composable Sparse Fine-Tuning for Cross-Lingual Transfer

Authors: Alan Ansell, Edoardo Maria Ponti, Anna Korhonen, Ivan Vulić
Date: 09 Feb 2023
Word Count (Title): 6 | Word Count (Summary): 218

Links: Abstract | PDF

High Info Terms: fine-tuning, model, adapters, language, we, masks, sparse, be, both, in a, parameters, large, pretrained, transfer, prevent
ROUGE Score: 6.88%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["fine-tuning"] --> T2["model"]
    T2["model"] --> T3["adapters"]
    T3["adapters"] --> T4["language"]
    T4["language"] --> T5["we"]
    T5["we"] --> T6["masks"]
    T6["masks"] --> T7["sparse"]
    T7["sparse"] --> T8["be"]
    T8["be"] --> T9["both"]
    T9["both"] --> T10["in a"]
    T10["in a"] --> T11["parameters"]
    T11["parameters"] --> T12["large"]
    T12["large"] --> T13["pretrained"]
    T13["pretrained"] --> T14["transfer"]
    T14["transfer"] --> T15["prevent"]

πŸ“„ Efficient Fine-Tuning of Compressed Language Models with Learners

Authors: Danilo Vucetic, Mohammadreza Tayaranian, Maryam Ziaeefard, James J. Clark, Brett H. Meyer, Warren J. Gross
Date: 03 Aug 2022
Word Count (Title): 8 | Word Count (Summary): 131

Links: Abstract | PDF

High Info Terms: fine-tuning, training, learners, models, works, learner, modules, methods, that, convergence, resource, utilization, by, parameters, learner modules
ROUGE Score: 11.45%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["fine-tuning"] --> T2["training"]
    T2["training"] --> T3["learners"]
    T3["learners"] --> T4["models"]
    T4["models"] --> T5["works"]
    T5["works"] --> T6["learner"]
    T6["learner"] --> T7["modules"]
    T7["modules"] --> T8["methods"]
    T8["methods"] --> T9["that"]
    T9["that"] --> T10["convergence"]
    T10["convergence"] --> T11["resource"]
    T11["resource"] --> T12["utilization"]
    T12["utilization"] --> T13["by"]
    T13["by"] --> T14["parameters"]
    T14["parameters"] --> T15["learner modules"]

πŸ“„ Task Adaptive Parameter Sharing for Multi-Task Learning

Authors: Matthew Wallingford, Hao Li, Alessandro Achille, Avinash Ravichandran, Charless Fowlkes, Rahul Bhotika, Stefano Soatto
Date: 30 Mar 2022
Word Count (Title): 7 | Word Count (Summary): 183

Links: Abstract | PDF

High Info Terms: tasks, taps, model, downstream, task, base, task-specific, layers, while, downstream tasks, base model, models, learning, fine-tuning, is
ROUGE Score: 8.2%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["tasks"] --> T2["taps"]
    T2["taps"] --> T3["model"]
    T3["model"] --> T4["downstream"]
    T4["downstream"] --> T5["task"]
    T5["task"] --> T6["base"]
    T6["base"] --> T7["task-specific"]
    T7["task-specific"] --> T8["layers"]
    T8["layers"] --> T9["while"]
    T9["while"] --> T10["downstream tasks"]
    T10["downstream tasks"] --> T11["base model"]
    T11["base model"] --> T12["models"]
    T12["models"] --> T13["learning"]
    T13["learning"] --> T14["fine-tuning"]
    T14["fine-tuning"] --> T15["is"]

πŸ“„ RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture

Authors: Angels Balaguer, Vinamra Benara, Renato Luiz de Freitas Cunha, Roberto de M. Estevão Filho, Todd Hendry, Daniel Holstein, Jennifer Marsman, Nick Mecklenburg, Sara Malvar, Leonardo O. Nunes, Rafael Padilha, Morris Sharp, Bruno Silva, Swati Sharma, Vijay Aski, Ranveer Chandra
Date: 30 Jan 2024
Word Count (Title): 11 | Word Count (Summary): 281

Links: Abstract | PDF

High Info Terms: fine-tuning, we, rag, llms, pipeline, p, rag and, are, knowledge, model, our, from, results, and fine-tuning, which
ROUGE Score: 5.34%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["fine-tuning"] --> T2["we"]
    T2["we"] --> T3["rag"]
    T3["rag"] --> T4["llms"]
    T4["llms"] --> T5["pipeline"]
    T5["pipeline"] --> T6["p"]
    T6["p"] --> T7["rag and"]
    T7["rag and"] --> T8["are"]
    T8["are"] --> T9["knowledge"]
    T9["knowledge"] --> T10["model"]
    T10["model"] --> T11["our"]
    T11["our"] --> T12["from"]
    T12["from"] --> T13["results"]
    T13["results"] --> T14["and fine-tuning"]
    T14["and fine-tuning"] --> T15["which"]

πŸ“„ Scaling Sparse Fine-Tuning to Large Language Models

Authors: Alan Ansell, Ivan Vulić, Hannah Sterz, Anna Korhonen, Edoardo M. Ponti
Date: 02 Feb 2024
Word Count (Title): 7 | Word Count (Summary): 219

Links: Abstract | PDF

High Info Terms: we, their, llms, fine-tuning, spiel, parameters, sparse, terms, indices, deltas, sparse fine-tuning, in terms, terms of, parameter-efficient, methods
ROUGE Score: 6.85%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["we"] --> T2["their"]
    T2["their"] --> T3["llms"]
    T3["llms"] --> T4["fine-tuning"]
    T4["fine-tuning"] --> T5["spiel"]
    T5["spiel"] --> T6["parameters"]
    T6["parameters"] --> T7["sparse"]
    T7["sparse"] --> T8["terms"]
    T8["terms"] --> T9["indices"]
    T9["indices"] --> T10["deltas"]
    T10["deltas"] --> T11["sparse fine-tuning"]
    T11["sparse fine-tuning"] --> T12["in terms"]
    T12["in terms"] --> T13["terms of"]
    T13["terms of"] --> T14["parameter-efficient"]
    T14["parameter-efficient"] --> T15["methods"]

πŸ“„ Exploring and Evaluating Personalized Models for Code Generation

Authors: Andrei Zlotchevski, Dawn Drain, Alexey Svyatkovskiy, Colin Clement, Neel Sundaresan, Michele Tufano
Date: 20 Sep 2022
Word Count (Title): 8 | Word Count (Summary): 226

Links: Abstract | PDF

High Info Terms: model, fine-tuning, we, which, are, code, evaluate, parameters, large, transformer, modeling, learning, token, generalization, personalization
ROUGE Score: 6.64%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["model"] --> T2["fine-tuning"]
    T2["fine-tuning"] --> T3["we"]
    T3["we"] --> T4["which"]
    T4["which"] --> T5["are"]
    T5["are"] --> T6["code"]
    T6["code"] --> T7["evaluate"]
    T7["evaluate"] --> T8["parameters"]
    T8["parameters"] --> T9["large"]
    T9["large"] --> T10["transformer"]
    T10["transformer"] --> T11["modeling"]
    T11["modeling"] --> T12["learning"]
    T12["learning"] --> T13["token"]
    T13["token"] --> T14["generalization"]
    T14["generalization"] --> T15["personalization"]

πŸ“„ UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory

Authors: Haiwen Diao, Bo Wan, Ying Zhang, Xu Jia, Huchuan Lu, Long Chen
Date: 28 Aug 2023
Word Count (Title): 12 | Word Count (Summary): 225

Links: Abstract | PDF

High Info Terms: petl, unipt, pre-trained, methods, we, parallel, that, petl methods, achieve, performance, tasks, parameters, networks, is, transfer
ROUGE Score: 6.67%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["petl"] --> T2["unipt"]
    T2["unipt"] --> T3["pre-trained"]
    T3["pre-trained"] --> T4["methods"]
    T4["methods"] --> T5["we"]
    T5["we"] --> T6["parallel"]
    T6["parallel"] --> T7["that"]
    T7["that"] --> T8["petl methods"]
    T8["petl methods"] --> T9["achieve"]
    T9["achieve"] --> T10["performance"]
    T10["performance"] --> T11["tasks"]
    T11["tasks"] --> T12["parameters"]
    T12["parameters"] --> T13["networks"]
    T13["networks"] --> T14["is"]
    T14["is"] --> T15["transfer"]

πŸ“„ Weaver: Foundation Models for Creative Writing

Authors: Tiannan Wang, Jiamin Chen, Qingrui Jia, Shuai Wang, Ruoyu Fang, Huilin Wang, Zhaowei Gao, Chunzhao Xie, Chuou Xu, Jihong Dai, Yibin Liu, Jialong Wu, Shengwei Ding, Long Li, Zhiwei Huang, Xinle Deng, Teng Yu, Gangan Ma, Han Xiao, Zixin Chen, Danjun Xiang, Yunxia Wang, Yuanyuan Zhu, Yi Xiao, Jing Wang, Yiru Wang, Siran Ding, Jiayang Huang, Jiayi Xu, Yilihamu Tayier, Zhenyu Hu, Yuan Gao, Chengfeng Zheng, Yueshu Ye, Yihang Li, Lei Wan, Xinyue Jiang, Yujie Wang, Siyu Cheng, Zhule Song, Xiangru Tang, Xiaohua Xu, Ningyu Zhang, Huajun Chen, Yuchen Eleanor Jiang, and Wangchunshu Zhou
Date: 30 Jan 2024
Word Count (Title): 6 | Word Count (Summary): 237

Links: Abstract | PDF

High Info Terms: weaver, writing, llms, models, we, our, family, large, language, content, creation, carefully, improving, capabilities, professional
ROUGE Score: 6.33%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["weaver"] --> T2["writing"]
    T2["writing"] --> T3["llms"]
    T3["llms"] --> T4["models"]
    T4["models"] --> T5["we"]
    T5["we"] --> T6["our"]
    T6["our"] --> T7["family"]
    T7["family"] --> T8["large"]
    T8["large"] --> T9["language"]
    T9["language"] --> T10["content"]
    T10["content"] --> T11["creation"]
    T11["creation"] --> T12["carefully"]
    T12["carefully"] --> T13["improving"]
    T13["improving"] --> T14["capabilities"]
    T14["capabilities"] --> T15["professional"]

πŸ“„ PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models

Authors: Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Marzieh Saeidi, Lambert Mathias, Veselin Stoyanov, and Majid Yazdani
Date: 26 Apr 2022
Word Count (Title): 9 | Word Count (Summary): 184

Links: Abstract | PDF

High Info Terms: few-shot, fine-tuning, that, perfect, we, which, methods, plms, engineered, prompts, verbalizers, new, task, can, simple
ROUGE Score: 8.15%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["few-shot"] --> T2["fine-tuning"]
    T2["fine-tuning"] --> T3["that"]
    T3["that"] --> T4["perfect"]
    T4["perfect"] --> T5["we"]
    T5["we"] --> T6["which"]
    T6["which"] --> T7["methods"]
    T7["methods"] --> T8["plms"]
    T8["plms"] --> T9["engineered"]
    T9["engineered"] --> T10["prompts"]
    T10["prompts"] --> T11["verbalizers"]
    T11["verbalizers"] --> T12["new"]
    T12["new"] --> T13["task"]
    T13["task"] --> T14["can"]
    T14["can"] --> T15["simple"]

πŸ“„ AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning

Authors: Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao
Date: 02 Nov 2022
Word Count (Title): 6 | Word Count (Summary): 191

Links: Abstract | PDF

High Info Terms: fine-tuning, peft, plm, adamix, tasks, parameters, we, method, that, mixture, the plm, peft method, a mixture, mixture of, large
ROUGE Score: 7.85%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["fine-tuning"] --> T2["peft"]
    T2["peft"] --> T3["plm"]
    T3["plm"] --> T4["adamix"]
    T4["adamix"] --> T5["tasks"]
    T5["tasks"] --> T6["parameters"]
    T6["parameters"] --> T7["we"]
    T7["we"] --> T8["method"]
    T8["method"] --> T9["that"]
    T9["that"] --> T10["mixture"]
    T10["mixture"] --> T11["the plm"]
    T11["the plm"] --> T12["peft method"]
    T12["peft method"] --> T13["a mixture"]
    T13["a mixture"] --> T14["mixture of"]
    T14["mixture of"] --> T15["large"]

πŸ“„ ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization

Authors: Prateek Yadav, Leshem Choshen, Colin Raffel, Mohit Bansal
Date: 22 Nov 2023
Word Count (Title): 11 | Word Count (Summary): 247

Links: Abstract | PDF

High Info Terms: compeft, models, peft, we, expert, that, expert models, it, model, generalization, by, size, performance, show, we show
ROUGE Score: 6.07%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["compeft"] --> T2["models"]
    T2["models"] --> T3["peft"]
    T3["peft"] --> T4["we"]
    T4["we"] --> T5["expert"]
    T5["expert"] --> T6["that"]
    T6["that"] --> T7["expert models"]
    T7["expert models"] --> T8["it"]
    T8["it"] --> T9["model"]
    T9["model"] --> T10["generalization"]
    T10["generalization"] --> T11["by"]
    T11["by"] --> T12["size"]
    T12["size"] --> T13["performance"]
    T13["performance"] --> T14["show"]
    T14["show"] --> T15["we show"]

πŸ“„ Bit Cipher -- A Simple yet Powerful Word Representation System that Integrates Efficiently with Language Models

Authors: Haoran Zhao and Jake Ryland Williams
Date: 18 Nov 2023
Word Count (Title): 16 | Word Count (Summary): 237

Links: Abstract | PDF

High Info Terms: bit-cipher, while, word, that, we, embeddings, efficiency, experiments, training, classic, from, convergence, glove, word2vec, process
ROUGE Score: 6.33%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["bit-cipher"] --> T2["while"]
    T2["while"] --> T3["word"]
    T3["word"] --> T4["that"]
    T4["that"] --> T5["we"]
    T5["we"] --> T6["embeddings"]
    T6["embeddings"] --> T7["efficiency"]
    T7["efficiency"] --> T8["experiments"]
    T8["experiments"] --> T9["training"]
    T9["training"] --> T10["classic"]
    T10["classic"] --> T11["from"]
    T11["from"] --> T12["convergence"]
    T12["convergence"] --> T13["glove"]
    T13["glove"] --> T14["word2vec"]
    T14["word2vec"] --> T15["process"]

πŸ“„ ConES: Concept Embedding Search for Parameter Efficient Tuning Large Vision Language Models

Authors: Huahui Yi, Ziyuan Qin, Wei Xu, Miaotian Guo, Kun Wang, Shaoting Zhang, Kang Li, Qicheng Lao
Date: 30 May 2023
Word Count (Title): 12 | Word Count (Summary): 275

Links: Abstract | PDF

High Info Terms: prompt, tuning, text, encoder, text encoder, methods, embeddings, approach, our, the text, can, by, is, we, as
ROUGE Score: 5.45%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["prompt"] --> T2["tuning"]
    T2["tuning"] --> T3["text"]
    T3["text"] --> T4["encoder"]
    T4["encoder"] --> T5["text encoder"]
    T5["text encoder"] --> T6["methods"]
    T6["methods"] --> T7["embeddings"]
    T7["embeddings"] --> T8["approach"]
    T8["approach"] --> T9["our"]
    T9["our"] --> T10["the text"]
    T10["the text"] --> T11["can"]
    T11["can"] --> T12["by"]
    T12["by"] --> T13["is"]
    T13["is"] --> T14["we"]
    T14["we"] --> T15["as"]

πŸ“„ LeTI: Learning to Generate from Textual Interactions

Authors: Xingyao Wang, Hao Peng, Reyhaneh Jabbarvand, Heng Ji
Date: 17 May 2023
Word Count (Title): 7 | Word Count (Summary): 279

Links: Abstract | PDF

High Info Terms: feedback, leti, textual, code, language, lms, that, generation, natural, performance, textual feedback, outputs, from, we, binary
ROUGE Score: 5.38%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["feedback"] --> T2["leti"]
    T2["leti"] --> T3["textual"]
    T3["textual"] --> T4["code"]
    T4["code"] --> T5["language"]
    T5["language"] --> T6["lms"]
    T6["lms"] --> T7["that"]
    T7["that"] --> T8["generation"]
    T8["generation"] --> T9["natural"]
    T9["natural"] --> T10["performance"]
    T10["performance"] --> T11["textual feedback"]
    T11["textual feedback"] --> T12["outputs"]
    T12["outputs"] --> T13["from"]
    T13["from"] --> T14["we"]
    T14["we"] --> T15["binary"]

πŸ“„ Polyhistor: Parameter-Efficient Multi-Task Adaptation for Dense Vision Tasks

Authors: Yen-Cheng Liu, Chih-Yao Ma, Junjiao Tian, Zijian He, Zsolt Kira
Date: 07 Oct 2022
Word Count (Title): 8 | Word Count (Summary): 207

Links: Abstract | PDF

High Info Terms: tasks, methods, vision, fine-tuning, parameter-efficient, different, parameters, existing, vision tasks, while, transformers, this, trainable, different tasks, tasks with
ROUGE Score: 7.25%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["tasks"] --> T2["methods"]
    T2["methods"] --> T3["vision"]
    T3["vision"] --> T4["fine-tuning"]
    T4["fine-tuning"] --> T5["parameter-efficient"]
    T5["parameter-efficient"] --> T6["different"]
    T6["different"] --> T7["parameters"]
    T7["parameters"] --> T8["existing"]
    T8["existing"] --> T9["vision tasks"]
    T9["vision tasks"] --> T10["while"]
    T10["while"] --> T11["transformers"]
    T11["transformers"] --> T12["this"]
    T12["this"] --> T13["trainable"]
    T13["trainable"] --> T14["different tasks"]
    T14["different tasks"] --> T15["tasks with"]

πŸ“„ DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models

Authors: Xuxi Chen, Tianlong Chen, Weizhu Chen, Ahmed Hassan Awadallah, Zhangyang Wang, Yu Cheng
Date: 24 May 2023
Word Count (Title): 9 | Word Count (Summary): 239

Links: Abstract | PDF

High Info Terms: by, pre-trained, models, fine-tuning, as, two, fine-tuned, model, dsee, language, starting, point, towards, downstream, pain
ROUGE Score: 6.28%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["by"] --> T2["pre-trained"]
    T2["pre-trained"] --> T3["models"]
    T3["models"] --> T4["fine-tuning"]
    T4["fine-tuning"] --> T5["as"]
    T5["as"] --> T6["two"]
    T6["two"] --> T7["fine-tuned"]
    T7["fine-tuned"] --> T8["model"]
    T8["model"] --> T9["dsee"]
    T9["dsee"] --> T10["language"]
    T10["language"] --> T11["starting"]
    T11["starting"] --> T12["point"]
    T12["point"] --> T13["towards"]
    T13["towards"] --> T14["downstream"]
    T14["downstream"] --> T15["pain"]

πŸ“„ SPT: Semi-Parametric Prompt Tuning for Multitask Prompted Learning

Authors: M Saiful Bari, Aston Zhang, Shuai Zheng, Xingjian Shi, Yi Zhu, Shafiq Joty, Mu Li
Date: 21 Dec 2022
Word Count (Title): 8 | Word Count (Summary): 147

Links: Abstract | PDF

High Info Terms: spt, fine-tuning, prompts, generalization, prompt, tuning, datasets, prompt tuning, language, can, multitask, prompted, learning, tasks, methods
ROUGE Score: 10.2%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["spt"] --> T2["fine-tuning"]
    T2["fine-tuning"] --> T3["prompts"]
    T3["prompts"] --> T4["generalization"]
    T4["generalization"] --> T5["prompt"]
    T5["prompt"] --> T6["tuning"]
    T6["tuning"] --> T7["datasets"]
    T7["datasets"] --> T8["prompt tuning"]
    T8["prompt tuning"] --> T9["language"]
    T9["language"] --> T10["can"]
    T10["can"] --> T11["multitask"]
    T11["multitask"] --> T12["prompted"]
    T12["prompted"] --> T13["learning"]
    T13["learning"] --> T14["tasks"]
    T14["tasks"] --> T15["methods"]

πŸ“„ HyperTuning: Toward Adapting Large Language Models without Back-propagation

Authors: Jason Phang, Yi Mao, Pengcheng He, Weizhu Chen
Date: 22 Nov 2022
Word Count (Title): 8 | Word Count (Summary): 164

Links: Abstract | PDF

High Info Terms: that, parameters, we, language, fine-tuning, large, tasks, can, hypertuning, model, hypermodel, generate, hypert5, parameters for, models
ROUGE Score: 9.15%

🎀 TTS Read Aloud

Mermaid Graph of Key Concepts

```mermaid
flowchart TD
    T1["that"] --> T2["parameters"]
    T2["parameters"] --> T3["we"]
    T3["we"] --> T4["language"]
    T4["language"] --> T5["fine-tuning"]
    T5["fine-tuning"] --> T6["large"]
    T6["large"] --> T7["tasks"]
    T7["tasks"] --> T8["can"]
    T8["can"] --> T9["hypertuning"]
    T9["hypertuning"] --> T10["model"]
    T10["model"] --> T11["hypermodel"]
    T11["hypermodel"] --> T12["generate"]
    T12["generate"] --> T13["hypert5"]
    T13["hypert5"] --> T14["parameters for"]
    T14["parameters for"] --> T15["models"]