---
datasets:
- jaeyeol816/ai_lecture
language:
- en
base_model:
- google/gemma-2-2b
pipeline_tag: question-answering
---
# Model Card for AI-Instructor

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->



- **Developed by:** Jaeyeol Choi, Yuchan Jung
- **Model type:** Causal language model (LLM)
- **Language(s) (NLP):** English
- **Finetuned from model [optional]:** https://huggingface.co/google/gemma-2-2b

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/YuchanJung/AI-Instructor

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

The model can be used as a question-answering assistant for students studying machine learning and deep learning, particularly the topics covered in Andrew Ng's Deep Learning course.


### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should cross-check the model's answers against the original lecture material or other authoritative sources, as a small fine-tuned model can produce inaccurate or incomplete explanations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
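Until an official example is published, the sketch below shows one way to query the model with the 🤗 `transformers` library. `MODEL_ID` is a placeholder (the card does not state the published repository id), and the `Question:`/`Answer:` prompt format is an assumption based on the project's Q&A framing — adjust both to match the actual release.

```python
MODEL_ID = "google/gemma-2-2b"  # placeholder: replace with this fine-tune's repo id


def build_prompt(question: str) -> str:
    """Wrap a student's question in a simple Q&A prompt (assumed format)."""
    return f"Question: {question}\nAnswer:"


def answer(question: str, max_new_tokens: int = 128) -> str:
    """Generate an answer. Heavy imports stay inside the function so the
    sketch itself has no hard dependency until it is actually called."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(question), return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )


# Example (downloads the weights on first use):
# print(answer("What is gradient descent?"))
```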

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

https://huggingface.co/datasets/jaeyeol816/ai_lecture
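The dataset can be loaded with the 🤗 `datasets` library. The record-to-text formatting below is hypothetical — the card does not document the dataset's column names, so the assumed `question`/`answer` fields should be checked against the dataset card.

```python
def qa_to_text(example: dict) -> dict:
    """Flatten one Q&A record into a single training string.
    Assumes 'question' and 'answer' columns, which the dataset card
    does not confirm -- adjust to the real schema."""
    return {"text": f"Question: {example['question']}\nAnswer: {example['answer']}"}


def load_training_text():
    """Load the dataset and map each record to training text.
    Import is local so the sketch has no hard dependency until called."""
    from datasets import load_dataset

    ds = load_dataset("jaeyeol816/ai_lecture", split="train")
    return ds.map(qa_to_text)
```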

### Training Procedure

https://github.com/YuchanJung/AI-Instructor?tab=readme-ov-file#model-training

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->


#### Training Hyperparameters

- **Training regime:** Basic <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->




### Results

https://github.com/YuchanJung/AI-Instructor?tab=readme-ov-file#results

#### Summary
This project builds AI Instructor, a Q&A bot, from transcripts of Andrew Ng's Deep Learning course. It was created for junior participants of the Google ML Bootcamp as an interactive tool for deepening their understanding of key machine learning concepts. The model was produced by fine-tuning Gemma-2-2b on a custom Q&A dataset generated from the lecture content.




## Model Card Contact

[More Information Needed]