Deeokay committed
Commit f5be8e3 · verified · 1 parent: a2dfdb0

Update README.md

Files changed (1):
  1. README.md (+8 −167)
README.md CHANGED

@@ -10,14 +10,19 @@ Just a model using to learn Fine Tuning of 'gpt2-medium'
  - on a self made special tokens
  - on a multiple fine tuned with ~15K dataset (in progress mode)
 
- I would consider this [GPT2-medium-custom-v1.0](https://huggingface.co/Deeokay/GPT2-medium-custom-v1.0) a the base model to start my Fine Tuning 2.0 on specific Datasets.
- - Previous models of this: gpt-special-tokens-medium(1~4) are consider beta check-points to this
-
  If interested in how I got to this point and how I created the datasets you can visit:
  [Crafting GPT2 for Personalized AI-Preparing Data the Long Way](https://medium.com/@deeokay/the-soul-in-the-machine-crafting-gpt2-for-personalized-ai-9d38be3f635f)
  <!-- Provide a quick summary of what the model is/does. -->
 
  ## DECLARING NEW SPECIAL TOKENS
 
  ```python
@@ -264,167 +269,3 @@ This is the model card of a 🤗 transformers model that has been pushed on the
  - **Paper [optional]:** [More Information Needed]
  - **Demo [optional]:** [More Information Needed]
 
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
 
  - on a self made special tokens
  - on a multiple fine tuned with ~15K dataset (in progress mode)
 
  If interested in how I got to this point and how I created the datasets you can visit:
  [Crafting GPT2 for Personalized AI-Preparing Data the Long Way](https://medium.com/@deeokay/the-soul-in-the-machine-crafting-gpt2-for-personalized-ai-9d38be3f635f)
  <!-- Provide a quick summary of what the model is/does. -->
 
+ # FINE TUNED - BASE MODEL
+ I would consider this [GPT2-medium-custom-v1.0](https://huggingface.co/Deeokay/GPT2-medium-custom-v1.0) the base model from which to start my Fine Tuning 2.0 on specific datasets.
+ - Previous models (gpt-special-tokens-medium 1~4) are considered beta checkpoints for this one.
+
  ## DECLARING NEW SPECIAL TOKENS
 
  ```python
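The README's "DECLARING NEW SPECIAL TOKENS" code block is cut off at its opening fence in this diff view. As a sketch only: this is how custom special tokens are typically registered with the 🤗 `transformers` library before fine-tuning. The token strings `<|BEGIN|>` and `<|END|>` are illustrative placeholders, not the actual tokens this model defines.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder token names: the real special tokens for
# GPT2-medium-custom-v1.0 are defined in its own README, not here.
special_tokens = {"additional_special_tokens": ["<|BEGIN|>", "<|END|>"]}

tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")
model = AutoModelForCausalLM.from_pretrained("gpt2-medium")

# Register the tokens so the tokenizer treats them as atomic units
# and never splits them into subwords.
num_added = tokenizer.add_special_tokens(special_tokens)

# Grow the embedding matrix so the new token ids have embedding rows.
model.resize_token_embeddings(len(tokenizer))
```

Resizing the embeddings after `add_special_tokens` is required: GPT-2's original embedding table has 50257 rows, so the new token ids would otherwise index past its end.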