HuyRemy committed (verified) · commit ff6fab5 · 1 parent: 8a170be

Update README.md

Files changed (1): README.md (+3 −4)
README.md CHANGED
@@ -39,7 +39,7 @@ This is the model card of a 🤗 transformers model that has been pushed on the
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
 
 ### Direct Use
-"""
+[code]
 from unsloth import FastLanguageModel
 model, tokenizer = FastLanguageModel.from_pretrained(
     model_name = "huyremy/aichat", # YOUR MODEL YOU USED FOR TRAINING
@@ -49,8 +49,6 @@ model, tokenizer = FastLanguageModel.from_pretrained(
 )
 FastLanguageModel.for_inference(model) # Enable native 2x faster inference
 
-# alpaca_prompt = You MUST copy from above!
-
 inputs = tokenizer(
 [
     alpaca_prompt.format(
@@ -62,7 +60,8 @@ inputs = tokenizer(
 outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
 tokenizer.batch_decode(outputs)
-""
+[/code]
+
 
 [More Information Needed]
 
 ### Downstream Use [optional]
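Note that the snippet in the diff references an `alpaca_prompt` variable that is never defined in the README itself; the removed comment (`# alpaca_prompt = You MUST copy from above!`) pointed at a template defined earlier in the training notebook. A minimal sketch of that missing piece, assuming the standard Alpaca instruction/input/response layout used in Unsloth's example notebooks (the exact wording in the original notebook may differ, and the instruction/input values below are illustrative):

```python
# Hypothetical reconstruction of the alpaca_prompt template the README's
# snippet expects; copy the actual template from your training notebook.
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

# Build the string passed to tokenizer(...) in the snippet. The response
# slot is left empty so the model generates the continuation itself.
prompt = alpaca_prompt.format(
    "Continue the fibonacci sequence.",  # instruction (example value)
    "1, 1, 2, 3, 5, 8",                  # input (example value)
    "",                                  # response: blank for generation
)
print(prompt)
```

With this template in scope, `tokenizer([alpaca_prompt.format(...)], return_tensors="pt")` produces the `inputs` that `model.generate(...)` consumes in the Direct Use example.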