ssmits committed (verified) · Commit 99ace76 · Parent: e2be15c

Update README.md

Files changed (1): README.md (+31 −0)
![Layer Similarity Plot](https://cdn-uploads.huggingface.co/production/uploads/660c0a02cf274b3ab77dd6b7/PaL4iBzj6ikuMfna2EUWp.png)

```python
import transformers
import torch
from transformers import AutoTokenizer

model = "ssmits/Falcon2-5.5B-Dutch"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
)
sequences = pipeline(
    "Can you explain the concepts of Quantum Computing?",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
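Each element of `sequences` above is a dict carrying the prompt plus the model's continuation under the `generated_text` key. The snippet below sketches that output shape with a mocked result, so it runs without loading the model; the text shown is illustrative, not actual model output:

```python
# Mocked pipeline output: a list with one dict per returned sequence.
# The "generated_text" value is illustrative, not real model output.
sequences = [
    {"generated_text": "Can you explain the concepts of Quantum Computing? ..."}
]
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```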

💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**

For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).
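A running TGI server exposes a `/generate` endpoint that accepts a JSON body with `inputs` and `parameters`. This sketch only builds such a payload, mirroring the sampling settings from the example above; the localhost address and port are assumptions, and no request is actually sent here:

```python
import json

# JSON payload for TGI's /generate endpoint; mirrors the sampling
# settings used in the pipeline example above.
payload = {
    "inputs": "Can you explain the concepts of Quantum Computing?",
    "parameters": {"max_new_tokens": 200, "do_sample": True, "top_k": 10},
}
body = json.dumps(payload)
print(body)
# Once a TGI instance is running, POST this body to
# http://localhost:8080/generate with Content-Type: application/json
# (port 8080 is an assumed example).
```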

## Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbots, etc.)