auslawbench committed · verified
Commit 3f90d01 · 1 Parent(s): f8694ee

Update README.md

Files changed (1):
  1. README.md +51 -2
README.md CHANGED
@@ -11,8 +11,6 @@ base_model:
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-
-
 ## Model Details
 
 ### Model Description
@@ -33,6 +31,57 @@ This is the model card of a 🤗 transformers model that has been pushed on the
 
 - **Paper:** https://arxiv.org/pdf/2412.06272
 
+ ## Uses
+ 
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+ Here's how you can run the model:
+ 
+ ```python
+ # pip install git+https://github.com/huggingface/transformers.git
+ # pip install git+https://github.com/huggingface/peft.git
+ # pip install bitsandbytes accelerate  # required for 8-bit loading and device_map="auto"
+ 
+ import torch
+ from transformers import (
+     AutoModelForCausalLM,
+     AutoTokenizer,
+     BitsAndBytesConfig,
+ )
+ from peft import PeftModel
+ 
+ # Load the base model in 8-bit to reduce GPU memory usage.
+ model = AutoModelForCausalLM.from_pretrained(
+     "Equall/Saul-7B-Base",
+     quantization_config=BitsAndBytesConfig(load_in_8bit=True),
+     device_map="auto",
+ )
+ 
+ tokenizer = AutoTokenizer.from_pretrained("Equall/Saul-7B-Base")
+ tokenizer.pad_token = tokenizer.eos_token
+ 
+ # Apply the citation-prediction adapter on top of the base model.
+ model = PeftModel.from_pretrained(
+     model,
+     "auslawbench/Cite-SaulLM-7B",
+     device_map="auto",
+     torch_dtype=torch.bfloat16,
+ )
+ model.eval()
+ 
+ fine_tuned_prompt = """
+ ### Instruction:
+ {}
+ 
+ ### Input:
+ {}
+ 
+ ### Response:
+ {}"""
+ 
+ input_text = "Your legal text here."  # replace with the passage that needs a citation
+ model_input = fine_tuned_prompt.format(
+     "Predict the name of the case that needs to be cited in the text and explain why it should be cited.",
+     input_text,
+     "",
+ )
+ inputs = tokenizer(model_input, return_tensors="pt").to("cuda")
+ outputs = model.generate(**inputs, max_new_tokens=256, temperature=1.0)
+ output = tokenizer.decode(outputs[0], skip_special_tokens=True)
+ # Keep the response text up to (and including) the first '>', i.e. the predicted case name.
+ print(output.split("### Response:")[1].strip().split('>')[0] + '>')
+ ```
 
 ## Citation
 
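For reference, here is the exact prompt string the snippet above assembles. This is a sketch only: the instruction string is copied verbatim from the diff, while `input_text` is a hypothetical stand-in for a real legal passage.

```python
# Sketch: reproduce the prompt layout used in the Uses section above.
# `input_text` is a hypothetical example, not part of the commit.
instruction = (
    "Predict the name of the case that needs to be cited in the text "
    "and explain why it should be cited."
)
input_text = "Your legal text here."

fine_tuned_prompt = """
### Instruction:
{}

### Input:
{}

### Response:
{}"""

# The Response slot is left empty; the model completes it at generation time.
print(fine_tuned_prompt.format(instruction, input_text, ""))
```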