cdh committed · Commit 29d5001 · verified · 1 Parent(s): 8860492

Update README.md

Files changed (1): README.md (+3, -9)
README.md CHANGED
@@ -35,7 +35,7 @@ library_name: peft
 ---
 
 # Introduction
-The paper explores the capabilities of Large Language Models (LLMs) like LLaMA in syntactic parsing tasks. We introduce U-DepPLLaMA, a novel architecture that treats Dependency Parsing as a sequence-to-sequence problem, achieving state-of-the-art results in 26 languages from the Universal Dependency Treebank. Our approach demonstrates that LLMs can handle dependency parsing without the need for specialized architectures, showing robust performance even with complex sentence structures. The paper is available [here](https://journals.openedition.org/ijcol/1352).
+The paper explores the capabilities of Large Language Models (LLMs) like LLaMA in syntactic parsing tasks. We introduce U-DepPLLaMA, a novel architecture that treats Dependency Parsing as a sequence-to-sequence problem, achieving state-of-the-art results in 26 languages from the Universal Dependency Treebank. Our approach demonstrates that LLMs can handle dependency parsing without the need for specialized architectures, showing robust performance even with complex sentence structures. The paper is available [here](https://www.ai-lc.it/wp-content/uploads/2024/08/IJCOL_10_1_2_hromei_et_al.pdf).
 
 For more details, please consult the associated [Github repository](https://github.com/crux82/u-deppllama).
 
@@ -76,14 +76,8 @@ with torch.no_grad():
     s = gen_outputs.sequences[0]
     output = tokenizer.decode(s, skip_special_tokens=True)
 
-    if "### Answer:" in output:
-        response = output.split("### Answer:")[1].rstrip().lstrip()
-    else:
-        response = "UNK"
-        print("WARNING")
-        print(row)
-        print(output)
-        print("\n--------------------\n")
+    response = output.split("### Answer:")[1].rstrip().lstrip()
+    print(response)
 ```
 
 # Citation
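
For reference, the net effect of the second hunk is that the README's usage snippet no longer guards against a missing answer delimiter. A minimal runnable sketch of the new extraction logic (the `output` string below is a hypothetical stand-in for the decoded generation; only the `"### Answer:"` delimiter is taken from the README's snippet):

```python
# Hypothetical decoded generation; in the README, `output` comes from
# tokenizer.decode(gen_outputs.sequences[0], skip_special_tokens=True).
output = "### Instruction:\nParse the sentence.\n### Answer: det(dog, the) nsubj(barks, dog)"

# After this commit the fallback branch is gone, so the snippet assumes
# the "### Answer:" delimiter is always present in the generation.
response = output.split("### Answer:")[1].strip()  # .strip() == .rstrip().lstrip()
print(response)
```

Note that `output.split("### Answer:")[1]` raises an `IndexError` when the delimiter is absent, a case the removed `else` branch previously reported as `"UNK"` with a warning.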