nielsr HF staff committed on
Commit 7142699 · verified · 1 Parent(s): 3e03a06

Add transformers and text-generation metadata

This PR adds the `library_name` and `pipeline_tag` metadata to make the model easier to find on the Hugging Face Hub.

Files changed (1)
  1. README.md +4 -1
README.md CHANGED
@@ -1,6 +1,9 @@
 ---
 license: apache-2.0
+library_name: transformers
+pipeline_tag: text-generation
 ---
+
 # lmarena-ai/p2l-1.5b-grk-01112025
 
 Large language model (LLM) evaluations typically rely on aggregated metrics like accuracy or human preference, averaging across users and prompts. This averaging obscures user- and prompt-specific variations in model performance.
@@ -51,7 +54,7 @@ Note: the P2L model outputs with this structure:
 ```python
 class P2LOutputs(ModelOutput):
     coefs: torch.FloatTensor = None # "betas" as described above
-    eta: Optional[torch.FloatTensor] = None # tie coefficent (also eta above)
+    eta: Optional[torch.FloatTensor] = None # tie coefficient (also eta above)
     last_hidden_state: torch.FloatTensor = None # last hidden state from the transformer
 ```
 
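
For readers skimming the diff, the `P2LOutputs` container touched above can be illustrated with a minimal, dependency-free sketch. This is an assumption-laden stand-in, not the model's actual API: a plain `dataclass` replaces transformers' `ModelOutput`, and Python lists replace `torch.FloatTensor`, so only the field layout is taken from the README.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical stand-in for the P2LOutputs structure shown in the diff.
# A plain dataclass replaces transformers' ModelOutput, and Python lists
# replace torch.FloatTensor, purely for illustration.
@dataclass
class P2LOutputs:
    coefs: Optional[List[float]] = None  # "betas" as described in the README
    eta: Optional[float] = None  # tie coefficient
    last_hidden_state: Optional[List[float]] = None  # last transformer hidden state

# Fields default to None, mirroring the `= None` defaults in the original class,
# so any subset of outputs can be populated.
out = P2LOutputs(coefs=[0.42, -0.17], eta=0.05)
print(out.coefs)  # the coefficient vector passed in above
print(out.last_hidden_state)  # None: not populated in this example
```

In the real class, each field would hold a tensor produced by the model's forward pass rather than a hand-built list.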