nielsr (HF staff) committed
Commit 76496fa · verified · 1 Parent(s): 10b5fb7

Add pipeline tag and library name


This PR ensures the "how to use" button appears on the top right and the model can be found at https://huggingface.co/models?pipeline_tag=text-generation.
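For reference, `pipeline_tag: text-generation` is what drives the Hub's standard usage snippet. A minimal sketch of the kind of call that snippet points users to is below; whether the P2L model works through the stock pipeline is an assumption, since it defines a custom output head and may instead require the loading code described in the README.

```python
# Minimal sketch, not part of this PR: the generic transformers
# "text-generation" call implied by the pipeline_tag. The P2L model
# has a custom output head, so this plain pipeline path is an
# assumption; the README's own loading code may be required instead.
from transformers import pipeline

generator = pipeline("text-generation", model="lmarena-ai/p2l-360m-bt-01132025")
print(generator("Which model should answer this prompt?", max_new_tokens=20))
```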

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -1,6 +1,9 @@
 ---
 license: apache-2.0
+library_name: transformers
+pipeline_tag: text-generation
 ---
+
 # lmarena-ai/p2l-360m-bt-01132025

 Large language model (LLM) evaluations typically rely on aggregated metrics like accuracy or human preference, averaging across users and prompts. This averaging obscures user- and prompt-specific variations in model performance.
@@ -33,7 +36,6 @@ To serve a P2L model, please see our documentation on GitHub: [Serving P2L](http

 Note: the P2L model outputs with this structure:

-
 ```python
 class P2LOutputs(ModelOutput):
     coefs: torch.FloatTensor = None # "betas" as described above
@@ -57,8 +59,6 @@ with open(fname) as fin:
     model_list = json.load(fin)
 ```

-
-
 ### Loading from Pretrained

 To define and load the model: