krutrim-admin committed · Commit 149fd3b · verified · 1 parent: 70d97a8

Updated readme

Files changed (1): README.md (+4 -5)
README.md CHANGED

@@ -55,7 +55,7 @@ Krutrim Large Language Model (LLM) is a 2 trillion token multilingual foundation
 
 ## Evaluation Results
 
-### English Comparison between Krutrim-1-7B and Llama2Chat-7B (Benchmarks run on `llm_foundry`)
+### English Comparison between Llama2Chat-7B and Krutrim-1-7B
 
 | Task | Llama2Chat | Krutrim-1-7B |
 |--------------------|--------------|------------|
@@ -125,7 +125,6 @@ chat_template ="{% for message in messages %}{% if message['role'] == 'system' %
 tokenizer.chat_template = chat_template
 
 prompt_dict = [
-
 {"role": "system", "content": "You are an AI assistant."},
 {"role": "user", "content": "Who are you?"}
 ]
@@ -151,13 +150,13 @@ The model was trained on a dataset that includes content from the internet, whic
 - Provide inaccurate, incomplete, or redundant answers
 - Generate responses in languages inconsistent with the prompt
 
-## License
-TBD
-
 ## Ethical Considerations
 - The model may produce biased or offensive outputs based on its training data.
 - Users should apply human oversight when using the model for decision-making in sensitive areas.
 - While safeguards have been implemented, the model may still generate socially undesirable text in certain contexts.
 
+## License
+TBD
+
 ## Contact
 TBD
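The second hunk touches the README's chat-template snippet, which assigns a Jinja-style `chat_template` to the tokenizer and builds a `prompt_dict` message list. As a minimal, self-contained sketch of what rendering such a template over `prompt_dict` produces — the plain `role: content` layout here is an illustrative assumption, not Krutrim's actual template, and `apply_chat_template` below is a hypothetical stand-in for the `transformers` tokenizer method:

```python
def apply_chat_template(messages):
    """Render a list of chat messages into a single prompt string.

    Simplified stand-in for tokenizer.apply_chat_template; the
    "role: content" layout is an assumed, illustrative format.
    """
    return "".join(f"{m['role']}: {m['content']}\n" for m in messages)

prompt_dict = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Who are you?"},
]

print(apply_chat_template(prompt_dict))
# system: You are an AI assistant.
# user: Who are you?
```

In the real snippet, the rendered string would then be tokenized and passed to the model for generation.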