DataHammer committed
Commit 3ac0c8b (1 parent: a74d9f0)

Create README.md

Files changed (1):
  1. README.md +32 -0
README.md ADDED
@@ -0,0 +1,32 @@
+ ---
+ license: apache-2.0
+ datasets:
+ - allenai/qasper
+ - DataHammer/scimrc
+ language:
+ - en
+ - zh
+ library_name: transformers
+ pipeline_tag: question-answering
+ ---
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+ Mozi is the first large-scale language model for the scientific paper domain, supporting tasks such as question answering and emotional support. With the help of large-scale language models and the evidence retrieval model SciDPR, Mozi generates concise and accurate responses to users' questions about specific papers and provides emotional support for academic researchers.
+
+ - **Developed by:** See the [GitHub repo](https://github.com/gmftbyGMFTBY/science-llm) for the model developers
+ - **Model date:** mozi_llama was trained in May 2023.
+ - **Model version:** This is version 1 of the model.
+ - **Model type:** mozi_llama is an auto-regressive language model based on the transformer architecture; this release is the 7B-parameter size. A loading sketch follows this list.
+ - **Language(s) (NLP):** English and Chinese
+ - **License:** [Apache 2.0](https://github.com/gmftbyGMFTBY/science-llm/blob/main/LICENSE)
+
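+ Below is a minimal loading sketch using the Transformers library declared in the metadata. It treats mozi_llama as a causal language model; the repo id `DataHammer/mozi_llama` and the prompt layout (evidence passage plus question) are hypothetical placeholders, not the project's documented usage.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Hypothetical repo id; replace with the actual Hugging Face repo for this checkpoint.
+ model_id = "DataHammer/mozi_llama"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+
+ # Ask a question about a paper, optionally prepending evidence retrieved by
+ # SciDPR (or any other retriever) as context. The prompt format is illustrative.
+ prompt = (
+     "Evidence: <retrieved passage from the paper>\n"
+     "Question: What problem does this paper address?\n"
+     "Answer:"
+ )
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+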
+ ### Model Sources
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [GitHub Repo](https://github.com/gmftbyGMFTBY/science-llm)
+ - **Paper:** [Paper Repo]()