---
license: mit
language:
- en
tags:
- biology
- chemistry
---

# Model details
## Model description
Nature Language Model (NatureLM) is a sequence-based science foundation model designed for scientific discovery. Pre-trained on data from multiple scientific domains, NatureLM offers a unified, versatile model that enables a range of applications, including generating and optimizing small molecules, proteins, RNA, and materials from text instructions; cross-domain generation and design, such as protein-to-molecule and protein-to-RNA generation; and top performance across domains.

- Developed by: SFM team, Microsoft Research AI for Science
- Model type: Sequence-based science foundation model
- Language(s): English
- License: MIT License

# Model sources
## Repository:
We provide four repositories for the 1B and 8x7B models, including both base and instruction-finetuned versions.

- https://huggingface.co/microsoft/NatureLM-1B
- https://huggingface.co/microsoft/NatureLM-1B-Inst
- https://huggingface.co/microsoft/NatureLM-8x7B
- https://huggingface.co/microsoft/NatureLM-8x7B-Inst
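
The snippet below is a minimal sketch of loading one of these checkpoints with the Hugging Face `transformers` Auto classes. It is an assumption rather than an official quickstart: whether the standard Auto classes apply, and whether `trust_remote_code=True` is needed, depends on each repository's own configuration files.

```python
# Minimal loading sketch (assumption: the checkpoints work with the standard
# transformers Auto classes; custom modeling code may require trust_remote_code).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "microsoft/NatureLM-8x7B-Inst"  # any of the four repositories above

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",   # shard across available GPUs
    torch_dtype="auto",  # keep the dtype stored in the checkpoint
)
```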

## Paper:
[[2502.07527] Nature Language Model: Deciphering the Language of Nature for Scientific Discovery](https://arxiv.org/abs/2502.07527)

# Uses
## Direct intended uses
NatureLM is designed to facilitate scientific discovery across multiple domains, including the generation and optimization of small molecules, proteins, and RNA. It offers two distinctive features: (1) text-driven capability, where users can prompt NatureLM using natural-language instructions; and (2) cross-domain functionality, where NatureLM can perform complex cross-domain tasks, such as generating compounds for specific targets or designing protein binders for small molecules.
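
To illustrate the text-driven capability, here is a self-contained generation sketch. The prompt wording and sampling settings are illustrative assumptions, not a documented NatureLM prompt template; check the instruction-finetuned repositories for the expected format.

```python
# Illustrative text-driven generation; the prompt wording and sampling
# settings are assumptions, not a documented NatureLM template.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "microsoft/NatureLM-1B-Inst"  # instruction-finetuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Generate a small molecule that inhibits the target protein."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```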

## Downstream uses
Researchers can fine-tune NatureLM for their own tasks, especially cross-domain generation tasks.

## Out-of-scope uses
### Use in real-world applications beyond proof of concept
NatureLM is not currently ready for use in clinical applications without rigorous external validation and additional specialized development. It is being released for research purposes only.
### Use outside of the science domain
NatureLM is not a general-purpose language model and is not designed or optimized to perform general tasks like text summarization or Q&A.
### Use by non-experts
NatureLM outputs scientific entities (e.g., molecules, proteins, materials) that require expert interpretation, validation, and analysis. It is not intended for use by non-experts or individuals without the domain knowledge needed to evaluate and verify its outputs. Outputs such as small-molecule inhibitors for target proteins require rigorous validation to ensure safety and efficacy. Misuse by non-experts may lead to the design of inactive or suboptimal compounds, wasting resources and potentially delaying critical research or development efforts.

## Risks and limitations
NatureLM may not always generate compounds or proteins precisely aligned with user instructions. Users are advised to apply their own adaptive filters before proceeding (one example is sketched below); users are responsible for verifying model outputs and for decisions based on them.

NatureLM was designed and tested using the English language. Performance in other languages may vary and should be assessed by someone who is both an expert in the expected outputs and a native speaker of that language.

NatureLM inherits any biases, errors, or omissions characteristic of its training data, which may be amplified by AI-generated interpretations. For example, inorganic data in our training corpus is relatively limited, comprising only 0.02 billion of 143 billion total tokens, so the model's performance on inorganic-related tasks is constrained. In contrast, protein-related data dominates the corpus at 65.3 billion tokens, accounting for the majority of the training data.

There has not been a systematic effort to ensure that systems using NatureLM are protected from security vulnerabilities such as indirect prompt injection attacks. Any system using it should take proactive measures to harden itself as appropriate.
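
As one example of such an adaptive filter, the sketch below screens generated SMILES strings for basic chemical validity with RDKit. This is an illustrative minimum bar, not a sufficiency check: passing it says nothing about activity, safety, or synthesizability.

```python
# Illustrative validity filter for generated SMILES strings using RDKit.
# Passing this filter only means the string parses as a molecule; it does
# not imply activity, safety, or synthesizability.
from rdkit import Chem

def keep_valid_smiles(candidates):
    """Return the subset of candidate SMILES that RDKit can parse."""
    valid = []
    for smiles in candidates:
        mol = Chem.MolFromSmiles(smiles)
        if mol is not None:
            valid.append(Chem.MolToSmiles(mol))  # canonical form
    return valid

print(keep_valid_smiles(["CCO", "not-a-molecule", "c1ccccc1"]))
# -> ['CCO', 'c1ccccc1']
```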

# Training details
## Training data
The pre-training data includes text, small molecules (SMILES notation), proteins (FASTA format), materials (chemical composition and space group number), DNA (FASTA format), and RNA (FASTA format). The dataset contains both single-domain and cross-domain sequences.
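
For context, here are generic examples of the notations named above; these are illustrative textbook examples, not samples drawn from the NatureLM training corpus.

```python
# Generic examples of the sequence notations used in the pre-training data
# (illustrative only; not samples from the NatureLM corpus).
smiles_aspirin = "CC(=O)Oc1ccccc1C(=O)O"             # small molecule in SMILES
protein_fasta = ">example_protein\nMVLSPADKTNVKAAW"  # protein in FASTA format
rna_fasta = ">example_rna\nAUGGCGGCGUAA"             # RNA in FASTA format
print(smiles_aspirin, protein_fasta, rna_fasta, sep="\n")
```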

## Training procedure
The training procedure involves two stages: Stage 1 trains the newly introduced tokens while freezing the existing model parameters; Stage 2 jointly optimizes both new and existing parameters to improve overall performance.
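
A minimal PyTorch sketch of this freeze-then-unfreeze schedule, under the assumption that the parameters introduced for new tokens can be identified by name; this illustrates the idea only and is not NatureLM's actual training code.

```python
# Sketch of the two-stage schedule: stage 1 trains only newly introduced
# parameters; stage 2 jointly optimizes everything. Names are hypothetical;
# in practice new token embeddings share a tensor with existing rows, so a
# gradient mask over those rows may be needed instead of per-name freezing.
import torch

def configure_stage(model: torch.nn.Module, stage: int, new_param_names: set):
    for name, param in model.named_parameters():
        # Stage 1: freeze existing weights, train only the new parameters.
        # Stage 2: everything receives gradients.
        param.requires_grad = (stage == 2) or (name in new_param_names)

def make_optimizer(model: torch.nn.Module, lr: float):
    # Rebuild the optimizer after each stage change so it only tracks
    # parameters that are actually trainable.
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.AdamW(trainable, lr=lr)
```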

## Training hyperparameters
- Learning rate:
  - 1B model: 1×10<sup>−4</sup>
  - 8x7B model: 2×10<sup>−4</sup>
- Batch size (sentences):
  - 1B model: 4096
  - 8x7B model: 1536
- Context length (tokens):
  - All models: 8192
- Number of GPUs (H100):
  - 1B model: 64
  - 8x7B model: 256

## Speeds, sizes, times
Model sizes are listed above.

# Evaluation
## Testing data, factors, and metrics
### Testing data
The testing data includes 22 types of scientific tasks, such as molecular generation, protein generation, material generation, RNA generation, and prediction tasks across small molecules, proteins, and DNA.

### Factors
1. Cross-domain adaptability: the ability of NatureLM to perform tasks that span multiple scientific domains (e.g., protein-to-compound generation, RNA design for CRISPR targets, or material design with specific properties).
2. Accuracy of outputs: for tasks like retrosynthesis, the correctness of the outputs compared to ground truth or experimentally validated data.
3. Diversity and novelty of outputs: whether the generated outputs are novel (e.g., new molecules or materials not present in databases or the training data).
4. Scalability across model sizes: performance improvements as the model size increases (1B, 8B, and 46.7B parameters).
### Metrics
Accuracy, AUROC, and independently trained AI-based predictors are used across the various tasks.
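
As a reference for the AUROC metric, the generic scikit-learn computation is shown below; this is a standard-library sketch, not the project's evaluation harness.

```python
# Generic AUROC computation for a binary prediction task (standard
# scikit-learn usage; not NatureLM's actual evaluation code).
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]            # ground-truth binary labels
y_score = [0.1, 0.4, 0.35, 0.8]  # predicted probabilities for the positive class
print(roc_auc_score(y_true, y_score))  # 0.75
```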

## Evaluation results
1. We demonstrated that NatureLM can perform cross-domain tasks such as target-to-compound, target-to-RNA, and DNA-to-RNA generation.
2. NatureLM achieves state-of-the-art results on retrosynthesis benchmarks and on the MatBench benchmark for materials.
3. NatureLM can generate novel proteins, small molecules, and materials.

# Summary
Nature Language Model (NatureLM) is a sequence-based science foundation model designed to unify multiple scientific domains, including small molecules, materials, proteins, DNA, and RNA. It leverages the "language of nature" to enable scientific discovery through text-based instructions. By integrating knowledge across scientific domains, NatureLM gives researchers a powerful tool to drive innovation, accelerate scientific breakthroughs, and pave the way for new discoveries across fields of science. We hope its release benefits more users and contributes to the development of AI for Science research.

# Model card contact
This work was conducted at Microsoft Research AI for Science. We welcome feedback and collaboration from our audience. If you have suggestions, questions, or observe unexpected or offensive behavior in our technology, please contact us at:
- Yingce Xia, [email protected]
- Chen Hu, [email protected]
- Yawen Yang, [email protected]

If the team receives reports of undesired behavior or identifies issues independently, we will update this repository with appropriate mitigations.