yuxiaod committed on
Commit 6e24892 · verified · 1 Parent(s): 455a5d9

Update README.md

Files changed (1)
  1. README.md +24 -23
README.md CHANGED
@@ -11,35 +11,36 @@ The Knowledge Engineering Group (**[KEG](https://twitter.com/thukeg)**) & Data M
 
 We build **LLMs** and related training & inference techniques:
 
- * **[ChatGLM](https://github.com/THUDM/ChatGLM3)**: Open Bilingual Chat LLMs, among which the ChatGLM-6B series has attracted **10,000,000** downloads on HF.
- * **[CodeGeeX](https://github.com/THUDM/CodeGeeX2)**: A Multilingual Code Generation Model (KDD 2023)
- * **[CogVLM (VisualGLM)](https://github.com/THUDM/CogVLM)**: An Open Visual Language Model
- * **[WebGLM](https://github.com/THUDM/WebGLM)**: An Efficient Web-Enhanced Question Answering System (KDD 2023)
- * **[GLM-130B](https://github.com/THUDM/GLM-130B)**: An Open Bilingual Pre-Trained Model (ICLR 2023)
- * **[CogView](https://github.com/THUDM/CogView)**: An Open Text-to-Image Generation Model (NeurIPS 2021)
- * **[CogVideo](https://github.com/THUDM/CogVideo)**: An Open Text-to-Video Generation Model (ICLR 2023)
- * **[CogAgent](https://github.com/THUDM/CogVLM)**: A Visual Language Model for GUI Agents
- * **[AgentTuning](https://github.com/THUDM/AgentTuning)**: Enabling Generalized Agent Abilities for LLMs
- * **[APAR](https://arxiv.org/abs/2401.06761)**: LLMs Can Do Auto-Parallel Auto-Regressive Decoding
+ * **[ChatGLM](https://github.com/THUDM/ChatGLM3)**,
+ **[CodeGeeX](https://github.com/THUDM/CodeGeeX2)**,
+ **[CogVLM (VisualGLM)](https://github.com/THUDM/CogVLM)**,
+ **[WebGLM](https://github.com/THUDM/WebGLM)**,
+ **[GLM-130B](https://github.com/THUDM/GLM-130B)**,
+ **[CogView](https://github.com/THUDM/CogView)**,
+ **[CogVideo](https://github.com/THUDM/CogVideo)**.
+
+ **[CogAgent](https://github.com/THUDM/CogVLM)**: A Visual Language Model for GUI Agents
+ **[AgentTuning](https://github.com/THUDM/AgentTuning)**: Enabling Generalized Agent Abilities for LLMs
+ **[APAR](https://arxiv.org/abs/2401.06761)**: LLMs Can Do Auto-Parallel Auto-Regressive Decoding
 
 We also work on **LLM evaluations**:
- * **[AgentBench](https://github.com/THUDM/AgentBench)**: A Benchmark to Evaluate LLMs as Agents (ICLR 2024)
- * **[AlignBench](https://github.com/THUDM/AlignBench)**: A Benchmark to Evaluate Chinese Alignment of LLMs
- * **[LongBench](https://github.com/THUDM/LongBench)**: A Bilingual, Multitask Benchmark for Long Context Understanding
+ * **[AgentBench](https://github.com/THUDM/AgentBench)**,
+ **[AlignBench](https://github.com/THUDM/AlignBench)**,
+ **[LongBench](https://github.com/THUDM/LongBench)**.
 
 
 We also **pre-train graph neural networks**:
- * **[CogDL](https://github.com/THUDM/CogDL)**: A Library for Graph Deep Learning (WWW 2023)
- * **[GraphMAE](https://github.com/THUDM/GraphMAE)**: (Generative) Masked Graph Neural Network Pre-Training. (KDD 2022 & [WWW 2023](https://github.com/THUDM/GraphMAE2))
- * **[GPT-GNN](https://github.com/acbull/GPT-GNN)**: Generative Graph Neural Network Pre-Training (KDD 2020, MSR, UCLA).
- * **[GCC](https://github.com/THUDM/CogDL)**: Constrative Graph Neural Network Pre-Training (KDD 2020)
- * **[SelfKG](https://github.com/THUDM/SelfKG)**: Self-Supervised Learning for Knowledge Graphs (WWW 2022)
+ * **[CogDL](https://github.com/THUDM/CogDL)**,
+ **[GraphMAE](https://github.com/THUDM/GraphMAE)**,
+ **[GPT-GNN](https://github.com/acbull/GPT-GNN)**,
+ **[GCC](https://github.com/THUDM/CogDL)**,
+ **[SelfKG](https://github.com/THUDM/SelfKG)**.
 
 We also work on **graph embedding theory, algorithms, and systems**:
- * **[SketchNE](https://github.com/THU-numbda/SketchNE)**: Embedding Billion-Scale Networks Accurately in One Hour (TKDE 2023)
- * **[ProNE](https://github.com/THUDM/ProNE)**: Embedding Networks of 100 Million Nodes with 10-400 Speedup (IJCAI 2019)
- * **[NetSMF](https://github.com/xptree/NetSMF)**: Embedding Networks of 100 Million Nodes (WWW 2019)
- * **[NetMF](https://github.com/xptree/NetMF)**: Understanding DeepWalk, LINE, PTE, and node2vec as Matrix Factorization (WSDM 2018)
+ * **[SketchNE](https://github.com/THU-numbda/SketchNE)**,
+ **[ProNE](https://github.com/THUDM/ProNE)**,
+ **[NetSMF](https://github.com/xptree/NetSMF)**,
+ **[NetMF](https://github.com/xptree/NetMF)**.
 
 We started with **social networks and graphs**, and always love them:
- * **[AMiner](https://www.aminer.cn/)**: An Academic Search and Mining System Since 2006 (KDD 2008, ACM SIGKDD Test of Time Award)
+ * **[AMiner](https://www.aminer.cn/)**.