The Knowledge Engineering Group (**[KEG](https://twitter.com/thukeg)**) & Data Mining ([THUDM](https://github.com/THUDM)) at Tsinghua University.
We build **LLMs** and related training & inference techniques:
* **[ChatGLM](https://github.com/THUDM/ChatGLM3)**: Open Bilingual Chat LLMs, among which the ChatGLM-6B series has attracted **10,000,000** downloads on HF (see the loading sketch after this list).
* **[CodeGeeX](https://github.com/THUDM/CodeGeeX2)**: A Multilingual Code Generation Model (KDD 2023)
* **[GLM-130B](https://github.com/THUDM/GLM-130B)**: An Open Bilingual Pre-Trained Model (ICLR 2023)
* **[CogView](https://github.com/THUDM/CogView)**: An Open Text-to-Image Generation Model (NeurIPS 2021)
* **[CogVideo](https://github.com/THUDM/CogVideo)**: An Open Text-to-Video Generation Model (ICLR 2023)
* **[CogAgent](https://github.com/THUDM/CogVLM)**: A Visual Language Model for GUI Agents
* **[AgentTuning](https://github.com/THUDM/AgentTuning)**: Enabling Generalized Agent Abilities for LLMs
* **[APAR](https://arxiv.org/abs/2401.06761)**: LLMs Can Do Auto-Parallel Auto-Regressive Decoding
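
As a quick illustration of how these checkpoints are consumed, here is a minimal sketch of loading ChatGLM3-6B from HF with `transformers`, following the usage documented in the ChatGLM3 repository (it assumes a CUDA GPU and that you accept remote code from the model repo):

```python
# Minimal sketch: chatting with ChatGLM3-6B through Hugging Face transformers,
# following the usage documented in the THUDM/ChatGLM3 repository.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm3-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm3-6b", trust_remote_code=True).half().cuda()
model = model.eval()

# `chat` is a convenience method shipped with the model's remote code (not a
# standard transformers API); it returns the reply plus the updated history.
response, history = model.chat(tokenizer, "What can you do?", history=[])
print(response)
```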
We also work on **LLM evaluations**:
* **[AgentBench](https://github.com/THUDM/AgentBench)**: A Benchmark to Evaluate LLMs as Agents (ICLR 2024)
* **[AlignBench](https://github.com/THUDM/AlignBench)**: A Benchmark to Evaluate Chinese Alignment of LLMs
* **[LongBench](https://github.com/THUDM/LongBench)**: A Bilingual, Multitask Benchmark for Long Context Understanding (see the data-loading sketch below)
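
The benchmark data is distributed through the HF Hub. Below is a minimal sketch of pulling one LongBench task with the `datasets` library, following the loading pattern in the LongBench README; the `narrativeqa` subset and the `input`/`context` field names are taken from its dataset card:

```python
# Minimal sketch: loading one LongBench task from the Hugging Face Hub.
from datasets import load_dataset

# "narrativeqa" is one of the benchmark's task subsets; swap in any other task name.
data = load_dataset("THUDM/LongBench", "narrativeqa", split="test")

# Each record pairs a long context with a question ("input") and reference answers.
sample = data[0]
print(sample["input"])
print(len(sample["context"]), "characters of context")
```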