---
library_name: transformers
tags:
- code
- hpc
- parallel
- axonn
---

# HPC-Coder-v2
The HPC-Coder-v2-6.7b model is an HPC code LLM fine-tuned on an instruction dataset covering common HPC topics such as parallelism, optimization, and accelerator porting.
This version is a fine-tune of the [Deepseek Coder 6.7b](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) base model.
It is fine-tuned on the [hpc-instruct](https://huggingface.co/datasets/hpcgroup/hpc-instruct), [oss-instruct](https://huggingface.co/datasets/ise-uiuc/Magicoder-OSS-Instruct-75K), and [evol-instruct](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) datasets.
We used the distributed training library [AxoNN](https://github.com/axonn-ai/axonn) to fine-tune in parallel across many GPUs.
HPC-Coder-v2-6.7b is the best-performing LLM under 30 billion parameters on the [ParEval](https://github.com/parallelcodefoundry/ParEval) parallel code generation benchmark in terms of _correctness_ and _performance_.
It scores similarly to 34B and commercial models like Phind-V2 and GPT-4 on parallel code generation.
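## Usage

A minimal usage sketch with the `transformers` library is shown below. It assumes the model is hosted on the Hugging Face Hub under the repo id `hpcgroup/hpc-coder-v2-6.7b`; the prompt and generation settings are illustrative, not prescriptive.

```python
# Minimal generation sketch; the repo id below is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hpcgroup/hpc-coder-v2-6.7b"  # assumed Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 6.7b model within a single modern GPU
    device_map="auto",
)

# An instruction-style prompt on a common HPC topic (OpenMP parallelism).
prompt = "Write an OpenMP parallel loop that computes the dot product of two double arrays.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```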