---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
tags:
- code-llm
- mistral-7b
- language-model
---

# Model Card for 01Coder 7B

This model card provides details about a code language model (LLM) based on the Mistral 7B architecture. It was trained on a combination of three datasets: ise-uiuc/Magicoder-OSS-Instruct-75K, HuggingFaceH4/CodeAlpaca_20K, and theblackcat102/evol-codealpaca-v1.

## Model Details

### Model Description

This model is a language model fine-tuned for code generation tasks, built on the Mistral 7B base architecture. It was trained on a combination of three datasets: Magicoder-OSS-Instruct-75K, CodeAlpaca_20K, and evol-codealpaca-v1. The model aims to help developers generate code snippets for a range of programming tasks, from natural language instructions to specific coding prompts.

- **Developed by:** Manoj Athreya A
- **Model type:** Language model (LLM)
- **License:** Apache 2.0
- **Finetuned from model:** Mistral 7B

## Intended Uses

- Code generation from natural language prompts.
- Assisting developers in completing code snippets.
- Augmenting code-related tasks with automated generation capabilities.
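
The intended uses above can be sketched with the `transformers` library. This is a minimal, hypothetical example: the repo id (`01Coder-7B`), the Alpaca-style prompt template, and the generation settings are assumptions, not confirmed details of this model (the template matches the instruction-tuning datasets listed above, but verify against the model's actual prompt format):

```python
def build_prompt(instruction: str) -> str:
    # Alpaca-style template -- an assumption based on the instruction-tuning
    # datasets listed above; check the model's actual prompt format.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )


def generate_code(instruction: str, max_new_tokens: int = 256) -> str:
    # Heavy dependencies are imported lazily so the prompt helper above
    # stays usable without transformers/torch installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "01Coder-7B"  # hypothetical repo id -- substitute the real Hub path
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Strip the prompt tokens and decode only the newly generated text.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

For example, `generate_code("Write a Python function that reverses a string.")` would return a short code snippet; review any output before running it, per the security note below.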

## Limitations and Ethical Considerations

- **Bias:** As with any language model, biases present in the training data may manifest in the generated code snippets.
- **Accuracy:** While the model aims to generate accurate code, it may occasionally produce incorrect or suboptimal solutions, especially for complex tasks.
- **Security:** Generated code should be reviewed for security vulnerabilities, as the model may inadvertently produce insecure implementations.
- **Ethical Use:** Users are encouraged to employ the model responsibly and ethically, avoiding harmful or malicious use cases.

### Recommendations

- Fine-tuning the model on specific domains or tasks may improve its performance.
- Validate generated code in real-world scenarios to ensure its correctness and reliability.
- Provide feedback to continuously improve the model's performance and address any issues encountered during usage.

## License

- The source code in this repo is licensed under the Apache 2.0 license.

## Version History

- 01-Coder-7Bv0.1