---
license: apache-2.0
base_model:
- HuggingFaceTB/SmolLM2-360M-Instruct
language:
- en
pipeline_tag: text-generation
tags:
- safetensors
- onnx
- transformers
---
# 🌞 SolaraV2 — `summerstars/SolaraV2-coder-0517`
> **📅 Version 0517 (2025-05-17)**
> This is the 0517 release of SolaraV2.
## ✨ Created by a High School Student | Built on Google Colab (T4 GPU)
### 🌸 高校生によって開発 | Google Colab(T4 GPU)で作成
**SolaraV2** is an upgraded version of the original **Solara** — a lightweight, instruction-tuned language model based on [`HuggingFaceTB/SmolLM2-360M-Instruct`](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct).
This version is trained on a **larger and more diverse dataset**, including **basic math-related samples**, improving its ability to handle both casual conversations and educational tasks.
All development was conducted by a high school student using **Google Colab** and a **T4 GPU**.
**SolaraV2(ソララV2)** は、オリジナルの **Solara** モデルを改良した軽量の言語モデルで、[`HuggingFaceTB/SmolLM2-360M-Instruct`](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct) をベースにしています。
本バージョンでは、**より大規模かつ多様なデータセット**(数学系データを含む)で学習を行い、日常会話から教育的な質問まで幅広く対応できるようになりました。
開発はすべて、高校生が **Google Colab(T4 GPU)** 上で行いました。
---
## 📌 Model Details | モデル詳細
| Feature / 特徴 | Description / 説明 |
|--------------------|------------------|
| **Base Model** | `HuggingFaceTB/SmolLM2-360M-Instruct` |
| **Parameters** | 360M |
| **Architecture** | Decoder-only Transformer |
| **Language** | English / 英語 |
| **License** | Apache 2.0 |
| **Training Additions** | Basic math, factual Q&A / 基本数学・事実ベースのデータ追加 |
---
## 🚀 Use Cases | 主な用途
- 🤖 Lightweight chatbots / 軽量チャットボット
- 📱 Inference on CPUs or mobile devices / CPUやモバイル端末での推論
- 📚 Educational or hobbyist projects / 教育・趣味向けプロジェクト
- 🧾 Instruction-following tasks / 指示応答タスク
- ➗ Basic math questions / 基本的な数学問題への対応
---
## 🛠️ How to Use | 使用方法
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer / モデルとトークナイザーを読み込む
model_name = "summerstars/SolaraV2-coder-0517"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Build the prompt and generate / プロンプトを作成して生成
prompt = "What is 15 * 4?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)

# Print the result / 結果を表示
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
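
Because the base model is instruction-tuned, wrapping the prompt in the tokenizer's chat template usually yields better-formatted answers. A sketch, assuming this checkpoint inherits the chat template from `SmolLM2-360M-Instruct`:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "summerstars/SolaraV2-coder-0517"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the question in the chat format the Instruct base model was tuned on
messages = [{"role": "user", "content": "What is 15 * 4?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=64)

# Decode only the newly generated tokens / 新しく生成された部分のみをデコード
reply = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```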