---
language: en
license: apache-2.0
tags:
- home-assistant
- voice-assistant
- automation
- assistant
- home
pipeline_tag: text-generation
datasets:
- acon96/Home-Assistant-Requests
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
base_model_relation: finetune
---
|
|
|
# 🏠 TinyLLaMA-1.1B Home Assistant Voice Model
|
|
|
This model is a **fine-tuned version** of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0), trained on the [acon96/Home-Assistant-Requests](https://huggingface.co/datasets/acon96/Home-Assistant-Requests) dataset.
|
It is designed to act as a **voice-controlled smart home assistant** that takes natural language instructions and outputs **Home Assistant commands**. |
|
|
|
--- |
|
|
|
## ✨ Features
|
- Converts **natural language voice commands** into Home Assistant automation calls. |
|
- Produces **friendly confirmations** and **structured JSON service commands**. |
|
- Lightweight (1.1B parameters): runs efficiently on CPUs, on GPUs, and via **Ollama** with quantization.
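The Ollama path mentioned above can be sketched as follows. Note that the GGUF conversion step, the file name, and the quantization level are assumptions for illustration, not artifacts shipped with this repository:

```shell
# 1. Convert the checkpoint to GGUF with llama.cpp's conversion script,
#    producing e.g. tinyllama-1.1b-home-llm.Q4_K_M.gguf (hypothetical name).
# 2. Register the weights with Ollama through a minimal Modelfile:
cat > Modelfile <<'EOF'
FROM ./tinyllama-1.1b-home-llm.Q4_K_M.gguf
EOF
ollama create home-llm -f Modelfile

# 3. Query the local model:
ollama run home-llm "turn on the kitchen lights"
```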
|
|
|
--- |
|
|
|
## 🧠 Example Usage (Transformers)
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("premrajreddy/tinyllama-1.1b-home-llm")
model = AutoModelForCausalLM.from_pretrained("premrajreddy/tinyllama-1.1b-home-llm")

# The base model is chat-tuned, so format the request with its chat template
# (assumed to be inherited from TinyLlama-1.1B-Chat-v1.0).
messages = [{"role": "user", "content": "turn on the kitchen lights"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=80)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
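Since the model's replies are expected to combine a friendly confirmation with a structured JSON service command, a minimal sketch of pulling that command out of the generated text follows. The exact reply format and the `extract_service_call` helper are assumptions; adapt the parsing to the output your deployment actually produces before forwarding calls to Home Assistant:

```python
import json
import re

def extract_service_call(generated):
    """Pull the first JSON object out of the model's reply, if any.

    Hypothetical helper: assumes the service command appears as a
    single JSON object embedded in the generated text.
    """
    match = re.search(r"\{.*\}", generated, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None

# Hypothetical model reply: confirmation text plus a service command.
reply = 'Turning on the kitchen lights. {"service": "light.turn_on", "entity_id": "light.kitchen"}'
call = extract_service_call(reply)
print(call)  # {'service': 'light.turn_on', 'entity_id': 'light.kitchen'}
```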