---
language: en
license: apache-2.0
tags:
- home-assistant
- voice-assistant
- automation
- assistant
- home
pipeline_tag: text-generation
datasets:
- acon96/Home-Assistant-Requests
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
base_model_relation: finetune
---

# 🏠 TinyLLaMA-1.1B Home Assistant Voice Model

This model is a **fine-tuned version** of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0), trained on the [acon96/Home-Assistant-Requests](https://huggingface.co/datasets/acon96/Home-Assistant-Requests) dataset.  
It is designed to act as a **voice-controlled smart home assistant** that takes natural language instructions and outputs **Home Assistant commands**.

---

## ✨ Features
- Converts **natural language voice commands** into Home Assistant automation calls.  
- Produces **friendly confirmations** and **structured JSON service commands**.  
- Lightweight (1.1B parameters) – runs efficiently on CPU or GPU, and via **Ollama** with quantization.  
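
Structured JSON service commands can be dispatched to Home Assistant's REST API (`POST /api/services/<domain>/<service>`). Below is a minimal sketch of parsing such a command; the `service`/`target` schema shown here is a hypothetical illustration – the model's actual output format depends on the fine-tuning data.

```python
import json

# Hypothetical example of a structured command the model might emit;
# the exact output schema depends on the fine-tuning data.
model_output = '{"service": "light.turn_on", "target": {"entity_id": "light.kitchen"}}'

def parse_service_command(raw: str):
    """Split a generated JSON command into (domain, service, target) parts."""
    command = json.loads(raw)
    domain, service = command["service"].split(".", 1)
    return domain, service, command.get("target", {})

domain, service, target = parse_service_command(model_output)
# A Home Assistant REST call would POST the target payload to
# /api/services/<domain>/<service> with a long-lived access token.
print(f"POST /api/services/{domain}/{service} with {json.dumps(target)}")
```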

---

## 🔧 Example Usage (Transformers)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained("premrajreddy/tinyllama-1.1b-home-llm")
model = AutoModelForCausalLM.from_pretrained("premrajreddy/tinyllama-1.1b-home-llm")

# Encode a natural-language command and generate the model's response
query = "turn on the kitchen lights"
inputs = tokenizer(query, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```