---
datasets:
- homebrewltd/instruction-speech-whispervq-v2
language:
- en
license: apache-2.0
tags:
- sound language model
---

## Caution

This is an 8-bit quantized model, intended for inference only, using the [bitsandbytes](https://github.com/bitsandbytes-foundation/bitsandbytes) implementation.

## Model Details

We have developed and released the [llama3-s](https://huggingface.co/collections/homebrew-research/llama3-s-669df2139f0576abc6eb7405) model family, which natively understands both audio and text input.

We continued supervised fine-tuning of our last checkpoint [homebrewltd/...](...), using WhisperVQ as the audio tokenizer, on 2B tokens from the [Instruction Speech WhisperVQ v2](https://huggingface.co/datasets/homebrewltd/instruction-speech-whispervq-v2) dataset.

**Model developers** Homebrew Research.

**Input** Text and sound.

**Output** Text.

**Model Architecture** Llama-3.

**Language(s):** English.

## Intended Use

**Intended Use Cases** This family is primarily intended for research applications. This version aims to further improve the LLM's sound-understanding capabilities.

**Out-of-scope** The use of llama3-s in any manner that violates applicable laws or regulations is strictly prohibited.

## How to Get Started with the Model

First, we need to convert the audio file into sound tokens. A minimal sketch, assuming the [WhisperSpeech](https://github.com/collabora/WhisperSpeech) `vq_stoks` API and a `<|sound_NNNN|>` special-token format (verify both against the released code before use):

```python
# Sketch only: the checkpoint name, API calls, and token format are assumptions.
import torch
import torchaudio
from whisperspeech.vq_stoks import RQBottleneckTransformer

device = "cuda" if torch.cuda.is_available() else "cpu"
vq_model = RQBottleneckTransformer.load_model(
    "collabora/whisperspeech:whisper-vq-stoks-medium-en+pl.model"
).to(device)

def audio_to_sound_tokens(audio_path: str) -> str:
    wav, sr = torchaudio.load(audio_path)
    if sr != 16000:  # WhisperVQ expects 16 kHz audio
        wav = torchaudio.functional.resample(wav, sr, 16000)
    with torch.no_grad():
        codes = vq_model.encode_audio(wav.to(device))[0].cpu().tolist()
    tokens = "".join(f"<|sound_{c:04d}|>" for c in codes)
    return f"<|sound_start|>{tokens}<|sound_end|>"
```

Then we can run inference on the model as with any other LLM. A minimal sketch using Transformers with 8-bit bitsandbytes loading (the repository id below and the prompt handling are assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "homebrewltd/llama3-s-8bit"  # hypothetical repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# `sound_tokens` is the <|sound_start|>...<|sound_end|> string from the previous step.
inputs = tokenizer(sound_tokens, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training process
**Training Metrics**: Below is a snapshot of the training loss curve.

![training_loss](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/Mo_FGQvhkcHl3y1REf76f.png)

### Hardware

**GPU Configuration**: Cluster of 8x NVIDIA H100-SXM-80GB.
**GPU Usage**:
  - **Continual Training**: 6 hours.
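
As a rough throughput check (assuming the 2B-token dataset was seen exactly once, consistent with the single epoch listed below), the effective training rate works out to about 42M tokens per GPU-hour:

```python
# Back-of-the-envelope throughput from the figures above (2B tokens, 8 GPUs, 6 hours).
tokens = 2_000_000_000
gpus, hours = 8, 6
per_gpu_hour = tokens / (gpus * hours)
print(f"{per_gpu_hour:,.0f} tokens per GPU-hour")
```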

### Training Arguments

We use the [torchtune](https://github.com/pytorch/torchtune) library, which provides an up-to-date FSDP2 training implementation.

| Parameter                  | Continual Training      | 
|----------------------------|-------------------------|
| **Epoch**                  | 1                       | 
| **Global batch size**      | 128                     | 
| **Learning Rate**          | 0.5e-4                  | 
| **Learning Scheduler**     | Cosine with warmup      | 
| **Optimizer**              | Adam torch fused        | 
| **Warmup Ratio**           | 0.01                    | 
| **Weight Decay**           | 0.005                   |
| **Max Sequence Length**    | 1024                    |
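
The learning-rate schedule in the table can be sketched as follows; the total step count is illustrative, while the peak LR (0.5e-4) and warmup ratio (0.01) come from the table:

```python
import math

def lr_at_step(step: int, total_steps: int,
               peak_lr: float = 0.5e-4, warmup_ratio: float = 0.01) -> float:
    """Cosine decay with linear warmup, using the table's peak LR and warmup ratio."""
    warmup_steps = max(1, int(total_steps * warmup_ratio))
    if step < warmup_steps:
        return peak_lr * step / warmup_steps  # linear warmup to the peak
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1 + math.cos(math.pi * progress))  # cosine decay to 0

# Example over a hypothetical run of 10,000 optimizer steps:
print(lr_at_step(0, 10_000))       # 0.0 (start of warmup)
print(lr_at_step(100, 10_000))     # 5e-05 (peak, end of warmup)
print(lr_at_step(10_000, 10_000))  # ~0.0 (end of training)
```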


## Examples

1. Good example:

<details>
<summary>Click to toggle Example 1</summary>

```

```
</details>

<details>
<summary>Click to toggle Example 2</summary>

```

```
</details>


2. Misunderstanding example:

<details>
<summary>Click to toggle Example 3</summary>
  
```

```
</details>

3. Off-track example:

<details>
<summary>Click to toggle Example 4</summary>

```

```
</details>


## Citation Information

**BibTeX:**

```
@article{homebrew2024llama3s,
  title={Llama3-S: Sound Instruction Language Model},
  author={Homebrew Research},
  year={2024},
  month={August},
  url={https://huggingface.co/homebrewltd/llama3.1-s-2024-08-15}
}
```

## Acknowledgement

- **[WhisperSpeech](https://github.com/collabora/WhisperSpeech)**

- **[Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)**