Tama

Here's the subjectively superior L3 version: L3-8B-Niitama-v1

An experimental model using experimental methods.

More detail on it:

Tamamo and Niitama are made from the same data. Literally. The only thing that's changed is how it's shuffled and formatted. Yet, I get wildly different results.

Interesting, eh?
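The point above — identical data, different shuffle and format — can be sketched with a toy example. The seed values and record names here are made up purely for illustration; this is not the actual training setup:

```python
import random

# Toy "dataset": both runs see exactly the same records.
records = [f"sample-{i}" for i in range(8)]

def shuffled(data, seed):
    """Return a copy of `data` shuffled deterministically with a fixed seed."""
    copy = list(data)
    random.Random(seed).shuffle(copy)
    return copy

run_a = shuffled(records, seed=1)  # hypothetical ordering for one model
run_b = shuffled(records, seed=2)  # hypothetical ordering for the other

# Same multiset of samples either way; only the order the trainer sees differs.
print(run_a)
print(run_b)
```

The same idea extends to formatting: wrapping identical samples in different prompt templates changes the token sequences the model trains on, even though the underlying data is unchanged.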

Feels kinda not as good compared to the L3 version, but it's aight.

Have a good day.
