Tama

Here's the subjectively superior L3 version: L3-8B-Niitama-v1

An experimental model using experimental methods.

More detail on it:

Tamamo and Niitama are made from the same data. Literally. The only thing that's changed is how they're shuffled and formatted. Yet I get wildly different results.

Interesting, eh?

Feels a bit weaker than the L3 version, but it's alright.
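The point above — identical data, different shuffle and formatting — can be illustrated with a toy sketch. Everything here is hypothetical (the sample rows, seeds, and templates are made up for illustration, not the actual training setup):

```python
import random

# Hypothetical sample rows standing in for the shared training data.
data = [
    {"instruction": "Write a haiku.", "response": "Leaves fall in silence."},
    {"instruction": "Explain entropy.", "response": "A measure of disorder."},
    {"instruction": "Name a prime.", "response": "Seven."},
]

def build_corpus(rows, seed, template):
    """Shuffle the same rows with a given seed and render each with a template."""
    rows = rows.copy()
    random.Random(seed).shuffle(rows)
    return [template.format(**row) for row in rows]

# Same underlying rows, two shuffles and two prompt formats ->
# two different training corpora, even though no row was added or removed.
corpus_a = build_corpus(
    data, seed=1,
    template="### Instruction:\n{instruction}\n### Response:\n{response}",
)
corpus_b = build_corpus(
    data, seed=2,
    template="<|user|>{instruction}<|assistant|>{response}",
)

print(corpus_a != corpus_b)            # rendered corpora differ
print(len(corpus_a) == len(corpus_b))  # same number of examples
```

Nothing about the content changes, but the token stream the model actually sees during training does — which is one plausible reason the two runs diverge.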

Have a good day.

Downloads last month: 164
Model size: 8.03B params
Tensor type: BF16 (Safetensors)

Model tree for Sao10K/L3.1-8B-Niitama-v1.1
Merges: 18 models
Quantizations: 5 models