RobAgrees committed on
Commit df9e5e0 · verified · 1 Parent(s): 93ab50b

Update Readme.md

Files changed (1): readme.md +153 −14
readme.md CHANGED
@@ -1,3 +1,4 @@
+ ---
  license: apache-2.0
  pipeline_tag: text-to-speech
  language:
@@ -6,19 +7,6 @@ tags:
  - model_hub_mixin
  - pytorch_model_hub_mixin
  widget:
- - text: >-
-   # Quantized Dia 1.6B (INT8)
-
-   This is a dynamic int8 quantized version of [nari-labs/Dia-1.6B](https://huggingface.co/nari-labs/Dia-1.6B).
-   It uses dynamic quantization for lighter deployment and faster inference.
-
-   Original model: **float16**, ~6.4GB
-   Quantized model: **int8 dynamic**, ~6.4GB
-
-   Uploaded by [RobertAgee](https://github.com/RobertAgee) and [RobAgrees](https://huggingface.co/RobAgrees.
-
-   > Quantized automatically with PyTorch dynamic quantization in Google Colab.
-
  - text: >-
    [S1] Dia is an open weights text to dialogue model. [S2] You get full
    control over scripts and voices. [S1] Wow. Amazing. (laughs) [S2] Try it
@@ -31,4 +19,155 @@ widget:
    Everybody stay fucking calm!!!... Everybody fucking calm down!!!!! [S1]
    No! No! If you touch the handle, if its hot there might be a fire down the
    hallway!
- example_title: Panic protocol
+ example_title: Panic protocol
+ ---

# Quantized Dia 1.6B (INT8)

This is a dynamic int8 quantized version of [nari-labs/Dia-1.6B](https://huggingface.co/nari-labs/Dia-1.6B).
It uses dynamic quantization for lighter deployment and faster inference.

- Original model: **float16**, ~6.4GB
- Quantized model: **int8 dynamic**, ~6.4GB

## ⚡️ Quickstart

This will open a Gradio UI that you can work in.

```bash
git clone --branch int8-dia https://github.com/RobertAgee/dia.git
cd dia && uv run app.py
```

or if you do not have `uv` pre-installed:

```bash
git clone --branch int8-dia https://github.com/RobertAgee/dia.git
cd dia
python -m venv .venv
source .venv/bin/activate
pip install uv
uv run app.py
```

Uploaded by [RobertAgee](https://github.com/RobertAgee) and [RobAgrees](https://huggingface.co/RobAgrees).

> Quantized automatically with PyTorch dynamic quantization in Google Colab.

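For reference, the quantization step looks roughly like the sketch below. This is a minimal reconstruction, not the exact Colab script; in particular, the attribute holding Dia's underlying `torch.nn.Module` is an assumption.

```python
import torch

from dia.model import Dia

# Load the original float16 checkpoint.
model = Dia.from_pretrained("nari-labs/Dia-1.6B")

# Dynamic quantization stores the weights of the selected layer types (here
# nn.Linear) as int8 and quantizes activations on the fly at inference time.
# ASSUMPTION: `model.model` is where the Dia wrapper keeps its nn.Module;
# adapt this to the actual attribute name.
model.model = torch.ao.quantization.quantize_dynamic(
    model.model, {torch.nn.Linear}, dtype=torch.qint8
)

# Save the quantized weights for upload.
torch.save(model.model.state_dict(), "dia-1.6b-int8.pth")
```
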
## Original README

<center>
<a href="https://github.com/nari-labs/dia">
<img src="https://github.com/nari-labs/dia/raw/main/dia/static/images/banner.png">
</a>
</center>

Dia is a 1.6B parameter text to speech model created by Nari Labs. It was pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration.

Dia **directly generates highly realistic dialogue from a transcript**. You can condition the output on audio, enabling emotion and tone control. The model can also produce nonverbal communications like laughter, coughing, clearing throat, etc.

To accelerate research, we are providing access to pretrained model checkpoints and inference code. The model weights are hosted on [Hugging Face](https://huggingface.co/nari-labs/Dia-1.6B). The model only supports English generation at the moment.

We also provide a [demo page](https://yummy-fir-7a4.notion.site/dia) comparing our model to [ElevenLabs Studio](https://elevenlabs.io/studio) and [Sesame CSM-1B](https://github.com/SesameAILabs/csm).

- (Update) We have a ZeroGPU Space running! Try it now [here](https://huggingface.co/spaces/nari-labs/Dia-1.6B). Thanks to the HF team for the support :)
- Join our [discord server](https://discord.gg/yBrqQ9Dd) for community support and access to new features.
- Play with a larger version of Dia: generate fun conversations, remix content, and share with friends. 🔮 Join the [waitlist](https://tally.so/r/meokbo) for early access.

## ⚡️ Quickstart

This will open a Gradio UI that you can work in.

```bash
git clone https://github.com/nari-labs/dia.git
cd dia && uv run app.py
```

or if you do not have `uv` pre-installed:

```bash
git clone https://github.com/nari-labs/dia.git
cd dia
python -m venv .venv
source .venv/bin/activate
pip install uv
uv run app.py
```

Note that the model was not fine-tuned on a specific voice. Hence, you will get different voices every time you run the model.
You can keep speaker consistency by either adding an audio prompt (a guide coming VERY soon - try it with the second example on Gradio for now), or fixing the seed.
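For example, fixing the seed can be as simple as the sketch below (a minimal illustration; exactly which RNGs Dia's sampling draws from is an assumption, so seeding all of the usual sources is the safe option):

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Seed the common RNG sources so repeated runs sample the same voice."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)


set_seed(42)  # call once before model.generate(...)
```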

## Features

- Generate dialogue via `[S1]` and `[S2]` tags.
- Generate non-verbal sounds like `(laughs)`, `(coughs)`, etc.
  - The non-verbal tags below will be recognized, but might result in unexpected output.
  - `(laughs), (clears throat), (sighs), (gasps), (coughs), (singing), (sings), (mumbles), (beep), (groans), (sniffs), (claps), (screams), (inhales), (exhales), (applause), (burps), (humming), (sneezes), (chuckle), (whistles)`
- Voice cloning (see the sketch below). See [`example/voice_clone.py`](example/voice_clone.py) for more information.
  - In the Hugging Face space, you can upload the audio you want to clone and place its transcript before your script. Make sure the transcript follows the required format. The model will then output only the content of your script.
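A rough sketch of the voice cloning flow as a library call (hedged: the `audio_prompt_path` keyword is an assumption here; treat `example/voice_clone.py` as the authoritative reference):

```python
from dia.model import Dia

model = Dia.from_pretrained("nari-labs/Dia-1.6B")

# The transcript of the reference clip comes first, then the new script;
# the model returns audio only for the new script.
clone_transcript = "[S1] This is the transcript of the reference clip."
script = "[S2] And this is a new line spoken in the cloned voice."

# ASSUMPTION: the keyword `audio_prompt_path` may differ in the actual API;
# check example/voice_clone.py before relying on it.
output = model.generate(clone_transcript + " " + script, audio_prompt_path="reference.mp3")
```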

## ⚙️ Usage

### As a Python Library

```python
import soundfile as sf

from dia.model import Dia

# Download the weights from the Hub and load the model.
model = Dia.from_pretrained("nari-labs/Dia-1.6B")

# [S1]/[S2] mark speaker turns; parenthesized tags add non-verbal sounds.
text = "[S1] Dia is an open weights text to dialogue model. [S2] You get full control over scripts and voices. [S1] Wow. Amazing. (laughs) [S2] Try it now on Git hub or Hugging Face."

output = model.generate(text)

# Dia generates audio at a 44.1 kHz sample rate.
sf.write("simple.mp3", output, 44100)
```

A pypi package and a working CLI tool will be available soon.

## 💻 Hardware and Inference Speed

Dia has only been tested on GPUs (PyTorch 2.0+, CUDA 12.6). CPU support is to be added soon.
The initial run will take longer as the Descript Audio Codec also needs to be downloaded.

On enterprise GPUs, Dia can generate audio in real-time. On older GPUs, inference time will be slower.
For reference, on an A4000 GPU, Dia generates roughly 40 tokens/s (86 tokens equal 1 second of audio, so about half of real-time).
`torch.compile` will increase speeds for supported GPUs.
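As a sketch of how that might be wired up (where exactly to apply `torch.compile` inside the Dia wrapper is an assumption; adapt it to the actual module layout):

```python
import torch

from dia.model import Dia

model = Dia.from_pretrained("nari-labs/Dia-1.6B")

# torch.compile traces and optimizes a torch.nn.Module: the first call pays
# a one-off compilation cost, subsequent generations run faster.
# ASSUMPTION: `model.model` is the underlying nn.Module of the Dia wrapper.
model.model = torch.compile(model.model)
```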

The full version of Dia requires around 10GB of VRAM to run. We will be adding a quantized version in the future.

If you don't have hardware available or if you want to play with bigger versions of our models, join the waitlist [here](https://tally.so/r/meokbo).

## 🪪 License

This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.

## ⚠️ Disclaimer

This project offers a high-fidelity speech generation model intended for research and educational use. The following uses are **strictly forbidden**:

- **Identity Misuse**: Do not produce audio resembling real individuals without permission.
- **Deceptive Content**: Do not use this model to generate misleading content (e.g. fake news).
- **Illegal or Malicious Use**: Do not use this model for activities that are illegal or intended to cause harm.

By using this model, you agree to uphold relevant legal standards and ethical responsibilities. We **are not responsible** for any misuse and firmly oppose any unethical usage of this technology.

## 🔭 TODO / Future Work

- Docker support.
- Optimize inference speed.
- Add quantization for memory efficiency.

## 🤝 Contributing

We are a tiny team of 1 full-time and 1 part-time research engineer. Contributions are extra welcome!
Join our [Discord Server](https://discord.gg/yBrqQ9Dd) for discussions.

## 🤗 Acknowledgements

- We thank the [Google TPU Research Cloud program](https://sites.research.google/trc/about/) for providing computation resources.
- Our work was heavily inspired by [SoundStorm](https://arxiv.org/abs/2305.09636), [Parakeet](https://jordandarefsky.com/blog/2024/parakeet/), and [Descript Audio Codec](https://github.com/descriptinc/descript-audio-codec).
- Hugging Face for providing the ZeroGPU grant.
- "Nari" is a pure Korean word for lily.
- We thank Jason Y. for providing help with data filtering.