We are thrilled to present the improved "ClearerVoice-Studio", an open-source platform designed to make speech processing easy to use for everyone! Whether you’re working on speech enhancement, speech separation, speech super-resolution, or target speaker extraction, this unified platform has you covered.
**Why Choose ClearerVoice-Studio?**
- Pre-Trained Models: Includes cutting-edge pre-trained models, fine-tuned on extensive, high-quality datasets. No need to start from scratch!
- Ease of Use: Designed for seamless integration with your projects, offering a simple yet flexible interface for inference and training (see the sketch after this list).
- Enhance noisy speech recordings to achieve crystal-clear quality.
- Separate speech from complex audio mixtures with ease.
- Transform low-resolution audio into high-resolution audio. A fully upscaled LJSpeech-1.1-48kHz dataset can be downloaded from alibabasglab/LJSpeech-1.1-48kHz.
- Extract target speaker voices with precision using audio-visual models.
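As a rough illustration of the inference interface, here is a minimal sketch of running speech enhancement. The `clearvoice` package name, the `ClearVoice` class, and the model name `MossFormer2_SE_48K` are assumptions based on the repository's documented usage; check the README for the exact API and available checkpoints.

```python
# Minimal sketch of ClearerVoice-Studio inference (speech enhancement).
# Assumptions: the package is importable as `clearvoice`, exposes a `ClearVoice`
# class, and ships a 48 kHz enhancement model named 'MossFormer2_SE_48K'.
from clearvoice import ClearVoice

# Select the task and a pre-trained model (assumed name).
cv_se = ClearVoice(task='speech_enhancement', model_names=['MossFormer2_SE_48K'])

# Enhance a noisy recording and write the result to disk.
enhanced_wav = cv_se(input_path='noisy_speech.wav', online_write=False)
cv_se.write(enhanced_wav, output_path='enhanced_speech.wav')
```

The same pattern should carry over to the other tasks (speech separation, super-resolution, target speaker extraction) by changing the task and model names.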
**Join Us in Growing ClearerVoice-Studio!**
We believe in the power of open-source collaboration. By starring our GitHub repository and sharing ClearerVoice-Studio with your network, you can help us grow this community-driven platform.
**Support us by:**
- Starring it on GitHub.
- Exploring and contributing to our codebase.
- Sharing your feedback and use cases to make the platform even better.
- Joining our community discussions to exchange ideas and innovations.

Together, let’s push the boundaries of speech processing! Thank you for your support! 💖
🔥 Why 3DIS & 3DIS-FLUX? Current SOTA multi-instance generation methods are typically adapter-based, requiring additional control modules trained on top of pre-trained models for layout and instance-attribute control. However, with the emergence of more powerful models like FLUX and SD3.5, these methods demand constant retraining and extensive resources.
✨ Our Solution: 3DIS We introduce a decoupled approach that only requires training a low-resolution Layout-to-Depth model to convert layouts into coarse-grained scene depth maps. By leveraging off-the-shelf pre-trained models from the community and industry, such as ControlNet and SAM2, we enable training-free controllable image generation on high-resolution models such as SDXL and FLUX.
🌟 Benefits of Our Decoupled Multi-Instance Generation:
1. Enhanced Control: By constructing scenes using depth maps in the first stage, the model focuses on coarse-grained scene layout, improving control over instance placement.
2. Flexibility & Preservation: The second stage employs training-free rendering methods (sketched below), allowing seamless integration with various models (e.g., fine-tuned weights, LoRA) while maintaining the generative capabilities of pre-trained models.
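To make the training-free second stage concrete, here is a minimal sketch of depth-conditioned rendering with off-the-shelf components (a depth ControlNet on SDXL via diffusers). This is not the 3DIS renderer itself; the model IDs, prompt, and depth-map file name are illustrative assumptions, and it only shows how a stage-1 scene depth map can drive a high-resolution pre-trained model without any extra training.

```python
# Sketch of a training-free, depth-conditioned rendering pass (stage 2 analogue).
# NOT the 3DIS renderer; uses a generic depth ControlNet on SDXL via diffusers.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# Off-the-shelf depth ControlNet plus the SDXL base model.
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Coarse scene depth map produced by a stage-1 Layout-to-Depth model
# (hypothetical file name used here for illustration).
depth_map = load_image("scene_depth.png")

image = pipe(
    prompt="a cozy living room with a red sofa, a wooden table, and a floor lamp",
    image=depth_map,
    controlnet_conditioning_scale=0.7,
    num_inference_steps=30,
).images[0]
image.save("rendered_scene.png")
```

The actual 3DIS rendering stage goes further, handling per-instance attribute details on top of the shared depth layout; see the repository for the full pipeline.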
Join us in advancing Layout-to-Image Generation! Follow and star our repository to stay updated! ⭐
Started another experimental product training run for a client: FLUX DreamBooth / fine-tuning via the Kohya SS GUI. The GPU is an L40S and the batch size is 7. Config name: Batch_Size_7_48GB_GPU_46250MB_29.1_second_it_Tier_1.json