Social Post Explorers


AI & ML interests

None defined yet.

Recent Activity


Xenova 
posted an update 4 days ago
We did it. Kokoro TTS (v1.0) can now run 100% locally in your browser w/ WebGPU acceleration. Real-time text-to-speech without a server. ⚡️

Generate 10 seconds of speech in ~1 second for $0.

What will you build? 🔥
webml-community/kokoro-webgpu

The most difficult part was getting the model running in the first place, but the next steps are simple:
✂️ Implement sentence splitting, allowing for streamed responses
🌍 Multilingual support (only phonemization left)

Who wants to help?
tadeodonegana 
posted an update 7 days ago
At RooMix(dot)ai we’re looking for an expert in generative image models for a short consulting gig. Any recommendations?
ameerazam08 
posted an update 12 days ago
Abhaykoul 
posted an update 12 days ago
🔥 THE WAIT IS OVER... HAI-SER IS HERE! 🔥

Yo fam, this ain't just another AI drop— this is the FUTURE of emotional intelligence! 🚀

Introducing HAI-SER, powered by Structured Emotional Reasoning (SER), the next-level AI that doesn’t just understand your words—it feels you, analyzes your emotions, and helps you navigate life’s toughest moments. 💡

💥 What makes HAI-SER a game-changer?
🔹 Emotional Vibe Check – Gets the mood, energy, and what’s really going on 🎭
🔹 Mind-State Analysis – Breaks down your thoughts, beliefs, and patterns 🤯
🔹 Root Cause Deep-Dive – Unpacks the WHY behind your emotions 💡
🔹 Impact Check – Sees how it’s affecting your life and mental health 💔
🔹 Safety Check – Prioritizes your well-being and crisis management 🚨
🔹 Healing Game Plan – Custom strategies to help you bounce back 💪
🔹 Growth Potential – Turns struggles into opportunities for self-improvement 📈
🔹 How to Approach – Teaches you and others how to communicate and heal 🤝
🔹 Personalized Response – Not just generic advice—real talk, tailored to YOU 💯

No more robotic AI responses. No more surface-level advice. HAI-SER gets deep, analyzing emotions with precision and giving real, actionable support.

This ain’t just AI—this is your digital therapist, life coach, and hype squad all in one. Whether it’s mental health, career struggles, relationships, or personal growth, HAI-SER has your back.

🚀 The future of emotionally intelligent AI is HERE.
Are you ready? 🔥💯

HelpingAI/HAI-SER
sayakpaul 
posted an update 13 days ago
We have been cooking a couple of fine-tuning runs on CogVideoX with finetrainers, smol datasets, and LoRA to generate cool video effects like crushing, dissolving, etc.

We are also releasing a utility to extract a LoRA from a fully fine-tuned checkpoint. Tools like this have been around forever, but the quality on video models is nothing short of spectacular. Some links, plus a quick loading sketch, below:

* Models and datasets: https://huggingface.co/finetrainers
* finetrainers: https://github.com/a-r-r-o-w/finetrainers
* LoRA extraction: https://github.com/huggingface/diffusers/blob/main/scripts/extract_lora_from_model.py
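
Not an official example, just a minimal sketch of how one of these effect LoRAs could be loaded on top of CogVideoX with Diffusers; the LoRA repo id is a placeholder (pick any from the finetrainers org), and the settings are the standard CogVideoX-5b defaults:

import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
).to("cuda")

# Placeholder repo id: substitute one of the effect LoRAs from the finetrainers org
pipe.load_lora_weights("finetrainers/your-effect-lora", adapter_name="effect")

video = pipe(
    prompt="A red apple slowly dissolving into fine particles",
    num_frames=49,
    num_inference_steps=50,
    guidance_scale=6.0,
).frames[0]
export_to_video(video, "effect.mp4", fps=8)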
sayakpaul 
posted an update 15 days ago
We have authored a post to go over the state of video generation in the Diffusers ecosystem 🧨

We cover the supported models, the optimization knobs users can turn, fine-tuning, and more 🔥

HunyuanVideo in as little as 5-6 GB of VRAM; the sky is the limit 🌌 🤗
https://huggingface.co/blog/video_gen
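
For context, here is a rough sketch of the kind of memory-saving knobs the post talks about, applied to HunyuanVideo in Diffusers (the checkpoint id follows the Diffusers docs; this alone will not necessarily hit the 5-6 GB mark, which relies on additional techniques covered in the blog):

import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

model_id = "hunyuanvideo-community/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id, transformer=transformer, torch_dtype=torch.float16
)

# Trade a bit of speed for a lot of memory: offload idle submodules to CPU and tile the VAE
pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()

frames = pipe(
    prompt="A cat walks on the grass, realistic style",
    height=320,
    width=512,
    num_frames=61,
    num_inference_steps=30,
).frames[0]
export_to_video(frames, "hunyuan.mp4", fps=15)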
umarigan 
posted an update 16 days ago
**Extracting Reasoning Prompts with DeepSeek-R1: A Step Towards Better AI Reasoning**

Hi everyone! 👋

I’m excited to share a small but impactful project I’ve been working on, where I extracted **reasoning prompts** using the **DeepSeek-R1 model**. Reasoning prompts are a powerful way to understand how AI models arrive at their answers, and they can be used to train smaller, more efficient models to generate reasoning. Let me walk you through the process and explain why this is important.

---

#### **The Code: Extracting Reasoning Prompts**

Here’s the code I used to extract reasoning prompts from the openaccess-ai-collective/oo-gpt4-filtered dataset:

import os
import time

from datasets import load_dataset
from openai import OpenAI  # DeepSeek exposes an OpenAI-compatible API
from tqdm import tqdm

# Source dataset and DeepSeek client (API key read from the environment)
ds = load_dataset("openaccess-ai-collective/oo-gpt4-filtered", split="train")
client = OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"], base_url="https://api.deepseek.com")

reasoning_data = []

for example in tqdm(ds, desc="answering"):
    try:
        response = client.chat.completions.create(
            model="deepseek-reasoner",  # Using DeepSeek-R1 for reasoning
            messages=[
                {"role": "system", "content": example["system_prompt"]},
                {"role": "user", "content": example["question"]},
            ],
            stream=False,
            max_tokens=4096,
            temperature=0.7,
        )

        answer = response.choices[0].message.content
        reasoning = response.choices[0].message.reasoning_content  # R1's reasoning trace

        reasoning_example = {
            "id": example["id"],
            "question": example["question"],
            "answer": answer,
            "reasoning": reasoning,
        }

        reasoning_data.append(reasoning_example)
    except Exception as e:
        print(f"Error processing example: {e}")
        time.sleep(3)  # Back off briefly before continuing
        continue  # Skip the current example and move to the next one
Dataset: umarigan/deepseek-r1-reasoning-prompts
haritzpuerto 
posted an update 18 days ago
I just got my first ChatGPT review on ARR! 😅 Any advice on how to prove it's AI-generated? Thanks!
haritzpuerto 
posted an update 19 days ago
I'm excited to announce that my internship paper at Parameter Lab was accepted to Findings of #NAACL2025 🎉
TL;DR: Determining whether an LLM was trained on a single sentence might not be possible 😥, but it is possible for large enough amounts of tokens, such as long documents or collections of documents! 🤯
Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models (2411.00154)
🔗 https://github.com/parameterlab/mia-scaling
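
For intuition only, here is a generic loss-based membership signal aggregated over a long document; this is not the method from the paper, and the model name, input file, and decision threshold are placeholders:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder target model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def document_nll(text: str, max_length: int = 1024) -> float:
    """Average per-token negative log-likelihood of a document under the model."""
    ids = tok(text, return_tensors="pt", truncation=True, max_length=max_length).input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

# Lower NLL than a threshold calibrated on known non-member documents hints at membership;
# the signal becomes more reliable as more tokens are aggregated.
score = document_nll(open("long_document.txt").read())  # placeholder input
print("document-level NLL:", score)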
Xenova 
posted an update 26 days ago
Introducing Kokoro.js, a new JavaScript library for running Kokoro TTS, an 82 million parameter text-to-speech model, 100% locally in the browser w/ WASM. Powered by 🤗 Transformers.js. WebGPU support coming soon!
👉 npm i kokoro-js 👈

Try it out yourself: webml-community/kokoro-web
Link to models/samples: onnx-community/Kokoro-82M-ONNX

You can get started in just a few lines of code!
import { KokoroTTS } from "kokoro-js";

const tts = await KokoroTTS.from_pretrained(
  "onnx-community/Kokoro-82M-ONNX",
  { dtype: "q8" }, // fp32, fp16, q8, q4, q4f16
);

const text = "Life is like a box of chocolates. You never know what you're gonna get.";
const audio = await tts.generate(text,
  { voice: "af_sky" }, // See `tts.list_voices()`
);
audio.save("audio.wav");

Huge kudos to the Kokoro TTS community, especially taylorchu for the ONNX exports and Hexgrad for the amazing project! None of this would be possible without you all! 🤗

The model is also extremely resilient to quantization. The smallest variant is only 86 MB in size (down from the original 326 MB), with no noticeable difference in audio quality! 🤯
mlabonne 
posted an update 27 days ago
🆕 LLM Course 2025 edition!

I updated the LLM Scientist roadmap and added a ton of new information and references. It covers training, datasets, evaluation, quantization, and new trends like test-time compute scaling.

The LLM Course has been incredibly popular (41.3k stars!) and I've been touched to receive many, many messages about how it helped people in their careers.

I know how difficult this stuff can be, so I'm super proud of the impact it had. I want to keep updating it in 2025, especially with the LLM Engineer roadmap.

Thanks everyone, hope you'll enjoy it!

💻 LLM Course: https://huggingface.co/blog/mlabonne/llm-course
StephenGenusa 
posted an update 29 days ago
I have a Pro account and I am logged in. I duplicated a Space due to the error "You have exceeded your GPU quota". My account shows 0 GPU use, yet I am still unable to use it: "You have exceeded your GPU quota (60s requested vs. 44s left). Create a free account to get more daily usage quota." "Expert Support" turned out to be a pitch for consulting.
cschroeder 
posted an update about 1 month ago
🔥 Final Call and Deadline Extension: Survey on Data Annotation and Active Learning

Short summary: We need your support for a web survey in which we investigate how recent advancements in natural language processing, particularly LLMs, have influenced the need for labeled data in supervised machine learning — with a focus on, but not limited to, active learning. See the original post for details.

➡️ Extended Deadline: January 26th, 2025.
Please consider participating or sharing our survey! (If you have any experience with supervised learning in natural language processing, you are eligible to participate.)

Survey: https://bildungsportal.sachsen.de/umfragen/limesurvey/index.php/538271