Introducing @huggingface Open Deep-Research
In just 24 hours, we built an open-source agent that:
- Autonomously browses the web
- Searches, scrolls & extracts info
- Downloads & manipulates files
- Runs calculations on data
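Open Deep-Research builds on the smolagents library. Here's a minimal sketch of that kind of agent loop, assuming smolagents' stock search tool and Inference API model wrapper; the actual repo adds a text-based browser, file-inspection tools, and more, so treat this purely as an illustration:

```python
# Minimal sketch of a web-searching, code-running agent with smolagents.
# This is an illustration, not the full Open Deep-Research stack.
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],             # lets the agent search the web
    model=HfApiModel(),                         # any LLM served through the HF Inference API
    additional_authorized_imports=["pandas"],   # allow the agent to crunch numbers on fetched data
)

answer = agent.run(
    "Find the release year of the original Transformer paper and compute how many years ago that was."
)
print(answer)
```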
If you haven't seen it yet, we just released Inference Providers!
> 4 new serverless inference providers on the Hub
> Use your HF API key or your own provider key with all providers
> Chat with DeepSeek R1, V3, and more on the HF Hub
> We support SambaNova, Together AI, Replicate, and fal.ai
Best of all, we don't charge any markup on top of the providers' rates. Have you tried it out yet? HF Pro accounts get $2 of free usage for provider inference.
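If you'd rather hit the providers from code than the Hub UI, here's a minimal sketch with a recent huggingface_hub release; the model and prompt are just examples:

```python
import os
from huggingface_hub import InferenceClient

# Route the request through Hugging Face to the provider of your choice.
client = InferenceClient(
    provider="together",              # e.g. "sambanova", "replicate", "fal-ai", depending on the model
    api_key=os.environ["HF_TOKEN"],   # your HF token; usage is billed to your HF account at provider rates
)

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=[{"role": "user", "content": "Explain serverless inference in one sentence."}],
    max_tokens=200,
)
print(completion.choices[0].message.content)
```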
All the responses get saved in the cfahlgren1/react-code-instructions dataset. Hopefully we can build one of the biggest, highest-quality frontend datasets on the Hub.
It's 2025; you shouldn't be hand-writing SQL! This is a big step toward letting anyone do in-depth analysis on a dataset. Let us know what you think!
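The SQL console is backed by DuckDB running over the dataset's Parquet files, so you can also reproduce any generated query locally. A rough sketch (the glob path inside cfahlgren1/react-code-instructions is an assumption; check the dataset's file listing for the real layout):

```python
import duckdb

# Query the Hub dataset directly over hf:// paths -- no download step needed.
duckdb.sql("""
    SELECT COUNT(*) AS n_examples
    FROM 'hf://datasets/cfahlgren1/react-code-instructions/**/*.parquet'
""").show()
```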
observers - automatically log all OpenAI-compatible requests to a dataset
• supports any OpenAI-compatible endpoint
• supports DuckDB, Hugging Face Datasets, and Argilla as stores
> pip install observers
No complex framework. Just a few lines of code to start sending your traces somewhere. Let us know what you think! @davidberenstein1957 and I will continue iterating!
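For reference, here's a minimal sketch of the logging pattern from the README; the exact import path and default store may differ across versions, so treat it as an illustration rather than the canonical API:

```python
from openai import OpenAI
from observers.observers import wrap_openai  # import path as of the initial release

# Wrap any OpenAI-compatible client; every request/response pair is logged
# to the configured store (DuckDB, a Hub dataset, or Argilla).
client = wrap_openai(OpenAI())

client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, observers!"}],
)
```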
How green is your model? Introducing a new feature in the Comparator tool: Environmental Impact for responsible #LLM research! open-llm-leaderboard/comparator Now you can compare models not only by performance but also by their environmental footprint!
The Comparator calculates CO₂ emissions during evaluation and shows key model characteristics: evaluation score, number of parameters, architecture, precision, type... Make informed decisions about your model's impact on the planet and join the movement towards greener AI!
The cleaning process consists of:
- Joining the separate splits together and adding a split column
- Converting string messages into a list of structs
- Removing empty system prompts
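Roughly, with the `datasets` library (the repo id and the "messages"/"role"/"content" column names here are placeholders for illustration):

```python
import json
from datasets import load_dataset, concatenate_datasets

raw = load_dataset("user/some-chat-dataset")  # hypothetical repo id

# 1. Join the separate splits together, recording the origin in a "split" column.
merged = concatenate_datasets(
    [ds.add_column("split", [name] * len(ds)) for name, ds in raw.items()]
)

# 2. Convert string-encoded messages into a list of structs.
merged = merged.map(lambda row: {"messages": json.loads(row["messages"])})

# 3. Remove empty system prompts.
def drop_empty_system(row):
    row["messages"] = [
        m for m in row["messages"]
        if not (m["role"] == "system" and not m["content"].strip())
    ]
    return row

cleaned = merged.map(drop_empty_system)
```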
New feature of the Open LLM Leaderboard Comparator: now compare models with their base versions & derivatives (finetunes, adapters, etc.). Perfect for tracking how adjustments affect performance & seeing innovations in action. Dive deeper into the leaderboard!
Here's how to use it:
1. Select your model from the leaderboard.
2. Load its model tree.
3. Choose any base & derived models (adapters, finetunes, merges, quantizations) for comparison.
4. Press Load.
See side-by-side performance metrics instantly!
Ready to dive in? Try the Open LLM Leaderboard Comparator now! See how models stack up against their base versions and derivatives to understand fine-tuning and other adjustments. Easier model analysis for better insights! Check it out here: open-llm-leaderboard/comparator
When you come across an interesting dataset, you often wonder: Which topics frequently appear in these documents? What is this data really about?
Topic modeling helps answer these questions by identifying recurring themes within a collection of documents. This process enables quick and efficient exploratory data analysis.
I've been working on an app that leverages BERTopic, a flexible framework designed for topic modeling. Its modularity makes BERTopic powerful, allowing you to swap components for your preferred algorithms. It also handles large datasets efficiently by merging models with the BERTopic.merge_models approach.
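That merging pattern looks roughly like this; the chunking scheme and corpus are illustrative (the classic 20 Newsgroups set stands in for a large dataset):

```python
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups

docs = fetch_20newsgroups(subset="all", remove=("headers", "footers", "quotes")).data

# Fit a separate model per chunk, then merge; merge_models keeps the first
# model's topics and only adds topics that appear new in later models.
chunks = [docs[i:i + 5000] for i in range(0, len(docs), 5000)]
models = [BERTopic(min_topic_size=15).fit(chunk) for chunk in chunks]
merged_model = BERTopic.merge_models(models)

print(merged_model.get_topic_info().head())
```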
How do we make this work? Here's the stack we're using:
- Data Source → Hugging Face datasets with DuckDB for retrieval
- Text Embeddings → Sentence Transformers (all-MiniLM-L6-v2)
- Dimensionality Reduction → RAPIDS cuML UMAP for GPU-accelerated performance
- Clustering → RAPIDS cuML HDBSCAN for fast clustering
- Tokenization → CountVectorizer
- Representation Tuning → KeyBERTInspired + Hugging Face Inference Client with Meta-Llama-3-8B-Instruct
- Visualization → Datamapplot library
Check out the space and see how you can quickly generate topics from your dataset: datasets-topics/topics-generator
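Wired together in BERTopic, that stack looks roughly like this. Hyperparameters are illustrative, the RAPIDS cuML imports require a GPU environment, and the DuckDB path/column are placeholders for your own dataset:

```python
import duckdb
from bertopic import BERTopic
from bertopic.representation import KeyBERTInspired
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import CountVectorizer
from cuml.manifold import UMAP
from cuml.cluster import HDBSCAN

# Pull the text column straight from a Hub dataset with DuckDB (placeholder path and column).
docs = duckdb.sql(
    "SELECT text FROM 'hf://datasets/user/some-dataset/**/*.parquet' LIMIT 50000"
).df()["text"].tolist()

topic_model = BERTopic(
    embedding_model=SentenceTransformer("all-MiniLM-L6-v2"),
    umap_model=UMAP(n_components=5, n_neighbors=15, min_dist=0.0),
    hdbscan_model=HDBSCAN(min_cluster_size=25, gen_min_span_tree=True, prediction_data=True),
    vectorizer_model=CountVectorizer(stop_words="english"),
    representation_model=KeyBERTInspired(),
    verbose=True,
)

topics, probs = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head())
```

(The app additionally rewrites topic labels with an LLM via the Inference Client and plots the result with Datamapplot; those steps are left out here for brevity.)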
Dive into multi-model evaluations, pinpoint the best model for your needs, and explore insights across top open LLMs, all in one place. Ready to level up your model comparison game?