Yi Cui

onekq

AI & ML interests

Benchmarks, Code Generation Models

Recent Activity

updated a model 1 day ago
onekq/outputs
published a model 1 day ago
onekq/outputs
updated a collection 2 days ago
R1 Reproduction Works

Organizations

MLX Community · ONEKQ AI

onekq's activity

replied to their post 8 days ago
replied to their post 10 days ago

And their Python package too 😜

Having AI do the refactoring is a great idea, though. Switching your model from non-reasoning to reasoning will be a breaking change.

posted an update 10 days ago
o3-mini is slightly better than R1, but lags behind Claude. Sorry folks, no new SOTA 😕

But OpenAI definitely sets the fashion in API design. temperature and top_p are history now; reasoning_effort will be copied by other vendors.
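For illustration, here is a hedged sketch of that API-shape change (the payloads are illustrative, not live calls): a reasoning model takes a reasoning_effort knob where sampling knobs like temperature and top_p used to go.

```python
# Sketch of the request-shape change (illustrative payloads, not live calls).

# Classic chat-completion request: sampling knobs shape the output.
classic_request = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Write a binary search."}],
    "temperature": 0.2,  # sampling knobs
    "top_p": 0.9,
}

# Reasoning-model request: sampling knobs are gone; instead you budget
# how hard the model thinks.
reasoning_request = {
    "model": "o3-mini",
    "messages": [{"role": "user", "content": "Write a binary search."}],
    "reasoning_effort": "high",  # "low" | "medium" | "high"
}

# The keys that disappeared in the transition:
print(sorted(set(classic_request) - set(reasoning_request)))
# -> ['temperature', 'top_p']
```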

onekq-ai/WebApp1K-models-leaderboard
posted an update 11 days ago
Mistral Small 3 is SUPER fast and has the highest score among 20B+ models, but is still 11 points below Qwen 2.5 Coder 32B.

I believe specialty models are the future. The more you know what you want from a model, the better bang you can get for your buck. If Mistral scoped this small model to coding only, I'm confident they could beat Qwen.

One day my leaderboard will be dominated by smol models that excel at one thing, not monolithic ones costing $$$. And I'm looking forward to that.

onekq-ai/WebApp1K-models-leaderboard
replied to their post 14 days ago
posted an update 18 days ago
So πŸ‹DeepSeekπŸ‹ hits the mainstream media. But it has been a star in our little cult for at least 6 months. Its meteoric success is not overnight, but two years in the making.

To learn their history, just look at their 🤗 repo https://huggingface.co/deepseek-ai

* End of 2023, they launched their first model (pretrained by themselves), following the Llama 2 architecture
* June 2024, v2 (MoE architecture) surpassed Gemini 1.5, but stayed behind Mistral
* September 2024, v2.5 surpassed GPT-4o mini
* December 2024, v3 surpassed GPT-4o
* Now R1 has surpassed o1

Most importantly, if you think DeepSeek's success is singular and unrivaled, that's WRONG. The following models are also near or at the o1 bar.

* Minimax-01
* Kimi k1.5
* Doubao 1.5 pro
reacted to clem's post with 🔥 18 days ago
replied to their post 18 days ago

My conclusion is the same. The R1 paper already reported a lower success rate for the distilled models. This is not surprising, since we cannot expect the same outcomes from a much smaller model.

Here is the problem. The small models released by frontier labs are always generic, i.e. decent, but with lower performance than the flagship model on every benchmark. But we GPU deplorables often want a specialized model that is excellent at only one thing, hence the disappointment.

I guess we will have to help ourselves on this one. Distill an opinionated dataset from the flagship model into a small model of your choice, then hill-climb the benchmark you care about.

replied to their post 19 days ago

1000% agree.

Also, reasoning models sure spit out lots of tokens. The same benchmark costs 4x or 5x the money and time to run compared to regular LLMs. Exciting times for inference players.

Have you tried the distilled models of R1 (Qwen and Llama)?

replied to their post 20 days ago

+1

Also the velocity of progress. I have wanted to learn Monte Carlo Tree Search, process rewards, etc., but haven't had the time. I guess now I can skip them 🤗

posted an update 21 days ago
This is historic. 🎉

DeepSeek πŸ‹R1πŸ‹ surpassed OpenAI πŸ“o1πŸ“ on the dual leaderboard. What a year for the open source!

onekq-ai/WebApp1K-models-leaderboard
posted an update 22 days ago
πŸ‹DeepSeek πŸ‹ is the real OpenAI 😯
posted an update 28 days ago
posted an update about 2 months ago
πŸ‹ DeepSeek πŸ‹v3 achieves a solid 7 point jump than v2.5, surpassing GPT-4o, but is still behind πŸ“ o1 πŸ“and Claude 3.5.

onekq-ai/WebApp1K-models-leaderboard
posted an update 4 months ago
October version of Claude 3.5 lifts SOTA (set by its June version) by 7 points.
onekq-ai/WebApp1K-models-leaderboard

Closed-source models are widening the gap again.

Note: Our frontier leaderboard now uses double test scenarios because the single-scenario test suite has become saturated.
posted an update 4 months ago
I'm now working on finetuning coding models. If you are GPU-hungry like me, you will find quantized models very helpful. But quantization for finetuning and quantization for inference are different and incompatible, so I made two collections here.

Inference (GGUF, via Ollama, CPU is enough)
onekq-ai/ollama-ready-coding-models-67118c3cfa1af2cf04a926d6

Finetuning (Bitsandbytes, QLora, GPU is needed)
onekq-ai/qlora-ready-coding-models-67118771ce001b8f4cf946b2

Among quantized models, inference models are far more popular on HF than finetuning models. I use https://huggingface.co/QuantFactory to generate inference models (GGUF), and there are a few other choices.

But there hasn't been such a service for finetuning models. DIY isn't too hard, though. I made a few myself, and you can find the script in the model cards. If the original model is small enough, you can even do it on a free T4 (available via Google Colab).
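For concreteness, here is a minimal sketch of the finetuning-ready setup described above. This is not the author's actual script; it assumes the transformers, peft, and bitsandbytes packages, and the model name and LoRA hyperparameters are purely illustrative.

```python
# Hedged sketch of QLoRA-style finetuning prep with bitsandbytes.
# Assumes transformers, peft, and bitsandbytes are installed; the model
# name and hyperparameters are illustrative, not a recommendation.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization: the finetuning-oriented format, as opposed to
# GGUF, which targets inference engines like Ollama / llama.cpp.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,  # float16 keeps a free T4 happy
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-1.5B-Instruct",  # illustrative small coding model
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters are the only trainable weights; the 4-bit base is frozen.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules="all-linear", task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```

The key contrast with a GGUF export is that this model stays inside the PyTorch/PEFT training stack, so gradients can flow into the LoRA adapters while the quantized base weights stay frozen.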

If you know a (small) coding model worthy of quantization, please let me know and I'd love to add it to the collections.
reacted to fdaudens's post with 🔥 5 months ago
Exciting news in AI: Molmo, a groundbreaking family of open-source multimodal models, has just been announced! 🚀

Key points:
- Closes the gap with proprietary systems on benchmarks & human evals
- Trained on high-quality data (< 1M image-text pairs vs billions)
- Introduces pointing capability for rich interactions
- Fully open weights, data, and training code

The 72B model outperforms several proprietary systems, while the 1B model nearly matches GPT-4V. Small is indeed the new big in AI!

There's an interactive demo available using Molmo-7B-D. Definitely worth checking out to see its capabilities firsthand.

All model weights, data, and code will be released soon. This is a significant step towards truly open, cutting-edge multimodal AI.
The future of AI research and applications is looking brighter than ever! 🤖🖼️

👉 Demo: https://molmo.allenai.org/
👉 Models: allenai/molmo-66f379e6fe3b8ef090a8ca19

#AI #MachineLearning #OpenSource #ComputerVision
reacted to victor's post with 👍🤗 5 months ago
🙋 Calling all Hugging Face users! We want to hear from YOU!

What feature or improvement would make the biggest impact on Hugging Face?

Whether it's the Hub, better documentation, new integrations, or something completely different – we're all ears!

Your feedback shapes the future of Hugging Face. Drop your ideas in the comments below! 👇
reacted to their post with 🧠 5 months ago
Here is my latest study on OpenAI o1.
A Case Study of Web App Coding with OpenAI Reasoning Models (2409.13773)

I wrote an easy-to-read blog post to explain the findings.
https://huggingface.co/blog/onekq/daily-software-engineering-work-reasoning-models

INSTRUCTION FOLLOWING is the key.

100% instruction following + Reasoning = new SOTA

But if the model misses or misunderstands one instruction, it can perform far worse than non-reasoning models.