FAQ

SUBMISSIONS

My model requires trust_remote_code=True; can I submit it?

What about models of type X?

My model disappeared from all the queues; what happened?

What causes an evaluation failure?

How can I report an evaluation failure?


RESULTS

What kind of information can I find?

Why do models appear several times in the leaderboard?

What is this concept of “flagging”?

My model has been flagged improperly; what can I do?


HOW TO SEARCH FOR A MODEL

Search for models in the leaderboard by any of the following (a small matching sketch follows the list):

  1. Name, e.g., model_name
  2. Multiple names separated by ;, e.g., model_name1;model_name2
  3. License, prefixed with Hub License:, e.g., Hub License: MIT
  4. A combination of name and license in any order, e.g., model_name; Hub License: cc-by-sa-4.0
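
As a rough illustration of these search semantics, here is a minimal Python sketch that applies the same rules to a local list of models. The models data and the matches_query helper are hypothetical, not part of the leaderboard's code; they only mirror the behavior described above (terms split on ;, with the Hub License: prefix marking license filters).

```python
# Minimal sketch of the search semantics described above, applied to a
# local list of records. The `models` data and `matches_query` helper are
# hypothetical; they only mirror the documented query rules.

def matches_query(query: str, name: str, hub_license: str) -> bool:
    """Check a model against a leaderboard-style search query.

    The query is split on ";". Terms prefixed with "Hub License:" are
    compared against the model's license; every other term must appear
    in the model name. Term order is irrelevant.
    """
    for term in (t.strip() for t in query.split(";")):
        if not term:
            continue
        if term.lower().startswith("hub license:"):
            wanted = term[len("Hub License:"):].strip()
            if wanted.lower() != hub_license.lower():
                return False
        elif term.lower() not in name.lower():
            return False
    return True


# Hypothetical sample data to exercise the matcher.
models = [
    ("org/model_name1", "mit"),
    ("org/model_name2", "cc-by-sa-4.0"),
]

query = "model_name2; Hub License: cc-by-sa-4.0"
print([name for name, lic in models if matches_query(query, name, lic)])
# -> ['org/model_name2']
```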

EDITING SUBMISSIONS

I upgraded my model and want to re-submit; how can I do that?


OTHER

Why do you differentiate between pretrained, continuously pretrained, fine-tuned models, merges, etc.?

What should I use the leaderboard for?

Why don’t you display closed-source model scores?

I have an issue accessing the leaderboard through the Gradio API

I have another problem, help!
