Update src/about.py
src/about.py (+5, −1)

@@ -29,7 +29,8 @@ LINKS_AND_INFO = """
 <p><a href="https://hunyuan.tencent.com/" target="_blank">Hunyuan</a>, Tencent</p> <br>

 <a href="https://codegoat24.github.io/UnifiedReward/Pref-GRPO" target="_blank">π Homepage</a> |
-<a href="https://arxiv.org/pdf/2508.20751" target="_blank">π arXiv Paper</a>
+<a href="https://arxiv.org/pdf/2508.20751" target="_blank">π arXiv Paper</a> |
+<a href="https://huggingface.co/datasets/CodeGoat24/UniGenBench/tree/main">π Huggingface</a> |

 <a href="https://github.com/CodeGoat24/UniGenBench" target="_blank" rel="noopener noreferrer"><img alt="Code" src="https://img.shields.io/github/stars/CodeGoat24/UniGenBench.svg?style=social&label=Official"></a>

@@ -42,7 +43,10 @@ INTRODUCTION_TEXT = """

 π§ You can use the official [GitHub repo](https://github.com/CodeGoat24/UniGenBench) to evaluate your model on [UniGenBench](https://github.com/CodeGoat24/UniGenBench).

+π We release **all generated images from the T2I models** evaluated in our UniGenBench on [UniGenBench-Eval-Images](https://huggingface.co/datasets/CodeGoat24/UniGenBench-Eval-Images). Feel free to use any evaluation model that is convenient and suitable for you to assess and compare the performance of your models.
+
 π To add your own model to the leaderboard, please send an Email to [Yibin Wang](https://codegoat24.github.io/), then we will help with the evaluation and updating the leaderboard.
+
 """
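Since the new INTRODUCTION_TEXT points readers to the released evaluation images, a minimal sketch of pulling that dataset locally with the Hugging Face Hub client might look like the following; the repo id comes from the link added above, while the local directory name is just an illustrative choice, not part of this change.

from huggingface_hub import snapshot_download

# Download the released evaluation images (dataset repo linked in about.py).
# local_dir is an arbitrary name chosen for this sketch.
local_path = snapshot_download(
    repo_id="CodeGoat24/UniGenBench-Eval-Images",
    repo_type="dataset",
    local_dir="unigenbench_eval_images",
)
print(f"Images downloaded to: {local_path}")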