Any plans to fine-tune this medical VLM with the UCSC-VLAA/MedTrinity-25M dataset?

#1
by satheeshkola - opened

MedTrinity-25M is currently the largest publicly available multimodal medical dataset.
It provides the most enriched annotations, supporting a comprehensive range of multimodal tasks such as captioning and report generation, as well as vision-centric tasks like classification and segmentation. The dataset can be used for large-scale pre-training of multimodal medical AI models, contributing to the development of future foundation models in the medical domain.

Homepage: https://github.com/yunfeixie233/MedTrinity-25M
Paper: https://arxiv.org/abs/2408.02900
GitHub repo: https://github.com/UCSC-VLAA/MedTrinity-25M
Dataset: https://huggingface.co/datasets/UCSC-VLAA/MedTrinity-25M
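For anyone wanting to experiment before a full fine-tune, here is a minimal sketch of pulling a slice of the dataset with the Hugging Face `datasets` library. The split name and streaming usage are assumptions; check the dataset card for the exact configs and fields.

```python
# Minimal sketch (assumptions: "train" split exists and streaming is supported;
# see the dataset card for the actual config/split names and record fields).
from datasets import load_dataset

# Stream the dataset to avoid downloading all ~25M image-text pairs up front.
ds = load_dataset("UCSC-VLAA/MedTrinity-25M", split="train", streaming=True)

# Inspect one record: each entry should pair an image with its enriched annotation text.
sample = next(iter(ds))
print(sample.keys())
```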
