rawalkhirodkar committed on
Commit 95e7f33 · verified · 1 Parent(s): c349f67

Update model card for Sapiens

Files changed (1)
README.md +4 -4
README.md CHANGED
@@ -3,7 +3,7 @@ language: en
license: cc-by-nc-4.0
---

- # Sapiens-2b-torchscript
+ # Sapiens-2B-torchscript

## Model Card for Sapiens

@@ -12,12 +12,12 @@ Sapiens is a family of vision transformers pretrained on 300 million human image
## Model Details

### Model Description
- Sapiens-2b natively support 1K high-resolution inference and are extremely easy to adapt for individual tasks by simply fine-tuning models pretrained on over 300 million in-the-wild human images. The resulting models exhibit remarkable generalization to in-the-wild data, even when labeled data is scarce or entirely synthetic. Our simple model design also brings scalability - model performance across tasks improves as we scale the parameters from 0.3 to 2 billion. Sapiens consistently surpasses existing baselines across various human-centric benchmarks.
+ Sapiens-2B natively supports 1K high-resolution inference and is extremely easy to adapt for individual tasks by simply fine-tuning models pretrained on over 300 million in-the-wild human images. The resulting models exhibit remarkable generalization to in-the-wild data, even when labeled data is scarce or entirely synthetic. Our simple model design also brings scalability: model performance across tasks improves as we scale the parameters from 0.3 to 2 billion. Sapiens consistently surpasses existing baselines across various human-centric benchmarks.

- **Developed by:** Meta
- **Model type:** Vision Transformers
- **License:** Creative Commons Attribution-NonCommercial 4.0
- - **Model Size:** 2b
+ - **Model Size:** 2B
- **Task:** pretrain
- **Format:** torchscript
- **File:** sapiens_2b_epoch_660_torchscript.pt2
@@ -29,5 +29,5 @@ Sapiens-2b natively support 1K high-resolution inference and are extremely easy

## Uses

- Pretrained 2b model can be used for feature extraction, fine-tuning, or as a starting point for training new models.
+ The pretrained 2B model can be used for feature extraction, fine-tuning, or as a starting point for training new models.
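
As a quick illustration of the Uses entry above, here is a minimal sketch of loading the TorchScript export for feature extraction. The expected input resolution and preprocessing are assumptions for illustration, not values confirmed by this card; consult the Sapiens repository for the exact pipeline.

```python
import torch

# Minimal sketch: load the TorchScript file named in the model card.
# The file path is from the card; input size and the dummy input below
# are assumptions, not confirmed preprocessing.
model = torch.jit.load("sapiens_2b_epoch_660_torchscript.pt2", map_location="cpu")
model.eval()

# Dummy RGB batch at a 1K-ish resolution (exact expected H x W is an
# assumption; the card only states 1K high-resolution inference).
image = torch.randn(1, 3, 1024, 1024)

with torch.no_grad():
    features = model(image)  # pretrained checkpoint -> extracted features

print(features.shape if torch.is_tensor(features) else type(features))
```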