TNauen committed on
Commit 61f9e08 · verified · 1 Parent(s): 01ce0b4

Update README.md

Files changed (1)
  1. README.md +59 -15
README.md CHANGED
@@ -7,21 +7,61 @@ size_categories:
  - 1M<n<10M
  ---
 
- [![arXiv](https://img.shields.io/badge/arXiv-2503.09399-b31b1b?logo=arxiv)](https://arxiv.org/abs/2503.09399) [![Static Badge](https://img.shields.io/badge/GitHub-Repo-blue?logo=github)](https://github.com/tobna/ForAug)
 
- # ForAug/ForNet
 
  ![ForAug](images/foraug.png)
 
- This is the ForNet dataset from the paper [ForAug: Recombining Foregrounds and Backgrounds to Improve Vision Transformer Training with Bias Mitigation](https://www.arxiv.org/abs/2503.09399).
 
  ### Updates
 
- - [19.03.2025] We release the code to download and use [ForNet on GitHub](https://github.com/tobna/ForAug) :computer:
- - [19.03.2025] We release the patch files of ForNet on Huggingface :hugs:
- - [12.03.2025] We release the preprint of [ForAug on arXiv](https://www.arxiv.org/abs/2503.09399) :spiral_notepad:
 
- ## Using ForAug/ForNet
 
  ### Preliminaries
@@ -51,7 +91,7 @@ To be able to download ForNet, you will need the ImageNet dataset in the usual f
 
  To download and prepare the already-segmented ForNet dataset at `<data_path>`, follow these steps:
 
- #### 1. Clone the git repository and install the requirements
 
  ```
  git clone https://github.com/tobna/ForAug
@@ -75,6 +115,14 @@ python apply_patch.py -p <data_path> -in <in_path> -o <data_path>
 
  This will apply the diffs to ImageNet and store the results in the `<data_path>` folder. It will also delete the already-processed patch files (the ones downloaded in step 2). To keep the patch files, add the `--keep` flag.
 
  #### Optional: Zip the files without compression
 
  When dealing with a large cluster and dataset files that have to be sent over the network (i.e., the dataset is stored on a different server from the one used for processing), it is sometimes useful to avoid many small files and have fewer large ones instead.
@@ -134,14 +182,10 @@ help(ForNet.__init__)
  }
  ```
 
- ### Dataset Sources
- 
- - **Repository:** [GitHub](https://github.com/tobna/ForAug)
- - **Paper:** [arXiv](https://www.arxiv.org/abs/2503.09399)
- - **Project Page:** coming soon
- 
  ## ToDos
 
  - [x] release code to download and create ForNet
  - [x] release code to use ForNet for training and evaluation
- - [ ] integrate ForNet into Huggingface Datasets
 
  - 1M<n<10M
  ---
 
+ [![arXiv](https://img.shields.io/badge/arXiv-2503.09399-b31b1b?logo=arxiv)](https://arxiv.org/abs/2503.09399) [![Static Badge](https://img.shields.io/badge/Huggingface-Dataset-yellow?logo=huggingface)](https://huggingface.co/datasets/TNauen/ForNet)
 
+ # ForAug
 
  ![ForAug](images/foraug.png)
 
+ This is the public code repository for the paper [_ForAug: Recombining Foregrounds and Backgrounds to Improve Vision Transformer Training with Bias Mitigation_](https://www.arxiv.org/abs/2503.09399).
 
  ### Updates
 
+ - [24.03.2025] We have [integrated ForNet into Huggingface Datasets](#with--huggingface-datasets) for easy and convenient use 🤗 💫
+ - [19.03.2025] We release the code to download and use ForNet in this repo 💻
+ - [19.03.2025] We release the patch files of [ForNet on Huggingface](https://huggingface.co/datasets/TNauen/ForNet) 🤗
+ - [12.03.2025] We release the preprint of [ForAug on arXiv](https://www.arxiv.org/abs/2503.09399) 🗒️
 
+ # Using ForAug/ForNet
+ 
+ ## With 🤗 Huggingface Datasets
+ 
+ We have integrated ForNet into [🤗 Huggingface Datasets](https://huggingface.co/docs/datasets/index):
+ 
+ ```Python
+ from datasets import load_dataset
+ 
+ ds = load_dataset(
+     "TNauen/ForNet",
+     trust_remote_code=True,
+     split="train",
+ )
+ ```
+ 
+ ⚠️ You must be authenticated and have access to the `ILSVRC/imagenet-1k` dataset on the Hub, since it is used to apply the patches and obtain the foreground and background information.
+ 
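+ If you are not authenticated yet, one way to log in from a script is the standard `huggingface_hub` helper (a minimal sketch; any of the usual Hub authentication methods works):
+ 
+ ```Python
+ from huggingface_hub import login
+ 
+ # Prompts for an access token; it needs read access to ILSVRC/imagenet-1k.
+ login()
+ ```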
43
+ ⚠️ Be prepared to wait while the files are downloaded and the patches are applied. This will only happen the first time you load the dataset. By default, well use as many CPU cores as available on the system. To limit the number of cores used set the `MAX_WORKERS` environment variable.
44
+
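+ For example (an illustrative sketch; the variable name comes from the note above, and the value is arbitrary), you can set it before loading:
+ 
+ ```Python
+ import os
+ 
+ # Cap dataset preparation at 8 worker processes.
+ os.environ["MAX_WORKERS"] = "8"
+ 
+ from datasets import load_dataset
+ 
+ ds = load_dataset("TNauen/ForNet", trust_remote_code=True, split="train")
+ ```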
+ You can pass additional parameters to control the recombination phase (a combined usage sketch follows after this list):
+ 
+ - `background_combination`: Which backgrounds to combine with foregrounds. Options: `"orig", "same", "all"`.
+ - `fg_scale_jitter`: How much the size of the foreground may be changed (random ratio range). Example: `(0.1, 0.8)`.
+ - `pruning_ratio`: Prunes backgrounds with (foreground size / background size) $\geq$ <pruning_ratio>. Backgrounds from images that contain very large foreground objects are mostly computer generated and therefore relatively unnatural. Full dataset: `1.1`.
+ - `fg_size_mode`: How to determine the size of the foreground, based on the foreground sizes of the foreground and background images. Options: `"range", "min", "max", "mean"`.
+ - `fg_bates_n`: Bates parameter for the distribution of the foreground object's position. A value of 1 gives a uniform distribution; the higher the value, the more likely the object lands near the center. For `fg_bates_n = 0`, the object is always in the center.
+ - `mask_smoothing_sigma`: Sigma for the Gaussian blur of the mask edge.
+ - `rel_jut_out`: How far the foreground is allowed to jut out of the background (the protruding part is cut off).
+ - `orig_img_prob`: Probability of using the original image instead of the fg-bg recombinations. Options: `0.0`-`1.0`, `"linear", "revlinear", "cos"`.
+ 
+ For the `orig_img_prob` schedules to work, you need to set `ds.epochs` to the total number of epochs you want to train.
+ Before each epoch, set `ds.epoch` to the current epoch ($0 \leq$ `ds.epoch` $<$ `ds.epochs`).
+ 
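+ As a sketch of how these fit together (parameter names from the list above; the values are illustrative, and the keyword arguments are assumed to be forwarded to the dataset's loading script via `load_dataset`):
+ 
+ ```Python
+ from datasets import load_dataset
+ 
+ ds = load_dataset(
+     "TNauen/ForNet",
+     trust_remote_code=True,
+     split="train",
+     background_combination="all",  # recombine foregrounds with any background
+     fg_scale_jitter=(0.1, 0.8),    # random jitter of the foreground size
+     orig_img_prob="linear",        # scheduled probability of original images
+ )
+ 
+ # Needed for the orig_img_prob schedule:
+ ds.epochs = 300              # total number of training epochs
+ for epoch in range(ds.epochs):
+     ds.epoch = epoch         # 0 <= ds.epoch < ds.epochs
+     ...                      # iterate over ds and train
+ ```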
+ To recreate our evaluation metrics, you may set (see the example below):
+ 
+ - `fg_in_nonant`: Integer from 0 to 8. This will scale down the foreground and put it into the corresponding nonant (cell of a 3x3 grid) in the image.
+ - `fg_size_fact`: The foreground object is (additionally) scaled by this factor.
+ 
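+ For example (illustrative values; assuming the usual ImageNet-style `validation` split):
+ 
+ ```Python
+ from datasets import load_dataset
+ 
+ ds = load_dataset(
+     "TNauen/ForNet",
+     trust_remote_code=True,
+     split="validation",
+     fg_in_nonant=0,    # place the foreground in nonant 0 of the 3x3 grid
+     fg_size_fact=0.5,  # additionally scale the foreground to half its size
+ )
+ ```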
+ ## Local Installation
 
  ### Preliminaries
 
  To download and prepare the already-segmented ForNet dataset at `<data_path>`, follow these steps:
 
+ #### 1. Clone this repository and install the requirements
 
  ```
  git clone https://github.com/tobna/ForAug
 
  This will apply the diffs to ImageNet and store the results in the `<data_path>` folder. It will also delete the already-processed patch files (the ones downloaded in step 2). To keep the patch files, add the `--keep` flag.
 
+ #### 4. Validate the ForNet files
+ 
+ To validate that you have all required files, run
+ 
+ ```
+ python validate.py -f <data_path>
+ ```
+ 
  #### Optional: Zip the files without compression
 
  When dealing with a large cluster and dataset files that have to be sent over the network (i.e., the dataset is stored on a different server from the one used for processing), it is sometimes useful to avoid many small files and have fewer large ones instead.
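 
  As a generic illustration (not the repo's own tooling), an uncompressed archive can be built with Python's standard `zipfile` module:
 
  ```Python
  import zipfile
  from pathlib import Path
 
  # ZIP_STORED packs the files without compression: fewer, larger files,
  # and no CPU spent on (de)compression.
  data_path = Path("<data_path>")  # placeholder from the steps above
  with zipfile.ZipFile("fornet.zip", "w", compression=zipfile.ZIP_STORED) as zf:
      for f in data_path.rglob("*"):
          if f.is_file():
              zf.write(f, f.relative_to(data_path))
  ```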
 
  }
  ```
 
  ## ToDos
 
  - [x] release code to download and create ForNet
  - [x] release code to use ForNet for training and evaluation
+ - [x] integrate ForNet into Huggingface Datasets
+ - [ ] release code for the segmentation phase