BritishWerewolf committed
Commit becee72 · Parent: 5227614

Add ONNX model (fp32).

Files changed (4)
  1. README.md +53 -0
  2. config.json +20 -0
  3. onnx/model.onnx +3 -0
  4. preprocessor_config.json +27 -0
README.md CHANGED
@@ -1,3 +1,56 @@
  ---
+ library_name: transformers
+ pipeline_tag: image-segmentation
+ tags:
+ - image-segmentation
+ - mask-generation
+ - transformers.js
  license: apache-2.0
+ language:
+ - en
  ---
+ # U-2-Netp
+
+ ## Model Description
+ U-2-Netp is a lightweight version of the U2Net model, designed for efficient and effective image segmentation, especially mask generation. It retains the core architectural design of U2Net while being optimized for faster inference and reduced memory usage.
+
+ ## Usage
+ Perform mask generation with `BritishWerewolf/U-2-Netp`.
+
+ ### Example
+ ```javascript
+ import { AutoModel, AutoProcessor, RawImage } from '@huggingface/transformers';
+
+ const img_url = 'https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png';
+ const image = await RawImage.read(img_url);
+
+ const processor = await AutoProcessor.from_pretrained('BritishWerewolf/U-2-Netp');
+ const processed = await processor(image);
+
+ const model = await AutoModel.from_pretrained('BritishWerewolf/U-2-Netp', {
+     dtype: 'fp32',
+ });
+
+ const output = await model({ input: processed.pixel_values });
+ // {
+ //   mask: Tensor {
+ //     dims: [ 1, 320, 320 ],
+ //     type: 'uint8',
+ //     data: Uint8Array(102400) [ ... ],
+ //     size: 102400
+ //   }
+ // }
+ ```
+
+ ## Model Architecture
+ U-2-Netp is a simplified version of the original U2Net architecture, designed to be more lightweight while still achieving high segmentation performance. The model consists of several stages with down-sampling and up-sampling paths, using Residual U-blocks (RSU) for enhanced feature representation.
+
+ ### Inference
+ To use the model for inference, follow the example above. The `AutoProcessor` and `AutoModel` classes from `@huggingface/transformers` (Transformers.js) make it easy to load the processor and model.
+
+ ## Credits
+ * [`rembg`](https://github.com/danielgatis/rembg) for the ONNX model.
+ * The authors of the original U-2-Net model: https://github.com/xuebinqin/U-2-Net.
+
+ ## Licence
+ This model is licensed under the Apache License 2.0, matching the original U-2-Net model.
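The `mask` tensor returned above is a single-channel `uint8` map. As a minimal sketch of how one might use it (plain Node.js; `applyMask` is a hypothetical helper, not part of `@huggingface/transformers`), the mask values can be written into the alpha channel of RGBA pixel data to cut the subject out of the image:

```javascript
// Sketch: apply a single-channel uint8 mask (one value per pixel) as the
// alpha channel of RGBA pixel data. `applyMask` is illustrative only.
function applyMask(rgba, mask) {
  // rgba: Uint8Array of length width*height*4; mask: Uint8Array of width*height.
  if (rgba.length !== mask.length * 4) {
    throw new Error('mask and image dimensions do not match');
  }
  const out = Uint8Array.from(rgba); // copy; leave the input untouched
  for (let i = 0; i < mask.length; i++) {
    out[i * 4 + 3] = mask[i]; // 0 = fully transparent, 255 = fully opaque
  }
  return out;
}
```

The colour channels are left as-is; only the alpha channel is replaced, so the result can be encoded as a PNG with a transparent background.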
config.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "_name_or_path": "u2netp",
+   "model_type": "u2net",
+   "architectures": [
+     "U2NetModel"
+   ],
+   "input_name": "input.1",
+   "input_shape": [1, 3, 320, 320],
+   "output_composite": "1959",
+   "output_names": [
+     "1959",
+     "1960",
+     "1961",
+     "1962",
+     "1963",
+     "1964",
+     "1965"
+   ],
+   "output_shape": [1, 320, 320]
+ }
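`output_composite` names the fused map among the seven `output_names` (the remaining six are presumably the side outputs of the U2Net decoder stages). A minimal sketch of how a loader might select it (plain JavaScript; `compositeOutput` is illustrative, not an API of any library; the config values are copied from the file above):

```javascript
// Illustrative only: pick the composite (fused) map from an object mapping
// ONNX output name -> tensor, using fields copied from config.json.
const config = {
  output_composite: '1959',
  output_names: ['1959', '1960', '1961', '1962', '1963', '1964', '1965'],
};

function compositeOutput(outputs, cfg = config) {
  if (!(cfg.output_composite in outputs)) {
    throw new Error(`missing composite output "${cfg.output_composite}"`);
  }
  return outputs[cfg.output_composite];
}
```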
onnx/model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:309c8469258dda742793dce0ebea8e6dd393174f89934733ecc8b14c76f4ddd8
+ size 4574861
preprocessor_config.json ADDED
@@ -0,0 +1,27 @@
+ {
+   "processor_class": "U2NetProcessor",
+   "image_processor_type": "U2NetImageProcessor",
+   "do_convert_rgb": true,
+   "do_normalize": true,
+   "do_pad": true,
+   "do_rescale": true,
+   "do_resize": true,
+   "keep_aspect_ratio": true,
+   "image_mean": [
+     0.485,
+     0.456,
+     0.406
+   ],
+   "image_std": [
+     0.229,
+     0.224,
+     0.225
+   ],
+   "pad_size": {
+     "width": 320,
+     "height": 320
+   },
+   "size": {
+     "longest_edge": 320
+   }
+ }
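As a sketch of what `do_rescale` and `do_normalize` imply for a single RGB pixel (plain JavaScript; `normalizePixel` is illustrative, not a Transformers.js API), each value is divided by 255 and then shifted and scaled by the per-channel ImageNet mean and std listed above:

```javascript
// Illustrative only: the rescale (v / 255) and normalize ((v - mean) / std)
// steps described by preprocessor_config.json, applied to one RGB pixel.
// Mean/std values are copied from the file above.
const IMAGE_MEAN = [0.485, 0.456, 0.406];
const IMAGE_STD = [0.229, 0.224, 0.225];

function normalizePixel(rgb) {
  return rgb.map((v, c) => (v / 255 - IMAGE_MEAN[c]) / IMAGE_STD[c]);
}
```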