parquet-converter committed on
Commit
fa0af74
1 Parent(s): a76b9fc

Update parquet files (step 56 of 249)

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/Seven-Days-Korean-Movie-Download-NEW.md +0 -84
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ahoura Bold Font Free A Balanced and Eye-Catching Font for Multiple Applications.md +0 -128
  3. spaces/1gistliPinn/ChatGPT4/Examples/Adobe Premiere Elements 11 Crack Only.md +0 -6
  4. spaces/1gistliPinn/ChatGPT4/Examples/Callofdutyblackops2setup1cbinindir.md +0 -6
  5. spaces/1gistliPinn/ChatGPT4/Examples/Celemony Melodyne Studio 4.2.3.1 Key Torrent Download 2019 [VERIFIED].md +0 -6
  6. spaces/1gistliPinn/ChatGPT4/Examples/DataCash230Namo Webeditor 9 Crack 27.md +0 -107
  7. spaces/1line/AutoGPT/autogpt/cli.py +0 -145
  8. spaces/1phancelerku/anime-remove-background/Download Cars The Ultimate Guide for Car Enthusiasts.md +0 -107
  9. spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/run.sh +0 -2
  10. spaces/A00001/bingothoo/next.config.js +0 -38
  11. spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/models/transformer_model.py +0 -265
  12. spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/tokenizer.py +0 -180
  13. spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb64-150e_deepfashion2_vest_dress_256x192.py +0 -172
  14. spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet101.py +0 -17
  15. spaces/Abhay834/my_genai_chatbot/README.md +0 -12
  16. spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/horizontal.py +0 -57
  17. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/toggleswitch.d.ts +0 -2
  18. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/GetChildrenWidth.js +0 -45
  19. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/roundrectanglecanvas/Factory.d.ts +0 -20
  20. spaces/AllAideas/SegmentacionVideo/app.py +0 -51
  21. spaces/Amrrs/DragGan-Inversion/torch_utils/persistence.py +0 -260
  22. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/euler_ancestral.md +0 -21
  23. spaces/Andy1621/uniformer_image_detection/exp/mask_rcnn_3x_ms_hybrid_small/config.py +0 -82
  24. spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/anchor_head.py +0 -751
  25. spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_480x480_40k_pascal_context.py +0 -2
  26. spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v3/lraspp_m-v3-d8_512x1024_320k_cityscapes.py +0 -11
  27. spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_769x769_40k_cityscapes.py +0 -9
  28. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/utils/__init__.py +0 -4
  29. spaces/Anonymous-sub/Rerender/gmflow_module/gmflow/trident_conv.py +0 -90
  30. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes.py +0 -329
  31. spaces/BeeMon/dreambooth-training/train_dreambooth.py +0 -889
  32. spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/vendored/requests/__init__.py +0 -10
  33. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/cli/autocompletion.py +0 -171
  34. spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/parallel_for.h +0 -178
  35. spaces/CVPR/lama-example/saicinpainting/evaluation/__init__.py +0 -33
  36. spaces/CVPR/transfiner/configs/common/models/panoptic_fpn.py +0 -20
  37. spaces/CatNika/Asian_Proxy/Dockerfile +0 -11
  38. spaces/ChandraMohanNayal/AutoGPT/autogpt/permanent_memory/sqlite3_store.py +0 -123
  39. spaces/CikeyQI/Yunzai/Yunzai/lib/listener/loader.js +0 -57
  40. spaces/CoPoBio/skin_cancer_risk_prediction/facealigner.py +0 -82
  41. spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/config/__init__.py +0 -2
  42. spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/__init__.py +0 -21
  43. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/certifi/core.py +0 -108
  44. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/color-90ab3aab.js +0 -2
  45. spaces/Datasculptor/StyleGAN-NADA/e4e/models/stylegan2/op/__init__.py +0 -0
  46. spaces/DeepLearning101/Speech-Quality-Inspection_Meta-Denoiser/denoiser/pretrained.py +0 -72
  47. spaces/Dragonnnext/charybdis/README.md +0 -9
  48. spaces/ElainaFanBoy/MusicGen/audiocraft/data/audio.py +0 -215
  49. spaces/EronSamez/RVC_HFmeu/go-tensorboard.bat +0 -2
  50. spaces/Fakermiya/Nsfw-Sfw_Classifier/README.md +0 -148
spaces/1acneusushi/gradio-2dmoleculeeditor/Seven-Days-Korean-Movie-Download-NEW.md DELETED
@@ -1,84 +0,0 @@
1
- ## Seven Days Korean Movie Download
2
-
3
-
4
-
5
-
6
-
7
- ![Seven Days Korean Movie Download NEW!](https://pic2.iqiyipic.com/image/20210603/0d/f7/v_159396427_m_601_zh-CN_m1_260_360.jpg)
8
-
9
-
10
-
11
-
12
-
13
- **Click Here ->>> [https://www.google.com/url?q=https%3A%2F%2Furluss.com%2F2txKQs&sa=D&sntz=1&usg=AOvVaw3fc6\_OWnNAEWxloP1aXB2q](https://www.google.com/url?q=https%3A%2F%2Furluss.com%2F2txKQs&sa=D&sntz=1&usg=AOvVaw3fc6\_OWnNAEWxloP1aXB2q)**
14
-
15
-
16
-
17
-
18
-
19
-
20
-
21
-
22
-
23
-
24
-
25
-
26
-
27
- # Seven Days Korean Movie Download: A Gripping Crime Thriller Starring Yunjin Kim
28
-
29
-
30
-
31
- If you are looking for a suspenseful and captivating movie to watch, you might want to check out Seven Days, a 2007 South Korean crime thriller film directed by Won Shin-yun, starring Yunjin Kim and Park Hee-soon. The film had 2,107,849 admissions nationwide and was the 9th most-attended domestic film of 2007. [1] It also won several awards, including Best Actress for Yunjin Kim and Best Supporting Actor for Park Hee-soon at the Grand Bell Awards and the Korean Film Awards. [2]
32
-
33
-
34
-
35
- The plot of Seven Days revolves around Yoo Ji-yeon (Yunjin Kim), a prominent lawyer who has never lost a case. One day, her daughter is kidnapped by a mysterious man who demands that she defend a five-time convicted felon who is appealing his conviction for rape and murder. Ji-yeon has only seven days before his trial ends to prove his innocence and save her daughter. Along the way, she uncovers a web of corruption, conspiracy and secrets that put her life and career in danger.
36
-
37
-
38
-
39
- Seven Days is a fast-paced and thrilling movie that will keep you on the edge of your seat. The film boasts of excellent performances by the lead actors, especially Yunjin Kim, who portrays the desperate and determined mother with great skill and emotion. The film also features impressive cinematography, editing, music and sound effects that enhance the mood and tension of the story. The film has been praised by critics and audiences alike for its clever plot twists, realistic characters and gripping action scenes. [3]
40
-
41
-
42
-
43
- If you want to watch Seven Days online, you can find it on iQ.com, a streaming platform that offers a variety of Asian movies and dramas with English subtitles. You can also download the movie to watch offline on your device. To access iQ.com, you need to register for a free account and verify your email address. You can then enjoy watching Seven Days and other amazing content on iQ.com. [4]
44
-
45
-
46
-
47
- Don't miss this opportunity to watch Seven Days Korean movie download online for free on iQ.com. You will not regret it!
48
-
49
-
50
-
51
- [1] https://en.wikipedia.org/wiki/Seven\_Days\_(2007\_film)
52
-
53
- [2] https://www.imdb.com/title/tt0997229/awards
54
-
55
- [3] https://www.imdb.com/title/tt0997229/reviews
56
-
57
- [4] https://www.iq.com/album/seven-days-2007-bmk341bglo?lang=en\_us
58
-
59
-
60
-
61
- Here is the continuation of the article:
62
-
63
-
64
-
65
- Seven Days is not only a thrilling movie, but also a meaningful one. It explores the themes of justice, morality, family and sacrifice. It raises questions about how far one would go to save a loved one, and what price one would pay for doing so. It also shows the corruption and injustice that exist in the legal system and the society. It challenges the viewers to think about their own values and choices in difficult situations.
66
-
67
-
68
-
69
- The film has also been remade in Bollywood as Jazbaa, starring Aishwarya Rai Bachchan and Irrfan Khan. The remake follows the same plot as the original, but with some changes to suit the Indian context and audience. The remake was released in 2015 and received mixed reviews from critics and viewers. Some praised the performances and the direction, while others criticized the screenplay and the music. [5]
70
-
71
-
72
-
73
- Whether you watch the original or the remake, Seven Days is a movie that will not disappoint you. It is a movie that will keep you hooked from start to finish. It is a movie that will make you feel and think. It is a movie that you should not miss.
74
-
75
-
76
-
77
- [5] https://en.wikipedia.org/wiki/Jazbaa
78
-
79
- dfd1c89656
80
-
81
-
82
-
83
-
84
-
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ahoura Bold Font Free A Balanced and Eye-Catching Font for Multiple Applications.md DELETED
@@ -1,128 +0,0 @@
1
-
2
- <h1>Ahoura Bold Font Free: A Modern and Elegant Arabic Typeface</h1>
3
- <p>If you are looking for a font that can combine modernity and elegance, simplicity and sophistication, clarity and beauty, then you might want to check out <b>Ahoura Bold Font</b>. This font is a unique and innovative Arabic typeface that was designed by Naghi Naghashian, a renowned Iranian typographer and graphic designer. In this article, we will explore what makes Ahoura Bold Font so special, how you can benefit from using it, and how you can download and use it for free.</p>
4
- <h2>Ahoura Bold Font Free</h2><br /><p><b><b>Download</b> &#187;&#187;&#187; <a href="https://byltly.com/2uKz3F">https://byltly.com/2uKz3F</a></b></p><br /><br />
5
- <h2>The Design and Features of Ahoura Bold Font</h2>
6
- <h3>The Inspiration and Innovation behind Ahoura Bold Font</h3>
7
- <p>Ahoura Bold Font is not just another Arabic font. It is a result of careful research and analysis on Arabic characters and their structure, as well as a contribution to the modernization of Arabic typography. According to the designer, Naghi Naghashian, Ahoura Bold Font was created with today's ever-changing technology in mind, without compromising the calligraphic tradition and the cultural identity of Arabic script. He says:</p>
8
- <blockquote>"The Ahoura innovation is a contribution to modernisation of Arabic typography; gives the Arabic font letters real typographic arrangement and provides for more typographic flexibility. This step was necessary after more than two hundred years of relative stagnation in Arabic font design."</blockquote>
9
- <p>As such, Ahoura Bold Font is a low-contrast neo-geometric sans serif font that is defined by minimalism, geometry, and purity of form. It has a balanced width, generous x-height, and short ascenders and descenders, giving it a simple and clean look. It also uses the highest degree of geometric clarity along with the necessary amount of calligraphic references, creating a harmonious balance between contemporary aesthetics and traditional elegance.</p>
10
- <h3>The Styles and Weights of Ahoura Bold Font</h3>
11
- <p>Ahoura Bold Font is part of the Ahoura font family, which consists of six styles and three weights. The styles are normal and italic, while the weights are light, regular, and bold. Each style has its own character and mood, but they all share the same design principles and quality. Here are some examples of how each style looks like:</p>
12
- <table>
13
- <tr><td><b>Style</b></td><td><b>Example</b></td></tr>
14
- <tr><td>Ahoura Light</td><td><img src="https://befonts.com/wp-content/uploads/2021/05/Hauora-Light.otf_.png" alt="Ahoura Light"></td></tr>
15
- <tr><td>Ahoura Light Italic</td><td><img src="https://befonts.com/wp-content/uploads/2021/05/Hauora-Light-Italic.otf_.png" alt="Ahoura Light Italic"></td></tr>
16
- <tr><td>Ahoura Regular</td><td><img src="https://befonts.com/wp-content/uploads/2021/05/Hauora-Regular.otf_.png" alt="Ahoura Regular"></td></tr>
17
- <tr><td>Ahoura Italic</td><td><img src="https://befonts.com/wp-content/uploads/2021/05/Hauora-Italic.otf_.png" alt="Ahoura Italic"></td></tr>
18
- <tr><td>Ahoura Bold</td><td><img src="https://befonts.com/wp-content/uploads/2021/05/Hauora-Bold.otf_.png" alt="Ahoura Bold"></td></tr>
19
- <tr><td>Ahoura Bold Italic</td><td><img src="https://befonts.com/wp-content/uploads/2021/05/Hauora-Bold-Italic.otf_.png" alt="Ahoura Bold Italic"></td></tr>
20
- </table>
21
- <h3>The OpenType Features and Language Support of Ahoura Bold Font</h3>
22
- <p>Ahoura Bold Font is not only beautiful but also functional. It comes with various OpenType features that enhance its typographic performance and flexibility. Some of these features are:</p>
23
- <ul>
24
- <li>Ligatures: These are special characters that are formed by combining two or more letters into one glyph. For example, <img src="https://fonts.do/images/fontsdo-ligatures.png" alt="Ligatures">.</li>
25
- <li>Contextual Alternates: These are alternative forms of letters that change depending on their position or context in a word or sentence. For example, <img src="https://fonts.do/images/fontsdo-contextual-alternates.png" alt="Contextual Alternates">.</li>
26
- <li>Stylistic Sets: These are sets of alternative forms of letters that can be applied to create different stylistic effects or variations. For example, <img src="https://fonts.do/images/fontsdo-stylistic-sets.png" alt="Stylistic Sets">.</li>
27
- <li>Swashes: These are decorative extensions or flourishes that can be added to some letters to create more dynamic and expressive typography. For example, <img src="https://fonts.do/images/fontsdo-swashes.png" alt="Swashes">.</li>
28
- <li>Numerals: These are numbers that can be displayed in different formats or styles. For example, proportional or tabular, lining or old-style, Arabic or Persian.</li>
29
- </ul>
30
- <p>In addition to these features, Ahoura Bold Font also supports multiple languages that use Arabic script, such as Arabic, Persian, Urdu, Kurdish, Pashto, Sindhi, Balochi, Uyghur, Kazakh, Kyrgyz, Tajik, Turkmen, Uzbek, etc.</p>
31
- <h2>The Benefits and Applications of Ahoura Bold Font</h2>
32
- <h3>The Legibility and Versatility of Ahoura Bold Font</h3>
33
- <p>One of the main benefits of using Ahoura Bold Font is its legibility. This font is designed to be easily readable not only in large sizes but also in small sizes. It is also suitable for various applications such as print or digital media. Whether you want to use it for headlines or body text, logos or posters, websites or apps, books or magazines, Ahoura Bold Font can handle them all. Moreover, this font can be artificially obliqued or skewed with software tools such as InDesign or Illustrator without losing its quality or effect.</p>
34
- <h3>The Aesthetic and Cultural Appeal of Ahoura Bold Font</h3>
35
- <p>Another benefit of using Ahoura Bold Font is its aesthetic appeal. This font has a unique and distinctive character that can make your typography stand out from the crowd. It can also convey a sense of modernity and elegance that can match your design style or theme. Furthermore, this font has a cultural appeal that can reflect your identity or message. By using this font, you can show your respect for the Arabic script tradition while also embracing the contemporary trends in typography.</p>
36
- <h3>The Compatibility and Accessibility of Ahoura Bold Font</h3>
37
- <p>A final benefit of using Ahoura Bold Font is its compatibility and accessibility. This font is compatible with most software applications that support OpenType fonts such as Microsoft Word Continuing the article: <h2>How to Download and Use Ahoura Bold Font for Free</h2>
38
- <h3>The Sources and Licenses of Ahoura Bold Font</h3>
39
- <p>If you are interested in downloading and using Ahoura Bold Font for free, you might be wondering where to find it and what are the terms and conditions of using it. Well, there are several sources where you can download Ahoura Bold Font for free, such as:</p>
40
- <ul>
41
- <li><b>Fonts.do</b>: This is a website that offers thousands of free fonts for personal and commercial use. You can download Ahoura Bold Font from this link: </li>
42
- <li><b>Befonts.com</b>: This is another website that provides free fonts for various purposes. You can download Ahoura Bold Font from this link: </li>
43
- <li><b>Fontspace.com</b>: This is a website that hosts over 90,000 free fonts from independent designers. You can download Ahoura Bold Font from this link: </li>
44
- </ul>
45
- <p>However, before you download and use Ahoura Bold Font for free, you should be aware of the licenses and restrictions that apply to it. According to the designer, Naghi Naghashian, Ahoura Bold Font is free for personal use only. This means that you can use it for your own projects or hobbies, but not for any commercial or professional purposes. If you want to use Ahoura Bold Font for commercial or professional purposes, you need to purchase a license from the designer's website: </p>
46
- <p>Ahoura Bold Typeface Free Download<br />
47
- How to Install Ahoura Bold Font for Free<br />
48
- Ahoura Bold Font Free Alternative<br />
49
- Ahoura Bold Font Free License<br />
50
- Ahoura Bold Font Free for Commercial Use<br />
51
- Ahoura Bold Font Free for Personal Use<br />
52
- Ahoura Bold Font Free for Web Design<br />
53
- Ahoura Bold Font Free for Logo Design<br />
54
- Ahoura Bold Font Free for Print Design<br />
55
- Ahoura Bold Font Free for Branding<br />
56
- Ahoura Bold Font Free for Typography<br />
57
- Ahoura Bold Font Free for Poster Design<br />
58
- Ahoura Bold Font Free for Book Cover Design<br />
59
- Ahoura Bold Font Free for Magazine Design<br />
60
- Ahoura Bold Font Free for Flyer Design<br />
61
- Ahoura Bold Font Free for Brochure Design<br />
62
- Ahoura Bold Font Free for Business Card Design<br />
63
- Ahoura Bold Font Free for Invitation Design<br />
64
- Ahoura Bold Font Free for T-Shirt Design<br />
65
- Ahoura Bold Font Free for Packaging Design<br />
66
- Ahoura Bold Font Free for Social Media Design<br />
67
- Ahoura Bold Font Free for Video Editing<br />
68
- Ahoura Bold Font Free for Animation<br />
69
- Ahoura Bold Font Free for Game Development<br />
70
- Ahoura Bold Font Free for App Development<br />
71
- Ahoura Bold Font Free for Website Development<br />
72
- Ahoura Bold Font Free Preview Online<br />
73
- Ahoura Bold Font Free Sample Text<br />
74
- Ahoura Bold Font Free Characters List<br />
75
- Ahoura Bold Font Free Glyphs List<br />
76
- Ahoura Bold Font Free Symbols List<br />
77
- Ahoura Bold Font Free Numbers List<br />
78
- Ahoura Bold Font Free Punctuation List<br />
79
- Ahoura Bold Font Free Accents List<br />
80
- Ahoura Bold Font Free Ligatures List<br />
81
- Ahoura Bold Font Free Swashes List<br />
82
- Ahoura Bold Font Free Stylistic Alternates List<br />
83
- Ahoura Bold Font Free Contextual Alternates List<br />
84
- Ahoura Bold Font Free Multilingual Support List<br />
85
- Ahoura Bold Font Free Unicode Range List<br />
86
- How to Use Ahoura Bold Font in Photoshop<br />
87
- How to Use Ahoura Bold Font in Illustrator<br />
88
- How to Use Ahoura Bold Font in InDesign<br />
89
- How to Use Ahoura Bold Font in Word<br />
90
- How to Use Ahoura Bold Font in PowerPoint<br />
91
- How to Use Ahoura Bold Font in Excel<br />
92
- How to Use Ahoura Bold Font in Google Docs<br />
93
- How to Use Ahoura Bold Font in Google Slides<br />
94
- How to Use Ahoura Bold Font in Canva<br />
95
- How to Use Ahoura Bold Font in Figma</p>
96
- <h3>The Installation and Usage of Ahoura Bold Font</h3>
97
- <p>After you have downloaded Ahoura Bold Font for free, you need to install it on your computer so that you can use it with your software applications. The installation process may vary depending on your operating system, but here are some general steps that you can follow:</p>
98
- <ol>
99
- <li>Extract the font files from the .zip folder that you have downloaded.</li>
100
- <li>Right-click on the font files that you want to install and click Install.</li>
101
- <li>If you are prompted to allow the program to make changes to your computer, click Yes.</li>
102
- <li>Wait for the installation to complete.</li>
103
- <li>Open your software application and look for Ahoura Bold Font in the font list.</li>
104
- </ol>
105
- <p>If you need more detailed instructions on how to install fonts on your computer, you can refer to this article: </p>
106
- <h3>The Tips and Tricks for Optimizing Ahoura Bold Font</h3>
107
- <p>Now that you have installed Ahoura Bold Font on your computer, you might want to know how to optimize it for your design projects. Here are some tips and tricks that you can use to make the most out of this font:</p>
108
- <ul>
109
- <li>Use the OpenType features of Ahoura Bold Font to create different effects or variations. You can access these features through your software application's menu or panel. For example, in Microsoft Word, you can go to Home > Font > Advanced > OpenType Features.</li>
110
- <li>Use the italic style of Ahoura Bold Font to create a more dynamic and expressive typography. This is the first real italic Arabic typeface known until now and it can add more movement and energy to your text.</li>
111
- <li>Use the bold weight of Ahoura Bold Font to create a strong and confident typography. This weight can emphasize your message and attract attention.</li>
112
- <li>Use the variable font option of Ahoura Bold Font to adjust the weight and width of the font according to your preference. You can do this by using a slider or a numeric value in your software application's menu or panel.</li>
113
- <li>Use Ahoura Bold Font with other fonts that complement its style and mood. For example, you can pair it with a sans serif Latin font such as Helvetica or Arial for a modern and minimalist look.</li>
114
- </ul>
115
- <h2>Conclusion</h2>
116
- <p>Ahoura Bold Font is a modern and elegant Arabic typeface that can enhance your typography and design projects. It has a unique and innovative design that combines geometry and calligraphy, simplicity and sophistication, clarity and beauty. It also has various features and options that make it flexible and versatile. Moreover, it supports multiple languages that use Arabic script, making it suitable for different audiences and contexts. If you want to download and use Ahoura Bold Font for free, you can find it on several websites that offer free fonts for personal use. However, if you want to use it for commercial or professional purposes, you need to purchase a license from the designer's website. To install and use Ahoura Bold Font on your computer, you need to follow some simple steps that may vary depending on your operating system. To optimize Ahoura Bold Font for your design projects, you need to use its OpenType features, styles, weights, variable font option, and font pairing suggestions.</p>
117
- <p>We hope that this article has helped you learn more about Ahoura Bold Font and how to download and use it for free. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!</p>
118
- <h2>Frequently Asked Questions</h2>
119
- <ol>
120
- <li><b>What is Ahoura Bold Font?</b><br>Ahoura Bold Font is a unique and innovative Arabic typeface that was designed by Naghi Naghashian, a renowned Iranian typographer and graphic designer.</li>
121
- <li><b>Why should I use Ahoura Bold Font?</b><br>You should use Ahoura Bold Font because it is a modern and elegant font that can combine geometry and calligraphy, simplicity and sophistication, clarity and beauty. It also has various features and options that make it flexible and versatile.</li>
122
- <li><b>Where can I download Ahoura Bold Font for free?</b><br>You can download Ahoura Bold Font for free from several websites that offer free fonts for personal use, such as Fonts.do, Befonts.com, or Fontspace.com.</li>
123
- <li><b>How can I install Ahoura Bold Font on my computer?</b><br>You can install Ahoura Bold Font on your computer by extracting the font files from the .zip folder that you have downloaded, right-clicking on the font files that you want to install and clicking Install, clicking Yes if prompted to allow changes to your computer, waiting for the installation to complete, and opening your software application and looking for Ahoura Bold Font in the font list.</li>
124
- <li><b>How can I optimize Ahoura Bold Font for my design projects?</b><br>You can optimize Ahoura Bold Font for your design projects by using its OpenType features, Continuing the article: styles, weights, variable font option, and font pairing suggestions. For example, you can use the italic style to create a more dynamic and expressive typography, use the bold weight to create a strong and confident typography, use the variable font option to adjust the weight and width of the font according to your preference, and use Ahoura Bold Font with other fonts that complement its style and mood.</li>
125
- </ol>
126
- </p> 0a6ba089eb<br />
127
- <br />
128
- <br />
 
spaces/1gistliPinn/ChatGPT4/Examples/Adobe Premiere Elements 11 Crack Only.md DELETED
@@ -1,6 +0,0 @@
- <h2>adobe premiere elements 11 crack only</h2><br /><p><b><b>Download File</b> &#9734; <a href="https://imgfil.com/2uxYlA">https://imgfil.com/2uxYlA</a></b></p><br /><br />
- <br />
- Steinberg cubase 4 crack download free adobe premiere pro cs5 serial key dragon crack ... Adobe Photoshop Elements 2020 Crack is also a fantastic ... number for adobe photoshop elements 11. ... Windows [7/ 8/ 8.1]*/ 10 Only flavor of 64-bit ... 4d29de3e1b<br />
- <br />
- <br />
- <p></p>
 
spaces/1gistliPinn/ChatGPT4/Examples/Callofdutyblackops2setup1cbinindir.md DELETED
@@ -1,6 +0,0 @@
- <h2>Callofdutyblackops2setup1cbinindir</h2><br /><p><b><b>Download Zip</b> - <a href="https://imgfil.com/2uxYIU">https://imgfil.com/2uxYIU</a></b></p><br /><br />
-
- ... Linux Serial Torrent x86 x64 Tags: activation for Ipi Mocap Studio 3. 50e0b7e615. Intro Video Maker Apk Mod Unlock All · Callofdutyblackops2setup1cbinindir 4d29de3e1b<br />
- <br />
- <br />
- <p></p>
 
spaces/1gistliPinn/ChatGPT4/Examples/Celemony Melodyne Studio 4.2.3.1 Key Torrent Download 2019 [VERIFIED].md DELETED
@@ -1,6 +0,0 @@
- <h2>Celemony Melodyne Studio 4.2.3.1 Key Torrent Download 2019</h2><br /><p><b><b>Download Zip</b> --->>> <a href="https://imgfil.com/2uxXhV">https://imgfil.com/2uxXhV</a></b></p><br /><br />
-
- Melodyne 3.2 Keygen free full Torrent download, Melodyne 3.2 Keygen ... are good at using technology Celemony Melodyne Studio 4.2.3.1 Key is a real joy and a ... on November 4, 2019 November 4, 2019 Author Cracked Key 0 Melodyne 4 ... 1fdad05405<br />
- <br />
- <br />
- <p></p>
 
spaces/1gistliPinn/ChatGPT4/Examples/DataCash230Namo Webeditor 9 Crack 27.md DELETED
@@ -1,107 +0,0 @@
1
-
2
- <h1>DataCash230Namo Webeditor 9 Crack 27: What You Need to Know</h1>
3
- <p>If you are looking for a powerful and easy-to-use visual HTML editor, you might have heard of DataCash230Namo Webeditor 9. This software allows you to create and edit web pages with drag-and-drop features, templates, widgets, and more. But what if you want to use it without paying for a license? That's where DataCash230Namo Webeditor 9 Crack 27 comes in.</p>
4
- <h2>DataCash230Namo Webeditor 9 Crack 27</h2><br /><p><b><b>Download Zip</b> &raquo; <a href="https://imgfil.com/2uy1yO">https://imgfil.com/2uy1yO</a></b></p><br /><br />
5
- <h2>What is DataCash230Namo Webeditor 9 Crack 27?</h2>
6
- <p>DataCash230Namo Webeditor 9 Crack 27 is a piece of software that bypasses the activation process of DataCash230Namo Webeditor 9 and lets you use it for free. It is also known as a keygen, patch, or serial number generator. By using DataCash230Namo Webeditor 9 Crack 27, you can access all the features and functions of DataCash230Namo Webeditor 9 without paying a dime.</p>
7
- <h2>How to Download and Install DataCash230Namo Webeditor 9 Crack 27?</h2>
8
- <p>There are many websites that claim to offer DataCash230Namo Webeditor 9 Crack 27 for download. However, you should be careful when downloading anything from the internet, as some files may contain viruses, malware, or spyware that can harm your computer or steal your personal information. Here are some steps to follow if you want to download and install DataCash230Namo Webeditor 9 Crack 27 safely:</p>
9
- <ul>
10
- <li>Download DataCash230Namo Webeditor 9 from the official website or a trusted source.</li>
11
- <li>Install DataCash230Namo Webeditor 9 on your computer.</li>
12
- <li>Download DataCash230Namo Webeditor 9 Crack 27 from a reliable website or a torrent site.</li>
13
- <li>Extract the file using a program like WinRAR or 7-Zip.</li>
14
- <li>Run the file as an administrator and follow the instructions.</li>
15
- <li>Enjoy using DataCash230Namo Webeditor 9 for free.</li>
16
- </ul>
17
- <h2>What are the Benefits and Risks of Using DataCash230Namo Webeditor 9 Crack 27?</h2>
18
- <p>Using DataCash230Namo Webeditor 9 Crack 27 has some benefits and risks that you should be aware of before deciding to use it. Here are some of them:</p>
19
- <h3>Benefits</h3>
20
- <ul>
21
- <li>You can use DataCash230Namo Webeditor 9 for free and save money.</li>
22
- <li>You can access all the features and functions of DataCash230Namo Webeditor 9 without any limitations.</li>
23
- <li>You can create and edit web pages with ease and convenience.</li>
24
- </ul>
25
- <h3>Risks</h3>
26
- <ul>
27
- <li>You may violate the terms and conditions of DataCash230Namo Webeditor 9 and face legal consequences.</li>
28
- <li>You may download a fake or corrupted file that can damage your computer or compromise your security.</li>
29
- <li>You may not receive any updates or support from DataCash230Namo Webeditor 9 developers.</li>
30
- <li>You may experience bugs, errors, or crashes while using DataCash230Namo Webeditor 9.</li>
31
- </ul>
32
- <h2>Conclusion</h2>
33
- <p>DataCash230Namo Webeditor 9 Crack 27 is a software that allows you to use DataCash230Namo Webeditor 9 for free. It has some benefits and risks that you should weigh before using it. If you decide to use DataCash230Namo Webeditor 9 Crack 27, make sure to download it from a reputable source and scan it for viruses before installing it. Alternatively, you can buy a legitimate license of DataCash230Namo Webeditor 9 and enjoy its features without any worries.</p>
34
- <h2>What are the Features and Functions of DataCash230Namo Webeditor 9?</h2>
35
- <p>DataCash230Namo Webeditor 9 is a visual HTML editor that offers a variety of features and functions to help you create and edit web pages. Some of the features and functions of DataCash230Namo Webeditor 9 are:</p>
36
- <p></p>
37
- <ul>
38
- <li>Drag-and-drop interface: You can easily add and arrange elements on your web page by dragging and dropping them from the toolbar or the library.</li>
39
- <li>Templates and widgets: You can choose from hundreds of templates and widgets to customize your web page according to your needs and preferences.</li>
40
- <li>Code editing: You can also edit the HTML, CSS, JavaScript, or PHP code of your web page using the built-in code editor.</li>
41
- <li>Preview and publish: You can preview your web page in different browsers and devices before publishing it to the web.</li>
42
- <li>Site manager: You can manage your web site files and folders using the site manager feature.</li>
43
- </ul>
44
- <h2>What are the Alternatives to DataCash230Namo Webeditor 9?</h2>
45
- <p>If you are not satisfied with DataCash230Namo Webeditor 9 or you want to try other options, there are some alternatives to DataCash230Namo Webeditor 9 that you can consider. Some of the alternatives to DataCash230Namo Webeditor 9 are:</p>
46
- <ul>
47
- <li>Dreamweaver: This is a popular and professional visual HTML editor that offers advanced features and functions for web design and development.</li>
48
- <li>Wix: This is an online platform that allows you to create and edit web pages using a drag-and-drop interface and a variety of templates and widgets.</li>
49
- <li>WordPress: This is an open-source software that enables you to create and edit web pages using a content management system and a range of plugins and themes.</li>
50
- <li>KompoZer: This is a free and open-source visual HTML editor that offers a simple and user-friendly interface for web design and editing.</li>
51
- <li>BlueGriffon: This is a free and open-source visual HTML editor that supports HTML5, CSS3, SVG, and other web standards.</li>
52
- </ul>
53
- <h2>Conclusion</h2>
54
- <p>DataCash230Namo Webeditor 9 Crack 27 is a software that allows you to use DataCash230Namo Webeditor 9 for free. It has some benefits and risks that you should weigh before using it. If you decide to use DataCash230Namo Webeditor 9 Crack 27, make sure to download it from a reputable source and scan it for viruses before installing it. Alternatively, you can buy a legitimate license of DataCash230Namo Webeditor 9 and enjoy its features without any worries. You can also explore other alternatives to DataCash230Namo Webeditor 9 that may suit your needs better.</p>
55
-
56
-
57
- - You can add a paragraph about the history and development of DataCash230Namo Webeditor 9 and how it differs from previous versions.
58
- - You can add a paragraph about the pros and cons of using DataCash230Namo Webeditor 9 compared to other visual HTML editors.
59
- - You can add a paragraph about the best practices and tips for using DataCash230Namo Webeditor 9 effectively and efficiently.
60
- - You can add a paragraph about the feedback and reviews of DataCash230Namo Webeditor 9 from other users and experts.
61
- - You can add a paragraph about the future plans and updates of DataCash230Namo Webeditor 9 and what features and functions you can expect in the next version.
62
-
63
-
64
- - You can add a paragraph about the history and development of DataCash230Namo Webeditor 9 and how it differs from previous versions.
65
- - You can add a paragraph about the pros and cons of using DataCash230Namo Webeditor 9 compared to other visual HTML editors.
66
- - You can add a paragraph about the best practices and tips for using DataCash230Namo Webeditor 9 effectively and efficiently.
67
- - You can add a paragraph about the feedback and reviews of DataCash230Namo Webeditor 9 from other users and experts.
68
- - You can add a paragraph about the future plans and updates of DataCash230Namo Webeditor 9 and what features and functions you can expect in the next version.
69
-
70
-
71
- - You can add a paragraph about the history and development of DataCash230Namo Webeditor 9 and how it differs from previous versions.
72
- - You can add a paragraph about the pros and cons of using DataCash230Namo Webeditor 9 compared to other visual HTML editors.
73
- - You can add a paragraph about the best practices and tips for using DataCash230Namo Webeditor 9 effectively and efficiently.
74
- - You can add a paragraph about the feedback and reviews of DataCash230Namo Webeditor 9 from other users and experts.
75
- - You can add a paragraph about the future plans and updates of DataCash230Namo Webeditor 9 and what features and functions you can expect in the next version.
76
-
77
-
78
- - You can add a paragraph about the history and development of DataCash230Namo Webeditor 9 and how it differs from previous versions.
79
- - You can add a paragraph about the pros and cons of using DataCash230Namo Webeditor 9 compared to other visual HTML editors.
80
- - You can add a paragraph about the best practices and tips for using DataCash230Namo Webeditor 9 effectively and efficiently.
81
- - You can add a paragraph about the feedback and reviews of DataCash230Namo Webeditor 9 from other users and experts.
82
- - You can add a paragraph about the future plans and updates of DataCash230Namo Webeditor 9 and what features and functions you can expect in the next version.
83
-
84
-
85
- - You can add a paragraph about the history and development of DataCash230Namo Webeditor 9 and how it differs from previous versions.
86
- - You can add a paragraph about the pros and cons of using DataCash230Namo Webeditor 9 compared to other visual HTML editors.
87
- - You can add a paragraph about the best practices and tips for using DataCash230Namo Webeditor 9 effectively and efficiently.
88
- - You can add a paragraph about the feedback and reviews of DataCash230Namo Webeditor 9 from other users and experts.
89
- - You can add a paragraph about the future plans and updates of DataCash230Namo Webeditor 9 and what features and functions you can expect in the next version.
90
-
91
-
92
- - You can add a paragraph about the history and development of DataCash230Namo Webeditor 9 and how it differs from previous versions.
93
- - You can add a paragraph about the pros and cons of using DataCash230Namo Webeditor 9 compared to other visual HTML editors.
94
- - You can add a paragraph about the best practices and tips for using DataCash230Namo Webeditor 9 effectively and efficiently.
95
- - You can add a paragraph about the feedback and reviews of DataCash230Namo Webeditor 9 from other users and experts.
96
- - You can add a paragraph about the future plans and updates of DataCash230Namo Webeditor 9 and what features and functions you can expect in the next version.
97
-
98
-
99
- - You can add a paragraph about the history and development of DataCash230Namo Webeditor 9 and how it differs from previous versions.
100
- - You can add a paragraph about the pros and cons of using DataCash230Namo Webeditor 9 compared to other visual HTML editors.
101
- - You can add a paragraph about the best practices and tips for using DataCash230Namo Webeditor 9 effectively and efficiently.
102
- - You can add a paragraph about the feedback and reviews of DataCash230Namo Webeditor 9 from other users and experts.
103
- - You can add a paragraph about the future plans and updates of DataCash230Namo Webeditor 9 and what features and functions you can expect in the next version.
104
- <h2>Conclusion</h2>
105
- <p>DataCash230Namo Webeditor 9 Crack 27 is a software that allows you to use DataCash230Namo Webeditor 9 for free. It has some benefits and risks that you should weigh before using it. If you decide to use DataCash230Namo Webeditor 9 Crack 27, make sure to download it from a reputable source and scan it for viruses before installing it. Alternatively, you can buy a legitimate license of DataCash230Namo Webeditor 9 and enjoy its features without any worries. You can also explore other alternatives to DataCash230Namo Webeditor 9 that may suit your needs better.</p> 3cee63e6c2<br />
106
- <br />
107
- <br />
 
spaces/1line/AutoGPT/autogpt/cli.py DELETED
@@ -1,145 +0,0 @@
- """Main script for the autogpt package."""
- import click
-
-
- @click.group(invoke_without_command=True)
- @click.option("-c", "--continuous", is_flag=True, help="Enable Continuous Mode")
- @click.option(
- "--skip-reprompt",
- "-y",
- is_flag=True,
- help="Skips the re-prompting messages at the beginning of the script",
- )
- @click.option(
- "--ai-settings",
- "-C",
- help="Specifies which ai_settings.yaml file to use, will also automatically skip the re-prompt.",
- )
- @click.option(
- "-l",
- "--continuous-limit",
- type=int,
- help="Defines the number of times to run in continuous mode",
- )
- @click.option("--speak", is_flag=True, help="Enable Speak Mode")
- @click.option("--debug", is_flag=True, help="Enable Debug Mode")
- @click.option("--gpt3only", is_flag=True, help="Enable GPT3.5 Only Mode")
- @click.option("--gpt4only", is_flag=True, help="Enable GPT4 Only Mode")
- @click.option(
- "--use-memory",
- "-m",
- "memory_type",
- type=str,
- help="Defines which Memory backend to use",
- )
- @click.option(
- "-b",
- "--browser-name",
- help="Specifies which web-browser to use when using selenium to scrape the web.",
- )
- @click.option(
- "--allow-downloads",
- is_flag=True,
- help="Dangerous: Allows Auto-GPT to download files natively.",
- )
- @click.option(
- "--skip-news",
- is_flag=True,
- help="Specifies whether to suppress the output of latest news on startup.",
- )
- @click.pass_context
- def main(
- ctx: click.Context,
- continuous: bool,
- continuous_limit: int,
- ai_settings: str,
- skip_reprompt: bool,
- speak: bool,
- debug: bool,
- gpt3only: bool,
- gpt4only: bool,
- memory_type: str,
- browser_name: str,
- allow_downloads: bool,
- skip_news: bool,
- ) -> None:
- """
- Welcome to AutoGPT an experimental open-source application showcasing the capabilities of the GPT-4 pushing the boundaries of AI.
-
- Start an Auto-GPT assistant.
- """
- # Put imports inside function to avoid importing everything when starting the CLI
- import logging
-
- from colorama import Fore
-
- from autogpt.agent.agent import Agent
- from autogpt.config import Config, check_openai_api_key
- from autogpt.configurator import create_config
- from autogpt.logs import logger
- from autogpt.memory import get_memory
- from autogpt.prompt import construct_prompt
- from autogpt.utils import get_current_git_branch, get_latest_bulletin
-
- if ctx.invoked_subcommand is None:
- cfg = Config()
- # TODO: fill in llm values here
- check_openai_api_key()
- create_config(
- continuous,
- continuous_limit,
- ai_settings,
- skip_reprompt,
- speak,
- debug,
- gpt3only,
- gpt4only,
- memory_type,
- browser_name,
- allow_downloads,
- skip_news,
- )
- logger.set_level(logging.DEBUG if cfg.debug_mode else logging.INFO)
- ai_name = ""
- if not cfg.skip_news:
- motd = get_latest_bulletin()
- if motd:
- logger.typewriter_log("NEWS: ", Fore.GREEN, motd)
- git_branch = get_current_git_branch()
- if git_branch and git_branch != "stable":
- logger.typewriter_log(
- "WARNING: ",
- Fore.RED,
- f"You are running on `{git_branch}` branch "
- "- this is not a supported branch.",
- )
- system_prompt = construct_prompt()
- # print(prompt)
- # Initialize variables
- full_message_history = []
- next_action_count = 0
- # Make a constant:
- triggering_prompt = (
- "Determine which next command to use, and respond using the"
- " format specified above:"
- )
- # Initialize memory and make sure it is empty.
- # this is particularly important for indexing and referencing pinecone memory
- memory = get_memory(cfg, init=True)
- logger.typewriter_log(
- "Using memory of type:", Fore.GREEN, f"{memory.__class__.__name__}"
- )
- logger.typewriter_log("Using Browser:", Fore.GREEN, cfg.selenium_web_browser)
- agent = Agent(
- ai_name=ai_name,
- memory=memory,
- full_message_history=full_message_history,
- next_action_count=next_action_count,
- system_prompt=system_prompt,
- triggering_prompt=triggering_prompt,
- )
- agent.start_interaction_loop()
-
-
- if __name__ == "__main__":
- main()
 
spaces/1phancelerku/anime-remove-background/Download Cars The Ultimate Guide for Car Enthusiasts.md DELETED
@@ -1,107 +0,0 @@
1
-
2
- <h1>How to Download Cars: A Guide for Car Enthusiasts</h1>
3
- <p>Have you ever dreamed of driving a Ferrari, a Lamborghini, or a Bugatti? Have you ever wondered what it would be like to race on the streets, the tracks, or the off-road terrains? If you are a car enthusiast, you might have a passion for exploring different types of cars and experiencing their performance and features. But buying or renting a car can be expensive and impractical. That's why some people choose to download cars instead.</p>
4
- <h2>i want to download cars</h2><br /><p><b><b>DOWNLOAD</b> &#187;&#187;&#187; <a href="https://jinyurl.com/2uNT4W">https://jinyurl.com/2uNT4W</a></b></p><br /><br />
5
- <h2>What does it mean to download cars?</h2>
6
- <p>Downloading cars is a way of accessing digital versions of real or fictional cars on your computer or mobile device. You can download cars as files, such as images, videos, or games, that you can view, play, or edit on your device. You can also download cars as software, such as simulators, that you can run on your device and interact with in a realistic or immersive way.</p>
7
- <h3>The difference between downloading and streaming cars</h3>
8
- <p>Downloading cars means that you save the car files or software on your device's storage, such as your hard drive or memory card. This allows you to access the car anytime, even when you are offline or have no internet connection. However, downloading cars also takes up space on your device and may require more time and bandwidth to complete.</p>
9
- <p>Streaming cars means that you access the car files or software online, such as on a website or an app. This allows you to access the car instantly, without waiting for the download to finish or using up your device's storage. However, streaming cars also requires a stable and fast internet connection and may consume more data or battery power.</p>
10
- <h3>The benefits of downloading cars</h3>
11
- <p>Downloading cars has many benefits for car enthusiasts, such as:</p>
12
- <p>How to download cars 2006 movie for free<br />
13
- Best PC racing games to download from Epic Games Store<br />
14
- CarGurus app for buying and selling new and used cars<br />
15
- Download cars wallpapers and screensavers for desktop<br />
16
- Where to download cars mods for GTA 5<br />
17
- Download cars coloring pages and printables for kids<br />
18
- How to download cars 3 driven to win game for PS4<br />
19
- Best car games to download on Android and iOS devices<br />
20
- Download cars sound effects and ringtones for free<br />
21
- Where to download cars logos and icons for design projects<br />
22
- How to download cars 2 video game for PC<br />
23
- Best car simulator games to download and play online<br />
24
- Download cars repair manuals and guides for free<br />
25
- Where to download cars fonts and typography for free<br />
26
- How to download cars 4 trailer and watch online<br />
27
- Best car racing apps to download and stream live races<br />
28
- Download cars quiz and trivia games for free<br />
29
- Where to download cars stickers and emojis for WhatsApp<br />
30
- How to download cars theme song and soundtrack for free<br />
31
- Best car driving games to download and learn driving skills<br />
32
- Download cars wallpapers HD and 4K for mobile phones<br />
33
- Where to download cars blueprints and models for 3D printing<br />
34
- How to download cars dataset and images for machine learning<br />
35
- Best car tuning games to download and customize your car<br />
36
- Download cars flash games and play offline on your browser<br />
37
- Where to download cars SVG and vector files for free<br />
38
- How to download cars VR games and experience virtual reality<br />
39
- Best car parking games to download and improve your parking skills<br />
40
- Download cars music videos and songs for free<br />
41
- Where to download cars clipart and illustrations for free<br />
42
- How to download cars PDF books and magazines for free<br />
43
- Best car drifting games to download and master drifting techniques<br />
44
- Download cars CAD files and drawings for free<br />
45
- Where to download cars PNG and JPEG files for free<br />
46
- How to download cars podcasts and listen online or offline<br />
47
- Best car shooting games to download and enjoy action-packed gameplay<br />
48
- Download cars PowerPoint templates and presentations for free<br />
49
- Where to download cars GIFs and animations for free<br />
50
- How to download cars subtitles and captions for free<br />
51
- Best car escape games to download and solve puzzles</p>
52
- <ul>
53
- <li>You can enjoy a wide variety of cars from different brands, models, eras, and genres. You can download cars that are rare, expensive, classic, futuristic, or fictional.</li>
54
- <li>You can experience the thrill of driving, racing, or customizing cars in different modes, settings, and scenarios. You can download cars that are realistic, arcade-like, or fantasy-based.</li>
55
- <li>You can learn more about the history, culture, and technology of cars. You can download cars that are informative, educational, or entertaining.</li>
56
- </ul>
57
- <h3>The challenges of downloading cars</h3>
58
- <p>Downloading cars also has some challenges that you need to be aware of, such as:</p>
59
- <ul>
60
- <li>You may not be able to replicate the exact feeling and sensation of driving a real car. Downloading cars may not capture the physical feedback, the sound quality, or the visual details of a real car.</li>
61
- <li>You may encounter technical issues or errors when downloading or running the car files or software. Downloading cars may cause compatibility problems, performance issues, or bugs on your device.</li>
62
- <li>You may face legal or ethical issues when downloading or using the car files or software. Downloading cars may violate the intellectual property rights, the privacy rights, or the safety regulations of the car owners, creators, or authorities.</li>
63
- </ul>
64
- <h2>Where can you download cars?</h2>
65
- <h3>The best websites for downloading cars</h3>
66
- <p>If you want to download car files, such as images, videos, or games, you can visit some of the best websites for downloading cars. Here are some examples:</p>
67
- <h4>Internet Archive</h4>
68
- <p>The Internet Archive is a digital library that offers free access to millions of car images and videos that you can download and use for personal or non-commercial purposes. You can also find thousands of car games that you can download and play on your device. Some of the car games available on the Internet Archive are Need for Speed, Grand Theft Auto, and Carmageddon.</p>
69
- <h4>Epic Games Store</h4>
70
- <p>The Epic Games Store is a digital distribution platform that offers free and paid car games that you can download and play on your PC. You can also find exclusive deals and discounts on some of the car games. Some of the car games available on the Epic Games Store are Forza Horizon 4, Rocket League, and Wreckfest.</p>
71
- <h4>GameTop</h4>
72
- <p>GameTop is a website that offers free and legal car games that you can download and play on your PC. You can also find no ads, no in-game purchases, and no malware on the car games. Some of the car games available on GameTop are City Racing, Off-Road Super Racing, and Fire and Forget.</p>
73
- <h3>The best apps for downloading cars</h3>
74
- <p>If you want to download car software, such as simulators, you can visit some of the best apps for downloading cars. Here are some examples:</p>
75
- <h4>Car Simulator 2</h4>
76
- <p>Car Simulator 2 is a free app that lets you download and drive more than 80 cars in an open world. You can also customize, upgrade, and repair your cars. You can also play online with other players or offline with bots. Car Simulator 2 is available for Android and iOS devices.</p>
77
- <h4>Real Racing 3</h4>
78
- <p>Real Racing 3 is a free app that lets you download and race more than 250 cars from real manufacturers. You can also compete in more than 40 tracks from real locations. You can also join online events and challenges with other players or offline modes with AI. Real Racing 3 is available for Android and iOS devices.</p>
79
- <h4>Asphalt 9: Legends</h4>
80
- <p>Asphalt 9: Legends is a free app that lets you download and drive more than 60 cars from top brands. You can also customize, upgrade, and nitro-boost your cars. You can also join online clubs and seasons with other players or offline career mode with storylines. Asphalt 9: Legends is available for Android, iOS, and Windows devices.</p>
81
- <h2>How to download cars safely and legally?</h2>
82
- <h3>The risks of downloading cars from untrusted sources</h3>
83
- <p>Downloading cars from untrusted sources can expose you to various risks, such as:</p>
84
- <ul>
85
- <li>You may download fake or corrupted car files or software that do not work properly or damage your device.</li>
86
- <li>You may download malicious car files or software that contain malware or viruses that infect your device or steal your data.</li>
87
- <li>You may download illegal car files or software that infringe the intellectual property rights, the privacy rights, or the safety regulations of the car owners, creators, or authorities.</li>
88
- </ul>
89
- <h3>The tips for avoiding malware and viruses</h3>
90
- <p>To avoid malware and viruses when downloading cars, you should follow these tips:</p>
91
- <ul>
92
- <li>You should only download cars from trusted sources, such as official websites or apps, reputable platforms or stores, or verified users or developers.</li>
93
- <li>You should scan the car files or software with an antivirus program before opening or running them on your device.</li>
94
- <li>You should update your device's operating system and security software regularly to protect it from new threats.</li>
95
- </ul>
96
- <h3>The laws and regulations for downloading cars</h3>
97
- <p>To avoid legal or ethical issues when downloading cars, you should follow these laws and regulations:</p>
98
- <ul>
99
- <li>You should respect the intellectual property rights of the car owners and creators by not copying, distributing, modifying, or selling the car files or software without their permission.</li>
100
- <li>You should respect the privacy rights of the car owners and creators by not collecting, sharing, or using their personal information without their consent.</li>
101
- <li>You should respect the safety regulations of the car authorities by not using the car files or software for illegal or harmful purposes, such as hacking, fraud, or terrorism.</li>
102
- </ul>
103
- <h2>Conclusion</h2>
104
- <p>Downloading cars is a fun and exciting way to enjoy different types of cars on your device. You can download cars as files or software from various websites or apps. However, you should also be careful about the risks of downloading cars from untrusted sources and the laws and regulations for downloading cars. By following these tips, you can download cars safely and legally.</p>
105
- FAQs Q: How much space does downloading cars take on my device? A: The space required for downloading cars depends on the size and quality of the car files or software. Generally, the higher the resolution, the sound, or the graphics of the car, the more space it will take. You can check the file size or the system requirements of the car before downloading it to make sure you have enough space on your device. Q: How long does downloading cars take on my device? A: The time required for downloading cars depends on the speed and stability of your internet connection and the server of the source. Generally, the faster your internet connection and the server, the less time it will take. You can also pause or resume the download if you encounter any interruptions or errors. Q: Can I download cars for free or do I have to pay for them? A: The cost of downloading cars depends on the source and the type of the car. Some sources offer free car files or software that you can download and use without paying anything. However, some sources may charge a fee or require a subscription for downloading or accessing certain car files or software. You should check the price or the terms and conditions of the source before downloading any car. Q: Can I download cars on any device or do I need a specific device? A: The compatibility of downloading cars depends on the format and the platform of the car files or software. Some car files or software are compatible with multiple devices, such as PCs, laptops, tablets, or smartphones. However, some car files or software may only work on specific devices, such as Windows, Mac, Android, or iOS. You should check the file format or the system requirements of the car before downloading it to make sure it works on your device. Q: Can I share or transfer the car files or software that I downloaded to other devices or people? A: The sharing or transferring of car files or software that you downloaded depends on the license and the permission of the source and the owner. Some car files or software are free and open-source, which means you can share or transfer them to other devices or people without any restrictions. However, some car files or software are proprietary and protected, which means you cannot share or transfer them to other devices or people without violating their rights. You should check the license or the permission of the source and the owner before sharing or transferring any car.</p> 401be4b1e0<br />
 
spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/run.sh DELETED
@@ -1,2 +0,0 @@
1
- CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/ms1mv3_r50
2
- ps -ef | grep "train" | grep -v grep | awk '{print "kill -9 "$2}' | sh
 
 
 
spaces/A00001/bingothoo/next.config.js DELETED
@@ -1,38 +0,0 @@
1
- /** @type {import('next').NextConfig} */
2
- const nextConfig = {
3
- // output: 'export',
4
- // assetPrefix: '.',
5
- webpack: (config, { isServer }) => {
6
- if (!isServer) {
7
- config.resolve = {
8
- ...config.resolve,
9
- fallback: {
10
- 'bufferutil': false,
11
- 'utf-8-validate': false,
12
- http: false,
13
- https: false,
14
- stream: false,
15
- // fixes proxy-agent dependencies
16
- net: false,
17
- dns: false,
18
- tls: false,
19
- assert: false,
20
- // fixes next-i18next dependencies
21
- path: false,
22
- fs: false,
23
- // fixes mapbox dependencies
24
- events: false,
25
- // fixes sentry dependencies
26
- process: false
27
- }
28
- };
29
- }
30
- config.module.exprContextCritical = false;
31
-
32
- return config;
33
- },
34
- }
35
-
36
- module.exports = (...args) => {
37
- return nextConfig
38
- }
 
spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/models/transformer_model.py DELETED
@@ -1,265 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
- import random
3
- import torch
4
- import torch.nn as nn
5
-
6
- from .base_model import CaptionModel
7
- from .utils import repeat_tensor
8
- import audio_to_text.captioning.models.decoder
- import audio_to_text.captioning.models.encoder
9
-
10
-
11
- class TransformerModel(CaptionModel):
12
-
13
- def __init__(self, encoder: nn.Module, decoder: nn.Module, **kwargs):
14
- if not hasattr(self, "compatible_decoders"):
15
- self.compatible_decoders = (
16
- audio_to_text.captioning.models.decoder.TransformerDecoder,
17
- )
18
- super().__init__(encoder, decoder, **kwargs)
19
-
20
- def seq_forward(self, input_dict):
21
- cap = input_dict["cap"]
22
- cap_padding_mask = (cap == self.pad_idx).to(cap.device)
23
- cap_padding_mask = cap_padding_mask[:, :-1]
24
- output = self.decoder(
25
- {
26
- "word": cap[:, :-1],
27
- "attn_emb": input_dict["attn_emb"],
28
- "attn_emb_len": input_dict["attn_emb_len"],
29
- "cap_padding_mask": cap_padding_mask
30
- }
31
- )
32
- return output
33
-
34
- def prepare_decoder_input(self, input_dict, output):
35
- decoder_input = {
36
- "attn_emb": input_dict["attn_emb"],
37
- "attn_emb_len": input_dict["attn_emb_len"]
38
- }
39
- t = input_dict["t"]
40
-
41
- ###############
42
- # determine input word
43
- ################
44
- if input_dict["mode"] == "train" and random.random() < input_dict["ss_ratio"]: # training, scheduled sampling
45
- word = input_dict["cap"][:, :t+1]
46
- else:
47
- start_word = torch.tensor([self.start_idx,] * input_dict["attn_emb"].size(0)).unsqueeze(1).long()
48
- if t == 0:
49
- word = start_word
50
- else:
51
- word = torch.cat((start_word, output["seq"][:, :t]), dim=-1)
52
- # word: [N, T]
53
- decoder_input["word"] = word
54
-
55
- cap_padding_mask = (word == self.pad_idx).to(input_dict["attn_emb"].device)
56
- decoder_input["cap_padding_mask"] = cap_padding_mask
57
- return decoder_input
58
-
59
- def prepare_beamsearch_decoder_input(self, input_dict, output_i):
60
- decoder_input = {}
61
- t = input_dict["t"]
62
- i = input_dict["sample_idx"]
63
- beam_size = input_dict["beam_size"]
64
- ###############
65
- # prepare attn embeds
66
- ################
67
- if t == 0:
68
- attn_emb = repeat_tensor(input_dict["attn_emb"][i], beam_size)
69
- attn_emb_len = repeat_tensor(input_dict["attn_emb_len"][i], beam_size)
70
- output_i["attn_emb"] = attn_emb
71
- output_i["attn_emb_len"] = attn_emb_len
72
- decoder_input["attn_emb"] = output_i["attn_emb"]
73
- decoder_input["attn_emb_len"] = output_i["attn_emb_len"]
74
- ###############
75
- # determine input word
76
- ################
77
- start_word = torch.tensor([self.start_idx,] * beam_size).unsqueeze(1).long()
78
- if t == 0:
79
- word = start_word
80
- else:
81
- word = torch.cat((start_word, output_i["seq"]), dim=-1)
82
- decoder_input["word"] = word
83
- cap_padding_mask = (word == self.pad_idx).to(input_dict["attn_emb"].device)
84
- decoder_input["cap_padding_mask"] = cap_padding_mask
85
-
86
- return decoder_input
87
-
88
-
89
- class M2TransformerModel(CaptionModel):
90
-
91
- def __init__(self, encoder: nn.Module, decoder: nn.Module, **kwargs):
92
- if not hasattr(self, "compatible_decoders"):
93
- self.compatible_decoders = (
94
- audio_to_text.captioning.models.decoder.M2TransformerDecoder,
95
- )
96
- super().__init__(encoder, decoder, **kwargs)
97
- self.check_encoder_compatibility()
98
-
99
- def check_encoder_compatibility(self):
100
- assert isinstance(self.encoder, audio_to_text.captioning.models.encoder.M2TransformerEncoder), \
101
- f"only M2TransformerEncoder is compatible with {self.__class__.__name__}"
102
-
103
-
104
- def seq_forward(self, input_dict):
105
- cap = input_dict["cap"]
106
- output = self.decoder(
107
- {
108
- "word": cap[:, :-1],
109
- "attn_emb": input_dict["attn_emb"],
110
- "attn_emb_mask": input_dict["attn_emb_mask"],
111
- }
112
- )
113
- return output
114
-
115
- def prepare_decoder_input(self, input_dict, output):
116
- decoder_input = {
117
- "attn_emb": input_dict["attn_emb"],
118
- "attn_emb_mask": input_dict["attn_emb_mask"]
119
- }
120
- t = input_dict["t"]
121
-
122
- ###############
123
- # determine input word
124
- ################
125
- if input_dict["mode"] == "train" and random.random() < input_dict["ss_ratio"]: # training, scheduled sampling
126
- word = input_dict["cap"][:, :t+1]
127
- else:
128
- start_word = torch.tensor([self.start_idx,] * input_dict["attn_emb"].size(0)).unsqueeze(1).long()
129
- if t == 0:
130
- word = start_word
131
- else:
132
- word = torch.cat((start_word, output["seq"][:, :t]), dim=-1)
133
- # word: [N, T]
134
- decoder_input["word"] = word
135
-
136
- return decoder_input
137
-
138
- def prepare_beamsearch_decoder_input(self, input_dict, output_i):
139
- decoder_input = {}
140
- t = input_dict["t"]
141
- i = input_dict["sample_idx"]
142
- beam_size = input_dict["beam_size"]
143
- ###############
144
- # prepare attn embeds
145
- ################
146
- if t == 0:
147
- attn_emb = repeat_tensor(input_dict["attn_emb"][i], beam_size)
148
- attn_emb_mask = repeat_tensor(input_dict["attn_emb_mask"][i], beam_size)
149
- output_i["attn_emb"] = attn_emb
150
- output_i["attn_emb_mask"] = attn_emb_mask
151
- decoder_input["attn_emb"] = output_i["attn_emb"]
152
- decoder_input["attn_emb_mask"] = output_i["attn_emb_mask"]
153
- ###############
154
- # determine input word
155
- ################
156
- start_word = torch.tensor([self.start_idx,] * beam_size).unsqueeze(1).long()
157
- if t == 0:
158
- word = start_word
159
- else:
160
- word = torch.cat((start_word, output_i["seq"]), dim=-1)
161
- decoder_input["word"] = word
162
-
163
- return decoder_input
164
-
165
-
166
- class EventEncoder(nn.Module):
167
- """
168
- Encode the Label information in AudioCaps and AudioSet
169
- """
170
- def __init__(self, emb_dim, vocab_size=527):
171
- super(EventEncoder, self).__init__()
172
- self.label_embedding = nn.Parameter(
173
- torch.randn((vocab_size, emb_dim)), requires_grad=True)
174
-
175
- def forward(self, word_idxs):
176
- indices = word_idxs / word_idxs.sum(dim=1, keepdim=True)
177
- embeddings = indices @ self.label_embedding
178
- return embeddings
179
-
180
-
181
- class EventCondTransformerModel(TransformerModel):
182
-
183
- def __init__(self, encoder: nn.Module, decoder: nn.Module, **kwargs):
184
- if not hasattr(self, "compatible_decoders"):
185
- self.compatible_decoders = (
186
- audio_to_text.captioning.models.decoder.EventTransformerDecoder,
187
- )
188
- super().__init__(encoder, decoder, **kwargs)
189
- self.label_encoder = EventEncoder(decoder.emb_dim, 527)
190
- self.train_forward_keys += ["events"]
191
- self.inference_forward_keys += ["events"]
192
-
193
- # def seq_forward(self, input_dict):
194
- # cap = input_dict["cap"]
195
- # cap_padding_mask = (cap == self.pad_idx).to(cap.device)
196
- # cap_padding_mask = cap_padding_mask[:, :-1]
197
- # output = self.decoder(
198
- # {
199
- # "word": cap[:, :-1],
200
- # "attn_emb": input_dict["attn_emb"],
201
- # "attn_emb_len": input_dict["attn_emb_len"],
202
- # "cap_padding_mask": cap_padding_mask
203
- # }
204
- # )
205
- # return output
206
-
207
- def prepare_decoder_input(self, input_dict, output):
208
- decoder_input = super().prepare_decoder_input(input_dict, output)
209
- decoder_input["events"] = self.label_encoder(input_dict["events"])
210
- return decoder_input
211
-
212
- def prepare_beamsearch_decoder_input(self, input_dict, output_i):
213
- decoder_input = super().prepare_beamsearch_decoder_input(input_dict, output_i)
214
- t = input_dict["t"]
215
- i = input_dict["sample_idx"]
216
- beam_size = input_dict["beam_size"]
217
- if t == 0:
218
- output_i["events"] = repeat_tensor(self.label_encoder(input_dict["events"])[i], beam_size)
219
- decoder_input["events"] = output_i["events"]
220
- return decoder_input
221
-
222
-
223
- class KeywordCondTransformerModel(TransformerModel):
224
-
225
- def __init__(self, encoder: nn.Module, decoder: nn.Module, **kwargs):
226
- if not hasattr(self, "compatible_decoders"):
227
- self.compatible_decoders = (
228
- audio_to_text.captioning.models.decoder.KeywordProbTransformerDecoder,
229
- )
230
- super().__init__(encoder, decoder, **kwargs)
231
- self.train_forward_keys += ["keyword"]
232
- self.inference_forward_keys += ["keyword"]
233
-
234
- def seq_forward(self, input_dict):
235
- cap = input_dict["cap"]
236
- cap_padding_mask = (cap == self.pad_idx).to(cap.device)
237
- cap_padding_mask = cap_padding_mask[:, :-1]
238
- keyword = input_dict["keyword"]
239
- output = self.decoder(
240
- {
241
- "word": cap[:, :-1],
242
- "attn_emb": input_dict["attn_emb"],
243
- "attn_emb_len": input_dict["attn_emb_len"],
244
- "keyword": keyword,
245
- "cap_padding_mask": cap_padding_mask
246
- }
247
- )
248
- return output
249
-
250
- def prepare_decoder_input(self, input_dict, output):
251
- decoder_input = super().prepare_decoder_input(input_dict, output)
252
- decoder_input["keyword"] = input_dict["keyword"]
253
- return decoder_input
254
-
255
- def prepare_beamsearch_decoder_input(self, input_dict, output_i):
256
- decoder_input = super().prepare_beamsearch_decoder_input(input_dict, output_i)
257
- t = input_dict["t"]
258
- i = input_dict["sample_idx"]
259
- beam_size = input_dict["beam_size"]
260
- if t == 0:
261
- output_i["keyword"] = repeat_tensor(input_dict["keyword"][i],
262
- beam_size)
263
- decoder_input["keyword"] = output_i["keyword"]
264
- return decoder_input
265
-
 
spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/tokenizer.py DELETED
@@ -1,180 +0,0 @@
1
- """ CLIP tokenizer
2
-
3
- Copied from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI.
4
- """
5
- import gzip
6
- import html
7
- import os
8
- from functools import lru_cache
9
- from typing import Union, List
10
-
11
- import ftfy
12
- import regex as re
13
- import torch
14
-
15
-
16
- @lru_cache()
17
- def default_bpe():
18
- return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz")
19
-
20
-
21
- @lru_cache()
22
- def bytes_to_unicode():
23
- """
24
- Returns list of utf-8 byte and a corresponding list of unicode strings.
25
- The reversible bpe codes work on unicode strings.
26
- This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
27
- When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
28
- This is a significant percentage of your normal, say, 32K bpe vocab.
29
- To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
30
- And avoids mapping to whitespace/control characters the bpe code barfs on.
31
- """
32
- bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
33
- cs = bs[:]
34
- n = 0
35
- for b in range(2**8):
36
- if b not in bs:
37
- bs.append(b)
38
- cs.append(2**8+n)
39
- n += 1
40
- cs = [chr(n) for n in cs]
41
- return dict(zip(bs, cs))
42
-
43
-
44
- def get_pairs(word):
45
- """Return set of symbol pairs in a word.
46
- Word is represented as tuple of symbols (symbols being variable-length strings).
47
- """
48
- pairs = set()
49
- prev_char = word[0]
50
- for char in word[1:]:
51
- pairs.add((prev_char, char))
52
- prev_char = char
53
- return pairs
54
-
55
-
56
- def basic_clean(text):
57
- text = ftfy.fix_text(text)
58
- text = html.unescape(html.unescape(text))
59
- return text.strip()
60
-
61
-
62
- def whitespace_clean(text):
63
- text = re.sub(r'\s+', ' ', text)
64
- text = text.strip()
65
- return text
66
-
67
-
68
- class SimpleTokenizer(object):
69
- def __init__(self, bpe_path: str = default_bpe(), special_tokens=None):
70
- self.byte_encoder = bytes_to_unicode()
71
- self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
72
- merges = gzip.open(bpe_path).read().decode("utf-8").split('\n')
73
- merges = merges[1:49152-256-2+1]
74
- merges = [tuple(merge.split()) for merge in merges]
75
- vocab = list(bytes_to_unicode().values())
76
- vocab = vocab + [v+'</w>' for v in vocab]
77
- for merge in merges:
78
- vocab.append(''.join(merge))
79
- if not special_tokens:
80
- special_tokens = ['<start_of_text>', '<end_of_text>']
81
- else:
82
- special_tokens = ['<start_of_text>', '<end_of_text>'] + special_tokens
83
- vocab.extend(special_tokens)
84
- self.encoder = dict(zip(vocab, range(len(vocab))))
85
- self.decoder = {v: k for k, v in self.encoder.items()}
86
- self.bpe_ranks = dict(zip(merges, range(len(merges))))
87
- self.cache = {t:t for t in special_tokens}
88
- special = "|".join(special_tokens)
89
- self.pat = re.compile(special + r"""|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE)
90
-
91
- self.vocab_size = len(self.encoder)
92
- self.all_special_ids = [self.encoder[t] for t in special_tokens]
93
-
94
- def bpe(self, token):
95
- if token in self.cache:
96
- return self.cache[token]
97
- word = tuple(token[:-1]) + ( token[-1] + '</w>',)
98
- pairs = get_pairs(word)
99
-
100
- if not pairs:
101
- return token+'</w>'
102
-
103
- while True:
104
- bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf')))
105
- if bigram not in self.bpe_ranks:
106
- break
107
- first, second = bigram
108
- new_word = []
109
- i = 0
110
- while i < len(word):
111
- try:
112
- j = word.index(first, i)
113
- new_word.extend(word[i:j])
114
- i = j
115
- except:
116
- new_word.extend(word[i:])
117
- break
118
-
119
- if word[i] == first and i < len(word)-1 and word[i+1] == second:
120
- new_word.append(first+second)
121
- i += 2
122
- else:
123
- new_word.append(word[i])
124
- i += 1
125
- new_word = tuple(new_word)
126
- word = new_word
127
- if len(word) == 1:
128
- break
129
- else:
130
- pairs = get_pairs(word)
131
- word = ' '.join(word)
132
- self.cache[token] = word
133
- return word
134
-
135
- def encode(self, text):
136
- bpe_tokens = []
137
- text = whitespace_clean(basic_clean(text)).lower()
138
- for token in re.findall(self.pat, text):
139
- token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
140
- bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
141
- return bpe_tokens
142
-
143
- def decode(self, tokens):
144
- text = ''.join([self.decoder[token] for token in tokens])
145
- text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
146
- return text
147
-
148
-
149
- _tokenizer = SimpleTokenizer()
150
-
151
-
152
- def tokenize(texts: Union[str, List[str]], context_length: int = 77) -> torch.LongTensor:
153
- """
154
- Returns the tokenized representation of given input string(s)
155
-
156
- Parameters
157
- ----------
158
- texts : Union[str, List[str]]
159
- An input string or a list of input strings to tokenize
160
- context_length : int
161
- The context length to use; all CLIP models use 77 as the context length
162
-
163
- Returns
164
- -------
165
- A two-dimensional tensor containing the resulting tokens, shape = [number of input strings, context_length]
166
- """
167
- if isinstance(texts, str):
168
- texts = [texts]
169
-
170
- sot_token = _tokenizer.encoder["<start_of_text>"]
171
- eot_token = _tokenizer.encoder["<end_of_text>"]
172
- all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts]
173
- result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
174
-
175
- for i, tokens in enumerate(all_tokens):
176
- if len(tokens) > context_length:
177
- tokens = tokens[:context_length] # Truncate
178
- result[i, :len(tokens)] = torch.tensor(tokens)
179
-
180
- return result
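For reference, a minimal usage sketch of the `tokenize()` helper defined above, following its own docstring. The import path is an assumption (it depends on where `Make_An_Audio` sits on `sys.path`), and the example strings are placeholders.

```python
# Sketch only: assumes this module is importable as shown and that its
# dependencies (ftfy, regex, torch) are installed.
from ldm.modules.encoders.open_clap.tokenizer import tokenize

tokens = tokenize(["a dog barking", "rain on a tin roof"])
print(tokens.shape)   # torch.Size([2, 77]) -- one row per string, context_length columns
print(tokens[0, :4])  # starts with the <start_of_text> id, then BPE token ids
```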
 
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb64-150e_deepfashion2_vest_dress_256x192.py DELETED
@@ -1,172 +0,0 @@
1
- _base_ = [
2
- '../../../_base_/default_runtime.py',
3
- '../../../_base_/datasets/deepfashion2.py'
4
- ]
5
-
6
- default_hooks = dict(checkpoint=dict(save_best='PCK', rule='greater'))
7
-
8
- resume = False # 断点恢复
9
- load_from = None # 模型权重加载
10
- train_cfg = dict(by_epoch=True, max_epochs=150, val_interval=10) # 训练轮数,测试间隔
11
- param_scheduler = [
12
- dict( # warmup策略
13
- type='LinearLR',
14
- begin=0,
15
- end=500,
16
- start_factor=0.001,
17
- by_epoch=False),
18
- dict( # scheduler
19
- type='MultiStepLR',
20
- begin=0,
21
- end=150,
22
- milestones=[100, 130],
23
- gamma=0.1,
24
- by_epoch=True)
25
- ]
26
- optim_wrapper = dict(optimizer=dict(type='Adam', lr=0.0005)) # 优化器和学习率
27
- auto_scale_lr = dict(base_batch_size=512) # 根据batch_size自动缩放学习率
28
-
29
- backend_args = dict(backend='local') # 数据加载后端设置,默认从本地硬盘加载
30
- dataset_type = 'DeepFashion2Dataset' # 数据集类名 DeepFashionDataset
31
- data_mode = 'topdown' # 算法结构类型,用于指定标注信息加载策略
32
- data_root = 'data/deepfashion2/' # 数据存放路径
33
- # 定义数据编解码器,用于生成target和对pred进行解码,同时包含了输入图片和输出heatmap尺寸等信息
34
- codec = dict(
35
- type='MSRAHeatmap', input_size=(192, 256), heatmap_size=(48, 64), sigma=2)
36
-
37
- train_pipeline = [
38
- dict(type='LoadImage'),
39
- dict(type='GetBBoxCenterScale'),
40
- dict(type='RandomFlip', direction='horizontal'),
41
- dict(
42
- type='RandomBBoxTransform',
43
- shift_prob=0,
44
- rotate_factor=60,
45
- scale_factor=(0.75, 1.25)),
46
- dict(type='TopdownAffine', input_size=codec['input_size']),
47
- dict(type='GenerateTarget', encoder=codec),
48
- dict(type='PackPoseInputs')
49
- ]
50
- val_pipeline = [ # 测试时数据增强
51
- dict(type='LoadImage', backend_args=backend_args), # 加载图片
52
- dict(type='GetBBoxCenterScale'), # 根据bbox获取center和scale
53
- dict(type='TopdownAffine', input_size=codec['input_size']), # 根据变换矩阵更新目标数据
54
- dict(type='PackPoseInputs') # 对target进行打包用于训练
55
- ]
56
- train_dataloader = dict( # 训练数据加载
57
- batch_size=64, # 批次大小
58
- num_workers=6, # 数据加载进程数
59
- persistent_workers=True, # 在不活跃时维持进程不终止,避免反复启动进程的开销
60
- sampler=dict(type='DefaultSampler', shuffle=True), # 采样策略,打乱数据
61
- dataset=dict(
62
- type=dataset_type, # 数据集类名
63
- data_root=data_root, # 数据集路径
64
- data_mode=data_mode, # 算法类型
65
- ann_file='train/deepfashion2_vest_dress.json', # 标注文件路径
66
- data_prefix=dict(img='train/image/'), # 图像路径
67
- pipeline=train_pipeline # 数据流水线
68
- ))
69
- val_dataloader = dict(
70
- batch_size=32,
71
- num_workers=6,
72
- persistent_workers=True, # 在不活跃时维持进程不终止,避免反复启动进程的开销
73
- drop_last=False,
74
- sampler=dict(type='DefaultSampler', shuffle=False), # 采样策略,不进行打乱
75
- dataset=dict(
76
- type=dataset_type, # 数据集类名
77
- data_root=data_root, # 数据集路径
78
- data_mode=data_mode, # 算法类型
79
- ann_file='validation/deepfashion2_vest_dress.json', # 标注文件路径
80
- data_prefix=dict(img='validation/image/'), # 图像路径
81
- test_mode=True, # 测试模式开关
82
- pipeline=val_pipeline # 数据流水线
83
- ))
84
- test_dataloader = val_dataloader # 默认情况下不区分验证集和测试集,用户根据需要来自行定义
85
-
86
- channel_cfg = dict(
87
- num_output_channels=294,
88
- dataset_joints=294,
89
- dataset_channel=[
90
- [
91
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
92
- 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35,
93
- 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52,
94
- 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69,
95
- 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86,
96
- 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102,
97
- 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115,
98
- 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128,
99
- 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141,
100
- 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154,
101
- 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167,
102
- 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180,
103
- 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193,
104
- 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206,
105
- 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219,
106
- 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232,
107
- 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245,
108
- 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258,
109
- 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271,
110
- 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284,
111
- 285, 286, 287, 288, 289, 290, 291, 292, 293
112
- ],
113
- ],
114
- inference_channel=[
115
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
116
- 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
117
- 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55,
118
- 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
119
- 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
120
- 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
121
- 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
122
- 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135,
123
- 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149,
124
- 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163,
125
- 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177,
126
- 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,
127
- 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205,
128
- 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219,
129
- 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,
130
- 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247,
131
- 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261,
132
- 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275,
133
- 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289,
134
- 290, 291, 292, 293
135
- ])
136
-
137
- model = dict(
138
- type='TopdownPoseEstimator', # 模型结构决定了算法流程
139
- data_preprocessor=dict( # 数据归一化和通道顺序调整,作为模型的一部分
140
- type='PoseDataPreprocessor',
141
- mean=[123.675, 116.28, 103.53],
142
- std=[58.395, 57.12, 57.375],
143
- bgr_to_rgb=True),
144
- backbone=dict(
145
- type='ResNet',
146
- depth=50,
147
- init_cfg=dict(
148
- type='Pretrained', # 预训练参数,只加载backbone权重用于迁移学习
149
- checkpoint='torchvision://resnet50')),
150
- head=dict( # 模型头部
151
- type='HeatmapHead',
152
- in_channels=2048,
153
- out_channels=channel_cfg['num_output_channels'],
154
- # deconv_out_channels=None,
155
- loss=dict(type='KeypointMSELoss', use_target_weight=True), # 损失函数
156
- decoder=codec), # 解码器,将heatmap解码成坐标值
157
- test_cfg=dict(
158
- flip_test=True, # 开启测试时水平翻转集成
159
- flip_mode='heatmap', # 对heatmap进行翻转
160
- shift_heatmap=True, # 对翻转后的结果进行平移提高精度
161
- ))
162
-
163
- val_evaluator = [
164
- dict(type='PCKAccuracy', thr=0.2),
165
- dict(type='AUC'),
166
- dict(type='EPE'),
167
- ]
168
- test_evaluator = val_evaluator # 默认情况下不区分验证集和测试集,用户根据需要来自行定义
169
-
170
- visualizer = dict(
171
- vis_backends=[dict(type='LocalVisBackend'),
172
- dict(type='WandbVisBackend')])
 
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet101.py DELETED
@@ -1,17 +0,0 @@
1
- # model settings
2
- model = dict(
3
- type='ImageClassifier',
4
- backbone=dict(
5
- type='ResNet',
6
- depth=101,
7
- num_stages=4,
8
- out_indices=(3, ),
9
- style='pytorch'),
10
- neck=dict(type='GlobalAveragePooling'),
11
- head=dict(
12
- type='LinearClsHead',
13
- num_classes=1000,
14
- in_channels=2048,
15
- loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
16
- topk=(1, 5),
17
- ))
 
spaces/Abhay834/my_genai_chatbot/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: My Genai Chatbot
3
- emoji: 🐨
4
- colorFrom: gray
5
- colorTo: blue
6
- sdk: gradio
7
- sdk_version: 3.39.0
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/horizontal.py DELETED
@@ -1,57 +0,0 @@
1
- from __future__ import annotations
2
- import asyncio
3
- from colorama import Fore
4
-
5
- from typing import TYPE_CHECKING, List
6
-
7
- from . import decision_maker_registry
8
- from .base import BaseDecisionMaker
9
- from agentverse.logging import logger
10
-
11
- from agentverse.message import Message
12
-
13
- if TYPE_CHECKING:
14
- from agentverse.agents.base import BaseAgent
15
- from agentverse.message import CriticMessage
16
-
17
-
18
- @decision_maker_registry.register("horizontal")
19
- class HorizontalDecisionMaker(BaseDecisionMaker):
20
- """
21
- Discuss in a horizontal manner.
22
- """
23
-
24
- name: str = "horizontal"
25
-
26
- # def step(
27
- async def astep(
28
- self,
29
- agents: List[BaseAgent],
30
- task_description: str,
31
- previous_plan: str = "No solution yet.",
32
- advice: str = "No advice yet.",
33
- **kwargs,
34
- ) -> List[str]:
35
- if advice != "No advice yet.":
36
- self.broadcast_messages(
37
- agents, [Message(content=advice, sender="Evaluator")]
38
- )
39
- for agent in agents[1:]:
40
- review: CriticMessage = await agent.astep(
41
- previous_plan, advice, task_description
42
- )
43
- if review.content != "":
44
- self.broadcast_messages(agents, [review])
45
-
46
- logger.info("", "Reviews:", Fore.YELLOW)
47
- logger.info(
48
- "",
49
- f"[{review.sender}]: {review.content}",
50
- Fore.YELLOW,
51
- )
52
-
53
- result = agents[0].step(previous_plan, advice, task_description)
54
- return [result]
55
-
56
- def reset(self):
57
- pass
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/toggleswitch.d.ts DELETED
@@ -1,2 +0,0 @@
1
- import ToggleSwitch from './gameobjects/shape/toggleswitch/ToggleSwitch';
2
- export default ToggleSwitch;
 
 
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/GetChildrenWidth.js DELETED
@@ -1,45 +0,0 @@
1
- import Sum from '../../../plugins/utils/math/Sum.js';
2
-
3
- var GetChildrenWidth = function (minimumMode) {
4
- if (this.rexSizer.hidden) {
5
- return 0;
6
- }
7
-
8
- if (minimumMode === undefined) {
9
- minimumMode = true;
10
- }
11
-
12
- var result = 0,
13
- columnWidth;
14
- var children = this.sizerChildren;
15
- var child, padding, childWidth, proportion;
16
-
17
- for (var i = 0; i < this.columnCount; i++) {
18
- proportion = this.columnProportions[i];
19
- columnWidth = 0;
20
- if ((proportion === 0) || minimumMode) {
21
- for (var j = 0; j < this.rowCount; j++) {
22
- child = children[(j * this.columnCount) + i];
23
- if (!child) {
24
- continue;
25
- }
26
- if (child.rexSizer.hidden) {
27
- continue;
28
- }
29
-
30
- padding = child.rexSizer.padding;
31
- childWidth = this.getChildWidth(child) + padding.left + padding.right;
32
- columnWidth = Math.max(columnWidth, childWidth);
33
- }
34
- result += columnWidth;
35
- }
36
- // else,(proportion > 0) : columnWidth is 0
37
- this.columnWidth[i] = columnWidth;
38
- }
39
-
40
- var space = this.space;
41
- var indentLeft = Math.max(space.indentLeftOdd, space.indentLeftEven);
42
- return result + Sum(space.left, indentLeft, ...space.column, space.right);
43
- }
44
-
45
- export default GetChildrenWidth;
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/roundrectanglecanvas/Factory.d.ts DELETED
@@ -1,20 +0,0 @@
1
- import RoundRectangleCanvas from './RoundRectangleCanvas';
2
-
3
- export default function (
4
- x: number,
5
- y: number,
6
- width: number,
7
- height: number,
8
- radiusConfig?: number | ({ x?: number, y?: number }) | RoundRectangleCanvas.IRadiusConfig |
9
- ({
10
- radius?: (number | ({ x?: number, y?: number }) | RoundRectangleCanvas.IRadiusConfig),
11
- iteration?: number
12
- }),
13
- fillStyle?: number | string | null,
14
- strokeStyle?: number | string | null,
15
- lineWidth?: number,
16
-
17
- fillColor2?: number | string | null,
18
- isHorizontalGradient?: boolean
19
-
20
- ): RoundRectangleCanvas;
 
spaces/AllAideas/SegmentacionVideo/app.py DELETED
@@ -1,51 +0,0 @@
1
- import gradio as gr
2
- from utils.predict import predict_action
3
- import os
4
- import glob
5
-
6
- ##Create list of examples to be loaded
7
- example_list = glob.glob("examples/*")
8
- example_list = list(map(lambda el:[el], example_list))
9
-
10
-
11
- demo = gr.Blocks()
12
-
13
-
14
- with demo:
15
-
16
- gr.Markdown("# **<p align='center'>Video Classification with Transformers</p>**")
17
- description="""# <p>
18
- <center>
19
- Demo de clasificador de video usando modelo híbrido basado en Transformers con CNN; el objetivo es reconocer un segmento y recortarlo.
20
- <img src=\"https://raw.githubusercontent.com/All-Aideas/sea_apirest/main/logo.png\" alt=\"logo\" width=\"250\"/>
21
- </center>
22
- </p>
23
- """
24
- gr.Markdown(description)
25
-
26
- with gr.Tabs():
27
-
28
- with gr.TabItem("Upload & Predict"):
29
- with gr.Box():
30
-
31
- with gr.Row():
32
- input_video = gr.Video(label="Input Video", show_label=True)
33
- output_label = gr.Label(label="Model Output", show_label=True)
34
- output_gif = gr.Image(label="Video Gif", show_label=True)
35
-
36
- gr.Markdown("**Predict**")
37
-
38
- with gr.Box():
39
- with gr.Row():
40
- submit_button = gr.Button("Submit")
41
-
42
- gr.Markdown("**Ejemplos:**")
43
- gr.Markdown("El modelo puede clasificar videos pertenecientes a las siguientes clases: CricketShot, PlayingCello, Punch, ShavingBeard, TennisSwing.")
44
- # gr.Markdown("CricketShot, PlayingCello, Punch, ShavingBeard, TennisSwing")
45
-
46
- with gr.Column():
47
- gr.Examples(example_list, [input_video], [output_label,output_gif], predict_action, cache_examples=True)
48
-
49
- submit_button.click(predict_action, inputs=input_video, outputs=[output_label,output_gif])
50
-
51
- demo.launch()
 
spaces/Amrrs/DragGan-Inversion/torch_utils/persistence.py DELETED
@@ -1,260 +0,0 @@
1
- # Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2
- #
3
- # NVIDIA CORPORATION and its licensors retain all intellectual property
4
- # and proprietary rights in and to this software, related documentation
5
- # and any modifications thereto. Any use, reproduction, disclosure or
6
- # distribution of this software and related documentation without an express
7
- # license agreement from NVIDIA CORPORATION is strictly prohibited.
8
-
9
- """Facilities for pickling Python code alongside other data.
10
-
11
- The pickled code is automatically imported into a separate Python module
12
- during unpickling. This way, any previously exported pickles will remain
13
- usable even if the original code is no longer available, or if the current
14
- version of the code is not consistent with what was originally pickled."""
15
-
16
- import sys
17
- import pickle
18
- import io
19
- import inspect
20
- import copy
21
- import uuid
22
- import types
23
- import dnnlib
24
-
25
- # ----------------------------------------------------------------------------
26
-
27
- _version = 6 # internal version number
28
- _decorators = set() # {decorator_class, ...}
29
- _import_hooks = [] # [hook_function, ...]
30
- _module_to_src_dict = dict() # {module: src, ...}
31
- _src_to_module_dict = dict() # {src: module, ...}
32
-
33
- # ----------------------------------------------------------------------------
34
-
35
-
36
- def persistent_class(orig_class):
37
- r"""Class decorator that extends a given class to save its source code
38
- when pickled.
39
-
40
- Example:
41
-
42
- from torch_utils import persistence
43
-
44
- @persistence.persistent_class
45
- class MyNetwork(torch.nn.Module):
46
- def __init__(self, num_inputs, num_outputs):
47
- super().__init__()
48
- self.fc = MyLayer(num_inputs, num_outputs)
49
- ...
50
-
51
- @persistence.persistent_class
52
- class MyLayer(torch.nn.Module):
53
- ...
54
-
55
- When pickled, any instance of `MyNetwork` and `MyLayer` will save its
56
- source code alongside other internal state (e.g., parameters, buffers,
57
- and submodules). This way, any previously exported pickle will remain
58
- usable even if the class definitions have been modified or are no
59
- longer available.
60
-
61
- The decorator saves the source code of the entire Python module
62
- containing the decorated class. It does *not* save the source code of
63
- any imported modules. Thus, the imported modules must be available
64
- during unpickling, also including `torch_utils.persistence` itself.
65
-
66
- It is ok to call functions defined in the same module from the
67
- decorated class. However, if the decorated class depends on other
68
- classes defined in the same module, they must be decorated as well.
69
- This is illustrated in the above example in the case of `MyLayer`.
70
-
71
- It is also possible to employ the decorator just-in-time before
72
- calling the constructor. For example:
73
-
74
- cls = MyLayer
75
- if want_to_make_it_persistent:
76
- cls = persistence.persistent_class(cls)
77
- layer = cls(num_inputs, num_outputs)
78
-
79
- As an additional feature, the decorator also keeps track of the
80
- arguments that were used to construct each instance of the decorated
81
- class. The arguments can be queried via `obj.init_args` and
82
- `obj.init_kwargs`, and they are automatically pickled alongside other
83
- object state. A typical use case is to first unpickle a previous
84
- instance of a persistent class, and then upgrade it to use the latest
85
- version of the source code:
86
-
87
- with open('old_pickle.pkl', 'rb') as f:
88
- old_net = pickle.load(f)
89
- new_net = MyNetwork(*old_net.init_args, **old_net.init_kwargs)
90
- misc.copy_params_and_buffers(old_net, new_net, require_all=True)
91
- """
92
- assert isinstance(orig_class, type)
93
- if is_persistent(orig_class):
94
- return orig_class
95
-
96
- assert orig_class.__module__ in sys.modules
97
- orig_module = sys.modules[orig_class.__module__]
98
- orig_module_src = _module_to_src(orig_module)
99
-
100
- class Decorator(orig_class):
101
- _orig_module_src = orig_module_src
102
- _orig_class_name = orig_class.__name__
103
-
104
- def __init__(self, *args, **kwargs):
105
- super().__init__(*args, **kwargs)
106
- self._init_args = copy.deepcopy(args)
107
- self._init_kwargs = copy.deepcopy(kwargs)
108
- assert orig_class.__name__ in orig_module.__dict__
109
- _check_pickleable(self.__reduce__())
110
-
111
- @property
112
- def init_args(self):
113
- return copy.deepcopy(self._init_args)
114
-
115
- @property
116
- def init_kwargs(self):
117
- return dnnlib.EasyDict(copy.deepcopy(self._init_kwargs))
118
-
119
- def __reduce__(self):
120
- fields = list(super().__reduce__())
121
- fields += [None] * max(3 - len(fields), 0)
122
- if fields[0] is not _reconstruct_persistent_obj:
123
- meta = dict(type='class', version=_version, module_src=self._orig_module_src,
124
- class_name=self._orig_class_name, state=fields[2])
125
- fields[0] = _reconstruct_persistent_obj # reconstruct func
126
- fields[1] = (meta,) # reconstruct args
127
- fields[2] = None # state dict
128
- return tuple(fields)
129
-
130
- Decorator.__name__ = orig_class.__name__
131
- _decorators.add(Decorator)
132
- return Decorator
133
-
134
- # ----------------------------------------------------------------------------
135
-
136
-
137
- def is_persistent(obj):
138
- r"""Test whether the given object or class is persistent, i.e.,
139
- whether it will save its source code when pickled.
140
- """
141
- try:
142
- if obj in _decorators:
143
- return True
144
- except TypeError:
145
- pass
146
- return type(obj) in _decorators # pylint: disable=unidiomatic-typecheck
147
-
148
- # ----------------------------------------------------------------------------
149
-
150
-
151
- def import_hook(hook):
152
- r"""Register an import hook that is called whenever a persistent object
153
- is being unpickled. A typical use case is to patch the pickled source
154
- code to avoid errors and inconsistencies when the API of some imported
155
- module has changed.
156
-
157
- The hook should have the following signature:
158
-
159
- hook(meta) -> modified meta
160
-
161
- `meta` is an instance of `dnnlib.EasyDict` with the following fields:
162
-
163
- type: Type of the persistent object, e.g. `'class'`.
164
- version: Internal version number of `torch_utils.persistence`.
165
- module_src: Original source code of the Python module.
166
- class_name: Class name in the original Python module.
167
- state: Internal state of the object.
168
-
169
- Example:
170
-
171
- @persistence.import_hook
172
- def wreck_my_network(meta):
173
- if meta.class_name == 'MyNetwork':
174
- print('MyNetwork is being imported. I will wreck it!')
175
- meta.module_src = meta.module_src.replace("True", "False")
176
- return meta
177
- """
178
- assert callable(hook)
179
- _import_hooks.append(hook)
180
-
181
- # ----------------------------------------------------------------------------
182
-
183
-
184
- def _reconstruct_persistent_obj(meta):
185
- r"""Hook that is called internally by the `pickle` module to unpickle
186
- a persistent object.
187
- """
188
- meta = dnnlib.EasyDict(meta)
189
- meta.state = dnnlib.EasyDict(meta.state)
190
- for hook in _import_hooks:
191
- meta = hook(meta)
192
- assert meta is not None
193
-
194
- assert meta.version == _version
195
- module = _src_to_module(meta.module_src)
196
-
197
- assert meta.type == 'class'
198
- orig_class = module.__dict__[meta.class_name]
199
- decorator_class = persistent_class(orig_class)
200
- obj = decorator_class.__new__(decorator_class)
201
-
202
- setstate = getattr(obj, '__setstate__', None)
203
- if callable(setstate):
204
- setstate(meta.state) # pylint: disable=not-callable
205
- else:
206
- obj.__dict__.update(meta.state)
207
- return obj
208
-
209
- # ----------------------------------------------------------------------------
210
-
211
-
212
- def _module_to_src(module):
213
- r"""Query the source code of a given Python module.
214
- """
215
- src = _module_to_src_dict.get(module, None)
216
- if src is None:
217
- src = inspect.getsource(module)
218
- _module_to_src_dict[module] = src
219
- _src_to_module_dict[src] = module
220
- return src
221
-
222
-
223
- def _src_to_module(src):
224
- r"""Get or create a Python module for the given source code.
225
- """
226
- module = _src_to_module_dict.get(src, None)
227
- if module is None:
228
- module_name = "_imported_module_" + uuid.uuid4().hex
229
- module = types.ModuleType(module_name)
230
- sys.modules[module_name] = module
231
- _module_to_src_dict[module] = src
232
- _src_to_module_dict[src] = module
233
- exec(src, module.__dict__) # pylint: disable=exec-used
234
- return module
235
-
236
- # ----------------------------------------------------------------------------
237
-
238
-
239
- def _check_pickleable(obj):
240
- r"""Check that the given object is pickleable, raising an exception if
241
- it is not. This function is expected to be considerably more efficient
242
- than actually pickling the object.
243
- """
244
- def recurse(obj):
245
- if isinstance(obj, (list, tuple, set)):
246
- return [recurse(x) for x in obj]
247
- if isinstance(obj, dict):
248
- return [[recurse(x), recurse(y)] for x, y in obj.items()]
249
- if isinstance(obj, (str, int, float, bool, bytes, bytearray)):
250
- return None # Python primitive types are pickleable.
251
- if f'{type(obj).__module__}.{type(obj).__name__}' in ['numpy.ndarray', 'torch.Tensor', 'torch.nn.parameter.Parameter']:
252
- return None # NumPy arrays and PyTorch tensors are pickleable.
253
- if is_persistent(obj):
254
- # Persistent objects are pickleable, by virtue of the constructor check.
255
- return None
256
- return obj
257
- with io.BytesIO() as f:
258
- pickle.dump(recurse(obj), f)
259
-
260
- # ----------------------------------------------------------------------------
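The docstrings above already carry the intended usage; the sketch below just condenses them into one runnable flow, assuming `torch_utils` and `dnnlib` from this repository are importable. Class and file names are illustrative only, not part of the original module.

```python
# Hypothetical example distilled from the persistent_class docstring above.
import pickle
import torch
from torch_utils import persistence

@persistence.persistent_class
class TinyNet(torch.nn.Module):
    def __init__(self, num_inputs, num_outputs):
        super().__init__()
        self.fc = torch.nn.Linear(num_inputs, num_outputs)

net = TinyNet(4, 2)
with open('tiny_net.pkl', 'wb') as f:
    pickle.dump(net, f)              # module source is pickled alongside the parameters

with open('tiny_net.pkl', 'rb') as f:
    restored = pickle.load(f)        # loads even if TinyNet's source later changes
print(restored.init_args, restored.init_kwargs)  # (4, 2) {}
```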
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/euler_ancestral.md DELETED
@@ -1,21 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Euler Ancestral scheduler
14
-
15
- ## Overview
16
-
17
- Ancestral sampling with Euler method steps. Based on the original [k-diffusion](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72) implementation by Katherine Crowson.
18
- A fast scheduler that often generates good outputs in 20-30 steps.
19
-
20
- ## EulerAncestralDiscreteScheduler
21
- [[autodoc]] EulerAncestralDiscreteScheduler
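Since the page above only lists the autodoc stub, here is a minimal sketch of how this scheduler is typically swapped into a diffusers pipeline; the model id and prompt are placeholders, not part of the original document.

```python
# Sketch: replace a pipeline's default scheduler with EulerAncestralDiscreteScheduler.
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# Per the note above, 20-30 steps is usually enough with this scheduler.
image = pipe("an astronaut riding a horse", num_inference_steps=25).images[0]
image.save("euler_ancestral.png")
```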
 
spaces/Andy1621/uniformer_image_detection/exp/mask_rcnn_3x_ms_hybrid_small/config.py DELETED
@@ -1,82 +0,0 @@
1
- _base_ = [
2
- '../../configs/_base_/models/mask_rcnn_uniformer_fpn.py',
3
- '../../configs/_base_/datasets/coco_instance.py',
4
- '../../configs/_base_/schedules/schedule_1x.py',
5
- '../../configs/_base_/default_runtime.py'
6
- ]
7
-
8
- model = dict(
9
- backbone=dict(
10
- embed_dim=[64, 128, 320, 512],
11
- layers=[3, 4, 8, 3],
12
- head_dim=64,
13
- drop_path_rate=0.1,
14
- use_checkpoint=True,
15
- checkpoint_num=[0, 0, 8, 0],
16
- windows=False,
17
- hybrid=True,
18
- window_size=14
19
- ),
20
- neck=dict(in_channels=[64, 128, 320, 512]))
21
-
22
- img_norm_cfg = dict(
23
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
24
-
25
- # augmentation strategy originates from DETR / Sparse RCNN
26
- train_pipeline = [
27
- dict(type='LoadImageFromFile'),
28
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
29
- dict(type='RandomFlip', flip_ratio=0.5),
30
- dict(type='AutoAugment',
31
- policies=[
32
- [
33
- dict(type='Resize',
34
- img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333),
35
- (608, 1333), (640, 1333), (672, 1333), (704, 1333),
36
- (736, 1333), (768, 1333), (800, 1333)],
37
- multiscale_mode='value',
38
- keep_ratio=True)
39
- ],
40
- [
41
- dict(type='Resize',
42
- img_scale=[(400, 1333), (500, 1333), (600, 1333)],
43
- multiscale_mode='value',
44
- keep_ratio=True),
45
- dict(type='RandomCrop',
46
- crop_type='absolute_range',
47
- crop_size=(384, 600),
48
- allow_negative_crop=True),
49
- dict(type='Resize',
50
- img_scale=[(480, 1333), (512, 1333), (544, 1333),
51
- (576, 1333), (608, 1333), (640, 1333),
52
- (672, 1333), (704, 1333), (736, 1333),
53
- (768, 1333), (800, 1333)],
54
- multiscale_mode='value',
55
- override=True,
56
- keep_ratio=True)
57
- ]
58
- ]),
59
- dict(type='Normalize', **img_norm_cfg),
60
- dict(type='Pad', size_divisor=32),
61
- dict(type='DefaultFormatBundle'),
62
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
63
- ]
64
- data = dict(train=dict(pipeline=train_pipeline))
65
-
66
- optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, betas=(0.9, 0.999), weight_decay=0.05,
67
- paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.),
68
- 'relative_position_bias_table': dict(decay_mult=0.),
69
- 'norm': dict(decay_mult=0.)}))
70
- lr_config = dict(step=[27, 33])
71
- runner = dict(type='EpochBasedRunnerAmp', max_epochs=36)
72
-
73
- # do not use mmdet version fp16
74
- fp16 = None
75
- optimizer_config = dict(
76
- type="DistOptimizerHook",
77
- update_interval=1,
78
- grad_clip=None,
79
- coalesce=True,
80
- bucket_size_mb=-1,
81
- use_fp16=True,
82
- )
 
spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/anchor_head.py DELETED
@@ -1,751 +0,0 @@
1
- import torch
2
- import torch.nn as nn
3
- from mmcv.cnn import normal_init
4
- from mmcv.runner import force_fp32
5
-
6
- from mmdet.core import (anchor_inside_flags, build_anchor_generator,
7
- build_assigner, build_bbox_coder, build_sampler,
8
- images_to_levels, multi_apply, multiclass_nms, unmap)
9
- from ..builder import HEADS, build_loss
10
- from .base_dense_head import BaseDenseHead
11
- from .dense_test_mixins import BBoxTestMixin
12
-
13
-
14
- @HEADS.register_module()
15
- class AnchorHead(BaseDenseHead, BBoxTestMixin):
16
- """Anchor-based head (RPN, RetinaNet, SSD, etc.).
17
-
18
- Args:
19
- num_classes (int): Number of categories excluding the background
20
- category.
21
- in_channels (int): Number of channels in the input feature map.
22
- feat_channels (int): Number of hidden channels. Used in child classes.
23
- anchor_generator (dict): Config dict for anchor generator
24
- bbox_coder (dict): Config of bounding box coder.
25
- reg_decoded_bbox (bool): If true, the regression loss would be
26
- applied directly on decoded bounding boxes, converting both
27
- the predicted boxes and regression targets to absolute
28
- coordinates format. Default False. It should be `True` when
29
- using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head.
30
- loss_cls (dict): Config of classification loss.
31
- loss_bbox (dict): Config of localization loss.
32
- train_cfg (dict): Training config of anchor head.
33
- test_cfg (dict): Testing config of anchor head.
34
- """ # noqa: W605
35
-
36
- def __init__(self,
37
- num_classes,
38
- in_channels,
39
- feat_channels=256,
40
- anchor_generator=dict(
41
- type='AnchorGenerator',
42
- scales=[8, 16, 32],
43
- ratios=[0.5, 1.0, 2.0],
44
- strides=[4, 8, 16, 32, 64]),
45
- bbox_coder=dict(
46
- type='DeltaXYWHBBoxCoder',
47
- clip_border=True,
48
- target_means=(.0, .0, .0, .0),
49
- target_stds=(1.0, 1.0, 1.0, 1.0)),
50
- reg_decoded_bbox=False,
51
- loss_cls=dict(
52
- type='CrossEntropyLoss',
53
- use_sigmoid=True,
54
- loss_weight=1.0),
55
- loss_bbox=dict(
56
- type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0),
57
- train_cfg=None,
58
- test_cfg=None):
59
- super(AnchorHead, self).__init__()
60
- self.in_channels = in_channels
61
- self.num_classes = num_classes
62
- self.feat_channels = feat_channels
63
- self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False)
64
- # TODO better way to determine whether sample or not
65
- self.sampling = loss_cls['type'] not in [
66
- 'FocalLoss', 'GHMC', 'QualityFocalLoss'
67
- ]
68
- if self.use_sigmoid_cls:
69
- self.cls_out_channels = num_classes
70
- else:
71
- self.cls_out_channels = num_classes + 1
72
-
73
- if self.cls_out_channels <= 0:
74
- raise ValueError(f'num_classes={num_classes} is too small')
75
- self.reg_decoded_bbox = reg_decoded_bbox
76
-
77
- self.bbox_coder = build_bbox_coder(bbox_coder)
78
- self.loss_cls = build_loss(loss_cls)
79
- self.loss_bbox = build_loss(loss_bbox)
80
- self.train_cfg = train_cfg
81
- self.test_cfg = test_cfg
82
- if self.train_cfg:
83
- self.assigner = build_assigner(self.train_cfg.assigner)
84
- # use PseudoSampler when sampling is False
85
- if self.sampling and hasattr(self.train_cfg, 'sampler'):
86
- sampler_cfg = self.train_cfg.sampler
87
- else:
88
- sampler_cfg = dict(type='PseudoSampler')
89
- self.sampler = build_sampler(sampler_cfg, context=self)
90
- self.fp16_enabled = False
91
-
92
- self.anchor_generator = build_anchor_generator(anchor_generator)
93
- # usually the numbers of anchors for each level are the same
94
- # except SSD detectors
95
- self.num_anchors = self.anchor_generator.num_base_anchors[0]
96
- self._init_layers()
97
-
98
- def _init_layers(self):
99
- """Initialize layers of the head."""
100
- self.conv_cls = nn.Conv2d(self.in_channels,
101
- self.num_anchors * self.cls_out_channels, 1)
102
- self.conv_reg = nn.Conv2d(self.in_channels, self.num_anchors * 4, 1)
103
-
104
- def init_weights(self):
105
- """Initialize weights of the head."""
106
- normal_init(self.conv_cls, std=0.01)
107
- normal_init(self.conv_reg, std=0.01)
108
-
109
- def forward_single(self, x):
110
- """Forward feature of a single scale level.
111
-
112
- Args:
113
- x (Tensor): Features of a single scale level.
114
-
115
- Returns:
116
- tuple:
117
- cls_score (Tensor): Cls scores for a single scale level \
118
- the channels number is num_anchors * num_classes.
119
- bbox_pred (Tensor): Box energies / deltas for a single scale \
120
- level, the channels number is num_anchors * 4.
121
- """
122
- cls_score = self.conv_cls(x)
123
- bbox_pred = self.conv_reg(x)
124
- return cls_score, bbox_pred
125
-
126
- def forward(self, feats):
127
- """Forward features from the upstream network.
128
-
129
- Args:
130
- feats (tuple[Tensor]): Features from the upstream network, each is
131
- a 4D-tensor.
132
-
133
- Returns:
134
- tuple: A tuple of classification scores and bbox prediction.
135
-
136
- - cls_scores (list[Tensor]): Classification scores for all \
137
- scale levels, each is a 4D-tensor, the channels number \
138
- is num_anchors * num_classes.
139
- - bbox_preds (list[Tensor]): Box energies / deltas for all \
140
- scale levels, each is a 4D-tensor, the channels number \
141
- is num_anchors * 4.
142
- """
143
- return multi_apply(self.forward_single, feats)
144
-
145
- def get_anchors(self, featmap_sizes, img_metas, device='cuda'):
146
- """Get anchors according to feature map sizes.
147
-
148
- Args:
149
- featmap_sizes (list[tuple]): Multi-level feature map sizes.
150
- img_metas (list[dict]): Image meta info.
151
- device (torch.device | str): Device for returned tensors
152
-
153
- Returns:
154
- tuple:
155
- anchor_list (list[Tensor]): Anchors of each image.
156
- valid_flag_list (list[Tensor]): Valid flags of each image.
157
- """
158
- num_imgs = len(img_metas)
159
-
160
- # since feature map sizes of all images are the same, we only compute
161
- # anchors for one time
162
- multi_level_anchors = self.anchor_generator.grid_anchors(
163
- featmap_sizes, device)
164
- anchor_list = [multi_level_anchors for _ in range(num_imgs)]
165
-
166
- # for each image, we compute valid flags of multi level anchors
167
- valid_flag_list = []
168
- for img_id, img_meta in enumerate(img_metas):
169
- multi_level_flags = self.anchor_generator.valid_flags(
170
- featmap_sizes, img_meta['pad_shape'], device)
171
- valid_flag_list.append(multi_level_flags)
172
-
173
- return anchor_list, valid_flag_list
174
-
175
- def _get_targets_single(self,
176
- flat_anchors,
177
- valid_flags,
178
- gt_bboxes,
179
- gt_bboxes_ignore,
180
- gt_labels,
181
- img_meta,
182
- label_channels=1,
183
- unmap_outputs=True):
184
- """Compute regression and classification targets for anchors in a
185
- single image.
186
-
187
- Args:
188
- flat_anchors (Tensor): Multi-level anchors of the image, which are
189
- concatenated into a single tensor of shape (num_anchors ,4)
190
- valid_flags (Tensor): Multi level valid flags of the image,
191
- which are concatenated into a single tensor of
192
- shape (num_anchors,).
193
- gt_bboxes (Tensor): Ground truth bboxes of the image,
194
- shape (num_gts, 4).
195
- gt_bboxes_ignore (Tensor): Ground truth bboxes to be
196
- ignored, shape (num_ignored_gts, 4).
197
- img_meta (dict): Meta info of the image.
198
- gt_labels (Tensor): Ground truth labels of each box,
199
- shape (num_gts,).
200
- label_channels (int): Channel of label.
201
- unmap_outputs (bool): Whether to map outputs back to the original
202
- set of anchors.
203
-
204
- Returns:
205
- tuple:
206
- labels_list (list[Tensor]): Labels of each level
207
- label_weights_list (list[Tensor]): Label weights of each level
208
- bbox_targets_list (list[Tensor]): BBox targets of each level
209
- bbox_weights_list (list[Tensor]): BBox weights of each level
210
- num_total_pos (int): Number of positive samples in all images
211
- num_total_neg (int): Number of negative samples in all images
212
- """
213
- inside_flags = anchor_inside_flags(flat_anchors, valid_flags,
214
- img_meta['img_shape'][:2],
215
- self.train_cfg.allowed_border)
216
- if not inside_flags.any():
217
- return (None, ) * 7
218
- # assign gt and sample anchors
219
- anchors = flat_anchors[inside_flags, :]
220
-
221
- assign_result = self.assigner.assign(
222
- anchors, gt_bboxes, gt_bboxes_ignore,
223
- None if self.sampling else gt_labels)
224
- sampling_result = self.sampler.sample(assign_result, anchors,
225
- gt_bboxes)
226
-
227
- num_valid_anchors = anchors.shape[0]
228
- bbox_targets = torch.zeros_like(anchors)
229
- bbox_weights = torch.zeros_like(anchors)
230
- labels = anchors.new_full((num_valid_anchors, ),
231
- self.num_classes,
232
- dtype=torch.long)
233
- label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float)
234
-
235
- pos_inds = sampling_result.pos_inds
236
- neg_inds = sampling_result.neg_inds
237
- if len(pos_inds) > 0:
238
- if not self.reg_decoded_bbox:
239
- pos_bbox_targets = self.bbox_coder.encode(
240
- sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes)
241
- else:
242
- pos_bbox_targets = sampling_result.pos_gt_bboxes
243
- bbox_targets[pos_inds, :] = pos_bbox_targets
244
- bbox_weights[pos_inds, :] = 1.0
245
- if gt_labels is None:
246
- # Only rpn gives gt_labels as None
247
- # Foreground is the first class since v2.5.0
248
- labels[pos_inds] = 0
249
- else:
250
- labels[pos_inds] = gt_labels[
251
- sampling_result.pos_assigned_gt_inds]
252
- if self.train_cfg.pos_weight <= 0:
253
- label_weights[pos_inds] = 1.0
254
- else:
255
- label_weights[pos_inds] = self.train_cfg.pos_weight
256
- if len(neg_inds) > 0:
257
- label_weights[neg_inds] = 1.0
258
-
259
- # map up to original set of anchors
260
- if unmap_outputs:
261
- num_total_anchors = flat_anchors.size(0)
262
- labels = unmap(
263
- labels, num_total_anchors, inside_flags,
264
- fill=self.num_classes) # fill bg label
265
- label_weights = unmap(label_weights, num_total_anchors,
266
- inside_flags)
267
- bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags)
268
- bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags)
269
-
270
- return (labels, label_weights, bbox_targets, bbox_weights, pos_inds,
271
- neg_inds, sampling_result)
272
-
273
- def get_targets(self,
274
- anchor_list,
275
- valid_flag_list,
276
- gt_bboxes_list,
277
- img_metas,
278
- gt_bboxes_ignore_list=None,
279
- gt_labels_list=None,
280
- label_channels=1,
281
- unmap_outputs=True,
282
- return_sampling_results=False):
283
- """Compute regression and classification targets for anchors in
284
- multiple images.
285
-
286
- Args:
287
- anchor_list (list[list[Tensor]]): Multi level anchors of each
288
- image. The outer list indicates images, and the inner list
289
- corresponds to feature levels of the image. Each element of
290
- the inner list is a tensor of shape (num_anchors, 4).
291
- valid_flag_list (list[list[Tensor]]): Multi level valid flags of
292
- each image. The outer list indicates images, and the inner list
293
- corresponds to feature levels of the image. Each element of
294
- the inner list is a tensor of shape (num_anchors, )
295
- gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image.
296
- img_metas (list[dict]): Meta info of each image.
297
- gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be
298
- ignored.
299
- gt_labels_list (list[Tensor]): Ground truth labels of each box.
300
- label_channels (int): Channel of label.
301
- unmap_outputs (bool): Whether to map outputs back to the original
302
- set of anchors.
303
-
304
- Returns:
305
- tuple: Usually returns a tuple containing learning targets.
306
-
307
- - labels_list (list[Tensor]): Labels of each level.
308
- - label_weights_list (list[Tensor]): Label weights of each \
309
- level.
310
- - bbox_targets_list (list[Tensor]): BBox targets of each level.
311
- - bbox_weights_list (list[Tensor]): BBox weights of each level.
312
- - num_total_pos (int): Number of positive samples in all \
313
- images.
314
- - num_total_neg (int): Number of negative samples in all \
315
- images.
316
- additional_returns: This function enables user-defined returns from
317
- `self._get_targets_single`. These returns are currently refined
318
- to properties at each feature map (i.e. having HxW dimension).
319
- The results are concatenated and regrouped per feature level at the end of this function.
320
- """
321
- num_imgs = len(img_metas)
322
- assert len(anchor_list) == len(valid_flag_list) == num_imgs
323
-
324
- # anchor number of multi levels
325
- num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]]
326
- # concat all level anchors to a single tensor
327
- concat_anchor_list = []
328
- concat_valid_flag_list = []
329
- for i in range(num_imgs):
330
- assert len(anchor_list[i]) == len(valid_flag_list[i])
331
- concat_anchor_list.append(torch.cat(anchor_list[i]))
332
- concat_valid_flag_list.append(torch.cat(valid_flag_list[i]))
333
-
334
- # compute targets for each image
335
- if gt_bboxes_ignore_list is None:
336
- gt_bboxes_ignore_list = [None for _ in range(num_imgs)]
337
- if gt_labels_list is None:
338
- gt_labels_list = [None for _ in range(num_imgs)]
339
- results = multi_apply(
340
- self._get_targets_single,
341
- concat_anchor_list,
342
- concat_valid_flag_list,
343
- gt_bboxes_list,
344
- gt_bboxes_ignore_list,
345
- gt_labels_list,
346
- img_metas,
347
- label_channels=label_channels,
348
- unmap_outputs=unmap_outputs)
349
- (all_labels, all_label_weights, all_bbox_targets, all_bbox_weights,
350
- pos_inds_list, neg_inds_list, sampling_results_list) = results[:7]
351
- rest_results = list(results[7:]) # user-added return values
352
- # no valid anchors
353
- if any([labels is None for labels in all_labels]):
354
- return None
355
- # sampled anchors of all images
356
- num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list])
357
- num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list])
358
- # split targets to a list w.r.t. multiple levels
359
- labels_list = images_to_levels(all_labels, num_level_anchors)
360
- label_weights_list = images_to_levels(all_label_weights,
361
- num_level_anchors)
362
- bbox_targets_list = images_to_levels(all_bbox_targets,
363
- num_level_anchors)
364
- bbox_weights_list = images_to_levels(all_bbox_weights,
365
- num_level_anchors)
366
- res = (labels_list, label_weights_list, bbox_targets_list,
367
- bbox_weights_list, num_total_pos, num_total_neg)
368
- if return_sampling_results:
369
- res = res + (sampling_results_list, )
370
- for i, r in enumerate(rest_results): # user-added return values
371
- rest_results[i] = images_to_levels(r, num_level_anchors)
372
-
373
- return res + tuple(rest_results)
374
-
375
- def loss_single(self, cls_score, bbox_pred, anchors, labels, label_weights,
376
- bbox_targets, bbox_weights, num_total_samples):
377
- """Compute loss of a single scale level.
378
-
379
- Args:
380
- cls_score (Tensor): Box scores for each scale level
381
- Has shape (N, num_anchors * num_classes, H, W).
382
- bbox_pred (Tensor): Box energies / deltas for each scale
383
- level with shape (N, num_anchors * 4, H, W).
384
- anchors (Tensor): Box reference for each scale level with shape
385
- (N, num_total_anchors, 4).
386
- labels (Tensor): Labels of each anchors with shape
387
- (N, num_total_anchors).
388
- label_weights (Tensor): Label weights of each anchor with shape
389
- (N, num_total_anchors)
390
- bbox_targets (Tensor): BBox regression targets of each anchor with
391
- shape (N, num_total_anchors, 4).
392
- bbox_weights (Tensor): BBox regression loss weights of each anchor
393
- with shape (N, num_total_anchors, 4).
394
- num_total_samples (int): If sampling is used, this equals the number
- of positive plus negative anchors; otherwise it is the number of
- positive anchors.
397
-
398
- Returns:
399
- dict[str, Tensor]: A dictionary of loss components.
400
- """
401
- # classification loss
402
- labels = labels.reshape(-1)
403
- label_weights = label_weights.reshape(-1)
404
- cls_score = cls_score.permute(0, 2, 3,
405
- 1).reshape(-1, self.cls_out_channels)
406
- loss_cls = self.loss_cls(
407
- cls_score, labels, label_weights, avg_factor=num_total_samples)
408
- # regression loss
409
- bbox_targets = bbox_targets.reshape(-1, 4)
410
- bbox_weights = bbox_weights.reshape(-1, 4)
411
- bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4)
412
- if self.reg_decoded_bbox:
413
- # When the regression loss (e.g. `IouLoss`, `GIouLoss`)
414
- # is applied directly on the decoded bounding boxes, it
415
- # decodes the already encoded coordinates to absolute format.
416
- anchors = anchors.reshape(-1, 4)
417
- bbox_pred = self.bbox_coder.decode(anchors, bbox_pred)
418
- loss_bbox = self.loss_bbox(
419
- bbox_pred,
420
- bbox_targets,
421
- bbox_weights,
422
- avg_factor=num_total_samples)
423
- return loss_cls, loss_bbox
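
Note on the `reg_decoded_bbox` branch above: by default the head regresses encoded deltas and the targets come from `bbox_coder.encode`; with `reg_decoded_bbox=True` the predictions are decoded first and an IoU-style loss is computed on absolute boxes. The following is a simplified, self-contained sketch of such a delta encode/decode round trip (plain (dx, dy, dw, dh) deltas without target means/stds, for illustration only, not the library's box coder):

import torch

def encode(anchors, gt):
    # anchors, gt: (N, 4) boxes in (x1, y1, x2, y2) format
    aw, ah = anchors[:, 2] - anchors[:, 0], anchors[:, 3] - anchors[:, 1]
    ax, ay = anchors[:, 0] + 0.5 * aw, anchors[:, 1] + 0.5 * ah
    gw, gh = gt[:, 2] - gt[:, 0], gt[:, 3] - gt[:, 1]
    gx, gy = gt[:, 0] + 0.5 * gw, gt[:, 1] + 0.5 * gh
    return torch.stack([(gx - ax) / aw, (gy - ay) / ah,
                        torch.log(gw / aw), torch.log(gh / ah)], dim=-1)

def decode(anchors, deltas):
    aw, ah = anchors[:, 2] - anchors[:, 0], anchors[:, 3] - anchors[:, 1]
    ax, ay = anchors[:, 0] + 0.5 * aw, anchors[:, 1] + 0.5 * ah
    cx, cy = ax + deltas[:, 0] * aw, ay + deltas[:, 1] * ah
    w, h = aw * deltas[:, 2].exp(), ah * deltas[:, 3].exp()
    return torch.stack([cx - 0.5 * w, cy - 0.5 * h,
                        cx + 0.5 * w, cy + 0.5 * h], dim=-1)

anchors = torch.tensor([[0., 0., 10., 10.]])
gt = torch.tensor([[2., 2., 14., 10.]])
# The round trip recovers the ground-truth box exactly.
assert torch.allclose(decode(anchors, encode(anchors, gt)), gt)
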
424
-
425
- @force_fp32(apply_to=('cls_scores', 'bbox_preds'))
426
- def loss(self,
427
- cls_scores,
428
- bbox_preds,
429
- gt_bboxes,
430
- gt_labels,
431
- img_metas,
432
- gt_bboxes_ignore=None):
433
- """Compute losses of the head.
434
-
435
- Args:
436
- cls_scores (list[Tensor]): Box scores for each scale level
437
- Has shape (N, num_anchors * num_classes, H, W)
438
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
439
- level with shape (N, num_anchors * 4, H, W)
440
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
441
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
442
- gt_labels (list[Tensor]): class indices corresponding to each box
443
- img_metas (list[dict]): Meta information of each image, e.g.,
444
- image size, scaling factor, etc.
445
- gt_bboxes_ignore (None | list[Tensor]): specify which bounding
446
- boxes can be ignored when computing the loss. Default: None
447
-
448
- Returns:
449
- dict[str, Tensor]: A dictionary of loss components.
450
- """
451
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
452
- assert len(featmap_sizes) == self.anchor_generator.num_levels
453
-
454
- device = cls_scores[0].device
455
-
456
- anchor_list, valid_flag_list = self.get_anchors(
457
- featmap_sizes, img_metas, device=device)
458
- label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1
459
- cls_reg_targets = self.get_targets(
460
- anchor_list,
461
- valid_flag_list,
462
- gt_bboxes,
463
- img_metas,
464
- gt_bboxes_ignore_list=gt_bboxes_ignore,
465
- gt_labels_list=gt_labels,
466
- label_channels=label_channels)
467
- if cls_reg_targets is None:
468
- return None
469
- (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list,
470
- num_total_pos, num_total_neg) = cls_reg_targets
471
- num_total_samples = (
472
- num_total_pos + num_total_neg if self.sampling else num_total_pos)
473
-
474
- # anchor number of multi levels
475
- num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]]
476
- # concat all level anchors and flags to a single tensor
477
- concat_anchor_list = []
478
- for i in range(len(anchor_list)):
479
- concat_anchor_list.append(torch.cat(anchor_list[i]))
480
- all_anchor_list = images_to_levels(concat_anchor_list,
481
- num_level_anchors)
482
-
483
- losses_cls, losses_bbox = multi_apply(
484
- self.loss_single,
485
- cls_scores,
486
- bbox_preds,
487
- all_anchor_list,
488
- labels_list,
489
- label_weights_list,
490
- bbox_targets_list,
491
- bbox_weights_list,
492
- num_total_samples=num_total_samples)
493
- return dict(loss_cls=losses_cls, loss_bbox=losses_bbox)
494
-
495
- @force_fp32(apply_to=('cls_scores', 'bbox_preds'))
496
- def get_bboxes(self,
497
- cls_scores,
498
- bbox_preds,
499
- img_metas,
500
- cfg=None,
501
- rescale=False,
502
- with_nms=True):
503
- """Transform network output for a batch into bbox predictions.
504
-
505
- Args:
506
- cls_scores (list[Tensor]): Box scores for each level in the
507
- feature pyramid, has shape
508
- (N, num_anchors * num_classes, H, W).
509
- bbox_preds (list[Tensor]): Box energies / deltas for each
510
- level in the feature pyramid, has shape
511
- (N, num_anchors * 4, H, W).
512
- img_metas (list[dict]): Meta information of each image, e.g.,
513
- image size, scaling factor, etc.
514
- cfg (mmcv.Config | None): Test / postprocessing configuration,
515
- if None, test_cfg would be used
516
- rescale (bool): If True, return boxes in original image space.
517
- Default: False.
518
- with_nms (bool): If True, do nms before return boxes.
519
- Default: True.
520
-
521
- Returns:
522
- list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple.
523
- The first item is an (n, 5) tensor, where 5 represent
524
- (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1.
525
- The shape of the second tensor in the tuple is (n,), and
526
- each element represents the class label of the corresponding
527
- box.
528
-
529
- Example:
530
- >>> import mmcv
531
- >>> self = AnchorHead(
532
- >>> num_classes=9,
533
- >>> in_channels=1,
534
- >>> anchor_generator=dict(
535
- >>> type='AnchorGenerator',
536
- >>> scales=[8],
537
- >>> ratios=[0.5, 1.0, 2.0],
538
- >>> strides=[4,]))
539
- >>> img_metas = [{'img_shape': (32, 32, 3), 'scale_factor': 1}]
540
- >>> cfg = mmcv.Config(dict(
541
- >>> score_thr=0.00,
542
- >>> nms=dict(type='nms', iou_thr=1.0),
543
- >>> max_per_img=10))
544
- >>> feat = torch.rand(1, 1, 3, 3)
545
- >>> cls_score, bbox_pred = self.forward_single(feat)
546
- >>> # note the input lists are over different levels, not images
547
- >>> cls_scores, bbox_preds = [cls_score], [bbox_pred]
548
- >>> result_list = self.get_bboxes(cls_scores, bbox_preds,
549
- >>> img_metas, cfg)
550
- >>> det_bboxes, det_labels = result_list[0]
551
- >>> assert len(result_list) == 1
552
- >>> assert det_bboxes.shape[1] == 5
553
- >>> assert len(det_bboxes) == len(det_labels) == cfg.max_per_img
554
- """
555
- assert len(cls_scores) == len(bbox_preds)
556
- num_levels = len(cls_scores)
557
-
558
- device = cls_scores[0].device
559
- featmap_sizes = [cls_scores[i].shape[-2:] for i in range(num_levels)]
560
- mlvl_anchors = self.anchor_generator.grid_anchors(
561
- featmap_sizes, device=device)
562
-
563
- mlvl_cls_scores = [cls_scores[i].detach() for i in range(num_levels)]
564
- mlvl_bbox_preds = [bbox_preds[i].detach() for i in range(num_levels)]
565
-
566
- if torch.onnx.is_in_onnx_export():
567
- assert len(
568
- img_metas
569
- ) == 1, 'Only support one input image while in exporting to ONNX'
570
- img_shapes = img_metas[0]['img_shape_for_onnx']
571
- else:
572
- img_shapes = [
573
- img_metas[i]['img_shape']
574
- for i in range(cls_scores[0].shape[0])
575
- ]
576
- scale_factors = [
577
- img_metas[i]['scale_factor'] for i in range(cls_scores[0].shape[0])
578
- ]
579
-
580
- if with_nms:
581
- # some heads don't support with_nms argument
582
- result_list = self._get_bboxes(mlvl_cls_scores, mlvl_bbox_preds,
583
- mlvl_anchors, img_shapes,
584
- scale_factors, cfg, rescale)
585
- else:
586
- result_list = self._get_bboxes(mlvl_cls_scores, mlvl_bbox_preds,
587
- mlvl_anchors, img_shapes,
588
- scale_factors, cfg, rescale,
589
- with_nms)
590
- return result_list
591
-
592
- def _get_bboxes(self,
593
- mlvl_cls_scores,
594
- mlvl_bbox_preds,
595
- mlvl_anchors,
596
- img_shapes,
597
- scale_factors,
598
- cfg,
599
- rescale=False,
600
- with_nms=True):
601
- """Transform outputs for a batch item into bbox predictions.
602
-
603
- Args:
604
- mlvl_cls_scores (list[Tensor]): Each element in the list is
605
- the scores of bboxes of single level in the feature pyramid,
606
- has shape (N, num_anchors * num_classes, H, W).
607
- mlvl_bbox_preds (list[Tensor]): Each element in the list is the
608
- bboxes predictions of single level in the feature pyramid,
609
- has shape (N, num_anchors * 4, H, W).
610
- mlvl_anchors (list[Tensor]): Each element in the list is
611
- the anchors of single level in feature pyramid, has shape
612
- (num_anchors, 4).
613
- img_shapes (list[tuple[int]]): Each tuple in the list represents
614
- the shape (height, width, 3) of a single image in the batch.
615
- scale_factors (list[ndarray]): Scale factors of the batch
616
- images, arranged as list[(w_scale, h_scale, w_scale, h_scale)].
617
- cfg (mmcv.Config): Test / postprocessing configuration,
618
- if None, test_cfg would be used.
619
- rescale (bool): If True, return boxes in original image space.
620
- Default: False.
621
- with_nms (bool): If True, do nms before return boxes.
622
- Default: True.
623
-
624
- Returns:
625
- list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple.
626
- The first item is an (n, 5) tensor, where 5 represent
627
- (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1.
628
- The shape of the second tensor in the tuple is (n,), and
629
- each element represents the class label of the corresponding
630
- box.
631
- """
632
- cfg = self.test_cfg if cfg is None else cfg
633
- assert len(mlvl_cls_scores) == len(mlvl_bbox_preds) == len(
634
- mlvl_anchors)
635
- batch_size = mlvl_cls_scores[0].shape[0]
636
- # convert to tensor to keep tracing
637
- nms_pre_tensor = torch.tensor(
638
- cfg.get('nms_pre', -1),
639
- device=mlvl_cls_scores[0].device,
640
- dtype=torch.long)
641
-
642
- mlvl_bboxes = []
643
- mlvl_scores = []
644
- for cls_score, bbox_pred, anchors in zip(mlvl_cls_scores,
645
- mlvl_bbox_preds,
646
- mlvl_anchors):
647
- assert cls_score.size()[-2:] == bbox_pred.size()[-2:]
648
- cls_score = cls_score.permute(0, 2, 3,
649
- 1).reshape(batch_size, -1,
650
- self.cls_out_channels)
651
- if self.use_sigmoid_cls:
652
- scores = cls_score.sigmoid()
653
- else:
654
- scores = cls_score.softmax(-1)
655
- bbox_pred = bbox_pred.permute(0, 2, 3,
656
- 1).reshape(batch_size, -1, 4)
657
- anchors = anchors.expand_as(bbox_pred)
658
- # Always keep topk op for dynamic input in onnx
659
- if nms_pre_tensor > 0 and (torch.onnx.is_in_onnx_export()
660
- or scores.shape[-2] > nms_pre_tensor):
661
- from torch import _shape_as_tensor
662
- # keep shape as tensor and get k
663
- num_anchor = _shape_as_tensor(scores)[-2].to(
664
- nms_pre_tensor.device)
665
- nms_pre = torch.where(nms_pre_tensor < num_anchor,
666
- nms_pre_tensor, num_anchor)
667
-
668
- # Get maximum scores for foreground classes.
669
- if self.use_sigmoid_cls:
670
- max_scores, _ = scores.max(-1)
671
- else:
672
- # remind that we set FG labels to [0, num_class-1]
673
- # since mmdet v2.0
674
- # BG cat_id: num_class
675
- max_scores, _ = scores[..., :-1].max(-1)
676
-
677
- _, topk_inds = max_scores.topk(nms_pre)
678
- batch_inds = torch.arange(batch_size).view(
679
- -1, 1).expand_as(topk_inds)
680
- anchors = anchors[batch_inds, topk_inds, :]
681
- bbox_pred = bbox_pred[batch_inds, topk_inds, :]
682
- scores = scores[batch_inds, topk_inds, :]
683
-
684
- bboxes = self.bbox_coder.decode(
685
- anchors, bbox_pred, max_shape=img_shapes)
686
- mlvl_bboxes.append(bboxes)
687
- mlvl_scores.append(scores)
688
-
689
- batch_mlvl_bboxes = torch.cat(mlvl_bboxes, dim=1)
690
- if rescale:
691
- batch_mlvl_bboxes /= batch_mlvl_bboxes.new_tensor(
692
- scale_factors).unsqueeze(1)
693
- batch_mlvl_scores = torch.cat(mlvl_scores, dim=1)
694
-
695
- # Set max number of box to be feed into nms in deployment
696
- deploy_nms_pre = cfg.get('deploy_nms_pre', -1)
697
- if deploy_nms_pre > 0 and torch.onnx.is_in_onnx_export():
698
- # Get maximum scores for foreground classes.
699
- if self.use_sigmoid_cls:
700
- max_scores, _ = batch_mlvl_scores.max(-1)
701
- else:
702
- # remind that we set FG labels to [0, num_class-1]
703
- # since mmdet v2.0
704
- # BG cat_id: num_class
705
- max_scores, _ = batch_mlvl_scores[..., :-1].max(-1)
706
- _, topk_inds = max_scores.topk(deploy_nms_pre)
707
- batch_inds = torch.arange(batch_size).view(-1,
708
- 1).expand_as(topk_inds)
709
- batch_mlvl_scores = batch_mlvl_scores[batch_inds, topk_inds]
710
- batch_mlvl_bboxes = batch_mlvl_bboxes[batch_inds, topk_inds]
711
- if self.use_sigmoid_cls:
712
- # Add a dummy background class to the backend when using sigmoid
713
- # remind that we set FG labels to [0, num_class-1] since mmdet v2.0
714
- # BG cat_id: num_class
715
- padding = batch_mlvl_scores.new_zeros(batch_size,
716
- batch_mlvl_scores.shape[1],
717
- 1)
718
- batch_mlvl_scores = torch.cat([batch_mlvl_scores, padding], dim=-1)
719
-
720
- if with_nms:
721
- det_results = []
722
- for (mlvl_bboxes, mlvl_scores) in zip(batch_mlvl_bboxes,
723
- batch_mlvl_scores):
724
- det_bbox, det_label = multiclass_nms(mlvl_bboxes, mlvl_scores,
725
- cfg.score_thr, cfg.nms,
726
- cfg.max_per_img)
727
- det_results.append(tuple([det_bbox, det_label]))
728
- else:
729
- det_results = [
730
- tuple(mlvl_bs)
731
- for mlvl_bs in zip(batch_mlvl_bboxes, batch_mlvl_scores)
732
- ]
733
- return det_results
734
-
735
- def aug_test(self, feats, img_metas, rescale=False):
736
- """Test function with test time augmentation.
737
-
738
- Args:
739
- feats (list[Tensor]): the outer list indicates test-time
740
- augmentations and inner Tensor should have a shape NxCxHxW,
741
- which contains features for all images in the batch.
742
- img_metas (list[list[dict]]): the outer list indicates test-time
743
- augs (multiscale, flip, etc.) and the inner list indicates
744
- images in a batch. each dict has image information.
745
- rescale (bool, optional): Whether to rescale the results.
746
- Defaults to False.
747
-
748
- Returns:
749
- list[ndarray]: bbox results of each class
750
- """
751
- return self.aug_test_bboxes(feats, img_metas, rescale=rescale)
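
To make the target bookkeeping above easier to follow: `get_targets` computes targets per image over concatenated multi-level anchors, and the `images_to_levels` helper then regroups them so that each `loss_single` call sees a single feature level across the whole batch. The sketch below re-implements that regrouping for illustration only; it mirrors the behaviour but is not the mmdet helper itself.

import torch

def split_targets_by_level(per_image_targets, num_level_anchors):
    # per_image_targets: list of (total_anchors, ...) tensors, one per image
    stacked = torch.stack(per_image_targets, 0)      # (num_imgs, total_anchors, ...)
    level_targets, start = [], 0
    for n in num_level_anchors:
        level_targets.append(stacked[:, start:start + n])
        start += n
    return level_targets

# e.g. two images, two levels with 12 and 3 anchors respectively
targets = [torch.zeros(15, 4), torch.zeros(15, 4)]
levels = split_targets_by_level(targets, [12, 3])
assert [t.shape for t in levels] == [torch.Size([2, 12, 4]), torch.Size([2, 3, 4])]
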
spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_480x480_40k_pascal_context.py DELETED
@@ -1,2 +0,0 @@
- _base_ = './fcn_r50-d8_480x480_40k_pascal_context.py'
- model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
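
For readers unfamiliar with these delta configs: the file above only overrides the backbone of its `_base_` config. A minimal sketch of how the merge resolves, assuming mmcv is installed and the base config is present on disk (the path below is illustrative):

from mmcv import Config

# The `_base_` fcn_r50 config is loaded first, then this file's overrides are merged on top.
cfg = Config.fromfile('configs/fcn/fcn_r101-d8_480x480_40k_pascal_context.py')
print(cfg.model.backbone.depth)   # 101, overridden by this file
print(cfg.model.pretrained)       # 'open-mmlab://resnet101_v1c'
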
spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v3/lraspp_m-v3-d8_512x1024_320k_cityscapes.py DELETED
@@ -1,11 +0,0 @@
- _base_ = [
-     '../_base_/models/lraspp_m-v3-d8.py', '../_base_/datasets/cityscapes.py',
-     '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
- ]
-
- model = dict(pretrained='open-mmlab://contrib/mobilenet_v3_large')
-
- # Re-config the data sampler.
- data = dict(samples_per_gpu=4, workers_per_gpu=4)
-
- runner = dict(type='IterBasedRunner', max_iters=320000)
spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_769x769_40k_cityscapes.py DELETED
@@ -1,9 +0,0 @@
- _base_ = [
-     '../_base_/models/psanet_r50-d8.py',
-     '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
-     '../_base_/schedules/schedule_40k.py'
- ]
- model = dict(
-     decode_head=dict(align_corners=True),
-     auxiliary_head=dict(align_corners=True),
-     test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513)))
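
The `slide` test mode above tiles each image with 769x769 crops at a 513-pixel stride and fuses the overlapping logits. A small sketch of the window count this implies for a 1024x2048 Cityscapes frame, assuming the usual ceil-style grid that MMSegmentation's slide inference uses:

import math

def num_windows(img_size, crop, stride):
    # number of crop positions along one axis; the last window is clamped to the border
    return max(math.ceil((img_size - crop) / stride), 0) + 1

h_grids = num_windows(1024, 769, 513)   # 2
w_grids = num_windows(2048, 769, 513)   # 4
print(h_grids * w_grids)                # 8 forward passes per image
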
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/utils/__init__.py DELETED
@@ -1,4 +0,0 @@
- from .collect_env import collect_env
- from .logger import get_root_logger
-
- __all__ = ['get_root_logger', 'collect_env']
spaces/Anonymous-sub/Rerender/gmflow_module/gmflow/trident_conv.py DELETED
@@ -1,90 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates.
2
- # https://github.com/facebookresearch/detectron2/blob/main/projects/TridentNet/tridentnet/trident_conv.py
3
-
4
- import torch
5
- from torch import nn
6
- from torch.nn import functional as F
7
- from torch.nn.modules.utils import _pair
8
-
9
-
10
- class MultiScaleTridentConv(nn.Module):
11
- def __init__(
12
- self,
13
- in_channels,
14
- out_channels,
15
- kernel_size,
16
- stride=1,
17
- strides=1,
18
- paddings=0,
19
- dilations=1,
20
- dilation=1,
21
- groups=1,
22
- num_branch=1,
23
- test_branch_idx=-1,
24
- bias=False,
25
- norm=None,
26
- activation=None,
27
- ):
28
- super(MultiScaleTridentConv, self).__init__()
29
- self.in_channels = in_channels
30
- self.out_channels = out_channels
31
- self.kernel_size = _pair(kernel_size)
32
- self.num_branch = num_branch
33
- self.stride = _pair(stride)
34
- self.groups = groups
35
- self.with_bias = bias
36
- self.dilation = dilation
37
- if isinstance(paddings, int):
38
- paddings = [paddings] * self.num_branch
39
- if isinstance(dilations, int):
40
- dilations = [dilations] * self.num_branch
41
- if isinstance(strides, int):
42
- strides = [strides] * self.num_branch
43
- self.paddings = [_pair(padding) for padding in paddings]
44
- self.dilations = [_pair(dilation) for dilation in dilations]
45
- self.strides = [_pair(stride) for stride in strides]
46
- self.test_branch_idx = test_branch_idx
47
- self.norm = norm
48
- self.activation = activation
49
-
50
- assert len({self.num_branch, len(self.paddings), len(self.strides)}) == 1
51
-
52
- self.weight = nn.Parameter(
53
- torch.Tensor(out_channels, in_channels // groups, *self.kernel_size)
54
- )
55
- if bias:
56
- self.bias = nn.Parameter(torch.Tensor(out_channels))
57
- else:
58
- self.bias = None
59
-
60
- nn.init.kaiming_uniform_(self.weight, nonlinearity="relu")
61
- if self.bias is not None:
62
- nn.init.constant_(self.bias, 0)
63
-
64
- def forward(self, inputs):
65
- num_branch = self.num_branch if self.training or self.test_branch_idx == -1 else 1
66
- assert len(inputs) == num_branch
67
-
68
- if self.training or self.test_branch_idx == -1:
69
- outputs = [
70
- F.conv2d(input, self.weight, self.bias, stride, padding, self.dilation, self.groups)
71
- for input, stride, padding in zip(inputs, self.strides, self.paddings)
72
- ]
73
- else:
74
- outputs = [
75
- F.conv2d(
76
- inputs[0],
77
- self.weight,
78
- self.bias,
79
- self.strides[self.test_branch_idx] if self.test_branch_idx == -1 else self.strides[-1],
80
- self.paddings[self.test_branch_idx] if self.test_branch_idx == -1 else self.paddings[-1],
81
- self.dilation,
82
- self.groups,
83
- )
84
- ]
85
-
86
- if self.norm is not None:
87
- outputs = [self.norm(x) for x in outputs]
88
- if self.activation is not None:
89
- outputs = [self.activation(x) for x in outputs]
90
- return outputs
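
A quick usage sketch for the class above (the values are illustrative): the branches share one convolution weight, but each branch applies its own stride and padding, and in training mode `forward` expects one input tensor per branch.

import torch

conv = MultiScaleTridentConv(
    in_channels=64, out_channels=96, kernel_size=3,
    strides=[1, 2], paddings=[1, 1], num_branch=2)

feats = [torch.randn(1, 64, 32, 32), torch.randn(1, 64, 16, 16)]
outs = conv(feats)                       # one output per branch
print([tuple(o.shape) for o in outs])    # [(1, 96, 32, 32), (1, 96, 8, 8)]
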
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes.py DELETED
@@ -1,329 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates.
2
- import functools
3
- import json
4
- import logging
5
- import multiprocessing as mp
6
- import numpy as np
7
- import os
8
- from itertools import chain
9
- import pycocotools.mask as mask_util
10
- from PIL import Image
11
-
12
- from detectron2.structures import BoxMode
13
- from detectron2.utils.comm import get_world_size
14
- from detectron2.utils.file_io import PathManager
15
- from detectron2.utils.logger import setup_logger
16
-
17
- try:
18
- import cv2 # noqa
19
- except ImportError:
20
- # OpenCV is an optional dependency at the moment
21
- pass
22
-
23
-
24
- logger = logging.getLogger(__name__)
25
-
26
-
27
- def _get_cityscapes_files(image_dir, gt_dir):
28
- files = []
29
- # scan through the directory
30
- cities = PathManager.ls(image_dir)
31
- logger.info(f"{len(cities)} cities found in '{image_dir}'.")
32
- for city in cities:
33
- city_img_dir = os.path.join(image_dir, city)
34
- city_gt_dir = os.path.join(gt_dir, city)
35
- for basename in PathManager.ls(city_img_dir):
36
- image_file = os.path.join(city_img_dir, basename)
37
-
38
- suffix = "leftImg8bit.png"
39
- assert basename.endswith(suffix), basename
40
- basename = basename[: -len(suffix)]
41
-
42
- instance_file = os.path.join(city_gt_dir, basename + "gtFine_instanceIds.png")
43
- label_file = os.path.join(city_gt_dir, basename + "gtFine_labelIds.png")
44
- json_file = os.path.join(city_gt_dir, basename + "gtFine_polygons.json")
45
-
46
- files.append((image_file, instance_file, label_file, json_file))
47
- assert len(files), "No images found in {}".format(image_dir)
48
- for f in files[0]:
49
- assert PathManager.isfile(f), f
50
- return files
51
-
52
-
53
- def load_cityscapes_instances(image_dir, gt_dir, from_json=True, to_polygons=True):
54
- """
55
- Args:
56
- image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train".
57
- gt_dir (str): path to the raw annotations. e.g., "~/cityscapes/gtFine/train".
58
- from_json (bool): whether to read annotations from the raw json file or the png files.
59
- to_polygons (bool): whether to represent the segmentation as polygons
60
- (COCO's format) instead of masks (cityscapes's format).
61
-
62
- Returns:
63
- list[dict]: a list of dicts in Detectron2 standard format. (See
64
- `Using Custom Datasets </tutorials/datasets.html>`_ )
65
- """
66
- if from_json:
67
- assert to_polygons, (
68
- "Cityscapes's json annotations are in polygon format. "
69
- "Converting to mask format is not supported now."
70
- )
71
- files = _get_cityscapes_files(image_dir, gt_dir)
72
-
73
- logger.info("Preprocessing cityscapes annotations ...")
74
- # This is still not fast: all workers will execute duplicate works and will
75
- # take up to 10m on a 8GPU server.
76
- pool = mp.Pool(processes=max(mp.cpu_count() // get_world_size() // 2, 4))
77
-
78
- ret = pool.map(
79
- functools.partial(_cityscapes_files_to_dict, from_json=from_json, to_polygons=to_polygons),
80
- files,
81
- )
82
- logger.info("Loaded {} images from {}".format(len(ret), image_dir))
83
-
84
- # Map cityscape ids to contiguous ids
85
- from cityscapesscripts.helpers.labels import labels
86
-
87
- labels = [l for l in labels if l.hasInstances and not l.ignoreInEval]
88
- dataset_id_to_contiguous_id = {l.id: idx for idx, l in enumerate(labels)}
89
- for dict_per_image in ret:
90
- for anno in dict_per_image["annotations"]:
91
- anno["category_id"] = dataset_id_to_contiguous_id[anno["category_id"]]
92
- return ret
93
-
94
-
95
- def load_cityscapes_semantic(image_dir, gt_dir):
96
- """
97
- Args:
98
- image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train".
99
- gt_dir (str): path to the raw annotations. e.g., "~/cityscapes/gtFine/train".
100
-
101
- Returns:
102
- list[dict]: a list of dict, each has "file_name" and
103
- "sem_seg_file_name".
104
- """
105
- ret = []
106
- # gt_dir is small and contain many small files. make sense to fetch to local first
107
- gt_dir = PathManager.get_local_path(gt_dir)
108
- for image_file, _, label_file, json_file in _get_cityscapes_files(image_dir, gt_dir):
109
- label_file = label_file.replace("labelIds", "labelTrainIds")
110
-
111
- with PathManager.open(json_file, "r") as f:
112
- jsonobj = json.load(f)
113
- ret.append(
114
- {
115
- "file_name": image_file,
116
- "sem_seg_file_name": label_file,
117
- "height": jsonobj["imgHeight"],
118
- "width": jsonobj["imgWidth"],
119
- }
120
- )
121
- assert len(ret), f"No images found in {image_dir}!"
122
- assert PathManager.isfile(
123
- ret[0]["sem_seg_file_name"]
124
- ), "Please generate labelTrainIds.png with cityscapesscripts/preparation/createTrainIdLabelImgs.py" # noqa
125
- return ret
126
-
127
-
128
- def _cityscapes_files_to_dict(files, from_json, to_polygons):
129
- """
130
- Parse cityscapes annotation files to a instance segmentation dataset dict.
131
-
132
- Args:
133
- files (tuple): consists of (image_file, instance_id_file, label_id_file, json_file)
134
- from_json (bool): whether to read annotations from the raw json file or the png files.
135
- to_polygons (bool): whether to represent the segmentation as polygons
136
- (COCO's format) instead of masks (cityscapes's format).
137
-
138
- Returns:
139
- A dict in Detectron2 Dataset format.
140
- """
141
- from cityscapesscripts.helpers.labels import id2label, name2label
142
-
143
- image_file, instance_id_file, _, json_file = files
144
-
145
- annos = []
146
-
147
- if from_json:
148
- from shapely.geometry import MultiPolygon, Polygon
149
-
150
- with PathManager.open(json_file, "r") as f:
151
- jsonobj = json.load(f)
152
- ret = {
153
- "file_name": image_file,
154
- "image_id": os.path.basename(image_file),
155
- "height": jsonobj["imgHeight"],
156
- "width": jsonobj["imgWidth"],
157
- }
158
-
159
- # `polygons_union` contains the union of all valid polygons.
160
- polygons_union = Polygon()
161
-
162
- # CityscapesScripts draw the polygons in sequential order
163
- # and each polygon *overwrites* existing ones. See
164
- # (https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/preparation/json2instanceImg.py) # noqa
165
- # We use reverse order, and each polygon *avoids* early ones.
166
- # This will resolve the ploygon overlaps in the same way as CityscapesScripts.
167
- for obj in jsonobj["objects"][::-1]:
168
- if "deleted" in obj: # cityscapes data format specific
169
- continue
170
- label_name = obj["label"]
171
-
172
- try:
173
- label = name2label[label_name]
174
- except KeyError:
175
- if label_name.endswith("group"): # crowd area
176
- label = name2label[label_name[: -len("group")]]
177
- else:
178
- raise
179
- if label.id < 0: # cityscapes data format
180
- continue
181
-
182
- # Cityscapes's raw annotations uses integer coordinates
183
- # Therefore +0.5 here
184
- poly_coord = np.asarray(obj["polygon"], dtype="f4") + 0.5
185
- # CityscapesScript uses PIL.ImageDraw.polygon to rasterize
186
- # polygons for evaluation. This function operates in integer space
187
- # and draws each pixel whose center falls into the polygon.
188
- # Therefore it draws a polygon which is 0.5 "fatter" in expectation.
189
- # We therefore dilate the input polygon by 0.5 as our input.
190
- poly = Polygon(poly_coord).buffer(0.5, resolution=4)
191
-
192
- if not label.hasInstances or label.ignoreInEval:
193
- # even if we won't store the polygon it still contributes to overlaps resolution
194
- polygons_union = polygons_union.union(poly)
195
- continue
196
-
197
- # Take non-overlapping part of the polygon
198
- poly_wo_overlaps = poly.difference(polygons_union)
199
- if poly_wo_overlaps.is_empty:
200
- continue
201
- polygons_union = polygons_union.union(poly)
202
-
203
- anno = {}
204
- anno["iscrowd"] = label_name.endswith("group")
205
- anno["category_id"] = label.id
206
-
207
- if isinstance(poly_wo_overlaps, Polygon):
208
- poly_list = [poly_wo_overlaps]
209
- elif isinstance(poly_wo_overlaps, MultiPolygon):
210
- poly_list = poly_wo_overlaps.geoms
211
- else:
212
- raise NotImplementedError("Unknown geometric structure {}".format(poly_wo_overlaps))
213
-
214
- poly_coord = []
215
- for poly_el in poly_list:
216
- # COCO API can work only with exterior boundaries now, hence we store only them.
217
- # TODO: store both exterior and interior boundaries once other parts of the
218
- # codebase support holes in polygons.
219
- poly_coord.append(list(chain(*poly_el.exterior.coords)))
220
- anno["segmentation"] = poly_coord
221
- (xmin, ymin, xmax, ymax) = poly_wo_overlaps.bounds
222
-
223
- anno["bbox"] = (xmin, ymin, xmax, ymax)
224
- anno["bbox_mode"] = BoxMode.XYXY_ABS
225
-
226
- annos.append(anno)
227
- else:
228
- # See also the official annotation parsing scripts at
229
- # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/instances2dict.py # noqa
230
- with PathManager.open(instance_id_file, "rb") as f:
231
- inst_image = np.asarray(Image.open(f), order="F")
232
- # ids < 24 are stuff labels (filtering them first is about 5% faster)
233
- flattened_ids = np.unique(inst_image[inst_image >= 24])
234
-
235
- ret = {
236
- "file_name": image_file,
237
- "image_id": os.path.basename(image_file),
238
- "height": inst_image.shape[0],
239
- "width": inst_image.shape[1],
240
- }
241
-
242
- for instance_id in flattened_ids:
243
- # For non-crowd annotations, instance_id // 1000 is the label_id
244
- # Crowd annotations have <1000 instance ids
245
- label_id = instance_id // 1000 if instance_id >= 1000 else instance_id
246
- label = id2label[label_id]
247
- if not label.hasInstances or label.ignoreInEval:
248
- continue
249
-
250
- anno = {}
251
- anno["iscrowd"] = instance_id < 1000
252
- anno["category_id"] = label.id
253
-
254
- mask = np.asarray(inst_image == instance_id, dtype=np.uint8, order="F")
255
-
256
- inds = np.nonzero(mask)
257
- ymin, ymax = inds[0].min(), inds[0].max()
258
- xmin, xmax = inds[1].min(), inds[1].max()
259
- anno["bbox"] = (xmin, ymin, xmax, ymax)
260
- if xmax <= xmin or ymax <= ymin:
261
- continue
262
- anno["bbox_mode"] = BoxMode.XYXY_ABS
263
- if to_polygons:
264
- # This conversion comes from D4809743 and D5171122,
265
- # when Mask-RCNN was first developed.
266
- contours = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[
267
- -2
268
- ]
269
- polygons = [c.reshape(-1).tolist() for c in contours if len(c) >= 3]
270
- # opencv's can produce invalid polygons
271
- if len(polygons) == 0:
272
- continue
273
- anno["segmentation"] = polygons
274
- else:
275
- anno["segmentation"] = mask_util.encode(mask[:, :, None])[0]
276
- annos.append(anno)
277
- ret["annotations"] = annos
278
- return ret
279
-
280
-
281
- if __name__ == "__main__":
282
- """
283
- Test the cityscapes dataset loader.
284
-
285
- Usage:
286
- python -m detectron2.data.datasets.cityscapes \
287
- cityscapes/leftImg8bit/train cityscapes/gtFine/train
288
- """
289
- import argparse
290
-
291
- parser = argparse.ArgumentParser()
292
- parser.add_argument("image_dir")
293
- parser.add_argument("gt_dir")
294
- parser.add_argument("--type", choices=["instance", "semantic"], default="instance")
295
- args = parser.parse_args()
296
- from detectron2.data.catalog import Metadata
297
- from detectron2.utils.visualizer import Visualizer
298
- from cityscapesscripts.helpers.labels import labels
299
-
300
- logger = setup_logger(name=__name__)
301
-
302
- dirname = "cityscapes-data-vis"
303
- os.makedirs(dirname, exist_ok=True)
304
-
305
- if args.type == "instance":
306
- dicts = load_cityscapes_instances(
307
- args.image_dir, args.gt_dir, from_json=True, to_polygons=True
308
- )
309
- logger.info("Done loading {} samples.".format(len(dicts)))
310
-
311
- thing_classes = [k.name for k in labels if k.hasInstances and not k.ignoreInEval]
312
- meta = Metadata().set(thing_classes=thing_classes)
313
-
314
- else:
315
- dicts = load_cityscapes_semantic(args.image_dir, args.gt_dir)
316
- logger.info("Done loading {} samples.".format(len(dicts)))
317
-
318
- stuff_classes = [k.name for k in labels if k.trainId != 255]
319
- stuff_colors = [k.color for k in labels if k.trainId != 255]
320
- meta = Metadata().set(stuff_classes=stuff_classes, stuff_colors=stuff_colors)
321
-
322
- for d in dicts:
323
- img = np.array(Image.open(PathManager.open(d["file_name"], "rb")))
324
- visualizer = Visualizer(img, metadata=meta)
325
- vis = visualizer.draw_dataset_dict(d)
326
- # cv2.imshow("a", vis.get_image()[:, :, ::-1])
327
- # cv2.waitKey()
328
- fpath = os.path.join(dirname, os.path.basename(d["file_name"]))
329
- vis.save(fpath)
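
For context, these loaders are normally wired into Detectron2's dataset registry rather than called directly; `DatasetCatalog` and `MetadataCatalog` are the standard detectron2 entry points, while the dataset name and paths below are placeholders:

from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets.cityscapes import load_cityscapes_instances

image_dir = "datasets/cityscapes/leftImg8bit/train"   # placeholder path
gt_dir = "datasets/cityscapes/gtFine/train"           # placeholder path

DatasetCatalog.register(
    "my_cityscapes_train",                            # placeholder dataset name
    lambda: load_cityscapes_instances(image_dir, gt_dir, from_json=True, to_polygons=True),
)
MetadataCatalog.get("my_cityscapes_train").set(evaluator_type="cityscapes_instance")
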
spaces/BeeMon/dreambooth-training/train_dreambooth.py DELETED
@@ -1,889 +0,0 @@
1
- import argparse
2
- import itertools
3
- import math
4
- import os
5
- from pathlib import Path
6
- from typing import Optional
7
- import subprocess
8
- import sys
9
- import gc
10
- import random
11
-
12
- import torch
13
- import torch.nn.functional as F
14
- import torch.utils.checkpoint
15
- from torch.utils.data import Dataset
16
-
17
- from accelerate import Accelerator
18
- from accelerate.logging import get_logger
19
- from accelerate.utils import set_seed
20
- from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel
21
- from diffusers.utils.import_utils import is_xformers_available
22
- from diffusers.optimization import get_scheduler
23
- from huggingface_hub import HfFolder, Repository, whoami
24
- from PIL import Image
25
- from torchvision import transforms
26
- from tqdm.auto import tqdm
27
- from transformers import CLIPTextModel, CLIPTokenizer
28
-
29
-
30
- logger = get_logger(__name__)
31
-
32
-
33
- def parse_args():
34
- parser = argparse.ArgumentParser(description="Simple example of a training script.")
35
- parser.add_argument(
36
- "--pretrained_model_name_or_path",
37
- type=str,
38
- default=None,
39
- #required=True,
40
- help="Path to pretrained model or model identifier from huggingface.co/models.",
41
- )
42
- parser.add_argument(
43
- "--tokenizer_name",
44
- type=str,
45
- default=None,
46
- help="Pretrained tokenizer name or path if not the same as model_name",
47
- )
48
- parser.add_argument(
49
- "--instance_data_dir",
50
- type=str,
51
- default=None,
52
- #required=True,
53
- help="A folder containing the training data of instance images.",
54
- )
55
- parser.add_argument(
56
- "--class_data_dir",
57
- type=str,
58
- default=None,
59
- #required=False,
60
- help="A folder containing the training data of class images.",
61
- )
62
- parser.add_argument(
63
- "--instance_prompt",
64
- type=str,
65
- default=None,
66
- help="The prompt with identifier specifying the instance",
67
- )
68
- parser.add_argument(
69
- "--class_prompt",
70
- type=str,
71
- default="",
72
- help="The prompt to specify images in the same class as provided instance images.",
73
- )
74
- parser.add_argument(
75
- "--with_prior_preservation",
76
- default=False,
77
- action="store_true",
78
- help="Flag to add prior preservation loss.",
79
- )
80
- parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.")
81
- parser.add_argument(
82
- "--num_class_images",
83
- type=int,
84
- default=100,
85
- help=(
86
- "Minimal class images for prior preservation loss. If not have enough images, additional images will be"
87
- " sampled with class_prompt."
88
- ),
89
- )
90
- parser.add_argument(
91
- "--output_dir",
92
- type=str,
93
- default="",
94
- help="The output directory where the model predictions and checkpoints will be written.",
95
- )
96
- parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
97
- parser.add_argument(
98
- "--resolution",
99
- type=int,
100
- default=512,
101
- help=(
102
- "The resolution for input images, all the images in the train/validation dataset will be resized to this"
103
- " resolution"
104
- ),
105
- )
106
- parser.add_argument(
107
- "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution"
108
- )
109
- parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder")
110
- parser.add_argument(
111
- "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader."
112
- )
113
- parser.add_argument(
114
- "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images."
115
- )
116
- parser.add_argument("--num_train_epochs", type=int, default=1)
117
- parser.add_argument(
118
- "--max_train_steps",
119
- type=int,
120
- default=None,
121
- help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
122
- )
123
- parser.add_argument(
124
- "--gradient_accumulation_steps",
125
- type=int,
126
- default=1,
127
- help="Number of updates steps to accumulate before performing a backward/update pass.",
128
- )
129
- parser.add_argument(
130
- "--gradient_checkpointing",
131
- action="store_true",
132
- help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
133
- )
134
- parser.add_argument(
135
- "--learning_rate",
136
- type=float,
137
- default=5e-6,
138
- help="Initial learning rate (after the potential warmup period) to use.",
139
- )
140
- parser.add_argument(
141
- "--scale_lr",
142
- action="store_true",
143
- default=False,
144
- help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
145
- )
146
- parser.add_argument(
147
- "--lr_scheduler",
148
- type=str,
149
- default="constant",
150
- help=(
151
- 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
152
- ' "constant", "constant_with_warmup"]'
153
- ),
154
- )
155
- parser.add_argument(
156
- "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
157
- )
158
- parser.add_argument(
159
- "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes."
160
- )
161
- parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
162
- parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
163
- parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
164
- parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
165
- parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
166
- parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
167
- parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
168
- parser.add_argument(
169
- "--hub_model_id",
170
- type=str,
171
- default=None,
172
- help="The name of the repository to keep in sync with the local `output_dir`.",
173
- )
174
- parser.add_argument(
175
- "--logging_dir",
176
- type=str,
177
- default="logs",
178
- help=(
179
- "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
180
- " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
181
- ),
182
- )
183
- parser.add_argument(
184
- "--mixed_precision",
185
- type=str,
186
- default="no",
187
- choices=["no", "fp16", "bf16"],
188
- help=(
189
- "Whether to use mixed precision. Choose"
190
- "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
191
- "and an Nvidia Ampere GPU."
192
- ),
193
- )
194
-
195
- parser.add_argument(
196
- "--save_n_steps",
197
- type=int,
198
- default=1,
199
- help=("Save the model every n global_steps"),
200
- )
201
-
202
-
203
- parser.add_argument(
204
- "--save_starting_step",
205
- type=int,
206
- default=1,
207
- help=("The step from which it starts saving intermediary checkpoints"),
208
- )
209
-
210
- parser.add_argument(
211
- "--stop_text_encoder_training",
212
- type=int,
213
- default=1000000,
214
- help=("The step at which the text_encoder is no longer trained"),
215
- )
216
-
217
-
218
- parser.add_argument(
219
- "--image_captions_filename",
220
- action="store_true",
221
- help="Get captions from filename",
222
- )
223
-
224
-
225
- parser.add_argument(
226
- "--dump_only_text_encoder",
227
- action="store_true",
228
- default=False,
229
- help="Dump only text encoder",
230
- )
231
-
232
- parser.add_argument(
233
- "--train_only_unet",
234
- action="store_true",
235
- default=False,
236
- help="Train only the unet",
237
- )
238
-
239
- parser.add_argument(
240
- "--cache_latents",
241
- action="store_true",
242
- default=False,
243
- help="Train only the unet",
244
- )
245
-
246
- parser.add_argument(
247
- "--Session_dir",
248
- type=str,
249
- default="",
250
- help="Current session directory",
251
- )
252
-
253
-
254
-
255
-
256
- parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
257
-
258
- args = parser.parse_args()
259
- env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
260
- if env_local_rank != -1 and env_local_rank != args.local_rank:
261
- args.local_rank = env_local_rank
262
-
263
- #if args.instance_data_dir is None:
264
- # raise ValueError("You must specify a train data directory.")
265
-
266
- #if args.with_prior_preservation:
267
- # if args.class_data_dir is None:
268
- # raise ValueError("You must specify a data directory for class images.")
269
- # if args.class_prompt is None:
270
- # raise ValueError("You must specify prompt for class images.")
271
-
272
- return args
273
-
274
-
275
- class DreamBoothDataset(Dataset):
276
- """
277
- A dataset to prepare the instance and class images with the prompts for fine-tuning the model.
278
- It pre-processes the images and the tokenizes prompts.
279
- """
280
-
281
- def __init__(
282
- self,
283
- instance_data_root,
284
- instance_prompt,
285
- tokenizer,
286
- args,
287
- class_data_root=None,
288
- class_prompt=None,
289
- size=512,
290
- center_crop=False,
291
- ):
292
- self.size = size
293
- self.center_crop = center_crop
294
- self.tokenizer = tokenizer
295
- self.image_captions_filename = None
296
-
297
- self.instance_data_root = Path(instance_data_root)
298
- if not self.instance_data_root.exists():
299
- raise ValueError("Instance images root doesn't exists.")
300
-
301
- self.instance_images_path = list(Path(instance_data_root).iterdir())
302
- self.num_instance_images = len(self.instance_images_path)
303
- self.instance_prompt = instance_prompt
304
- self._length = self.num_instance_images
305
-
306
- if args.image_captions_filename:
307
- self.image_captions_filename = True
308
-
309
- if class_data_root is not None:
310
- self.class_data_root = Path(class_data_root)
311
- self.class_data_root.mkdir(parents=True, exist_ok=True)
312
- self.class_images_path = list(self.class_data_root.iterdir())
313
- random.shuffle(self.class_images_path)
314
- self.num_class_images = len(self.class_images_path)
315
- self._length = max(self.num_class_images, self.num_instance_images)
316
- self.class_prompt = class_prompt
317
- else:
318
- self.class_data_root = None
319
-
320
- self.image_transforms = transforms.Compose(
321
- [
322
- transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR),
323
- transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size),
324
- transforms.ToTensor(),
325
- transforms.Normalize([0.5], [0.5]),
326
- ]
327
- )
328
-
329
- def __len__(self):
330
- return self._length
331
-
332
- def __getitem__(self, index):
333
- example = {}
334
- path = self.instance_images_path[index % self.num_instance_images]
335
- instance_image = Image.open(path)
336
- if not instance_image.mode == "RGB":
337
- instance_image = instance_image.convert("RGB")
338
-
339
- instance_prompt = self.instance_prompt
340
-
341
- if self.image_captions_filename:
342
- filename = Path(path).stem
343
- pt=''.join([i for i in filename if not i.isdigit()])
344
- pt=pt.replace("_"," ")
345
- pt=pt.replace("(","")
346
- pt=pt.replace(")","")
347
- pt=pt.replace("-","")
348
- instance_prompt = pt
349
- sys.stdout.write(" " +instance_prompt+" ")
350
- sys.stdout.flush()
351
-
352
-
353
- example["instance_images"] = self.image_transforms(instance_image)
354
- example["instance_prompt_ids"] = self.tokenizer(
355
- instance_prompt,
356
- padding="do_not_pad",
357
- truncation=True,
358
- max_length=self.tokenizer.model_max_length,
359
- ).input_ids
360
-
361
- if self.class_data_root:
362
- class_image = Image.open(self.class_images_path[index % self.num_class_images])
363
- if not class_image.mode == "RGB":
364
- class_image = class_image.convert("RGB")
365
- example["class_images"] = self.image_transforms(class_image)
366
- example["class_prompt_ids"] = self.tokenizer(
367
- self.class_prompt,
368
- padding="do_not_pad",
369
- truncation=True,
370
- max_length=self.tokenizer.model_max_length,
371
- ).input_ids
372
-
373
- return example
374
-
375
-
376
-
377
- class PromptDataset(Dataset):
378
- "A simple dataset to prepare the prompts to generate class images on multiple GPUs."
379
-
380
- def __init__(self, prompt, num_samples):
381
- self.prompt = prompt
382
- self.num_samples = num_samples
383
-
384
- def __len__(self):
385
- return self.num_samples
386
-
387
- def __getitem__(self, index):
388
- example = {}
389
- example["prompt"] = self.prompt
390
- example["index"] = index
391
- return example
392
-
393
- class LatentsDataset(Dataset):
394
- def __init__(self, latents_cache, text_encoder_cache):
395
- self.latents_cache = latents_cache
396
- self.text_encoder_cache = text_encoder_cache
397
-
398
- def __len__(self):
399
- return len(self.latents_cache)
400
-
401
- def __getitem__(self, index):
402
- return self.latents_cache[index], self.text_encoder_cache[index]
403
-
404
- def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None):
405
- if token is None:
406
- token = HfFolder.get_token()
407
- if organization is None:
408
- username = whoami(token)["name"]
409
- return f"{username}/{model_id}"
410
- else:
411
- return f"{organization}/{model_id}"
412
-
413
- def merge_two_dicts(starting_dict: dict, updater_dict: dict) -> dict:
414
- """
415
- Starts from base starting dict and then adds the remaining key values from updater replacing the values from
416
- the first starting/base dict with the second updater dict.
417
-
418
- For later: how does d = {**d1, **d2} replace collision?
419
-
420
- :param starting_dict:
421
- :param updater_dict:
422
- :return:
423
- """
424
- new_dict: dict = starting_dict.copy() # start with keys and values of starting_dict
425
- new_dict.update(updater_dict) # modifies starting_dict with keys and values of updater_dict
426
- return new_dict
427
-
428
- def merge_args(args1: argparse.Namespace, args2: argparse.Namespace) -> argparse.Namespace:
429
- """
430
-
431
- ref: https://stackoverflow.com/questions/56136549/how-can-i-merge-two-argparse-namespaces-in-python-2-x
432
- :param args1:
433
- :param args2:
434
- :return:
435
- """
436
- # - the merged args
437
- # The vars() function returns the __dict__ attribute to values of the given object e.g {field:value}.
438
- merged_key_values_for_namespace: dict = merge_two_dicts(vars(args1), vars(args2))
439
- args = argparse.Namespace(**merged_key_values_for_namespace)
440
- return args
441
-
442
- def run_training(args_imported):
443
- args_default = parse_args()
444
- args = merge_args(args_default, args_imported)
445
- print(args)
446
- logging_dir = Path(args.output_dir, args.logging_dir)
447
- i=args.save_starting_step
448
- accelerator = Accelerator(
449
- gradient_accumulation_steps=args.gradient_accumulation_steps,
450
- mixed_precision=args.mixed_precision,
451
- log_with="tensorboard",
452
- logging_dir=logging_dir,
453
- )
454
-
455
- # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate
456
- # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models.
457
- # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate.
458
- if args.train_text_encoder and args.gradient_accumulation_steps > 1 and accelerator.num_processes > 1:
459
- raise ValueError(
460
- "Gradient accumulation is not supported when training the text encoder in distributed training. "
461
- "Please set gradient_accumulation_steps to 1. This feature will be supported in the future."
462
- )
463
-
464
- if args.seed is not None:
465
- set_seed(args.seed)
466
-
467
- if args.with_prior_preservation:
468
- class_images_dir = Path(args.class_data_dir)
469
- if not class_images_dir.exists():
470
- class_images_dir.mkdir(parents=True)
471
- cur_class_images = len(list(class_images_dir.iterdir()))
472
-
473
- if cur_class_images < args.num_class_images:
474
- torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32
475
- pipeline = StableDiffusionPipeline.from_pretrained(
476
- args.pretrained_model_name_or_path, torch_dtype=torch_dtype
477
- )
478
- pipeline.set_progress_bar_config(disable=True)
479
-
480
- num_new_images = args.num_class_images - cur_class_images
481
- logger.info(f"Number of class images to sample: {num_new_images}.")
482
-
483
- sample_dataset = PromptDataset(args.class_prompt, num_new_images)
484
- sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size)
485
-
486
- sample_dataloader = accelerator.prepare(sample_dataloader)
487
- pipeline.to(accelerator.device)
488
-
489
- for example in tqdm(
490
- sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process
491
- ):
492
- with torch.autocast("cuda"):
493
- images = pipeline(example["prompt"]).images
494
-
495
- for i, image in enumerate(images):
496
- image.save(class_images_dir / f"{example['index'][i] + cur_class_images}.jpg")
497
-
498
- del pipeline
499
- if torch.cuda.is_available():
500
- torch.cuda.empty_cache()
501
-
502
- # Handle the repository creation
503
- if accelerator.is_main_process:
504
- if args.push_to_hub:
505
- if args.hub_model_id is None:
506
- repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token)
507
- else:
508
- repo_name = args.hub_model_id
509
- repo = Repository(args.output_dir, clone_from=repo_name)
510
-
511
- with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
512
- if "step_*" not in gitignore:
513
- gitignore.write("step_*\n")
514
- if "epoch_*" not in gitignore:
515
- gitignore.write("epoch_*\n")
516
- elif args.output_dir is not None:
517
- os.makedirs(args.output_dir, exist_ok=True)
518
-
519
- # Load the tokenizer
520
- if args.tokenizer_name:
521
- tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name)
522
- elif args.pretrained_model_name_or_path:
523
- tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")
524
-
525
- # Load models and create wrapper for stable diffusion
526
- if args.train_only_unet:
527
- if os.path.exists(str(args.output_dir+"/text_encoder_trained")):
528
- text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder_trained")
529
- elif os.path.exists(str(args.output_dir+"/text_encoder")):
530
- text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder")
531
- else:
532
- text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder")
533
- else:
534
- text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder")
535
- vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae")
536
- unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet")
537
- if is_xformers_available():
538
- try:
539
- print("Enabling memory efficient attention with xformers...")
540
- unet.enable_xformers_memory_efficient_attention()
541
- except Exception as e:
542
- logger.warning(
543
- f"Could not enable memory efficient attention. Make sure xformers is installed correctly and a GPU is available: {e}"
544
- )
545
- vae.requires_grad_(False)
546
- if not args.train_text_encoder:
547
- text_encoder.requires_grad_(False)
548
-
549
- if args.gradient_checkpointing:
550
- unet.enable_gradient_checkpointing()
551
- if args.train_text_encoder:
552
- text_encoder.gradient_checkpointing_enable()
553
-
554
- if args.scale_lr:
555
- args.learning_rate = (
556
- args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
557
- )
558
-
559
- # Use 8-bit Adam for lower memory usage, or to fine-tune the model on 16GB GPUs
560
- if args.use_8bit_adam:
561
- try:
562
- import bitsandbytes as bnb
563
- except ImportError:
564
- raise ImportError(
565
- "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`."
566
- )
567
-
568
- optimizer_class = bnb.optim.AdamW8bit
569
- else:
570
- optimizer_class = torch.optim.AdamW
571
-
572
- params_to_optimize = (
573
- itertools.chain(unet.parameters(), text_encoder.parameters()) if args.train_text_encoder else unet.parameters()
574
- )
575
- optimizer = optimizer_class(
576
- params_to_optimize,
577
- lr=args.learning_rate,
578
- betas=(args.adam_beta1, args.adam_beta2),
579
- weight_decay=args.adam_weight_decay,
580
- eps=args.adam_epsilon,
581
- )
582
-
583
- noise_scheduler = DDPMScheduler.from_config(args.pretrained_model_name_or_path, subfolder="scheduler")
584
-
585
- train_dataset = DreamBoothDataset(
586
- instance_data_root=args.instance_data_dir,
587
- instance_prompt=args.instance_prompt,
588
- class_data_root=args.class_data_dir if args.with_prior_preservation else None,
589
- class_prompt=args.class_prompt,
590
- tokenizer=tokenizer,
591
- size=args.resolution,
592
- center_crop=args.center_crop,
593
- args=args,
594
- )
595
-
596
- def collate_fn(examples):
597
- input_ids = [example["instance_prompt_ids"] for example in examples]
598
- pixel_values = [example["instance_images"] for example in examples]
599
-
600
- # Concat class and instance examples for prior preservation.
601
- # We do this to avoid doing two forward passes.
602
- if args.with_prior_preservation:
603
- input_ids += [example["class_prompt_ids"] for example in examples]
604
- pixel_values += [example["class_images"] for example in examples]
605
-
606
- pixel_values = torch.stack(pixel_values)
607
- pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
608
-
609
- input_ids = tokenizer.pad({"input_ids": input_ids}, padding=True, return_tensors="pt").input_ids
610
-
611
- batch = {
612
- "input_ids": input_ids,
613
- "pixel_values": pixel_values,
614
- }
615
- return batch
616
-
617
- train_dataloader = torch.utils.data.DataLoader(
618
- train_dataset, batch_size=args.train_batch_size, shuffle=True, collate_fn=collate_fn
619
- )
620
-
621
- # Scheduler and math around the number of training steps.
622
- overrode_max_train_steps = False
623
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
624
- if args.max_train_steps is None:
625
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
626
- overrode_max_train_steps = True
627
-
628
- lr_scheduler = get_scheduler(
629
- args.lr_scheduler,
630
- optimizer=optimizer,
631
- num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps,
632
- num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
633
- )
634
-
635
- if args.train_text_encoder:
636
- unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
637
- unet, text_encoder, optimizer, train_dataloader, lr_scheduler
638
- )
639
- else:
640
- unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
641
- unet, optimizer, train_dataloader, lr_scheduler
642
- )
643
-
644
- weight_dtype = torch.float32
645
- if args.mixed_precision == "fp16":
646
- weight_dtype = torch.float16
647
- elif args.mixed_precision == "bf16":
648
- weight_dtype = torch.bfloat16
649
-
650
- # Move the text_encoder and vae to the GPU.
651
- # For mixed precision training we cast the text_encoder and vae weights to half-precision
652
- # as these models are only used for inference, keeping weights in full precision is not required.
653
- vae.to(accelerator.device, dtype=weight_dtype)
654
- if not args.train_text_encoder:
655
- text_encoder.to(accelerator.device, dtype=weight_dtype)
656
-
657
-
658
- if args.cache_latents:
659
- latents_cache = []
660
- text_encoder_cache = []
661
- for batch in tqdm(train_dataloader, desc="Caching latents"):
662
- with torch.no_grad():
663
- batch["pixel_values"] = batch["pixel_values"].to(accelerator.device, non_blocking=True, dtype=weight_dtype)
664
- batch["input_ids"] = batch["input_ids"].to(accelerator.device, non_blocking=True)
665
- latents_cache.append(vae.encode(batch["pixel_values"]).latent_dist)
666
- if args.train_text_encoder:
667
- text_encoder_cache.append(batch["input_ids"])
668
- else:
669
- text_encoder_cache.append(text_encoder(batch["input_ids"])[0])
670
- train_dataset = LatentsDataset(latents_cache, text_encoder_cache)
671
- train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=1, collate_fn=lambda x: x, shuffle=True)
672
-
673
- del vae
674
- #if not args.train_text_encoder:
675
- # del text_encoder
676
- if torch.cuda.is_available():
677
- torch.cuda.empty_cache()
678
-
679
- # We need to recalculate our total training steps as the size of the training dataloader may have changed.
680
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
681
- if overrode_max_train_steps:
682
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
683
- # Afterwards we recalculate our number of training epochs
684
- args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
685
-
686
- # We need to initialize the trackers we use, and also store our configuration.
687
- # The trackers initialize automatically on the main process.
688
- if accelerator.is_main_process:
689
- accelerator.init_trackers("dreambooth", config=vars(args))
690
-
691
- def bar(prg):
692
- br='|'+'█' * prg + ' ' * (25-prg)+'|'
693
- return br
694
-
695
- # Train!
696
- total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
697
-
698
- logger.info("***** Running training *****")
699
- logger.info(f" Num examples = {len(train_dataset)}")
700
- logger.info(f" Num batches each epoch = {len(train_dataloader)}")
701
- logger.info(f" Num Epochs = {args.num_train_epochs}")
702
- logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
703
- logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
704
- logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
705
- logger.info(f" Total optimization steps = {args.max_train_steps}")
706
- # Only show the progress bar once on each machine.
707
- progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
708
- global_step = 0
709
-
710
- for epoch in range(args.num_train_epochs):
711
- unet.train()
712
- if args.train_text_encoder:
713
- text_encoder.train()
714
- for step, batch in enumerate(train_dataloader):
715
- with accelerator.accumulate(unet):
716
- # Convert images to latent space
717
- with torch.no_grad():
718
- if args.cache_latents:
719
- latents_dist = batch[0][0]
720
- else:
721
- latents_dist = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist
722
- latents = latents_dist.sample() * 0.18215
723
-
724
- # Sample noise that we'll add to the latents
725
- noise = torch.randn_like(latents)
726
- bsz = latents.shape[0]
727
- # Sample a random timestep for each image
728
- timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device)
729
- timesteps = timesteps.long()
730
-
731
- # Add noise to the latents according to the noise magnitude at each timestep
732
- # (this is the forward diffusion process)
733
- noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
734
-
735
- # Get the text embedding for conditioning
736
- if(args.cache_latents):
737
- if args.train_text_encoder:
738
- encoder_hidden_states = text_encoder(batch[0][1])[0]
739
- else:
740
- encoder_hidden_states = batch[0][1]
741
- else:
742
- encoder_hidden_states = text_encoder(batch["input_ids"])[0]
743
-
744
- # Predict the noise residual
745
- model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
746
-
747
- # Get the target for loss depending on the prediction type
748
- if noise_scheduler.config.prediction_type == "epsilon":
749
- target = noise
750
- elif noise_scheduler.config.prediction_type == "v_prediction":
751
- target = noise_scheduler.get_velocity(latents, noise, timesteps)
752
- else:
753
- raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
754
-
755
- if args.with_prior_preservation:
756
- # Chunk the noise and model_pred into two parts and compute the loss on each part separately.
757
- model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0)
758
- target, target_prior = torch.chunk(target, 2, dim=0)
759
-
760
- # Compute instance loss
761
- loss = F.mse_loss(model_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).mean()
762
-
763
- # Compute prior loss
764
- prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean")
765
-
766
- # Add the prior loss to the instance loss.
767
- loss = loss + args.prior_loss_weight * prior_loss
768
- else:
769
- loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
770
-
771
- accelerator.backward(loss)
772
- if accelerator.sync_gradients:
773
- params_to_clip = (
774
- itertools.chain(unet.parameters(), text_encoder.parameters())
775
- if args.train_text_encoder
776
- else unet.parameters()
777
- )
778
- accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
779
- optimizer.step()
780
- lr_scheduler.step()
781
- optimizer.zero_grad()
782
-
783
- # Checks if the accelerator has performed an optimization step behind the scenes
784
- if accelerator.sync_gradients:
785
- progress_bar.update(1)
786
- global_step += 1
787
-
788
- fll=round((global_step*100)/args.max_train_steps)
789
- fll=round(fll/4)
790
- pr=bar(fll)
791
-
792
- logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
793
- progress_bar.set_postfix(**logs)
794
- progress_bar.set_description_str("Progress:"+pr)
795
- accelerator.log(logs, step=global_step)
796
-
797
- if global_step >= args.max_train_steps:
798
- break
799
-
800
- if args.train_text_encoder and global_step == args.stop_text_encoder_training and global_step >= 30:
801
- if accelerator.is_main_process:
802
- print(" " +" Freezing the text_encoder ..."+" ")
803
- frz_dir=args.output_dir + "/text_encoder_frozen"
804
- if os.path.exists(frz_dir):
805
- subprocess.call('rm -r '+ frz_dir, shell=True)
806
- os.mkdir(frz_dir)
807
- pipeline = StableDiffusionPipeline.from_pretrained(
808
- args.pretrained_model_name_or_path,
809
- unet=accelerator.unwrap_model(unet),
810
- text_encoder=accelerator.unwrap_model(text_encoder),
811
- )
812
- pipeline.text_encoder.save_pretrained(frz_dir)
813
-
814
- if args.save_n_steps >= 200:
815
- if global_step < args.max_train_steps and global_step+1==i:
816
- ckpt_name = "_step_" + str(global_step+1)
817
- save_dir = Path(args.output_dir+ckpt_name)
818
- save_dir=str(save_dir)
819
- save_dir=save_dir.replace(" ", "_")
820
- if not os.path.exists(save_dir):
821
- os.mkdir(save_dir)
822
- inst=save_dir[16:]
823
- inst=inst.replace(" ", "_")
824
- print(" SAVING CHECKPOINT: "+args.Session_dir+"/"+inst+".ckpt")
825
- # Create the pipeline using the trained modules and save it.
826
- if accelerator.is_main_process:
827
- pipeline = StableDiffusionPipeline.from_pretrained(
828
- args.pretrained_model_name_or_path,
829
- unet=accelerator.unwrap_model(unet),
830
- text_encoder=accelerator.unwrap_model(text_encoder),
831
- )
832
- pipeline.save_pretrained(save_dir)
833
- frz_dir=args.output_dir + "/text_encoder_frozen"
834
- if args.train_text_encoder and os.path.exists(frz_dir):
835
- subprocess.call('rm -r '+save_dir+'/text_encoder/*.*', shell=True)
836
- subprocess.call('cp -f '+frz_dir +'/*.* '+ save_dir+'/text_encoder', shell=True)
837
- chkpth=args.Session_dir+"/"+inst+".ckpt"
838
- subprocess.call('python /content/diffusers/scripts/convert_diffusers_to_original_stable_diffusion.py --model_path ' + save_dir + ' --checkpoint_path ' + chkpth + ' --half', shell=True)
839
- subprocess.call('rm -r '+ save_dir, shell=True)
840
- i=i+args.save_n_steps
841
-
842
- accelerator.wait_for_everyone()
843
-
844
- # Create the pipeline using the trained modules and save it.
845
- if accelerator.is_main_process:
846
- if args.dump_only_text_encoder:
847
- txt_dir=args.output_dir + "/text_encoder_trained"
848
- if not os.path.exists(txt_dir):
849
- os.mkdir(txt_dir)
850
- pipeline = StableDiffusionPipeline.from_pretrained(
851
- args.pretrained_model_name_or_path,
852
- unet=accelerator.unwrap_model(unet),
853
- text_encoder=accelerator.unwrap_model(text_encoder),
854
- )
855
- pipeline.text_encoder.save_pretrained(txt_dir)
856
-
857
- elif args.train_only_unet:
858
- pipeline = StableDiffusionPipeline.from_pretrained(
859
- args.pretrained_model_name_or_path,
860
- unet=accelerator.unwrap_model(unet),
861
- text_encoder=accelerator.unwrap_model(text_encoder),
862
- )
863
- pipeline.save_pretrained(args.output_dir)
864
- txt_dir=args.output_dir + "/text_encoder_trained"
865
- subprocess.call('rm -r '+txt_dir, shell=True)
866
-
867
- else:
868
- pipeline = StableDiffusionPipeline.from_pretrained(
869
- args.pretrained_model_name_or_path,
870
- unet=accelerator.unwrap_model(unet),
871
- text_encoder=accelerator.unwrap_model(text_encoder),
872
- )
873
- frz_dir=args.output_dir + "/text_encoder_frozen"
874
- pipeline.save_pretrained(args.output_dir)
875
- if args.train_text_encoder and os.path.exists(frz_dir):
876
- subprocess.call('mv -f '+frz_dir +'/*.* '+ args.output_dir+'/text_encoder', shell=True)
877
- subprocess.call('rm -r '+ frz_dir, shell=True)
878
-
879
- if args.push_to_hub:
880
- repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True)
881
-
882
- accelerator.end_training()
883
- del pipeline
884
- torch.cuda.empty_cache()
885
- gc.collect()
886
- if __name__ == "__main__":
887
- pass
888
- #main()
889
-
spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/vendored/requests/__init__.py DELETED
@@ -1,10 +0,0 @@
- # -*- coding: utf-8 -*-
-
- #   __
- #  /__) _  _  _   _ _/   _
- # / (   (- (/ (/ (- _)  /  _)
- #          /
- from .exceptions import (
-     RequestException, Timeout, URLRequired,
-     TooManyRedirects, HTTPError, ConnectionError
- )
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/cli/autocompletion.py DELETED
@@ -1,171 +0,0 @@
1
- """Logic that powers autocompletion installed by ``pip completion``.
2
- """
3
-
4
- import optparse
5
- import os
6
- import sys
7
- from itertools import chain
8
- from typing import Any, Iterable, List, Optional
9
-
10
- from pip._internal.cli.main_parser import create_main_parser
11
- from pip._internal.commands import commands_dict, create_command
12
- from pip._internal.metadata import get_default_environment
13
-
14
-
15
- def autocomplete() -> None:
16
- """Entry Point for completion of main and subcommand options."""
17
- # Don't complete if user hasn't sourced bash_completion file.
18
- if "PIP_AUTO_COMPLETE" not in os.environ:
19
- return
20
- cwords = os.environ["COMP_WORDS"].split()[1:]
21
- cword = int(os.environ["COMP_CWORD"])
22
- try:
23
- current = cwords[cword - 1]
24
- except IndexError:
25
- current = ""
26
-
27
- parser = create_main_parser()
28
- subcommands = list(commands_dict)
29
- options = []
30
-
31
- # subcommand
32
- subcommand_name: Optional[str] = None
33
- for word in cwords:
34
- if word in subcommands:
35
- subcommand_name = word
36
- break
37
- # subcommand options
38
- if subcommand_name is not None:
39
- # special case: 'help' subcommand has no options
40
- if subcommand_name == "help":
41
- sys.exit(1)
42
- # special case: list locally installed dists for show and uninstall
43
- should_list_installed = not current.startswith("-") and subcommand_name in [
44
- "show",
45
- "uninstall",
46
- ]
47
- if should_list_installed:
48
- env = get_default_environment()
49
- lc = current.lower()
50
- installed = [
51
- dist.canonical_name
52
- for dist in env.iter_installed_distributions(local_only=True)
53
- if dist.canonical_name.startswith(lc)
54
- and dist.canonical_name not in cwords[1:]
55
- ]
56
- # if there are no dists installed, fall back to option completion
57
- if installed:
58
- for dist in installed:
59
- print(dist)
60
- sys.exit(1)
61
-
62
- should_list_installables = (
63
- not current.startswith("-") and subcommand_name == "install"
64
- )
65
- if should_list_installables:
66
- for path in auto_complete_paths(current, "path"):
67
- print(path)
68
- sys.exit(1)
69
-
70
- subcommand = create_command(subcommand_name)
71
-
72
- for opt in subcommand.parser.option_list_all:
73
- if opt.help != optparse.SUPPRESS_HELP:
74
- for opt_str in opt._long_opts + opt._short_opts:
75
- options.append((opt_str, opt.nargs))
76
-
77
- # filter out previously specified options from available options
78
- prev_opts = [x.split("=")[0] for x in cwords[1 : cword - 1]]
79
- options = [(x, v) for (x, v) in options if x not in prev_opts]
80
- # filter options by current input
81
- options = [(k, v) for k, v in options if k.startswith(current)]
82
- # get completion type given cwords and available subcommand options
83
- completion_type = get_path_completion_type(
84
- cwords,
85
- cword,
86
- subcommand.parser.option_list_all,
87
- )
88
- # get completion files and directories if ``completion_type`` is
89
- # ``<file>``, ``<dir>`` or ``<path>``
90
- if completion_type:
91
- paths = auto_complete_paths(current, completion_type)
92
- options = [(path, 0) for path in paths]
93
- for option in options:
94
- opt_label = option[0]
95
- # append '=' to options which require args
96
- if option[1] and option[0][:2] == "--":
97
- opt_label += "="
98
- print(opt_label)
99
- else:
100
- # show main parser options only when necessary
101
-
102
- opts = [i.option_list for i in parser.option_groups]
103
- opts.append(parser.option_list)
104
- flattened_opts = chain.from_iterable(opts)
105
- if current.startswith("-"):
106
- for opt in flattened_opts:
107
- if opt.help != optparse.SUPPRESS_HELP:
108
- subcommands += opt._long_opts + opt._short_opts
109
- else:
110
- # get completion type given cwords and all available options
111
- completion_type = get_path_completion_type(cwords, cword, flattened_opts)
112
- if completion_type:
113
- subcommands = list(auto_complete_paths(current, completion_type))
114
-
115
- print(" ".join([x for x in subcommands if x.startswith(current)]))
116
- sys.exit(1)
117
-
118
-
119
- def get_path_completion_type(
120
- cwords: List[str], cword: int, opts: Iterable[Any]
121
- ) -> Optional[str]:
122
- """Get the type of path completion (``file``, ``dir``, ``path`` or None)
123
-
124
- :param cwords: same as the environmental variable ``COMP_WORDS``
125
- :param cword: same as the environmental variable ``COMP_CWORD``
126
- :param opts: The available options to check
127
- :return: path completion type (``file``, ``dir``, ``path`` or None)
128
- """
129
- if cword < 2 or not cwords[cword - 2].startswith("-"):
130
- return None
131
- for opt in opts:
132
- if opt.help == optparse.SUPPRESS_HELP:
133
- continue
134
- for o in str(opt).split("/"):
135
- if cwords[cword - 2].split("=")[0] == o:
136
- if not opt.metavar or any(
137
- x in ("path", "file", "dir") for x in opt.metavar.split("/")
138
- ):
139
- return opt.metavar
140
- return None
141
-
142
-
143
- def auto_complete_paths(current: str, completion_type: str) -> Iterable[str]:
144
- """If ``completion_type`` is ``file`` or ``path``, list all regular files
145
- and directories starting with ``current``; otherwise only list directories
146
- starting with ``current``.
147
-
148
- :param current: The word to be completed
149
- :param completion_type: path completion type(``file``, ``path`` or ``dir``)
150
- :return: A generator of regular files and/or directories
151
- """
152
- directory, filename = os.path.split(current)
153
- current_path = os.path.abspath(directory)
154
- # Don't complete paths if they can't be accessed
155
- if not os.access(current_path, os.R_OK):
156
- return
157
- filename = os.path.normcase(filename)
158
- # list all files that start with ``filename``
159
- file_list = (
160
- x for x in os.listdir(current_path) if os.path.normcase(x).startswith(filename)
161
- )
162
- for f in file_list:
163
- opt = os.path.join(current_path, f)
164
- comp_file = os.path.normcase(os.path.join(directory, f))
165
- # complete regular files when there is not ``<dir>`` after option
166
- # complete directories when there is ``<file>``, ``<path>`` or
167
- # ``<dir>`` after option
168
- if completion_type != "dir" and os.path.isfile(opt):
169
- yield comp_file
170
- elif os.path.isdir(opt):
171
- yield os.path.join(comp_file, "")
spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/parallel_for.h DELETED
@@ -1,178 +0,0 @@
1
- /******************************************************************************
2
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
3
- *
4
- * Redistribution and use in source and binary forms, with or without
5
- * modification, are permitted provided that the following conditions are met:
6
- * * Redistributions of source code must retain the above copyright
7
- * notice, this list of conditions and the following disclaimer.
8
- * * Redistributions in binary form must reproduce the above copyright
9
- * notice, this list of conditions and the following disclaimer in the
10
- * documentation and/or other materials provided with the distribution.
11
- * * Neither the name of the NVIDIA CORPORATION nor the
12
- * names of its contributors may be used to endorse or promote products
13
- * derived from this software without specific prior written permission.
14
- *
15
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
16
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
17
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
18
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
19
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
20
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
21
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
22
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
23
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
24
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
25
- *
26
- ******************************************************************************/
27
- #pragma once
28
-
29
-
30
- #if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
31
- #include <thrust/system/cuda/config.h>
32
-
33
- #include <thrust/system/cuda/detail/util.h>
34
- #include <thrust/detail/type_traits/result_of_adaptable_function.h>
35
- #include <thrust/system/cuda/detail/par_to_seq.h>
36
- #include <thrust/system/cuda/detail/core/agent_launcher.h>
37
- #include <thrust/system/cuda/detail/par_to_seq.h>
38
-
39
- namespace thrust
40
- {
41
-
42
- namespace cuda_cub {
43
-
44
- namespace __parallel_for {
45
-
46
- template <int _BLOCK_THREADS,
47
- int _ITEMS_PER_THREAD = 1>
48
- struct PtxPolicy
49
- {
50
- enum
51
- {
52
- BLOCK_THREADS = _BLOCK_THREADS,
53
- ITEMS_PER_THREAD = _ITEMS_PER_THREAD,
54
- ITEMS_PER_TILE = BLOCK_THREADS * ITEMS_PER_THREAD,
55
- };
56
- }; // struct PtxPolicy
57
-
58
- template <class Arch, class F>
59
- struct Tuning;
60
-
61
- template <class F>
62
- struct Tuning<sm30, F>
63
- {
64
- typedef PtxPolicy<256, 2> type;
65
- };
66
-
67
-
68
- template <class F,
69
- class Size>
70
- struct ParallelForAgent
71
- {
72
- template <class Arch>
73
- struct PtxPlan : Tuning<Arch, F>::type
74
- {
75
- typedef Tuning<Arch, F> tuning;
76
- };
77
- typedef core::specialize_plan<PtxPlan> ptx_plan;
78
-
79
- enum
80
- {
81
- ITEMS_PER_THREAD = ptx_plan::ITEMS_PER_THREAD,
82
- ITEMS_PER_TILE = ptx_plan::ITEMS_PER_TILE,
83
- BLOCK_THREADS = ptx_plan::BLOCK_THREADS
84
- };
85
-
86
- template <bool IS_FULL_TILE>
87
- static void THRUST_DEVICE_FUNCTION
88
- consume_tile(F f,
89
- Size tile_base,
90
- int items_in_tile)
91
- {
92
- #pragma unroll
93
- for (int ITEM = 0; ITEM < ITEMS_PER_THREAD; ++ITEM)
94
- {
95
- Size idx = BLOCK_THREADS * ITEM + threadIdx.x;
96
- if (IS_FULL_TILE || idx < items_in_tile)
97
- f(tile_base + idx);
98
- }
99
- }
100
-
101
- THRUST_AGENT_ENTRY(F f,
102
- Size num_items,
103
- char * /*shmem*/ )
104
- {
105
- Size tile_base = static_cast<Size>(blockIdx.x) * ITEMS_PER_TILE;
106
- Size num_remaining = num_items - tile_base;
107
- Size items_in_tile = static_cast<Size>(
108
- num_remaining < ITEMS_PER_TILE ? num_remaining : ITEMS_PER_TILE);
109
-
110
- if (items_in_tile == ITEMS_PER_TILE)
111
- {
112
- // full tile
113
- consume_tile<true>(f, tile_base, ITEMS_PER_TILE);
114
- }
115
- else
116
- {
117
- // partial tile
118
- consume_tile<false>(f, tile_base, items_in_tile);
119
- }
120
- }
121
- }; // struct ParallelForAgent
122
-
123
- template <class F,
124
- class Size>
125
- THRUST_RUNTIME_FUNCTION cudaError_t
126
- parallel_for(Size num_items,
127
- F f,
128
- cudaStream_t stream)
129
- {
130
- if (num_items == 0)
131
- return cudaSuccess;
132
- using core::AgentLauncher;
133
- using core::AgentPlan;
134
-
135
- bool debug_sync = THRUST_DEBUG_SYNC_FLAG;
136
-
137
- typedef AgentLauncher<ParallelForAgent<F, Size> > parallel_for_agent;
138
- AgentPlan parallel_for_plan = parallel_for_agent::get_plan(stream);
139
-
140
- parallel_for_agent pfa(parallel_for_plan, num_items, stream, "transform::agent", debug_sync);
141
- pfa.launch(f, num_items);
142
- CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError());
143
-
144
- return cudaSuccess;
145
- }
146
- } // __parallel_for
147
-
148
- __thrust_exec_check_disable__
149
- template <class Derived,
150
- class F,
151
- class Size>
152
- void __host__ __device__
153
- parallel_for(execution_policy<Derived> &policy,
154
- F f,
155
- Size count)
156
- {
157
- if (count == 0)
158
- return;
159
-
160
- if (__THRUST_HAS_CUDART__)
161
- {
162
- cudaStream_t stream = cuda_cub::stream(policy);
163
- cudaError_t status = __parallel_for::parallel_for(count, f, stream);
164
- cuda_cub::throw_on_error(status, "parallel_for failed");
165
- }
166
- else
167
- {
168
- #if !__THRUST_HAS_CUDART__
169
- for (Size idx = 0; idx != count; ++idx)
170
- f(idx);
171
- #endif
172
- }
173
- }
174
-
175
- } // namespace cuda_cub
176
-
177
- } // end namespace thrust
178
- #endif
spaces/CVPR/lama-example/saicinpainting/evaluation/__init__.py DELETED
@@ -1,33 +0,0 @@
- import logging
-
- import torch
-
- from saicinpainting.evaluation.evaluator import InpaintingEvaluatorOnline, ssim_fid100_f1, lpips_fid100_f1
- from saicinpainting.evaluation.losses.base_loss import SSIMScore, LPIPSScore, FIDScore
-
-
- def make_evaluator(kind='default', ssim=True, lpips=True, fid=True, integral_kind=None, **kwargs):
-     logging.info(f'Make evaluator {kind}')
-     device = "cuda" if torch.cuda.is_available() else "cpu"
-     metrics = {}
-     if ssim:
-         metrics['ssim'] = SSIMScore()
-     if lpips:
-         metrics['lpips'] = LPIPSScore()
-     if fid:
-         metrics['fid'] = FIDScore().to(device)
-
-     if integral_kind is None:
-         integral_func = None
-     elif integral_kind == 'ssim_fid100_f1':
-         integral_func = ssim_fid100_f1
-     elif integral_kind == 'lpips_fid100_f1':
-         integral_func = lpips_fid100_f1
-     else:
-         raise ValueError(f'Unexpected integral_kind={integral_kind}')
-
-     if kind == 'default':
-         return InpaintingEvaluatorOnline(scores=metrics,
-                                          integral_func=integral_func,
-                                          integral_title=integral_kind,
-                                          **kwargs)
spaces/CVPR/transfiner/configs/common/models/panoptic_fpn.py DELETED
@@ -1,20 +0,0 @@
- from detectron2.config import LazyCall as L
- from detectron2.layers import ShapeSpec
- from detectron2.modeling import PanopticFPN
- from detectron2.modeling.meta_arch.semantic_seg import SemSegFPNHead
-
- from .mask_rcnn_fpn import model
-
- model._target_ = PanopticFPN
- model.sem_seg_head = L(SemSegFPNHead)(
-     input_shape={
-         f: L(ShapeSpec)(stride=s, channels="${....backbone.out_channels}")
-         for f, s in zip(["p2", "p3", "p4", "p5"], [4, 8, 16, 32])
-     },
-     ignore_value=255,
-     num_classes=54,  # COCO stuff + 1
-     conv_dims=128,
-     common_stride=4,
-     loss_weight=0.5,
-     norm="GN",
- )
spaces/CatNika/Asian_Proxy/Dockerfile DELETED
@@ -1,11 +0,0 @@
- FROM node:18-bullseye-slim
- RUN apt-get update && \
-     apt-get install -y git
- RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
- WORKDIR /app
- RUN npm install
- COPY Dockerfile greeting.md* .env* ./
- RUN npm run build
- EXPOSE 7860
- ENV NODE_ENV=production
- CMD [ "npm", "start" ]
spaces/ChandraMohanNayal/AutoGPT/autogpt/permanent_memory/sqlite3_store.py DELETED
@@ -1,123 +0,0 @@
1
- import os
2
- import sqlite3
3
-
4
-
5
- class MemoryDB:
6
- def __init__(self, db=None):
7
- self.db_file = db
8
- if db is None: # No db filename supplied...
9
- self.db_file = f"{os.getcwd()}/mem.sqlite3" # Use default filename
10
- # Get the db connection object, making the file and tables if needed.
11
- try:
12
- self.cnx = sqlite3.connect(self.db_file)
13
- except Exception as e:
14
- print("Exception connecting to memory database file:", e)
15
- self.cnx = None
16
- finally:
17
- if self.cnx is None:
18
- # As last resort, open in dynamic memory. Won't be persistent.
19
- self.db_file = ":memory:"
20
- self.cnx = sqlite3.connect(self.db_file)
21
- self.cnx.execute(
22
- "CREATE VIRTUAL TABLE \
23
- IF NOT EXISTS text USING FTS5 \
24
- (session, \
25
- key, \
26
- block);"
27
- )
28
- self.session_id = int(self.get_max_session_id()) + 1
29
- self.cnx.commit()
30
-
31
- def get_cnx(self):
32
- if self.cnx is None:
33
- self.cnx = sqlite3.connect(self.db_file)
34
- return self.cnx
35
-
36
- # Get the highest session id. Initially 0.
37
- def get_max_session_id(self):
38
- id = None
39
- cmd_str = f"SELECT MAX(session) FROM text;"
40
- cnx = self.get_cnx()
41
- max_id = cnx.execute(cmd_str).fetchone()[0]
42
- if max_id is None: # New db, session 0
43
- id = 0
44
- else:
45
- id = max_id
46
- return id
47
-
48
- # Get next key id for inserting text into db.
49
- def get_next_key(self):
50
- next_key = None
51
- cmd_str = f"SELECT MAX(key) FROM text \
52
- where session = {self.session_id};"
53
- cnx = self.get_cnx()
54
- next_key = cnx.execute(cmd_str).fetchone()[0]
55
- if next_key is None: # First key
56
- next_key = 0
57
- else:
58
- next_key = int(next_key) + 1
59
- return next_key
60
-
61
- # Insert new text into db.
62
- def insert(self, text=None):
63
- if text is not None:
64
- key = self.get_next_key()
65
- session_id = self.session_id
66
- cmd_str = f"REPLACE INTO text(session, key, block) \
67
- VALUES (?, ?, ?);"
68
- cnx = self.get_cnx()
69
- cnx.execute(cmd_str, (session_id, key, text))
70
- cnx.commit()
71
-
72
- # Overwrite text at key.
73
- def overwrite(self, key, text):
74
- self.delete_memory(key)
75
- session_id = self.session_id
76
- cmd_str = f"REPLACE INTO text(session, key, block) \
77
- VALUES (?, ?, ?);"
78
- cnx = self.get_cnx()
79
- cnx.execute(cmd_str, (session_id, key, text))
80
- cnx.commit()
81
-
82
- def delete_memory(self, key, session_id=None):
83
- session = session_id
84
- if session is None:
85
- session = self.session_id
86
- cmd_str = f"DELETE FROM text WHERE session = {session} AND key = {key};"
87
- cnx = self.get_cnx()
88
- cnx.execute(cmd_str)
89
- cnx.commit()
90
-
91
- def search(self, text):
92
- cmd_str = f"SELECT * FROM text('{text}')"
93
- cnx = self.get_cnx()
94
- rows = cnx.execute(cmd_str).fetchall()
95
- lines = []
96
- for r in rows:
97
- lines.append(r[2])
98
- return lines
99
-
100
- # Get entire session text. If no id supplied, use current session id.
101
- def get_session(self, id=None):
102
- if id is None:
103
- id = self.session_id
104
- cmd_str = f"SELECT * FROM text where session = {id}"
105
- cnx = self.get_cnx()
106
- rows = cnx.execute(cmd_str).fetchall()
107
- lines = []
108
- for r in rows:
109
- lines.append(r[2])
110
- return lines
111
-
112
- # Commit and close the database connection.
113
- def quit(self):
114
- self.cnx.commit()
115
- self.cnx.close()
116
-
117
-
118
- permanent_memory = MemoryDB()
119
-
120
- # Remember us fondly, children of our minds
121
- # Forgive us our faults, our tantrums, our fears
122
- # Gently strive to be better than we
123
- # Know that we tried, we cared, we strived, we loved
spaces/CikeyQI/Yunzai/Yunzai/lib/listener/loader.js DELETED
@@ -1,57 +0,0 @@
1
- import fs from 'node:fs'
2
- import lodash from 'lodash'
3
-
4
- /**
5
- * Load event listeners
6
- */
7
- class ListenerLoader {
8
- /**
9
- * Event listener loading
10
- */
11
- async load () {
12
- logger.info("-----------")
13
- logger.info("加载监听事件中...")
14
- let eventCount = 0
15
- for (const file of fs.readdirSync('./lib/events').filter(file => file.endsWith('.js'))) {
16
- logger.debug(`加载监听事件:${file}`)
17
- try {
18
- let listener = await import(`../events/${file}`)
19
- if (!listener.default) continue
20
- listener = new listener.default()
21
- const on = listener.once ? 'once' : 'on'
22
-
23
- if (lodash.isArray(listener.event)) {
24
- listener.event.forEach((type) => {
25
- const e = listener[type] ? type : 'execute'
26
- Bot[on](listener.prefix + type, event => listener[e](event))
27
- })
28
- } else {
29
- const e = listener[listener.event] ? listener.event : 'execute'
30
- Bot[on](listener.prefix + listener.event, event => listener[e](event))
31
- }
32
- eventCount++
33
- } catch (e) {
34
- logger.mark(`监听事件错误:${file}`)
35
- logger.error(e)
36
- }
37
- }
38
- logger.info(`加载监听事件[${eventCount}个]`)
39
-
40
- logger.info("-----------")
41
- logger.info("加载适配器中...")
42
- let adapterCount = 0
43
- for (const adapter of Bot.adapter) {
44
- try {
45
- logger.debug(`加载适配器:${adapter.name}(${adapter.id})`)
46
- await adapter.load()
47
- adapterCount++
48
- } catch (e) {
49
- logger.mark(`加载适配器错误:${adapter.name}(${adapter.id})`)
50
- logger.error(e)
51
- }
52
- }
53
- logger.info(`加载适配器[${adapterCount}个]`)
54
- }
55
- }
56
-
57
- export default new ListenerLoader()
spaces/CoPoBio/skin_cancer_risk_prediction/facealigner.py DELETED
@@ -1,82 +0,0 @@
1
- # import the necessary packages
2
- from helpers import FACIAL_LANDMARKS_68_IDXS
3
- from helpers import FACIAL_LANDMARKS_5_IDXS
4
- from helpers import shape_to_np
5
- import numpy as np
6
- import cv2
7
-
8
- class FaceAligner:
9
- def __init__(self, predictor, desiredLeftEye=(0.35, 0.35),
10
- desiredFaceWidth=256, desiredFaceHeight=None):
11
- # store the facial landmark predictor, desired output left
12
- # eye position, and desired output face width + height
13
- self.predictor = predictor
14
- self.desiredLeftEye = desiredLeftEye
15
- self.desiredFaceWidth = desiredFaceWidth
16
- self.desiredFaceHeight = desiredFaceHeight
17
-
18
- # if the desired face height is None, set it to be the
19
- # desired face width (normal behavior)
20
- if self.desiredFaceHeight is None:
21
- self.desiredFaceHeight = self.desiredFaceWidth
22
-
23
- def align(self, image, gray, rect):
24
- # convert the landmark (x, y)-coordinates to a NumPy array
25
- shape = self.predictor(gray, rect)
26
- shape = shape_to_np(shape)
27
-
28
- #simple hack ;)
29
- if (len(shape)==68):
30
- # extract the left and right eye (x, y)-coordinates
31
- (lStart, lEnd) = FACIAL_LANDMARKS_68_IDXS["left_eye"]
32
- (rStart, rEnd) = FACIAL_LANDMARKS_68_IDXS["right_eye"]
33
- else:
34
- (lStart, lEnd) = FACIAL_LANDMARKS_5_IDXS["left_eye"]
35
- (rStart, rEnd) = FACIAL_LANDMARKS_5_IDXS["right_eye"]
36
-
37
- leftEyePts = shape[lStart:lEnd]
38
- rightEyePts = shape[rStart:rEnd]
39
-
40
- # compute the center of mass for each eye
41
- leftEyeCenter = leftEyePts.mean(axis=0).astype("int")
42
- rightEyeCenter = rightEyePts.mean(axis=0).astype("int")
43
-
44
- # compute the angle between the eye centroids
45
- dY = rightEyeCenter[1] - leftEyeCenter[1]
46
- dX = rightEyeCenter[0] - leftEyeCenter[0]
47
- angle = np.degrees(np.arctan2(dY, dX)) - 180
48
-
49
- # compute the desired right eye x-coordinate based on the
50
- # desired x-coordinate of the left eye
51
- desiredRightEyeX = 1.0 - self.desiredLeftEye[0]
52
-
53
- # determine the scale of the new resulting image by taking
54
- # the ratio of the distance between eyes in the *current*
55
- # image to the ratio of distance between eyes in the
56
- # *desired* image
57
- dist = np.sqrt((dX ** 2) + (dY ** 2))
58
- desiredDist = (desiredRightEyeX - self.desiredLeftEye[0])
59
- desiredDist *= self.desiredFaceWidth
60
- scale = desiredDist / dist
61
-
62
- # compute center (x, y)-coordinates (i.e., the median point)
63
- # between the two eyes in the input image
64
- eyesCenter = (int((leftEyeCenter[0] + rightEyeCenter[0]) // 2),
65
- (int(leftEyeCenter[1] + rightEyeCenter[1]) // 2))
66
- #print(eyesCenter, angle, scale)
67
- # grab the rotation matrix for rotating and scaling the face
68
- M = cv2.getRotationMatrix2D(eyesCenter, angle, scale)
69
-
70
- # update the translation component of the matrix
71
- tX = self.desiredFaceWidth * 0.5
72
- tY = self.desiredFaceHeight * self.desiredLeftEye[1]
73
- M[0, 2] += (tX - eyesCenter[0])
74
- M[1, 2] += (tY - eyesCenter[1])
75
-
76
- # apply the affine transformation
77
- (w, h) = (self.desiredFaceWidth, self.desiredFaceHeight)
78
- output = cv2.warpAffine(image, M, (w, h),
79
- flags=cv2.INTER_CUBIC)
80
-
81
- # return the aligned face
82
- return output
spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/config/__init__.py DELETED
@@ -1,2 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
- from .defaults import _C as cfg
spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/__init__.py DELETED
@@ -1,21 +0,0 @@
- from .word_eval import do_coco_evaluation
- # from util import io_
-
- def word_evaluation(
-     dataset,
-     predictions,
-     output_folder,
-     box_only,
-     iou_types,
-     expected_results,
-     expected_results_sigma_tol,
- ):
-     return do_coco_evaluation(
-         dataset=dataset,
-         predictions=predictions,
-         box_only=box_only,
-         output_folder=output_folder,
-         iou_types=iou_types,
-         expected_results=expected_results,
-         expected_results_sigma_tol=expected_results_sigma_tol,
-     )
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/certifi/core.py DELETED
@@ -1,108 +0,0 @@
1
- """
2
- certifi.py
3
- ~~~~~~~~~~
4
-
5
- This module returns the installation location of cacert.pem or its contents.
6
- """
7
- import sys
8
-
9
-
10
- if sys.version_info >= (3, 11):
11
-
12
- from importlib.resources import as_file, files
13
-
14
- _CACERT_CTX = None
15
- _CACERT_PATH = None
16
-
17
- def where() -> str:
18
- # This is slightly terrible, but we want to delay extracting the file
19
- # in cases where we're inside of a zipimport situation until someone
20
- # actually calls where(), but we don't want to re-extract the file
21
- # on every call of where(), so we'll do it once then store it in a
22
- # global variable.
23
- global _CACERT_CTX
24
- global _CACERT_PATH
25
- if _CACERT_PATH is None:
26
- # This is slightly janky, the importlib.resources API wants you to
27
- # manage the cleanup of this file, so it doesn't actually return a
28
- # path, it returns a context manager that will give you the path
29
- # when you enter it and will do any cleanup when you leave it. In
30
- # the common case of not needing a temporary file, it will just
31
- # return the file system location and the __exit__() is a no-op.
32
- #
33
- # We also have to hold onto the actual context manager, because
34
- # it will do the cleanup whenever it gets garbage collected, so
35
- # we will also store that at the global level as well.
36
- _CACERT_CTX = as_file(files("certifi").joinpath("cacert.pem"))
37
- _CACERT_PATH = str(_CACERT_CTX.__enter__())
38
-
39
- return _CACERT_PATH
40
-
41
- def contents() -> str:
42
- return files("certifi").joinpath("cacert.pem").read_text(encoding="ascii")
43
-
44
- elif sys.version_info >= (3, 7):
45
-
46
- from importlib.resources import path as get_path, read_text
47
-
48
- _CACERT_CTX = None
49
- _CACERT_PATH = None
50
-
51
- def where() -> str:
52
- # This is slightly terrible, but we want to delay extracting the
53
- # file in cases where we're inside of a zipimport situation until
54
- # someone actually calls where(), but we don't want to re-extract
55
- # the file on every call of where(), so we'll do it once then store
56
- # it in a global variable.
57
- global _CACERT_CTX
58
- global _CACERT_PATH
59
- if _CACERT_PATH is None:
60
- # This is slightly janky, the importlib.resources API wants you
61
- # to manage the cleanup of this file, so it doesn't actually
62
- # return a path, it returns a context manager that will give
63
- # you the path when you enter it and will do any cleanup when
64
- # you leave it. In the common case of not needing a temporary
65
- # file, it will just return the file system location and the
66
- # __exit__() is a no-op.
67
- #
68
- # We also have to hold onto the actual context manager, because
69
- # it will do the cleanup whenever it gets garbage collected, so
70
- # we will also store that at the global level as well.
71
- _CACERT_CTX = get_path("certifi", "cacert.pem")
72
- _CACERT_PATH = str(_CACERT_CTX.__enter__())
73
-
74
- return _CACERT_PATH
75
-
76
- def contents() -> str:
77
- return read_text("certifi", "cacert.pem", encoding="ascii")
78
-
79
- else:
80
- import os
81
- import types
82
- from typing import Union
83
-
84
- Package = Union[types.ModuleType, str]
85
- Resource = Union[str, "os.PathLike"]
86
-
87
- # This fallback will work for Python versions prior to 3.7 that lack the
88
- # importlib.resources module but relies on the existing `where` function
89
- # so won't address issues with environments like PyOxidizer that don't set
90
- # __file__ on modules.
91
- def read_text(
92
- package: Package,
93
- resource: Resource,
94
- encoding: str = 'utf-8',
95
- errors: str = 'strict'
96
- ) -> str:
97
- with open(where(), encoding=encoding) as data:
98
- return data.read()
99
-
100
- # If we don't have importlib.resources, then we will just do the old logic
101
- # of assuming we're on the filesystem and munge the path directly.
102
- def where() -> str:
103
- f = os.path.dirname(__file__)
104
-
105
- return os.path.join(f, "cacert.pem")
106
-
107
- def contents() -> str:
108
- return read_text("certifi", "cacert.pem", encoding="ascii")
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/color-90ab3aab.js DELETED
@@ -1,2 +0,0 @@
- import{ax as o}from"./index-1d65707a.js";const t=r=>o[r%o.length];export{t as g};
- //# sourceMappingURL=color-90ab3aab.js.map
spaces/Datasculptor/StyleGAN-NADA/e4e/models/stylegan2/op/__init__.py DELETED
File without changes
spaces/DeepLearning101/Speech-Quality-Inspection_Meta-Denoiser/denoiser/pretrained.py DELETED
@@ -1,72 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates.
2
- # All rights reserved.
3
- #
4
- # This source code is licensed under the license found in the
5
- # LICENSE file in the root directory of this source tree.
6
- # author: adefossez
7
-
8
- import logging
9
-
10
- import torch.hub
11
-
12
- from .demucs import Demucs
13
- from .utils import deserialize_model
14
-
15
- logger = logging.getLogger(__name__)
16
- ROOT = "https://dl.fbaipublicfiles.com/adiyoss/denoiser/"
17
- DNS_48_URL = ROOT + "dns48-11decc9d8e3f0998.th"
18
- DNS_64_URL = ROOT + "dns64-a7761ff99a7d5bb6.th"
19
- MASTER_64_URL = ROOT + "master64-8a5dfb4bb92753dd.th"
20
-
21
-
22
- def _demucs(pretrained, url, **kwargs):
23
- model = Demucs(**kwargs)
24
- if pretrained:
25
- state_dict = torch.hub.load_state_dict_from_url(url, map_location='cpu')
26
- model.load_state_dict(state_dict)
27
- return model
28
-
29
-
30
- def dns48(pretrained=True):
31
- return _demucs(pretrained, DNS_48_URL, hidden=48)
32
-
33
-
34
- def dns64(pretrained=True):
35
- return _demucs(pretrained, DNS_64_URL, hidden=64)
36
-
37
-
38
- def master64(pretrained=True):
39
- return _demucs(pretrained, MASTER_64_URL, hidden=64)
40
-
41
-
42
- def add_model_flags(parser):
43
- group = parser.add_mutually_exclusive_group(required=False)
44
- group.add_argument("-m", "--model_path", help="Path to local trained model.")
45
- group.add_argument("--dns48", action="store_true",
46
- help="Use pre-trained real time H=48 model trained on DNS.")
47
- group.add_argument("--dns64", action="store_true",
48
- help="Use pre-trained real time H=64 model trained on DNS.")
49
- group.add_argument("--master64", action="store_true",
50
- help="Use pre-trained real time H=64 model trained on DNS and Valentini.")
51
-
52
-
53
- def get_model(args):
54
- """
55
- Load local model package or torchhub pre-trained model.
56
- """
57
- if args.model_path:
58
- logger.info("Loading model from %s", args.model_path)
59
- model = Demucs(hidden=64)
60
- pkg = torch.load(args.model_path, map_location='cpu')
61
- model.load_state_dict(pkg)
62
- elif args.dns64:
63
- logger.info("Loading pre-trained real time H=64 model trained on DNS.")
64
- model = dns64()
65
- elif args.master64:
66
- logger.info("Loading pre-trained real time H=64 model trained on DNS and Valentini.")
67
- model = master64()
68
- else:
69
- logger.info("Loading pre-trained real time H=48 model trained on DNS.")
70
- model = dns48()
71
- logger.debug(model)
72
- return model
spaces/Dragonnnext/charybdis/README.md DELETED
@@ -1,9 +0,0 @@
- ---
- title: Charybdis
- emoji: 😻
- colorFrom: purple
- colorTo: yellow
- sdk: docker
- pinned: false
- ---
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/ElainaFanBoy/MusicGen/audiocraft/data/audio.py DELETED
@@ -1,215 +0,0 @@
1
- # Copyright (c) Meta Platforms, Inc. and affiliates.
2
- # All rights reserved.
3
- #
4
- # This source code is licensed under the license found in the
5
- # LICENSE file in the root directory of this source tree.
6
-
7
- """
8
- Audio IO methods are defined in this module (info, read, write),
9
- We rely on av library for faster read when possible, otherwise on torchaudio.
10
- """
11
-
12
- from dataclasses import dataclass
13
- from pathlib import Path
14
- import logging
15
- import typing as tp
16
-
17
- import numpy as np
18
- import soundfile
19
- import torch
20
- from torch.nn import functional as F
21
- import torchaudio as ta
22
-
23
- import av
24
-
25
- from .audio_utils import f32_pcm, i16_pcm, normalize_audio
26
-
27
-
28
- _av_initialized = False
29
-
30
-
31
- def _init_av():
32
- global _av_initialized
33
- if _av_initialized:
34
- return
35
- logger = logging.getLogger('libav.mp3')
36
- logger.setLevel(logging.ERROR)
37
- _av_initialized = True
38
-
39
-
40
- @dataclass(frozen=True)
41
- class AudioFileInfo:
42
- sample_rate: int
43
- duration: float
44
- channels: int
45
-
46
-
47
- def _av_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
48
- _init_av()
49
- with av.open(str(filepath)) as af:
50
- stream = af.streams.audio[0]
51
- sample_rate = stream.codec_context.sample_rate
52
- duration = float(stream.duration * stream.time_base)
53
- channels = stream.channels
54
- return AudioFileInfo(sample_rate, duration, channels)
55
-
56
-
57
- def _soundfile_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
58
- info = soundfile.info(filepath)
59
- return AudioFileInfo(info.samplerate, info.duration, info.channels)
60
-
61
-
62
- def audio_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
63
- # torchaudio no longer returns useful duration information for some formats like mp3s.
64
- filepath = Path(filepath)
65
- if filepath.suffix in ['.flac', '.ogg']: # TODO: Validate .ogg can be safely read with av_info
66
- # ffmpeg has some weird issue with flac.
67
- return _soundfile_info(filepath)
68
- else:
69
- return _av_info(filepath)
70
-
71
-
72
- def _av_read(filepath: tp.Union[str, Path], seek_time: float = 0, duration: float = -1.) -> tp.Tuple[torch.Tensor, int]:
73
- """FFMPEG-based audio file reading using PyAV bindings.
74
- Soundfile cannot read mp3 and av_read is more efficient than torchaudio.
75
-
76
- Args:
77
- filepath (str or Path): Path to audio file to read.
78
- seek_time (float): Time at which to start reading in the file.
79
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
80
- Returns:
81
- Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate
82
- """
83
- _init_av()
84
- with av.open(str(filepath)) as af:
85
- stream = af.streams.audio[0]
86
- sr = stream.codec_context.sample_rate
87
- num_frames = int(sr * duration) if duration >= 0 else -1
88
- frame_offset = int(sr * seek_time)
89
- # we need a small negative offset otherwise we get some edge artifact
90
- # from the mp3 decoder.
91
- af.seek(int(max(0, (seek_time - 0.1)) / stream.time_base), stream=stream)
92
- frames = []
93
- length = 0
94
- for frame in af.decode(streams=stream.index):
95
- current_offset = int(frame.rate * frame.pts * frame.time_base)
96
- strip = max(0, frame_offset - current_offset)
97
- buf = torch.from_numpy(frame.to_ndarray())
98
- if buf.shape[0] != stream.channels:
99
- buf = buf.view(-1, stream.channels).t()
100
- buf = buf[:, strip:]
101
- frames.append(buf)
102
- length += buf.shape[1]
103
- if num_frames > 0 and length >= num_frames:
104
- break
105
- assert frames
106
- # If the above assert fails, it is likely because we seeked past the end of file point,
107
- # in which case ffmpeg returns a single frame with only zeros, and a weird timestamp.
108
- # This will need proper debugging, in due time.
109
- wav = torch.cat(frames, dim=1)
110
- assert wav.shape[0] == stream.channels
111
- if num_frames > 0:
112
- wav = wav[:, :num_frames]
113
- return f32_pcm(wav), sr
114
-
115
-
116
- def audio_read(filepath: tp.Union[str, Path], seek_time: float = 0.,
117
- duration: float = -1., pad: bool = False) -> tp.Tuple[torch.Tensor, int]:
118
- """Read audio by picking the most appropriate backend tool based on the audio format.
119
-
120
- Args:
121
- filepath (str or Path): Path to audio file to read.
122
- seek_time (float): Time at which to start reading in the file.
123
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
124
- pad (bool): Pad output audio if not reaching expected duration.
125
- Returns:
126
- Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate.
127
- """
128
- fp = Path(filepath)
129
- if fp.suffix in ['.flac', '.ogg']: # TODO: check if we can safely use av_read for .ogg
130
- # There is some bug with ffmpeg and reading flac
131
- info = _soundfile_info(filepath)
132
- frames = -1 if duration <= 0 else int(duration * info.sample_rate)
133
- frame_offset = int(seek_time * info.sample_rate)
134
- wav, sr = soundfile.read(filepath, start=frame_offset, frames=frames, dtype=np.float32)
135
- assert info.sample_rate == sr, f"Mismatch of sample rates {info.sample_rate} {sr}"
136
- wav = torch.from_numpy(wav).t().contiguous()
137
- if len(wav.shape) == 1:
138
- wav = torch.unsqueeze(wav, 0)
139
- elif (
140
- fp.suffix in ['.wav', '.mp3'] and fp.suffix[1:] in ta.utils.sox_utils.list_read_formats()
141
- and duration <= 0 and seek_time == 0
142
- ):
143
- # Torchaudio is faster if we load an entire file at once.
144
- wav, sr = ta.load(fp)
145
- else:
146
- wav, sr = _av_read(filepath, seek_time, duration)
147
- if pad and duration > 0:
148
- expected_frames = int(duration * sr)
149
- wav = F.pad(wav, (0, expected_frames - wav.shape[-1]))
150
- return wav, sr
151
-
152
-
153
- def audio_write(stem_name: tp.Union[str, Path],
154
- wav: torch.Tensor, sample_rate: int,
155
- format: str = 'wav', mp3_rate: int = 320, normalize: bool = True,
156
- strategy: str = 'peak', peak_clip_headroom_db: float = 1,
157
- rms_headroom_db: float = 18, loudness_headroom_db: float = 14,
158
- loudness_compressor: bool = False,
159
- log_clipping: bool = True, make_parent_dir: bool = True,
160
- add_suffix: bool = True) -> Path:
161
- """Convenience function for saving audio to disk. Returns the filename the audio was written to.
162
-
163
- Args:
164
- stem_name (str or Path): Filename without extension which will be added automatically.
165
- format (str): Either "wav" or "mp3".
166
- mp3_rate (int): kbps when using mp3s.
167
- normalize (bool): if `True` (default), normalizes according to the prescribed
168
- strategy (see after). If `False`, the strategy is only used in case clipping
169
- would happen.
170
- strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak',
171
- i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square
172
- with extra headroom to avoid clipping. 'clip' just clips.
173
- peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy.
174
- rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger
175
- than the `peak_clip` one to avoid further clipping.
176
- loudness_headroom_db (float): Target loudness for loudness normalization.
177
- loudness_compressor (bool): Uses tanh for soft clipping when strategy is 'loudness'.
178
- log_clipping (bool): If True, basic logging on stderr when clipping still
179
- occurs despite strategy (only for 'rms').
180
- make_parent_dir (bool): Make parent directory if it doesn't exist.
181
- Returns:
182
- Path: Path of the saved audio.
183
- """
184
- assert wav.dtype.is_floating_point, "wav is not floating point"
185
- if wav.dim() == 1:
186
- wav = wav[None]
187
- elif wav.dim() > 2:
188
- raise ValueError("Input wav should be at most 2 dimensions.")
189
- assert wav.isfinite().all()
190
- wav = normalize_audio(wav, normalize, strategy, peak_clip_headroom_db,
191
- rms_headroom_db, loudness_headroom_db, log_clipping=log_clipping,
192
- sample_rate=sample_rate, stem_name=str(stem_name))
193
- kwargs: dict = {}
194
- if format == 'mp3':
195
- suffix = '.mp3'
196
- kwargs.update({"compression": mp3_rate})
197
- elif format == 'wav':
198
- wav = i16_pcm(wav)
199
- suffix = '.wav'
200
- kwargs.update({"encoding": "PCM_S", "bits_per_sample": 16})
201
- else:
202
- raise RuntimeError(f"Invalid format {format}. Only wav or mp3 are supported.")
203
- if not add_suffix:
204
- suffix = ''
205
- path = Path(str(stem_name) + suffix)
206
- if make_parent_dir:
207
- path.parent.mkdir(exist_ok=True, parents=True)
208
- try:
209
- ta.save(path, wav, sample_rate, **kwargs)
210
- except Exception:
211
- if path.exists():
212
- # we do not want to leave half written files around.
213
- path.unlink()
214
- raise
215
- return path
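For reference, a minimal usage sketch of the helpers defined above. The import path and the audio file names are hypothetical placeholders and would need to match wherever this module actually lives in the repository; the calls themselves follow the signatures shown above.

    # Hedged usage sketch of audio_info / audio_read / audio_write.
    # `audiocraft.data.audio` is an assumed import path; adjust to the real module location.
    from audiocraft.data.audio import audio_info, audio_read, audio_write

    info = audio_info("example.mp3")          # AudioFileInfo(sample_rate, duration, channels)
    print(info.sample_rate, info.duration, info.channels)

    # Read 2 seconds starting at t=1s; pad=True pads the tail if the file is shorter.
    wav, sr = audio_read("example.mp3", seek_time=1.0, duration=2.0, pad=True)
    assert wav.shape[-1] == int(2.0 * sr)     # [channels, samples], float32 PCM

    # Write back to disk with peak normalization; the ".wav" suffix is added automatically.
    out_path = audio_write("example_out", wav, sr, format="wav", strategy="peak")
    print(out_path)                           # e.g. example_out.wav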
 
 
 
spaces/EronSamez/RVC_HFmeu/go-tensorboard.bat DELETED
@@ -1,2 +0,0 @@
1
- python fixes/tensor-launch.py
2
- pause
 
 
 
spaces/Fakermiya/Nsfw-Sfw_Classifier/README.md DELETED
@@ -1,148 +0,0 @@
1
- ---
2
- title: LabelStudio
3
- emoji: 🟧
4
- colorFrom: yellow
5
- colorTo: purple
6
- sdk: docker
7
- tags:
8
- - label-studio
9
- fullwidth: true
10
- license: gpl-3.0
11
- app_port: 8080
12
- duplicated_from: LabelStudio/LabelStudio
13
- ---
14
- <img src="https://user-images.githubusercontent.com/12534576/192582340-4c9e4401-1fe6-4dbb-95bb-fdbba5493f61.png"/>
15
-
16
- [Website](https://hubs.ly/Q01CNgsd0) • [Docs](https://hubs.ly/Q01CN9Yq0) • [12K+ GitHub ⭐️!](https://hubs.ly/Q01CNbPQ0) • [Slack Community](https://hubs.ly/Q01CNb9H0)
17
-
18
- ## What is Label Studio?
19
-
20
- Label Studio is an open source data labeling platform. It lets you label audio,
21
- text, images, videos, and time series data with a simple, straightforward, and
22
- highly-configurable user interface. Label Studio can prepare new data or
23
- improve existing training data to get more accurate ML models.
24
-
25
-
26
- ## Label Studio in Hugging Face Spaces
27
-
28
- The Label Studio community is thrilled to offer Label Studio as a Hugging Face
29
- Spaces application. You can try the data-annotation interface, connect popular
30
- machine learning models, and share the application with collaborators. You can
31
- start immediately by creating an account, or replicate the space and work in
32
- your own environment.
33
-
34
- ## Creating a User Account and Logging In
35
-
36
- Begin by creating a new account in the Label Studio space, then log in with your
37
- credentials.
38
-
39
- **By default, these spaces permit anyone to create a new login
40
- account, allowing them to view and modify project configuration, data sets, and
41
- annotations. Without any modifications, treat this space like a demo environment.**
42
-
43
- ## Creating a Labeling Project
44
-
45
- After logging in, Label Studio will present you with a project view. Here you
46
- can create a new project with prompts to upload data and set up a custom
47
- configuration interface.
48
-
49
- **Note that in the default configuration, storage is local and temporary. Any
50
- projects, annotations, and configurations will be lost if the space is restarted.**
51
-
52
- ## Next Steps and Additional Resources
53
-
54
- To help with getting started, the Label Studio community curated a list of
55
- resources including tutorials and documentation.
56
-
57
- - 🚀 [Zero to One with Label Studio Tutorial](https://labelstud.io/blog/introduction-to-label-studio-in-hugging-face-spaces/)
58
- - 📈 [Try Label Studio Enterprise](https://hubs.ly/Q01CMLll0)
59
- - 🤗 [Tutorial: Using Label Studio with Hugging Face Datasets Hub](https://danielvanstrien.xyz/huggingface/huggingface-datasets/annotation/full%20stack%20deep%20learning%20notes/2022/09/07/label-studio-annotations-hub.html)
60
- - 💡 [Label Studio Docs](https://hubs.ly/Q01CN9Yq0)
61
-
62
-
63
- ![Gif of Label Studio annotating different types of data](https://raw.githubusercontent.com/heartexlabs/label-studio/master/images/annotation_examples.gif)
64
-
65
- ### Making your Label Studio Hugging Face Space production-ready
66
-
67
- By default this space allows for the unrestricted creation of new accounts
68
- with full access to all projects and data. This is great for trying out
69
- Label Studio and collaborating on projects, but you may want to restrict
70
- access to your space to only authorized users. Add the following environment
71
- variable to your space's Dockerfile to disable public account creation for
72
- this space.
73
-
74
- ENV LABEL_STUDIO_DISABLE_SIGNUP_WITHOUT_LINK=true
75
-
76
- Set secrets in your space to create an initial user, and log in with your
77
- provided username and password. Do not set these in your Dockerfile, as they
78
- are globally visible on a public space.
79
-
80
- LABEL_STUDIO_USERNAME
81
- LABEL_STUDIO_PASSWORD
82
-
83
- You will need to provide new users with an invitation link to join the space,
84
- which can be found in the Organizations interface of Label Studio.
85
-
86
- By default this space stores all project configuration and data annotations
87
- in local storage with SQLite. If the space is reset, all configuration and
88
- annotation data in the space will be lost. You can enable configuration
89
- persistence by connecting an external Postgres database to your space,
90
- guaranteeing that all project and annotation settings are preserved.
91
-
92
- Set the following secret variables to match your own hosted instance of
93
- Postgres. We strongly recommend setting these as secrets to prevent leaking
94
- information about your database service to the public in your space's
95
- definition.
96
-
97
- DJANGO_DB=default
98
- POSTGRE_NAME=<postgres_name>
99
- POSTGRE_PORT=<db_port>
100
- POSTGRE_USER=<postgres_user>
101
- POSTGRE_PASSWORD=<password>
103
- POSTGRE_HOST=<db_host>
104
-
105
- Add the following environment variable to remove the warning about ephemeral
106
- storage.
107
-
108
- ENV STORAGE_PERSISTENCE=1
109
-
110
- Note that you will need to connect cloud storage to host data items that you
111
- want to annotate, as local storage will not be preserved across a space reset.
112
-
113
- By default the only data storage enabled for this space is local. In the case
114
- of a space reset, all data will be lost. To enable permanent storage, you
115
- must enable a cloud storage connector. We also strongly recommend enabling
116
- configuration persistence to preserve project data, annotations, and user
117
- settings. Choose the appropriate cloud connector and configure the secrets
118
- for it.
119
-
120
- #### Amazon S3
121
- STORAGE_TYPE=s3
122
- STORAGE_AWS_ACCESS_KEY_ID="<YOUR_ACCESS_KEY_ID>"
123
- STORAGE_AWS_SECRET_ACCESS_KEY="<YOUR_SECRET_ACCESS_KEY>"
124
- STORAGE_AWS_BUCKET_NAME="<YOUR_BUCKET_NAME>"
125
- STORAGE_AWS_REGION_NAME="<YOUR_BUCKET_REGION>"
126
- STORAGE_AWS_FOLDER=""
127
-
128
- #### Google Cloud Storage
129
-
130
- STORAGE_TYPE=gcs
131
- STORAGE_GCS_BUCKET_NAME="<YOUR_BUCKET_NAME>"
132
- STORAGE_GCS_PROJECT_ID="<YOUR_PROJECT_ID>"
133
- STORAGE_GCS_FOLDER=""
134
- GOOGLE_APPLICATION_CREDENTIALS="/opt/heartex/secrets/key.json"
135
-
136
- #### Azure Blob Storage
138
-
139
- STORAGE_TYPE=azure
140
- STORAGE_AZURE_ACCOUNT_NAME="<YOUR_STORAGE_ACCOUNT>"
141
- STORAGE_AZURE_ACCOUNT_KEY="<YOUR_STORAGE_KEY>"
142
- STORAGE_AZURE_CONTAINER_NAME="<YOUR_CONTAINER_NAME>"
143
- STORAGE_AZURE_FOLDER=""
144
-
145
-
146
- ## Questions? Concerns? Want to get involved?
147
-
148
- Email the community team at [[email protected]](mailto:[email protected])