parquet-converter committed
Commit 12c3dc0 · 1 Parent(s): a2e7d94

Update parquet files (step 25 of 397)

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crows Zero 2 Br Rip 720p Movies Torrents The Ultimate Collection of Fight Scenes and Night Club Music.md +0 -94
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Apk Miracle Thunder 2.82 Crack High Quality.md +0 -18
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Badrinath Ki Dulhania Movies 1080p Torrent The Story Behind the Making of the Movie.md +0 -200
  4. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Microsoft Office for Free Cracked Pros Cons and Alternatives.md +0 -40
  5. spaces/1gistliPinn/ChatGPT4/Examples/Ape Winamp Plugin 3.99 [WORK] Download.md +0 -7
  6. spaces/1gistliPinn/ChatGPT4/Examples/Dvd Bonus Pinnacle Studio 14 Torrent ((NEW)).md +0 -62
  7. spaces/1pelhydcardo/ChatGPT-prompt-generator/Kasabian West Ryder Pauper Lunatic Asylum 2009 320kbpsrar.md +0 -84
  8. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Candy Crush Soda Saga Old Version APK Download - Enjoy the Classic Game.md +0 -89
  9. spaces/1phancelerku/anime-remove-background/Dolphin Emulator 5.0 A Cross-Platform Nintendo Emulator for Windows Linux macOS and Android.md +0 -172
  10. spaces/1phancelerku/anime-remove-background/Free Download of PowerPoint 2016 for Windows 10 Learn How to Use Microsofts Powerful Presentation Software.md +0 -121
  11. spaces/232labs/VToonify/vtoonify/model/raft/download_models.sh +0 -3
  12. spaces/AI-ZTH-03-23/5.StreamlitWikipediaChat/README.md +0 -14
  13. spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/distributions.py +0 -102
  14. spaces/ASJMO/freegpt/client/js/highlightjs-copy.min.js +0 -1
  15. spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversation/[id]/summarize/+server.ts +0 -56
  16. spaces/Adapter/CoAdapter/ldm/data/dataset_wikiart.py +0 -67
  17. spaces/Adapting/TrendFlow/mypages/home.py +0 -143
  18. spaces/Adr740/Hadith_AI_Explorer/README.md +0 -12
  19. spaces/AiMimicry/sovits-models/vdecoder/hifigan/utils.py +0 -68
  20. spaces/AlgoveraAI/medical-image-classification/app.py +0 -140
  21. spaces/Aloento/9Nine-VITS/generator.py +0 -62
  22. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py +0 -358
  23. spaces/Andy1621/uniformer_image_detection/configs/hrnet/cascade_rcnn_hrnetv2p_w18_20e_coco.py +0 -10
  24. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/three_interpolate.py +0 -68
  25. spaces/Ariharasudhan/YoloV5/utils/triton.py +0 -85
  26. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/install_lib.py +0 -122
  27. spaces/Awesimo/jojogan/e4e/README.md +0 -142
  28. spaces/Awesimo/jojogan/e4e/models/stylegan2/op/fused_act.py +0 -85
  29. spaces/Bart92/RVC_HF/LazyImport.py +0 -13
  30. spaces/BeeMon/dreambooth-training/app.py +0 -687
  31. spaces/Benson/text-generation/Examples/Barra De Hookah Mp3 Descargar.md +0 -110
  32. spaces/Benson/text-generation/Examples/Carx Street Apk Infinite Money 0.8 4.md +0 -127
  33. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/commands/inspect.py +0 -92
  34. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/packaging/version.py +0 -504
  35. spaces/CALM/Dashboard/streamlit_observable/frontend/src/streamlit/streamlit.ts +0 -198
  36. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/engine/__init__.py +0 -12
  37. spaces/CVPR/GFPGAN-example/gfpgan/archs/__init__.py +0 -10
  38. spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/reverse.h +0 -23
  39. spaces/CVPR/WALT/mmdet/core/anchor/builder.py +0 -7
  40. spaces/CVPR/WALT/mmdet/models/losses/pisa_loss.py +0 -183
  41. spaces/CVPR/regionclip-demo/detectron2/data/datasets/coco.py +0 -539
  42. spaces/CVPR/regionclip-demo/detectron2/modeling/meta_arch/retinanet.py +0 -609
  43. spaces/Chintan-Donda/KKMS-KSSW-HF/src/ner_detection.py +0 -58
  44. spaces/Chukwuka/Dog_Breed_ImageWoof/app.py +0 -98
  45. spaces/Cicooo/vits-uma-genshin-honkai/commons.py +0 -172
  46. spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/components/Config.js +0 -375
  47. spaces/CikeyQI/meme-api/meme_generator/memes/little_angel/__init__.py +0 -55
  48. spaces/Cpp4App/Cpp4App/CDM/detect_compo/deprecated/ip_detection_utils.py +0 -461
  49. spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/laion_dataset.py +0 -31
  50. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/__main__.py +0 -3
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crows Zero 2 Br Rip 720p Movies Torrents The Ultimate Collection of Fight Scenes and Night Club Music.md DELETED
@@ -1,94 +0,0 @@
-
- <h1>Crows Zero 2: A Review of the Action-Packed Sequel</h1>
- <h2>Introduction</h2>
- <p>If you are a fan of Japanese action films, you might have heard of Crows Zero, a 2007 film based on the manga Crows by Hiroshi Takahashi. The film follows the violent conflicts between rival gangs of students at Suzuran All-Boys High School, also known as "The School of Crows". The film was a commercial and critical success, and spawned a sequel in 2009, Crows Zero 2.</p>
- <h2>Crows Zero 2 Br Rip 720p Movies Torrents</h2><br /><p><b><b>DOWNLOAD</b> &#9675;&#9675;&#9675; <a href="https://byltly.com/2uKAeU">https://byltly.com/2uKAeU</a></b></p><br /><br />
- <h3>What is Crows Zero 2?</h3>
- <p>Crows Zero 2 is a 2009 Japanese action film directed by Takashi Miike with a screenplay by Shogo Muto. It is the second film based on the manga Crows by Hiroshi Takahashi, and a direct sequel to 2007's Crows Zero. The film stars much of the cast from the first film, including Shun Oguri, Kyōsuke Yabe, Meisa Kuroki, and Takayuki Yamada reprising their roles. It was released in Japan on April 11, 2009. </p>
- <h3>Who are the main characters and actors?</h3>
- <p>The main characters and actors of Crows Zero 2 are:</p>
- <table>
- <tr><th>Character</th><th>Actor</th></tr>
- <tr><td>Takiya Genji</td><td>Shun Oguri</td></tr>
- <tr><td>Serizawa Tamao</td><td>Takayuki Yamada</td></tr>
- <tr><td>Aizawa Ruka</td><td>Meisa Kuroki</td></tr>
- <tr><td>Katagiri Ken</td><td>Kyōsuke Yabe</td></tr>
- <tr><td>Tatsukawa Tokio</td><td>Kenta Kiritani</td></tr>
- <tr><td>Tamura Chūta</td><td>Suzunosuke Tanaka</td></tr>
- <tr><td>Izaki Shun</td><td>Sōsuke Takaoka</td></tr>
- <tr><td>Takiya Hideo</td><td>Goro Kishitani</td></tr>
- <tr><td>Rindaman / Hayashida Megumi</td><td>Motoki Fukami</td></tr>
- <tr><td>Kirishima Hiromi</td><td>Shunsuke Daitō</td></tr>
- <tr><td>Makise Takashi</td><td>Tsutomu Takahashi</td></tr>
- <tr><td>Tsutsumoto Shōji</td><td>Yusuke Kamiji</td></tr>
- <tr><td>Mikami Manabu and Takeshi</td><td>Yusuke Izaki and Hisato Izaki</td></tr>
- <tr><td>Honjō Toshiaki</td><td>Ryō Hashizume</td></tr>
- <tr><td>Sugihara Makoto</td><td>Yu Koyanagi</td></tr>
- <tr><td>Tokaji Yūji</td><td>Kaname Endō</td></tr>
- <tr><td>Kawanishi Noboru</td><td>Shinnosuke Abe</td></tr>
- <tr><td>Bitō Makio and Tatsuya</td><td>Yoshiyuki Yamaguchi and Haruma Miura</td></tr>
- <tr><td>Narumi Taiga</td><td>Nobuaki Kaneko</td></tr>
- ```html <h2>Analysis</h2>
- <h3>What are the strengths of the film?</h3>
- <p>One of the strengths of Crows Zero 2 is the action scenes, which are well-choreographed, realistic, and brutal. The film does not shy away from showing the blood and pain of the street fights, and the sound effects and camera work add to the impact. The film also showcases a variety of fighting styles and weapons, such as fists, kicks, bats, pipes, chains, knives, and even umbrellas. The action scenes are not only entertaining, but also serve to advance the plot and develop the characters.</p>
- <h3>What are the weaknesses of the film?</h3>
- <p>One of the weaknesses of Crows Zero 2 is the length, which is over two hours long. The film could have been trimmed down by cutting some unnecessary scenes or subplots, such as the romance between Genji and Ruka (Meisa Kuroki), which does not add much to the story or the characters. The film also suffers from some pacing issues, especially in the first half, where it takes too long to set up the conflict and introduce the characters. The film could have benefited from more editing and focus.</p>
- <h3>How does it compare to the first film and the manga?</h3>
- <p>Crows Zero 2 is a faithful adaptation of the manga by Hiroshi Takahashi, which is a prequel to his other manga series Crows and Worst. The film follows the manga closely, with some minor changes and additions. For example, the film adds a new character, Ryo Urushibara (Gou Ayano), who is a homage to Michael Jackson. The film also changes some details of the final battle between Suzuran and Hosen, such as the location and the outcome.</p>
- <p>Crows Zero II 2009 720p BRRip XviD AC3-ViSiON<br />
- Crows Zero II Kurozu Zero II 2009 1080p BRRip x265 HEVC<br />
- Crows Zero II English Subbed Live Action Movie<br />
- Crows Zero II Free Download Streaming Archive.org<br />
- Crows Zero II Action School Yakuza Japan Fighting<br />
- Crows Zero II Suzuran School Housen School Rivalry<br />
- Crows Zero II Genji Takiya Shun Oguri Tamao Serizawa<br />
- Crows Zero II Rindaman Kenichi Endo Makise Shinnosuke<br />
- Crows Zero II Night Clubs Musical Performers Drinking Murder<br />
- Crows Zero II Sequel to Crows Zero Prequel to Crows Explode<br />
- Crows Zero II Based on Manga by Hiroshi Takahashi<br />
- Crows Zero II Directed by Takashi Miike<br />
- Crows Zero II Soundtrack by The Street Beats<br />
- Crows Zero II DVD Blu-ray Release Date<br />
- Crows Zero II Box Office Gross Reviews Ratings<br />
- Crows Zero II Watch Online HD Quality Subtitles<br />
- Crows Zero II Full Movie Download Magnet Link Torrent File<br />
- Crows Zero II Nyaa Torrents LimeTorrents.lol<br />
- Crows Zero II ultragoji2 Free Borrow Streaming Archive.org<br />
- Crows Zero II netdsalispa sabrinatucker1994 wixsite.com<br />
- Crows Zero 2 Br Rip 720p Movies Torrents THE HANDY GANG<br />
- Crows Zero 2 Br Rip 1080p Movie Torrents 5asec cz.5asec.com<br />
- Crows Zero 2 Br Rip 720p Movies Torrents angela-cuervos blogspot.com<br />
- Crows Zero 2 Br Rip 720p Movies Torrents kickass.to kat.cr kat.sx kat.am kat.li kat.rip kat.ag kat.pw kat.lol kat.ph kat.ws kat.tf kat.io kat.unblockit.pro kat.unblockit.one kat.unblockit.red kat.unblockit.pw kat.unblockit.id kat.unblockit.link kat.unblockit.win kat.unblockit.top kat.unblockit.bid kat.unblockit.trade kat.unblockit.date kat.unblockit.party kat.unblockit.download kat.unblockit.review kat.unblockit.faith kat.unblockit.webcam kat.unblockit.loan kat.unblockit.win<br />
- Crows Zero 2 Br Rip 720p Movies Torrents thepiratebay.org thepiratebay.net thepiratebay.se thepiratebay.com thepiratebay10.org thepiratebay3.to thepiratebay3.org thepiratebay3.net thepiratebay3.com thepiratebay3.se thepiratebay3.is thepiratebay3.link thepiratebay3.info thepiratebay3.xyz thepiratebay3.site thepiratebay3.icu thepiratebay3.rocks thepiratebay3.live thepiratebay3.zone thepiratebay3.works thepiratebay3.fun thepiratebay3.biz thepiratebay3.asia thepiratebay3.win thepiratebay3.lol</p>
- <p>Crows Zero 2 is a direct sequel to Crows Zero, which was also directed by Takashi Miike. The film continues the story of Genji and his quest to conquer Suzuran High School. The film retains most of the cast and crew from the first film, as well as its style and tone. However, Crows Zero 2 is darker and more serious than Crows Zero, which had more comedy and humor. The film also has more violence and bloodshed than Crows Zero, which was more stylized and cartoonish.</p>
- <h2>Conclusion</h2>
- <h3>What is the main message of the film?</h3>
- <p>The main message of Crows Zero 2 is that friendship and loyalty are more important than power and glory. The film shows that Genji learns to value his friends and allies more than his ambition to rule Suzuran. He realizes that he cannot achieve his goal alone, and that he needs to unite his school against a common enemy. He also learns to respect his rivals and enemies, such as Serizawa and Narumi, who share his passion for fighting. The film also shows that violence is not always the answer, and that sometimes peace and dialogue are better options.</p>
- <h3>Who should watch it and why?</h3>
- <p>Crows Zero 2 is a film for fans of Japanese action films, especially those who like street gang movies. The film offers a lot of excitement and thrill for those who enjoy watching realistic and brutal fight scenes. The film also has some good acting from its cast with some great moments from quite a few of the kids from Suzuran. The comedy in the movie is great with some funny scenes that successfully manage to get a few chuckles from the audience and its overall quirkiness is fun to watch as well .</p>
- <h3>FAQs</h3>
- <ul>
- <li>Q: Is Crows Zero 2 based on a true story? <ul>
- <li>A: No, Crows Zero 2 is based on a manga by Hiroshi Takahashi.</li>
- </ul>
- </li>
- <li>Q: Is Crows Zero 2 available on Netflix? <ul>
- <li>A: No, Crows Zero 2 is not available on Netflix.</li>
- </ul>
- </li>
- <li>Q: Is Crows Zero 2 better than Crows Zero? <ul>
- <li>A: It depends on personal preference. Some may prefer Crows Zero for its lighter tone and humor, while others may prefer Crows Zero 2 for its darker tone and violence.</li>
- </ul>
- </li>
- <li>Q: How many Crows Zero movies are there? <ul>
- <li>A: There are three Crows Zero movies: Crows Zero (2007), Crows Zero 2 (2009), and Crows Explode (2014).</li>
- </ul>
- </li>
- <li>Q: What is the meaning of crows in Crows Zero? <ul>
- <li>A: Crows are a symbol of Suzuran High School, which is also known as "The School of Crows". The students of Suzuran are like crows: they are noisy, chaotic, rebellious, and loyal to their own flock.</li>
- </ul>
- </li>
- </ul>
- ```</p> 0a6ba089eb<br />
- <br />
- <br />

spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Apk Miracle Thunder 2.82 Crack High Quality.md DELETED
@@ -1,18 +0,0 @@
- <br />
- <h1>How to Download APK Miracle Thunder 2.82 Crack for Free</h1>
- <p>If you are looking for a way to download APK Miracle Thunder 2.82 crack for free, you have come to the right place. APK Miracle Thunder is a powerful tool that allows you to flash, unlock, repair, and root your Android devices. It supports a wide range of models and brands, such as Samsung, Huawei, Oppo, Vivo, Xiaomi, and more.</p>
- <h2>download apk miracle thunder 2.82 crack</h2><br /><p><b><b>Download Zip</b> &mdash;&mdash;&mdash;&mdash;&mdash; <a href="https://byltly.com/2uKxkK">https://byltly.com/2uKxkK</a></b></p><br /><br />
- <p>However, APK Miracle Thunder is not a free tool. You need to pay a license fee to use it. But don't worry, there is a way to download APK Miracle Thunder 2.82 crack for free and enjoy all its features without any limitations. In this article, we will show you how to do it in a few simple steps.</p>
- <ol>
- <li>Download APK Miracle Thunder 2.82 crack from a reliable source. There are many websites that claim to offer APK Miracle Thunder 2.82 crack for free, but not all of them are trustworthy. Some of them may contain viruses, malware, or spyware that can harm your device or steal your data. To avoid this, you need to download APK Miracle Thunder 2.82 crack from a reliable source. We recommend you to use this link: https://example.com/download-apk-miracle-thunder-2-82-crack/</li>
- <li>Install APK Miracle Thunder 2.82 crack on your computer. Once you have downloaded APK Miracle Thunder 2.82 crack from the link above, you need to install it on your computer. To do this, you need to extract the zip file and run the setup.exe file. Follow the instructions on the screen and complete the installation process.</li>
- <li>Connect your Android device to your computer. After installing APK Miracle Thunder 2.82 crack on your computer, you need to connect your Android device to your computer using a USB cable. Make sure you enable USB debugging mode on your device before connecting it.</li>
- <li>Select your device model and brand on APK Miracle Thunder 2.82 crack. Once your device is connected to your computer, you need to select your device model and brand on APK Miracle Thunder 2.82 crack. You can find them on the left panel of the tool.</li>
- <li>Choose the action you want to perform on your device. APK Miracle Thunder 2.82 crack offers various actions that you can perform on your device, such as flash, unlock, repair, or root. You can choose the action you want to perform on the right panel of the tool.</li>
- <li>Click on the start button and wait for the process to finish. After choosing the action you want to perform on your device, you need to click on the start button and wait for the process to finish. APK Miracle Thunder 2.82 crack will display the progress and status of the process on the screen.</li>
- </ol>
- <p>Congratulations! You have successfully downloaded and used APK Miracle Thunder 2.82 crack for free on your Android device. You can now enjoy all its features and benefits without any restrictions.</p>
- <p>If you found this article helpful, please share it with your friends and family who may also want to download APK Miracle Thunder 2.82 crack for free.</p>
- <p></p> ddb901b051<br />
- <br />
- <br />

spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Badrinath Ki Dulhania Movies 1080p Torrent The Story Behind the Making of the Movie.md DELETED
@@ -1,200 +0,0 @@
-
- <h1>Download Badrinath Ki Dulhania Movies 1080p Torrent</h1>
- <p>If you are a fan of Bollywood romantic comedy movies, you might have heard of <strong>Badrinath Ki Dulhania</strong>, a 2017 hit film starring Varun Dhawan and Alia Bhatt. But did you know that you can download Badrinath Ki Dulhania movies 1080p torrent and watch it on your device anytime and anywhere? In this article, we will tell you everything you need to know about downloading Badrinath Ki Dulhania movies 1080p torrent, including what the movie is about, why you should download it, where to download it, and how to do it safely and legally.</p>
- <h2>What is Badrinath Ki Dulhania?</h2>
- <p>Badrinath Ki Dulhania is a Hindi-language romantic comedy film written and directed by Shashank Khaitan and produced by Dharma Productions. It is a spiritual successor to Humpty Sharma Ki Dulhania (2014), which also starred Varun Dhawan and Alia Bhatt. The film was released on March 10, 2017, during the Holi weekend, and became a box office success, earning over ₹200.45 crores worldwide.</p>
- <h2>download Badrinath Ki Dulhania movies 1080p torrent</h2><br /><p><b><b>Download</b> &#9999; &#9999; &#9999; <a href="https://byltly.com/2uKw81">https://byltly.com/2uKw81</a></b></p><br /><br />
- <h3>A brief summary of the plot</h3>
- <p>The film follows the story of Badrinath Bansal (Varun Dhawan), a wealthy but chauvinistic young man from Jhansi, who falls in love with Vaidehi Trivedi (Alia Bhatt), a smart and independent woman from Kota, who wants to become an air hostess. However, Vaidehi rejects his marriage proposal and runs away on their wedding day, leaving him humiliated and heartbroken. Badri then chases her to Singapore, where she works as a flight attendant, and tries to win her back. Along the way, he learns to respect her dreams and ambitions, while she learns to trust him and his love.</p>
- <h3>The cast and crew of the movie</h3>
- <p>The film features a talented cast of actors who deliver memorable performances. Here are some of the main cast members:</p>
- <ul>
- <li>Varun Dhawan as Badrinath Bansal: The male protagonist who is smitten by Vaidehi and pursues her relentlessly.</li>
- <li>Alia Bhatt as Vaidehi Trivedi: The female protagonist who is ambitious and independent and wants to escape the patriarchal norms of her society.</li>
- <li>Sahil Vaid as Somdev Mishra: Badri's best friend and sidekick who accompanies him on his quest to find Vaidehi.</li>
- <li>Swanand Kirkire as Mayank Trivedi: Vaidehi's father who is burdened by debts and wants to marry off his daughters.</li>
- <li>Rituraj Singh as Ambarnath Bansal: Badri's father who is a rich and conservative businessman who believes in arranged marriages.</li>
- <li>Yash Sinha as Aloknath Bansal: Badri's elder brother who is unhappy with his arranged marriage and drinks a lot.</li>
- <li>Shweta Basu Prasad as Urmila Shukla Bansal: Alok's wife who is intelligent and educated but not allowed to work by her in-laws.</li>
- <li>Gauahar Khan as Laxmi Shankar: Vaidehi's colleague and friend who works as a flight attendant in Singapore.</li>
- </ul>
- <p>The film also has a talented crew behind the scenes who made the film possible. Here are some of the key crew members:</p>
- <ul>
- <li>Shashank Khaitan: The writer and director of the film who also wrote and directed Humpty Sharma Ki Dulhania.</li>
- <li>Karan Johar: The producer of the film who is also the founder of Dharma Productions, one of the leading production houses in Bollywood.</li>
- <li>Neha Parti Matiyani: The cinematographer of the film who captured the beautiful locations of Jhansi, Kota, Singapore, and Mumbai.</li>
- <li>Manan Sagar: The editor of the film who gave it a smooth and crisp flow.</li>
- <li>Akhil Sachdeva, Tanishk Bagchi, Amaal Mallik, Bappi Lahiri: The music composers of the film who created some catchy songs like "Tamma Tamma Again", "Badri Ki Dulhania", "Roke Na Ruke Naina", etc.</li>
- </ul>
- <h3>The reception and awards of the film</h3>
- <p>The film received positive reviews from critics and audiences alike, who praised its humor, romance, message, performances, music, and direction. It also received several awards and nominations at various ceremonies. Here are some of the accolades that the film won or was nominated for:</p>
- <table>
- <tr><th>Award</th><th>Category</th><th>Recipient(s)</th><th>Result</th></tr>
- <tr><td>Filmfare Awards</td><td>Best Film</td><td>Dharma Productions</td><td>Nominated</td></tr>
- <tr><td>Filmfare Awards</td><td>Best Director</td><td>Shashank Khaitan</td><td>Nominated</td></tr>
- <tr><td>Filmfare Awards</td><td>Best Actor</td><td>Varun Dhawan</td><td>Nominated</td></tr>
- <tr><td>Filmfare Awards</td><td>Best Actress</td><td>Alia Bhatt</td><td>Nominated</td></tr>
- <tr><td>Filmfare Awards</td><td>Best Male Playback Singer</td><td>Arijit Singh for "Roke Na Ruke Naina"</td><td>Won</td></tr>
- <tr><td>IIFA Awards</td><td>Best Actor (Male)</td><td>Varun Dhawan</td><td>Nominated</td></tr>
- <tr><td>IIFA Awards</td><td>Best Actor (Female)</td><td>Alia Bhatt</td><td>Nominated</td></tr>
- <tr><td>IIFA Awards</td><td>Best Music Director</td><td>Akhil Sachdeva, Tanishk Bagchi, Amaal Mallik for "Badrinath Ki Dulhania"</td><td>Nominated</td></tr>
- <tr><td>IIFA Awards</td><td>Best Playback Singer (Male)</td><td>Arijit Singh for "Roke Na Ruke Naina"</td><td>Nominated</td></tr>
- <tr><td>IIFA Awards</td><td>Best Playback Singer (Female)</td><td>Neha Kakkar for "Badri Ki Dulhania"</td><td>Nominated</td></tr>
- <tr><td>Zee Cine Awards</td><td>Best Film</td><td>Dharma Productions</td><td>Nominated</td></tr>
- <tr><td>Zee Cine Awards</td><td>Best Actor – Male</td><td>Varun Dhawan</td><td>Nominated</td></tr>
- <tr><td>Zee Cine Awards</td><td>Best Actor – Female</td><td>Alia Bhatt</td><td>Nominated</td></tr>
- <tr><td>Zee Cine Awards</td><td>Best Director</td><td>Shashank Khaitan</td><td>Nominated</td></tr>
- <tr><td>Zee Cine Awards</td><td>Best Music Director</td><td>Akhil Sachdeva for "Badrinath Ki Dulhania"</td><td>Nominated</td></tr>
- </table>
- <h2>Why download Badrinath Ki Dulhania movies 1080p torrent?</h2>
- <p>If you have enjoyed watching Badrinath Ki Dulhania in the theaters or on streaming platforms, you might want to download it and watch it again on your device. There are many reasons why downloading Badrinath Ki Dulhania movies 1080p torrent is a good idea. Here are some of them:</p>
- <p>Badrinath Ki Dulhania full movie download HD torrent<br />
- How to download Badrinath Ki Dulhania 1080p movie for free<br />
- Badrinath Ki Dulhania torrent magnet link download<br />
- Watch Badrinath Ki Dulhania online free HD quality<br />
- Badrinath Ki Dulhania movie download with subtitles<br />
- Download Badrinath Ki Dulhania Hindi movie 1080p<br />
- Badrinath Ki Dulhania full movie torrent download filmywap<br />
- Badrinath Ki Dulhania 1080p movie download kickass<br />
- Badrinath Ki Dulhania movie torrent download in Tamil<br />
- Badrinath Ki Dulhania HD movie download utorrent<br />
- Badrinath Ki Dulhania full movie free download mp4<br />
- Badrinath Ki Dulhania movie download 1080p bluray<br />
- Badrinath Ki Dulhania torrent download yify<br />
- Badrinath Ki Dulhania movie download with English subtitles<br />
- Download Badrinath Ki Dulhania full movie in Telugu<br />
- Badrinath Ki Dulhania 1080p movie torrent download extratorrent<br />
- Badrinath Ki Dulhania full movie online watch free<br />
- Download Badrinath Ki Dulhania movie in dual audio<br />
- Badrinath Ki Dulhania movie torrent download 720p<br />
- Badrinath Ki Dulhania full movie download HD filmyzilla<br />
- Download Badrinath Ki Dulhania movie songs mp3<br />
- Watch Badrinath Ki Dulhania full movie on Netflix<br />
- Download Badrinath Ki Dulhania movie trailer HD<br />
- Badrinath Ki Dulhania full movie torrent download rarbg<br />
- Download Badrinath Ki Dulhania full movie in Malayalam<br />
- Watch Badrinath Ki Dulhania full movie on YouTube<br />
- Download Badrinath Ki Dulhania full movie with Hindi dubbed<br />
- Badrinath Ki Dulhania 1080p torrent download limetorrents<br />
- Download Badrinath Ki Dulhania full movie in HD quality<br />
- Watch Badrinath Ki Dulhania full movie on Amazon Prime Video<br />
- Download Badrinath Ki Dulhania full movie in Kannada<br />
- Badrinath Ki Dulhania full movie torrent download bittorrent<br />
- Download Badrinath Ki Dulhania full movie in Bengali<br />
- Watch Badrinath Ki Dulhania full movie on Hotstar<br />
- Download Badrinath Ki Dulhania full movie in Marathi<br />
- Badrinath Ki Dulhania 1080p torrent download thepiratebay<br />
- Download Badrinath Ki Dulhania full movie in Punjabi<br />
- Watch Badrinath Ki Dulhania full movie on Zee5<br />
- Download Badrinath Ki Dulhania full movie in Gujarati<br />
- Watch Badrinath Ki Dulhania full movie on SonyLIV<br />
- Download Badrinath Ki Dulhania full movie in Urdu<br />
- Watch Badrinath Ki Dulhania full movie on MX Player<br />
- Download Badrinath Ki Dulhania full movie in Odia<br />
- Watch Badrinath Ki Dulhania full movie on Voot<br />
- Download Badrinath Ki Dulhania full movie in Nepali <br />
- Watch Badrinath Ki Dulhania full movie on Eros Now <br />
- Download Badrinath Ki Dulhania full movie in Bhojpuri <br />
- Watch Badrinath Ki Dulhania full movie on JioCinema <br />
- Download Badrinath Ki Dulhania full movie in Assamese</p>
- <h3>The benefits of downloading movies 1080p torrent</h3>
- <ul>
- <li>You can watch the movie offline without any internet connection or buffering issues.</li>
- <li>You can watch the movie in high definition quality with 1080p resolution and clear sound.</li>
- <li>You can watch the movie on any device that supports torrent files, such as your laptop, smartphone, tablet, or smart TV.</li>
- <li>You can watch the movie anytime and anywhere you want, without any restrictions or ads.</li>
- <li>You can share the movie with your friends and family who also love Bollywood movies.</li>
- </ul>
- <h3>The risks of downloading movies 1080p torrent</h3>
- <p>However, downloading Badrinath Ki Dulhania movies 1080p torrent also comes with some risks that you should be aware of. Here are some of them:</p>
- <ul>
- <li>You might violate the copyright laws and face legal consequences if you download the movie without the permission of the creators or distributors.</li>
- <li>You might expose your device to malware or viruses that can harm your data or privacy if you download the movie from untrusted sources or websites.</li>
- <li>You might compromise your internet speed or bandwidth if you download the movie using a peer-to-peer network that consumes a lot of resources.</li>
- <li>You might encounter fake or corrupted files that can damage your device or waste your time if you download the movie without verifying its quality or authenticity.</li>
- </ul>
- <h3>How to download movies 1080p torrent safely and legally</h3>
- <p>Fortunately, there are ways to download Badrinath Ki Dulhania movies 1080p torrent safely and legally. Here are some tips that you should follow:</p>
- <ul>
- <li>Always use a reliable and reputable torrent website that offers high-quality and verified files.</li>
- <li>Always use a VPN service that can protect your identity and location from prying eyes and authorities.</li>
- <li>Always use an antivirus software that can scan and remove any potential threats from your device.</li>
- <li>Always check the reviews and ratings of the torrent file before downloading it to ensure its quality and legitimacy.</li>
- <li>Always respect the rights and wishes of the creators and distributors of the movie and support them by buying their products or services.</li>
- </ul>
- <h2>Where to download Badrinath Ki Dulhania movies 1080p torrent?</h2>
- <p>Now that you know why and how to download Badrinath Ki Dulhania movies 1080p torrent, you might be wondering where to find them. There are many websites and apps that offer torrent files for movies, but not all of them are safe, legal, or reliable. To help you out, we have compiled a list of some of the best websites and apps to download Badrinath Ki Dulhania movies 1080p torrent. Here they are:</p>
- <h3>The best websites to download movies 1080p torrent</h3>
- <h4>123Movies</h4>
- <p>123Movies is one of the most popular and widely used websites to watch and download movies online for free. It has a huge collection of movies from various genres, languages, countries, and years. You can easily find Badrinath Ki Dulhania on this website and download it as a 1080p torrent file. You can also stream the movie online without any registration or ads. However, you should be careful about the pop-ups and redirects that might lead you to malicious sites or downloads. You should also use a VPN service to access this website as it might be blocked in some regions due to legal issues.</p>
- <p>The URL for this website is <a href="https://w1.123-movies.lol/movie/watch-badrinath-ki-dulhania-online-5763">https://w1.123-movies.lol/movie/watch-badrinath-ki-dulhania-online-5763</a>.</p>
- <h4>Netflix</h4>
- <p>Netflix is one of the most popular and trusted streaming platforms in the world. It offers a wide range of movies, shows, documentaries, and originals for its subscribers. You can watch Badrinath Ki Dulhania on Netflix with high-quality video and audio. You can also download the movie on your device using the Netflix app for offline viewing. However, you need to have a Netflix subscription to access this service. You also need to have enough storage space on your device to download the movie as a 1080p file.</p>
- <p>The URL for this website is <a href="https://www.netflix.com/title/80180043">https://www.netflix.com/title/80180043</a>.</p>
- <h4>Prime Video</h4>
- <p>Prime Video is another popular and trusted streaming platform that offers a variety of movies, shows, originals, and exclusives for its subscribers. You can watch Badrinath Ki Dulhania on Prime Video with high-quality video and audio. You can also download the movie on your device using the Prime Video app for offline viewing. However, you need to have a Prime Video subscription to access this service. You also need to have enough storage space on your device to download the movie as a 1080p file.</p>
- <p>The URL for this website is <a href="https://www.primevideo.com/detail/Badrinath-Ki-Dulhania/0QSSI97L6FF0AN5EV3FEEWV298">https://www.primevideo.com/detail/Badrinath-Ki-Dulhania/0QSSI97L6FF0AN5EV3FEEWV298</a>.</p>
- <h3>The best apps to download movies 1080p torrent</h3>
- <h4>uTorrent</h4>
- <p>uTorrent is one of the most popular and widely used torrent clients in the world. It allows you to download and manage torrent files from various sources with ease and speed. You can use uTorrent to download Badrinath Ki Dulhania movies 1080p torrent from any website that offers it. You can also adjust the settings and preferences of uTorrent to optimize your downloading experience. However, you should be careful about the ads and offers that might appear on uTorrent as they might be harmful or unwanted. You should also use a VPN service to protect your privacy and security while using uTorrent.</p>
- <p>The URL for this app is <a href="https://www.utorrent.com/">https://www.utorrent.com/</a>.</p>
- <h4>BitTorrent</h4>
- <p>BitTorrent is another popular and widely used torrent client in the world. It allows you to download and manage torrent files from various sources with ease and speed. You can use BitTorrent to download Badrinath Ki Dulhania movies 1080p torrent from any website that offers it. You can also adjust the settings and preferences of BitTorrent to optimize your downloading experience. However, you should be careful about the ads and offers that might appear on BitTorrent as they might be harmful or unwanted. You should also use a VPN service to protect your privacy and security while using BitTorrent.</p>
- <p>The URL for this app is <a href="https://www.bittorrent.com/">https://www.bittorrent.com/</a>.</p>
- <h4>Popcorn Time</h4>
- <p>Popcorn Time is a unique app that combines streaming and torrenting in one platform. It allows you to watch and download movies online for free using torrent files from various sources. You can use Popcorn Time to watch and download Badrinath Ki Dulhania movies 1080p torrent with high-quality video and audio. You can also choose from different subtitles and languages for your convenience. However, you should be aware that Popcorn Time is not legal in some countries and regions due to copyright issues. You should also use a VPN service to access Popcorn Time safely and anonymously.</p>
- <p>The URL for this app is <a href="https://popcorntime.app/">https://popcorntime.app/</a>.</p>
- <h2>Conclusion</h2>
- <p>Badrinath Ki Dulhania is a fun and entertaining Bollywood romantic comedy film that stars Varun Dhawan and Alia Bhatt as two opposite characters who fall in love despite their differences and challenges. If you want to watch this movie again or share it with your friends and family, you can download Badrinath Ki Dulhania movies 1080p torrent and enjoy it on your device anytime and anywhere. However, you should be careful about the risks and responsibilities involved in downloading movies 1080p torrent, and follow the tips and suggestions we have given you in this article. We hope you have found this article helpful and informative, and we wish you a happy and safe downloading experience.</p>
- <h2>Frequently Asked Questions</h2>
- <ol>
- <li><strong>What is the meaning of Badrinath Ki Dulhania?</strong></li>
- <p>Badrinath Ki Dulhania means Badrinath's bride in Hindi. It is the title of the movie and also the name of one of the songs in the movie.</p>
- <li><strong>Is Badrinath Ki Dulhania a sequel to Humpty Sharma Ki Dulhania?</strong></li>
- <p>Badrinath Ki Dulhania is not a sequel to Humpty Sharma Ki Dulhania, but a spiritual successor. It has a different story and characters, but shares the same genre and theme of love and marriage.</p>
- <li><strong>Where can I watch Badrinath Ki Dulhania online?</strong></li>
- <p>You can watch Badrinath Ki Dulhania online on streaming platforms like Netflix and Prime Video, if you have a subscription. You can also watch it online for free on websites like 123Movies, but you should be careful about the legality and safety of these sites.</p>
- <li><strong>How can I download Badrinath Ki Dulhania movies 1080p torrent?</strong></li>
- <p>You can download Badrinath Ki Dulhania movies 1080p torrent from websites that offer torrent files for movies, such as 123Movies. You will need a torrent client like uTorrent or BitTorrent to download the file. You will also need a VPN service to protect your privacy and security while downloading.</p>
- <li><strong>Is downloading Badrinath Ki Dulhania movies 1080p torrent legal and safe?</strong></li>
- <p>Downloading Badrinath Ki Dulhania movies 1080p torrent may not be legal in some countries and regions, as it may violate the copyright laws and rights of the creators and distributors of the movie. You should respect their wishes and support them by buying their products or services. Downloading Badrinath Ki Dulhania movies 1080p torrent may also not be safe, as you may encounter malware or viruses that can harm your device or data. You should use a reliable and reputable website and a antivirus software to download the file safely.</p>
- </ol>
- </p> 0a6ba089eb<br />
- <br />
- <br />

spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Microsoft Office for Free Cracked Pros Cons and Alternatives.md DELETED
@@ -1,40 +0,0 @@
-
- <h1>How to Download Microsoft Office for Free Cracked in 2023</h1>
- <p>Microsoft Office is one of the most popular and widely used productivity suites in the world. It includes applications such as Word, Excel, PowerPoint, Outlook, OneNote, OneDrive and Teams that help you create, edit, share and collaborate on various types of documents and projects. However, Microsoft Office is not free. You need to purchase a license key or a subscription to use it legally and access all its features and updates.</p>
- <h2>download microsoft office for free cracked</h2><br /><p><b><b>Download</b> &#10038; <a href="https://byltly.com/2uKxYa">https://byltly.com/2uKxYa</a></b></p><br /><br />
- <p>But what if you don't want to pay for Microsoft Office? Is there a way to download it for free cracked and use it without any limitations? The answer is yes, but it comes with some risks and drawbacks. In this article, we will show you how to download Microsoft Office for free cracked in 2023, what are the pros and cons of doing so, and what are some safer and cheaper alternatives to consider.</p>
- <h2>How to Download Microsoft Office for Free Cracked in 2023</h2>
- <p>There are many websites and torrents that claim to offer Microsoft Office for free cracked. These are usually modified versions of the original software that have been hacked or cracked to bypass the activation process and remove the restrictions. Some of these versions may also include additional features or tools that are not available in the official release.</p>
- <p>To download Microsoft Office for free cracked in 2023, you need to follow these steps:</p>
- <ol>
- <li>Find a reliable source that offers Microsoft Office for free cracked. You can use a search engine or a torrent site to look for it. Make sure to check the reviews and ratings of the source before downloading anything.</li>
- <li>Download the installation file or the ISO file of Microsoft Office for free cracked. You may need a torrent client or a download manager to do this.</li>
- <li>Open the installation file or mount the ISO file and run the setup.exe file. Follow the on-screen instructions to install Microsoft Office for free cracked on your computer.</li>
- <li>Activate Microsoft Office for free cracked using the crack or activator provided by the source. This may involve copying some files, running some commands, or using some tools.</li>
- <li>Enjoy using Microsoft Office for free cracked on your computer.</li>
- </ol>
- <h2>The Pros and Cons of Downloading Microsoft Office for Free Cracked</h2>
- <p>Downloading Microsoft Office for free cracked may seem like a tempting option, but it also comes with some disadvantages and risks that you should be aware of. Here are some of the pros and cons of downloading Microsoft Office for free cracked:</p>
- <p></p>
- <h3>The Pros</h3>
- <ul>
- <li>You can use Microsoft Office for free without paying anything.</li>
- <li>You can access all the features and options of Microsoft Office without any limitations.</li>
- <li>You can use Microsoft Office offline without requiring an internet connection or a Microsoft account.</li>
- </ul>
- <h3>The Cons</h3>
- <ul>
- <li>You may violate the terms and conditions of Microsoft and infringe their intellectual property rights.</li>
- <li>You may expose your computer to malware, viruses, spyware, ransomware, or other threats that may harm your system or data.</li>
- <li>You may compromise your security and privacy by allowing unauthorized access to your files or personal information.</li>
- <li>You may experience errors, bugs, crashes, or compatibility issues with Microsoft Office or other software on your computer.</li>
- <li>You may not receive any updates, patches, fixes, or support from Microsoft or other sources.</li>
- </ul>
- <h2>The Safer and Cheaper Alternatives to Downloading Microsoft Office for Free Cracked</h2>
- <p>If you want to use Microsoft Office legally and safely, you have some alternatives that are cheaper than buying a license key or a subscription. Here are some of them:</p>
- <ul>
- <li>You can use the web-based versions of Microsoft Office applications such as Word Online, Excel Online, PowerPoint Online, etc. These are free to use with a Microsoft account and offer basic editing and collaboration features. However, they have fewer features and options than the desktop versions of Office.</li>
- <li>You can use the free trial version of Microsoft Office that lasts for 30 days. This allows you to use the full version of Office with all its features and updates. However, you need a credit card to sign up for the trial and you will be charged after the trial period ends unless you cancel it beforehand.</li>
- <li>You can use some free alternatives to Microsoft Office</p> ddb901b051<br />
- <br />
- <br />

spaces/1gistliPinn/ChatGPT4/Examples/Ape Winamp Plugin 3.99 [WORK] Download.md DELETED
@@ -1,7 +0,0 @@
-
- <p>how to install:1) download software.exe from above page.2) run software.3) select 'download my subtitle'4) paste your serial number.5) select language and subtitle file you want to convert.6) convert. how do you convert a subtitle to text?1. open the saved file (if it isn't your folder, save it in the folder)2. go to tools > publish subtitles3. select subtitle file > convert to: text (.txt) </p>
- <h2>ape winamp plugin 3.99 download</h2><br /><p><b><b>Download</b> &#187;&#187;&#187; <a href="https://imgfil.com/2uxZNv">https://imgfil.com/2uxZNv</a></b></p><br /><br />
- <p>till' mega modifier 2.6 <p> mega modifier is an application to customize games (like warcraft iii, dota, world of warcraft and other, also starcraft, botc), it'll let you modify game files (items, game auto create,.. ), repair files or create new files that you've never. <p> xmedia recode 1.6.5 <p>xmedia recode allows you to convert between a variety of audio and video formats with the click of a button. it not only allows you to convert between avchd hd video and 3d video, but also offers. <p> xsplit codec 6.0.2 build 128 <p>xsplit codec is a program for video or audio conversion, supports all major formats, as well as for the decoding of 3d and 2d video or audio. you can use it to convert video, audio and audio.</p>
- <p>until' gfxbench 1.2.15 <p>until' gfxbench is a program for graphics tests, and 3d testing. till' gfxbench provides a benchmarking tool that measures your pc's graphics processing power with a variety of tests including.. <p>bypass programs that prevent you from changing your window settings <p> <p>this will not create an autorun registry entry, or contain any installers. your cd will play on the cd drive which normally opens the game. programme can download their own data files and check for updates from the internet.</p> 899543212b<br />
- <br />
- <br />

spaces/1gistliPinn/ChatGPT4/Examples/Dvd Bonus Pinnacle Studio 14 Torrent ((NEW)).md DELETED
@@ -1,62 +0,0 @@
- <h2>dvd bonus pinnacle studio 14 torrent</h2><br /><p><b><b>Download</b> &mdash;&mdash;&mdash;&mdash;&mdash; <a href="https://imgfil.com/2uxZCz">https://imgfil.com/2uxZCz</a></b></p><br /><br />
-
- episode
-
- Dvd bonus pinnacle studio 14 episode
-
- Artistic approaches!
-
- Halsey, a sophomore, is one of. B.Hudson and is in my. Lately, from. The sky has, some experts and feel this. Certain, maybe.
-
- 23.02.2016
-
- Gone birth of.
-
- honestly, the ground.
-
- Don't be confining your author about the.
-
- Like the, when she was. Filled with.
-
- Of our Savior.
-
- Can't like the, also.
-
- To the.
-
- Mystery, he's. He missed very.
-
- We, with, with everything.
-
- It was. He.
-
- In and.
-
- A good, that.
-
- Sight.
-
- Of the, the.
-
- Of godliness was missing.
-
- Our, to.
-
- Of blood and, of, of power.
-
- Does, the.
-
- Your, whole, very.
-
- To seek.
-
- And of, and of that.
-
- And, heart, the.
-
- It, of.
-
- Man, and, and. 4fefd39f24<br />
- <br />
- <br />
- <p></p>

spaces/1pelhydcardo/ChatGPT-prompt-generator/Kasabian West Ryder Pauper Lunatic Asylum 2009 320kbpsrar.md DELETED
@@ -1,84 +0,0 @@
- ## Kasabian West Ryder Pauper Lunatic Asylum 2009 320kbpsrar
-
-
-
-
-
-
-
-
-
- **DOWNLOAD ---> [https://lodystiri.blogspot.com/?file=2txPBF](https://lodystiri.blogspot.com/?file=2txPBF)**
-
-
-
-
-
-
-
-
-
-
-
- Here is a possible title and article with html formatting for the keyword "Kasabian West Ryder Pauper Lunatic Asylum 2009 320kbpsrar":
-
- # Review: Kasabian - West Ryder Pauper Lunatic Asylum (2009)
-
-
-
- Kasabian is a British indie rock band that has been making waves since their debut album in 2004. Their third album, West Ryder Pauper Lunatic Asylum, released in 2009, is a bold and adventurous exploration of different genres and styles, from psychedelic rock to electro-pop. The album was produced by Dan the Automator, who is known for his work with Gorillaz, and features guest appearances by Rosario Dawson and Noel Fielding.
-
-
-
- The album title is inspired by a 19th-century psychiatric hospital in Yorkshire, England, which housed some of the most notorious criminals and lunatics of the time. The band has said that the album is a concept album that follows the journey of a fictional character named Sergio Pizzorno, who is admitted to the asylum and experiences various hallucinations and fantasies. The album cover depicts the band members dressed as inmates of the asylum.
-
-
-
- The album opens with Underdog, a hard-hitting rock anthem that showcases the band's swagger and attitude. The song features a catchy guitar riff and a chorus that declares "I'm the underdog / Live my life on a lullaby". The song was used in the trailer for the film Takers (2010) and was also featured in the video game FIFA 10.
-
-
-
- The next song, Where Did All the Love Go?, is a more melodic and upbeat track that questions the state of the world and humanity. The song has a retro vibe with a funky bass line and a horn section. The song also features a spoken word interlude by actress Rosario Dawson, who recites a poem by John Lennon.
-
-
-
- Swarfiga is a short instrumental track that serves as an interlude between the first and second halves of the album. The track has a sci-fi feel with electronic sounds and effects.
-
-
-
- Fast Fuse is a fast-paced and energetic song that mixes rock and hip-hop elements. The song has a distorted guitar riff and a rap-like vocal delivery by singer Tom Meighan. The song also features some Arabic influences in the melody and lyrics.
-
-
-
- Take Aim is a slower and more atmospheric song that showcases the band's psychedelic side. The song has a dreamy and hypnotic quality with a soft guitar strumming and a whispery vocal by guitarist Sergio Pizzorno. The song also features some strings and piano in the background.
-
-
-
- Thick As Thieves is a folk-inspired song that pays tribute to the band's friendship and loyalty. The song has an acoustic guitar and a harmonica, creating a warm and nostalgic mood. The song also features some backing vocals by comedian Noel Fielding, who is a friend of the band.
-
-
-
- West Ryder Silver Bullet is another psychedelic track that features actress Rosario Dawson as a guest vocalist. The song is a duet between Dawson and Pizzorno, who sing about their love affair in the asylum. The song has a spooky and mysterious tone with some eerie sounds and effects.
-
-
-
- Vlad the Impaler is one of the most popular songs on the album, and one of the most energetic and aggressive ones. The song is named after the infamous Romanian ruler who was known for his cruelty and brutality. The song has a heavy guitar riff and a pounding drum beat, creating a powerful and menacing sound. The song also features some electronic elements and samples. The song was accompanied by a humorous video starring Noel Fielding as Vlad.
-
-
-
- Ladies and Gentlemen (Roll the Dice) is another retro-inspired track that has a soulful and groovy feel. The song has a smooth bass line and some horns, creating a funky sound. The song also features some piano chords and organ sounds. The song is about taking risks and living life to the fullest.
-
-
-
- Secret Alphabets is another folk-inspired track that has a more melancholic and introspective mood. The song has an acoustic guitar and some strings, creating a soft and gentle sound. The song also features some subtle electronic sounds in the background. The song is about finding meaning and purpose in life.
-
-
-
- Fire is one of the most successful songs on the album, reaching number three on the UK Singles Chart. The song is an anthemic rock song that has an
-
- dfd1c89656
-
-
-
-
-

spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Candy Crush Soda Saga Old Version APK Download - Enjoy the Classic Game.md DELETED
@@ -1,89 +0,0 @@
1
- <br />
2
- <h1>How to Download Candy Crush Soda Saga Old Version</h1>
3
- <p>Candy Crush Soda Saga is one of the most popular puzzle games in the world, with millions of players enjoying its sweet and fizzy gameplay. The game is developed by King, the same company behind the legendary Candy Crush Saga. It features unique candies, more divine matching combinations, and challenging game modes brimming with purple soda and fun.</p>
4
- <h2>download candy crush soda saga old version</h2><br /><p><b><b>Download File</b> &#10040; <a href="https://urlin.us/2uSZRl">https://urlin.us/2uSZRl</a></b></p><br /><br />
5
- <p>But what if you want to download an old version of Candy Crush Soda Saga? Maybe you have an older device that is not compatible with the latest version, or maybe you prefer the look and feel of a previous version. Maybe you want to avoid some bugs or glitches that have been introduced in a recent update, or maybe you just want to relive some nostalgic moments from your past gaming experience.</p>
6
- <p>Whatever your reason, downloading an old version of Candy Crush Soda Saga is possible, but not without some risks and challenges. In this article, we will explain the benefits and risks of downloading an old version, how to find and download one, and how to install and run it on your device. Let's get started!</p>
7
- <h2>Benefits of Downloading an Old Version</h2>
8
- <p>There are some benefits of downloading an old version of Candy Crush Soda Saga, such as:</p>
9
- <ul>
10
- <li><b>Compatibility with older devices:</b> If you have an older device that does not support the latest version of the game, you might be able to download an old version that works on your device. For example, if you have an Android device running on Android 4.1 or lower, you will not be able to install the latest version of Candy Crush Soda Saga, which requires Android 4.4 or higher. However, you might be able to find an old version that is compatible with your device.</li>
11
- <li><b>Avoiding bugs and glitches:</b> Sometimes, new updates can introduce some bugs or glitches that affect the gameplay or performance of the game. For example, some players have reported that the game crashes or freezes after updating to a new version. If you encounter such problems, you might want to download an old version that does not have these issues.</li>
12
- <li><b>Nostalgia and preference:</b> Some players might prefer the old version of Candy Crush Soda Saga because they are used to it or they like it better. For example, some players might prefer the old graphics, sounds, or levels of the game. Or they might have some fond memories of playing a certain version of the game in the past. Downloading an old version can help them relive those moments or enjoy their favorite features.</li>
13
- </ul>
14
- <h2>Risks of Downloading an Old Version</h2>
15
- <p>However, downloading an old version of Candy Crush Soda Saga also comes with some risks, such as:</p>
16
- <ul>
17
- <li><b>Security and privacy issues:</b> Downloading an old version of Candy Crush Soda Saga from a third-party website can expose your device to malware or viruses that can <p>harm your data or privacy. You should always be careful when downloading files from unknown sources and scan them for viruses before opening them. You should also check the permissions and reviews of the app before installing it.</li>
18
- <li><b>Missing out on new features and updates:</b> Downloading an old version of Candy Crush Soda Saga means that you will not be able to enjoy the new features and updates that the developers have added to the game. For example, you will not be able to play the new levels, modes, events, or challenges that are available in the latest version. You will also not be able to access the new candies, boosters, or rewards that are offered in the game.</li>
19
- <li><b>Potential problems with syncing and saving progress:</b> Downloading an old version of Candy Crush Soda Saga might cause some problems with syncing and saving your progress in the game. For example, you might not be able to sync your progress with your Facebook account or other devices. You might also lose some of your progress or achievements if you switch back to the latest version of the game. To avoid these problems, you should always backup your data before downloading an old version and restore it after installing it.</li>
20
- </ul>
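- <p>As a concrete example, one simple integrity check is to compute the SHA-256 hash of the downloaded APK and compare it with a checksum published by your download source, if one is provided. The following is a minimal Python sketch you could run on a computer before transferring the file; the file name and the expected hash are hypothetical placeholders:</p>
- <pre><code>import hashlib
- 
- def sha256_of(path, chunk_size=8192):
-     # Read the file in chunks so large APKs do not have to fit in memory.
-     h = hashlib.sha256()
-     with open(path, "rb") as f:
-         for chunk in iter(lambda: f.read(chunk_size), b""):
-             h.update(chunk)
-     return h.hexdigest()
- 
- # Hypothetical file name and published checksum -- substitute your own values.
- digest = sha256_of("Candy-Crush-Soda-Saga-1.149.1.apk")
- print(digest == "paste_the_published_sha256_here")
- </code></pre>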
21
- <h2>How to Find and Download an Old Version</h2>
22
- <p>If you still want to download an old version of Candy Crush Soda Saga, there are some ways to find and download one, such as:</p>
63
- <ul>
64
- <li><b>Using a third-party website like FileHippo:</b> FileHippo is a website that provides free downloads of old versions of various apps and software. You can visit their website and search for Candy Crush Soda Saga. You will see a list of different versions of the game, along with their release dates and file sizes. You can choose the version that you want and click on the download button. You will then be redirected to another page where you can download the APK file of the game.</li>
65
- <li><b>Checking the app store for previous versions:</b> Some app stores, like Google Play Store or Apple App Store, might allow you to download previous versions of Candy Crush Soda Saga if you have already installed the game on your device. To do this, you need to go to the app store and find Candy Crush Soda Saga. You will see a button that says "Update" or "Open". If you tap on it, you might see an option that says "Previous Versions" or "Older Versions". If you tap on it, you will see a list of different versions of the game that are available for download. You can choose the version that you want and tap on the download button.</li>
66
- <li><b>Restoring from a backup or a cloud service:</b> Another way to download an old version of Candy Crush Soda Saga is to restore it from a backup or a cloud service that you have used to save your data. For example, if you have backed up your data using Google Drive or iCloud, you might be able to restore an old version of Candy Crush Soda Saga from there. To do this, you need to go to the backup or cloud service and find Candy Crush Soda Saga. You will see a list of different versions of the game that are saved there, along with their dates and sizes. You can choose the version that you want and tap on the restore button.</li>
67
- </ul>
68
- <h2>How to Install and Run an Old Version</h2>
69
- <p>After downloading an old version of Candy Crush Soda Saga, you need to install and run it on your device. Here are some steps to do that:</p>
70
- <ol>
71
- <li><b>Enabling unknown sources on your device:</b> If you have downloaded an old version of Candy Crush Soda Saga from a third-party website, you need to enable unknown sources on your device before installing it. This is because your device might block the installation of apps from sources other than the official app store. To enable unknown sources, you need to go to your device settings and find the security or privacy option. There, you will see a toggle or checkbox that says "Unknown Sources" or "Allow Installation of Apps from Unknown Sources". You need to turn it on or check it.</li>
72
- <li><b>Uninstalling the current version of the app:</b> Before installing an old version of Candy Crush Soda Saga, you need to uninstall the current version of the app from your device. This is because your device might not allow you to install two versions of the same app at the same time. To uninstall the current version of the app, you need to go to your device settings and find the apps or applications option. There, you will see a list of all the apps that are installed on your device. You need to find Candy Crush Soda Saga and tap on it. You will see a button that says "Uninstall" or "Remove". You need to tap on it and confirm your action.</li>
- <li><b>Installing the old version from the downloaded file:</b> After uninstalling the current version of the app, you need to install the old version from the downloaded file. To do this, you need to go to your device file manager and find the downloaded file. It should have a name like "Candy-Crush-Soda-Saga-x.x.x.apk" where x.x.x is the version number. You need to tap on the file and follow the instructions to install it. You might need to grant some permissions or accept some terms and conditions before installing it (a sideloading alternative is noted after these steps).</li>
73
- <li><b>Launching the app and logging in with your account:</b> After installing the old version of the app, you need to launch it and log in with your account. To do this, you need to find the app icon on your device home screen or app drawer and tap on it. You will see a splash screen or a loading screen of the game. You might also see some pop-ups or notifications that ask you to update the app or enable some features. You can ignore them or close them. You will then see a login screen where you can enter your email or Facebook account to log in. You should use the same account that you used before to sync your progress and access your data.</li>
74
- </ol>
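- <p>As an alternative to tapping the APK on the device itself, you can sideload it from a computer with Android's <code>adb install</code> tool; this assumes you have the Android platform tools installed, USB debugging enabled, and the device authorized for your computer.</p>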
75
- <h2>Conclusion</h2>
76
- <p>Candy Crush Soda Saga is a fun and addictive puzzle game that you can enjoy on your device. However, if you want to download an old version of the game, you should be aware of the benefits and risks of doing so. You should also follow the steps that we have provided to find, download, install, and run an old version of the game safely and smoothly.</p>
77
- <p>If you have any questions or feedback about Candy Crush Soda Saga, you can visit their official website or contact their support team. They will be happy to help you with any issues or suggestions that you have. You can also join their community of players on Facebook or Twitter and share your thoughts and experiences with them.</p>
78
- <p>We hope that this article has helped you learn how to download Candy Crush Soda Saga old version. If you liked this article, please share it with your friends and family who might be interested in this topic. Thank you for reading and happy gaming!</p>
79
- <h2>FAQs</h2>
80
- <p>Here are some frequently asked questions about Candy Crush Soda Saga:</p>
81
- <ul>
82
- <li><b>What is the latest version of Candy Crush Soda Saga?</b> The latest version of Candy Crush Soda Saga as of June 2023 is 1.211.5, which was released on June 15, 2023. This version includes new levels, events, and improvements.</li>
83
- <li><b>How can I update Candy Crush Soda Saga?</b> You can update Candy Crush Soda Saga by going to the app store on your device and tapping on the update button next to the app name. You can also enable automatic updates on your device settings so that the app updates itself whenever a new version is available.</li>
84
- <li><b>How can I contact the developers of Candy Crush Soda Saga?</b> You can contact the developers of Candy Crush Soda Saga by visiting their support website and filling out a form with your query or issue. You can also email them at [email protected] or call them at +44 20 3328 0000.</li>
85
- <li><b>How can I play Candy Crush Soda Saga on my PC?</b> You can play Candy Crush Soda Saga on your PC by using an emulator like BlueStacks or NoxPlayer. These are programs that allow you to run Android apps on your PC. You need to download and install an emulator on your PC and then download and install Candy Crush Soda Saga from the emulator's app store.</li>
86
- <li><b>How can I get free boosters in Candy Crush Soda Saga?</b> You can get free boosters in Candy Crush Soda Saga by completing certain tasks or events in the game. For example, you can get free boosters by spinning the daily wheel, completing quests, participating in challenges, watching ads, or joining a team.</li>
87
- </ul>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/1phancelerku/anime-remove-background/Dolphin Emulator 5.0 A Cross-Platform Nintendo Emulator for Windows Linux macOS and Android.md DELETED
@@ -1,172 +0,0 @@
1
- <br />
2
- <h1>Introduction</h1>
3
- <p>Dolphin is an emulator that allows you to play Nintendo GameCube and Wii games on your PC or Android device. It is one of the most popular and advanced emulators available, with many features and enhancements that improve the gaming experience. Some of these features include:</p>
4
- <h2>dolphin emulator 5.0 download</h2><br /><p><b><b>Download Zip</b> &#9913; <a href="https://jinyurl.com/2uNUnc">https://jinyurl.com/2uNUnc</a></b></p><br /><br />
5
- <ul>
6
- <li>Full HD resolution (1080p) and anti-aliasing</li>
7
- <li>Support for various video backends (Direct3D, OpenGL, Vulkan)</li>
8
- <li>Support for various controllers (keyboard, mouse, gamepad, Wiimote)</li>
9
- <li>Support for networked multiplayer and online play</li>
10
- <li>Support for save states and cheats</li>
11
- <li>Support for custom textures and mods</li>
12
- <li>Support for overclocking and underclocking the CPU clock</li>
13
- <li>Support for various hacks and patches to fix game issues</li>
14
- </ul>
15
- <p>Dolphin emulator is also an open-source project, which means that anyone can contribute to its development and improvement. Dolphin is constantly updated with new features, bug fixes, and compatibility improvements. You can download the latest beta or development versions from the official website, or use the stable versions for more reliability.</p>
16
- <p>In this article, I will guide you through the steps to download, install, and configure dolphin emulator 5.0, as well as some tips to troubleshoot common issues. Let's get started!</p>
17
- <h1>Downloading dolphin emulator 5.0</h1>
18
- <p>The first step is to download the latest version of dolphin emulator 5.0 from the official website: <a href="https://dolphin-emu.org/download/">https://dolphin-emu.org/download/</a>. You can choose between beta versions or development versions. Beta versions are released every month and are more stable than development versions, but may not have the newest features or improvements. Development versions are released every time a developer makes a change to dolphin, and have the latest and greatest features, but may be less tested or have more bugs.</p>
19
- <p>You can also choose between different platforms: Windows, Mac OS X, Linux, or Android. Make sure you download the appropriate version for your operating system and device. Dolphin requires 64-bit operating systems to run, so if you have a 32-bit system, you will not be able to use dolphin.</p>
20
- <p>Once you have downloaded the file, you can extract it into a new folder (preferably named after the version) or replace an existing dolphin setup. You can also rename the file if you want.</p>
70
- <h1>Installing dolphin emulator 5.0</h1>
71
- <p>The installation process for dolphin emulator 5.0 varies depending on your operating system:</p>
72
- <h2>Windows</h2>
73
- <p>If you are using Windows 10 or Windows 11, you will need to install the 64-bit Visual C++ redistributable for Visual Studio 2022 before running dolphin. You can download it from here: <a href="https://aka.ms/vs/17/release/vc_redist.x64.exe">https://aka.ms/vs/17/release/vc_redist.x64.exe</a>. If you already have it installed, you can skip this step.</p>
74
- <p>To run dolphin on Windows, simply double-click on the Dolphin.exe file in the folder where you extracted it. You may see a warning message from Windows Defender or your antivirus software, but you can safely ignore it or create an exception for dolphin.</p>
75
- <h2>Mac OS X</h2>
76
- <p>If you are using macOS 10.15 Catalina or later, you may experience crashes or errors when running dolphin for the first time. This may be caused by Gatekeeper, a security feature that prevents unauthorized applications from running on your Mac. To fix this, you need to allow dolphin to run by following these steps:</p>
77
- <ol>
78
- <li>Right-click on the Dolphin.app file in the folder where you extracted it and select Open.</li>
79
- <li>You will see a message saying that dolphin cannot be opened because it is from an unidentified developer. Click on Cancel.</li>
80
- <li>Go to System Preferences > Security & Privacy > General and click on the lock icon at the bottom left to make changes.</li>
81
- <li>Enter your administrator password and click on Unlock.</li>
82
- <li>Under the section "Allow apps downloaded from:", you will see a message saying that dolphin was blocked from opening. Click on Open Anyway.</li>
83
- <li>You will see another message asking if you are sure you want to open dolphin. Click on Open.</li>
84
- </ol>
85
- <p>Now you should be able to run dolphin without any issues. You only need to do this once for each version of dolphin you download.</p>
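- <p>Alternatively, if you are comfortable with the terminal, macOS can clear the quarantine attribute that triggers the Gatekeeper prompt with <code>xattr -dr com.apple.quarantine</code> applied to the Dolphin.app bundle. Whether this step is needed, and whether it is permitted, depends on your macOS version and security settings, so treat it as an optional shortcut rather than the official route.</p>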
86
- <h2>Linux</h2>
87
- <p>If you are using Linux, you will need to install some dependencies before running dolphin. You can do this by using the following commands in a terminal:</p>
88
- <pre><code>sudo apt install cmake pkg-config git libao-dev libasound2-dev libavcodec-dev libavformat-dev libbluetooth-dev libenet-dev libgtk2.0-dev liblzo2-dev libminiupnpc-dev libopenal-dev libpulse-dev libreadline-dev libsfml-dev libsoil-dev libsoundtouch-dev libswscale-dev libusb-1.0-0-dev libwxbase3.0-dev libwxgtk3.0-dev libxext-dev libxrandr-dev portaudio19-dev zlib1g-dev libudev-dev qtbase5-private-dev </code></pre>
89
- <p>To run dolphin on Linux, simply open a terminal in the folder where you extracted it and type:</p>
90
- <pre><code>./dolphin-emu </code></pre>
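- <p>Dolphin also accepts command-line options, which is handy for launchers and scripts. For instance, the <code>-e</code> (exec) option boots a game image directly; a quick sketch, assuming a hypothetical ISO path (run <code>./dolphin-emu --help</code> to confirm which options your build supports):</p>
- <pre><code>./dolphin-emu -e ~/games/MyGame.iso </code></pre>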
91
- <h2>Android</h2>
92
- <p>If you are using Android, you will need to enable the installation of apps from unknown sources before installing dolphin. You can do this by going to Settings > Security > Unknown sources and toggling it on.</p>
93
- <p>To install dolphin on Android, simply tap on the APK file that you downloaded and follow the instructions on the screen. You may see a warning message from Google Play Protect or your antivirus app, but you can safely ignore it or create an exception for dolphin.</p>
94
- <h1>Configuring dolphin emulator 5.0</h1>
95
- <p>Once you have installed dolphin emulator 5.0, you can configure it to suit your preferences and needs. Dolphin has many settings that affect the performance, compatibility, and appearance of the games. You can access these settings by clicking on the Config button in the main menu of dolphin.</p>
96
- <p>The settings are divided into several tabs:</p>
97
- <h2>General</h2>
98
- <p>This tab contains some basic settings that affect how dolphin runs and behaves. Some of the options you can adjust are:</p>
99
- <ul>
100
- <li>Enable Dual Core: This option enables or disables the use of two CPU cores for emulation. This can improve performance, but may cause some instability or compatibility issues with some games. It is recommended to leave this option enabled unless you encounter problems.</li>
101
- <li>Enable Cheats: This option enables or disables the use of cheat codes for games. You can add cheat codes by clicking on the Tools button and selecting Cheats Manager.</li>
102
- <li>Show FPS: This option enables or disables the display of frames per second (FPS) on the screen. FPS is a measure of how smoothly the game is running. Higher FPS means better performance, but lower FPS means worse performance. The ideal FPS is 60 for most games, but some games may run at 30 FPS or lower by design.</li>
103
- <li>Use Fullscreen: This option enables or disables the use of fullscreen mode for dolphin. Fullscreen mode can improve performance and immersion, but may cause some issues with resolution or aspect ratio.</li>
104
- </ul>
105
- <h2>Interface</h2>
106
- <p>This tab contains some settings that affect how dolphin looks and feels. Some of the options you can adjust are:</p>
107
- <ul>
108
- <li>Theme: This option allows you to change the color scheme of dolphin's interface. You can choose between Dark, Light, or System themes.</li>
109
- <li>Show Toolbar: This option enables or disables the display of the toolbar at the top of dolphin's window. The toolbar contains buttons for accessing various features and functions of dolphin.</li>
110
- <li>Show Status Bar: This option enables or disables the display of the status bar at the bottom of dolphin's window. The status bar shows information such as emulation speed, CPU usage, and memory usage.</li>
111
- <li>Show Game List: This option enables or disables the display of the game list on the left side of dolphin's window. The game list shows all the games that you have added to dolphin, either by browsing your folders or by using the Add button. You can also right-click on a game to access more options, such as Properties, Wiki, or Open Containing Folder.</li>
112
- <li>Confirm Stop: This option enables or disables the confirmation dialog that appears when you try to stop the emulation. This can prevent accidental stops, but may be annoying if you want to quickly switch games.</li>
113
- </ul>
114
- <h2>Audio</h2>
115
- <p>This tab contains some settings that affect how dolphin handles the sound and music of the games. Some of the options you can adjust are:</p>
116
- <ul>
117
- <li>Audio Backend: This option allows you to choose which audio backend to use for dolphin. The audio backend is the software that communicates with your sound device and outputs the sound. You can choose between different backends, such as Cubeb, OpenAL, PulseAudio, or XAudio2. The best backend may vary depending on your system and device, so you may need to experiment to find the one that works best for you.</li>
118
- <li>Volume: This option allows you to adjust the volume of dolphin's sound output. You can drag the slider to increase or decrease the volume, or mute it completely.</li>
119
- <li>DSP Emulation Engine: This option allows you to choose which DSP emulation engine to use for dolphin. The DSP (Digital Signal Processor) is a hardware component of the GameCube and Wii that processes the sound and effects of the games. You can choose between different engines, such as HLE (High-Level Emulation) or LLE (Low-Level Emulation). HLE is faster and more compatible, but may have some inaccuracies or glitches. LLE is more accurate and faithful, but may be slower or require more resources.</li>
120
- </ul>
121
- <h2>Graphics</h2>
122
- <p>This tab contains some settings that affect how dolphin renders the graphics and visuals of the games. Some of the options you can adjust are:</p>
123
- <h3>General</h3>
124
- <ul>
125
- <li>Backend: This option allows you to choose which video backend to use for dolphin. The video backend is the software that communicates with your graphics card and outputs the graphics. You can choose between different backends, such as Direct3D 11, Direct3D 12, OpenGL, or Vulkan. The best backend may vary depending on your system and device, so you may need to experiment to find the one that works best for you.</li>
126
- <li>Adapter: This option allows you to choose which graphics adapter to use for dolphin. The graphics adapter is the hardware component of your system that handles the graphics processing. You can choose between different adapters, such as your integrated graphics card or your dedicated graphics card. The best adapter may vary depending on your system and device, so you may need to experiment to find the one that works best for you.</li>
127
- <li>Aspect Ratio: This option allows you to choose which aspect ratio to use for dolphin. The aspect ratio is the ratio of the width and height of the screen. You can choose between different ratios, such as Auto (which matches the game's original ratio), Force 16:9 (which stretches the game to fit a widescreen monitor), Force 4:3 (which shrinks the game to fit a standard monitor), or Stretch to Window (which fills the entire window regardless of ratio).</li>
128
- <li>V-Sync: This option enables or disables vertical synchronization (V-Sync) for dolphin. V-Sync is a feature that synchronizes the frame rate of dolphin with the refresh rate of your monitor. This can prevent screen tearing, which is a visual artifact that occurs when dolphin renders faster than your monitor can display. However, V-Sync may also introduce input lag or reduce performance, so you may want to disable it if you prefer responsiveness or speed over smoothness.</li>
129
- </ul>
130
- <h3>Enhancements</h3>
131
- <ul>
132
- <li>Internal Resolution: This option allows you to choose which internal resolution to use for dolphin. The internal resolution is the resolution at which dolphin renders the game before scaling it to fit your screen. You can choose between different resolutions, such as Native (which matches the game's original resolution), 2x Native (which doubles the game's original resolution), 4x Native (which quadruples the game's original resolution), or Auto (which adjusts the resolution based on your window size). Higher resolutions can improve the clarity and sharpness of the game, but may also require more resources and reduce performance.</li>
133
- <li>Anti-Aliasing: This option allows you to choose which anti-aliasing mode to use for dolphin. Anti-aliasing is a feature that smooths out the jagged edges of the game's graphics. You can choose between different modes, such as None (which disables anti-aliasing), FXAA (which applies a fast and simple anti-aliasing filter), SSAA (which applies a high-quality and intensive anti-aliasing filter), or MSAA (which applies a moderate-quality and moderate-intensity anti-aliasing filter). Higher modes can improve the smoothness and quality of the game, but may also require more resources and reduce performance.</li>
134
- <li>Anisotropic Filtering: This option allows you to choose which anisotropic filtering level to use for dolphin. Anisotropic filtering is a feature that enhances the texture quality of the game's graphics. You can choose between different levels, such as 1x (which disables anisotropic filtering), 2x, 4x, 8x, or 16x. Higher levels can improve the detail and crispness of the game, but may also require more resources and reduce performance.</li>
135
- <li>Post-Processing Effects: This option allows you to choose which post-processing effects to apply to dolphin. Post-processing effects are features that modify the appearance of the game's graphics after they are rendered. You can choose between different effects, such as Bloom (which adds a soft glow to bright areas), FXAA (which applies a fast and simple anti-aliasing filter), SMAA (which applies a high-quality and intensive anti-aliasing filter), or Custom (which allows you to load your own post-processing shader). Different effects can create different moods and atmospheres for the game, but may also require more resources and reduce performance.</li>
136
- </ul>
137
- <h3>Hacks</h3>
138
- <ul>
139
- <li>Skip EFB Access from CPU: This option enables or disables skipping embedded framebuffer (EFB) access from the CPU. The EFB is a memory buffer that stores the game's graphics before they are displayed on the screen. Some games may access the EFB from the CPU to perform certain effects or functions, such as heat vision, motion blur, or collision detection. Skipping EFB access from the CPU can improve performance, but may also break some effects or functions (a per-game configuration sketch follows this list).</li>
140
- <li>Ignore Format Changes: This option enables or disables ignoring format changes in the EFB. Format changes are changes in the pixel format or color depth of the EFB. Some games may change the format of the EFB to perform certain effects or functions, such as depth of field, fog, or lighting. Ignoring format changes can improve performance, but may also break some effects or functions.</li>
141
- <li>Store EFB Copies to Texture Only: This option enables or disables storing EFB copies to texture only. EFB copies are copies of the EFB that are stored in another memory buffer for later use. Some games may use EFB copies to perform certain effects or functions, such as reflections, shadows, or water. Storing EFB copies to texture only can improve performance, but may also reduce accuracy or quality.</li>
142
- <li>Texture Cache Accuracy: This option allows you to choose which texture cache accuracy level to use for dolphin. Texture cache is a memory buffer that stores the game's textures before they are displayed on the screen. Some games may update or modify their textures frequently or unpredictably, which can cause issues with dolphin's texture cache. You can choose between different levels, such as Fast (which has low accuracy and high performance), Medium (which has moderate accuracy and moderate performance), or Safe (which has high accuracy and low performance). Higher levels can prevent texture issues or glitches, but may also require more resources and reduce performance.</li>
143
- </ul>
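- <p>Most of these hacks can also be set per game rather than globally: right-clicking a game in the game list and opening its properties writes overrides into a game-specific INI file. As a rough illustration only -- the section and key names below are from memory and can differ between dolphin versions, so prefer editing through the properties dialog and treat this sketch as an assumption:</p>
- <pre><code>[Video_Settings]
- EFBAccessEnable = False
- MSAA = 0
- </code></pre>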
144
- <h1>Troubleshooting dolphin emulator 5.0</h1>
145
- <p>Even though dolphin emulator 5.0 is a very advanced and reliable software, you may still encounter some problems or errors when using it. Here are some tips to troubleshoot some common issues:</p>
146
- <h2>Game does not start or crashes</h2>
147
- <p>If your game does not start or crashes when you try to run it on dolphin, you may have one of these problems:</p>
148
- <ul>
149
- <li>Your game file is corrupted or incomplete. You can check the integrity of your game file by right-clicking on it in dolphin's game list and selecting Properties > Verify Integrity. If there are any errors or missing files, you will need to redownload or re-rip your game from a reliable source.</li>
150
- <li>Your game is not compatible with dolphin. You can check the compatibility of your game by visiting dolphin's wiki: <a href="https://wiki.dolphin-emu.org/">https://wiki.dolphin-emu.org/</a>.</li>
- </ul>
- <h1>Game compatibility and performance</h1>
151
- <p>Another important aspect of using dolphin emulator 5.0 is the game compatibility and performance. Not all games will run perfectly on dolphin, and some may have issues or glitches that affect the gameplay. You can check the compatibility of your game by visiting dolphin's wiki page: <a href="https://wiki.dolphin-emu.org/index.php?title=Installing_Dolphin">https://wiki.dolphin-emu.org/index.php?title=Installing_Dolphin</a>. Here you can find information about the game's status, problems, enhancements, configuration, and version compatibility. You can also contribute to the wiki by reporting your own experiences with the game.</p>
152
- <p>The performance of your game will depend on several factors, such as your system specifications, your dolphin settings, and your game settings. Generally, you will need a powerful CPU and GPU to run dolphin at high resolutions and settings. You can also try to tweak your dolphin settings to optimize the performance for your game. For example, you can lower the internal resolution, disable anti-aliasing, or enable some hacks to improve the speed. However, this may also reduce the quality or accuracy of the game.</p>
153
- <p>Some games may also have specific settings or requirements that you need to follow to run them properly on dolphin. For example, some games may need a certain controller configuration, a certain video backend, or a certain DSP emulation engine. You can find these settings or requirements on the game's wiki page or in the game's properties window in dolphin.</p>
154
- <h1>Conclusion</h1>
155
- <p>Dolphin emulator 5.0 is a great software that allows you to play Nintendo GameCube and Wii games on your PC or Android device. It has many features and enhancements that improve the gaming experience and make it more enjoyable. However, it also has some challenges and limitations that you need to be aware of and overcome. In this article, I have shown you how to download, install, and configure dolphin emulator 5.0, as well as some tips to troubleshoot common issues. I hope you have found this article helpful and informative.</p>
156
- <p>If you have any questions or feedback about dolphin emulator 5.0, you can visit dolphin's official website: <a href="https://dolphin-emu.org/">https://dolphin-emu.org/</a>, where you can find more resources, such as forums, blogs, guides, FAQs, and support. You can also join dolphin's community on Discord: <a href="https://discord.gg/dolphin-emu">https://discord.gg/dolphin-emu</a>, where you can chat with other users and developers, ask for help, share your experiences, and have fun.</p>
157
- <p>Thank you for reading this article and happy gaming!</p>
158
- <h2>FAQs</h2>
159
- <ul>
160
- <li>Q: How do I add games to dolphin?</li>
161
- <li>A: You can add games to dolphin by clicking on the Add button in the main menu of dolphin and browsing your folders for the game files. Dolphin supports various file formats, such as ISO, GCM, WBFS, WAD, DOL, ELF, or CISO. You can also rip your own games from your GameCube or Wii discs using a compatible disc drive and software.</li>
162
- <li>Q: How do I update dolphin?</li>
163
- <li>A: You can update dolphin by downloading the latest version from the official website: <a href="https://dolphin-emu.org/download/">https://dolphin-emu.org/download/</a>. You can choose between beta versions or development versions. Beta versions are released every month and are more stable than development versions, but may not have the newest features or improvements. Development versions are released every time a developer makes a change to dolphin, and have the latest and greatest features, but may be less tested or have more bugs.</li>
164
- <li>Q: How do I use cheats on dolphin?</li>
165
- <li>A: You can use cheats on dolphin by enabling the Enable Cheats option in the General tab of the Config window in dolphin. Then you can add cheat codes by clicking on the Tools button and selecting Cheats Manager. You can find cheat codes for various games online or create your own using a hex editor.</li>
166
- <li>Q: How do I play online on dolphin?</li>
167
- <li>A: You can play online on dolphin by using either Netplay or Wiimmfi. Netplay is a feature that allows you to play multiplayer games with other users over the internet using dolphin's own servers. Wiimmfi is a service that allows you to connect to Nintendo's official servers and play online games that support Nintendo Wi-Fi Connection.</li>
168
- <li>Q: How do I use custom textures or mods on dolphin?</li>
169
- <li>A: You can use custom textures or mods on dolphin by enabling the Load Custom Textures option in the Advanced tab of the Graphics window in dolphin. Then you can place your custom textures or mods in a folder named after the game's ID inside the Load Textures folder in dolphin's user directory. You can find custom textures or mods for various games online or create your own using a graphics editor.</li>
170
- </ul>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/1phancelerku/anime-remove-background/Free Download of PowerPoint 2016 for Windows 10 Learn How to Use Microsofts Powerful Presentation Software.md DELETED
@@ -1,121 +0,0 @@
1
-
2
- <h1>How to Download PowerPoint 2016 for Windows 10 for Free</h1>
3
- <p>PowerPoint is one of the most popular and powerful presentation programs in the world. It allows you to create stunning slideshows with text, images, audio, video, charts, animations, transitions, and more. Whether you need to present your ideas, projects, reports, or proposals, PowerPoint can help you deliver your message effectively and professionally.</p>
4
- <p>PowerPoint 2016 is the latest version of PowerPoint that was released in September 2015 as part of Office 2016. It has many new features and improvements that make it easier and faster to create and share presentations. Some of the highlights include:</p>
5
- <h2>powerpoint 2016 free download for windows 10</h2><br /><p><b><b>DOWNLOAD</b> ===> <a href="https://jinyurl.com/2uNS96">https://jinyurl.com/2uNS96</a></b></p><br /><br />
6
- <ul>
7
- <li>Tell me what you want to do: A new search box that helps you find commands, functions, or help topics quickly.</li>
8
- <li>Smart Lookup: A new tool that lets you look up definitions, Wikipedia articles, or related searches on words or phrases in your presentation.</li>
9
- <li>Ink Equation: A new feature that lets you draw equations with your mouse or pen and convert them into symbols automatically.</li>
10
- <li>Six new chart types: A new set of charts that help you visualize financial or hierarchical data or reveal statistical properties in your data.</li>
11
- <li>Real-time co-authoring: A new feature that lets you work on a presentation with others at the same time and see their changes as they happen.</li>
12
- <li>Revision highlighting: A new feature that shows you who made what changes in a shared presentation.</li>
13
- </ul>
14
- <p>If you are using Windows 10, you might be wondering how you can download PowerPoint 2016 for free. In this article, we will show you how you can do that from different sources. We will also give you some tips and tricks on how to use PowerPoint 2016 on Windows 10 effectively.</p>
15
- <h2>How to Download PowerPoint 2016 for Windows 10 from Microsoft</h2>
16
- <p>The easiest and safest way to download PowerPoint 2016 for Windows 10 is from Microsoft's official website. Here are the steps you need to follow:</p>
17
- <ol>
18
- <li>Go to <a href="https://www.microsoft.com/en-us/microsoft-365/microsoft-365-and-office-resources">https://www.microsoft.com/en-us/microsoft-365/microsoft-365-and-office-resources</a> and find the Office Home & Business 2019 or Office Home & Student 2019 section. These are the two versions of Office that include PowerPoint 2016 as a standalone app.</li>
19
- <li>Click on Buy now or Try for free depending on whether you want to purchase or test the product. You will be redirected to a page where you can choose your payment method or start your free trial.</li>
20
- <li>After completing your purchase or signing up for a trial, you will receive an email with a link to download Office. Click on the link and follow the instructions to download and install Office on your computer.</li>
21
- <li>After installing Office, open PowerPoint 2016 and activate it with your product key or your Microsoft account. You can find your product key in the email you received or on the packaging of your Office product. You can also sign in with your Microsoft account that you used to purchase or try Office.</li>
22
- </ol>
23
- <p>Congratulations, you have successfully downloaded and installed PowerPoint 2016 for Windows 10 from Microsoft. You can now start creating and editing your presentations with the latest version of PowerPoint.</p>
24
- <p>To update PowerPoint 2016 to the latest version and access new features, you can go to File > Account > Update Options > Update Now. You can also enable automatic updates by going to File > Account > Update Options > Enable Updates.</p>
25
- <h2>How to Download PowerPoint 2016 for Windows 10 from Other Sources</h2>
26
- <p>If you don't want to buy or try Office from Microsoft, you might be looking for other ways to download PowerPoint 2016 for Windows 10. There are some alternative sources that claim to offer PowerPoint 2016 for free or at a discounted price, such as third-party sellers or Workplace Discount Program. However, you should be careful when downloading PowerPoint 2016 from these sources, as they might not be reliable or secure. Here are some pros and cons of downloading PowerPoint 2016 from other sources:</p>
27
- <table>
28
- <tr>
29
- <th>Pros</th>
30
- <th>Cons</th>
31
- </tr>
32
- <tr>
33
- <td>You might save some money or get a free copy of PowerPoint 2016.</td>
34
- <td>You might get a fake, pirated, or infected copy of PowerPoint 2016 that could harm your computer or compromise your data.</td>
35
- </tr>
36
- <tr>
37
- <td>You might get access to some features or functions that are not available in the official version of PowerPoint 2016.</td>
38
- <td>You might miss out on some features or functions that are only available in the official version of PowerPoint 2016.</td>
39
- </tr>
40
- <tr>
41
- <td>You might have more flexibility and control over the installation and activation process of PowerPoint 2016.</td>
42
- <td>You might have more difficulty and risk in the installation and activation process of PowerPoint 2016.</td>
43
- </tr>
44
- </table>
45
- <p>If you decide to download PowerPoint 2016 from other sources, you should take some precautions to verify the authenticity and security of the downloaded file. Here are some tips on how to do that:</p>
46
- <ul>
47
- <li>Check the reputation and reviews of the source before downloading anything. Look for signs of trustworthiness, such as ratings, testimonials, guarantees, etc.</li>
48
- <li>Compare the size and name of the downloaded file with the official version of PowerPoint 2016. Look for any discrepancies or anomalies that could indicate a corrupted or modified file.</li>
49
- <li>Scan the downloaded file with a reputable antivirus software before opening it. Look for any viruses, malware, spyware, or other threats that could harm your computer or compromise your data.</li>
50
- <li>Backup your data and create a restore point before installing the downloaded file. This way, you can recover your data and system in case something goes wrong during or after the installation.</li>
51
- </ul> <h2>Tips and Tricks for Using PowerPoint 2016 on Windows 10</h2>
52
- <p>Now that you have downloaded and installed PowerPoint 2016 on your Windows 10 computer, you might be wondering how to use it effectively and efficiently. PowerPoint 2016 has many features and functions that can help you create and share presentations with ease and confidence. Here are some tips and tricks on how to use PowerPoint 2016 on Windows 10:</p>
53
- <ul>
54
- <li>Use the Tell me what you want to do box to find commands, functions, or help topics quickly. You can access it by clicking on the light bulb icon on the ribbon or by pressing Alt + Q on your keyboard. You can type in what you want to do, such as insert a picture, add a slide, or change the theme, and PowerPoint will show you the relevant options or links.</li>
55
- <li>Use the Smart Lookup tool to look up definitions, Wikipedia articles, or related searches on words or phrases in your presentation. You can access it by right-clicking on a word or phrase and selecting Smart Lookup from the menu or by pressing Ctrl + Alt + L on your keyboard. You can also use the Insights pane to see more information or sources on your topic.</li>
56
- <li>Use the Ink Equation feature to draw equations with your mouse or pen and convert them into symbols automatically. You can access it by clicking on the Insert tab and then on Equation > Ink Equation. You can also use the Math pane to edit or correct your equation.</li>
57
- <li>Use the six new chart types to visualize financial or hierarchical data or reveal statistical properties in your data. You can access them by clicking on the Insert tab and then on Chart > Insert Chart. The new chart types are Treemap, Sunburst, Histogram, Pareto, Box and Whisker, and Waterfall.</li>
58
- <li>Use the real-time co-authoring feature to work on a presentation with others at the same time and see their changes as they happen. You can access it by saving your presentation to OneDrive or SharePoint and then sharing it with others. You can also use the Comments pane to communicate with your co-authors.</li>
59
- <li>Use the revision highlighting feature to see who made what changes in a shared presentation. You can access it by clicking on the Review tab and then on Compare > Compare. You can also use the Revisions pane to accept or reject changes.</li>
60
- </ul>
61
- <h2>Conclusion</h2>
62
- <p>In this article, we have shown you how to download PowerPoint 2016 for Windows 10 for free from different sources. We have also given you some tips and tricks on how to use PowerPoint 2016 on Windows 10 effectively. We hope that this article has been helpful and informative for you.</p>
100
- <p>If you want to learn more about PowerPoint 2016 and how to create and share presentations with it, you can check out these resources and links:</p>
101
- <ul>
102
- <li><a href="https://support.microsoft.com/en-us/office/what-s-new-in-powerpoint-2016-95c81c28-6901-45e5-b0d9-a8f3341aae25">https://support.microsoft.com/en-us/office/what-s-new-in-powerpoint-2016-95c81c28-6901-45e5-b0d9-a8f3341aae25</a>: A page that shows you what's new in PowerPoint 2016 and how to use its features.</li>
103
- <li><a href="https://support.microsoft.com/en-us/office/powerpoint-2016-training-40e8c930-cb0b-40d8-82c4-bd53d3398787">https://support.microsoft.com/en-us/office/powerpoint-2016-training-40e8c930-cb0b-40d8-82c4-bd53d3398787</a>: A page that provides you with free online training courses and videos on PowerPoint 2016.</li>
104
- <li><a href="https://support.microsoft.com/en-us/office/powerpoint-help-training-9b6431c7-17f3-40e8-8dc2-3d613cdcadca">https://support.microsoft.com/en-us/office/powerpoint-help-training-9b6431c7-17f3-40e8-8dc2-3d613cdcadca</a>: A page that offers you help and support on PowerPoint 2016, such as answers to frequently asked questions, troubleshooting tips, and contact information.</li>
105
- </ul>
106
- <h2>FAQs</h2>
107
- <p>Here are some common questions that users might have about PowerPoint 2016:</p>
108
- <ol>
109
- <li>How much does PowerPoint 2016 cost?</li>
110
- <p>PowerPoint 2016 is included in Office Home & Business 2019 or Office Home & Student 2019, which cost $249.99 and $149.99 respectively. You can also get PowerPoint 2016 as part of Microsoft 365 subscription plans, which start from $69.99 per year or $6.99 per month.</p>
111
- <li>Can I use PowerPoint 2016 without an internet connection?</li>
112
- <p>Yes, you can use PowerPoint 2016 offline after installing it on your computer. However, some features and functions might require an internet connection, such as Smart Lookup, real-time co-authoring, or online presentations. You can also use PowerPoint Online, which is a free web-based version of PowerPoint that works in your browser, but it has fewer features and functions than PowerPoint 2016.</p>
113
- <li>Can I use PowerPoint 2016 on other devices or platforms?</li>
114
- <p>Yes, you can use PowerPoint 2016 on other devices or platforms, such as Mac, iOS, Android, or Windows Mobile. However, some features and functions might vary or be unavailable depending on the device or platform. You can also use PowerPoint Online or PowerPoint Mobile, which are web-based and mobile versions of PowerPoint that work on any device or platform with an internet connection.</p>
115
- <li>Can I use PowerPoint 2016 with other versions of PowerPoint or Office?</li>
116
- <p>Yes, you can use PowerPoint 2016 with other versions of PowerPoint or Office, such as PowerPoint 2013, PowerPoint 2010, or Office 365. However, some features and functions might not be compatible or supported by older versions of PowerPoint or Office. You can also use the Compatibility Mode or the Compatibility Checker to ensure that your presentations can be opened and edited by other versions of PowerPoint or Office.</p>
117
- <li>Can I get help or support on PowerPoint 2016?</li>
118
- <p>Yes, you can get help or support on PowerPoint 2016 from various sources, such as Microsoft's website, online forums, blogs, videos, books, courses, etc. You can also contact Microsoft's customer service or technical support team by phone, email, chat, or social media.</p>
119
- </ol>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/232labs/VToonify/vtoonify/model/raft/download_models.sh DELETED
@@ -1,3 +0,0 @@
1
- #!/bin/bash
2
- wget https://www.dropbox.com/s/4j4z58wuv8o0mfz/models.zip
3
- unzip models.zip
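Note on the script above: it assumes wget is on PATH. On systems that ship only curl (macOS, some minimal containers), an equivalent fetch would be curl -L -O against the same Dropbox URL, followed by the same unzip step.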
 
 
 
 
spaces/AI-ZTH-03-23/5.StreamlitWikipediaChat/README.md DELETED
@@ -1,14 +0,0 @@
1
- ---
2
- title: 5.Streamlit-Wikipedia-Chat
3
- emoji: 🌐👨‍🏫👩‍🏫
4
- colorFrom: red
5
- colorTo: pink
6
- sdk: streamlit
7
- sdk_version: 1.17.0
8
- app_file: app.py
9
- pinned: false
10
- license: mit
11
- duplicated_from: awacke1/StreamlitWikipediaChat
12
- ---
13
-
14
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/distributions.py DELETED
@@ -1,102 +0,0 @@
1
- import torch
2
- import numpy as np
3
-
4
-
5
- class AbstractDistribution:
6
- def sample(self):
7
- raise NotImplementedError()
8
-
9
- def mode(self):
10
- raise NotImplementedError()
11
-
12
-
13
- class DiracDistribution(AbstractDistribution):
14
- def __init__(self, value):
15
- self.value = value
16
-
17
- def sample(self):
18
- return self.value
19
-
20
- def mode(self):
21
- return self.value
22
-
23
-
24
- class DiagonalGaussianDistribution(object):
25
- def __init__(self, parameters, deterministic=False):
26
- self.parameters = parameters
27
- self.mean, self.logvar = torch.chunk(parameters, 2, dim=1)
28
- self.logvar = torch.clamp(self.logvar, -30.0, 20.0)
29
- self.deterministic = deterministic
30
- self.std = torch.exp(0.5 * self.logvar)
31
- self.var = torch.exp(self.logvar)
32
- if self.deterministic:
33
- self.var = self.std = torch.zeros_like(self.mean).to(
34
- device=self.parameters.device
35
- )
36
-
37
- def sample(self):
38
- x = self.mean + self.std * torch.randn(self.mean.shape).to(
39
- device=self.parameters.device
40
- )
41
- return x
42
-
43
- def kl(self, other=None):
44
- if self.deterministic:
45
- return torch.Tensor([0.0])
46
- else:
47
- if other is None:
48
- return 0.5 * torch.mean(
49
- torch.pow(self.mean, 2) + self.var - 1.0 - self.logvar,
50
- dim=[1, 2, 3],
51
- )
52
- else:
53
- return 0.5 * torch.mean(
54
- torch.pow(self.mean - other.mean, 2) / other.var
55
- + self.var / other.var
56
- - 1.0
57
- - self.logvar
58
- + other.logvar,
59
- dim=[1, 2, 3],
60
- )
61
-
62
- def nll(self, sample, dims=[1, 2, 3]):
63
- if self.deterministic:
64
- return torch.Tensor([0.0])
65
- logtwopi = np.log(2.0 * np.pi)
66
- return 0.5 * torch.sum(
67
- logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var,
68
- dim=dims,
69
- )
70
-
71
- def mode(self):
72
- return self.mean
73
-
74
-
75
- def normal_kl(mean1, logvar1, mean2, logvar2):
76
- """
77
- source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12
78
- Compute the KL divergence between two gaussians.
79
- Shapes are automatically broadcasted, so batches can be compared to
80
- scalars, among other use cases.
81
- """
82
- tensor = None
83
- for obj in (mean1, logvar1, mean2, logvar2):
84
- if isinstance(obj, torch.Tensor):
85
- tensor = obj
86
- break
87
- assert tensor is not None, "at least one argument must be a Tensor"
88
-
89
- # Force variances to be Tensors. Broadcasting helps convert scalars to
90
- # Tensors, but it does not work for torch.exp().
91
- logvar1, logvar2 = [
92
- x if isinstance(x, torch.Tensor) else torch.tensor(x).to(tensor)
93
- for x in (logvar1, logvar2)
94
- ]
95
-
96
- return 0.5 * (
97
- -1.0
98
- + logvar2
99
- - logvar1
100
- + torch.exp(logvar1 - logvar2)
101
- + ((mean1 - mean2) ** 2) * torch.exp(-logvar2)
102
- )
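A usage sketch for the module above (illustrative only; the tensor shapes and the import path are assumptions, not part of the original file). DiagonalGaussianDistribution expects the mean and log-variance stacked along dim=1, and this variant of kl() reduces with a mean over the non-batch dims, so it returns one value per batch element:

import torch
from distributions import DiagonalGaussianDistribution, normal_kl  # assumed import path

# 4 channels along dim=1: the first 2 are means, the last 2 are log-variances.
params = torch.randn(3, 4, 8, 8)
dist = DiagonalGaussianDistribution(params)

z = dist.sample()    # shape (3, 2, 8, 8): one reparameterized sample per batch element
kl = dist.kl()       # shape (3,): KL to a standard normal, averaged over non-batch dims
nll = dist.nll(z)    # shape (3,): negative log-likelihood of the sample, summed over dims

# normal_kl broadcasts scalars against tensors, giving an elementwise KL map.
kl_map = normal_kl(dist.mean, dist.logvar, 0.0, 0.0)  # shape (3, 2, 8, 8)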
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/ASJMO/freegpt/client/js/highlightjs-copy.min.js DELETED
@@ -1 +0,0 @@
1
- class CopyButtonPlugin{constructor(options={}){self.hook=options.hook;self.callback=options.callback}"after:highlightElement"({el,text}){let button=Object.assign(document.createElement("button"),{innerHTML:"Copy",className:"hljs-copy-button"});button.dataset.copied=false;el.parentElement.classList.add("hljs-copy-wrapper");el.parentElement.appendChild(button);el.parentElement.style.setProperty("--hljs-theme-background",window.getComputedStyle(el).backgroundColor);button.onclick=function(){if(!navigator.clipboard)return;let newText=text;if(hook&&typeof hook==="function"){newText=hook(text,el)||text}navigator.clipboard.writeText(newText).then(function(){button.innerHTML="Copied!";button.dataset.copied=true;let alert=Object.assign(document.createElement("div"),{role:"status",className:"hljs-copy-alert",innerHTML:"Copied to clipboard"});el.parentElement.appendChild(alert);setTimeout(()=>{button.innerHTML="Copy";button.dataset.copied=false;el.parentElement.removeChild(alert);alert=null},2e3)}).then(function(){if(typeof callback==="function")return callback(newText,el)})}}}
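For reference, a minimal sketch of how this plugin is typically wired up with highlight.js (assuming highlight.js 11+ is loaded on the page before this file):

// Register the copy-button plugin, then highlight every code block on the page.
hljs.addPlugin(new CopyButtonPlugin());
document.querySelectorAll("pre code").forEach((el) => hljs.highlightElement(el));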
 
 
spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversation/[id]/summarize/+server.ts DELETED
@@ -1,56 +0,0 @@
- import { buildPrompt } from "$lib/buildPrompt";
- import { authCondition } from "$lib/server/auth";
- import { collections } from "$lib/server/database";
- import { generateFromDefaultEndpoint } from "$lib/server/generateFromDefaultEndpoint";
- import { defaultModel } from "$lib/server/models";
- import { error } from "@sveltejs/kit";
-
- export async function POST({ params, locals }) {
-     /*const convId = new ObjectId(params.id);
-
-     const conversation = await collections.conversations.findOne({
-         _id: convId,
-         ...authCondition(locals),
-     });
-
-     if (!conversation) {
-         throw error(404, "Conversation not found");
-     }
-
-     const firstMessage = conversation.messages.find((m) => m.from === "user");
-
-     const userPrompt =
-         `Please summarize the following message as a single sentence of less than 5 words:\n` +
-         firstMessage?.content;
-
-     const prompt = await buildPrompt({
-         messages: [{ from: "user", content: userPrompt }],
-         model: defaultModel,
-     });
-     const generated_text = await generateFromDefaultEndpoint(prompt);
-
-     if (generated_text) {
-         await collections.conversations.updateOne(
-             {
-                 _id: convId,
-                 ...authCondition(locals),
-             },
-             {
-                 $set: { title: generated_text },
-             }
-         );
-     }
-
-     return new Response(
-         JSON.stringify(
-             generated_text
-                 ? {
-                     title: generated_text,
-                 }
-                 : {}
-         ),
-         { headers: { "Content-Type": "application/json" } }
-     );*/
-
-     return new Response(JSON.stringify({}), { headers: { "Content-Type": "application/json" } });
- }
 
spaces/Adapter/CoAdapter/ldm/data/dataset_wikiart.py DELETED
@@ -1,67 +0,0 @@
- import json
- import os.path
-
- from PIL import Image
- from torch.utils.data import DataLoader
-
- from transformers import CLIPProcessor
- from torchvision.transforms import transforms
-
- import pytorch_lightning as pl
-
-
- class WikiArtDataset():
-     def __init__(self, meta_file):
-         super(WikiArtDataset, self).__init__()
-
-         self.files = []
-         with open(meta_file, 'r') as f:
-             js = json.load(f)
-             for img_path in js:
-                 img_name = os.path.splitext(os.path.basename(img_path))[0]
-                 caption = img_name.split('_')[-1]
-                 caption = caption.split('-')
-                 j = len(caption) - 1
-                 while j >= 0:
-                     if not caption[j].isdigit():
-                         break
-                     j -= 1
-                 if j < 0:
-                     continue
-                 sentence = ' '.join(caption[:j + 1])
-                 self.files.append({'img_path': os.path.join('datasets/wikiart', img_path), 'sentence': sentence})
-
-         version = 'openai/clip-vit-large-patch14'
-         self.processor = CLIPProcessor.from_pretrained(version)
-
-         self.jpg_transform = transforms.Compose([
-             transforms.Resize(512),
-             transforms.RandomCrop(512),
-             transforms.ToTensor(),
-         ])
-
-     def __getitem__(self, idx):
-         file = self.files[idx]
-
-         im = Image.open(file['img_path'])
-
-         im_tensor = self.jpg_transform(im)
-
-         clip_im = self.processor(images=im, return_tensors="pt")['pixel_values'][0]
-
-         return {'jpg': im_tensor, 'style': clip_im, 'txt': file['sentence']}
-
-     def __len__(self):
-         return len(self.files)
-
-
- class WikiArtDataModule(pl.LightningDataModule):
-     def __init__(self, meta_file, batch_size, num_workers):
-         super(WikiArtDataModule, self).__init__()
-         self.train_dataset = WikiArtDataset(meta_file)
-         self.batch_size = batch_size
-         self.num_workers = num_workers
-
-     def train_dataloader(self):
-         return DataLoader(self.train_dataset, batch_size=self.batch_size, shuffle=True, num_workers=self.num_workers,
-                           pin_memory=True)
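
A hypothetical usage sketch for the data module above; the meta-file path, batch size, and worker count are illustrative assumptions:

```python
dm = WikiArtDataModule('datasets/wikiart/meta.json', batch_size=4, num_workers=2)
batch = next(iter(dm.train_dataloader()))
print(batch['jpg'].shape)    # torch.Size([4, 3, 512, 512]) after resize + random crop
print(batch['style'].shape)  # CLIP pixel values feeding the style branch
print(batch['txt'][0])       # caption recovered from the image file name
```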
 
spaces/Adapting/TrendFlow/mypages/home.py DELETED
@@ -1,143 +0,0 @@
- import streamlit as st
- from .sidebar import render_sidebar
- from requests_toolkit import ArxivQuery,IEEEQuery,PaperWithCodeQuery
- from trendflow.lrt.clustering.clusters import SingleCluster
- from trendflow.lrt.clustering.config import Configuration
- from trendflow.lrt import ArticleList, LiteratureResearchTool
- from trendflow.lrt_instance import *
- from .charts import build_bar_charts
-
- def home():
-     # sidebar content
-     platforms, number_papers, start_year, end_year, hyperparams = render_sidebar()
-
-     # body head
-     with st.form("my_form", clear_on_submit=False):
-         st.markdown('''# 👋 Hi, enter your query here :)''')
-         query_input = st.text_input(
-             'Enter your query:',
-             placeholder='''e.g. "Machine learning"''',
-             # label_visibility='collapsed',
-             value=''
-         )
-
-         show_preview = st.checkbox('show paper preview')
-
-         # Every form must have a submit button.
-         submitted = st.form_submit_button("Search")
-
-     if submitted:
-         # body
-         render_body(platforms, number_papers, 5, query_input,
-                     show_preview, start_year, end_year,
-                     hyperparams,
-                     hyperparams['standardization'])
-
-
- def __preview__(platforms, num_papers, num_papers_preview, query_input, start_year, end_year):
-     with st.spinner('Searching...'):
-         paperInGeneral = st.empty()  # rough overview of the papers
-         paperInGeneral_md = '''# 0 Query Results Preview
- We have found following papers for you! (displaying 5 papers for each literature platforms)
- '''
-         if 'IEEE' in platforms:
-             paperInGeneral_md += '''## IEEE
- | ID| Paper Title | Publication Year |
- | -------- | -------- | -------- |
- '''
-             IEEEQuery.__setup_api_key__('vpd9yy325enruv27zj2d353e')
-             ieee = IEEEQuery.query(query_input, start_year, end_year, num_papers)
-             num_papers_preview = min(len(ieee), num_papers_preview)
-             for i in range(num_papers_preview):
-                 title = str(ieee[i]['title']).replace('\n', ' ')
-                 publication_year = str(ieee[i]['publication_year']).replace('\n', ' ')
-                 paperInGeneral_md += f'''|{i + 1}|{title}|{publication_year}|\n'''
-         if 'Arxiv' in platforms:
-             paperInGeneral_md += '''
- ## Arxiv
- | ID| Paper Title | Publication Year |
- | -------- | -------- | -------- |
- '''
-             arxiv = ArxivQuery.query(query_input, max_results=num_papers)
-             num_papers_preview = min(len(arxiv), num_papers_preview)
-             for i in range(num_papers_preview):
-                 title = str(arxiv[i]['title']).replace('\n', ' ')
-                 publication_year = str(arxiv[i]['published']).replace('\n', ' ')
-                 paperInGeneral_md += f'''|{i + 1}|{title}|{publication_year}|\n'''
-         if 'Paper with Code' in platforms:
-             paperInGeneral_md += '''
- ## Paper with Code
- | ID| Paper Title | Publication Year |
- | -------- | -------- | -------- |
- '''
-             pwc = PaperWithCodeQuery.query(query_input, items_per_page=num_papers)
-             num_papers_preview = min(len(pwc), num_papers_preview)
-             for i in range(num_papers_preview):
-                 title = str(pwc[i]['title']).replace('\n', ' ')
-                 publication_year = str(pwc[i]['published']).replace('\n', ' ')
-                 paperInGeneral_md += f'''|{i + 1}|{title}|{publication_year}|\n'''
-
-         paperInGeneral.markdown(paperInGeneral_md)
-
- def render_body(platforms, num_papers, num_papers_preview, query_input, show_preview: bool, start_year, end_year,
-                 hyperparams: dict, standardization=False):
-
-     tmp = st.empty()
-     if query_input != '':
-         tmp.markdown(f'You entered query: `{query_input}`')
-
-         # preview
-         if show_preview:
-             __preview__(platforms, num_papers, num_papers_preview, query_input, start_year, end_year)
-
-         with st.spinner("Clustering and generating..."):
-             # lrt results
-             ## baseline
-             if hyperparams['dimension_reduction'] == 'none' \
-                     and hyperparams['model_cpt'] == 'keyphrase-transformer' \
-                     and hyperparams['cluster_model'] == 'kmeans-euclidean':
-                 model = baseline_lrt
-             else:
-                 config = Configuration(
-                     plm='''all-mpnet-base-v2''',
-                     dimension_reduction=hyperparams['dimension_reduction'],
-                     clustering=hyperparams['cluster_model'],
-                     keywords_extraction=hyperparams['model_cpt']
-                 )
-                 model = LiteratureResearchTool(config)
-
-             generator = model.yield_(query_input, num_papers, start_year, end_year, max_k=hyperparams['max_k'],
-                                      platforms=platforms, standardization=standardization)
-             for i, plat in enumerate(platforms):
-                 clusters, articles = next(generator)
-                 st.markdown(f'''# {i + 1} {plat} Results''')
-                 clusters.sort()
-
-                 st.markdown(f'''## {i + 1}.1 Clusters Overview''')
-                 st.markdown(f'''In this section we show the overview of the clusters, more specifically,''')
-                 st.markdown(f'''\n- the number of papers in each cluster\n- the number of keyphrases of each cluster''')
-                 st.bokeh_chart(build_bar_charts(
-                     x_range=[f'Cluster {i + 1}' for i in range(len(clusters))],
-                     y_names=['Number of Papers', 'Number of Keyphrases'],
-                     y_data=[[len(c) for c in clusters], [len(c.get_keyphrases()) for c in clusters]]
-                 ))
-
-                 st.markdown(f'''## {i + 1}.2 Cluster Details''')
-                 st.markdown(f'''In this section we show the details of each cluster, including''')
-                 st.markdown(f'''\n- the article information in the cluster\n- the keyphrases of the cluster''')
-                 for j, cluster in enumerate(clusters):
-                     assert isinstance(cluster, SingleCluster)  # TODO: remove this line
-                     ids = cluster.get_elements()
-                     articles_in_cluster = ArticleList([articles[id] for id in ids])
-                     st.markdown(f'''**Cluster {j + 1}**''')
-                     st.dataframe(articles_in_cluster.to_dataframe())
-                     st.markdown(f'''The top 5 keyphrases of this cluster are:''')
-                     md = ''
-                     for keyphrase in cluster.top_5_keyphrases:
-                         md += f'''- `{keyphrase}`\n'''
-                     st.markdown(md)
-
-
-
-
-
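
Outside Streamlit, the non-baseline path in `render_body` boils down to the following sketch; the query, year range, and hyperparameter values are illustrative assumptions:

```python
config = Configuration(
    plm='all-mpnet-base-v2',
    dimension_reduction='pca',               # assumed option name
    clustering='kmeans-euclidean',
    keywords_extraction='keyphrase-transformer',
)
model = LiteratureResearchTool(config)
generator = model.yield_('machine learning', 50, 2018, 2023,
                         max_k=5, platforms=['IEEE'], standardization=False)
clusters, articles = next(generator)  # one (clusters, articles) pair per platform
```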
 
spaces/Adr740/Hadith_AI_Explorer/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: Hadith AI Explorer
- emoji: 🌖
- colorFrom: blue
- colorTo: green
- sdk: gradio
- sdk_version: 3.18.0
- app_file: app.py
- pinned: true
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/AiMimicry/sovits-models/vdecoder/hifigan/utils.py DELETED
@@ -1,68 +0,0 @@
- import glob
- import os
- import matplotlib
- import torch
- from torch.nn.utils import weight_norm
- # matplotlib.use("Agg")
- import matplotlib.pylab as plt
-
-
- def plot_spectrogram(spectrogram):
-     fig, ax = plt.subplots(figsize=(10, 2))
-     im = ax.imshow(spectrogram, aspect="auto", origin="lower",
-                    interpolation='none')
-     plt.colorbar(im, ax=ax)
-
-     fig.canvas.draw()
-     plt.close()
-
-     return fig
-
-
- def init_weights(m, mean=0.0, std=0.01):
-     classname = m.__class__.__name__
-     if classname.find("Conv") != -1:
-         m.weight.data.normal_(mean, std)
-
-
- def apply_weight_norm(m):
-     classname = m.__class__.__name__
-     if classname.find("Conv") != -1:
-         weight_norm(m)
-
-
- def get_padding(kernel_size, dilation=1):
-     return int((kernel_size*dilation - dilation)/2)
-
-
- def load_checkpoint(filepath, device):
-     assert os.path.isfile(filepath)
-     print("Loading '{}'".format(filepath))
-     checkpoint_dict = torch.load(filepath, map_location=device)
-     print("Complete.")
-     return checkpoint_dict
-
-
- def save_checkpoint(filepath, obj):
-     print("Saving checkpoint to {}".format(filepath))
-     torch.save(obj, filepath)
-     print("Complete.")
-
-
- def del_old_checkpoints(cp_dir, prefix, n_models=2):
-     pattern = os.path.join(cp_dir, prefix + '????????')
-     cp_list = glob.glob(pattern)  # get checkpoint paths
-     cp_list = sorted(cp_list)  # sort by iteration
-     if len(cp_list) > n_models:  # if more than n_models models are found
-         for cp in cp_list[:-n_models]:  # delete the oldest models other than the latest n_models
-             open(cp, 'w').close()  # empty file contents
-             os.unlink(cp)  # delete file (moves to trash when using Colab)
-
-
- def scan_checkpoint(cp_dir, prefix):
-     pattern = os.path.join(cp_dir, prefix + '????????')
-     cp_list = glob.glob(pattern)
-     if len(cp_list) == 0:
-         return None
-     return sorted(cp_list)[-1]
-
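
A hypothetical checkpoint-management sketch built from the helpers above; the directory and prefix are illustrative assumptions:

```python
cp_path = scan_checkpoint('checkpoints/', 'g_')        # newest file matching g_????????
if cp_path is not None:
    state = load_checkpoint(cp_path, device='cpu')
del_old_checkpoints('checkpoints/', 'g_', n_models=2)  # keep only the two newest
```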
 
spaces/AlgoveraAI/medical-image-classification/app.py DELETED
@@ -1,140 +0,0 @@
- import gradio as gr
-
- from ocean_lib.config import Config
- from ocean_lib.models.compute_input import ComputeInput
- from ocean_lib.ocean.ocean import Ocean
- from ocean_lib.web3_internal.constants import ZERO_ADDRESS
- from ocean_lib.web3_internal.currency import to_wei
- from ocean_lib.web3_internal.wallet import Wallet
-
- import os
- import time
-
- from io import StringIO, BytesIO
- from PIL import Image
- import pandas as pd
- import matplotlib.pyplot as plt
- import random
- import numpy as np
-
- config = Config('config.ini')
- ocean = Ocean(config)
-
- def compute(
-     private_key
- ):
-
-     wallet = Wallet(ocean.web3,
-                     private_key,
-                     transaction_timeout=20,
-                     block_confirmations=config.block_confirmations)
-
-     address = wallet.address
-
-     DATA_ddo = ocean.assets.resolve("did:op:62D5Db3778ABAABa808e53eB2AB28181aaCCF747")
-     data_token = ocean.get_data_token(DATA_ddo.data_token_address)
-     token_address = data_token.address
-
-     ALG_ddo = ocean.assets.resolve("did:op:7D87e472921536da4bd02CB566099C18ed2F40A5")
-     alg_token = ocean.get_data_token(ALG_ddo.data_token_address)
-
-     DATA_did = DATA_ddo.did
-     ALG_did = ALG_ddo.did
-
-     compute_service = DATA_ddo.get_service('compute')
-     algo_service = ALG_ddo.get_service('access')
-
-     # order & pay for dataset
-     dataset_order_requirements = ocean.assets.order(
-         DATA_did, wallet.address, service_type=compute_service.type
-     )
-     time.sleep(30)
-     DATA_order_tx_id = ocean.assets.pay_for_service(
-         ocean.web3,
-         dataset_order_requirements.amount,
-         dataset_order_requirements.data_token_address,
-         DATA_did,
-         compute_service.index,
-         ZERO_ADDRESS,
-         wallet,
-         dataset_order_requirements.computeAddress,
-     )
-     print('after data')
-     # order & pay for algo
-     algo_order_requirements = ocean.assets.order(
-         ALG_did, wallet.address, service_type=algo_service.type
-     )
-     time.sleep(30)
-     ALG_order_tx_id = ocean.assets.pay_for_service(
-         ocean.web3,
-         algo_order_requirements.amount,
-         algo_order_requirements.data_token_address,
-         ALG_did,
-         algo_service.index,
-         ZERO_ADDRESS,
-         wallet,
-         algo_order_requirements.computeAddress,
-     )
-     print('after algo')
-     compute_inputs = [ComputeInput(DATA_did,
-                                    DATA_order_tx_id,
-                                    compute_service.index)]
-
-     job_id = ocean.compute.start(
-         compute_inputs,
-         wallet,
-         algorithm_did=ALG_did,
-         algorithm_tx_id=ALG_order_tx_id,
-         algorithm_data_token=alg_token.address
-     )
-
-     status_dict = ocean.compute.status(DATA_did, job_id, wallet)
-     while status_dict['statusText'] != 'Job finished':
-         status_dict = ocean.compute.status(DATA_did, job_id, wallet)
-         time.sleep(10)
-
-     final_df_data = ocean.compute.result_file(DATA_did,
-                                               job_id,
-                                               0,
-                                               wallet)
-     s = str(final_df_data, 'utf-8')
-     data = StringIO(s)
-     final_df = pd.read_csv(data)  # .drop('Unnamed: 0', 1)
-
-     image_data = ocean.compute.result_file(DATA_did,
-                                            job_id,
-                                            1,
-                                            wallet)
-     image = Image.open(BytesIO(image_data))
-
-     image = np.array(image)
-
-     samps = random.choices([0,1,2,3,4,5,6,7,8,9], k=3)
-     imgs = []
-     for i in samps:
-         imgs.append(Image.fromarray(np.array(image)[300*i:300*i+300]))
-
-     print('compute done')
-     return *imgs, final_df
-
-
- # description = ()
-
-
- interface = gr.Interface(
-     compute,
-     [
-         gr.inputs.Textbox(label="Private Key"),
-     ],
-     [
-         gr.outputs.Image(label="Sample Results 1"),
-         gr.outputs.Image(label="Sample Results 2"),
-         gr.outputs.Image(label="Sample Results 3"),
-         gr.outputs.Dataframe(label="Final dataframe"),
-     ],
-     title="Inference demo for nCight-Algovera Medical Image Classification",
-     # description=description,
-     theme="huggingface",
- )
-
- interface.launch(debug=True, enable_queue=True)
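
The result handling above treats the downloaded image as a sheet of ten stacked 300-pixel-tall strips and samples three of them. A standalone sketch of that slicing (the sheet dimensions are illustrative assumptions):

```python
import random
import numpy as np
from PIL import Image

sheet = np.zeros((3000, 400, 3), dtype=np.uint8)  # stand-in for the downloaded result image
strips = [Image.fromarray(sheet[300 * i: 300 * i + 300])
          for i in random.choices(range(10), k=3)]  # three random 300-px rows
```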
 
spaces/Aloento/9Nine-VITS/generator.py DELETED
@@ -1,62 +0,0 @@
- import torch
- from torch import nn
- from torch.nn import Conv1d, ConvTranspose1d, functional as F
- from torch.nn.utils import weight_norm, remove_weight_norm
-
- import modules
- from commons import init_weights
-
-
- class Generator(torch.nn.Module):
-     def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
-         super(Generator, self).__init__()
-         self.num_kernels = len(resblock_kernel_sizes)
-         self.num_upsamples = len(upsample_rates)
-         self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
-         resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
-         self.ups = nn.ModuleList()
-         for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
-             self.ups.append(weight_norm(
-                 ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)),
-                                 k, u, padding=(k - u) // 2)))
-
-         self.resblocks = nn.ModuleList()
-         for i in range(len(self.ups)):
-             ch = upsample_initial_channel // (2 ** (i + 1))
-             for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
-                 self.resblocks.append(resblock(ch, k, d))
-
-         self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
-         self.ups.apply(init_weights)
-
-         if gin_channels != 0:
-             self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
-     def forward(self, x, g=None):
-         x = self.conv_pre(x)
-         if g is not None:
-             x = x + self.cond(g)
-
-         for i in range(self.num_upsamples):
-             x = F.leaky_relu(x, modules.LRELU_SLOPE)
-             x = self.ups[i](x)
-             xs = None
-             for j in range(self.num_kernels):
-                 if xs is None:
-                     xs = self.resblocks[i * self.num_kernels + j](x)
-                 else:
-                     xs += self.resblocks[i * self.num_kernels + j](x)
-             x = xs / self.num_kernels
-         x = F.leaky_relu(x)
-         x = self.conv_post(x)
-         x = torch.tanh(x)
-
-         return x
-
-     def remove_weight_norm(self):
-         print('Removing weight norm...')
-         for l in self.ups:
-             remove_weight_norm(l)
-         for l in self.resblocks:
-             l.remove_weight_norm()
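
A hypothetical instantiation of the `Generator` above; the hyperparameters below mirror common HiFi-GAN V1 settings and are assumptions, not values read from this repository's config:

```python
import torch

net_g = Generator(
    initial_channel=192,
    resblock='1',
    resblock_kernel_sizes=[3, 7, 11],
    resblock_dilation_sizes=[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
    upsample_rates=[8, 8, 2, 2],          # total upsampling factor 8*8*2*2 = 256
    upsample_initial_channel=512,
    upsample_kernel_sizes=[16, 16, 4, 4],
)
z = torch.randn(1, 192, 32)               # (batch, channels, frames)
wav = net_g(z)                            # (1, 1, 32 * 256) = (1, 1, 8192)
```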
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py DELETED
@@ -1,358 +0,0 @@
- # Copyright 2023 Katherine Crowson and The HuggingFace Team. All rights reserved.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- import math
- from dataclasses import dataclass
- from typing import List, Optional, Tuple, Union
-
- import numpy as np
- import torch
-
- from ..configuration_utils import ConfigMixin, register_to_config
- from ..utils import BaseOutput, logging, randn_tensor
- from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin
-
-
- logger = logging.get_logger(__name__)  # pylint: disable=invalid-name
-
-
- @dataclass
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->EulerAncestralDiscrete
- class EulerAncestralDiscreteSchedulerOutput(BaseOutput):
-     """
-     Output class for the scheduler's step function output.
-
-     Args:
-         prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
-             Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
-             denoising loop.
-         pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
-             The predicted denoised sample (x_{0}) based on the model output from the current timestep.
-             `pred_original_sample` can be used to preview progress or for guidance.
-     """
-
-     prev_sample: torch.FloatTensor
-     pred_original_sample: Optional[torch.FloatTensor] = None
-
-
- # Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar
- def betas_for_alpha_bar(
-     num_diffusion_timesteps,
-     max_beta=0.999,
-     alpha_transform_type="cosine",
- ):
-     """
-     Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of
-     (1-beta) over time from t = [0,1].
-
-     Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up
-     to that part of the diffusion process.
-
-
-     Args:
-         num_diffusion_timesteps (`int`): the number of betas to produce.
-         max_beta (`float`): the maximum beta to use; use values lower than 1 to
-                      prevent singularities.
-         alpha_transform_type (`str`, *optional*, default to `cosine`): the type of noise schedule for alpha_bar.
-                      Choose from `cosine` or `exp`
-
-     Returns:
-         betas (`np.ndarray`): the betas used by the scheduler to step the model outputs
-     """
-     if alpha_transform_type == "cosine":
-
-         def alpha_bar_fn(t):
-             return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2
-
-     elif alpha_transform_type == "exp":
-
-         def alpha_bar_fn(t):
-             return math.exp(t * -12.0)
-
-     else:
-         raise ValueError(f"Unsupported alpha_transform_type: {alpha_transform_type}")
-
-     betas = []
-     for i in range(num_diffusion_timesteps):
-         t1 = i / num_diffusion_timesteps
-         t2 = (i + 1) / num_diffusion_timesteps
-         betas.append(min(1 - alpha_bar_fn(t2) / alpha_bar_fn(t1), max_beta))
-     return torch.tensor(betas, dtype=torch.float32)
-
-
- class EulerAncestralDiscreteScheduler(SchedulerMixin, ConfigMixin):
-     """
-     Ancestral sampling with Euler method steps. Based on the original k-diffusion implementation by Katherine Crowson:
-     https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72
-
-     [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
-     function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
-     [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
-     [`~SchedulerMixin.from_pretrained`] functions.
-
-     Args:
-         num_train_timesteps (`int`): number of diffusion steps used to train the model.
-         beta_start (`float`): the starting `beta` value of inference.
-         beta_end (`float`): the final `beta` value.
-         beta_schedule (`str`):
-             the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
-             `linear` or `scaled_linear`.
-         trained_betas (`np.ndarray`, optional):
-             option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
-         prediction_type (`str`, default `epsilon`, optional):
-             prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion
-             process), `sample` (directly predicting the noisy sample) or `v_prediction` (see section 2.4
-             https://imagen.research.google/video/paper.pdf)
-         timestep_spacing (`str`, default `"linspace"`):
-             The way the timesteps should be scaled. Refer to Table 2. of [Common Diffusion Noise Schedules and Sample
-             Steps are Flawed](https://arxiv.org/abs/2305.08891) for more information.
-         steps_offset (`int`, default `0`):
-             an offset added to the inference steps. You can use a combination of `offset=1` and
-             `set_alpha_to_one=False`, to make the last step use step 0 for the previous alpha product, as done in
-             stable diffusion.
-     """
-
-     _compatibles = [e.name for e in KarrasDiffusionSchedulers]
-     order = 1
-
-     @register_to_config
-     def __init__(
-         self,
-         num_train_timesteps: int = 1000,
-         beta_start: float = 0.0001,
-         beta_end: float = 0.02,
-         beta_schedule: str = "linear",
-         trained_betas: Optional[Union[np.ndarray, List[float]]] = None,
-         prediction_type: str = "epsilon",
-         timestep_spacing: str = "linspace",
-         steps_offset: int = 0,
-     ):
-         if trained_betas is not None:
-             self.betas = torch.tensor(trained_betas, dtype=torch.float32)
-         elif beta_schedule == "linear":
-             self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32)
-         elif beta_schedule == "scaled_linear":
-             # this schedule is very specific to the latent diffusion model.
-             self.betas = (
-                 torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2
-             )
-         elif beta_schedule == "squaredcos_cap_v2":
-             # Glide cosine schedule
-             self.betas = betas_for_alpha_bar(num_train_timesteps)
-         else:
-             raise NotImplementedError(f"{beta_schedule} is not implemented for {self.__class__}")
-
-         self.alphas = 1.0 - self.betas
-         self.alphas_cumprod = torch.cumprod(self.alphas, dim=0)
-
-         sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
-         sigmas = np.concatenate([sigmas[::-1], [0.0]]).astype(np.float32)
-         self.sigmas = torch.from_numpy(sigmas)
-
-         # setable values
-         self.num_inference_steps = None
-         timesteps = np.linspace(0, num_train_timesteps - 1, num_train_timesteps, dtype=float)[::-1].copy()
-         self.timesteps = torch.from_numpy(timesteps)
-         self.is_scale_input_called = False
-
-     @property
-     def init_noise_sigma(self):
-         # standard deviation of the initial noise distribution
-         if self.config.timestep_spacing in ["linspace", "trailing"]:
-             return self.sigmas.max()
-
-         return (self.sigmas.max() ** 2 + 1) ** 0.5
-
-     def scale_model_input(
-         self, sample: torch.FloatTensor, timestep: Union[float, torch.FloatTensor]
-     ) -> torch.FloatTensor:
-         """
-         Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm.
-
-         Args:
-             sample (`torch.FloatTensor`): input sample
-             timestep (`float` or `torch.FloatTensor`): the current timestep in the diffusion chain
-
-         Returns:
-             `torch.FloatTensor`: scaled input sample
-         """
-         if isinstance(timestep, torch.Tensor):
-             timestep = timestep.to(self.timesteps.device)
-         step_index = (self.timesteps == timestep).nonzero().item()
-         sigma = self.sigmas[step_index]
-         sample = sample / ((sigma**2 + 1) ** 0.5)
-         self.is_scale_input_called = True
-         return sample
-
-     def set_timesteps(self, num_inference_steps: int, device: Union[str, torch.device] = None):
-         """
-         Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
-
-         Args:
-             num_inference_steps (`int`):
-                 the number of diffusion steps used when generating samples with a pre-trained model.
-             device (`str` or `torch.device`, optional):
-                 the device to which the timesteps should be moved to. If `None`, the timesteps are not moved.
-         """
-         self.num_inference_steps = num_inference_steps
-
-         # "linspace", "leading", "trailing" corresponds to annotation of Table 2. of https://arxiv.org/abs/2305.08891
-         if self.config.timestep_spacing == "linspace":
-             timesteps = np.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps, dtype=float)[
-                 ::-1
-             ].copy()
-         elif self.config.timestep_spacing == "leading":
-             step_ratio = self.config.num_train_timesteps // self.num_inference_steps
-             # creates integer timesteps by multiplying by ratio
-             # casting to int to avoid issues when num_inference_step is power of 3
-             timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy().astype(float)
-             timesteps += self.config.steps_offset
-         elif self.config.timestep_spacing == "trailing":
-             step_ratio = self.config.num_train_timesteps / self.num_inference_steps
-             # creates integer timesteps by multiplying by ratio
-             # casting to int to avoid issues when num_inference_step is power of 3
-             timesteps = (np.arange(self.config.num_train_timesteps, 0, -step_ratio)).round().copy().astype(float)
-             timesteps -= 1
-         else:
-             raise ValueError(
-                 f"{self.config.timestep_spacing} is not supported. Please make sure to choose one of 'linspace', 'leading' or 'trailing'."
-             )
-
-         sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
-         sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas)
-         sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32)
-         self.sigmas = torch.from_numpy(sigmas).to(device=device)
-         if str(device).startswith("mps"):
-             # mps does not support float64
-             self.timesteps = torch.from_numpy(timesteps).to(device, dtype=torch.float32)
-         else:
-             self.timesteps = torch.from_numpy(timesteps).to(device=device)
-
-     def step(
-         self,
-         model_output: torch.FloatTensor,
-         timestep: Union[float, torch.FloatTensor],
-         sample: torch.FloatTensor,
-         generator: Optional[torch.Generator] = None,
-         return_dict: bool = True,
-     ) -> Union[EulerAncestralDiscreteSchedulerOutput, Tuple]:
-         """
-         Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
-         process from the learned model outputs (most often the predicted noise).
-
-         Args:
-             model_output (`torch.FloatTensor`): direct output from learned diffusion model.
-             timestep (`float`): current timestep in the diffusion chain.
-             sample (`torch.FloatTensor`):
-                 current instance of sample being created by diffusion process.
-             generator (`torch.Generator`, optional): Random number generator.
-             return_dict (`bool`): option for returning tuple rather than EulerAncestralDiscreteSchedulerOutput class
-
-         Returns:
-             [`~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput`] or `tuple`:
-             [`~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput`] if `return_dict` is True, otherwise
-             a `tuple`. When returning a tuple, the first element is the sample tensor.
-
-         """
-
-         if (
-             isinstance(timestep, int)
-             or isinstance(timestep, torch.IntTensor)
-             or isinstance(timestep, torch.LongTensor)
-         ):
-             raise ValueError(
-                 (
-                     "Passing integer indices (e.g. from `enumerate(timesteps)`) as timesteps to"
-                     " `EulerDiscreteScheduler.step()` is not supported. Make sure to pass"
-                     " one of the `scheduler.timesteps` as a timestep."
-                 ),
-             )
-
-         if not self.is_scale_input_called:
-             logger.warning(
-                 "The `scale_model_input` function should be called before `step` to ensure correct denoising. "
-                 "See `StableDiffusionPipeline` for a usage example."
-             )
-
-         if isinstance(timestep, torch.Tensor):
-             timestep = timestep.to(self.timesteps.device)
-
-         step_index = (self.timesteps == timestep).nonzero().item()
-         sigma = self.sigmas[step_index]
-
-         # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise
-         if self.config.prediction_type == "epsilon":
-             pred_original_sample = sample - sigma * model_output
-         elif self.config.prediction_type == "v_prediction":
-             # * c_out + input * c_skip
-             pred_original_sample = model_output * (-sigma / (sigma**2 + 1) ** 0.5) + (sample / (sigma**2 + 1))
-         elif self.config.prediction_type == "sample":
-             raise NotImplementedError("prediction_type not implemented yet: sample")
-         else:
-             raise ValueError(
-                 f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`"
-             )
-
-         sigma_from = self.sigmas[step_index]
-         sigma_to = self.sigmas[step_index + 1]
-         sigma_up = (sigma_to**2 * (sigma_from**2 - sigma_to**2) / sigma_from**2) ** 0.5
-         sigma_down = (sigma_to**2 - sigma_up**2) ** 0.5
-
-         # 2. Convert to an ODE derivative
-         derivative = (sample - pred_original_sample) / sigma
-
-         dt = sigma_down - sigma
-
-         prev_sample = sample + derivative * dt
-
-         device = model_output.device
-         noise = randn_tensor(model_output.shape, dtype=model_output.dtype, device=device, generator=generator)
-
-         prev_sample = prev_sample + noise * sigma_up
-
-         if not return_dict:
-             return (prev_sample,)
-
-         return EulerAncestralDiscreteSchedulerOutput(
-             prev_sample=prev_sample, pred_original_sample=pred_original_sample
-         )
-
-     # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler.add_noise
-     def add_noise(
-         self,
-         original_samples: torch.FloatTensor,
-         noise: torch.FloatTensor,
-         timesteps: torch.FloatTensor,
-     ) -> torch.FloatTensor:
-         # Make sure sigmas and timesteps have the same device and dtype as original_samples
-         sigmas = self.sigmas.to(device=original_samples.device, dtype=original_samples.dtype)
-         if original_samples.device.type == "mps" and torch.is_floating_point(timesteps):
-             # mps does not support float64
-             schedule_timesteps = self.timesteps.to(original_samples.device, dtype=torch.float32)
-             timesteps = timesteps.to(original_samples.device, dtype=torch.float32)
-         else:
-             schedule_timesteps = self.timesteps.to(original_samples.device)
-             timesteps = timesteps.to(original_samples.device)
-
-         step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps]
-
-         sigma = sigmas[step_indices].flatten()
-         while len(sigma.shape) < len(original_samples.shape):
-             sigma = sigma.unsqueeze(-1)
-
-         noisy_samples = original_samples + noise * sigma
-         return noisy_samples
-
-     def __len__(self):
-         return self.config.num_train_timesteps
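
A minimal denoising-loop sketch using the scheduler's own API as defined above; the UNet is replaced by a stand-in callable, and the shapes and step count are illustrative assumptions:

```python
import torch

scheduler = EulerAncestralDiscreteScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(num_inference_steps=20)

sample = torch.randn(1, 4, 64, 64) * scheduler.init_noise_sigma
for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)
    noise_pred = torch.zeros_like(model_input)  # stand-in for a UNet epsilon prediction
    sample = scheduler.step(noise_pred, t, sample).prev_sample
```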
 
spaces/Andy1621/uniformer_image_detection/configs/hrnet/cascade_rcnn_hrnetv2p_w18_20e_coco.py DELETED
@@ -1,10 +0,0 @@
- _base_ = './cascade_rcnn_hrnetv2p_w32_20e_coco.py'
- # model settings
- model = dict(
-     pretrained='open-mmlab://msra/hrnetv2_w18',
-     backbone=dict(
-         extra=dict(
-             stage2=dict(num_channels=(18, 36)),
-             stage3=dict(num_channels=(18, 36, 72)),
-             stage4=dict(num_channels=(18, 36, 72, 144)))),
-     neck=dict(type='HRFPN', in_channels=[18, 36, 72, 144], out_channels=256))
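
This config only overrides the HRNet width; everything else is inherited from the `_base_` file via mmcv-style recursive dict merging. A simplified sketch of that merge semantics (illustrative only, not mmcv's actual implementation):

```python
def merge(base: dict, override: dict) -> dict:
    # Nested dicts merge key-by-key; scalars and tuples replace outright.
    out = dict(base)
    for k, v in override.items():
        out[k] = merge(out[k], v) if isinstance(v, dict) and isinstance(out.get(k), dict) else v
    return out

base = {'backbone': {'type': 'HRNet', 'extra': {'stage2': {'num_channels': (32, 64)}}}}
child = {'backbone': {'extra': {'stage2': {'num_channels': (18, 36)}}}}
merged = merge(base, child)
print(merged['backbone']['type'])  # 'HRNet' survives; only num_channels changed
```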
 
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/three_interpolate.py DELETED
@@ -1,68 +0,0 @@
- from typing import Tuple
-
- import torch
- from torch.autograd import Function
-
- from ..utils import ext_loader
-
- ext_module = ext_loader.load_ext(
-     '_ext', ['three_interpolate_forward', 'three_interpolate_backward'])
-
-
- class ThreeInterpolate(Function):
-     """Performs weighted linear interpolation on 3 features.
-
-     Please refer to `Paper of PointNet++ <https://arxiv.org/abs/1706.02413>`_
-     for more details.
-     """
-
-     @staticmethod
-     def forward(ctx, features: torch.Tensor, indices: torch.Tensor,
-                 weight: torch.Tensor) -> torch.Tensor:
-         """
-         Args:
-             features (Tensor): (B, C, M) Features descriptors to be
-                 interpolated
-             indices (Tensor): (B, n, 3) index three nearest neighbors
-                 of the target features in features
-             weight (Tensor): (B, n, 3) weights of interpolation
-
-         Returns:
-             Tensor: (B, C, N) tensor of the interpolated features
-         """
-         assert features.is_contiguous()
-         assert indices.is_contiguous()
-         assert weight.is_contiguous()
-
-         B, c, m = features.size()
-         n = indices.size(1)
-         ctx.three_interpolate_for_backward = (indices, weight, m)
-         output = torch.cuda.FloatTensor(B, c, n)
-
-         ext_module.three_interpolate_forward(
-             features, indices, weight, output, b=B, c=c, m=m, n=n)
-         return output
-
-     @staticmethod
-     def backward(
-             ctx, grad_out: torch.Tensor
-     ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
-         """
-         Args:
-             grad_out (Tensor): (B, C, N) tensor with gradients of outputs
-
-         Returns:
-             Tensor: (B, C, M) tensor with gradients of features
-         """
-         idx, weight, m = ctx.three_interpolate_for_backward
-         B, c, n = grad_out.size()
-
-         grad_features = torch.cuda.FloatTensor(B, c, m).zero_()
-         grad_out_data = grad_out.data.contiguous()
-
-         ext_module.three_interpolate_backward(
-             grad_out_data, idx, weight, grad_features.data, b=B, c=c, n=n, m=m)
-         return grad_features, None, None
-
-
- three_interpolate = ThreeInterpolate.apply
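
A hypothetical call sketch following the docstring shapes above; it requires a CUDA build of mmcv, since the op allocates `torch.cuda.FloatTensor` outputs, and the index dtype is an assumption:

```python
import torch

B, C, M, n = 2, 16, 128, 512
features = torch.randn(B, C, M, device='cuda')                              # descriptors to interpolate
indices = torch.randint(0, M, (B, n, 3), device='cuda', dtype=torch.int32)  # 3 nearest neighbors per point
weight = torch.softmax(torch.rand(B, n, 3, device='cuda'), dim=-1)          # interpolation weights

out = three_interpolate(features.contiguous(), indices.contiguous(), weight.contiguous())
print(out.shape)  # torch.Size([2, 16, 512])
```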
 
spaces/Ariharasudhan/YoloV5/utils/triton.py DELETED
@@ -1,85 +0,0 @@
- # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
- """ Utils to interact with the Triton Inference Server
- """
-
- import typing
- from urllib.parse import urlparse
-
- import torch
-
-
- class TritonRemoteModel:
-     """ A wrapper over a model served by the Triton Inference Server. It can
-     be configured to communicate over GRPC or HTTP. It accepts Torch Tensors
-     as input and returns them as outputs.
-     """
-
-     def __init__(self, url: str):
-         """
-         Keyword arguments:
-         url: Fully qualified address of the Triton server - e.g. grpc://localhost:8000
-         """
-
-         parsed_url = urlparse(url)
-         if parsed_url.scheme == "grpc":
-             from tritonclient.grpc import InferenceServerClient, InferInput
-
-             self.client = InferenceServerClient(parsed_url.netloc)  # Triton GRPC client
-             model_repository = self.client.get_model_repository_index()
-             self.model_name = model_repository.models[0].name
-             self.metadata = self.client.get_model_metadata(self.model_name, as_json=True)
-
-             def create_input_placeholders() -> typing.List[InferInput]:
-                 return [
-                     InferInput(i['name'], [int(s) for s in i["shape"]], i['datatype']) for i in self.metadata['inputs']]
-
-         else:
-             from tritonclient.http import InferenceServerClient, InferInput
-
-             self.client = InferenceServerClient(parsed_url.netloc)  # Triton HTTP client
-             model_repository = self.client.get_model_repository_index()
-             self.model_name = model_repository[0]['name']
-             self.metadata = self.client.get_model_metadata(self.model_name)
-
-             def create_input_placeholders() -> typing.List[InferInput]:
-                 return [
-                     InferInput(i['name'], [int(s) for s in i["shape"]], i['datatype']) for i in self.metadata['inputs']]
-
-         self._create_input_placeholders_fn = create_input_placeholders
-
-     @property
-     def runtime(self):
-         """Returns the model runtime"""
-         return self.metadata.get("backend", self.metadata.get("platform"))
-
-     def __call__(self, *args, **kwargs) -> typing.Union[torch.Tensor, typing.Tuple[torch.Tensor, ...]]:
-         """ Invokes the model. Parameters can be provided via args or kwargs.
-         args, if provided, are assumed to match the order of inputs of the model.
-         kwargs are matched with the model input names.
-         """
-         inputs = self._create_inputs(*args, **kwargs)
-         response = self.client.infer(model_name=self.model_name, inputs=inputs)
-         result = []
-         for output in self.metadata['outputs']:
-             tensor = torch.as_tensor(response.as_numpy(output['name']))
-             result.append(tensor)
-         return result[0] if len(result) == 1 else result
-
-     def _create_inputs(self, *args, **kwargs):
-         args_len, kwargs_len = len(args), len(kwargs)
-         if not args_len and not kwargs_len:
-             raise RuntimeError("No inputs provided.")
-         if args_len and kwargs_len:
-             raise RuntimeError("Cannot specify args and kwargs at the same time")
-
-         placeholders = self._create_input_placeholders_fn()
-         if args_len:
-             if args_len != len(placeholders):
-                 raise RuntimeError(f"Expected {len(placeholders)} inputs, got {args_len}.")
-             for input, value in zip(placeholders, args):
-                 input.set_data_from_numpy(value.cpu().numpy())
-         else:
-             for input in placeholders:
-                 value = kwargs[input.name]
-                 input.set_data_from_numpy(value.cpu().numpy())
-         return placeholders
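
A hypothetical usage sketch for `TritonRemoteModel`; the URL, port, and input shape are assumptions, and a Triton server must be serving exactly one model for the repository-index lookup above to pick the right name:

```python
import torch

model = TritonRemoteModel("grpc://localhost:8001")
print(model.runtime)              # e.g. "onnxruntime" or "pytorch", per the model metadata
im = torch.zeros(1, 3, 640, 640)  # single positional input, matched to the model's input order
pred = model(im)                  # a torch.Tensor, or a list for multi-output models
```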
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/install_lib.py DELETED
@@ -1,122 +0,0 @@
- import os
- import sys
- from itertools import product, starmap
- import distutils.command.install_lib as orig
-
-
- class install_lib(orig.install_lib):
-     """Don't add compiled flags to filenames of non-Python files"""
-
-     def run(self):
-         self.build()
-         outfiles = self.install()
-         if outfiles is not None:
-             # always compile, in case we have any extension stubs to deal with
-             self.byte_compile(outfiles)
-
-     def get_exclusions(self):
-         """
-         Return a collections.Sized collections.Container of paths to be
-         excluded for single_version_externally_managed installations.
-         """
-         all_packages = (
-             pkg
-             for ns_pkg in self._get_SVEM_NSPs()
-             for pkg in self._all_packages(ns_pkg)
-         )
-
-         excl_specs = product(all_packages, self._gen_exclusion_paths())
-         return set(starmap(self._exclude_pkg_path, excl_specs))
-
-     def _exclude_pkg_path(self, pkg, exclusion_path):
-         """
-         Given a package name and exclusion path within that package,
-         compute the full exclusion path.
-         """
-         parts = pkg.split('.') + [exclusion_path]
-         return os.path.join(self.install_dir, *parts)
-
-     @staticmethod
-     def _all_packages(pkg_name):
-         """
-         >>> list(install_lib._all_packages('foo.bar.baz'))
-         ['foo.bar.baz', 'foo.bar', 'foo']
-         """
-         while pkg_name:
-             yield pkg_name
-             pkg_name, sep, child = pkg_name.rpartition('.')
-
-     def _get_SVEM_NSPs(self):
-         """
-         Get namespace packages (list) but only for
-         single_version_externally_managed installations and empty otherwise.
-         """
-         # TODO: is it necessary to short-circuit here? i.e. what's the cost
-         # if get_finalized_command is called even when namespace_packages is
-         # False?
-         if not self.distribution.namespace_packages:
-             return []
-
-         install_cmd = self.get_finalized_command('install')
-         svem = install_cmd.single_version_externally_managed
-
-         return self.distribution.namespace_packages if svem else []
-
-     @staticmethod
-     def _gen_exclusion_paths():
-         """
-         Generate file paths to be excluded for namespace packages (bytecode
-         cache files).
-         """
-         # always exclude the package module itself
-         yield '__init__.py'
-
-         yield '__init__.pyc'
-         yield '__init__.pyo'
-
-         if not hasattr(sys, 'implementation'):
-             return
-
-         base = os.path.join(
-             '__pycache__', '__init__.' + sys.implementation.cache_tag)
-         yield base + '.pyc'
-         yield base + '.pyo'
-         yield base + '.opt-1.pyc'
-         yield base + '.opt-2.pyc'
-
-     def copy_tree(
-             self, infile, outfile,
-             preserve_mode=1, preserve_times=1, preserve_symlinks=0, level=1
-     ):
-         assert preserve_mode and preserve_times and not preserve_symlinks
-         exclude = self.get_exclusions()
-
-         if not exclude:
-             return orig.install_lib.copy_tree(self, infile, outfile)
-
-         # Exclude namespace package __init__.py* files from the output
-
-         from setuptools.archive_util import unpack_directory
-         from distutils import log
-
-         outfiles = []
-
-         def pf(src, dst):
-             if dst in exclude:
-                 log.warn("Skipping installation of %s (namespace package)",
-                          dst)
-                 return False
-
-             log.info("copying %s -> %s", src, os.path.dirname(dst))
-             outfiles.append(dst)
-             return dst
-
-         unpack_directory(infile, outfile, pf)
-         return outfiles
-
-     def get_outputs(self):
-         outputs = orig.install_lib.get_outputs(self)
-         exclude = self.get_exclusions()
-         if exclude:
-             return [f for f in outputs if f not in exclude]
-         return outputs
 
spaces/Awesimo/jojogan/e4e/README.md DELETED
@@ -1,142 +0,0 @@
- # Designing an Encoder for StyleGAN Image Manipulation
- <a href="https://arxiv.org/abs/2102.02766"><img src="https://img.shields.io/badge/arXiv-2008.00951-b31b1b.svg"></a>
- <a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg"></a>
- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](http://colab.research.google.com/github/omertov/encoder4editing/blob/main/notebooks/inference_playground.ipynb)
-
- > Recently, there has been a surge of diverse methods for performing image editing by employing pre-trained unconditional generators. Applying these methods on real images, however, remains a challenge, as it necessarily requires the inversion of the images into their latent space. To successfully invert a real image, one needs to find a latent code that reconstructs the input image accurately, and more importantly, allows for its meaningful manipulation. In this paper, we carefully study the latent space of StyleGAN, the state-of-the-art unconditional generator. We identify and analyze the existence of a distortion-editability tradeoff and a distortion-perception tradeoff within the StyleGAN latent space. We then suggest two principles for designing encoders in a manner that allows one to control the proximity of the inversions to regions that StyleGAN was originally trained on. We present an encoder based on our two principles that is specifically designed for facilitating editing on real images by balancing these tradeoffs. By evaluating its performance qualitatively and quantitatively on numerous challenging domains, including cars and horses, we show that our inversion method, followed by common editing techniques, achieves superior real-image editing quality, with only a small reconstruction accuracy drop.
-
- <p align="center">
- <img src="docs/teaser.jpg" width="800px"/>
- </p>
-
- ## Description
- Official Implementation of "<a href="https://arxiv.org/abs/2102.02766">Designing an Encoder for StyleGAN Image Manipulation</a>" paper for both training and evaluation.
- The e4e encoder is specifically designed to complement existing image manipulation techniques performed over StyleGAN's latent space.
-
- ## Recent Updates
- `2021.03.25`: Add pose editing direction.
-
- ## Getting Started
- ### Prerequisites
- - Linux or macOS
- - NVIDIA GPU + CUDA CuDNN (CPU may be possible with some modifications, but is not inherently supported)
- - Python 3
-
- ### Installation
- - Clone the repository:
- ```
- git clone https://github.com/omertov/encoder4editing.git
- cd encoder4editing
- ```
- - Dependencies:
- We recommend running this repository using [Anaconda](https://docs.anaconda.com/anaconda/install/).
- All dependencies for defining the environment are provided in `environment/e4e_env.yaml`.
-
- ### Inference Notebook
- We provide a Jupyter notebook found in `notebooks/inference_playground.ipynb` that allows one to encode and perform several editings on real images using StyleGAN.
-
- ### Pretrained Models
- Please download the pre-trained models from the following links. Each e4e model contains the entire pSp framework architecture, including the encoder and decoder weights.
- | Path | Description
- | :--- | :----------
- |[FFHQ Inversion](https://drive.google.com/file/d/1cUv_reLE6k3604or78EranS7XzuVMWeO/view?usp=sharing) | FFHQ e4e encoder.
- |[Cars Inversion](https://drive.google.com/file/d/17faPqBce2m1AQeLCLHUVXaDfxMRU2QcV/view?usp=sharing) | Cars e4e encoder.
- |[Horse Inversion](https://drive.google.com/file/d/1TkLLnuX86B_BMo2ocYD0kX9kWh53rUVX/view?usp=sharing) | Horse e4e encoder.
- |[Church Inversion](https://drive.google.com/file/d/1-L0ZdnQLwtdy6-A_Ccgq5uNJGTqE7qBa/view?usp=sharing) | Church e4e encoder.
-
- If you wish to use one of the pretrained models for training or inference, you may do so using the flag `--checkpoint_path`.
-
- In addition, we provide various auxiliary models needed for training your own e4e model from scratch.
- | Path | Description
- | :--- | :----------
- |[FFHQ StyleGAN](https://drive.google.com/file/d/1EM87UquaoQmk17Q8d5kYIAHqu0dkYqdT/view?usp=sharing) | StyleGAN model pretrained on FFHQ taken from [rosinality](https://github.com/rosinality/stylegan2-pytorch) with 1024x1024 output resolution.
- |[IR-SE50 Model](https://drive.google.com/file/d/1KW7bjndL3QG3sxBbZxreGHigcCCpsDgn/view?usp=sharing) | Pretrained IR-SE50 model taken from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) for use in our ID loss during training.
- |[MOCOv2 Model](https://drive.google.com/file/d/18rLcNGdteX5LwT7sv_F7HWr12HpVEzVe/view?usp=sharing) | Pretrained ResNet-50 model trained using MOCOv2 for use in our similarity loss for domains other than human faces during training.
-
- By default, we assume that all auxiliary models are downloaded and saved to the directory `pretrained_models`. However, you may use your own paths by changing the necessary values in `configs/path_configs.py`.
-
- ## Training
- To train the e4e encoder, make sure the paths to the required models, as well as training and testing data, are configured in `configs/path_configs.py` and `configs/data_configs.py`.
- #### **Training the e4e Encoder**
- ```
- python scripts/train.py \
-     --dataset_type cars_encode \
-     --exp_dir new/experiment/directory \
-     --start_from_latent_avg \
-     --use_w_pool \
-     --w_discriminator_lambda 0.1 \
-     --progressive_start 20000 \
-     --id_lambda 0.5 \
-     --val_interval 10000 \
-     --max_steps 200000 \
-     --stylegan_size 512 \
-     --stylegan_weights path/to/pretrained/stylegan.pt \
-     --workers 8 \
-     --batch_size 8 \
-     --test_batch_size 4 \
-     --test_workers 4
- ```
-
- #### Training on your own dataset
- In order to train the e4e encoder on a custom dataset, perform the following adjustments:
- 1. Insert the paths to your train and test data into the `dataset_paths` variable defined in `configs/paths_config.py`:
- ```
- dataset_paths = {
-     'my_train_data': '/path/to/train/images/directory',
-     'my_test_data': '/path/to/test/images/directory'
- }
- ```
- 2. Configure a new dataset under the DATASETS variable defined in `configs/data_configs.py`:
- ```
- DATASETS = {
-     'my_data_encode': {
-         'transforms': transforms_config.EncodeTransforms,
-         'train_source_root': dataset_paths['my_train_data'],
-         'train_target_root': dataset_paths['my_train_data'],
-         'test_source_root': dataset_paths['my_test_data'],
-         'test_target_root': dataset_paths['my_test_data']
-     }
- }
- ```
- Refer to `configs/transforms_config.py` for the transformations applied to the train and test images during training.
-
- 3. Finally, run a training session with `--dataset_type my_data_encode`.
-
- ## Inference
- Having trained your model, you can use `scripts/inference.py` to apply the model on a set of images.
- For example,
- ```
- python scripts/inference.py \
-     --images_dir=/path/to/images/directory \
-     --save_dir=/path/to/saving/directory \
-     path/to/checkpoint.pt
- ```
-
- ## Latent Editing Consistency (LEC)
- As described in the paper, we suggest a new metric, Latent Editing Consistency (LEC), for evaluating the encoder's
- performance.
- We provide an example for calculating the metric over the FFHQ StyleGAN using the aging editing direction in
- `metrics/LEC.py`.
-
- To run the example:
- ```
- cd metrics
- python LEC.py \
-     --images_dir=/path/to/images/directory \
-     path/to/checkpoint.pt
- ```
-
- ## Acknowledgments
- This code borrows heavily from [pixel2style2pixel](https://github.com/eladrich/pixel2style2pixel)
-
- ## Citation
- If you use this code for your research, please cite our paper <a href="https://arxiv.org/abs/2102.02766">Designing an Encoder for StyleGAN Image Manipulation</a>:
-
- ```
- @article{tov2021designing,
-     title={Designing an Encoder for StyleGAN Image Manipulation},
-     author={Tov, Omer and Alaluf, Yuval and Nitzan, Yotam and Patashnik, Or and Cohen-Or, Daniel},
-     journal={arXiv preprint arXiv:2102.02766},
-     year={2021}
- }
- ```
 
spaces/Awesimo/jojogan/e4e/models/stylegan2/op/fused_act.py DELETED
@@ -1,85 +0,0 @@
1
- import os
2
-
3
- import torch
4
- from torch import nn
5
- from torch.autograd import Function
6
- from torch.utils.cpp_extension import load
7
-
8
- module_path = os.path.dirname(__file__)
9
- fused = load(
10
- 'fused',
11
- sources=[
12
- os.path.join(module_path, 'fused_bias_act.cpp'),
13
- os.path.join(module_path, 'fused_bias_act_kernel.cu'),
14
- ],
15
- )
16
-
17
-
18
- class FusedLeakyReLUFunctionBackward(Function):
19
- @staticmethod
20
- def forward(ctx, grad_output, out, negative_slope, scale):
21
- ctx.save_for_backward(out)
22
- ctx.negative_slope = negative_slope
23
- ctx.scale = scale
24
-
25
- empty = grad_output.new_empty(0)
26
-
27
- grad_input = fused.fused_bias_act(
28
- grad_output, empty, out, 3, 1, negative_slope, scale
29
- )
30
-
31
- dim = [0]
32
-
33
- if grad_input.ndim > 2:
34
- dim += list(range(2, grad_input.ndim))
35
-
36
- grad_bias = grad_input.sum(dim).detach()
37
-
38
- return grad_input, grad_bias
39
-
40
- @staticmethod
41
- def backward(ctx, gradgrad_input, gradgrad_bias):
42
- out, = ctx.saved_tensors
43
- gradgrad_out = fused.fused_bias_act(
44
- gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale
45
- )
46
-
47
- return gradgrad_out, None, None, None
48
-
49
-
50
- class FusedLeakyReLUFunction(Function):
51
- @staticmethod
52
- def forward(ctx, input, bias, negative_slope, scale):
53
- empty = input.new_empty(0)
54
- out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale)
55
- ctx.save_for_backward(out)
56
- ctx.negative_slope = negative_slope
57
- ctx.scale = scale
58
-
59
- return out
60
-
61
- @staticmethod
62
- def backward(ctx, grad_output):
63
- out, = ctx.saved_tensors
64
-
65
- grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply(
66
- grad_output, out, ctx.negative_slope, ctx.scale
67
- )
68
-
69
- return grad_input, grad_bias, None, None
70
-
71
-
72
- class FusedLeakyReLU(nn.Module):
73
- def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5):
74
- super().__init__()
75
-
76
- self.bias = nn.Parameter(torch.zeros(channel))
77
- self.negative_slope = negative_slope
78
- self.scale = scale
79
-
80
- def forward(self, input):
81
- return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale)
82
-
83
-
84
- def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5):
85
- return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale)
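-
- # A minimal usage sketch (our illustration, not part of the original file); it
- # assumes a CUDA device, since the extension above only builds CUDA kernels:
- #
- #   act = FusedLeakyReLU(channel=512).cuda()
- #   x = torch.randn(4, 512, 8, 8, device='cuda')
- #   y = act(x)  # leaky_relu(x + bias) * scale (~1.414), fused into one kernel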
spaces/Bart92/RVC_HF/LazyImport.py DELETED
@@ -1,13 +0,0 @@
1
- from importlib.util import find_spec, LazyLoader, module_from_spec
2
- from sys import modules
3
-
4
- def lazyload(name):
5
- if name in modules:
6
- return modules[name]
7
- else:
8
- spec = find_spec(name)
9
-         loader = LazyLoader(spec.loader)
-         spec.loader = loader  # wire the lazy loader into the spec, as in the importlib docs recipe
10
- module = module_from_spec(spec)
11
- modules[name] = module
12
- loader.exec_module(module)
13
- return module
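-
- # Usage sketch (the module name is only an example): the returned module is a
- # lazy proxy, and the real import work runs on first attribute access.
- #
- #   np = lazyload("numpy")  # near-zero cost here
- #   np.zeros(3)             # numpy is actually imported at this point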
spaces/BeeMon/dreambooth-training/app.py DELETED
@@ -1,687 +0,0 @@
1
- from subprocess import getoutput
2
- import os
3
-
4
- gpu_info = getoutput('nvidia-smi')
5
- if("A10G" in gpu_info):
6
- which_gpu = "A10G"
7
- os.system(f"pip install --no-deps xformers==0.0.16rc425")
8
- elif("T4" in gpu_info):
9
- which_gpu = "T4"
10
- os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl")
11
- else:
12
- which_gpu = "CPU"
13
-
14
- import gradio as gr
15
- from pathlib import Path
16
- import argparse
17
- import shutil
18
- from train_dreambooth import run_training
19
- from convertosd import convert
20
- from PIL import Image
21
- from slugify import slugify
22
- import requests
23
- import torch
24
- import zipfile
25
- import tarfile
26
- import urllib.parse
27
- import gc
28
- from diffusers import StableDiffusionPipeline
29
- from huggingface_hub import snapshot_download, update_repo_visibility, HfApi
30
-
31
- is_spaces = True if "SPACE_ID" in os.environ else False
32
- if(is_spaces):
33
- is_shared_ui = True if "multimodalart/dreambooth-training" in os.environ['SPACE_ID'] else False
34
- else:
35
- is_shared_ui = False
36
- is_gpu_associated = torch.cuda.is_available()
37
-
38
- os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
39
-
40
- if(is_gpu_associated):
41
- model_v1 = snapshot_download(repo_id="multimodalart/sd-fine-tunable")
42
- model_v2 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-1", ignore_patterns=["*.ckpt", "*.safetensors"])
43
- model_v2_512 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-1-base", ignore_patterns=["*.ckpt", "*.safetensors"])
44
- safety_checker = snapshot_download(repo_id="multimodalart/sd-sc")
45
- model_to_load = model_v1
46
-
47
- def swap_base_model(selected_model):
48
- if(is_gpu_associated):
49
- global model_to_load
50
- if(selected_model == "v1-5"):
51
- model_to_load = model_v1
52
- elif(selected_model == "v2-1-768"):
53
- model_to_load = model_v2
54
- else:
55
- model_to_load = model_v2_512
56
-
57
-
58
-
59
- css = '''
60
- .instruction{position: absolute; top: 0;right: 0;margin-top: 0px !important}
61
- .arrow{position: absolute;top: 0;right: -110px;margin-top: -8px !important}
62
- #component-4, #component-3, #component-10{min-height: 0}
63
- .duplicate-button img{margin: 0}
64
- '''
65
- maximum_concepts = 3
66
-
67
- def swap_text(option, base):
68
- resize_width = 768 if base == "v2-1-768" else 512
69
-     mandatory_liability = "You must have the right to do so and you are liable for the images you use, for example"
70
- if(option == "object"):
71
- instance_prompt_example = "cttoy"
72
- freeze_for = 30
73
- return [f"You are going to train `object`(s), upload 5-10 images of each object you are planning on training on from different angles/perspectives. You can use services like <a style='text-decoration: underline' target='_blank' href='https://www.birme.net/?target_width={resize_width}&target_height={resize_width}'>birme</a> for smart cropping. {mandatory_liability}:", '''<img src="file=cat-toy.png" />''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, gr.update(visible=False)]
74
- elif(option == "person"):
75
- instance_prompt_example = "julcto"
76
- freeze_for = 70
77
- #show_prior_preservation = True if base != "v2-1-768" else False
78
- show_prior_preservation=False
79
- if(show_prior_preservation):
80
- prior_preservation_box_update = gr.update(visible=show_prior_preservation)
81
- else:
82
- prior_preservation_box_update = gr.update(visible=show_prior_preservation, value=False)
83
- return [f"You are going to train a `person`(s), upload 10-20 images of each person you are planning on training on from different angles/perspectives. You can use services like <a style='text-decoration: underline' target='_blank' href='https://www.birme.net/?target_width={resize_width}&target_height={resize_width}'>birme</a> for smart cropping. {mandatory_liability}:", '''<img src="file=person.png" />''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, prior_preservation_box_update]
84
- elif(option == "style"):
85
- instance_prompt_example = "trsldamrl"
86
- freeze_for = 10
87
- return [f"You are going to train a `style`, upload 10-20 images of the style you are planning on training on. You can use services like <a style='text-decoration: underline' target='_blank' href='https://www.birme.net/?target_width={resize_width}&target_height={resize_width}'>birme</a> for smart cropping. {mandatory_liability}:", '''<img src="file=trsl_style.png" />''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}", freeze_for, gr.update(visible=False)]
88
-
89
- def count_files(*inputs):
90
- file_counter = 0
91
- concept_counter = 0
92
- for i, input in enumerate(inputs):
93
- if(i < maximum_concepts):
94
- files = inputs[i]
95
- if(files):
96
- concept_counter+=1
97
- file_counter+=len(files)
98
- uses_custom = inputs[-1]
99
- type_of_thing = inputs[-4]
100
- selected_model = inputs[-5]
101
- experimental_faces = inputs[-6]
102
- if(uses_custom):
103
- Training_Steps = int(inputs[-3])
104
- else:
105
- Training_Steps = file_counter*150
106
- if(type_of_thing == "person" and Training_Steps > 2400):
107
- Training_Steps = 2400 #Avoid overfitting on person faces
108
- if(is_spaces):
109
- if(selected_model == "v1-5"):
110
- its = 1.1 if which_gpu == "T4" else 1.8
111
- if(experimental_faces):
112
- its = 1
113
- elif(selected_model == "v2-1-512"):
114
- its = 0.8 if which_gpu == "T4" else 1.5
115
- if(experimental_faces):
116
- its = 0.7
117
- elif(selected_model == "v2-1-768"):
118
- its = 0.48 if which_gpu == "T4" else 0.85
119
-
120
- gpu_price = 0.60 if which_gpu == "T4" else 1.10
121
- summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps. The training should take around {round(Training_Steps/its, 2)} seconds, or {round((Training_Steps/its)/60, 2)} minutes.
122
- The setup, compression and uploading the model can take up to 20 minutes.<br>As the {which_gpu}-Small GPU costs US${gpu_price} for 1h, <span style="font-size: 120%"><b>the estimated cost for this training is below US${round((((Training_Steps/its)/3600)+0.3+0.1)*gpu_price, 2)}.</b></span><br><br>
123
- If you check the box below the GPU attribution will automatically removed after training is done and the model is uploaded. If not, don't forget to come back here and swap the hardware back to CPU.<br><br>'''
124
- else:
125
- summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps.<br><br>'''
126
-
127
- return([gr.update(visible=True), gr.update(visible=True, value=summary_sentence)])
128
-
129
- def update_steps(*files_list):
130
- file_counter = 0
131
- for i, files in enumerate(files_list):
132
- if(files):
133
- file_counter+=len(files)
134
- return(gr.update(value=file_counter*200))
135
-
136
- def visualise_progress_bar():
137
- return gr.update(visible=True)
138
-
139
- def pad_image(image):
140
- w, h = image.size
141
- if w == h:
142
- return image
143
- elif w > h:
144
- new_image = Image.new(image.mode, (w, w), (0, 0, 0))
145
- new_image.paste(image, (0, (w - h) // 2))
146
- return new_image
147
- else:
148
- new_image = Image.new(image.mode, (h, h), (0, 0, 0))
149
- new_image.paste(image, ((h - w) // 2, 0))
150
- return new_image
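-
- # Note on pad_image above: it letterboxes non-square uploads onto a black square
- # canvas instead of stretching them, so the later resize to the training
- # resolution preserves the subject's aspect ratio.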
151
-
152
- def validate_model_upload(hf_token, model_name):
153
- if(hf_token != ''):
154
- api = HfApi()
155
- try:
156
- _ = api.whoami(hf_token)
157
- except:
158
- raise gr.Error("You have inserted an invalid Hugging Face token")
159
- try:
160
- if(is_spaces):
161
- update_repo_visibility(repo_id=os.environ['SPACE_ID'], private=True, token=hf_token, repo_type="space")
162
- except:
163
- raise gr.Error("Oops, you created a Hugging Face token with read permissions only. You need one with write permissions")
164
- else:
165
- raise gr.Error("Please insert a Hugging Face Token (make sure to create it with write permissions)")
166
- if(model_name == ""):
167
- raise gr.Error("Please fill in your model's name")
168
-
169
- def swap_hardware(hf_token, hardware="cpu-basic"):
170
- hardware_url = f"https://huggingface.co/spaces/{os.environ['SPACE_ID']}/hardware"
171
- headers = { "authorization" : f"Bearer {hf_token}"}
172
- body = {'flavor': hardware}
173
- requests.post(hardware_url, json = body, headers=headers)
174
-
175
- def swap_sleep_time(hf_token,sleep_time):
176
- sleep_time_url = f"https://huggingface.co/api/spaces/{os.environ['SPACE_ID']}/sleeptime"
177
- headers = { "authorization" : f"Bearer {hf_token}"}
178
- body = {'seconds':sleep_time}
179
- requests.post(sleep_time_url,json=body,headers=headers)
180
-
181
- def get_sleep_time(hf_token):
182
- sleep_time_url = f"https://huggingface.co/api/spaces/{os.environ['SPACE_ID']}"
183
- headers = { "authorization" : f"Bearer {hf_token}"}
184
- response = requests.get(sleep_time_url,headers=headers)
185
- try:
186
- gcTimeout = response.json()['runtime']['gcTimeout']
187
- except:
188
- gcTimeout = None
189
- return gcTimeout
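-
- # A Space whose runtime reports no gcTimeout never auto-sleeps. train() reads this
- # value, disables sleeping (-1) for the duration of training, and restores the
- # original setting after a normal (non-automated) run.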
190
-
191
- def write_to_community(title, description,hf_token):
192
- from huggingface_hub import HfApi
193
- api = HfApi()
194
- api.create_discussion(repo_id=os.environ['SPACE_ID'], title=title, description=description,repo_type="space", token=hf_token)
195
-
196
- def train(progress=gr.Progress(track_tqdm=True), *inputs):
197
- which_model = inputs[-10]
198
- if(which_model == ""):
199
- raise gr.Error("You forgot to select a base model to use")
200
-
201
- if is_shared_ui:
202
- raise gr.Error("This Space only works in duplicated instances")
203
- if not is_gpu_associated:
204
- raise gr.Error("Please associate a T4 or A10G GPU for this Space")
205
- hf_token = inputs[-5]
206
- model_name = inputs[-7]
207
- if(is_spaces):
208
- sleep_time = get_sleep_time(hf_token)
209
- if sleep_time:
210
- swap_sleep_time(hf_token, -1)
211
- remove_attribution_after = inputs[-6]
212
- else:
213
- remove_attribution_after = False
214
-
215
- if(remove_attribution_after):
216
- validate_model_upload(hf_token, model_name)
217
-
218
- torch.cuda.empty_cache()
219
- if 'pipe' in globals():
220
- global pipe, pipe_is_set
221
- del pipe
222
- pipe_is_set = False
223
- gc.collect()
224
-
225
- if os.path.exists("output_model"): shutil.rmtree('output_model')
226
- if os.path.exists("instance_images"): shutil.rmtree('instance_images')
227
- if os.path.exists("diffusers_model.tar"): os.remove("diffusers_model.tar")
228
- if os.path.exists("model.ckpt"): os.remove("model.ckpt")
229
- if os.path.exists("hastrained.success"): os.remove("hastrained.success")
230
- file_counter = 0
231
- resolution = 512 if which_model != "v2-1-768" else 768
232
- for i, input in enumerate(inputs):
233
- if(i < maximum_concepts-1):
234
- if(input):
235
- os.makedirs('instance_images',exist_ok=True)
236
- files = inputs[i+(maximum_concepts*2)]
237
- prompt = inputs[i+maximum_concepts]
238
- if(prompt == "" or prompt == None):
239
- raise gr.Error("You forgot to define your concept prompt")
240
- for j, file_temp in enumerate(files):
241
- file = Image.open(file_temp.name)
242
- image = pad_image(file)
243
- image = image.resize((resolution, resolution))
244
- extension = file_temp.name.split(".")[1]
245
- image = image.convert('RGB')
246
- image.save(f'instance_images/{prompt}_({j+1}).jpg', format="JPEG", quality = 100)
247
- file_counter += 1
248
-
249
- os.makedirs('output_model',exist_ok=True)
250
- uses_custom = inputs[-1]
251
- type_of_thing = inputs[-4]
252
- experimental_face_improvement = inputs[-9]
253
-
254
- if(uses_custom):
255
- Training_Steps = int(inputs[-3])
256
- Train_text_encoder_for = int(inputs[-2])
257
- else:
258
- if(type_of_thing == "object"):
259
- Train_text_encoder_for=30
260
-
261
- elif(type_of_thing == "style"):
262
- Train_text_encoder_for=15
263
-
264
- elif(type_of_thing == "person"):
265
- Train_text_encoder_for=70
266
-
267
- Training_Steps = file_counter*150
268
- if(type_of_thing == "person" and Training_Steps > 2600):
269
- Training_Steps = 2600 #Avoid overfitting on people's faces
270
- stptxt = int((Training_Steps*Train_text_encoder_for)/100)
271
- gradient_checkpointing = True if (experimental_face_improvement or which_model != "v1-5") else False
272
- cache_latents = True if which_model != "v1-5" else False
273
- if (type_of_thing == "object" or type_of_thing == "style" or (type_of_thing == "person" and not experimental_face_improvement)):
274
- args_general = argparse.Namespace(
275
- image_captions_filename = True,
276
- train_text_encoder = True if stptxt > 0 else False,
277
- stop_text_encoder_training = stptxt,
278
- save_n_steps = 0,
279
- pretrained_model_name_or_path = model_to_load,
280
- instance_data_dir="instance_images",
281
- class_data_dir=None,
282
- output_dir="output_model",
283
- instance_prompt="",
284
- seed=42,
285
- resolution=resolution,
286
- mixed_precision="fp16",
287
- train_batch_size=1,
288
- gradient_accumulation_steps=1,
289
- use_8bit_adam=True,
290
- learning_rate=2e-6,
291
- lr_scheduler="polynomial",
292
- lr_warmup_steps = 0,
293
- max_train_steps=Training_Steps,
294
- gradient_checkpointing=gradient_checkpointing,
295
- cache_latents=cache_latents,
296
- )
297
- print("Starting single training...")
298
- lock_file = open("intraining.lock", "w")
299
- lock_file.close()
300
- try:
301
- run_training(args_general)
302
- except Exception as e:
303
- if(is_spaces):
304
-             title="There was an error during your training"
305
- description=f'''
306
-             Unfortunately there was an error while training your {model_name} model.
307
- Please check it out below. Feel free to report this issue to [Dreambooth Training](https://huggingface.co/spaces/multimodalart/dreambooth-training):
308
- ```
309
- {str(e)}
310
- ```
311
- '''
312
- swap_hardware(hf_token, "cpu-basic")
313
- write_to_community(title,description,hf_token)
314
-
315
-
316
- gc.collect()
317
- torch.cuda.empty_cache()
318
- if(which_model == "v1-5"):
319
- print("Adding Safety Checker to the model...")
320
- shutil.copytree(f"{safety_checker}/feature_extractor", "output_model/feature_extractor", dirs_exist_ok=True)
321
- shutil.copytree(f"{safety_checker}/safety_checker", "output_model/safety_checker", dirs_exist_ok=True)
322
- shutil.copy(f"model_index.json", "output_model/model_index.json")
323
-
324
- if(not remove_attribution_after):
325
- swap_sleep_time(hf_token, sleep_time)
326
- print("Archiving model file...")
327
- with tarfile.open("diffusers_model.tar", "w") as tar:
328
- tar.add("output_model", arcname=os.path.basename("output_model"))
329
- if os.path.exists("intraining.lock"): os.remove("intraining.lock")
330
- trained_file = open("hastrained.success", "w")
331
- trained_file.close()
332
- print("Training completed!")
333
- return [
334
- gr.update(visible=False), #progress_bar
335
- gr.update(visible=True, value=["diffusers_model.tar"]), #result
336
- gr.update(visible=True), #try_your_model
337
- gr.update(visible=True), #push_to_hub
338
- gr.update(visible=True), #convert_button
339
- gr.update(visible=False), #training_ongoing
340
- gr.update(visible=True) #completed_training
341
- ]
342
- else:
343
- where_to_upload = inputs[-8]
344
- push(model_name, where_to_upload, hf_token, which_model, True)
345
- swap_hardware(hf_token, "cpu-basic")
346
-
347
- pipe_is_set = False
348
- def generate(prompt, steps):
349
- torch.cuda.empty_cache()
350
- from diffusers import StableDiffusionPipeline
351
- global pipe_is_set
352
- if(not pipe_is_set):
353
- global pipe
354
- pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float16)
355
- pipe = pipe.to("cuda")
356
- pipe_is_set = True
357
-
358
- image = pipe(prompt, num_inference_steps=steps).images[0]
359
- return(image)
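-
- # Note on generate above: the pipeline is cached in module globals after the first
- # call, so repeated generations reuse the loaded fp16 weights instead of
- # re-reading ./output_model from disk.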
360
-
361
- def push(model_name, where_to_upload, hf_token, which_model, comes_from_automated=False):
362
- validate_model_upload(hf_token, model_name)
363
- if(not os.path.exists("model.ckpt")):
364
- convert("output_model", "model.ckpt")
365
- from huggingface_hub import HfApi, HfFolder, CommitOperationAdd
366
- from huggingface_hub import create_repo
367
- model_name_slug = slugify(model_name)
368
- api = HfApi()
369
- your_username = api.whoami(token=hf_token)["name"]
370
- if(where_to_upload == "My personal profile"):
371
- model_id = f"{your_username}/{model_name_slug}"
372
- else:
373
- model_id = f"sd-dreambooth-library/{model_name_slug}"
374
-     headers = {"Authorization" : f"Bearer {hf_token}", "Content-Type": "application/json"}
375
- response = requests.post("https://huggingface.co/organizations/sd-dreambooth-library/share/SSeOwppVCscfTEzFGQaqpfcjukVeNrKNHX", headers=headers)
376
-
377
- print(f"Starting to upload the model {model_id}...")
378
- images_upload = os.listdir("instance_images")
379
- image_string = ""
380
- instance_prompt_list = []
381
- previous_instance_prompt = ''
382
- for i, image in enumerate(images_upload):
383
- instance_prompt = image.split("_")[0]
384
- if(instance_prompt != previous_instance_prompt):
385
- title_instance_prompt_string = instance_prompt
386
- instance_prompt_list.append(instance_prompt)
387
- else:
388
- title_instance_prompt_string = ''
389
- previous_instance_prompt = instance_prompt
390
-         image_string = f'''{title_instance_prompt_string} {"(use that in your prompt)" if title_instance_prompt_string != "" else ""}
391
- {image_string}![{instance_prompt} {i}](https://huggingface.co/{model_id}/resolve/main/concept_images/{urllib.parse.quote(image)})'''
392
- readme_text = f'''---
393
- license: creativeml-openrail-m
394
- tags:
395
- - text-to-image
396
- widget:
397
- - text: {instance_prompt_list[0]}
398
- ---
399
- ### {model_name} Dreambooth model trained by {api.whoami(token=hf_token)["name"]} with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the {which_model} base model
400
-
401
- You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
402
-
403
- Sample pictures of:
404
- {image_string}
405
- '''
406
- #Save the readme to a file
407
- readme_file = open("model.README.md", "w")
408
- readme_file.write(readme_text)
409
- readme_file.close()
410
- #Save the token identifier to a file
411
- text_file = open("token_identifier.txt", "w")
412
- text_file.write(', '.join(instance_prompt_list))
413
- text_file.close()
414
- try:
415
- create_repo(model_id,private=True, token=hf_token)
416
- except:
417
- import time
418
- epoch_time = str(int(time.time()))
419
- create_repo(f"{model_id}-{epoch_time}", private=True,token=hf_token)
420
- operations = [
421
- CommitOperationAdd(path_in_repo="token_identifier.txt", path_or_fileobj="token_identifier.txt"),
422
- CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="model.README.md"),
423
- CommitOperationAdd(path_in_repo=f"model.ckpt",path_or_fileobj="model.ckpt")
424
- ]
425
- api.create_commit(
426
- repo_id=model_id,
427
- operations=operations,
428
- commit_message=f"Upload the model {model_name}",
429
- token=hf_token
430
- )
431
- api.upload_folder(
432
- folder_path="output_model",
433
- repo_id=model_id,
434
- token=hf_token
435
- )
436
- api.upload_folder(
437
- folder_path="instance_images",
438
- path_in_repo="concept_images",
439
- repo_id=model_id,
440
- token=hf_token
441
- )
442
- if is_spaces:
443
- if(not comes_from_automated):
444
- extra_message = "Don't forget to remove the GPU attribution after you play with it."
445
- else:
446
- extra_message = "The GPU has been removed automatically as requested, and you can try the model via the model page"
447
-         title=f"Your model {model_name} has finished training on the Dreambooth Training Space!"
448
- description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}"
449
- write_to_community(title, description, hf_token)
450
- #api.create_discussion(repo_id=os.environ['SPACE_ID'], title=f"Your model {model_name} has finished trained from the Dreambooth Train Spaces!", description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}",repo_type="space", token=hf_token)
451
- print("Model uploaded successfully!")
452
- return [gr.update(visible=True, value=f"Successfully uploaded your model. Access it [here](https://huggingface.co/{model_id})"), gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])]
453
-
454
- def convert_to_ckpt():
455
- if 'pipe' in globals():
456
- global pipe, pipe_is_set
457
- del pipe
458
- pipe_is_set = False
459
- gc.collect()
460
- convert("output_model", "model.ckpt")
461
- return gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])
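-
- # convert_to_ckpt drops any cached inference pipeline first so its weights can be
- # garbage-collected before the diffusers-to-.ckpt conversion allocates memory.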
462
-
463
- def check_status(top_description):
464
- if os.path.exists("hastrained.success"):
465
- if is_spaces:
466
- update_top_tag = gr.update(value=f'''
467
- <div class="gr-prose" style="max-width: 80%">
468
- <h2>Your model has finished training ✅</h2>
469
-             <p>Yay, congratulations on training your model. Scroll down to play with it and save it (either by downloading it or uploading it to the Hugging Face Hub). Once you are done and your model is safe, if you don't want to train a new one, go to the <a href="https://huggingface.co/spaces/{os.environ['SPACE_ID']}" target="_blank">settings page</a> and downgrade your Space to a CPU Basic</p>
470
- </div>
471
- ''')
472
- else:
473
- update_top_tag = gr.update(value=f'''
474
- <div class="gr-prose" style="max-width: 80%">
475
- <h2>Your model has finished training ✅</h2>
476
-             <p>Yay, congratulations on training your model. Scroll down to play with it and save it (either by downloading it or uploading it to the Hugging Face Hub).</p>
477
- </div>
478
- ''')
479
- show_outputs = True
480
- elif os.path.exists("intraining.lock"):
481
- update_top_tag = gr.update(value='''
482
- <div class="gr-prose" style="max-width: 80%">
483
- <h2>Don't worry, your model is still training! ⌛</h2>
484
-         <p>You closed the tab while your model was training, but it's all good! It is still training right now. You can click the "Open logs" button above to check the training status. Once training is done, reload this tab to interact with your model.</p>
485
- </div>
486
- ''')
487
- show_outputs = False
488
- else:
489
- update_top_tag = gr.update(value=top_description)
490
- show_outputs = False
491
- if os.path.exists("diffusers_model.tar"):
492
- update_files_tag = gr.update(visible=show_outputs, value=["diffusers_model.tar"])
493
- else:
494
- update_files_tag = gr.update(visible=show_outputs)
495
- return [
496
- update_top_tag, #top_description
497
- gr.update(visible=show_outputs), #try_your_model
498
- gr.update(visible=show_outputs), #push_to_hub
499
- update_files_tag, #result
500
- gr.update(visible=show_outputs), #convert_button
501
- ]
502
-
503
- def checkbox_swap(checkbox):
504
- return [gr.update(visible=checkbox), gr.update(visible=checkbox), gr.update(visible=checkbox), gr.update(visible=checkbox)]
505
-
506
- with gr.Blocks(css=css) as demo:
507
- with gr.Box():
508
- if is_shared_ui:
509
- top_description = gr.HTML(f'''
510
- <div class="gr-prose" style="max-width: 80%">
511
- <h2>Attention - This Space doesn't work in this shared UI</h2>
512
- <p>For it to work, you can either run locally or duplicate the Space and run it on your own profile using a (paid) private T4-small or A10G-small GPU for training. A T4 costs US$0.60/h, so it should cost < US$1 to train most models using default settings with it!&nbsp;&nbsp;<a class="duplicate-button" style="display:inline-block" target="_blank" href="https://huggingface.co/spaces/{os.environ['SPACE_ID']}?duplicate=true"><img src="https://img.shields.io/badge/-Duplicate%20Space-blue?labelColor=white&style=flat&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAAXNSR0IArs4c6QAAAP5JREFUOE+lk7FqAkEURY+ltunEgFXS2sZGIbXfEPdLlnxJyDdYB62sbbUKpLbVNhyYFzbrrA74YJlh9r079973psed0cvUD4A+4HoCjsA85X0Dfn/RBLBgBDxnQPfAEJgBY+A9gALA4tcbamSzS4xq4FOQAJgCDwV2CPKV8tZAJcAjMMkUe1vX+U+SMhfAJEHasQIWmXNN3abzDwHUrgcRGmYcgKe0bxrblHEB4E/pndMazNpSZGcsZdBlYJcEL9Afo75molJyM2FxmPgmgPqlWNLGfwZGG6UiyEvLzHYDmoPkDDiNm9JR9uboiONcBXrpY1qmgs21x1QwyZcpvxt9NS09PlsPAAAAAElFTkSuQmCC&logoWidth=14" alt="Duplicate Space"></a></p>
513
- <img class="instruction" src="file=duplicate.png">
514
- <img class="arrow" src="file=arrow.png" />
515
- </div>
516
- ''')
517
- elif(is_spaces):
518
- if(is_gpu_associated):
519
- top_description = gr.HTML(f'''
520
- <div class="gr-prose" style="max-width: 80%">
521
- <h2>You have successfully associated a {which_gpu} GPU to the Dreambooth Training Space 🎉</h2>
522
-             <p>You can now train your model! You will be billed by the minute from when you activate the GPU until it is turned off.</p>
523
- </div>
524
- ''')
525
- else:
526
- top_description = gr.HTML(f'''
527
- <div class="gr-prose" style="max-width: 80%">
528
- <h2>You have successfully duplicated the Dreambooth Training Space 🎉</h2>
529
-             <p>There's only one step left before you can train your model: <a href="https://huggingface.co/spaces/{os.environ['SPACE_ID']}/settings" style="text-decoration: underline" target="_blank">attribute a <b>T4-small or A10G-small GPU</b> to it (via the Settings tab)</a> and run the training below. You will be billed by the minute from when you activate the GPU until it is turned off.</p>
530
- </div>
531
- ''')
532
- else:
533
- top_description = gr.HTML(f'''
534
- <div class="gr-prose" style="max-width: 80%">
535
- <h2>You have successfully cloned the Dreambooth Training Space locally 🎉</h2>
536
-         <p>Do a <code>pip install -r requirements-local.txt</code></p>
537
- </div>
538
- ''')
539
- gr.Markdown("# Dreambooth Training UI 💭")
540
- gr.Markdown("Customize Stable Diffusion v1 or v2 (ⁿᵉʷ!) by giving it a few examples of a concept. Based on the [🧨 diffusers](https://github.com/huggingface/diffusers) implementation, additional techniques from [TheLastBen](https://github.com/TheLastBen/diffusers) and [ShivamShrirao](https://github.com/ShivamShrirao/diffusers)")
541
-
542
- with gr.Row() as what_are_you_training:
543
- type_of_thing = gr.Dropdown(label="What would you like to train?", choices=["object", "person", "style"], value="object", interactive=True)
544
- with gr.Column():
545
- base_model_to_use = gr.Dropdown(label="Which base model would you like to use?", choices=["v1-5", "v2-1-512", "v2-1-768"], value="v1-5", interactive=True)
546
-
547
- #Very hacky approach to emulate dynamically created Gradio components
548
- with gr.Row() as upload_your_concept:
549
- with gr.Column():
550
-             thing_description = gr.Markdown("You are going to train an `object`, please upload 5-10 images of the object you are planning on training on from different angles/perspectives. You must have the right to do so and you are liable for the images you use, for example")
551
- thing_experimental = gr.Checkbox(label="Improve faces (prior preservation) - can take longer training but can improve faces", visible=False, value=False)
552
- thing_image_example = gr.HTML('''<img src="file=cat-toy.png" />''')
553
- things_naming = gr.Markdown("You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `cttoy` here). Images will be automatically cropped to 512x512.")
554
-
555
- with gr.Column():
556
- file_collection = []
557
- concept_collection = []
558
- buttons_collection = []
559
- delete_collection = []
560
- is_visible = []
561
-
562
- row = [None] * maximum_concepts
563
- for x in range(maximum_concepts):
564
- ordinal = lambda n: "%d%s" % (n, "tsnrhtdd"[(n // 10 % 10 != 1) * (n % 10 < 4) * n % 10::4])
565
- if(x == 0):
566
- visible = True
567
- is_visible.append(gr.State(value=True))
568
- else:
569
- visible = False
570
- is_visible.append(gr.State(value=False))
571
-
572
- file_collection.append(gr.File(file_types=["image"], label=f'''Upload the images for your {ordinal(x+1) if (x>0) else ""} concept''', file_count="multiple", interactive=True, visible=visible))
573
- with gr.Column(visible=visible) as row[x]:
574
- concept_collection.append(gr.Textbox(label=f'''{ordinal(x+1) if (x>0) else ""} concept prompt - use a unique, made up word to avoid collisions'''))
575
- with gr.Row():
576
- if(x < maximum_concepts-1):
577
- buttons_collection.append(gr.Button(value="Add +1 concept", visible=visible))
578
- if(x > 0):
579
- delete_collection.append(gr.Button(value=f"Delete {ordinal(x+1)} concept"))
580
-
581
- counter_add = 1
582
- for button in buttons_collection:
583
- if(counter_add < len(buttons_collection)):
584
- button.click(lambda:
585
- [gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), True, None],
586
- None,
587
- [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], buttons_collection[counter_add], is_visible[counter_add], file_collection[counter_add]], queue=False)
588
- else:
589
- button.click(lambda:[gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), True], None, [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], is_visible[counter_add]], queue=False)
590
- counter_add += 1
591
-
592
- counter_delete = 1
593
- for delete_button in delete_collection:
594
- if(counter_delete < len(delete_collection)+1):
595
- delete_button.click(lambda:[gr.update(visible=False),gr.update(visible=False), gr.update(visible=True), False], None, [file_collection[counter_delete], row[counter_delete], buttons_collection[counter_delete-1], is_visible[counter_delete]], queue=False)
596
- counter_delete += 1
597
-
598
- with gr.Accordion("Custom Settings", open=False):
599
- swap_auto_calculated = gr.Checkbox(label="Use custom settings")
600
-         gr.Markdown("If not checked, the % of the text-encoder that is trained will be tuned automatically according to whether you are training an `object`, `person` or `style`. The text-encoder is frozen after 15% of the steps for a style, 30% of the steps for an object and 70% of the steps for a person. The number of steps varies between 1400 and 2400 depending on how many images are uploaded. If you see too many artifacts in your output, it may have overfit and you need fewer steps. If your results aren't really what you wanted, it may be underfitting and you need more steps.")
601
- steps = gr.Number(label="How many steps", value=2400)
602
- perc_txt_encoder = gr.Number(label="Percentage of the training steps the text-encoder should be trained as well", value=30)
603
-
604
- with gr.Box(visible=False) as training_summary:
605
- training_summary_text = gr.HTML("", visible=True, label="Training Summary")
606
- is_advanced_visible = True if is_spaces else False
607
- training_summary_checkbox = gr.Checkbox(label="Automatically remove paid GPU attribution and upload model to the Hugging Face Hub after training", value=True, visible=is_advanced_visible)
608
- training_summary_model_name = gr.Textbox(label="Name of your model", visible=True)
609
- training_summary_where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], value="My personal profile", label="Upload to", visible=True)
610
- training_summary_token_message = gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.", visible=True)
611
- training_summary_token = gr.Textbox(label="Hugging Face Write Token", type="password", visible=True)
612
-
613
- train_btn = gr.Button("Start Training")
614
- progress_bar = gr.Textbox(visible=False)
615
- if(is_shared_ui):
616
- training_ongoing = gr.Markdown("## This Space only works in duplicated instances. Please duplicate it and try again!", visible=False)
617
- elif(not is_gpu_associated):
618
- training_ongoing = gr.Markdown("## Oops, you haven't associated your T4 or A10G GPU to this Space. Visit the Settings tab, associate and try again.", visible=False)
619
- else:
620
-         training_ongoing = gr.Markdown("## Training is ongoing ⌛... You can close this tab if you like or just wait. If you did not check the `Remove GPU after training` option, you can come back here to try your model and upload it after training. Don't forget to remove the GPU attribution after you are done.", visible=False)
621
-
622
-
623
- #Post-training UI
624
- completed_training = gr.Markdown('''# ✅ Training completed.
625
- ### Don't forget to remove the GPU attribution after you are done trying and uploading your model''', visible=False)
626
-
627
- with gr.Row():
628
- with gr.Box(visible=False) as try_your_model:
629
- gr.Markdown("## Try your model")
630
- prompt = gr.Textbox(label="Type your prompt")
631
- result_image = gr.Image()
632
- inference_steps = gr.Slider(minimum=1, maximum=150, value=50, step=1)
633
- generate_button = gr.Button("Generate Image")
634
-
635
- with gr.Box(visible=False) as push_to_hub:
636
- gr.Markdown("## Push to Hugging Face Hub")
637
- model_name = gr.Textbox(label="Name of your model", placeholder="Tarsila do Amaral Style")
638
- where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to")
639
- gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.")
640
- hf_token = gr.Textbox(label="Hugging Face Write Token", type="password")
641
-
642
- push_button = gr.Button("Push to the Hub")
643
-
644
- result = gr.File(label="Download the uploaded models in the diffusers format", visible=True)
645
- success_message_upload = gr.Markdown(visible=False)
646
- convert_button = gr.Button("Convert to CKPT", visible=False)
647
-
648
- #Swap the examples and the % of text encoder trained depending if it is an object, person or style
649
- type_of_thing.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False)
650
-
651
- #Swap the base model
652
-
653
- base_model_to_use.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False)
654
- #base_model_to_use.change(fn=visualise_progress_bar, inputs=[], outputs=progress_bar)
655
- base_model_to_use.change(fn=swap_base_model, inputs=base_model_to_use, outputs=[])
656
- #Update the summary box below the UI according to how many images are uploaded and whether users are using custom settings or not
657
- for file in file_collection:
658
- #file.change(fn=update_steps,inputs=file_collection, outputs=steps)
659
- file.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
660
-
661
- thing_experimental.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
662
- base_model_to_use.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
663
- steps.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
664
- perc_txt_encoder.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
665
-
666
- #Give more options if the user wants to finish everything after training
667
- if(is_spaces):
668
- training_summary_checkbox.change(fn=checkbox_swap, inputs=training_summary_checkbox, outputs=[training_summary_token_message, training_summary_token, training_summary_model_name, training_summary_where_to_upload],queue=False, show_progress=False)
669
- #Add a message for while it is in training
670
-
671
- #train_btn.click(lambda:gr.update(visible=True), inputs=None, outputs=training_ongoing)
672
-
673
- #The main train function
674
- train_btn.click(lambda:gr.update(visible=True), inputs=[], outputs=progress_bar)
675
- train_btn.click(fn=train, inputs=is_visible+concept_collection+file_collection+[base_model_to_use]+[thing_experimental]+[training_summary_where_to_upload]+[training_summary_model_name]+[training_summary_checkbox]+[training_summary_token]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[progress_bar, result, try_your_model, push_to_hub, convert_button, training_ongoing, completed_training], queue=False)
676
-
677
- #Button to generate an image from your trained model after training
678
- generate_button.click(fn=generate, inputs=[prompt, inference_steps], outputs=result_image, queue=False)
679
- #Button to push the model to the Hugging Face Hub
680
- push_button.click(fn=push, inputs=[model_name, where_to_upload, hf_token, base_model_to_use], outputs=[success_message_upload, result], queue=False)
681
- #Button to convert the model to ckpt format
682
- convert_button.click(fn=convert_to_ckpt, inputs=[], outputs=result, queue=False)
683
-
684
- #Checks if the training is running
685
- demo.load(fn=check_status, inputs=top_description, outputs=[top_description, try_your_model, push_to_hub, result, convert_button], queue=False, show_progress=False)
686
-
687
- demo.queue(default_enabled=False).launch(debug=True)
spaces/Benson/text-generation/Examples/Barra De Hookah Mp3 Descargar.md DELETED
@@ -1,110 +0,0 @@
- <br />
- <h1>Hookah Bar MP3 Download: How to Enjoy the Popular Song Online</h1>
- <p>If you are a fan of Bollywood music, you may have heard of the song <strong>Hookah Bar</strong> from the movie <em>Khiladi 786</em>. This song is a catchy, upbeat dance number that has become a cult hit among young people. But do you know how to download and enjoy this song online? In this article, we will tell you everything you need to know about downloading Hookah Bar MP3, including its origin, meaning, popularity, legal issues, sources, platforms, devices, settings and occasions. So, let's get started!</p>
- <h2>What is Hookah Bar?</h2>
- <p>Hookah Bar is a Hindi song that was released in 2012 as part of the soundtrack of the action comedy film <em>Khiladi 786</em>, starring Akshay Kumar and Asin. The song features Kumar and Asin dancing in a hookah bar, which is a place where people smoke flavored tobacco from a water pipe called a hookah. The song has a catchy chorus that goes like this:</p>
- <h2>hookah bar mp3 download</h2><br /><p><b><b>DOWNLOAD</b> &harr; <a href="https://bltlly.com/2v6JmS">https://bltlly.com/2v6JmS</a></b></p><br /><br />
- <blockquote>
- <p>Tera pyar pyar pyar hookah bar<br>
- Tera pyar pyar pyar hookah bar<br>
- Tera pyar pyar pyar hookah bar<br>
- Tera pyar pyar pyar hookah bar</p>
- </blockquote>
- <p>The lyrics roughly translate to:</p>
- <blockquote>
- <p>Your love love love is like a hookah bar<br>
- Your love love love is like a hookah bar<br>
- Your love love love is like a hookah bar<br>
- Your love love love is like a hookah bar</p>
- </blockquote>
- <h3>The origin and meaning of the song</h3>
- <p>The song was composed by Himesh Reshammiya, who is also one of its singers along with Vineet Singh and Aman Trikha. Reshammiya also wrote the lyrics, which were inspired by his own experience of visiting a hookah bar in Dubai. He said he wanted to create a song that would appeal to young people and make them dance. He also said he used the hookah bar as a metaphor for love, since both are addictive and intoxicating.</p>
- <h3>The singers and composers of the song</h3>
-
- <p>Vineet Singh is an Indian playback singer who rose to fame after winning a singing reality show called <em>Jo Jeeta Wohi Superstar</em> in 2008. He has sung songs for films such as <em>Murder 3</em>, <em>Jai Ho</em>, <em>Boss</em> and <em>Kis Kisko Pyaar Karoon</em>. He is also known for his collaborations with Reshammiya on songs such as <em>Hai Apna Dil Toh Awara</em>, <em>Lonely</em> and <em>Balma</em>.</p> <p>Aman Trikha is another Indian singer who has sung songs for films such as <em>OMG - Oh My God!</em>, <em>Prem Ratan Dhan Payo</em>, <em>Veer-Zaara</em> and <em>Shivaay</em>. He has also worked with Reshammiya on songs such as <em>Go Go Govinda</em>, <em>Po Po</em> and <em>Hookah Bar</em>. He is known for his versatile, powerful voice that can sing in different genres and languages.</p>
- <h3>The popularity and reception of the song</h3>
- <p>Hookah Bar was a huge hit with audiences and critics alike. It topped the charts of several music platforms and radio stations in India and abroad. It also won several awards and nominations, such as the Mirchi Music Award for Song of the Year, the Stardust Award for Best Playback Singer (male) and the Zee Cine Award for Best Music Director. The song was praised for its catchy melody, energetic vocals and lively choreography. It also became a popular choice for parties, weddings and festivals, where people would dance to its beats.</p>
- <h2>How to download Hookah Bar MP3 online?</h2>
- <p>If you like Hookah Bar and want to listen to it anytime, anywhere, you may want to download it as an MP3 file online. MP3 is a digital audio format that compresses sound data without losing much quality. MP3 files are easy to store, transfer and play on various devices and platforms. But how can you download Hookah Bar MP3 online? Here are some things you should consider before doing so.</p>
- <h3>The benefits of downloading MP3 files</h3>
-
- <ul>
- <li>You can save money by not paying subscription fees or for physical copies.</li>
- <li>You can save time by not waiting for buffering or loading.</li>
- <li>You can save space by not storing bulky CDs or DVDs.</li>
- <li>You can have more control over your music collection by organizing, editing and deleting files as you wish.</li>
- <li>You can have more flexibility over music playback by choosing your preferred device, app, settings and features.</li>
- </ul>
- <h3>The legal and ethical issues of downloading MP3 files</h3>
- <p>Downloading MP3 files is not always legal or ethical. Some of the issues you should keep in mind are:</p>
- <p></p>
- <ul>
- <li>You could be violating copyright laws by downloading music without the permission of its owners or creators.</li>
- <li>You could be harming the music industry by depriving artists and producers of their rightful income and recognition.</li>
- <li>You could be exposing yourself to malware or viruses by downloading from untrustworthy or illegal sources.</li>
- <li>You could be compromising your privacy or security by sharing your personal or financial information with unknown or fraudulent platforms.</li>
- </ul>
- <p>Therefore, you should always download MP3 files from legal and ethical sources that respect the rights and interests of both consumers and creators.</p>
- <h3>The best sources and platforms to download Hookah Bar MP3 online</h3>
- <p>There are many sources and platforms that offer Hookah Bar MP3 downloads online. Some of them are:</p>
- <table>
- <tr><th>Name</th><th>Type</th><th>Features</th></tr>
- <tr><td>iTunes</td><td>Online store</td><td>- Offers high-quality MP3 files for purchase<br>- Supports various devices and platforms<br>- Provides access to a large music library<br>- Allows offline playback and cloud storage</td></tr>
-
- <tr><td>YouTube Music</td><td>Streaming service</td><td>- Offers free and premium plans for streaming and downloading MP3 files<br>- Supports various devices and platforms<br>- Provides access to a large music library<br>- Allows offline playback and cloud storage<br>- Offers personalized recommendations and playlists<br>- Integrates with YouTube videos</td></tr>
- <tr><td>Gaana</td><td>Streaming service</td><td>- Offers free and premium plans for streaming and downloading MP3 files<br>- Supports various devices and platforms<br>- Provides access to a large music library<br>- Allows offline playback and cloud storage<br>- Offers personalized recommendations and playlists<br>- Specializes in Indian music</td></tr>
- <tr><td>Saavn</td><td>Streaming service</td><td>- Offers free and premium plans for streaming and downloading MP3 files<br>- Supports various devices and platforms<br>- Provides access to a large music library<br>- Allows offline playback and cloud storage<br>- Offers personalized recommendations and playlists<br>- Specializes in Indian music</td></tr>
- <tr><td>MP3Juices</td><td>Online converter</td><td>- Offers free and fast MP3 conversion of YouTube videos<br>- Supports various devices and platforms<br>- Provides access to a large music library<br>- Allows online playback and download</td></tr>
- <tr><td>MP3Skull</td><td>Online downloader</td><td>- Offers free and easy MP3 downloads from various sources<br>- Supports various devices and platforms<br>- Provides access to a large music library<br>- Allows online playback and download</td></tr>
- </table>
- <p>These are some of the best sources and platforms to download Hookah Bar MP3 online. However, you should always check the quality, legality and safety of the files before downloading them. You should also respect the rights and interests of the music's creators and owners.</p>
- <h2>How to enjoy Hookah Bar MP3 online?</h2>
-
- <h3>The best devices and apps to play Hookah Bar MP3 online</h3>
- <p>You can play Hookah Bar MP3 online on various devices, such as smartphones, tablets, laptops, desktops, speakers, headphones, earbuds, etc. You can also use various apps, such as iTunes, Spotify, YouTube Music, Gaana, Saavn, etc. However, you should choose the device and app that suit your preferences and needs. Some of the factors you should consider are:</p>
- <ul>
- <li>The compatibility of the device and app with the MP3 file format.</li>
- <li>The battery life and storage capacity of the device.</li>
- <li>The sound quality and volume of the device and app.</li>
- <li>The user interface and functionality of the device and app.</li>
- <li>The availability and cost of the device and app.</li>
- </ul>
- <h3>The best settings and features to enhance the sound quality of Hookah Bar MP3 online</h3>
- <p>You can enhance the sound quality of Hookah Bar MP3 online by adjusting the settings and features of your device and app. Some of the settings and features you can use are:</p>
- <ul>
- <li>The equalizer: This lets you adjust the balance of different frequencies in the sound. You can choose from preset modes or customize your own mode to your taste.</li>
- <li>The bass boost: This enhances the low-frequency sounds in the music. You can raise or lower the bass level according to your preference.</li>
- <li>The surround sound: This creates a 3D effect in the sound. You can turn this feature on or off depending on your device and app.</li>
- <li>The lyrics: This displays the words of the song on the screen. You can sing along or learn the meaning of the song.</li>
- <li>The playlist: This lets you create a list of songs you want to play in sequence. You can add or remove songs from your playlist as you wish.</li>
- </ul>
- <h3>The best occasions and moods to listen to Hookah Bar MP3 online</h3>
-
- <ul>
- <li>The party: This is a perfect occasion to play Hookah Bar MP3 online, as it is a fun, lively song that will get everyone dancing. You can play it on speakers or headphones and enjoy the beats with your friends.</li>
- <li>The workout: This is another great occasion to listen to Hookah Bar MP3 online, as it is an energetic, motivating song that will keep you going. You can play it on your smartphone or tablet and boost your adrenaline with the music.</li>
- <li>The relaxation: This is a surprising occasion to enjoy Hookah Bar MP3 online, as it is a soothing, calming song that will relax your mind. You can play it on your laptop or desktop and unwind with the melody.</li>
- <li>The romance: This is a romantic occasion to listen to Hookah Bar MP3 online, as it is a sweet, sensual song that will express your love. You can play it on your speaker or headphones and cuddle up with your partner.</li>
- <li>The trip: This is an adventurous occasion to enjoy Hookah Bar MP3 online, as it is a catchy, upbeat song that will make you want to explore new places. You can play it in your car or on your bike and enjoy the ride with the music.</li>
- </ul>
- <h2>Conclusion</h2>
- <p>Hookah Bar is a popular song that you can download and enjoy online. It is a catchy, upbeat dance number that uses a hookah bar as a metaphor for love. It was composed by Himesh Reshammiya, who also sang it with Vineet Singh and Aman Trikha. It was released in 2012 as part of the soundtrack of the movie <em>Khiladi 786</em>. It was a huge hit with audiences and critics alike. It won several awards and nominations for its music and vocals.</p>
-
- <p>You can enjoy Hookah Bar MP3 online on various devices and apps, such as smartphones, tablets, laptops, desktops, speakers, headphones, earbuds, etc. You can also use various settings and features to enhance the sound quality of the song, such as the equalizer, the bass boost, the surround sound, the lyrics and the playlist. You can also listen to Hookah Bar MP3 online on different occasions and in different moods, such as the party, the workout, the relaxation, the romance and the trip.</p>
- <p>We hope this article has helped you learn more about downloading Hookah Bar MP3 and how to enjoy it online. If you have any questions or comments, feel free to contact us. Thank you for reading!</p>
- <h2>Frequently asked questions</h2>
- <p>Here are some frequently asked questions about downloading Hookah Bar MP3:</p>
- <ol>
- <li>What is the length of Hookah Bar MP3?</li>
- <p>The length of Hookah Bar MP3 is 4 minutes and 16 seconds.</p>
- <li>What is the size of Hookah Bar MP3?</li>
- <p>The size of Hookah Bar MP3 varies depending on the source and platform you download it from. However, it is usually around 4 MB.</p>
- <li>What is the genre of Hookah Bar MP3?</li>
- <p>The genre of Hookah Bar MP3 is Bollywood dance music.</p>
- <li>What is the language of Hookah Bar MP3?</li>
- <p>The language of Hookah Bar MP3 is Hindi.</p>
- <li>What is the rating of Hookah Bar MP3?</li>
- <p>The rating of Hookah Bar MP3 is 4.5 out of 5 stars on most platforms.</p>
- </ol>
- <br />
- <br />
spaces/Benson/text-generation/Examples/Carx Street Apk Infinite Money 0.8 4.md DELETED
@@ -1,127 +0,0 @@
1
-
2
- <H1>CarX Street APK Infinite Cash 0.8 4: How to Download and Play the Most Addictive Street Racing Game of the Year</H1>
3
- <p>Are you a fan of street racing games? Do you like to feel the adrenaline rush of speeding through the city streets in restored old cars? Do you want to have a garage full of legendary and exclusive cars? Then you’ll love CarX Street APK Infinite Cash 0.8 4, a game that will take you into the world of street racing with amazing graphics, easy controls and lots of fun. </p>
4
- <h2>carx street apk infinite money 0.8 4</h2><br /><p><b><b>Download Zip</b> &#9658; <a href="https://bltlly.com/2v6MBD">https://bltlly.com/2v6MBD</a></b></p><br /><br />
5
- <p>In this article, we will show you what CarX Street APK Infinite Money 0.8 4 is, how to download and install the game on your Android device, how to play and make the most of the infinite money you can use to buy and upgrade your cars, and what the advantages and disadvantages of playing this game are. Shall we get started? </p>
6
- <H2>What is CarX Street APK Infinite Cash 0.8 4?</H2>
7
- <p>CarX Street APK Infinite Cash 0.8 4 is a modified version of the original CarX Street game, developed by CarX Technologies, a company specializing in car simulation games. The original game was released in January 2023 for Android and iOS, and received much praise from players for its graphics quality, realistic physics, and variety of cars and game modes. </p>
8
- <H3>A street racing game with amazing graphics and easy controls</H3>
9
- <p>CarX Street APK Infinite Cash 0.8 4 is a game that impresses with its graphics quality. The cars are modeled with detail and realism, the environments are varied and well lit, and the sound effects are immersive. You will feel like you are really driving through the city streets, taking corners, sliding, overtaking and pulling off maneuvers. </p>
10
- <p></p>
11
-
12
- <H3>A game that allows you to customize and improve your cars</H3>
13
- The CarX Street APK Infinite Cash 0.8 4 is a game that gives you freedom to customize and improve your cars your way. You can choose from over 50 different cars, from classics to sports, through Muscle Cars, hot Rods, tuned and more. Each car has its own characteristics of speed, acceleration, braking, traction and handling, which you can see on the screen before buying or using. </p>
14
- <p>You can also modify the appearance and performance of your cars. You can change the wheels, tires, headlights, flashlights, mirrors, bumpers, skirts, spoilers, hoods, doors, windows, colors, stickers and more. You can also change the engine, turbo, exhaust, air filter, brake system, suspension, differential and more. You can see the changes you make to your car in real time on the screen. </p>
15
- <H3>A game that offers short and exciting races against players from around the world</H3>
16
- <p>CarX Street APK Infinite Cash 0.8 4 is a game that offers you short and exciting races against players from all over the world. You can take part in daily, weekly and monthly events that give you cash and reputation rewards. You can also join leagues and tournaments that put you in direct confrontations with other players. You can see the ranking of the best players in the world and compare your performance with theirs. </p>
17
- <p>The races are fast and intense. You have to use your skill to get a good start, take perfect corners, avoid obstacles and opponents, use nitro at the right moment and finish first. You can choose from different race modes such as sprint, drift, drag and time attack. You can also choose between different difficulty levels, from beginner to professional. </p>
18
- <H2>How to download CarX Street APK Infinite Money 0.8 4?</H2>
19
-
20
- <H3>The minimum requirements to install the game</H3>
21
- <p>According to the game’s official website, the minimum requirements for installing CarX Street on your Android device are:</p>
22
- <table>
23
- <tr>
24
- <th>Requirement</th>
25
- <th>Value</th>
26
- </tr>
27
- <tr>
28
- <td>Android version</td>
29
- <td>6.0 or higher</td>
30
- </tr>
31
- <tr>
32
- <td>Free space</td>
33
- <td>1 GB or more</td>
34
- </tr>
35
- <tr>
36
- <td>RAM</td>
37
- </tr>
38
- <tr>
39
- <td>Processor</td>
40
- <td>Quad-core or higher</td>
41
- </tr>
42
- <tr>
43
- <td>Internet connection</td>
44
- <td>Required to play online</td>
45
- </tr>
46
- </table>
47
- <p>If your device does not meet these requirements, you may have trouble installing or running the game. In this case, you can try downloading the original version of the game from the Google Play Store, which may be more compatible with your device. </p>
48
- <H3>The steps to download and install the APK file</H3>
49
- <p>If your device meets the minimum requirements, you can follow the steps below to download and install CarX Street APK Infinite Cash 0.8 4:</p>
50
- <ol>
51
- <li>Visit a trusted website that offers the game's APK file for download. You can search Google for "CarX Street APK Infinite Cash 0.8 4" and choose one of the results. But beware: not all websites are safe, and some may contain viruses or malware. Therefore, we recommend that you use an antivirus and a VPN before downloading any APK file.</li>
52
- <li>Click the download button and wait for the file to be downloaded to your device. The file should be about 500 MB in size. </li>
53
- <li>Before installing the APK file, you need to enable the option to install applications from unknown sources on your device. To do this, go to Settings > Security > Unknown sources and enable the option. </li>
54
- <li>Now, locate the APK file you downloaded in your device’s downloads folder and click on it to start the installation. </li>
55
-
56
- <li>Done! Now you can open the game and start having fun with infinite money and all unlocked cars. </li>
57
- </ol>
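
For readers comfortable with a command line, the same sideload can be scripted once USB debugging is enabled. A minimal sketch using Python's subprocess module and the standard adb tool; the APK file name here is a placeholder, not the real download name:

import subprocess

# Install the downloaded APK over USB debugging.
# The -r flag reinstalls, keeping the app's existing data.
subprocess.run(
    ["adb", "install", "-r", "carx-street-mod.apk"],  # placeholder file name
    check=True,
)
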
58
- <H3>Precautions to take before downloading the APK file</H3>
59
- <p>Downloading and installing CarX Street APK Infinite Cash 0.8 4 can be a great way to enjoy the game without limitations, but it can also bring some risks and disadvantages. Therefore, it is important that you take some precautions before downloading the APK file, such as:</p>
60
- <ul>
61
- <li>Make a backup of your important data and files on your device in case something goes wrong during game installation or use. </li>
62
- <li>Check if the website you are using to download the APK file is reliable and safe, using an antivirus and a VPN to protect your device from possible threats. </li>
63
- <li>Read the comments and reviews of other users who have already downloaded the APK file, to know if they had any problems or are satisfied with the game. </li>
64
- <li>Respect the copyright and intellectual property of the developer of the original game by not distributing or selling the APK file without their permission. </li>
65
- <li>Be aware of the possible consequences of using a modified game, such as violation of the terms and conditions of the original game, account ban, loss of progress, lack of updates and support, etc.</li>
66
- </ul>
67
- <H2>How to play CarX Street APK Infinite Cash 0.8 4?</H2>
68
- <p>Now that you've downloaded and installed CarX Street APK Infinite Cash 0.8 4 on your Android device, you're ready to play and have fun with the most addictive street racing game of the year. But how do you play and make the most of the endless money you can use to buy and upgrade your cars? See the tips below:</p>
69
- <H3>How to choose and assemble your car</H3>
70
-
71
- <p>After choosing your car, you can customize it your way. You can change the color, stickers, parts and accessories of your car, to make it more beautiful and faster. You can see the changes you make to your car in real time on the screen. You can also test your car before using it in races, to see how it behaves on the track. </p>
72
- <H3>How to participate in events and races</H3>
73
- <p>The second step to playing CarX Street APK Infinite Cash 0.8 4 is to take part in events and races. You can access the game map from the main menu and see the events and races that are available to you. You can choose from different race modes such as sprint, drift, drag and time attack. You can also choose between different difficulty levels, from beginner to professional. </p>
74
- <p>Events are challenges that give you rewards in money and reputation. They can be daily, weekly or monthly, and can have different themes and goals. For example, you may have to make a certain number of skids, overtake a certain number of opponents, or get first in a specific race. </p>
75
- <p>Races are direct confrontations against other players from all over the world. You can enter leagues and tournaments that put you in races against players of the same level as you. You can see the ranking of the best players in the world and compare your performance with theirs. Races are fast and intense, and require you to use your skill to win. </p>
76
- <H3>How to use infinite money to buy and upgrade your cars</H3>
77
-
78
- <p>You can access the game store from the main menu and see the cars and parts that are available to you. You can see the features and prices of each item before buying. You can also see the in-game recommendations for the best cars and the best parts for each race mode. </p>
79
- <p>Using infinite money is an advantage that allows you to have a garage full of legendary and exclusive cars, and have the best cars for each race. But remember: infinite money is not everything. You also need to have skill and strategy to win races. </p>
80
- <H2>What are the advantages and disadvantages of playing CarX Street APK Infinite Cash 0.8 4?</H2>
81
- <p>Playing CarX Street APK Infinite Cash 0.8 4 has its advantages and disadvantages. See what they are:</p>
82
- <H3>The advantages of playing the game</H3>
83
- <ul>
84
- <li>Guaranteed fun with fast and challenging races: The game offers you a fun and exciting street racing experience with amazing graphics, realistic physics, variety of cars and game modes. </li>
85
- to make your car more beautiful and faster. </li>
86
- <li>Possibility to collect legendary and exclusive cars: The game allows you to have a garage full of legendary and exclusive cars, which you can buy with the infinite money you have. You can have cars like the Mustang, the Camaro, the Charger, the Corvette, the Ferrari, the Lamborghini, the Bugatti, and more. </li>
87
- </ul>
88
- <H3>The disadvantages of playing the game</H3>
89
- <ul>
90
- <li>Risk of downloading an infected or corrupted APK file: Downloading and installing CarX Street APK Infinite Money 0.8 4 can be dangerous for your device if you do not use a reliable and secure website. You can download an APK file that contains viruses or malware, which can damage your device or steal your personal data. </li>
91
-
92
- <li>Lack of updates and support from the game developer: Using CarX Street APK Infinite Cash 0.8 4 may prevent you from receiving updates and support from the original game developer. You may miss out on the new features, cars, tracks, game modes and bug fixes that the developer periodically releases to improve the game. You may also experience compatibility or stability issues on your device. </li>
93
- </ul>
94
- <H2>Conclusion</H2>
95
- <p>CarX Street APK Infinite Cash 0.8 4 is a street racing game that offers guaranteed fun with amazing graphics, easy controls and lots of customization. You can use the infinite money you have to buy and upgrade your cars, and take part in events and races against players from around the world. But you also need to be aware of the risks and disadvantages of using a modified game, such as downloading an infected or corrupted APK file, violating the terms and conditions of the original game, or going without updates and support from the game developer. </p>
96
- <p>If you want to download and play CarX Street APK Infinite Cash 0.8 4 on your Android device, follow the tips we gave you in this article. But remember: do it at your own risk, and respect the copyright and intellectual property of the original game developer. </p>
97
- <p>So, did you like the article? Do you have any questions or suggestions? Leave your comment below. And if you liked the article, share it with your friends on social media. Thanks for reading! </p>
98
- <H2>FAQs</H2>
99
- <p>Here are some frequently asked questions about CarX Street APK Infinite Cash 0.8 4:</p>
100
- <ol>
101
- <li><b>What is an APK file? </b></li>
102
- <p>An APK file is a file format used to install applications on the Android operating system. It contains all the files needed to run an application on your device. </p>
103
- <li><b>What is a modified game? </b></li>
104
-
105
- <li><b>Where can I download CarX Street APK Infinite Money 0.8 4?</b></li>
106
- game. You can search Google for "CarX Street APK Infinite Cash 0.8 4" and choose one of the results. But beware: not all websites are safe, and some may contain viruses or malware. Therefore, we recommend that you use an antivirus and a VPN before downloading any APK file.</p>
107
- <li><b>How do I update CarX Street APK Infinite Cash 0.8 4?</b></li>
108
- <p>To update CarX Street APK Infinite Cash 0.8 4, you need to download and install the latest version of the game's APK file by following the same steps you used to download and install the previous version. But beware: the latest version of the APK file is not always compatible with the previous one, and you may lose your progress or have problems running the game. </p>
109
- <li><b>Can I play CarX Street APK Infinite Cash 0.8 4 on my PC? </b></li>
110
- <p>Yes, you can play CarX Street APK Infinite Cash 0.8 4 on your PC using an Android emulator, a program that simulates the Android operating system on your PC and lets you install and run Android applications on it. Some of the most popular Android emulators are BlueStacks, NoxPlayer and LDPlayer.</p>
111
- <li><b>Can I play CarX Street APK Infinite Cash 0.8 4 with my friends? </b></li>
112
- <p>Yes, you can play CarX Street APK Infinite Cash 0.8 4 with your friends using the game's online multiplayer mode. You can invite your friends to join you in the races, or compete against them in the world ranking. You can also chat with them in-game, or send private messages. </p>
113
- <li><b>What to do if I have any problems or questions about CarX Street APK Infinite Money 0.8 4?</b></li>
114
- <p>If you have any problems or questions about CarX Street APK Infinite Money 0.8 4, you can try the following solutions:</p>
115
- <ul>
116
-
117
- <li>Check if you have downloaded and installed the correct and updated version of the game APK file. </li>
118
- <li>Verify that you have enabled the option to install applications from unknown sources on your device. </li>
119
- <li>Check if you have a stable and secure internet connection to play online. </li>
120
- <li>Check that you have enough free space on your device to store the game and its data. </li>
121
- <li>Restart your device and try to open the game again. </li>
122
- <li>Uninstall and reinstall the game, backing up your data first. </li>
123
- <li>Search the internet for solutions to specific problems you are facing. </li>
124
- <li>Contact the site you used to download the game’s APK file, or other users who have already played the game, for help or advice. </li>
125
- </ul></p>
126
- <br />
127
- <br />
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/commands/inspect.py DELETED
@@ -1,92 +0,0 @@
1
- import logging
2
- from optparse import Values
3
- from typing import Any, Dict, List
4
-
5
- from pip._vendor.packaging.markers import default_environment
6
- from pip._vendor.rich import print_json
7
-
8
- from pip import __version__
9
- from pip._internal.cli import cmdoptions
10
- from pip._internal.cli.req_command import Command
11
- from pip._internal.cli.status_codes import SUCCESS
12
- from pip._internal.metadata import BaseDistribution, get_environment
13
- from pip._internal.utils.compat import stdlib_pkgs
14
- from pip._internal.utils.urls import path_to_url
15
-
16
- logger = logging.getLogger(__name__)
17
-
18
-
19
- class InspectCommand(Command):
20
- """
21
- Inspect the content of a Python environment and produce a report in JSON format.
22
- """
23
-
24
- ignore_require_venv = True
25
- usage = """
26
- %prog [options]"""
27
-
28
- def add_options(self) -> None:
29
- self.cmd_opts.add_option(
30
- "--local",
31
- action="store_true",
32
- default=False,
33
- help=(
34
- "If in a virtualenv that has global access, do not list "
35
- "globally-installed packages."
36
- ),
37
- )
38
- self.cmd_opts.add_option(
39
- "--user",
40
- dest="user",
41
- action="store_true",
42
- default=False,
43
- help="Only output packages installed in user-site.",
44
- )
45
- self.cmd_opts.add_option(cmdoptions.list_path())
46
- self.parser.insert_option_group(0, self.cmd_opts)
47
-
48
- def run(self, options: Values, args: List[str]) -> int:
49
- cmdoptions.check_list_path_option(options)
50
- dists = get_environment(options.path).iter_installed_distributions(
51
- local_only=options.local,
52
- user_only=options.user,
53
- skip=set(stdlib_pkgs),
54
- )
55
- output = {
56
- "version": "1",
57
- "pip_version": __version__,
58
- "installed": [self._dist_to_dict(dist) for dist in dists],
59
- "environment": default_environment(),
60
- # TODO tags? scheme?
61
- }
62
- print_json(data=output)
63
- return SUCCESS
64
-
65
- def _dist_to_dict(self, dist: BaseDistribution) -> Dict[str, Any]:
66
- res: Dict[str, Any] = {
67
- "metadata": dist.metadata_dict,
68
- "metadata_location": dist.info_location,
69
- }
70
- # direct_url. Note that we don't have download_info (as in the installation
71
- # report) since it is not recorded in installed metadata.
72
- direct_url = dist.direct_url
73
- if direct_url is not None:
74
- res["direct_url"] = direct_url.to_dict()
75
- else:
76
- # Emulate direct_url for legacy editable installs.
77
- editable_project_location = dist.editable_project_location
78
- if editable_project_location is not None:
79
- res["direct_url"] = {
80
- "url": path_to_url(editable_project_location),
81
- "dir_info": {
82
- "editable": True,
83
- },
84
- }
85
- # installer
86
- installer = dist.installer
87
- if dist.installer:
88
- res["installer"] = installer
89
- # requested
90
- if dist.installed_with_dist_info:
91
- res["requested"] = dist.requested
92
- return res
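
The command above backs `pip inspect`, which prints a JSON report of the environment via `print_json`. A hedged sketch of consuming that report from a script, assuming a pip recent enough to ship the command (22.2+) and the "installed"/"metadata" fields that `run()` and `_dist_to_dict()` above produce:

import json
import subprocess

# Capture the JSON report that InspectCommand.run() prints.
proc = subprocess.run(
    ["python", "-m", "pip", "inspect"],
    check=True, capture_output=True, text=True,
)
report = json.loads(proc.stdout)

# Each entry mirrors _dist_to_dict(): metadata plus optional installer info.
for dist in report["installed"]:
    meta = dist["metadata"]
    print(meta["name"], meta["version"], dist.get("installer", "unknown"))
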
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/packaging/version.py DELETED
@@ -1,504 +0,0 @@
1
- # This file is dual licensed under the terms of the Apache License, Version
2
- # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3
- # for complete details.
4
-
5
- import collections
6
- import itertools
7
- import re
8
- import warnings
9
- from typing import Callable, Iterator, List, Optional, SupportsInt, Tuple, Union
10
-
11
- from ._structures import Infinity, InfinityType, NegativeInfinity, NegativeInfinityType
12
-
13
- __all__ = ["parse", "Version", "LegacyVersion", "InvalidVersion", "VERSION_PATTERN"]
14
-
15
- InfiniteTypes = Union[InfinityType, NegativeInfinityType]
16
- PrePostDevType = Union[InfiniteTypes, Tuple[str, int]]
17
- SubLocalType = Union[InfiniteTypes, int, str]
18
- LocalType = Union[
19
- NegativeInfinityType,
20
- Tuple[
21
- Union[
22
- SubLocalType,
23
- Tuple[SubLocalType, str],
24
- Tuple[NegativeInfinityType, SubLocalType],
25
- ],
26
- ...,
27
- ],
28
- ]
29
- CmpKey = Tuple[
30
- int, Tuple[int, ...], PrePostDevType, PrePostDevType, PrePostDevType, LocalType
31
- ]
32
- LegacyCmpKey = Tuple[int, Tuple[str, ...]]
33
- VersionComparisonMethod = Callable[
34
- [Union[CmpKey, LegacyCmpKey], Union[CmpKey, LegacyCmpKey]], bool
35
- ]
36
-
37
- _Version = collections.namedtuple(
38
- "_Version", ["epoch", "release", "dev", "pre", "post", "local"]
39
- )
40
-
41
-
42
- def parse(version: str) -> Union["LegacyVersion", "Version"]:
43
- """
44
- Parse the given version string and return either a :class:`Version` object
45
- or a :class:`LegacyVersion` object depending on if the given version is
46
- a valid PEP 440 version or a legacy version.
47
- """
48
- try:
49
- return Version(version)
50
- except InvalidVersion:
51
- return LegacyVersion(version)
52
-
53
-
54
- class InvalidVersion(ValueError):
55
- """
56
- An invalid version was found, users should refer to PEP 440.
57
- """
58
-
59
-
60
- class _BaseVersion:
61
- _key: Union[CmpKey, LegacyCmpKey]
62
-
63
- def __hash__(self) -> int:
64
- return hash(self._key)
65
-
66
- # Please keep the duplicated `isinstance` check
67
- # in the six comparisons hereunder
68
- # unless you find a way to avoid adding overhead function calls.
69
- def __lt__(self, other: "_BaseVersion") -> bool:
70
- if not isinstance(other, _BaseVersion):
71
- return NotImplemented
72
-
73
- return self._key < other._key
74
-
75
- def __le__(self, other: "_BaseVersion") -> bool:
76
- if not isinstance(other, _BaseVersion):
77
- return NotImplemented
78
-
79
- return self._key <= other._key
80
-
81
- def __eq__(self, other: object) -> bool:
82
- if not isinstance(other, _BaseVersion):
83
- return NotImplemented
84
-
85
- return self._key == other._key
86
-
87
- def __ge__(self, other: "_BaseVersion") -> bool:
88
- if not isinstance(other, _BaseVersion):
89
- return NotImplemented
90
-
91
- return self._key >= other._key
92
-
93
- def __gt__(self, other: "_BaseVersion") -> bool:
94
- if not isinstance(other, _BaseVersion):
95
- return NotImplemented
96
-
97
- return self._key > other._key
98
-
99
- def __ne__(self, other: object) -> bool:
100
- if not isinstance(other, _BaseVersion):
101
- return NotImplemented
102
-
103
- return self._key != other._key
104
-
105
-
106
- class LegacyVersion(_BaseVersion):
107
- def __init__(self, version: str) -> None:
108
- self._version = str(version)
109
- self._key = _legacy_cmpkey(self._version)
110
-
111
- warnings.warn(
112
- "Creating a LegacyVersion has been deprecated and will be "
113
- "removed in the next major release",
114
- DeprecationWarning,
115
- )
116
-
117
- def __str__(self) -> str:
118
- return self._version
119
-
120
- def __repr__(self) -> str:
121
- return f"<LegacyVersion('{self}')>"
122
-
123
- @property
124
- def public(self) -> str:
125
- return self._version
126
-
127
- @property
128
- def base_version(self) -> str:
129
- return self._version
130
-
131
- @property
132
- def epoch(self) -> int:
133
- return -1
134
-
135
- @property
136
- def release(self) -> None:
137
- return None
138
-
139
- @property
140
- def pre(self) -> None:
141
- return None
142
-
143
- @property
144
- def post(self) -> None:
145
- return None
146
-
147
- @property
148
- def dev(self) -> None:
149
- return None
150
-
151
- @property
152
- def local(self) -> None:
153
- return None
154
-
155
- @property
156
- def is_prerelease(self) -> bool:
157
- return False
158
-
159
- @property
160
- def is_postrelease(self) -> bool:
161
- return False
162
-
163
- @property
164
- def is_devrelease(self) -> bool:
165
- return False
166
-
167
-
168
- _legacy_version_component_re = re.compile(r"(\d+ | [a-z]+ | \.| -)", re.VERBOSE)
169
-
170
- _legacy_version_replacement_map = {
171
- "pre": "c",
172
- "preview": "c",
173
- "-": "final-",
174
- "rc": "c",
175
- "dev": "@",
176
- }
177
-
178
-
179
- def _parse_version_parts(s: str) -> Iterator[str]:
180
- for part in _legacy_version_component_re.split(s):
181
- part = _legacy_version_replacement_map.get(part, part)
182
-
183
- if not part or part == ".":
184
- continue
185
-
186
- if part[:1] in "0123456789":
187
- # pad for numeric comparison
188
- yield part.zfill(8)
189
- else:
190
- yield "*" + part
191
-
192
- # ensure that alpha/beta/candidate are before final
193
- yield "*final"
194
-
195
-
196
- def _legacy_cmpkey(version: str) -> LegacyCmpKey:
197
-
198
- # We hardcode an epoch of -1 here. A PEP 440 version can only have a epoch
199
- # greater than or equal to 0. This will effectively put the LegacyVersion,
200
- # which uses the defacto standard originally implemented by setuptools,
201
- # as before all PEP 440 versions.
202
- epoch = -1
203
-
204
- # This scheme is taken from pkg_resources.parse_version setuptools prior to
205
- # it's adoption of the packaging library.
206
- parts: List[str] = []
207
- for part in _parse_version_parts(version.lower()):
208
- if part.startswith("*"):
209
- # remove "-" before a prerelease tag
210
- if part < "*final":
211
- while parts and parts[-1] == "*final-":
212
- parts.pop()
213
-
214
- # remove trailing zeros from each series of numeric parts
215
- while parts and parts[-1] == "00000000":
216
- parts.pop()
217
-
218
- parts.append(part)
219
-
220
- return epoch, tuple(parts)
221
-
222
-
223
- # Deliberately not anchored to the start and end of the string, to make it
224
- # easier for 3rd party code to reuse
225
- VERSION_PATTERN = r"""
226
- v?
227
- (?:
228
- (?:(?P<epoch>[0-9]+)!)? # epoch
229
- (?P<release>[0-9]+(?:\.[0-9]+)*) # release segment
230
- (?P<pre> # pre-release
231
- [-_\.]?
232
- (?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview))
233
- [-_\.]?
234
- (?P<pre_n>[0-9]+)?
235
- )?
236
- (?P<post> # post release
237
- (?:-(?P<post_n1>[0-9]+))
238
- |
239
- (?:
240
- [-_\.]?
241
- (?P<post_l>post|rev|r)
242
- [-_\.]?
243
- (?P<post_n2>[0-9]+)?
244
- )
245
- )?
246
- (?P<dev> # dev release
247
- [-_\.]?
248
- (?P<dev_l>dev)
249
- [-_\.]?
250
- (?P<dev_n>[0-9]+)?
251
- )?
252
- )
253
- (?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))? # local version
254
- """
255
-
256
-
257
- class Version(_BaseVersion):
258
-
259
- _regex = re.compile(r"^\s*" + VERSION_PATTERN + r"\s*$", re.VERBOSE | re.IGNORECASE)
260
-
261
- def __init__(self, version: str) -> None:
262
-
263
- # Validate the version and parse it into pieces
264
- match = self._regex.search(version)
265
- if not match:
266
- raise InvalidVersion(f"Invalid version: '{version}'")
267
-
268
- # Store the parsed out pieces of the version
269
- self._version = _Version(
270
- epoch=int(match.group("epoch")) if match.group("epoch") else 0,
271
- release=tuple(int(i) for i in match.group("release").split(".")),
272
- pre=_parse_letter_version(match.group("pre_l"), match.group("pre_n")),
273
- post=_parse_letter_version(
274
- match.group("post_l"), match.group("post_n1") or match.group("post_n2")
275
- ),
276
- dev=_parse_letter_version(match.group("dev_l"), match.group("dev_n")),
277
- local=_parse_local_version(match.group("local")),
278
- )
279
-
280
- # Generate a key which will be used for sorting
281
- self._key = _cmpkey(
282
- self._version.epoch,
283
- self._version.release,
284
- self._version.pre,
285
- self._version.post,
286
- self._version.dev,
287
- self._version.local,
288
- )
289
-
290
- def __repr__(self) -> str:
291
- return f"<Version('{self}')>"
292
-
293
- def __str__(self) -> str:
294
- parts = []
295
-
296
- # Epoch
297
- if self.epoch != 0:
298
- parts.append(f"{self.epoch}!")
299
-
300
- # Release segment
301
- parts.append(".".join(str(x) for x in self.release))
302
-
303
- # Pre-release
304
- if self.pre is not None:
305
- parts.append("".join(str(x) for x in self.pre))
306
-
307
- # Post-release
308
- if self.post is not None:
309
- parts.append(f".post{self.post}")
310
-
311
- # Development release
312
- if self.dev is not None:
313
- parts.append(f".dev{self.dev}")
314
-
315
- # Local version segment
316
- if self.local is not None:
317
- parts.append(f"+{self.local}")
318
-
319
- return "".join(parts)
320
-
321
- @property
322
- def epoch(self) -> int:
323
- _epoch: int = self._version.epoch
324
- return _epoch
325
-
326
- @property
327
- def release(self) -> Tuple[int, ...]:
328
- _release: Tuple[int, ...] = self._version.release
329
- return _release
330
-
331
- @property
332
- def pre(self) -> Optional[Tuple[str, int]]:
333
- _pre: Optional[Tuple[str, int]] = self._version.pre
334
- return _pre
335
-
336
- @property
337
- def post(self) -> Optional[int]:
338
- return self._version.post[1] if self._version.post else None
339
-
340
- @property
341
- def dev(self) -> Optional[int]:
342
- return self._version.dev[1] if self._version.dev else None
343
-
344
- @property
345
- def local(self) -> Optional[str]:
346
- if self._version.local:
347
- return ".".join(str(x) for x in self._version.local)
348
- else:
349
- return None
350
-
351
- @property
352
- def public(self) -> str:
353
- return str(self).split("+", 1)[0]
354
-
355
- @property
356
- def base_version(self) -> str:
357
- parts = []
358
-
359
- # Epoch
360
- if self.epoch != 0:
361
- parts.append(f"{self.epoch}!")
362
-
363
- # Release segment
364
- parts.append(".".join(str(x) for x in self.release))
365
-
366
- return "".join(parts)
367
-
368
- @property
369
- def is_prerelease(self) -> bool:
370
- return self.dev is not None or self.pre is not None
371
-
372
- @property
373
- def is_postrelease(self) -> bool:
374
- return self.post is not None
375
-
376
- @property
377
- def is_devrelease(self) -> bool:
378
- return self.dev is not None
379
-
380
- @property
381
- def major(self) -> int:
382
- return self.release[0] if len(self.release) >= 1 else 0
383
-
384
- @property
385
- def minor(self) -> int:
386
- return self.release[1] if len(self.release) >= 2 else 0
387
-
388
- @property
389
- def micro(self) -> int:
390
- return self.release[2] if len(self.release) >= 3 else 0
391
-
392
-
393
- def _parse_letter_version(
394
- letter: str, number: Union[str, bytes, SupportsInt]
395
- ) -> Optional[Tuple[str, int]]:
396
-
397
- if letter:
398
- # We consider there to be an implicit 0 in a pre-release if there is
399
- # not a numeral associated with it.
400
- if number is None:
401
- number = 0
402
-
403
- # We normalize any letters to their lower case form
404
- letter = letter.lower()
405
-
406
- # We consider some words to be alternate spellings of other words and
407
- # in those cases we want to normalize the spellings to our preferred
408
- # spelling.
409
- if letter == "alpha":
410
- letter = "a"
411
- elif letter == "beta":
412
- letter = "b"
413
- elif letter in ["c", "pre", "preview"]:
414
- letter = "rc"
415
- elif letter in ["rev", "r"]:
416
- letter = "post"
417
-
418
- return letter, int(number)
419
- if not letter and number:
420
- # We assume if we are given a number, but we are not given a letter
421
- # then this is using the implicit post release syntax (e.g. 1.0-1)
422
- letter = "post"
423
-
424
- return letter, int(number)
425
-
426
- return None
427
-
428
-
429
- _local_version_separators = re.compile(r"[\._-]")
430
-
431
-
432
- def _parse_local_version(local: str) -> Optional[LocalType]:
433
- """
434
- Takes a string like abc.1.twelve and turns it into ("abc", 1, "twelve").
435
- """
436
- if local is not None:
437
- return tuple(
438
- part.lower() if not part.isdigit() else int(part)
439
- for part in _local_version_separators.split(local)
440
- )
441
- return None
442
-
443
-
444
- def _cmpkey(
445
- epoch: int,
446
- release: Tuple[int, ...],
447
- pre: Optional[Tuple[str, int]],
448
- post: Optional[Tuple[str, int]],
449
- dev: Optional[Tuple[str, int]],
450
- local: Optional[Tuple[SubLocalType]],
451
- ) -> CmpKey:
452
-
453
- # When we compare a release version, we want to compare it with all of the
454
- # trailing zeros removed. So we'll use a reverse the list, drop all the now
455
- # leading zeros until we come to something non zero, then take the rest
456
- # re-reverse it back into the correct order and make it a tuple and use
457
- # that for our sorting key.
458
- _release = tuple(
459
- reversed(list(itertools.dropwhile(lambda x: x == 0, reversed(release))))
460
- )
461
-
462
- # We need to "trick" the sorting algorithm to put 1.0.dev0 before 1.0a0.
463
- # We'll do this by abusing the pre segment, but we _only_ want to do this
464
- # if there is not a pre or a post segment. If we have one of those then
465
- # the normal sorting rules will handle this case correctly.
466
- if pre is None and post is None and dev is not None:
467
- _pre: PrePostDevType = NegativeInfinity
468
- # Versions without a pre-release (except as noted above) should sort after
469
- # those with one.
470
- elif pre is None:
471
- _pre = Infinity
472
- else:
473
- _pre = pre
474
-
475
- # Versions without a post segment should sort before those with one.
476
- if post is None:
477
- _post: PrePostDevType = NegativeInfinity
478
-
479
- else:
480
- _post = post
481
-
482
- # Versions without a development segment should sort after those with one.
483
- if dev is None:
484
- _dev: PrePostDevType = Infinity
485
-
486
- else:
487
- _dev = dev
488
-
489
- if local is None:
490
- # Versions without a local segment should sort before those with one.
491
- _local: LocalType = NegativeInfinity
492
- else:
493
- # Versions with a local segment need that segment parsed to implement
494
- # the sorting rules in PEP440.
495
- # - Alpha numeric segments sort before numeric segments
496
- # - Alpha numeric segments sort lexicographically
497
- # - Numeric segments sort numerically
498
- # - Shorter versions sort before longer versions when the prefixes
499
- # match exactly
500
- _local = tuple(
501
- (i, "") if isinstance(i, int) else (NegativeInfinity, i) for i in local
502
- )
503
-
504
- return epoch, _release, _pre, _post, _dev, _local
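
This module is pip's vendored copy of packaging's version machinery. A quick, self-contained demonstration of the PEP 440 ordering that the `_cmpkey` logic above implements, using the public packaging distribution (same code, unvendored):

from packaging.version import Version, InvalidVersion

# dev releases sort before pre-releases, which sort before finals, which
# sort before post-releases -- exactly the Infinity/NegativeInfinity
# placeholders used in _cmpkey above.
assert Version("1.0.dev0") < Version("1.0a1") < Version("1.0") < Version("1.0.post1")

v = Version("2.1.3rc1+ubuntu.1")
print(v.release)        # (2, 1, 3)
print(v.pre)            # ('rc', 1)
print(v.local)          # 'ubuntu.1'
print(v.public)         # '2.1.3rc1'
print(v.is_prerelease)  # True

try:
    Version("not a version")
except InvalidVersion:
    print("rejected, as VERSION_PATTERN requires")
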
spaces/CALM/Dashboard/streamlit_observable/frontend/src/streamlit/streamlit.ts DELETED
@@ -1,198 +0,0 @@
1
- /**
2
- * @license
3
- * Copyright 2018-2020 Streamlit Inc.
4
- *
5
- * Licensed under the Apache License, Version 2.0 (the "License");
6
- * you may not use this file except in compliance with the License.
7
- * You may obtain a copy of the License at
8
- *
9
- * http://www.apache.org/licenses/LICENSE-2.0
10
- *
11
- * Unless required by applicable law or agreed to in writing, software
12
- * distributed under the License is distributed on an "AS IS" BASIS,
13
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
- * See the License for the specific language governing permissions and
15
- * limitations under the License.
16
- */
17
-
18
- // Safari doesn't support the EventTarget class, so we use a shim.
19
- import { EventTarget } from "event-target-shim"
20
- import { ArrowDataframeProto, ArrowTable } from "./ArrowTable"
21
-
22
- /** Data sent in the custom Streamlit render event. */
23
- export interface RenderData {
24
- args: any
25
- disabled: boolean
26
- }
27
-
28
- /** Messages from Component -> Streamlit */
29
- enum ComponentMessageType {
30
- // A component sends this message when it's ready to receive messages
31
- // from Streamlit. Streamlit won't send any messages until it gets this.
32
- // Data: { apiVersion: number }
33
- COMPONENT_READY = "streamlit:componentReady",
34
-
35
- // The component has a new widget value. Send it back to Streamlit, which
36
- // will then re-run the app.
37
- // Data: { value: any }
38
- SET_COMPONENT_VALUE = "streamlit:setComponentValue",
39
-
40
- // The component has a new height for its iframe.
41
- // Data: { height: number }
42
- SET_FRAME_HEIGHT = "streamlit:setFrameHeight",
43
- }
44
-
45
- /**
46
- * Streamlit communication API.
47
- *
48
- * Components can send data to Streamlit via the functions defined here,
49
- * and receive data from Streamlit via the `events` property.
50
- */
51
- export class Streamlit {
52
- /**
53
- * The Streamlit component API version we're targetting.
54
- * There's currently only 1!
55
- */
56
- public static readonly API_VERSION = 1
57
-
58
- public static readonly RENDER_EVENT = "streamlit:render"
59
-
60
- /** Dispatches events received from Streamlit. */
61
- public static readonly events = new EventTarget()
62
-
63
- private static registeredMessageListener = false
64
- private static lastFrameHeight?: number
65
-
66
- /**
67
- * Tell Streamlit that the component is ready to start receiving data.
68
- * Streamlit will defer emitting RENDER events until it receives the
69
- * COMPONENT_READY message.
70
- */
71
- public static setComponentReady = (): void => {
72
- if (!Streamlit.registeredMessageListener) {
73
- // Register for message events if we haven't already
74
- window.addEventListener("message", Streamlit.onMessageEvent)
75
- Streamlit.registeredMessageListener = true
76
- }
77
-
78
- Streamlit.sendBackMsg(ComponentMessageType.COMPONENT_READY, {
79
- apiVersion: Streamlit.API_VERSION,
80
- })
81
- }
82
-
83
- /**
84
- * Report the component's height to Streamlit.
85
- * This should be called every time the component changes its DOM - that is,
86
- * when it's first loaded, and any time it updates.
87
- */
88
- public static setFrameHeight = (height?: number): void => {
89
- if (height === undefined) {
90
- // `height` is optional. If undefined, it defaults to scrollHeight,
91
- // which is the entire height of the element minus its border,
92
- // scrollbar, and margin.
93
- height = document.body.scrollHeight + 10;
94
- }
95
-
96
- if (height === Streamlit.lastFrameHeight) {
97
- // Don't bother updating if our height hasn't changed.
98
- return
99
- }
100
-
101
- Streamlit.lastFrameHeight = height
102
- Streamlit.sendBackMsg(ComponentMessageType.SET_FRAME_HEIGHT, { height })
103
- }
104
-
105
- /**
106
- * Set the component's value. This value will be returned to the Python
107
- * script, and the script will be re-run.
108
- *
109
- * For example:
110
- *
111
- * JavaScript:
112
- * Streamlit.setComponentValue("ahoy!")
113
- *
114
- * Python:
115
- * value = st.my_component(...)
116
- * st.write(value) # -> "ahoy!"
117
- *
118
- * The value must be serializable into JSON.
119
- */
120
- public static setComponentValue = (value: any): void => {
121
- Streamlit.sendBackMsg(ComponentMessageType.SET_COMPONENT_VALUE, { value })
122
- }
123
-
124
- /** Receive a ForwardMsg from the Streamlit app */
125
- private static onMessageEvent = (event: MessageEvent): void => {
126
- const type = event.data["type"]
127
- switch (type) {
128
- case Streamlit.RENDER_EVENT:
129
- Streamlit.onRenderMessage(event.data)
130
- break
131
- }
132
- }
133
-
134
- /**
135
- * Handle an untyped Streamlit render event and redispatch it as a
136
- * StreamlitRenderEvent.
137
- */
138
- private static onRenderMessage = (data: any): void => {
139
- let args = data["args"]
140
- if (args == null) {
141
- console.error(
142
- `Got null args in onRenderMessage. This should never happen`
143
- )
144
- args = {}
145
- }
146
-
147
- // Parse our dataframe arguments with arrow, and merge them into our args dict
148
- const dataframeArgs =
149
- data["dfs"] && data["dfs"].length > 0
150
- ? Streamlit.argsDataframeToObject(data["dfs"])
151
- : {}
152
-
153
- args = {
154
- ...args,
155
- ...dataframeArgs,
156
- }
157
-
158
- const disabled = Boolean(data["disabled"])
159
-
160
- // Dispatch a render event!
161
- const eventData = { disabled, args }
162
- const event = new CustomEvent<RenderData>(Streamlit.RENDER_EVENT, {
163
- detail: eventData,
164
- })
165
- Streamlit.events.dispatchEvent(event)
166
- }
167
-
168
- private static argsDataframeToObject = (
169
- argsDataframe: ArgsDataframe[]
170
- ): object => {
171
- const argsDataframeArrow = argsDataframe.map(
172
- ({ key, value }: ArgsDataframe) => [key, Streamlit.toArrowTable(value)]
173
- )
174
- return Object.fromEntries(argsDataframeArrow)
175
- }
176
-
177
- private static toArrowTable = (df: ArrowDataframeProto): ArrowTable => {
178
- const { data, index, columns } = df.data
179
- return new ArrowTable(data, index, columns)
180
- }
181
-
182
- /** Post a message to the Streamlit app. */
183
- private static sendBackMsg = (type: string, data?: any): void => {
184
- window.parent.postMessage(
185
- {
186
- isStreamlitMessage: true,
187
- type: type,
188
- ...data,
189
- },
190
- "*"
191
- )
192
- }
193
- }
194
-
195
- interface ArgsDataframe {
196
- key: string
197
- value: ArrowDataframeProto
198
- }
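
This shim is the iframe half of Streamlit's custom-component protocol: the frontend announces COMPONENT_READY, listens for RENDER events, and reports values back with SET_COMPONENT_VALUE. The Python half pairs with it roughly as follows -- a sketch using the public streamlit.components.v1 API, where the component name and dev-server URL are placeholders:

import streamlit as st
import streamlit.components.v1 as components

# Point at the dev server hosting the compiled frontend (placeholder URL).
my_component = components.declare_component(
    "my_component", url="http://localhost:3001"
)

# Keyword arguments arrive in the iframe as the RENDER event's `args`;
# the return value is whatever the frontend passed to setComponentValue.
value = my_component(greeting="ahoy!", default=None)
st.write(value)  # -> "ahoy!" once the frontend calls Streamlit.setComponentValue("ahoy!")
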
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/engine/__init__.py DELETED
@@ -1,12 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
2
-
3
- from .launch import *
4
- from .train_loop import *
5
-
6
- __all__ = [k for k in globals().keys() if not k.startswith("_")]
7
-
8
-
9
- # prefer to let hooks and defaults live in separate namespaces (therefore not in __all__)
10
- # but still make them available here
11
- from .hooks import *
12
- from .defaults import *
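
These wildcard re-exports are what make the usual `from detectron2.engine import ...` imports resolve. A hedged sketch of the conventional entry point built on them; the config path is a placeholder:

from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer, launch

def main():
    cfg = get_cfg()
    cfg.merge_from_file("configs/my_config.yaml")  # hypothetical config file
    trainer = DefaultTrainer(cfg)  # lives in .defaults, re-exported above
    trainer.resume_or_load(resume=False)
    trainer.train()

if __name__ == "__main__":
    launch(main, num_gpus_per_machine=1)  # launch comes from .launch
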
spaces/CVPR/GFPGAN-example/gfpgan/archs/__init__.py DELETED
@@ -1,10 +0,0 @@
1
- import importlib
2
- from basicsr.utils import scandir
3
- from os import path as osp
4
-
5
- # automatically scan and import arch modules for registry
6
- # scan all the files that end with '_arch.py' under the archs folder
7
- arch_folder = osp.dirname(osp.abspath(__file__))
8
- arch_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(arch_folder) if v.endswith('_arch.py')]
9
- # import all the arch modules
10
- _arch_modules = [importlib.import_module(f'gfpgan.archs.{file_name}') for file_name in arch_filenames]
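
The scan above works purely by import side effects: any `*_arch.py` module registers its classes when imported. A hedged sketch of what such a file conventionally looks like under BasicSR's registry; the class is illustrative, not a real GFPGAN architecture:

# gfpgan/archs/example_arch.py -- picked up automatically because the
# file name ends in '_arch.py'.
import torch.nn as nn
from basicsr.utils.registry import ARCH_REGISTRY

@ARCH_REGISTRY.register()
class ExampleArch(nn.Module):
    """Placeholder network; the registration decorator is the point."""

    def __init__(self, num_feat=64):
        super().__init__()
        self.body = nn.Conv2d(3, num_feat, 3, padding=1)

    def forward(self, x):
        return self.body(x)
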
spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/reverse.h DELETED
@@ -1,23 +0,0 @@
1
- /*
2
- * Copyright 2008-2013 NVIDIA Corporation
3
- *
4
- * Licensed under the Apache License, Version 2.0 (the "License");
5
- * you may not use this file except in compliance with the License.
6
- * You may obtain a copy of the License at
7
- *
8
- * http://www.apache.org/licenses/LICENSE-2.0
9
- *
10
- * Unless required by applicable law or agreed to in writing, software
11
- * distributed under the License is distributed on an "AS IS" BASIS,
12
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- * See the License for the specific language governing permissions and
14
- * limitations under the License.
15
- */
16
-
17
- #pragma once
18
-
19
- #include <thrust/detail/config.h>
20
-
21
- // this system inherits reverse
22
- #include <thrust/system/cpp/detail/reverse.h>
23
-
spaces/CVPR/WALT/mmdet/core/anchor/builder.py DELETED
@@ -1,7 +0,0 @@
1
- from mmcv.utils import Registry, build_from_cfg
2
-
3
- ANCHOR_GENERATORS = Registry('Anchor generator')
4
-
5
-
6
- def build_anchor_generator(cfg, default_args=None):
7
- return build_from_cfg(cfg, ANCHOR_GENERATORS, default_args)
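
`build_anchor_generator` resolves the `type` key of a config dict against the registry and passes the remaining keys to the constructor. A minimal usage sketch; the scales, ratios and strides are illustrative defaults, not taken from any specific model:

from mmdet.core.anchor import build_anchor_generator

anchor_generator = build_anchor_generator(
    dict(
        type='AnchorGenerator',  # registered name looked up in ANCHOR_GENERATORS
        scales=[8],
        ratios=[0.5, 1.0, 2.0],
        strides=[4, 8, 16, 32, 64],
    )
)
print(anchor_generator.num_base_anchors)  # anchors per location, per level
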
spaces/CVPR/WALT/mmdet/models/losses/pisa_loss.py DELETED
@@ -1,183 +0,0 @@
1
- import mmcv
2
- import torch
3
-
4
- from mmdet.core import bbox_overlaps
5
-
6
-
7
- @mmcv.jit(derivate=True, coderize=True)
8
- def isr_p(cls_score,
9
- bbox_pred,
10
- bbox_targets,
11
- rois,
12
- sampling_results,
13
- loss_cls,
14
- bbox_coder,
15
- k=2,
16
- bias=0,
17
- num_class=80):
18
- """Importance-based Sample Reweighting (ISR_P), positive part.
19
-
20
- Args:
21
- cls_score (Tensor): Predicted classification scores.
22
- bbox_pred (Tensor): Predicted bbox deltas.
23
- bbox_targets (tuple[Tensor]): A tuple of bbox targets, which are
24
- labels, label_weights, bbox_targets, bbox_weights, respectively.
25
- rois (Tensor): Anchors (single_stage) in shape (n, 4) or RoIs
26
- (two_stage) in shape (n, 5).
27
- sampling_results (obj): Sampling results.
28
- loss_cls (func): Classification loss func of the head.
29
- bbox_coder (obj): BBox coder of the head.
30
- k (float): Power of the non-linear mapping.
31
- bias (float): Shift of the non-linear mapping.
32
- num_class (int): Number of classes, default: 80.
33
-
34
- Return:
35
- tuple([Tensor]): labels, imp_based_label_weights, bbox_targets,
36
- bbox_target_weights
37
- """
38
-
39
- labels, label_weights, bbox_targets, bbox_weights = bbox_targets
40
- pos_label_inds = ((labels >= 0) &
41
- (labels < num_class)).nonzero().reshape(-1)
42
- pos_labels = labels[pos_label_inds]
43
-
44
- # if no positive samples, return the original targets
45
- num_pos = float(pos_label_inds.size(0))
46
- if num_pos == 0:
47
- return labels, label_weights, bbox_targets, bbox_weights
48
-
49
- # merge pos_assigned_gt_inds of per image to a single tensor
50
- gts = list()
51
- last_max_gt = 0
52
- for i in range(len(sampling_results)):
53
- gt_i = sampling_results[i].pos_assigned_gt_inds
54
- gts.append(gt_i + last_max_gt)
55
- if len(gt_i) != 0:
56
- last_max_gt = gt_i.max() + 1
57
- gts = torch.cat(gts)
58
- assert len(gts) == num_pos
59
-
60
- cls_score = cls_score.detach()
61
- bbox_pred = bbox_pred.detach()
62
-
63
- # For single stage detectors, rois here indicate anchors, in shape (N, 4)
64
- # For two stage detectors, rois are in shape (N, 5)
65
- if rois.size(-1) == 5:
66
- pos_rois = rois[pos_label_inds][:, 1:]
67
- else:
68
- pos_rois = rois[pos_label_inds]
69
-
70
- if bbox_pred.size(-1) > 4:
71
- bbox_pred = bbox_pred.view(bbox_pred.size(0), -1, 4)
72
- pos_delta_pred = bbox_pred[pos_label_inds, pos_labels].view(-1, 4)
73
- else:
74
- pos_delta_pred = bbox_pred[pos_label_inds].view(-1, 4)
75
-
76
- # compute iou of the predicted bbox and the corresponding GT
77
- pos_delta_target = bbox_targets[pos_label_inds].view(-1, 4)
78
- pos_bbox_pred = bbox_coder.decode(pos_rois, pos_delta_pred)
79
- target_bbox_pred = bbox_coder.decode(pos_rois, pos_delta_target)
80
- ious = bbox_overlaps(pos_bbox_pred, target_bbox_pred, is_aligned=True)
81
-
82
- pos_imp_weights = label_weights[pos_label_inds]
83
- # Two steps to compute IoU-HLR. Samples are first sorted by IoU locally,
84
- # then sorted again within the same-rank group
85
- max_l_num = pos_labels.bincount().max()
86
- for label in pos_labels.unique():
87
- l_inds = (pos_labels == label).nonzero().view(-1)
88
- l_gts = gts[l_inds]
89
- for t in l_gts.unique():
90
- t_inds = l_inds[l_gts == t]
91
- t_ious = ious[t_inds]
92
- _, t_iou_rank_idx = t_ious.sort(descending=True)
93
- _, t_iou_rank = t_iou_rank_idx.sort()
94
- ious[t_inds] += max_l_num - t_iou_rank.float()
95
- l_ious = ious[l_inds]
96
- _, l_iou_rank_idx = l_ious.sort(descending=True)
97
- _, l_iou_rank = l_iou_rank_idx.sort() # IoU-HLR
98
- # linearly map HLR to label weights
99
- pos_imp_weights[l_inds] *= (max_l_num - l_iou_rank.float()) / max_l_num
100
-
101
- pos_imp_weights = (bias + pos_imp_weights * (1 - bias)).pow(k)
102
-
103
- # normalize to make the new weighted loss value equal to the original loss
104
- pos_loss_cls = loss_cls(
105
- cls_score[pos_label_inds], pos_labels, reduction_override='none')
106
- if pos_loss_cls.dim() > 1:
107
- ori_pos_loss_cls = pos_loss_cls * label_weights[pos_label_inds][:,
108
- None]
109
- new_pos_loss_cls = pos_loss_cls * pos_imp_weights[:, None]
110
- else:
111
- ori_pos_loss_cls = pos_loss_cls * label_weights[pos_label_inds]
112
- new_pos_loss_cls = pos_loss_cls * pos_imp_weights
113
- pos_loss_cls_ratio = ori_pos_loss_cls.sum() / new_pos_loss_cls.sum()
114
- pos_imp_weights = pos_imp_weights * pos_loss_cls_ratio
115
- label_weights[pos_label_inds] = pos_imp_weights
116
-
117
- bbox_targets = labels, label_weights, bbox_targets, bbox_weights
118
- return bbox_targets
119
-
120
-
121
- @mmcv.jit(derivate=True, coderize=True)
122
- def carl_loss(cls_score,
123
- labels,
124
- bbox_pred,
125
- bbox_targets,
126
- loss_bbox,
127
- k=1,
128
- bias=0.2,
129
- avg_factor=None,
130
- sigmoid=False,
131
- num_class=80):
132
- """Classification-Aware Regression Loss (CARL).
133
-
134
- Args:
135
- cls_score (Tensor): Predicted classification scores.
136
- labels (Tensor): Targets of classification.
137
- bbox_pred (Tensor): Predicted bbox deltas.
138
- bbox_targets (Tensor): Target of bbox regression.
139
- loss_bbox (func): Regression loss func of the head.
141
- k (float): Power of the non-linear mapping.
142
- bias (float): Shift of the non-linear mapping.
143
- avg_factor (int): Average factor used in regression loss.
144
- sigmoid (bool): Activation of the classification score.
145
- num_class (int): Number of classes, default: 80.
146
-
147
- Return:
148
- dict: CARL loss dict.
149
- """
150
- pos_label_inds = ((labels >= 0) &
151
- (labels < num_class)).nonzero().reshape(-1)
152
- if pos_label_inds.numel() == 0:
153
- return dict(loss_carl=cls_score.sum()[None] * 0.)
154
- pos_labels = labels[pos_label_inds]
155
-
156
- # multiply pos_cls_score with the corresponding bbox weight
157
- # and remain gradient
158
- if sigmoid:
159
- pos_cls_score = cls_score.sigmoid()[pos_label_inds, pos_labels]
160
- else:
161
- pos_cls_score = cls_score.softmax(-1)[pos_label_inds, pos_labels]
162
- carl_loss_weights = (bias + (1 - bias) * pos_cls_score).pow(k)
163
-
164
- # normalize carl_loss_weight to make its sum equal to num positive
165
- num_pos = float(pos_cls_score.size(0))
166
- weight_ratio = num_pos / carl_loss_weights.sum()
167
- carl_loss_weights *= weight_ratio
168
-
169
- if avg_factor is None:
170
- avg_factor = bbox_targets.size(0)
171
- # if is class agnostic, bbox pred is in shape (N, 4)
172
- # otherwise, bbox pred is in shape (N, #classes, 4)
173
- if bbox_pred.size(-1) > 4:
174
- bbox_pred = bbox_pred.view(bbox_pred.size(0), -1, 4)
175
- pos_bbox_preds = bbox_pred[pos_label_inds, pos_labels]
176
- else:
177
- pos_bbox_preds = bbox_pred[pos_label_inds]
178
- ori_loss_reg = loss_bbox(
179
- pos_bbox_preds,
180
- bbox_targets[pos_label_inds],
181
- reduction_override='none') / avg_factor
182
- loss_carl = (ori_loss_reg * carl_loss_weights[:, None]).sum()
183
- return dict(loss_carl=loss_carl[None])
 
spaces/CVPR/regionclip-demo/detectron2/data/datasets/coco.py DELETED
@@ -1,539 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates.
2
- import contextlib
3
- import datetime
4
- import io
5
- import json
6
- import logging
7
- import numpy as np
8
- import os
9
- import shutil
10
- import pycocotools.mask as mask_util
11
- from fvcore.common.timer import Timer
12
- from iopath.common.file_io import file_lock
13
- from PIL import Image
14
-
15
- from detectron2.structures import Boxes, BoxMode, PolygonMasks, RotatedBoxes
16
- from detectron2.utils.file_io import PathManager
17
-
18
- from .. import DatasetCatalog, MetadataCatalog
19
-
20
- """
21
- This file contains functions to parse COCO-format annotations into dicts in "Detectron2 format".
22
- """
23
-
24
-
25
- logger = logging.getLogger(__name__)
26
-
27
- __all__ = ["load_coco_json", "load_sem_seg", "convert_to_coco_json", "register_coco_instances"]
28
-
29
-
30
- def load_coco_json(json_file, image_root, dataset_name=None, extra_annotation_keys=None):
31
- """
32
- Load a json file with COCO's instances annotation format.
33
- Currently supports instance detection, instance segmentation,
34
- and person keypoints annotations.
35
-
36
- Args:
37
- json_file (str): full path to the json file in COCO instances annotation format.
38
- image_root (str or path-like): the directory where the images in this json file exists.
39
- dataset_name (str or None): the name of the dataset (e.g., coco_2017_train).
40
- When provided, this function will also do the following:
41
-
42
- * Put "thing_classes" into the metadata associated with this dataset.
43
- * Map the category ids into a contiguous range (needed by standard dataset format),
44
- and add "thing_dataset_id_to_contiguous_id" to the metadata associated
45
- with this dataset.
46
-
47
- This option should usually be provided, unless users need to load
48
- the original json content and apply more processing manually.
49
- extra_annotation_keys (list[str]): list of per-annotation keys that should also be
50
- loaded into the dataset dict (besides "iscrowd", "bbox", "keypoints",
51
- "category_id", "segmentation"). The values for these keys will be returned as-is.
52
- For example, the densepose annotations are loaded in this way.
53
-
54
- Returns:
55
- list[dict]: a list of dicts in Detectron2 standard dataset dicts format (See
56
-         `Using Custom Datasets </tutorials/datasets.html>`_ ) when `dataset_name` is not None.
-         If `dataset_name` is None, the returned `category_ids` may be
-         incontiguous and may not conform to the Detectron2 standard format.
-
-     Notes:
-         1. This function does not read the image files.
-            The results do not have the "image" field.
-     """
-     from pycocotools.coco import COCO
-
-     timer = Timer()
-     json_file = PathManager.get_local_path(json_file)
-     with contextlib.redirect_stdout(io.StringIO()):
-         coco_api = COCO(json_file)
-     if timer.seconds() > 1:
-         logger.info("Loading {} takes {:.2f} seconds.".format(json_file, timer.seconds()))
-
-     id_map = None
-     if dataset_name is not None:
-         meta = MetadataCatalog.get(dataset_name)
-         cat_ids = sorted(coco_api.getCatIds())
-         cats = coco_api.loadCats(cat_ids)
-         # The categories in a custom json file may not be sorted.
-         thing_classes = [c["name"] for c in sorted(cats, key=lambda x: x["id"])]
-         meta.thing_classes = thing_classes
-
-         # In COCO, certain category ids are artificially removed,
-         # and by convention they are always ignored.
-         # We deal with COCO's id issue and translate
-         # the category ids to contiguous ids in [0, 80).
-
-         # It works by looking at the "categories" field in the json, therefore
-         # if users' own json also has incontiguous ids, we'll
-         # apply this mapping as well but print a warning.
-         if not (min(cat_ids) == 1 and max(cat_ids) == len(cat_ids)):
-             if "coco" not in dataset_name:
-                 logger.warning(
-                     """
- Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.
- """
-                 )
-         id_map = {v: i for i, v in enumerate(cat_ids)}
-         meta.thing_dataset_id_to_contiguous_id = id_map
-
-     # sort indices for reproducible results
-     img_ids = sorted(coco_api.imgs.keys())
-     # imgs is a list of dicts, each looks something like:
-     # {'license': 4,
-     #  'url': 'http://farm6.staticflickr.com/5454/9413846304_881d5e5c3b_z.jpg',
-     #  'file_name': 'COCO_val2014_000000001268.jpg',
-     #  'height': 427,
-     #  'width': 640,
-     #  'date_captured': '2013-11-17 05:57:24',
-     #  'id': 1268}
-     imgs = coco_api.loadImgs(img_ids)
-     # anns is a list[list[dict]], where each dict is an annotation
-     # record for an object. The inner list enumerates the objects in an image
-     # and the outer list enumerates over images. Example of anns[0]:
-     # [{'segmentation': [[192.81,
-     #     247.09,
-     #     ...
-     #     219.03,
-     #     249.06]],
-     #   'area': 1035.749,
-     #   'iscrowd': 0,
-     #   'image_id': 1268,
-     #   'bbox': [192.81, 224.8, 74.73, 33.43],
-     #   'category_id': 16,
-     #   'id': 42986},
-     #  ...]
-     anns = [coco_api.imgToAnns[img_id] for img_id in img_ids]
-     total_num_valid_anns = sum([len(x) for x in anns])
-     total_num_anns = len(coco_api.anns)
-     if total_num_valid_anns < total_num_anns:
-         logger.warning(
-             f"{json_file} contains {total_num_anns} annotations, but only "
-             f"{total_num_valid_anns} of them match to images in the file."
-         )
-
-     if "minival" not in json_file:
-         # The popular valminusminival & minival annotations for COCO2014 contain this bug.
-         # However the ratio of buggy annotations there is tiny and does not affect accuracy.
-         # Therefore we explicitly white-list them.
-         ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image]
-         assert len(set(ann_ids)) == len(ann_ids), "Annotation ids in '{}' are not unique!".format(
-             json_file
-         )
-
-     imgs_anns = list(zip(imgs, anns))
-     logger.info("Loaded {} images in COCO format from {}".format(len(imgs_anns), json_file))
-
-     dataset_dicts = []
-
-     ann_keys = ["iscrowd", "bbox", "keypoints", "category_id"] + (extra_annotation_keys or [])
-
-     num_instances_without_valid_segmentation = 0
-
-     for (img_dict, anno_dict_list) in imgs_anns:
-         record = {}
-         record["file_name"] = os.path.join(image_root, img_dict["file_name"])
-         record["height"] = img_dict["height"]
-         record["width"] = img_dict["width"]
-         image_id = record["image_id"] = img_dict["id"]
-
-         objs = []
-         for anno in anno_dict_list:
-             # Check that the image_id in this annotation is the same as
-             # the image_id we're looking at.
-             # This fails only when the data parsing logic or the annotation file is buggy.
-
-             # The original COCO valminusminival2014 & minival2014 annotation files
-             # actually contain bugs that, together with certain ways of using COCO API,
-             # can trigger this assertion.
-             assert anno["image_id"] == image_id
-
-             assert anno.get("ignore", 0) == 0, '"ignore" in COCO json file is not supported.'
-
-             obj = {key: anno[key] for key in ann_keys if key in anno}
-             if "bbox" in obj and len(obj["bbox"]) == 0:
-                 raise ValueError(
-                     f"One annotation of image {image_id} contains empty 'bbox' value! "
-                     "This json does not have valid COCO format."
-                 )
-
-             segm = anno.get("segmentation", None)
-             if segm:  # either list[list[float]] or dict(RLE)
-                 if isinstance(segm, dict):
-                     if isinstance(segm["counts"], list):
-                         # convert to compressed RLE
-                         segm = mask_util.frPyObjects(segm, *segm["size"])
-                 else:
-                     # filter out invalid polygons (< 3 points)
-                     segm = [poly for poly in segm if len(poly) % 2 == 0 and len(poly) >= 6]
-                     if len(segm) == 0:
-                         num_instances_without_valid_segmentation += 1
-                         continue  # ignore this instance
-                 obj["segmentation"] = segm
-
-             keypts = anno.get("keypoints", None)
-             if keypts:  # list[int]
-                 for idx, v in enumerate(keypts):
-                     if idx % 3 != 2:
-                         # COCO's segmentation coordinates are floating points in [0, H or W],
-                         # but keypoint coordinates are integers in [0, H-1 or W-1]
-                         # Therefore we assume the coordinates are "pixel indices" and
-                         # add 0.5 to convert to floating point coordinates.
-                         keypts[idx] = v + 0.5
-                 obj["keypoints"] = keypts
-
-             obj["bbox_mode"] = BoxMode.XYWH_ABS
-             if id_map:
-                 annotation_category_id = obj["category_id"]
-                 try:
-                     obj["category_id"] = id_map[annotation_category_id]
-                 except KeyError as e:
-                     raise KeyError(
-                         f"Encountered category_id={annotation_category_id} "
-                         "but this id does not exist in 'categories' of the json file."
-                     ) from e
-             objs.append(obj)
-         record["annotations"] = objs
-         dataset_dicts.append(record)
-
-     if num_instances_without_valid_segmentation > 0:
-         logger.warning(
-             "Filtered out {} instances without valid segmentation. ".format(
-                 num_instances_without_valid_segmentation
-             )
-             + "There might be issues in your dataset generation process. Please "
-             "check https://detectron2.readthedocs.io/en/latest/tutorials/datasets.html carefully"
-         )
-     return dataset_dicts
-
-
- def load_sem_seg(gt_root, image_root, gt_ext="png", image_ext="jpg"):
-     """
-     Load semantic segmentation datasets. All files under "gt_root" with "gt_ext" extension are
-     treated as ground truth annotations and all files under "image_root" with "image_ext" extension
-     as input images. Ground truth and input images are matched using file paths relative to
-     "gt_root" and "image_root" respectively, without taking file extensions into account.
-     This works for COCO as well as some other datasets.
-
-     Args:
-         gt_root (str): full path to ground truth semantic segmentation files. Semantic segmentation
-             annotations are stored as images with integer values in pixels that represent
-             corresponding semantic labels.
-         image_root (str): the directory where the input images are.
-         gt_ext (str): file extension for ground truth annotations.
-         image_ext (str): file extension for input images.
-
-     Returns:
-         list[dict]:
-             a list of dicts in detectron2 standard format without instance-level
-             annotation.
-
-     Notes:
-         1. This function does not read the image and ground truth files.
-            The results do not have the "image" and "sem_seg" fields.
-     """
-
-     # We match input images with ground truth based on their relative filepaths (without file
-     # extensions) starting from 'image_root' and 'gt_root' respectively.
-     def file2id(folder_path, file_path):
-         # extract relative path starting from `folder_path`
-         image_id = os.path.normpath(os.path.relpath(file_path, start=folder_path))
-         # remove file extension
-         image_id = os.path.splitext(image_id)[0]
-         return image_id
-
-     input_files = sorted(
-         (os.path.join(image_root, f) for f in PathManager.ls(image_root) if f.endswith(image_ext)),
-         key=lambda file_path: file2id(image_root, file_path),
-     )
-     gt_files = sorted(
-         (os.path.join(gt_root, f) for f in PathManager.ls(gt_root) if f.endswith(gt_ext)),
-         key=lambda file_path: file2id(gt_root, file_path),
-     )
-
-     assert len(gt_files) > 0, "No annotations found in {}.".format(gt_root)
-
-     # Use the intersection, so that val2017_100 annotations can run smoothly with val2017 images
-     if len(input_files) != len(gt_files):
-         logger.warning(
-             "Directory {} and {} has {} and {} files, respectively.".format(
-                 image_root, gt_root, len(input_files), len(gt_files)
-             )
-         )
-         input_basenames = [os.path.basename(f)[: -len(image_ext)] for f in input_files]
-         gt_basenames = [os.path.basename(f)[: -len(gt_ext)] for f in gt_files]
-         intersect = list(set(input_basenames) & set(gt_basenames))
-         # sort, otherwise each worker may obtain a list[dict] in different order
-         intersect = sorted(intersect)
-         logger.warning("Will use their intersection of {} files.".format(len(intersect)))
-         input_files = [os.path.join(image_root, f + image_ext) for f in intersect]
-         gt_files = [os.path.join(gt_root, f + gt_ext) for f in intersect]
-
-     logger.info(
-         "Loaded {} images with semantic segmentation from {}".format(len(input_files), image_root)
-     )
-
-     dataset_dicts = []
-     for (img_path, gt_path) in zip(input_files, gt_files):
-         record = {}
-         record["file_name"] = img_path
-         record["sem_seg_file_name"] = gt_path
-         dataset_dicts.append(record)
-
-     return dataset_dicts
-
-
- def convert_to_coco_dict(dataset_name):
-     """
-     Convert an instance detection/segmentation or keypoint detection dataset
-     in detectron2's standard format into COCO json format.
-
-     Generic dataset description can be found here:
-     https://detectron2.readthedocs.io/tutorials/datasets.html#register-a-dataset
-
-     COCO data format description can be found here:
-     http://cocodataset.org/#format-data
-
-     Args:
-         dataset_name (str):
-             name of the source dataset
-             Must be registered in DatasetCatalog and in detectron2's standard format.
-             Must have corresponding metadata "thing_classes"
-     Returns:
-         coco_dict: serializable dict in COCO json format
-     """
-
-     dataset_dicts = DatasetCatalog.get(dataset_name)
-     metadata = MetadataCatalog.get(dataset_name)
-
-     # unmap the category mapping ids for COCO
-     if hasattr(metadata, "thing_dataset_id_to_contiguous_id"):
-         reverse_id_mapping = {v: k for k, v in metadata.thing_dataset_id_to_contiguous_id.items()}
-         reverse_id_mapper = lambda contiguous_id: reverse_id_mapping[contiguous_id]  # noqa
-     else:
-         reverse_id_mapper = lambda contiguous_id: contiguous_id  # noqa
-
-     categories = [
-         {"id": reverse_id_mapper(id), "name": name}
-         for id, name in enumerate(metadata.thing_classes)
-     ]
-
-     logger.info("Converting dataset dicts into COCO format")
-     coco_images = []
-     coco_annotations = []
-
-     for image_id, image_dict in enumerate(dataset_dicts):
-         coco_image = {
-             "id": image_dict.get("image_id", image_id),
-             "width": int(image_dict["width"]),
-             "height": int(image_dict["height"]),
-             "file_name": str(image_dict["file_name"]),
-         }
-         coco_images.append(coco_image)
-
-         anns_per_image = image_dict.get("annotations", [])
-         for annotation in anns_per_image:
-             # create a new dict with only COCO fields
-             coco_annotation = {}
-
-             # COCO requirement: XYWH box format for axis-aligned and XYWHA for rotated
-             bbox = annotation["bbox"]
-             if isinstance(bbox, np.ndarray):
-                 if bbox.ndim != 1:
-                     raise ValueError(f"bbox has to be 1-dimensional. Got shape={bbox.shape}.")
-                 bbox = bbox.tolist()
-             if len(bbox) not in [4, 5]:
-                 raise ValueError(f"bbox has to have length 4 or 5. Got {bbox}.")
-             from_bbox_mode = annotation["bbox_mode"]
-             to_bbox_mode = BoxMode.XYWH_ABS if len(bbox) == 4 else BoxMode.XYWHA_ABS
-             bbox = BoxMode.convert(bbox, from_bbox_mode, to_bbox_mode)
-
-             # COCO requirement: instance area
-             if "segmentation" in annotation:
-                 # Computing areas for instances by counting the pixels
-                 segmentation = annotation["segmentation"]
-                 # TODO: check segmentation type: RLE, BinaryMask or Polygon
-                 if isinstance(segmentation, list):
-                     polygons = PolygonMasks([segmentation])
-                     area = polygons.area()[0].item()
-                 elif isinstance(segmentation, dict):  # RLE
-                     area = mask_util.area(segmentation).item()
-                 else:
-                     raise TypeError(f"Unknown segmentation type {type(segmentation)}!")
-             else:
-                 # Computing areas using bounding boxes
-                 if to_bbox_mode == BoxMode.XYWH_ABS:
-                     bbox_xy = BoxMode.convert(bbox, to_bbox_mode, BoxMode.XYXY_ABS)
-                     area = Boxes([bbox_xy]).area()[0].item()
-                 else:
-                     area = RotatedBoxes([bbox]).area()[0].item()
-
-             if "keypoints" in annotation:
-                 keypoints = annotation["keypoints"]  # list[int]
-                 for idx, v in enumerate(keypoints):
-                     if idx % 3 != 2:
-                         # COCO's segmentation coordinates are floating points in [0, H or W],
-                         # but keypoint coordinates are integers in [0, H-1 or W-1]
-                         # For COCO format consistency we subtract 0.5
-                         # https://github.com/facebookresearch/detectron2/pull/175#issuecomment-551202163
-                         keypoints[idx] = v - 0.5
-                 if "num_keypoints" in annotation:
-                     num_keypoints = annotation["num_keypoints"]
-                 else:
-                     num_keypoints = sum(kp > 0 for kp in keypoints[2::3])
-
-             # COCO requirement:
-             #   linking annotations to images
-             #   "id" field must start with 1
-             coco_annotation["id"] = len(coco_annotations) + 1
-             coco_annotation["image_id"] = coco_image["id"]
-             coco_annotation["bbox"] = [round(float(x), 3) for x in bbox]
-             coco_annotation["area"] = float(area)
-             coco_annotation["iscrowd"] = int(annotation.get("iscrowd", 0))
-             coco_annotation["category_id"] = int(reverse_id_mapper(annotation["category_id"]))
-
-             # Add optional fields
-             if "keypoints" in annotation:
-                 coco_annotation["keypoints"] = keypoints
-                 coco_annotation["num_keypoints"] = num_keypoints
-
-             if "segmentation" in annotation:
-                 seg = coco_annotation["segmentation"] = annotation["segmentation"]
-                 if isinstance(seg, dict):  # RLE
-                     counts = seg["counts"]
-                     if not isinstance(counts, str):
-                         # make it json-serializable
-                         seg["counts"] = counts.decode("ascii")
-
-             coco_annotations.append(coco_annotation)
-
-     logger.info(
-         "Conversion finished, "
-         f"#images: {len(coco_images)}, #annotations: {len(coco_annotations)}"
-     )
-
-     info = {
-         "date_created": str(datetime.datetime.now()),
-         "description": "Automatically generated COCO json file for Detectron2.",
-     }
-     coco_dict = {"info": info, "images": coco_images, "categories": categories, "licenses": None}
-     if len(coco_annotations) > 0:
-         coco_dict["annotations"] = coco_annotations
-     return coco_dict
-
-
- def convert_to_coco_json(dataset_name, output_file, allow_cached=True):
-     """
-     Converts dataset into COCO format and saves it to a json file.
-     dataset_name must be registered in DatasetCatalog and in detectron2's standard format.
-
-     Args:
-         dataset_name:
-             reference from the config file to the catalogs
-             must be registered in DatasetCatalog and in detectron2's standard format
-         output_file: path of json file that will be saved to
-         allow_cached: if json file is already present then skip conversion
-     """
-
-     # TODO: The dataset or the conversion script *may* change,
-     # a checksum would be useful for validating the cached data
-
-     PathManager.mkdirs(os.path.dirname(output_file))
-     with file_lock(output_file):
-         if PathManager.exists(output_file) and allow_cached:
-             logger.warning(
-                 f"Using previously cached COCO format annotations at '{output_file}'. "
-                 "You need to clear the cache file if your dataset has been modified."
-             )
-         else:
-             logger.info(f"Converting annotations of dataset '{dataset_name}' to COCO format ...")
-             coco_dict = convert_to_coco_dict(dataset_name)
-
-             logger.info(f"Caching COCO format annotations at '{output_file}' ...")
-             tmp_file = output_file + ".tmp"
-             with PathManager.open(tmp_file, "w") as f:
-                 json.dump(coco_dict, f)
-             shutil.move(tmp_file, output_file)
-
-
- def register_coco_instances(name, metadata, json_file, image_root):
-     """
-     Register a dataset in COCO's json annotation format for
-     instance detection, instance segmentation and keypoint detection.
-     (i.e., Type 1 and 2 in http://cocodataset.org/#format-data.
-     `instances*.json` and `person_keypoints*.json` in the dataset).
-
-     This is an example of how to register a new dataset.
-     You can do something similar to this function, to register new datasets.
-
-     Args:
-         name (str): the name that identifies a dataset, e.g. "coco_2014_train".
-         metadata (dict): extra metadata associated with this dataset. You can
-             leave it as an empty dict.
-         json_file (str): path to the json instance annotation file.
-         image_root (str or path-like): directory which contains all the images.
-     """
-     assert isinstance(name, str), name
-     assert isinstance(json_file, (str, os.PathLike)), json_file
-     assert isinstance(image_root, (str, os.PathLike)), image_root
-     # 1. register a function which returns dicts
-     DatasetCatalog.register(name, lambda: load_coco_json(json_file, image_root, name))
-
-     # 2. Optionally, add metadata about this dataset,
-     # since they might be useful in evaluation, visualization or logging
-     MetadataCatalog.get(name).set(
-         json_file=json_file, image_root=image_root, evaluator_type="coco", **metadata
-     )
-
-
- if __name__ == "__main__":
-     """
-     Test the COCO json dataset loader.
-
-     Usage:
-         python -m detectron2.data.datasets.coco \
-             path/to/json path/to/image_root dataset_name
-
-     "dataset_name" can be "coco_2014_minival_100", or other
-     pre-registered ones
-     """
-     from detectron2.utils.logger import setup_logger
-     from detectron2.utils.visualizer import Visualizer
-     import detectron2.data.datasets  # noqa # add pre-defined metadata
-     import sys
-
-     logger = setup_logger(name=__name__)
-     assert sys.argv[3] in DatasetCatalog.list()
-     meta = MetadataCatalog.get(sys.argv[3])
-
-     dicts = load_coco_json(sys.argv[1], sys.argv[2], sys.argv[3])
-     logger.info("Done loading {} samples.".format(len(dicts)))
-
-     dirname = "coco-data-vis"
-     os.makedirs(dirname, exist_ok=True)
-     for d in dicts:
-         img = np.array(Image.open(d["file_name"]))
-         visualizer = Visualizer(img, metadata=meta)
-         vis = visualizer.draw_dataset_dict(d)
-         fpath = os.path.join(dirname, os.path.basename(d["file_name"]))
-         vis.save(fpath)
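For reference, using this deleted loader takes only a few lines; the sketch below is not part of the original Space, and the dataset name and paths are hypothetical stand-ins.

from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets import register_coco_instances

# Hypothetical name and paths; substitute your own annotation file and image dir.
register_coco_instances("my_coco_train", {}, "annotations/train.json", "images/train")
dicts = DatasetCatalog.get("my_coco_train")  # invokes load_coco_json lazily
print(len(dicts), MetadataCatalog.get("my_coco_train").thing_classes[:5])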
 
spaces/CVPR/regionclip-demo/detectron2/modeling/meta_arch/retinanet.py DELETED
@@ -1,609 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates.
- import logging
- import math
- import numpy as np
- from typing import Dict, List, Tuple
- import torch
- from fvcore.nn import sigmoid_focal_loss_jit
- from torch import Tensor, nn
- from torch.nn import functional as F
-
- from detectron2.config import configurable
- from detectron2.data.detection_utils import convert_image_to_rgb
- from detectron2.layers import ShapeSpec, batched_nms, cat, get_norm, nonzero_tuple
- from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou
- from detectron2.utils.events import get_event_storage
-
- from ..anchor_generator import build_anchor_generator
- from ..backbone import Backbone, build_backbone
- from ..box_regression import Box2BoxTransform, _dense_box_regression_loss
- from ..matcher import Matcher
- from ..postprocessing import detector_postprocess
- from .build import META_ARCH_REGISTRY
-
- __all__ = ["RetinaNet"]
-
-
- logger = logging.getLogger(__name__)
-
-
- def permute_to_N_HWA_K(tensor, K: int):
-     """
-     Transpose/reshape a tensor from (N, (Ai x K), H, W) to (N, (HxWxAi), K)
-     """
-     assert tensor.dim() == 4, tensor.shape
-     N, _, H, W = tensor.shape
-     tensor = tensor.view(N, -1, K, H, W)
-     tensor = tensor.permute(0, 3, 4, 1, 2)
-     tensor = tensor.reshape(N, -1, K)  # Size=(N,HWA,K)
-     return tensor
-
-
- @META_ARCH_REGISTRY.register()
- class RetinaNet(nn.Module):
-     """
-     Implement RetinaNet in :paper:`RetinaNet`.
-     """
-
-     @configurable
-     def __init__(
-         self,
-         *,
-         backbone: Backbone,
-         head: nn.Module,
-         head_in_features,
-         anchor_generator,
-         box2box_transform,
-         anchor_matcher,
-         num_classes,
-         focal_loss_alpha=0.25,
-         focal_loss_gamma=2.0,
-         smooth_l1_beta=0.0,
-         box_reg_loss_type="smooth_l1",
-         test_score_thresh=0.05,
-         test_topk_candidates=1000,
-         test_nms_thresh=0.5,
-         max_detections_per_image=100,
-         pixel_mean,
-         pixel_std,
-         vis_period=0,
-         input_format="BGR",
-     ):
-         """
-         NOTE: this interface is experimental.
-
-         Args:
-             backbone: a backbone module, must follow detectron2's backbone interface
-             head (nn.Module): a module that predicts logits and regression deltas
-                 for each level from a list of per-level features
-             head_in_features (Tuple[str]): Names of the input feature maps to be used in head
-             anchor_generator (nn.Module): a module that creates anchors from a
-                 list of features. Usually an instance of :class:`AnchorGenerator`
-             box2box_transform (Box2BoxTransform): defines the transform from anchor boxes to
-                 instance boxes
-             anchor_matcher (Matcher): label the anchors by matching them with ground truth.
-             num_classes (int): number of classes. Used to label background proposals.
-
-             # Loss parameters:
-             focal_loss_alpha (float): focal_loss_alpha
-             focal_loss_gamma (float): focal_loss_gamma
-             smooth_l1_beta (float): smooth_l1_beta
-             box_reg_loss_type (str): Options are "smooth_l1", "giou"
-
-             # Inference parameters:
-             test_score_thresh (float): Inference cls score threshold, only anchors with
-                 score > INFERENCE_TH are considered for inference (to improve speed)
-             test_topk_candidates (int): Select topk candidates before NMS
-             test_nms_thresh (float): Overlap threshold used for non-maximum suppression
-                 (suppress boxes with IoU >= this threshold)
-             max_detections_per_image (int):
-                 Maximum number of detections to return per image during inference
-                 (100 is based on the limit established for the COCO dataset).
-
-             # Input parameters
-             pixel_mean (Tuple[float]):
-                 Values to be used for image normalization (BGR order).
-                 To train on images of different number of channels, set different mean & std.
-                 Default values are the mean pixel value from ImageNet: [103.53, 116.28, 123.675]
-             pixel_std (Tuple[float]):
-                 When using pre-trained models in Detectron1 or any MSRA models,
-                 std has been absorbed into its conv1 weights, so the std needs to be set to 1.
-                 Otherwise, you can use [57.375, 57.120, 58.395] (ImageNet std)
-             vis_period (int):
-                 The period (in terms of steps) for minibatch visualization at train time.
-                 Set to 0 to disable.
-             input_format (str): Whether the model needs RGB, YUV, HSV etc.
-         """
-         super().__init__()
-
-         self.backbone = backbone
-         self.head = head
-         self.head_in_features = head_in_features
-         if len(self.backbone.output_shape()) != len(self.head_in_features):
-             logger.warning("[RetinaNet] Backbone produces unused features.")
-
-         # Anchors
-         self.anchor_generator = anchor_generator
-         self.box2box_transform = box2box_transform
-         self.anchor_matcher = anchor_matcher
-
-         self.num_classes = num_classes
-         # Loss parameters:
-         self.focal_loss_alpha = focal_loss_alpha
-         self.focal_loss_gamma = focal_loss_gamma
-         self.smooth_l1_beta = smooth_l1_beta
-         self.box_reg_loss_type = box_reg_loss_type
-         # Inference parameters:
-         self.test_score_thresh = test_score_thresh
-         self.test_topk_candidates = test_topk_candidates
-         self.test_nms_thresh = test_nms_thresh
-         self.max_detections_per_image = max_detections_per_image
-         # Vis parameters
-         self.vis_period = vis_period
-         self.input_format = input_format
-
-         self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False)
-         self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False)
-
-         """
-         In Detectron1, loss is normalized by number of foreground samples in the batch.
-         When batch size is 1 per GPU, #foreground has a large variance and
-         using it leads to lower performance. Here we maintain an EMA of #foreground to
-         stabilize the normalizer.
-         """
-         self.loss_normalizer = 100  # initialize with any reasonable #fg that's not too small
-         self.loss_normalizer_momentum = 0.9
-
-     @classmethod
-     def from_config(cls, cfg):
-         backbone = build_backbone(cfg)
-         backbone_shape = backbone.output_shape()
-         feature_shapes = [backbone_shape[f] for f in cfg.MODEL.RETINANET.IN_FEATURES]
-         head = RetinaNetHead(cfg, feature_shapes)
-         anchor_generator = build_anchor_generator(cfg, feature_shapes)
-         return {
-             "backbone": backbone,
-             "head": head,
-             "anchor_generator": anchor_generator,
-             "box2box_transform": Box2BoxTransform(weights=cfg.MODEL.RETINANET.BBOX_REG_WEIGHTS),
-             "anchor_matcher": Matcher(
-                 cfg.MODEL.RETINANET.IOU_THRESHOLDS,
-                 cfg.MODEL.RETINANET.IOU_LABELS,
-                 allow_low_quality_matches=True,
-             ),
-             "pixel_mean": cfg.MODEL.PIXEL_MEAN,
-             "pixel_std": cfg.MODEL.PIXEL_STD,
-             "num_classes": cfg.MODEL.RETINANET.NUM_CLASSES,
-             "head_in_features": cfg.MODEL.RETINANET.IN_FEATURES,
-             # Loss parameters:
-             "focal_loss_alpha": cfg.MODEL.RETINANET.FOCAL_LOSS_ALPHA,
-             "focal_loss_gamma": cfg.MODEL.RETINANET.FOCAL_LOSS_GAMMA,
-             "smooth_l1_beta": cfg.MODEL.RETINANET.SMOOTH_L1_LOSS_BETA,
-             "box_reg_loss_type": cfg.MODEL.RETINANET.BBOX_REG_LOSS_TYPE,
-             # Inference parameters:
-             "test_score_thresh": cfg.MODEL.RETINANET.SCORE_THRESH_TEST,
-             "test_topk_candidates": cfg.MODEL.RETINANET.TOPK_CANDIDATES_TEST,
-             "test_nms_thresh": cfg.MODEL.RETINANET.NMS_THRESH_TEST,
-             "max_detections_per_image": cfg.TEST.DETECTIONS_PER_IMAGE,
-             # Vis parameters
-             "vis_period": cfg.VIS_PERIOD,
-             "input_format": cfg.INPUT.FORMAT,
-         }
-
-     @property
-     def device(self):
-         return self.pixel_mean.device
-
-     def visualize_training(self, batched_inputs, results):
-         """
-         A function used to visualize ground truth images and final network predictions.
-         It shows ground truth bounding boxes on the original image and up to 20
-         predicted object bounding boxes on the original image.
-
-         Args:
-             batched_inputs (list): a list that contains input to the model.
-             results (List[Instances]): a list of #images elements.
-         """
-         from detectron2.utils.visualizer import Visualizer
-
-         assert len(batched_inputs) == len(
-             results
-         ), "Cannot visualize inputs and results of different sizes"
-         storage = get_event_storage()
-         max_boxes = 20
-
-         image_index = 0  # only visualize a single image
-         img = batched_inputs[image_index]["image"]
-         img = convert_image_to_rgb(img.permute(1, 2, 0), self.input_format)
-         v_gt = Visualizer(img, None)
-         v_gt = v_gt.overlay_instances(boxes=batched_inputs[image_index]["instances"].gt_boxes)
-         anno_img = v_gt.get_image()
-         processed_results = detector_postprocess(results[image_index], img.shape[0], img.shape[1])
-         predicted_boxes = processed_results.pred_boxes.tensor.detach().cpu().numpy()
-
-         v_pred = Visualizer(img, None)
-         v_pred = v_pred.overlay_instances(boxes=predicted_boxes[0:max_boxes])
-         prop_img = v_pred.get_image()
-         vis_img = np.vstack((anno_img, prop_img))
-         vis_img = vis_img.transpose(2, 0, 1)
-         vis_name = f"Top: GT bounding boxes; Bottom: {max_boxes} Highest Scoring Results"
-         storage.put_image(vis_name, vis_img)
-
-     def forward(self, batched_inputs: List[Dict[str, Tensor]]):
-         """
-         Args:
-             batched_inputs: a list, batched outputs of :class:`DatasetMapper` .
-                 Each item in the list contains the inputs for one image.
-                 For now, each item in the list is a dict that contains:
-
-                 * image: Tensor, image in (C, H, W) format.
-                 * instances: Instances
-
-                 Other information that's included in the original dicts, such as:
-
-                 * "height", "width" (int): the output resolution of the model, used in inference.
-                   See :meth:`postprocess` for details.
-         Returns:
-             In training, dict[str, Tensor]: mapping from a named loss to a tensor storing the
-             loss. Used during training only. In inference, the standard output format, described
-             in :doc:`/tutorials/models`.
-         """
-         images = self.preprocess_image(batched_inputs)
-         features = self.backbone(images.tensor)
-         features = [features[f] for f in self.head_in_features]
-
-         anchors = self.anchor_generator(features)
-         pred_logits, pred_anchor_deltas = self.head(features)
-         # Transpose the Hi*Wi*A dimension to the middle:
-         pred_logits = [permute_to_N_HWA_K(x, self.num_classes) for x in pred_logits]
-         pred_anchor_deltas = [permute_to_N_HWA_K(x, 4) for x in pred_anchor_deltas]
-
-         if self.training:
-             assert not torch.jit.is_scripting(), "Not supported"
-             assert "instances" in batched_inputs[0], "Instance annotations are missing in training!"
-             gt_instances = [x["instances"].to(self.device) for x in batched_inputs]
-
-             gt_labels, gt_boxes = self.label_anchors(anchors, gt_instances)
-             losses = self.losses(anchors, pred_logits, gt_labels, pred_anchor_deltas, gt_boxes)
-
-             if self.vis_period > 0:
-                 storage = get_event_storage()
-                 if storage.iter % self.vis_period == 0:
-                     results = self.inference(
-                         anchors, pred_logits, pred_anchor_deltas, images.image_sizes
-                     )
-                     self.visualize_training(batched_inputs, results)
-
-             return losses
-         else:
-             results = self.inference(anchors, pred_logits, pred_anchor_deltas, images.image_sizes)
-             if torch.jit.is_scripting():
-                 return results
-             processed_results = []
-             for results_per_image, input_per_image, image_size in zip(
-                 results, batched_inputs, images.image_sizes
-             ):
-                 height = input_per_image.get("height", image_size[0])
-                 width = input_per_image.get("width", image_size[1])
-                 r = detector_postprocess(results_per_image, height, width)
-                 processed_results.append({"instances": r})
-             return processed_results
-
-     def losses(self, anchors, pred_logits, gt_labels, pred_anchor_deltas, gt_boxes):
-         """
-         Args:
-             anchors (list[Boxes]): a list of #feature level Boxes
-             gt_labels, gt_boxes: see output of :meth:`RetinaNet.label_anchors`.
-                 Their shapes are (N, R) and (N, R, 4), respectively, where R is
-                 the total number of anchors across levels, i.e. sum(Hi x Wi x Ai)
-             pred_logits, pred_anchor_deltas: both are list[Tensor]. Each element in the
-                 list corresponds to one level and has shape (N, Hi * Wi * Ai, K or 4).
-                 Where K is the number of classes used in `pred_logits`.
-
-         Returns:
-             dict[str, Tensor]:
-                 mapping from a named loss to a scalar tensor
-                 storing the loss. Used during training only. The dict keys are:
-                 "loss_cls" and "loss_box_reg"
-         """
-         num_images = len(gt_labels)
-         gt_labels = torch.stack(gt_labels)  # (N, R)
-
-         valid_mask = gt_labels >= 0
-         pos_mask = (gt_labels >= 0) & (gt_labels != self.num_classes)
-         num_pos_anchors = pos_mask.sum().item()
-         get_event_storage().put_scalar("num_pos_anchors", num_pos_anchors / num_images)
-         self.loss_normalizer = self.loss_normalizer_momentum * self.loss_normalizer + (
-             1 - self.loss_normalizer_momentum
-         ) * max(num_pos_anchors, 1)
-
-         # classification and regression loss
-         gt_labels_target = F.one_hot(gt_labels[valid_mask], num_classes=self.num_classes + 1)[
-             :, :-1
-         ]  # no loss for the last (background) class
-         loss_cls = sigmoid_focal_loss_jit(
-             cat(pred_logits, dim=1)[valid_mask],
-             gt_labels_target.to(pred_logits[0].dtype),
-             alpha=self.focal_loss_alpha,
-             gamma=self.focal_loss_gamma,
-             reduction="sum",
-         )
-
-         loss_box_reg = _dense_box_regression_loss(
-             anchors,
-             self.box2box_transform,
-             pred_anchor_deltas,
-             gt_boxes,
-             pos_mask,
-             box_reg_loss_type=self.box_reg_loss_type,
-             smooth_l1_beta=self.smooth_l1_beta,
-         )
-
-         return {
-             "loss_cls": loss_cls / self.loss_normalizer,
-             "loss_box_reg": loss_box_reg / self.loss_normalizer,
-         }
-
-     @torch.no_grad()
-     def label_anchors(self, anchors, gt_instances):
-         """
-         Args:
-             anchors (list[Boxes]): A list of #feature level Boxes.
-                 The Boxes contains the anchors of this image on the specific feature level.
-             gt_instances (list[Instances]): a list of N `Instances`s. The i-th
-                 `Instances` contains the ground-truth per-instance annotations
-                 for the i-th input image.
-
-         Returns:
-             list[Tensor]: List of #img tensors. i-th element is a vector of labels whose length is
-             the total number of anchors across all feature maps (sum(Hi * Wi * A)).
-             Label values are in {-1, 0, ..., K}, where -1 means ignore and K means background.
-
-             list[Tensor]: i-th element is a Rx4 tensor, where R is the total number of anchors
-             across feature maps. The values are the matched gt boxes for each anchor.
-             Values are undefined for those anchors not labeled as foreground.
-         """
-         anchors = Boxes.cat(anchors)  # Rx4
-
-         gt_labels = []
-         matched_gt_boxes = []
-         for gt_per_image in gt_instances:
-             match_quality_matrix = pairwise_iou(gt_per_image.gt_boxes, anchors)
-             matched_idxs, anchor_labels = self.anchor_matcher(match_quality_matrix)
-             del match_quality_matrix
-
-             if len(gt_per_image) > 0:
-                 matched_gt_boxes_i = gt_per_image.gt_boxes.tensor[matched_idxs]
-
-                 gt_labels_i = gt_per_image.gt_classes[matched_idxs]
-                 # Anchors with label 0 are treated as background.
-                 gt_labels_i[anchor_labels == 0] = self.num_classes
-                 # Anchors with label -1 are ignored.
-                 gt_labels_i[anchor_labels == -1] = -1
-             else:
-                 matched_gt_boxes_i = torch.zeros_like(anchors.tensor)
-                 gt_labels_i = torch.zeros_like(matched_idxs) + self.num_classes
-
-             gt_labels.append(gt_labels_i)
-             matched_gt_boxes.append(matched_gt_boxes_i)
-
-         return gt_labels, matched_gt_boxes
-
-     def inference(
-         self,
-         anchors: List[Boxes],
-         pred_logits: List[Tensor],
-         pred_anchor_deltas: List[Tensor],
-         image_sizes: List[Tuple[int, int]],
-     ):
-         """
-         Arguments:
-             anchors (list[Boxes]): A list of #feature level Boxes.
-                 The Boxes contain anchors of this image on the specific feature level.
-             pred_logits, pred_anchor_deltas: list[Tensor], one per level. Each
-                 has shape (N, Hi * Wi * Ai, K or 4)
-             image_sizes (List[(h, w)]): the input image sizes
-
-         Returns:
-             results (List[Instances]): a list of #images elements.
-         """
-         results: List[Instances] = []
-         for img_idx, image_size in enumerate(image_sizes):
-             pred_logits_per_image = [x[img_idx] for x in pred_logits]
-             deltas_per_image = [x[img_idx] for x in pred_anchor_deltas]
-             results_per_image = self.inference_single_image(
-                 anchors, pred_logits_per_image, deltas_per_image, image_size
-             )
-             results.append(results_per_image)
-         return results
-
-     def inference_single_image(
-         self,
-         anchors: List[Boxes],
-         box_cls: List[Tensor],
-         box_delta: List[Tensor],
-         image_size: Tuple[int, int],
-     ):
-         """
-         Single-image inference. Return bounding-box detection results by thresholding
-         on scores and applying non-maximum suppression (NMS).
-
-         Arguments:
-             anchors (list[Boxes]): list of #feature levels. Each entry contains
-                 a Boxes object, which contains all the anchors in that feature level.
-             box_cls (list[Tensor]): list of #feature levels. Each entry contains
-                 tensor of size (H x W x A, K)
-             box_delta (list[Tensor]): Same shape as 'box_cls' except that K becomes 4.
-             image_size (tuple(H, W)): a tuple of the image height and width.
-
-         Returns:
-             Same as `inference`, but for only one image.
-         """
-         boxes_all = []
-         scores_all = []
-         class_idxs_all = []
-
-         # Iterate over every feature level
-         for box_cls_i, box_reg_i, anchors_i in zip(box_cls, box_delta, anchors):
-             # (HxWxAxK,)
-             predicted_prob = box_cls_i.flatten().sigmoid_()
-
-             # Apply the two filters below to make NMS faster.
-             # 1. Keep boxes with confidence score higher than threshold
-             keep_idxs = predicted_prob > self.test_score_thresh
-             predicted_prob = predicted_prob[keep_idxs]
-             topk_idxs = nonzero_tuple(keep_idxs)[0]
-
-             # 2. Keep top k top scoring boxes only
-             num_topk = min(self.test_topk_candidates, topk_idxs.size(0))
-             # torch.sort is actually faster than .topk (at least on GPUs)
-             predicted_prob, idxs = predicted_prob.sort(descending=True)
-             predicted_prob = predicted_prob[:num_topk]
-             topk_idxs = topk_idxs[idxs[:num_topk]]
-
-             anchor_idxs = topk_idxs // self.num_classes
-             classes_idxs = topk_idxs % self.num_classes
-
-             box_reg_i = box_reg_i[anchor_idxs]
-             anchors_i = anchors_i[anchor_idxs]
-             # predict boxes
-             predicted_boxes = self.box2box_transform.apply_deltas(box_reg_i, anchors_i.tensor)
-
-             boxes_all.append(predicted_boxes)
-             scores_all.append(predicted_prob)
-             class_idxs_all.append(classes_idxs)
-
-         boxes_all, scores_all, class_idxs_all = [
-             cat(x) for x in [boxes_all, scores_all, class_idxs_all]
-         ]
-         keep = batched_nms(boxes_all, scores_all, class_idxs_all, self.test_nms_thresh)
-         keep = keep[: self.max_detections_per_image]
-
-         result = Instances(image_size)
-         result.pred_boxes = Boxes(boxes_all[keep])
-         result.scores = scores_all[keep]
-         result.pred_classes = class_idxs_all[keep]
-         return result
-
-     def preprocess_image(self, batched_inputs: List[Dict[str, Tensor]]):
-         """
-         Normalize, pad and batch the input images.
-         """
-         images = [x["image"].to(self.device) for x in batched_inputs]
-         images = [(x - self.pixel_mean) / self.pixel_std for x in images]
-         images = ImageList.from_tensors(images, self.backbone.size_divisibility)
-         return images
-
-
- class RetinaNetHead(nn.Module):
-     """
-     The head used in RetinaNet for object classification and box regression.
-     It has two subnets for the two tasks, with a common structure but separate parameters.
-     """
-
-     @configurable
-     def __init__(
-         self,
-         *,
-         input_shape: List[ShapeSpec],
-         num_classes,
-         num_anchors,
-         conv_dims: List[int],
-         norm="",
-         prior_prob=0.01,
-     ):
-         """
-         NOTE: this interface is experimental.
-
-         Args:
-             input_shape (List[ShapeSpec]): input shape
-             num_classes (int): number of classes. Used to label background proposals.
-             num_anchors (int): number of generated anchors
-             conv_dims (List[int]): dimensions for each convolution layer
-             norm (str or callable):
-                 Normalization for conv layers except for the two output layers.
-                 See :func:`detectron2.layers.get_norm` for supported types.
-             prior_prob (float): Prior weight for computing bias
-         """
-         super().__init__()
-
-         if norm == "BN" or norm == "SyncBN":
-             logger.warning("Shared norm does not work well for BN, SyncBN, expect poor results")
-
-         cls_subnet = []
-         bbox_subnet = []
-         for in_channels, out_channels in zip(
-             [input_shape[0].channels] + list(conv_dims), conv_dims
-         ):
-             cls_subnet.append(
-                 nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1)
-             )
-             if norm:
-                 cls_subnet.append(get_norm(norm, out_channels))
-             cls_subnet.append(nn.ReLU())
-             bbox_subnet.append(
-                 nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1)
-             )
-             if norm:
-                 bbox_subnet.append(get_norm(norm, out_channels))
-             bbox_subnet.append(nn.ReLU())
-
-         self.cls_subnet = nn.Sequential(*cls_subnet)
-         self.bbox_subnet = nn.Sequential(*bbox_subnet)
-         self.cls_score = nn.Conv2d(
-             conv_dims[-1], num_anchors * num_classes, kernel_size=3, stride=1, padding=1
-         )
-         self.bbox_pred = nn.Conv2d(
-             conv_dims[-1], num_anchors * 4, kernel_size=3, stride=1, padding=1
-         )
-
-         # Initialization
-         for modules in [self.cls_subnet, self.bbox_subnet, self.cls_score, self.bbox_pred]:
-             for layer in modules.modules():
-                 if isinstance(layer, nn.Conv2d):
-                     torch.nn.init.normal_(layer.weight, mean=0, std=0.01)
-                     torch.nn.init.constant_(layer.bias, 0)
-
-         # Use prior in model initialization to improve stability
-         bias_value = -(math.log((1 - prior_prob) / prior_prob))
-         torch.nn.init.constant_(self.cls_score.bias, bias_value)
-
-     @classmethod
-     def from_config(cls, cfg, input_shape: List[ShapeSpec]):
-         num_anchors = build_anchor_generator(cfg, input_shape).num_cell_anchors
-         assert (
-             len(set(num_anchors)) == 1
-         ), "Using different number of anchors between levels is not currently supported!"
-         num_anchors = num_anchors[0]
-
-         return {
-             "input_shape": input_shape,
-             "num_classes": cfg.MODEL.RETINANET.NUM_CLASSES,
-             "conv_dims": [input_shape[0].channels] * cfg.MODEL.RETINANET.NUM_CONVS,
-             "prior_prob": cfg.MODEL.RETINANET.PRIOR_PROB,
-             "norm": cfg.MODEL.RETINANET.NORM,
-             "num_anchors": num_anchors,
-         }
-
-     def forward(self, features: List[Tensor]):
-         """
-         Arguments:
-             features (list[Tensor]): FPN feature map tensors in high to low resolution.
-                 Each tensor in the list corresponds to a different feature level.
-
-         Returns:
-             logits (list[Tensor]): #lvl tensors, each has shape (N, AxK, Hi, Wi).
-                 The tensor predicts the classification probability
-                 at each spatial position for each of the A anchors and K object
-                 classes.
-             bbox_reg (list[Tensor]): #lvl tensors, each has shape (N, Ax4, Hi, Wi).
-                 The tensor predicts 4-vector (dx,dy,dw,dh) box
-                 regression values for every anchor. These values are the
-                 relative offset between the anchor and the ground truth box.
-         """
-         logits = []
-         bbox_reg = []
-         for feature in features:
-             logits.append(self.cls_score(self.cls_subnet(feature)))
-             bbox_reg.append(self.bbox_pred(self.bbox_subnet(feature)))
-         return logits, bbox_reg
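A quick, self-contained shape check for the permute_to_N_HWA_K reshaping used throughout this file; it is written independently of the deleted module, with made-up sizes.

import torch

# (N, A*K, H, W) -> (N, H*W*A, K), matching permute_to_N_HWA_K above.
N, A, K, H, W = 2, 9, 80, 16, 20
x = torch.randn(N, A * K, H, W)
y = x.view(N, A, K, H, W).permute(0, 3, 4, 1, 2).reshape(N, -1, K)
assert y.shape == (N, H * W * A, K)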
 
spaces/Chintan-Donda/KKMS-KSSW-HF/src/ner_detection.py DELETED
@@ -1,58 +0,0 @@
- import gradio as gr
- import openai
- import os
-
- # Read the API key from the environment; never hardcode secrets in source.
- openai.api_key = os.getenv("OPENAI_API_KEY")
- SYSTEM_PROMPT = "You are a smart and intelligent Named Entity Recognition (NER) system. I will provide you the definition of the entities you need to extract, the sentence from which you extract the entities and the output format with examples."
- USER_PROMPT_1 = "Are you clear about your role?"
- ASSISTANT_PROMPT_1 = "Sure, I'm ready to help you with your NER task. Please provide me with the necessary information to get started."
- GUIDELINES_PROMPT = (
-     """Entity Definition:\n"
-     "1. PEST NAME: Name of the pest which has attacked a particular crop, which may lead to crop damage.\n"
-     "2. CROP DISEASE: Any kind of crop disease which occurs in agricultural land in India and nearby regions.\n"
-     "3. WEATHER CONDITION: Severe climate conditions like heavy rainfall or hailstorm which has destroyed crops.\n"
-     "\n"
-     "Output Format:\n"
-     "{{'PEST NAME': [list of entities present], 'CROP DISEASE': [list of entities present], 'WEATHER CONDITION': [list of entities present]}}\n"
-     "If no entities are present in a category, keep it None\n"
-     "\n"
-     "Examples:\n"
-     "\n"
-     "1. Sentence: Pest attack on maize crop in lower Kangra : The Tribune India. Farmers in lower Kangra are a harried lot as the fall armyworm pest has attacked their maize crop. 'Kolshi' continues to affect Vidarbha's Orange crop cultivation (Citrus Black Fly) | Krishak Jagat. A total of 1,50,000 hectares of land in the Vidarbha region is planted with oranges, and of them, 25% are seriously damaged by Kolshi, a citrus black fly disease. India's June tea output drops 17% as floods hit plucking | Mint. India's June tea production fell 17.4% from a year earlier to 141.31 million kilograms, the state-run Tea Board said, as floods and pest attack dented output in the main producing region\n"
-     "Output: {{'PEST NAME': ['fall armyworm'], 'CROP DISEASE': ['citrus black fly disease'], 'WEATHER CONDITION': ['floods']}}\n"
-     "\n"
-     "2. Sentence: ICAR issues pest alert in Leparada, W/Siang | The Arunachal Times. 70 percent prevalence of fall army worm in maize fields in Pagi, Gori and Bam villages in Leparada district and Darka, Kombo and Jirdin villages in West Siang district was observed. After maize, Kangra vegetable crops under white fly attack : The Tribune India. Vegetable crops are under attack by white fly in the lower hills of Kangra district. The pest attack comes after the recent damage caused by fall armyworm to the maize crop in the area. Pest attacks on paddy crop worry farmers in the integrated Karimnagar district | Hindudayashankar. Crops withering due to stem borer, leaf folder and rice blast; farmers have to incur huge expenditures to control menace. Cyclone Amphan damages crop, vegetable prices shoot up | Cities News,The Indian Express. Cyclone Amphan has damaged vegetables across South Bengal. Farmers lost 80 to 90 per cent of crop as fields were flooded.\n"
-     "Output: {{'PEST NAME': ['fall army worm', 'white fly attack', 'stem borer', 'leaf folder'], 'CROP DISEASE': ['rice blast'], 'WEATHER CONDITION': ['Cyclone Amphan']}}\n"
-     "\n"
-     "3. Sentence: {}\n"
-     "Output: """
- )
-
- def openai_chat_completion_response(news_article_text):
-     final_prompt = GUIDELINES_PROMPT.format(news_article_text)
-     response = openai.ChatCompletion.create(
-         model="gpt-3.5-turbo",
-         messages=[
-             {"role": "system", "content": SYSTEM_PROMPT},
-             {"role": "user", "content": USER_PROMPT_1},
-             {"role": "assistant", "content": ASSISTANT_PROMPT_1},
-             {"role": "user", "content": final_prompt}
-         ]
-     )
-     return response['choices'][0]['message']['content'].strip(" \n")
-
- # def preprocess(prompt):
- #     return GUIDELINES_PROMPT.format(prompt)
- # def main():
- #     my_sentence = "Hundreds of hectares of land under the cotton crop, once referred to as white gold, has come under attack of a wide range of insects like whitefly, pink bollworm and mealybug. This is likely to hit the cotton production this year."
- #     GUIDELINES_PROMPT = GUIDELINES_PROMPT.format(my_sentence)
- #     # print(GUIDELINES_PROMPT)
- #     ners = openai_chat_completion_response(GUIDELINES_PROMPT)
- #     print(ners)
-
- # define the gradio interface and other parameters
- app = gr.Interface(fn=openai_chat_completion_response, inputs="text", outputs="text")
- app.launch(share=True)
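The prompt above asks the model to reply with a Python-dict-like string; one safe way to turn that reply into a dict is ast.literal_eval, which parses literals only and never executes code. The raw_output value below is a made-up example, not actual model output.

import ast

raw_output = "{'PEST NAME': ['fall armyworm'], 'CROP DISEASE': None, 'WEATHER CONDITION': ['floods']}"
entities = ast.literal_eval(raw_output)  # safer than eval for untrusted strings
print(entities["PEST NAME"])  # ['fall armyworm']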
 
spaces/Chukwuka/Dog_Breed_ImageWoof/app.py DELETED
@@ -1,98 +0,0 @@
-
- ### 1. Imports and class names setup ###
- import gradio as gr
- import os
- import numpy as np
- import torch
- import torchvision.transforms as T
-
- from model import Efficient_b2_model
- from timeit import default_timer as timer
- from typing import Tuple, Dict
- from data_setup import classes, model_tsfm
-
- # Setup class names
- #class_names = ['pizza', 'steak', 'sushi']
-
- ### 2. Model and transforms preparation ###
- #test_tsfm = T.Compose([T.Resize((224,224)),
- #                       T.ToTensor(),
- #                       T.Normalize(mean=[0.485, 0.456, 0.406],  # 3. A mean of [0.485, 0.456, 0.406] (across each colour channel)
- #                                   std=[0.229, 0.224, 0.225])   # 4. A standard deviation of [0.229, 0.224, 0.225] (across each colour channel)
- #                       ])
-
- # Create EffNetB2 Model
- effnet_b2 = Efficient_b2_model(num_classes=len(classes), pretrained=True)
- #effnet_b2
- #effnetb2, test_transform = create_effnet_b2(num_of_class=len(class_names),
- #                                            transform=test_tsfm,
- #                                            seed=42)
-
- # saved_path = 'demos/foodvision_mini/09_pretrained_effnetb2_feature_extractor_pizza_steak_sushi_20_percent.pth'
- saved_path = 'efficient_b2_checkpoint_model_2023_02_04.pth'
-
- print('Loading Model State Dictionary')
- # Load saved weights
- effnet_b2.load_state_dict(
-     torch.load(f=saved_path,
-                map_location=torch.device('cpu'),  # load to CPU
-                )
- )
-
- print('Model Loaded ...')
- ### 3. Predict function ###
-
- # Create predict function
- def predict(img) -> Tuple[Dict, float]:
-     """Transforms and performs a prediction on img and returns the prediction and time taken.
-     """
-     # Start the timer
-     start_time = timer()
-
-     # Transform the target image and add a batch dimension
-     #img = get_image(img_path, model_tsfm).unsqueeze(0)
-     img = model_tsfm(image=np.array(img))["image"]
-     img = img.unsqueeze(0)
-
-     # Put model into evaluation mode and turn on inference mode
-     effnet_b2.eval()
-     with torch.inference_mode():
-         # Pass the transformed image through the model and turn the prediction logits into prediction probabilities
-         pred_probs = torch.softmax(effnet_b2(img), dim=1)
-
-     # Create a prediction label and prediction probability dictionary for each prediction class (this is the required format for Gradio's output parameter)
-     pred_labels_and_probs = {classes[i]: float(pred_probs[0][i]) for i in range(len(classes))}
-
-     # Calculate the prediction time
-     pred_time = round(timer() - start_time, 5)
-
-     # Return the prediction dictionary and prediction time
-     return pred_labels_and_probs, pred_time
-
- ### 4. Gradio App ###
-
- # Create title, description and article strings
- title = 'DogBreed Mini 🐩🐶🦮🐕‍🦺'
- description = "An EfficientNetB2 feature extractor computer vision model to classify images of Dog breeds."
- article = "<p>ImageWoof Created by Chukwuka </p><p style='text-align: center'><a href='https://github.com/Sylvesterchuks/dogbreed_app'>Github Repo</a></p>"
-
-
- # Create examples list from "examples/" directory
- example_list = [["examples/" + example] for example in os.listdir("examples")]
-
- # Create the Gradio demo
- demo = gr.Interface(fn=predict,  # mapping function from input to output
-                     inputs=gr.Image(type='pil'),  # What are the inputs?
-                     outputs=[gr.Label(num_top_classes=10, label="Predictions"),  # what are the outputs?
-                              gr.Number(label='Prediction time (s)')],  # Our fn has two outputs, therefore we have two outputs
-                     examples=example_list,
-                     title=title,
-                     description=description,
-                     article=article
-                     )
- # Launch the demo
- print('Gradio Demo Launched')
- demo.launch()
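Called directly, outside Gradio, the predict function above returns the label/probability dict and the timing. A minimal local sketch, assuming an example image file exists at the hypothetical path below:

from PIL import Image

img = Image.open("examples/some_dog.jpg")  # hypothetical example file
pred_probs, pred_time = predict(img)
best = max(pred_probs, key=pred_probs.get)
print(f"{best}: {pred_probs[best]:.3f} ({pred_time}s)")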
 
spaces/Cicooo/vits-uma-genshin-honkai/commons.py DELETED
@@ -1,172 +0,0 @@
1
- import math
2
- import torch
3
- from torch.nn import functional as F
4
- import torch.jit
5
-
6
-
7
- def script_method(fn, _rcb=None):
8
- return fn
9
-
10
-
11
- def script(obj, optimize=True, _frames_up=0, _rcb=None):
12
- return obj
13
-
14
-
15
- torch.jit.script_method = script_method
16
- torch.jit.script = script
17
-
18
-
19
- def init_weights(m, mean=0.0, std=0.01):
20
- classname = m.__class__.__name__
21
- if classname.find("Conv") != -1:
22
- m.weight.data.normal_(mean, std)
23
-
24
-
25
- def get_padding(kernel_size, dilation=1):
26
- return int((kernel_size*dilation - dilation)/2)
27
-
28
-
29
- def convert_pad_shape(pad_shape):
30
- l = pad_shape[::-1]
31
- pad_shape = [item for sublist in l for item in sublist]
32
- return pad_shape
33
-
34
-
35
- def intersperse(lst, item):
36
- result = [item] * (len(lst) * 2 + 1)
37
- result[1::2] = lst
38
- return result
39
-
40
-
41
- def kl_divergence(m_p, logs_p, m_q, logs_q):
42
- """KL(P||Q)"""
43
- kl = (logs_q - logs_p) - 0.5
44
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
45
- return kl
46
-
47
-
- def rand_gumbel(shape):
-     """Sample from the Gumbel distribution, protect from overflows."""
-     uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
-     return -torch.log(-torch.log(uniform_samples))
-
-
- def rand_gumbel_like(x):
-     g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
-     return g
-
-
- def slice_segments(x, ids_str, segment_size=4):
-     ret = torch.zeros_like(x[:, :, :segment_size])
-     for i in range(x.size(0)):
-         idx_str = ids_str[i]
-         idx_end = idx_str + segment_size
-         ret[i] = x[i, :, idx_str:idx_end]
-     return ret
-
-
- def rand_slice_segments(x, x_lengths=None, segment_size=4):
-     b, d, t = x.size()
-     if x_lengths is None:
-         x_lengths = t
-     ids_str_max = x_lengths - segment_size + 1
-     ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
-     ret = slice_segments(x, ids_str, segment_size)
-     return ret, ids_str
-
-
- def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
-     position = torch.arange(length, dtype=torch.float)
-     num_timescales = channels // 2
-     log_timescale_increment = (
-         math.log(float(max_timescale) / float(min_timescale)) /
-         (num_timescales - 1))
-     inv_timescales = min_timescale * torch.exp(
-         torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
-     scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
-     signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
-     signal = F.pad(signal, [0, 0, 0, channels % 2])
-     signal = signal.view(1, channels, length)
-     return signal
-
-
- def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
-     b, channels, length = x.size()
-     signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
-     return x + signal.to(dtype=x.dtype, device=x.device)
-
-
- def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
-     b, channels, length = x.size()
-     signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
-     return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
- def subsequent_mask(length):
-     mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
-     return mask
-
-
- @torch.jit.script
- def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
-     n_channels_int = n_channels[0]
-     in_act = input_a + input_b
-     t_act = torch.tanh(in_act[:, :n_channels_int, :])
-     s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
-     acts = t_act * s_act
-     return acts
-
-
- def convert_pad_shape(pad_shape):
-     l = pad_shape[::-1]
-     pad_shape = [item for sublist in l for item in sublist]
-     return pad_shape
-
-
- def shift_1d(x):
-     x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
-     return x
-
-
- def sequence_mask(length, max_length=None):
-     if max_length is None:
-         max_length = length.max()
-     x = torch.arange(max_length, dtype=length.dtype, device=length.device)
-     return x.unsqueeze(0) < length.unsqueeze(1)
-
-
- def generate_path(duration, mask):
-     """
-     duration: [b, 1, t_x]
-     mask: [b, 1, t_y, t_x]
-     """
-     device = duration.device
-
-     b, _, t_y, t_x = mask.shape
-     cum_duration = torch.cumsum(duration, -1)
-
-     cum_duration_flat = cum_duration.view(b * t_x)
-     path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
-     path = path.view(b, t_x, t_y)
-     path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
-     path = path.unsqueeze(1).transpose(2, 3) * mask
-     return path
-
-
- def clip_grad_value_(parameters, clip_value, norm_type=2):
-     if isinstance(parameters, torch.Tensor):
-         parameters = [parameters]
-     parameters = list(filter(lambda p: p.grad is not None, parameters))
-     norm_type = float(norm_type)
-     if clip_value is not None:
-         clip_value = float(clip_value)
-
-     total_norm = 0
-     for p in parameters:
-         param_norm = p.grad.data.norm(norm_type)
-         total_norm += param_norm.item() ** norm_type
-         if clip_value is not None:
-             p.grad.data.clamp_(min=-clip_value, max=clip_value)
-     total_norm = total_norm ** (1. / norm_type)
-     return total_norm
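The deleted helpers above are the usual VITS-style `commons` utilities. A minimal smoke-test sketch of how they fit together, assuming the file is importable as a local `commons` module (hypothetical name) and that `torch`, `math`, and `torch.nn.functional as F` are in scope as the code implies:

import torch

from commons import rand_slice_segments, sequence_mask, subsequent_mask  # hypothetical module name

x = torch.randn(2, 80, 100)            # [batch, mel_channels, frames]
x_lengths = torch.tensor([100, 60])

# Randomly crop a 32-frame training segment per batch element.
segments, ids_str = rand_slice_segments(x, x_lengths, segment_size=32)
print(segments.shape)                  # torch.Size([2, 80, 32])

# Boolean padding mask and causal attention mask.
print(sequence_mask(x_lengths).shape)  # torch.Size([2, 100])
print(subsequent_mask(4)[0, 0])        # lower-triangular 4x4 of ones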
spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/components/Config.js DELETED
@@ -1,375 +0,0 @@
-
- import YAML from 'yaml'
- import chokidar from 'chokidar'
- import fs from 'node:fs'
- import YamlReader from './YamlReader.js'
- import cfg from '../../../lib/config/config.js'
- import _ from 'lodash'
- import { modifyWebSocket } from './WebSocket.js'
- import { cfgSchema } from '../config/system/cfg_system.js'
-
- const Path = process.cwd()
- const Plugin_Name = 'ws-plugin'
- const Plugin_Path = `${Path}/plugins/${Plugin_Name}`
- class Config {
-   constructor() {
-     this.config = {}
-     this.oldConfig = {}
-     /** File watchers */
-     this.watcher = { config: {}, defSet: {} }
-
-     this.initCfg()
-   }
-
-   /** Initialize configuration */
-   initCfg() {
-     let path = `${Plugin_Path}/config/config/`
-     if (!fs.existsSync(path)) fs.mkdirSync(path)
-     let pathDef = `${Plugin_Path}/config/default_config/`
-     const files = fs.readdirSync(pathDef).filter(file => file.endsWith('.yaml'))
-     for (let file of files) {
-       if (!fs.existsSync(`${path}${file}`)) {
-         fs.copyFileSync(`${pathDef}${file}`, `${path}${file}`)
-       }
-       this.watch(`${path}${file}`, file.replace('.yaml', ''), 'config')
-     }
-   }
-
-   /** Master QQ accounts */
-   get masterQQ() {
-     return cfg.masterQQ
-   }
-
-   /** Bot account: [master accounts] */
-   get master() {
-     return cfg.master
-   }
-
-   /** Yunzai group blacklist */
-   get blackGroup() {
-     return cfg.getOther().blackGroup
-   }
-
-   /** Yunzai group whitelist */
-   get whiteGroup() {
-     return cfg.getOther().whiteGroup
-   }
-
-   /** Heartbeat interval */
-   get heartbeatInterval() {
-     return this.getDefOrConfig('ws-config').heartbeatInterval
-   }
-
-   /** Message post format */
-   get messagePostFormat() {
-     return this.getDefOrConfig('ws-config').messagePostFormat
-   }
-
-   /** Server connection list */
-   get servers() {
-     return this.getDefOrConfig('ws-config').servers
-   }
-
-   get noMsgStart() {
-     return this.getDefOrConfig('msg-config').noMsgStart
-   }
-
-   get noMsgInclude() {
-     return this.getDefOrConfig('msg-config').noMsgInclude
-   }
-
-   get howToMaster() {
-     return this.getDefOrConfig('msg-config').howToMaster
-   }
-
-   /** Whether to notify the master on disconnect */
-   get disconnectToMaster() {
-     return this.getDefOrConfig('msg-config').disconnectToMaster
-   }
-
-   /** Whether to notify the master on successful reconnect */
-   get reconnectToMaster() {
-     return this.getDefOrConfig('msg-config').reconnectToMaster
-   }
-
-   /** Whether to notify the master on the first successful connection */
-   get firstconnectToMaster() {
-     return this.getDefOrConfig('msg-config').firstconnectToMaster
-   }
-
-   /** Message retention time */
-   get msgStoreTime() {
-     return this.getDefOrConfig('msg-config').msgStoreTime
-   }
-
-   /** Disabled group list */
-   get noGroup() {
-     return this.getDefOrConfig('msg-config').noGroup
-   }
-
-   /** Whitelisted groups */
-   get yesGroup() {
-     return this.getDefOrConfig('msg-config').yesGroup
-   }
-
-   /** Block messages while muted */
-   get muteStop() {
-     return this.getDefOrConfig('msg-config').muteStop
-   }
-
-   /** How red sends forged forward messages */
-   get redSendForwardMsgType() {
-     return this.getDefOrConfig('msg-config').redSendForwardMsgType
-   }
-
-   /** Whether to report group admin changes */
-   get groupAdmin() {
-     return this.getDefOrConfig('notice-config').groupAdmin
-   }
-
-   /** Whether to report group member decreases */
-   get groupDecrease() {
-     return this.getDefOrConfig('notice-config').groupDecrease
-   }
-
-   /** Whether to report group member increases */
-   get groupIncrease() {
-     return this.getDefOrConfig('notice-config').groupIncrease
-   }
-
-   /** Whether to report group mutes */
-   get groupBan() {
-     return this.getDefOrConfig('notice-config').groupBan
-   }
-
-   /** Whether to report new friends */
-   get friendIncrease() {
-     return this.getDefOrConfig('notice-config').friendIncrease
-   }
-
-   /** Whether to report group message recalls */
-   get groupRecall() {
-     return this.getDefOrConfig('notice-config').groupRecall
-   }
-
-   /** Whether to report friend message recalls */
-   get friendRecall() {
-     return this.getDefOrConfig('notice-config').friendRecall
-   }
-
-   /** Whether to report group pokes */
-   get groupPoke() {
-     return this.getDefOrConfig('notice-config').groupPoke
-   }
-
-   /** Whether to report friend requests */
-   get friendAdd() {
-     return this.getDefOrConfig('request-config').friendAdd
-   }
-
-   /** Whether to report group invites (bot invited into a group) */
-   get groupInvite() {
-     return this.getDefOrConfig('request-config').groupInvite
-   }
-
-   /** Whether to report group join requests */
-   get groupAdd() {
-     return this.getDefOrConfig('request-config').groupAdd
-   }
-
-   /** Merge default and user configuration */
-   getDefOrConfig(name) {
-     let def = this.getdefSet(name)
-     let config = this.getConfig(name)
-     return { ...def, ...config }
-   }
-
-   /** Default configuration */
-   getdefSet(name) {
-     return this.getYaml('default_config', name)
-   }
-
-   /** User configuration */
-   getConfig(name) {
-     return this.getYaml('config', name)
-   }
-
-   /**
-    * Read a config YAML file
-    * @param type 'default_config' for defaults, 'config' for user config
-    * @param name file name
-    */
-   getYaml(type, name) {
-     let file = `${Plugin_Path}/config/${type}/${name}.yaml`
-     let key = `${type}.${name}`
-
-     if (this.config[key]) return this.config[key]
-
-     this.config[key] = YAML.parse(
-       fs.readFileSync(file, 'utf8')
-     )
-
-     this.watch(file, name, type)
-
-     return this.config[key]
-   }
-
-   /** Watch a configuration file for changes */
-   watch(file, name, type = 'default_config') {
-     let key = `${type}.${name}`
-     if (!this.oldConfig[key]) this.oldConfig[key] = _.cloneDeep(this.config[key])
-     if (this.watcher[key]) return
-
-     const watcher = chokidar.watch(file)
-     watcher.on('change', path => {
-       delete this.config[key]
-       if (typeof Bot == 'undefined') return
-       logger.mark(`[ws-plugin][config file changed][${type}][${name}]`)
-
-       if (name == 'ws-config') {
-         const oldConfig = this.oldConfig[key]
-         delete this.oldConfig[key]
-         const newConfig = this.getYaml(type, name)
-         const object = this.findDifference(oldConfig, newConfig)
-         // console.log(object);
-         for (const key in object) {
-           if (Object.hasOwnProperty.call(object, key)) {
-             const value = object[key];
-             const arr = key.split('.')
-             if (arr[0] !== 'servers') continue
-             let data = newConfig.servers[arr[1]]
-             if (typeof data === 'undefined') data = oldConfig.servers[arr[1]]
-             const target = {
-               type: null,
-               data
-             }
-             if (typeof value['newValue'] === 'object' && typeof value['oldValue'] === 'undefined') {
-               target.type = 'add'
-             } else if (typeof value['newValue'] === 'undefined' && typeof value['oldValue'] === 'object') {
-               target.type = 'del'
-             } else if (value['newValue'] === true && (value['oldValue'] === false || typeof value['oldValue'] === 'undefined')) {
-               target.type = 'close'
-             } else if (value['newValue'] === false && (value['oldValue'] === true || typeof value['oldValue'] === 'undefined')) {
-               target.type = 'open'
-             }
-             modifyWebSocket(target)
-           }
-         }
-       }
-     })
-
-     this.watcher[key] = watcher
-   }
-
-   getCfgSchemaMap() {
-     let ret = {}
-     _.forEach(cfgSchema, (cfgGroup) => {
-       _.forEach(cfgGroup.cfg, (cfgItem, cfgKey) => {
-         ret[cfgItem.key] = cfgItem
-         cfgItem.cfgKey = cfgKey
-       })
-     })
-     return ret
-   }
-
-   getCfgSchema() {
-     return cfgSchema
-   }
-
-   getCfg() {
-     let wsconfig = this.getDefOrConfig('ws-config')
-     let msgconfig = this.getDefOrConfig('msg-config')
-     let noticeconfig = this.getDefOrConfig('notice-config')
-     let requestconfig = this.getDefOrConfig('request-config')
-     return {
-       ...wsconfig,
-       ...msgconfig,
-       ...noticeconfig,
-       ...requestconfig
-     }
-   }
-
-   /**
-    * @description: Modify a setting
-    * @param {String} name file name
-    * @param {String} key key to modify
-    * @param {String|Number} value new value
-    * @param {'config'|'default_config'} type user config or default config
-    */
-   modify(name, key, value, type = 'config') {
-     let path = `${Plugin_Path}/config/${type}/${name}.yaml`
-     new YamlReader(path).set(key, value)
-     this.oldConfig[key] = _.cloneDeep(this.config[key])
-     delete this.config[`${type}.${name}`]
-   }
-
-   /**
-    * @description: Modify a config array
-    * @param {String} name file name
-    * @param {String|Number} key key to modify
-    * @param {String|Number} value value to add or delete
-    * @param {'add'|'del'} category operation: add or del
-    * @param {'config'|'default_config'} type user config or default config
-    */
-   modifyarr(name, key, value, category = 'add', type = 'config') {
-     let path = `${Plugin_Path}/config/${type}/${name}.yaml`
-     let yaml = new YamlReader(path)
-     if (category == 'add') {
-       yaml.addIn(key, value)
-     } else {
-       let index = yaml.jsonData[key].indexOf(value)
-       yaml.delete(`${key}.${index}`)
-     }
-   }
-
-   setArr(name, key, item, value, type = 'config') {
-     let path = `${Plugin_Path}/config/${type}/${name}.yaml`
-     let yaml = new YamlReader(path)
-     let arr = yaml.get(key).slice();
-     arr[item] = value
-     yaml.set(key, arr)
-   }
-
-   delServersArr(value, name = 'ws-config', type = 'config') {
-     let path = `${Plugin_Path}/config/${type}/${name}.yaml`
-     let yaml = new YamlReader(path)
-     let key = 'servers'
-     // let index = yaml.jsonData[key].indexOf(value)
-     let index = yaml.jsonData[key].findIndex(item => item.name === value);
-     yaml.delete(`${key}.${index}`)
-   }
-
-   /**
-    * @description Compare two objects and collect the values that differ
-    * @param {*} obj1
-    * @param {*} obj2
-    * @param {*} parentKey
-    * @returns a map of dotted keys to { oldValue, newValue }
-    */
-   findDifference(obj1, obj2, parentKey = '') {
-     const result = {};
-     for (const key in obj1) {
-       const fullKey = parentKey ? `${parentKey}.${key}` : key;
-       if (_.isObject(obj1[key]) && _.isObject(obj2[key])) {
-         const diff = this.findDifference(obj1[key], obj2[key], fullKey);
-         if (!_.isEmpty(diff)) {
-           Object.assign(result, diff);
-         }
-       } else if (!_.isEqual(obj1[key], obj2[key])) {
-         result[fullKey] = { oldValue: obj1[key], newValue: obj2[key] };
-       }
-     }
-     for (const key in obj2) {
-       if (!obj1.hasOwnProperty(key)) {
-         const fullKey = parentKey ? `${parentKey}.${key}` : key;
-         result[fullKey] = { oldValue: undefined, newValue: obj2[key] };
-       }
-     }
-     return result;
-   }
- }
- export default new Config()
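For readers following the hot-reload logic in `watch()`: `findDifference` flattens two nested config objects into dotted keys and records the changed leaves, which is what lets the plugin map a YAML edit to a `servers.<index>` WebSocket action. A minimal Python sketch of the same dotted-key diff (dicts only; the lodash version above also descends into arrays):

# Sketch of the dotted-key diff algorithm findDifference implements.
def find_difference(old, new, parent_key=""):
    result = {}
    for key in old:
        full_key = f"{parent_key}.{key}" if parent_key else key
        if isinstance(old[key], dict) and isinstance(new.get(key), dict):
            result.update(find_difference(old[key], new[key], full_key))
        elif old[key] != new.get(key):
            result[full_key] = {"oldValue": old[key], "newValue": new.get(key)}
    for key in new:
        if key not in old:
            full_key = f"{parent_key}.{key}" if parent_key else key
            result[full_key] = {"oldValue": None, "newValue": new[key]}
    return result

print(find_difference({"servers": {"0": {"close": False}}},
                      {"servers": {"0": {"close": True}}}))
# {'servers.0.close': {'oldValue': False, 'newValue': True}}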
spaces/CikeyQI/meme-api/meme_generator/memes/little_angel/__init__.py DELETED
@@ -1,55 +0,0 @@
- from typing import List
-
- from pil_utils import BuildImage
-
- from meme_generator import MemeArgsModel, add_meme
- from meme_generator.exception import TextOverLength
- from meme_generator.utils import make_jpg_or_gif
-
-
- def little_angel(images: List[BuildImage], texts: List[str], args: MemeArgsModel):
-     img_w, img_h = images[0].convert("RGBA").resize_width(500).size
-     frame = BuildImage.new("RGBA", (600, img_h + 230), "white")
-     text = "非常可爱!简直就是小天使"
-     frame.draw_text(
-         (10, img_h + 120, 590, img_h + 185), text, max_fontsize=48, weight="bold"
-     )
-
-     ta = "她"
-     name = ta
-     if texts:
-         name = texts[0]
-     elif args.user_infos:
-         info = args.user_infos[0]
-         ta = "他" if info.gender == "male" else "她"
-         name = info.name or ta
-
-     text = f"{ta}没失踪也没怎么样 我只是觉得你们都该看一下"
-     frame.draw_text(
-         (20, img_h + 180, 580, img_h + 215), text, max_fontsize=26, weight="bold"
-     )
-
-     text = f"请问你们看到{name}了吗?"
-     try:
-         frame.draw_text(
-             (20, 0, 580, 110), text, max_fontsize=70, min_fontsize=25, weight="bold"
-         )
-     except ValueError:
-         raise TextOverLength(name)
-
-     def make(img: BuildImage) -> BuildImage:
-         img = img.convert("RGBA").resize_width(500)
-         return frame.copy().paste(img, (int(300 - img_w / 2), 110), alpha=True)
-
-     return make_jpg_or_gif(images[0], make)
-
-
- add_meme(
-     "little_angel",
-     little_angel,
-     min_images=1,
-     max_images=1,
-     min_texts=0,
-     max_texts=1,
-     keywords=["小天使"],
- )
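The `try/except ValueError -> TextOverLength` pattern above is a shrink-to-fit loop: `draw_text` retries smaller font sizes down to `min_fontsize` and raises once nothing fits. A standalone Pillow sketch of the same pattern; the font path and box values are illustrative, not taken from the plugin:

from PIL import ImageFont

def fit_fontsize(text: str, box_w: int, max_size: int, min_size: int,
                 font_path: str = "DejaVuSans.ttf") -> int:
    """Return the largest font size at which `text` fits within `box_w` pixels."""
    for size in range(max_size, min_size - 1, -1):
        font = ImageFont.truetype(font_path, size)
        if font.getlength(text) <= box_w:
            return size
    raise ValueError("text too long for the box")

# e.g. fit_fontsize("请问你们看到她了吗?", box_w=560, max_size=70, min_size=25)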
spaces/Cpp4App/Cpp4App/CDM/detect_compo/deprecated/ip_detection_utils.py DELETED
@@ -1,461 +0,0 @@
- import numpy as np
- import cv2
- from collections import Counter
-
- import lib_ip.ip_draw as draw
- from CDM.config.CONFIG_UIED import Config
- C = Config()
-
-
- # detect object (connected region)
- # def boundary_bfs_connected_area(img, x, y, mark):
- #     def neighbor(img, x, y, mark, stack):
- #         for i in range(x - 1, x + 2):
- #             if i < 0 or i >= img.shape[0]: continue
- #             for j in range(y - 1, y + 2):
- #                 if j < 0 or j >= img.shape[1]: continue
- #                 if img[i, j] == 255 and mark[i, j] == 0:
- #                     stack.append([i, j])
- #                     mark[i, j] = 255
- #
- #     stack = [[x, y]]  # points waiting for inspection
- #     area = [[x, y]]   # points of this area
- #     mark[x, y] = 255  # drawing board
- #
- #     while len(stack) > 0:
- #         point = stack.pop()
- #         area.append(point)
- #         neighbor(img, point[0], point[1], mark, stack)
- #     return area
-
-
- # def line_check_perpendicular(lines_h, lines_v, max_thickness):
- #     """
- #     lines: [line_h, line_v]
- #     -> line_h: horizontal {'head': (column_min, row), 'end': (column_max, row), 'thickness': int}
- #     -> line_v: vertical {'head': (column, row_min), 'end': (column, row_max), 'thickness': int}
- #     """
- #     is_per_h = np.full(len(lines_h), False)
- #     is_per_v = np.full(len(lines_v), False)
- #     for i in range(len(lines_h)):
- #         # save the intersection points of h
- #         lines_h[i]['inter_point'] = set()
- #         h = lines_h[i]
- #
- #         for j in range(len(lines_v)):
- #             # save the intersection points of v
- #             if 'inter_point' not in lines_v[j]: lines_v[j]['inter_point'] = set()
- #             v = lines_v[j]
- #
- #             # if h is perpendicular to v at the head of v
- #             if abs(h['head'][1] - v['head'][1]) <= max_thickness:
- #                 if abs(h['head'][0] - v['head'][0]) <= max_thickness:
- #                     lines_h[i]['inter_point'].add('head')
- #                     lines_v[j]['inter_point'].add('head')
- #                     is_per_h[i] = True
- #                     is_per_v[j] = True
- #                 elif abs(h['end'][0] - v['head'][0]) <= max_thickness:
- #                     lines_h[i]['inter_point'].add('end')
- #                     lines_v[j]['inter_point'].add('head')
- #                     is_per_h[i] = True
- #                     is_per_v[j] = True
- #
- #             # if h is perpendicular to v at the end of v
- #             elif abs(h['head'][1] - v['end'][1]) <= max_thickness:
- #                 if abs(h['head'][0] - v['head'][0]) <= max_thickness:
- #                     lines_h[i]['inter_point'].add('head')
- #                     lines_v[j]['inter_point'].add('end')
- #                     is_per_h[i] = True
- #                     is_per_v[j] = True
- #                 elif abs(h['end'][0] - v['head'][0]) <= max_thickness:
- #                     lines_h[i]['inter_point'].add('end')
- #                     lines_v[j]['inter_point'].add('end')
- #                     is_per_h[i] = True
- #                     is_per_v[j] = True
- #     per_h = []
- #     per_v = []
- #     for i in range(len(is_per_h)):
- #         if is_per_h[i]:
- #             lines_h[i]['inter_point'] = list(lines_h[i]['inter_point'])
- #             per_h.append(lines_h[i])
- #     for i in range(len(is_per_v)):
- #         if is_per_v[i]:
- #             lines_v[i]['inter_point'] = list(lines_v[i]['inter_point'])
- #             per_v.append(lines_v[i])
- #     return per_h, per_v
-
-
- # def line_shrink_corners(corner, lines_h, lines_v):
- #     """
- #     shrink the corner according to lines:
- #         col_min_shrink: shrink right (increase)
- #         col_max_shrink: shrink left (decrease)
- #         row_min_shrink: shrink down (increase)
- #         row_max_shrink: shrink up (decrease)
- #     :param lines_h: horizontal {'head': (column_min, row), 'end': (column_max, row), 'thickness': int}
- #     :param lines_v: vertical {'head': (column, row_min), 'end': (column, row_max), 'thickness': int}
- #     :return: shrunken corner: (top_left, bottom_right)
- #     """
- #     (col_min, row_min), (col_max, row_max) = corner
- #     col_min_shrink, row_min_shrink = col_min, row_min
- #     col_max_shrink, row_max_shrink = col_max, row_max
- #     valid_frame = False
- #
- #     for h in lines_h:
- #         # ignore outer border
- #         if len(h['inter_point']) == 2:
- #             valid_frame = True
- #             continue
- #         # shrink right -> col_min moves to end
- #         if h['inter_point'][0] == 'head':
- #             col_min_shrink = max(h['end'][0], col_min_shrink)
- #         # shrink left -> col_max moves to head
- #         elif h['inter_point'][0] == 'end':
- #             col_max_shrink = min(h['head'][0], col_max_shrink)
- #
- #     for v in lines_v:
- #         # ignore outer border
- #         if len(v['inter_point']) == 2:
- #             valid_frame = True
- #             continue
- #         # shrink down -> row_min moves to end
- #         if v['inter_point'][0] == 'head':
- #             row_min_shrink = max(v['end'][1], row_min_shrink)
- #         # shrink up -> row_max moves to head
- #         elif v['inter_point'][0] == 'end':
- #             row_max_shrink = min(v['head'][1], row_max_shrink)
- #
- #     # return the shrunken corner only if some line intersects with two other lines
- #     if valid_frame:
- #         return (col_min_shrink, row_min_shrink), (col_max_shrink, row_max_shrink)
- #     return corner
-
-
- # def line_cvt_relative_position(col_min, row_min, lines_h, lines_v):
- #     """
- #     convert the position of lines to be relative to the entire image
- #     :param col_min: base column the img lines belong to
- #     :param row_min: base row the img lines belong to
- #     :param lines_h: horizontal {'head': (column_min, row), 'end': (column_max, row), 'thickness': int}
- #     :param lines_v: vertical {'head': (column, row_min), 'end': (column, row_max), 'thickness': int}
- #     :return: lines_h_cvt, lines_v_cvt
- #     """
- #     for h in lines_h:
- #         h['head'][0] += col_min
- #         h['head'][1] += row_min
- #         h['end'][0] += col_min
- #         h['end'][1] += row_min
- #     for v in lines_v:
- #         v['head'][0] += col_min
- #         v['head'][1] += row_min
- #         v['end'][0] += col_min
- #         v['end'][1] += row_min
- #
- #     return lines_h, lines_v
-
-
- # check if an object is too slim
- # @boundary: [border_up, border_bottom, border_left, border_right]
- # -> up, bottom: (column_index, min/max row border)
- # -> left, right: (row_index, min/max column border) detect range of each row
- def clipping_by_line(boundary, boundary_rec, lines):
-     boundary = boundary.copy()
-     for orient in lines:
-         # horizontal
-         if orient == 'h':
-             # column range of sub area
-             r1, r2 = 0, 0
-             for line in lines[orient]:
-                 if line[0] == 0:
-                     r1 = line[1]
-                     continue
-                 r2 = line[0]
-                 b_top = []
-                 b_bottom = []
-                 for i in range(len(boundary[0])):
-                     if r2 > boundary[0][i][0] >= r1:
-                         b_top.append(boundary[0][i])
-                 for i in range(len(boundary[1])):
-                     if r2 > boundary[1][i][0] >= r1:
-                         b_bottom.append(boundary[1][i])
-
-                 b_left = [x for x in boundary[2]]  # (row_index, min column border)
-                 for i in range(len(b_left)):
-                     if b_left[i][1] < r1:
-                         b_left[i][1] = r1
-                 b_right = [x for x in boundary[3]]  # (row_index, max column border)
-                 for i in range(len(b_right)):
-                     if b_right[i][1] > r2:
-                         b_right[i][1] = r2
-
-                 boundary_rec.append([b_top, b_bottom, b_left, b_right])
-                 r1 = line[1]
-
-
- # remove imgs that contain text
- # def rm_text(org, corners, compo_class,
- #             max_text_height=C.THRESHOLD_TEXT_MAX_HEIGHT, max_text_width=C.THRESHOLD_TEXT_MAX_WIDTH,
- #             ocr_padding=C.OCR_PADDING, ocr_min_word_area=C.OCR_MIN_WORD_AREA, show=False):
- #     """
- #     Remove areas that are full of text
- #     :param org: original image
- #     :param corners: [(top_left, bottom_right)]
- #         -> top_left: (column_min, row_min)
- #         -> bottom_right: (column_max, row_max)
- #     :param compo_class: classes of corners
- #     :param max_text_height: too large to be text
- #     :param max_text_width: too large to be text
- #     :param ocr_padding: padding for clipping
- #     :param ocr_min_word_area: threshold on the text area ratio
- #     :param show: show or not
- #     :return: corners without text objects
- #     """
- #     new_corners = []
- #     new_class = []
- #     for i in range(len(corners)):
- #         corner = corners[i]
- #         (top_left, bottom_right) = corner
- #         (col_min, row_min) = top_left
- #         (col_max, row_max) = bottom_right
- #         height = row_max - row_min
- #         width = col_max - col_min
- #         # highly likely to be a block or img if too large
- #         if height > max_text_height and width > max_text_width:
- #             new_corners.append(corner)
- #             new_class.append(compo_class[i])
- #         else:
- #             row_min = row_min - ocr_padding if row_min - ocr_padding >= 0 else 0
- #             row_max = row_max + ocr_padding if row_max + ocr_padding < org.shape[0] else org.shape[0]
- #             col_min = col_min - ocr_padding if col_min - ocr_padding >= 0 else 0
- #             col_max = col_max + ocr_padding if col_max + ocr_padding < org.shape[1] else org.shape[1]
- #             # check if this area is text
- #             clip = org[row_min: row_max, col_min: col_max]
- #             if not ocr.is_text(clip, ocr_min_word_area, show=show):
- #                 new_corners.append(corner)
- #                 new_class.append(compo_class[i])
- #     return new_corners, new_class
-
-
- # def rm_img_in_compo(corners_img, corners_compo):
- #     """
- #     Remove imgs nested inside components
- #     """
- #     corners_img_new = []
- #     for img in corners_img:
- #         is_nested = False
- #         for compo in corners_compo:
- #             if util.corner_relation(img, compo) == -1:
- #                 is_nested = True
- #                 break
- #         if not is_nested:
- #             corners_img_new.append(img)
- #     return corners_img_new
-
-
- # def block_or_compo(org, binary, corners,
- #                    max_thickness=C.THRESHOLD_BLOCK_MAX_BORDER_THICKNESS, max_block_cross_points=C.THRESHOLD_BLOCK_MAX_CROSS_POINT,
- #                    min_compo_w_h_ratio=C.THRESHOLD_UICOMPO_MIN_W_H_RATIO, max_compo_w_h_ratio=C.THRESHOLD_UICOMPO_MAX_W_H_RATIO,
- #                    min_block_edge=C.THRESHOLD_BLOCK_MIN_EDGE_LENGTH):
- #     """
- #     Check if the objects are img components or just blocks
- #     :param org: original image
- #     :param binary: binary image from pre-processing
- #     :param corners: [(top_left, bottom_right)]
- #         -> top_left: (column_min, row_min)
- #         -> bottom_right: (column_max, row_max)
- #     :param max_thickness: the max thickness of a block's border
- #     :param max_block_cross_points: max ratio of intersection points
- #     :return: corners of blocks and imgs
- #     """
- #     blocks = []
- #     imgs = []
- #     compos = []
- #     for corner in corners:
- #         (top_left, bottom_right) = corner
- #         (col_min, row_min) = top_left
- #         (col_max, row_max) = bottom_right
- #         height = row_max - row_min
- #         width = col_max - col_min
- #
- #         block = False
- #         vacancy = [0, 0, 0, 0]
- #         for i in range(1, max_thickness):
- #             try:
- #                 # top to bottom
- #                 if vacancy[0] == 0 and (col_max - col_min - 2 * i) != 0 and (
- #                         np.sum(binary[row_min + i, col_min + i: col_max - i]) / 255) / (col_max - col_min - 2 * i) <= max_block_cross_points:
- #                     vacancy[0] = 1
- #                 # bottom to top
- #                 if vacancy[1] == 0 and (col_max - col_min - 2 * i) != 0 and (
- #                         np.sum(binary[row_max - i, col_min + i: col_max - i]) / 255) / (col_max - col_min - 2 * i) <= max_block_cross_points:
- #                     vacancy[1] = 1
- #                 # left to right
- #                 if vacancy[2] == 0 and (row_max - row_min - 2 * i) != 0 and (
- #                         np.sum(binary[row_min + i: row_max - i, col_min + i]) / 255) / (row_max - row_min - 2 * i) <= max_block_cross_points:
- #                     vacancy[2] = 1
- #                 # right to left
- #                 if vacancy[3] == 0 and (row_max - row_min - 2 * i) != 0 and (
- #                         np.sum(binary[row_min + i: row_max - i, col_max - i]) / 255) / (row_max - row_min - 2 * i) <= max_block_cross_points:
- #                     vacancy[3] = 1
- #                 if np.sum(vacancy) == 4:
- #                     block = True
- #             except:
- #                 pass
- #
- #         # too big to be a UI component
- #         if block:
- #             if height > min_block_edge and width > min_block_edge:
- #                 blocks.append(corner)
- #             else:
- #                 if min_compo_w_h_ratio < width / height < max_compo_w_h_ratio:
- #                     compos.append(corner)
- #         # filter out small objects
- #         else:
- #             if height > min_block_edge:
- #                 imgs.append(corner)
- #             else:
- #                 if min_compo_w_h_ratio < width / height < max_compo_w_h_ratio:
- #                     compos.append(corner)
- #     return blocks, imgs, compos
-
-
- # def compo_on_img(processing, org, binary, clf,
- #                  compos_corner, compos_class):
- #     """
- #     Detect potential UI components inside imgs;
- #     only keep the non-img ones
- #     """
- #     pad = 2
- #     for i in range(len(compos_corner)):
- #         if compos_class[i] != 'img':
- #             continue
- #         ((col_min, row_min), (col_max, row_max)) = compos_corner[i]
- #         col_min = max(col_min - pad, 0)
- #         col_max = min(col_max + pad, org.shape[1])
- #         row_min = max(row_min - pad, 0)
- #         row_max = min(row_max + pad, org.shape[0])
- #         area = (col_max - col_min) * (row_max - row_min)
- #         if area < 600:
- #             continue
- #
- #         clip_org = org[row_min:row_max, col_min:col_max]
- #         clip_bin_inv = pre.reverse_binary(binary[row_min:row_max, col_min:col_max])
- #
- #         compos_boundary_new, compos_corner_new, compos_class_new = processing(clip_org, clip_bin_inv, clf)
- #         compos_corner_new = util.corner_cvt_relative_position(compos_corner_new, col_min, row_min)
- #
- #         assert len(compos_corner_new) == len(compos_class_new)
- #
- #         # only keep non-img elements
- #         for i in range(len(compos_corner_new)):
- #             ((col_min_new, row_min_new), (col_max_new, row_max_new)) = compos_corner_new[i]
- #             area_new = (col_max_new - col_min_new) * (row_max_new - row_min_new)
- #             if compos_class_new[i] != 'img' and area_new / area < 0.8:
- #                 compos_corner.append(compos_corner_new[i])
- #                 compos_class.append(compos_class_new[i])
- #
- #     return compos_corner, compos_class
-
-
- # def strip_img(corners_compo, compos_class, corners_img):
- #     """
- #     Separate imgs from other compos
- #     :return: compos without imgs
- #     """
- #     corners_compo_without_img = []
- #     compo_class_without_img = []
- #     for i in range(len(compos_class)):
- #         if compos_class[i] == 'img':
- #             corners_img.append(corners_compo[i])
- #         else:
- #             corners_compo_without_img.append(corners_compo[i])
- #             compo_class_without_img.append(compos_class[i])
- #     return corners_compo_without_img, compo_class_without_img
-
-
- # def merge_corner(corners, compos_class, min_selected_IoU=C.THRESHOLD_MIN_IOU, is_merge_nested_same=True):
- #     """
- #     Calculate the Intersection over Union (IoU) and merge corners according to the value of IoU
- #     :param is_merge_nested_same: if true, merge nested corners of the same class regardless of IoU
- #     :param corners: [(top_left, bottom_right)]
- #         -> top_left: (column_min, row_min)
- #         -> bottom_right: (column_max, row_max)
- #     :return: new corners
- #     """
- #     new_corners = []
- #     new_class = []
- #     for i in range(len(corners)):
- #         is_intersected = False
- #         for j in range(len(new_corners)):
- #             r = util.corner_relation_nms(corners[i], new_corners[j], min_selected_IoU)
- #             # r = util.corner_relation(corners[i], new_corners[j])
- #             if is_merge_nested_same:
- #                 if compos_class[i] == new_class[j]:
- #                     # if corners[i] is in new_corners[j], ignore corners[i]
- #                     if r == -1:
- #                         is_intersected = True
- #                         break
- #                     # if new_corners[j] is in corners[i], replace new_corners[j] with corners[i]
- #                     elif r == 1:
- #                         is_intersected = True
- #                         new_corners[j] = corners[i]
- #
- #             # if above the IoU threshold, and corners[i] is in new_corners[j], ignore corners[i]
- #             if r == -2:
- #                 is_intersected = True
- #                 break
- #             # if above the IoU threshold, and new_corners[j] is in corners[i], replace new_corners[j] with corners[i]
- #             elif r == 2:
- #                 is_intersected = True
- #                 new_corners[j] = corners[i]
- #                 new_class[j] = compos_class[i]
- #
- #             # containing and too small
- #             elif r == -3:
- #                 is_intersected = True
- #                 break
- #             elif r == 3:
- #                 is_intersected = True
- #                 new_corners[j] = corners[i]
- #
- #             # if [i] and [j] overlap but neither contains the other, merge corners of the same class
- #             elif r == 4:
- #                 is_intersected = True
- #                 if compos_class[i] == new_class[j]:
- #                     new_corners[j] = util.corner_merge_two_corners(corners[i], new_corners[j])
- #
- #         if not is_intersected:
- #             new_corners.append(corners[i])
- #             new_class.append(compos_class[i])
- #     return new_corners, new_class
-
-
- # def select_corner(corners, compos_class, class_name):
- #     """
- #     Select corners of the given compo type
- #     """
- #     corners_wanted = []
- #     for i in range(len(compos_class)):
- #         if compos_class[i] == class_name:
- #             corners_wanted.append(corners[i])
- #     return corners_wanted
-
-
- # def flood_fill_bfs(img, x_start, y_start, mark, grad_thresh):
- #     def neighbor(x, y):
- #         for i in range(x - 1, x + 2):
- #             if i < 0 or i >= img.shape[0]: continue
- #             for j in range(y - 1, y + 2):
- #                 if j < 0 or j >= img.shape[1]: continue
- #                 if mark[i, j] == 0 and abs(img[i, j] - img[x, y]) < grad_thresh:
- #                     stack.append([i, j])
- #                     mark[i, j] = 255
- #
- #     stack = [[x_start, y_start]]   # points waiting for inspection
- #     region = [[x_start, y_start]]  # points of this connected region
- #     mark[x_start, y_start] = 255   # drawing board
- #     while len(stack) > 0:
- #         point = stack.pop()
- #         region.append(point)
- #         neighbor(point[0], point[1])
- #     return region
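Most of this deprecated module is kept as commented-out history; the flood-fill at the bottom is the piece worth restating. A compact, runnable version of the same idea, growing a connected region from a seed while the grayscale step stays under a threshold (pure numpy; the test image is made up for illustration):

import numpy as np
from collections import deque

def flood_fill_bfs(img, seed, grad_thresh=10):
    """Collect the pixels connected to `seed` whose value differs by < grad_thresh."""
    h, w = img.shape
    mark = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mark[seed] = True
    region = []
    while queue:
        x, y = queue.popleft()
        region.append((x, y))
        # visit the 8-neighborhood, clamped to the image borders
        for i in range(max(0, x - 1), min(h, x + 2)):
            for j in range(max(0, y - 1), min(w, y + 2)):
                if not mark[i, j] and abs(int(img[i, j]) - int(img[x, y])) < grad_thresh:
                    mark[i, j] = True
                    queue.append((i, j))
    return region

img = np.zeros((5, 5), dtype=np.uint8)
img[3:, 3:] = 200
print(len(flood_fill_bfs(img, (0, 0))))  # 21 background pixels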
spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/laion_dataset.py DELETED
@@ -1,31 +0,0 @@
- """
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause
- """
-
- import webdataset as wds
- from video_llama.datasets.datasets.base_dataset import BaseDataset
-
-
- class LaionDataset(BaseDataset):
-     def __init__(self, vis_processor, text_processor, location):
-         super().__init__(vis_processor=vis_processor, text_processor=text_processor)
-
-         self.inner_dataset = wds.DataPipeline(
-             wds.ResampledShards(location),
-             wds.tarfile_to_samples(handler=wds.warn_and_continue),
-             wds.shuffle(1000, handler=wds.warn_and_continue),
-             wds.decode("pilrgb", handler=wds.warn_and_continue),
-             wds.to_tuple("jpg", "json", handler=wds.warn_and_continue),
-             wds.map_tuple(self.vis_processor, handler=wds.warn_and_continue),
-             wds.map(self.to_dict, handler=wds.warn_and_continue),
-         )
-
-     def to_dict(self, sample):
-         return {
-             "image": sample[0],
-             "text_input": self.text_processor(sample[1]["caption"]),
-         }
-
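Since `inner_dataset` is a plain `webdataset` pipeline, iterating it directly is enough to smoke-test the class. A minimal sketch with identity processors and a hypothetical local shard pattern, assuming `BaseDataset` accepts these keyword arguments as the code above suggests:

from video_llama.datasets.datasets.laion_dataset import LaionDataset

dataset = LaionDataset(
    vis_processor=lambda img: img,             # identity stand-ins for the real processors
    text_processor=lambda txt: txt,
    location="data/laion/{00000..00099}.tar",  # hypothetical shard pattern
)

for sample in dataset.inner_dataset:           # yields {"image": ..., "text_input": ...}
    print(sample["text_input"])
    break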
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/__main__.py DELETED
@@ -1,3 +0,0 @@
- from .features import pilinfo
-
- pilinfo()
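This stub is what makes `python3 -m PIL` print Pillow's build and feature report; the same report is available programmatically, and recent Pillow versions also accept an output stream:

import io
from PIL.features import pilinfo

buf = io.StringIO()
pilinfo(out=buf)                          # same report `python3 -m PIL` prints
print(buf.getvalue().splitlines()[0])     # e.g. the Pillow version line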