parquet-converter committed
Commit 656c72b · 1 Parent(s): 23de991

Update parquet files (step 35 of 121)

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Photoshop 7.0.md +0 -86
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Compendio De Obstetricia Votta Pdf.md +0 -28
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/FULL Windows XP SP3 Angel Live V.2.0.iso The Features and Benefits of this Superb XP.md +0 -74
  4. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Grand Ages Rome Gold Edition Serial What You Need to Know Before You Buy.md +0 -11
  5. spaces/1phancelerku/anime-remove-background/Clash Royale for Windows 11 The Ultimate Guide to Install and Play.md +0 -164
  6. spaces/1phancelerku/anime-remove-background/Download Nguwe by Q-Mark TpZee Afriikan Papi - Amapiano Mp3 2022.md +0 -83
  7. spaces/1phancelerku/anime-remove-background/Download Traffic Racer MOD APK for iOS and Enjoy Unlimited Money.md +0 -124
  8. spaces/1phancelerku/anime-remove-background/Experience the Thrill of KOF M.U.G.E.N 2020 on Your Smartphone.md +0 -135
  9. spaces/2hack2furious/anonymizer/app.py +0 -87
  10. spaces/2ndelement/voicevox/test/test_full_context_label.py +0 -404
  11. spaces/2ndelement/voicevox/voicevox_engine/setting/Setting.py +0 -25
  12. spaces/52Hz/HWMNet_lowlight_enhancement/main_test_HWMNet.py +0 -86
  13. spaces/52Hz/SRMNet_AWGN_denoising/README.md +0 -37
  14. spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/rel_transformer.py +0 -611
  15. spaces/AIGC-Audio/AudioGPT/text_to_speech/egs/datasets/audio/lj/preprocess.py +0 -9
  16. spaces/AIMLApps/Botrite_wip/README.md +0 -12
  17. spaces/AP123/IllusionDiffusion/user_history.py +0 -423
  18. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/tcrp-plugin.js +0 -34
  19. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/clickoutside/Factory.d.ts +0 -7
  20. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthsizer/RunChildrenWrap.js +0 -93
  21. spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/transforms.py +0 -193
  22. spaces/AlekseyKorshuk/model-evaluation/tabs/arena_side_by_side.py +0 -240
  23. spaces/AlekseyKorshuk/thin-plate-spline-motion-model/train.py +0 -94
  24. spaces/AlexWang/lama/bin/gen_outpainting_dataset.py +0 -88
  25. spaces/Ame42/UBTH/app.py +0 -225
  26. spaces/Amrrs/hubble-jwst-compare/app.py +0 -53
  27. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_scipy_objects.py +0 -17
  28. spaces/Andy1621/uniformer_image_detection/configs/dcn/faster_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco.py +0 -15
  29. spaces/Andy1621/uniformer_image_detection/configs/empirical_attention/faster_rcnn_r50_fpn_attention_1111_1x_coco.py +0 -13
  30. spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/clip/__init__.py +0 -1
  31. spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/__init__.py +0 -2
  32. spaces/AnticPan/Clothes2Human/app.py +0 -62
  33. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/padding.py +0 -141
  34. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/install_lib.py +0 -238
  35. spaces/Atualli/yoloxTeste/checkYolox.sh +0 -17
  36. spaces/AzumaSeren100/XuanShen-Bert-VITS2/text/__init__.py +0 -28
  37. spaces/Benson/text-generation/Examples/Cmo Descargar El Tiempo De Juego Del Proyecto En Steam.md +0 -123
  38. spaces/Benson/text-generation/Examples/Descargar 3d Fondo De Pantalla En Vivo.md +0 -94
  39. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/commands/index.py +0 -139
  40. spaces/Big-Web/MMSD/env/Lib/site-packages/s3transfer/compat.py +0 -94
  41. spaces/Branon/Proxy/Dockerfile +0 -11
  42. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/demo/README.md +0 -8
  43. spaces/CVPR/LIVE/pybind11/include/pybind11/detail/init.h +0 -336
  44. spaces/CVPR/LIVE/thrust/thrust/distance.h +0 -77
  45. spaces/CVPR/WALT/mmdet/models/dense_heads/vfnet_head.py +0 -794
  46. spaces/CVPR/lama-example/models/ade20k/segm_lib/utils/data/__init__.py +0 -3
  47. spaces/CVPR/regionclip-demo/detectron2/data/__init__.py +0 -19
  48. spaces/ChrisPreston/diff-svc_minato_aqua/modules/vocoders/__init__.py +0 -1
  49. spaces/CofAI/chat/client/css/dropdown.css +0 -10
  50. spaces/Covert1107/sd-diffusers-webui/modules/safe.py +0 -188
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Photoshop 7.0.md DELETED
@@ -1,86 +0,0 @@
-
- <h1>Adobe Photoshop 7.0: A Classic Photo Editing Software That Still Works</h1>
- <p>Adobe Photoshop 7.0 is one of the most popular and widely used photo editing software in the world. It was released in 2002 and has been a favorite among professional and amateur photographers, graphic designers, and digital artists ever since. Adobe Photoshop 7.0 offers a range of features and tools that allow you to create, edit, enhance, and manipulate images with ease and precision. In this article, we will review some of the main features and benefits of Adobe Photoshop 7.0 and why it is still a great choice for photo editing in 2023.</p>
- <p>One of the main advantages of Adobe Photoshop 7.0 is its compatibility and performance. Adobe Photoshop 7.0 can run smoothly on almost any Windows or Mac computer, even if it has low specifications or an older operating system. It does not require much disk space or memory to install and operate, unlike newer versions of Photoshop that may slow down your system or crash frequently. Adobe Photoshop 7.0 also supports a wide range of file formats, such as JPEG, PNG, GIF, TIFF, PSD, PDF, and more. You can easily import and export images from different sources and devices without losing quality or data.</p>
- <h2>adobe photoshop 7.0</h2><br /><p><b><b>Download Zip</b> &#10031;&#10031;&#10031; <a href="https://byltly.com/2uKyhe">https://byltly.com/2uKyhe</a></b></p><br /><br />
- <p>Another key feature of Adobe Photoshop 7.0 is its user interface and functionality. Adobe Photoshop 7.0 has a simple and intuitive interface that makes it easy to navigate and access the various tools and options. You can customize the layout and appearance of the interface according to your preferences and needs. You can also use keyboard shortcuts and mouse gestures to speed up your workflow and productivity. Adobe Photoshop 7.0 also has a powerful and versatile functionality that allows you to perform a variety of tasks and effects on your images. You can crop, resize, rotate, flip, skew, distort, warp, transform, align, merge, blend, layer, mask, filter, adjust, colorize, retouch, sharpen, blur, smudge, clone, heal, dodge, burn, sponge, gradient, texturize, stylize, draw, paint, erase, fill, stroke, select, cut, copy, paste, undo, redo, save, print, and more.</p>
- <p>A third benefit of Adobe Photoshop 7.0 is its creativity and innovation. Adobe Photoshop 7.0 offers a range of creative and innovative features and tools that allow you to unleash your imagination and express your vision. You can use Adobe Photoshop 7.0 to create stunning graphics and artworks for various purposes and platforms. You can design logos, banners, posters, flyers, brochures, cards,
- invitations,
- stickers,
- labels,
- t-shirts,
- mugs,
- calendars,
- wallpapers,
- icons,
- buttons,
- illustrations,
- comics,
- cartoons,
- animations,
- games,
- websites,
- apps,
- and more.
- You can also use Adobe Photoshop 7.0 to enhance your photos and make them look more professional
- and artistic.
- You can edit your photos to improve their brightness,
- contrast,
- color,
- exposure,
- white balance,
- sharpness,
- noise reduction,
- red-eye removal,
- blemish removal,
- skin smoothing,
- teeth whitening,
- eye color changing,
- hair color changing,
- face reshaping,
- body slimming,
- background changing,
- object adding or removing,
- and more.
- You can also apply various effects
- and filters
- to your photos
- to make them look more
- dramatic,
- romantic,
- vintage,
- retro,
- glamorous,
- grunge,
- pop art,
- watercolor,
- oil painting,
- sketching,
- and more.</p>
- <p>In conclusion
- Adobe Photoshop 7.0
- is a classic photo editing software
- that still works
- in 2023.
- It has a range of features
- and benefits
- that make it a great choice
- for photo editing
- in terms of compatibility
- performance
- user interface
- functionality
- creativity
- and innovation.
- You can use Adobe Photoshop 7.0
- to create
- edit
- enhance
- and manipulate images
- with ease
- and precision.
- You can also use Adobe Photoshop 7.0
- to express your vision
- and unleash your imagination.</p> ddb901b051<br />
- <br />
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Compendio De Obstetricia Votta Pdf.md DELETED
@@ -1,28 +0,0 @@
- <br />
- <h1>Compendio De Obstetricia Votta Pdf: A Comprehensive Guide for Obstetrics Students and Professionals</h1>
-
- <p>If you are looking for a reliable and updated source of information on obstetrics, you may want to check out the Compendio De Obstetricia Votta Pdf. This is a book written by Osvaldo H. Parada and Roberto A. Votta, two renowned obstetricians from Argentina, who have compiled their extensive knowledge and experience in this field.</p>
-
- <p>The Compendio De Obstetricia Votta Pdf covers all the aspects of obstetrics, from normal pregnancy and delivery to complications and emergencies. It also includes chapters on gynecology, neonatology, genetics, ultrasound, and more. The book is organized in a clear and concise way, with tables, figures, algorithms, and clinical cases to illustrate the concepts.</p>
- <h2>Compendio De Obstetricia Votta Pdf</h2><br /><p><b><b>Download</b> &gt; <a href="https://byltly.com/2uKuYQ">https://byltly.com/2uKuYQ</a></b></p><br /><br />
-
- <p>The Compendio De Obstetricia Votta Pdf is a valuable resource for obstetrics students, residents, and specialists who want to update their skills and knowledge. It is also useful for other health professionals who work with pregnant women and newborns, such as nurses, midwives, pediatricians, and family doctors.</p>
-
- <p>You can download the Compendio De Obstetricia Votta Pdf for free from various websites on the internet[^1^] [^2^] [^3^]. However, we recommend that you buy the original book from a reputable publisher or bookstore to support the authors and ensure the quality of the content.</p>
-
- <p>The Compendio De Obstetricia Votta Pdf is a must-have for anyone who wants to learn more about obstetrics and improve their practice. It is a comprehensive guide that will help you provide the best care for your patients.</p>
-
- <h2>Obstetrics Trends in 2022</h2>
-
- <p>Obstetrics is a dynamic and evolving field that constantly adapts to new evidence, technologies, and challenges. In 2022, some of the trends that may shape obstetrics practice and research include:</p>
-
- <ul>
- <li><b>Malaria prevention in pregnancy.</b> Malaria is a major cause of maternal and fetal morbidity and mortality in endemic regions. A recent trial in East Africa compared different regimens of intermittent preventive treatment in pregnancy (IPTp) with sulfadoxine-pyrimethamine (SP) or dihydroartemisinin-piperaquine (DP), with or without azithromycin [ 1 ]. The results showed that DP was more effective than SP in reducing clinical malaria, but also associated with higher rates of adverse pregnancy outcomes. Further studies are needed to optimize malaria prevention strategies in areas with SP resistance.</li>
- <li><b>Intrauterine transfusion for alpha thalassemia major.</b> Alpha thalassemia major (ATM) is a severe form of hemolytic anemia that usually results in fetal demise unless intrauterine transfusions (IUT) are performed. A series of 19 pregnancies with prenatally diagnosed ATM showed that IUT can improve survival and neurodevelopmental outcomes, especially if initiated early [ 2 ]. IUT should be offered as a fetal therapy option for patients with ATM who wish to continue their pregnancies.</li>
- <li><b>Timing of aspirin discontinuation in preeclampsia prophylaxis.</b> Aspirin is widely used for preventing preeclampsia in high-risk pregnancies, but the optimal time to stop it before delivery is unclear. A randomized trial in Spain compared two strategies: stopping aspirin at 36 weeks or continuing it until delivery [ 3 ]. The trial found no significant difference between the groups in the incidence of preeclampsia or other maternal or neonatal outcomes. Thus, either approach may be reasonable depending on individual preferences and circumstances.</li>
- </ul>
-
- <p>These are just some of the examples of the current trends in obstetrics that may influence clinical practice and research in 2022. Obstetricians should stay updated on the latest evidence and guidelines to provide the best care for their patients.</p>
- <p></p> 81aa517590<br />
- <br />
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/FULL Windows XP SP3 Angel Live V.2.0.iso The Features and Benefits of this Superb XP.md DELETED
@@ -1,74 +0,0 @@
-
- <h1>What is Windows XP SP3 Angel Live V.2.0.iso?</h1>
- <p>Windows XP is one of the most popular and widely used operating systems in the world, even though it was released more than 20 years ago. However, Microsoft stopped supporting it in 2014, which means that it no longer receives security updates or bug fixes.</p>
- <h2>FULL Windows XP SP3 Angel Live V.2.0.iso</h2><br /><p><b><b>Download</b> > <a href="https://byltly.com/2uKvlE">https://byltly.com/2uKvlE</a></b></p><br /><br />
- <p>Fortunately, there are some unofficial versions of Windows XP that are still being maintained and updated by enthusiasts and developers who want to keep this operating system alive and functional.</p>
- <p>One of these versions is <b>Windows XP SP3 Angel Live V.2.0.iso</b>, which is a modified and enhanced version of Windows XP that can run from a CD or a USB drive without installation.</p>
- <p>This version of Windows XP has many features that make it faster, more stable, more secure, and more customizable than the original one.</p>
- <p>In this article, we will show you what are these features, why you should choose this version of Windows XP, how to download it, how to install it, how to use it, and how to troubleshoot it.</p>
- <h2>Why choose Windows XP SP3 Angel Live V.2.0.iso?</h2>
- <p>There are many reasons why you might want to choose <b>Windows XP SP3 Angel Live V.2.0.iso</b> over other versions of Windows XP or other operating systems.</p>
- <p>Here are some of the benefits of using this version of Windows XP:</p>
- <ul>
- <li><b>Speed:</b> This version of Windows XP is optimized for performance and runs faster than the original one. It has less bloatware and unnecessary services that slow down your system.</li>
- <li><b>Stability:</b> This version of Windows XP is more stable and reliable than the original one. It has fewer bugs and errors that cause crashes or freezes.</li>
- <li><b>Security:</b> This version of Windows XP is more secure than the original one. It has all the latest updates and patches that fix the vulnerabilities and exploits that affect Windows XP.</li>
- <li><b>Customization:</b> This version of Windows XP is more customizable than the original one. It has many tools and options that let you change the appearance and functionality of your system according to your preferences.</li>
- </ul>
- <h2>How to download Windows XP SP3 Angel Live V.2.0.iso?</h2>
- <p>If you want to try <b>Windows XP SP3 Angel Live V.2.0.iso</b>, you need to download the ISO file first.</p>
- <p>An ISO file is an image file that contains all the data and files that are needed to create a bootable CD or USB drive.</p>
- <p>You can download <b>Windows XP SP3 Angel Live V.2.0.iso</b> from various sources on the internet, but you need to be careful about where you get it from.</p>
- <p>Download Windows XP SP3 Angel Live V.2.0.iso full version<br />
- How to install Windows XP SP3 Angel Live V.2.0.iso on a USB drive<br />
- Windows XP SP3 Angel Live V.2.0.iso bootable CD/DVD<br />
- Windows XP SP3 Angel Live V.2.0.iso torrent link<br />
- Windows XP SP3 Angel Live V.2.0.iso free download with crack<br />
- Windows XP SP3 Angel Live V.2.0.iso system requirements and features<br />
- Windows XP SP3 Angel Live V.2.0.iso review and rating<br />
- Windows XP SP3 Angel Live V.2.0.iso activation key generator<br />
- Windows XP SP3 Angel Live V.2.0.iso serial number and product key<br />
- Windows XP SP3 Angel Live V.2.0.iso update and patch<br />
- Windows XP SP3 Angel Live V.2.0.iso comparison with other Windows versions<br />
- Windows XP SP3 Angel Live V.2.0.iso customization and optimization tips<br />
- Windows XP SP3 Angel Live V.2.0.iso troubleshooting and error fixing guide<br />
- Windows XP SP3 Angel Live V.2.0.iso backup and restore options<br />
- Windows XP SP3 Angel Live V.2.0.iso security and antivirus software<br />
- Windows XP SP3 Angel Live V.2.0.iso compatible drivers and hardware<br />
- Windows XP SP3 Angel Live V.2.0.iso best apps and games<br />
- Windows XP SP3 Angel Live V.2.0.iso online support and community forum<br />
- Windows XP SP3 Angel Live V.2.0.iso alternative download sources and mirrors<br />
- Windows XP SP3 Angel Live V.2.0.iso file size and format<br />
- Windows XP SP3 Angel Live V.2.0.iso license agreement and terms of use<br />
- Windows XP SP3 Angel Live V.2.0.iso history and development<br />
- Windows XP SP3 Angel Live V.2.0.iso advantages and disadvantages<br />
- Windows XP SP3 Angel Live V.2.0.iso screenshots and videos<br />
- Windows XP SP3 Angel Live V.2.0.iso FAQs and answers</p>
- <p>Some sources may provide fake or corrupted files that may harm your system or contain malware or viruses.</p>
- <p>To avoid these risks, we recommend you download <b>Windows XP SP3 Angel Live V.2.0.iso</b> from a reliable source such as Archive.org or YouTube. These sources provide direct links to download the ISO file without any surveys or ads.</p>
- <p>The size of the ISO file is about 633 MB, so make sure you have enough space on your hard drive or your USB drive before downloading it.</p>
- <p>After downloading the ISO file, you need to verify its integrity by checking its checksum or hash value.</p>
- <p>A checksum or hash value is a unique code that identifies a file based on its content.</p>
- <p>If two files have the same checksum or hash value, it means they are identical.</p>
- <p>If they have different checksums or hash values, it means they are different or corrupted.</p>
- <p>You can use various tools such as MD5 & SHA Checksum Utility or HashTab to calculate and compare the checksum or hash value of your downloaded ISO file with the original one provided by the source.</p>
- <p>If they match, it means your downloaded ISO file is valid and safe.</p>
- <p>If they don't match, it means your downloaded ISO file is invalid or tampered with.</p>
- <h3>How to install Windows XP SP3 Angel Live V.2.0.iso?</h3>
- <p>After downloading and verifying <b>Windows XP SP3 Angel Live V.2.0.iso</b>, you can install it on your system in two ways:</p>
- <ul>
- <li><b>On a computer:</b> You can install this version of Windows XP on a physical computer by burning the ISO file to a CD or a USB drive and booting from it.</li>
- <li><b>On a virtual machine:</b> You can install this version of Windows XP on a virtual machine by creating a virtual machine and mounting the ISO file as a virtual CD.</li>
- </ul>
- <h4>How to install Windows XP SP3 Angel Live V.2.0.iso on a computer?</h4>
- <p>To install <b>Windows XP SP3 Angel Live V.2.0.iso</b> on a computer, you need to burn the ISO file to a CD or a USB drive first.</p>
- <p>You can use various tools such as ImgBurn or Rufus to burn the ISO file to a CD or a USB drive respectively.</p>
- <p>You need to make sure that your CD or USB drive has enough space (at least 700 MB) and is formatted as FAT32.</p>
- <p>You also need to make sure that your computer supports booting from a CD or a USB drive.</p>
- <p>To do that, you need to access your computer's BIOS settings by pressing a specific key (usually F1, F2, F10, F12, ESC, DEL) during startup.</p>
- <p>In your BIOS settings, you need to find the boot order option and set your CD or USB drive as the first boot device.</p>
- <p>You can save your changes and exit your BIOS settings by pressing another specific key (usually F10).</p>
- <p>Your computer will restart and boot from your CD or USB drive automatically.</p>
- <h4>How to install Windows XP SP3 Angel Live V.2</p> 0a6ba089eb<br />
- <br />
- <br />
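The deleted article above verifies the ISO's integrity with GUI tools such as MD5 & SHA Checksum Utility or HashTab. A minimal sketch of that same verification step in plain Python, assuming a hypothetical local file name and a publisher-supplied SHA-256 string (neither value comes from the article):

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so a large ISO never has to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: substitute the real ISO path and the hash published by the source.
iso_path = "WindowsXP_SP3_Angel_Live_V2.0.iso"
published_hash = "<sha-256 string from the download page>"

actual_hash = file_sha256(iso_path)
if actual_hash == published_hash:
    print("Checksums match: the download is intact.")
else:
    print(f"Checksum mismatch ({actual_hash}): the file is corrupted or tampered with.")
```

Hashing in fixed-size chunks keeps memory use flat even for a 633 MB image; comparing the hex digest against the publisher's string is the entire verification.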
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Grand Ages Rome Gold Edition Serial What You Need to Know Before You Buy.md DELETED
@@ -1,11 +0,0 @@
-
- <h1>Grand Ages Rome Gold Edition Serial: How to Get It and Play the Game</h1>
- If you are a fan of strategy games set in historical periods, you might have heard of Grand Ages Rome. This is a city-building and management simulation game that lets you take control of one of the greatest civilizations in history. You can raise massive armies, embark on epic campaigns, expand your empire, and engage in grand-scale city building. You can also create magnificent cities with creativity and control like never before. But what if you want to play the enhanced version of the game, which includes the original Grand Ages Rome and its expansion pack, Reign of Augustus? This is where Grand Ages Rome Gold Edition comes in. This package offers more features, content, and gameplay options than the base game. For example, you can play as one of four new factions, access 12 new maps, build 6 new buildings, and enjoy improved graphics and performance. However, to play Grand Ages Rome Gold Edition, you need a valid serial number. This is a unique code that activates and registers your copy of the game. Without it, you won't be able to install or play the game properly. So how do you get a serial number for Grand Ages Rome Gold Edition? And how do you use it to install and play the game? In this article, we will answer these questions and more. <h2>Why do you need a serial number for Grand Ages Rome Gold Edition?</h2>
- A serial number is a sequence of letters and numbers that identifies your copy of the game. It is also known as a product key or an activation code. You need a serial number for Grand Ages Rome Gold Edition for two main reasons: - To activate the game: This means verifying that your copy of the game is legitimate and not pirated. Activation is usually done online, by entering your serial number on a website or through a software client. Activation prevents unauthorized copying and distribution of the game. - To register the game: This means creating an account that allows you to access online features of the game, such as multiplayer mode, leaderboards, achievements, and updates. Registration is usually done by entering your serial number and your email address on a website or through a software client. If you don't have a valid serial number for Grand Ages Rome Gold Edition, you might encounter some problems when trying to install or play the game. For example: - You might not be able to install the game at all, or only partially. - You might not be able to launch or run the game properly. - You might not be able to access online features or multiplayer mode. - You might get error messages or warnings that your copy of the game is invalid or duplicate. Therefore, it is important to have a valid serial number for Grand Ages Rome Gold Edition if you want to enjoy the full experience of the game. <h2>How to get a valid serial number for Grand Ages Rome Gold Edition?</h2>
- There are two main ways to get a valid serial number for Grand Ages Rome Gold Edition: the official way and the unofficial way. The official way is to buy the game from Steam or other authorized retailers. This is the most legal and safe way to get a serial number for Grand Ages Rome Gold Edition. When you buy the game from Steam or other authorized retailers, you will receive a serial number along with your purchase confirmation. You can then use this serial number to activate and register your copy of the game. The unofficial way is to use a crack or a keygen from online sources. This is an illegal and risky way to get a serial number for Grand Ages Rome Gold Edition. A crack is a file that modifies or bypasses the activation or registration process of the game. A keygen is a program that generates random serial numbers that might work for the game. When you download a crack or a keygen from online sources, you might be able to install and play the game without buying it. However, there are some drawbacks and dangers of using a crack or a keygen for Grand Ages Rome Gold Edition. For example: - You might violate the terms of service or end-user license agreement of the game developer or publisher. - You might infringe on the intellectual property rights or copyrights of the game developer or publisher. - You might expose your computer to viruses, malware, spyware, or other harmful software that might damage your system or steal your personal information. - You might not be able to access online features or multiplayer mode of the game. - You might not be able to update or patch your copy of the game. - You might not be able to get technical support or customer service from the game developer or publisher. Therefore, it is advisable to avoid using a crack or a keygen for Grand Ages Rome Gold Edition if you want to avoid legal troubles or security risks. <h2>How to install and play Grand Ages Rome Gold Edition with a serial number?</h2>
- Depending on whether you bought the game from Steam or other authorized retailers, or downloaded it from online sources, there are different steps for installing and playing Grand Ages Rome Gold Edition with a serial number. If you bought the game from Steam or other authorized retailers, here are the steps for installing and playing Grand Ages Rome Gold Edition with a serial number: - Download and install Steam on your computer if you don't have it already. - Launch Steam and log in with your account credentials. - Go to Library > Games > Add A Game > Activate A Product On Steam. - Enter your serial number for Grand Ages Rome Gold Edition when prompted. - Follow the instructions on screen to complete the activation process. - Once activated, you can download and install Grand Ages Rome Gold Edition from your Steam library. - Launch Grand Ages Rome Gold Edition from Steam and enjoy playing. Alternatively, if you bought a physical disc of Grand Ages Rome Gold Edition from an authorized retailer, here are the steps for installing and playing Grand Ages Rome Gold Edition with a serial number: - Insert your disc into your computer's CD/DVD drive. - Follow the instructions on screen to start the installation process. - Enter your serial number for Grand Ages Rome Gold Edition when prompted. - Follow the instructions on screen to complete the installation process. - Once installed, launch Grand Ages Rome Gold Edition from your desktop shortcut or start menu and enjoy playing. If you downloaded Grand Ages Rome Gold Edition from online sources along with a crack or a keygen file, here are the steps for installing and playing Grand Ages Rome Gold Edition with a serial number: - Extract your downloaded file using an archive program such as WinRAR or 7-Zip. - Run your keygen program and generate a random serial number for Grand Ages Rome Gold Edition. - Copy this serial number somewhere safe for later use. - Run your setup program and start installing Grand Ages Rome Gold Edition on your computer. - Enter your generated serial number when prompted during installation. - Follow any other instructions on screen to complete installation process. - Once installed, copy your crack file into your installation folder where your main executable file (Rome.exe) is located. Replace any existing files if asked. - Block your main executable file (Rome.exe) in your firewall program by creating an outbound rule that prevents it from accessing internet connection. This will prevent any online verification checks that might invalidate your copy of the game. - Launch Grand Ages Rome Gold Edition from your desktop shortcut or start menu and enjoy playing. <h2>Conclusion: Enjoy The grand strategy Game Set In Ancient Rome</h2>
- Grand Ages Rome Gold Edition is an amazing strategy game that lets you experience what it was like to be part of one of history's most powerful empires. You can build cities, wage wars, manage politics, and shape history as you see fit. However, to play this game, you need a valid serial number that activates and registers your copy of the game. You can get a serial number by buying the game from Steam or other authorized retailers, or by using a crack a keygen from online sources. However, each method has its own pros and cons, and you should be aware of the legal and security implications of using a crack or a keygen. Once you have a serial number, you can install and play Grand Ages Rome Gold Edition by following the steps for your chosen method. Whether you bought the game from Steam or other authorized retailers, or downloaded it from online sources, you should block the game in your firewall to prevent any online verification checks that might invalidate your copy of the game. Now that you have installed and played Grand Ages Rome Gold Edition with a serial number, you can enjoy the grand strategy game set in ancient Rome. You can choose from five different families, each with their own traits and abilities. You can also customize your character's appearance, skills, and talents. You can explore a vast map that covers Europe, Africa, and Asia. You can build and manage cities with over 40 different buildings and 50 different units. You can engage in real-time battles with thousands of soldiers and hundreds of weapons. You can also participate in historical events and scenarios that will shape the fate of Rome. Grand Ages Rome Gold Edition is a game that will challenge your strategic thinking and immerse you in a rich historical setting. With its stunning graphics, realistic sound effects, and captivating gameplay, Grand Ages Rome Gold Edition is a game that you will not regret playing. <h2>FAQs</h2>
- Here are some frequently asked questions about Grand Ages Rome Gold Edition Serial: - Q: Where can I buy Grand Ages Rome Gold Edition? - A: You can buy Grand Ages Rome Gold Edition from Steam or other authorized retailers such as Amazon, GOG.com, or Humble Bundle. - Q: How much does Grand Ages Rome Gold Edition cost? - A: Grand Ages Rome Gold Edition costs $14.99 on Steam, but it is often on sale for a lower price. - Q: What are the system requirements for Grand Ages Rome Gold Edition? - A: The minimum system requirements for Grand Ages Rome Gold Edition are: - OS: Windows XP or Vista - Processor: 2.5 GHz Single Core Processor - Memory: 1 GB RAM - Graphics: 128 MB 3D Video Card (GeForce 6600/Radeon 9600 or better) - DirectX: Version 9.0c - Storage: 4 GB available space - Sound Card: DirectX Compatible The recommended system requirements for Grand Ages Rome Gold Edition are: - OS: Windows XP or Vista - Processor: 2.5 GHz Dual Core Processor - Memory: 2 GB RAM - Graphics: 256 MB 3D Video Card (GeForce 8800/Radeon HD2900 or better) - DirectX: Version 9.0c - Storage: 4 GB available space - Sound Card: DirectX Compatible - Q: How many players can play Grand Ages Rome Gold Edition online? - A: Grand Ages Rome Gold Edition supports up to four players in online multiplayer mode. - Q: What are the differences between Grand Ages Rome and Grand Ages Rome Gold Edition? - A: Grand Ages Rome Gold Edition includes the original Grand Ages Rome and its expansion pack, Reign of Augustus. The expansion pack adds four new factions, 12 new maps, six new buildings, improved graphics and performance, and more gameplay options. </p>
- <h2>Grand Ages Rome Gold edition Serial</h2><br /><p><b><b>Download Zip</b> &#9675; <a href="https://byltly.com/2uKyYx">https://byltly.com/2uKyYx</a></b></p><br /><br /> 0a6ba089eb<br />
- <br />
- <br />
spaces/1phancelerku/anime-remove-background/Clash Royale for Windows 11 The Ultimate Guide to Install and Play.md DELETED
@@ -1,164 +0,0 @@
- <br />
- <h1>How to Download and Play Clash Royale on Windows 11</h1>
- <p>Are you a fan of strategy games that are fast-paced, fun, and competitive? Do you want to experience a new way of playing your favorite mobile game on your PC? If you answered yes to both questions, then you should definitely try out <strong>Clash Royale</strong> on <strong>Windows 11</strong>.</p>
- <h2>clash royale download windows 11</h2><br /><p><b><b>Download Zip</b> &#9881;&#9881;&#9881; <a href="https://jinyurl.com/2uNQoc">https://jinyurl.com/2uNQoc</a></b></p><br /><br />
- <h2>What is Clash Royale?</h2>
- <h3>A brief introduction to the game and its features</h3>
- <p>Clash Royale is a real-time multiplayer game developed and published by Supercell, the makers of the popular Clash of Clans. In this game, you collect and upgrade cards that feature characters, spells, and defenses from the Clash universe. You use these cards to battle other players online in a three-minute match where the goal is to destroy their towers and win trophies, crowns, and glory.</p>
- <p>The game has over 90 unique cards that belong to different rarities, types, and arenas. You can create your own battle deck with up to eight cards and customize it according to your play style and strategy. You can also join or form a clan with other players to share cards, chat, and participate in clan wars for big rewards.</p>
- <p>Clash Royale is constantly updated with new features, events, and challenges that keep the game fresh and exciting. You can unlock new cards, arenas, skins, emotes, magic items, and more as you progress through the game. You can also compete in global tournaments, seasonal events, special modes, and ladder matches to test your skills against the best players in the world.</p>
- <h3>Why play Clash Royale on Windows 11?</h3>
- <h4>The benefits of playing on a larger screen, better graphics, and smoother controls</h4>
- <p>While Clash Royale is primarily designed for mobile devices, playing it on Windows 11 can offer you some advantages that can enhance your gaming experience. Here are some of them:</p>
- <p>How to install clash royale on windows 11 PC<br />
- Clash royale windows 11 emulator download<br />
- Best clash royale decks for windows 11 players<br />
- Clash royale windows 11 update and new features<br />
- Clash royale windows 11 gameplay and tips<br />
- How to play clash royale with friends on windows 11<br />
- Clash royale windows 11 vs android comparison<br />
- How to fix clash royale not working on windows 11<br />
- Clash royale windows 11 system requirements and performance<br />
- How to transfer clash royale account from android to windows 11<br />
- Clash royale windows 11 review and rating<br />
- How to get free gems and coins in clash royale on windows 11<br />
- Clash royale windows 11 cheats and hacks<br />
- How to join a clan in clash royale on windows 11<br />
- Clash royale windows 11 tournaments and events<br />
- How to stream clash royale on windows 11 using OBS<br />
- Clash royale windows 11 keyboard and mouse controls<br />
- How to customize clash royale settings on windows 11<br />
- Clash royale windows 11 best graphics and sound options<br />
- How to backup and restore clash royale data on windows 11<br />
- Clash royale windows 11 offline mode and online mode<br />
- How to chat and communicate in clash royale on windows 11<br />
- Clash royale windows 11 challenges and quests<br />
- How to unlock new cards and upgrade them in clash royale on windows 11<br />
- Clash royale windows 11 strategy and guide<br />
- How to level up and rank up in clash royale on windows 11<br />
- Clash royale windows 11 achievements and rewards<br />
- How to report and block players in clash royale on windows 11<br />
- Clash royale windows 11 support and feedback<br />
- How to uninstall and reinstall clash royale on windows 11<br />
- Clash royale windows 11 vs ios comparison<br />
- How to download clash royale mod apk for windows 11<br />
- Clash royale windows 11 beta version and release date<br />
- Clash royale windows 11 news and updates<br />
- Clash royale windows 11 fan art and wallpapers<br />
- How to create and edit your own deck in clash royale on windows 11<br />
- Clash royale windows 11 best practices and tips for beginners<br />
- Clash royale windows 11 pros and cons<br />
- How to watch clash royale videos and live streams on windows 11<br />
- Clash royale windows 11 FAQ and troubleshooting</p>
- <ul>
- <li>You can enjoy the game on a larger screen with higher resolution and better graphics. This can help you see the details of the cards, units, towers, and arena more clearly. It can also make the game more immersive and engaging.</li>
- <li>You can use your mouse and keyboard to control the game instead of tapping on a small touchscreen. This can give you more accuracy, speed, and comfort when playing. You can also customize your key bindings and settings according to your preference.</li>
- <li>You can avoid battery drain, overheating, lagging, or crashing issues that may occur on some mobile devices. Playing on Windows 11 can ensure that your PC runs smoothly and efficiently without compromising your performance or enjoyment.</li>
- </ul <h2>How to Download and Install Clash Royale on Windows 11</h2>
- <h3>The minimum system requirements for Windows 11 and Clash Royale</h3>
- <p>Before you can download and play Clash Royale on Windows 11, you need to make sure that your PC meets the minimum system requirements for both the operating system and the game. Here are the specifications you need to check:</p>
- <table>
- <tr>
- <th>Windows 11</th>
- <th>Clash Royale</th>
- </tr>
- <tr>
- <td>
- <ul>
- <li>Processor: 1 GHz or faster with 2 or more cores on a compatible 64-bit processor or System on a Chip (SoC)</li>
- <li>RAM: 4 GB</li>
- <li>Storage: 64 GB or larger storage device</li>
- <li>Graphics card: Compatible with DirectX 12 or later with WDDM 2.0 driver</li>
- <li>Display: High definition (720p) display that is greater than 9” diagonally, 8 bits per color channel</li>
- <li>Internet connection: Required for updates and some features</li>
- </ul>
- </td>
- <td>
- <ul>
- <li>Android version: 4.1 and up</li>
- <li>RAM: 1 GB (recommended)</li>
- <li>Storage: 116 MB (additional files may be downloaded)</li>
- <li>Graphics: OpenGL ES 3.0 support (recommended)</li>
- <li>Internet connection: Required to play online</li>
- </ul>
- </td>
- </tr>
- </table>
- <p>If your PC meets or exceeds these requirements, you can proceed to the next step. If not, you may need to upgrade your hardware or look for other alternatives.</p>
- <h3>The steps to download and install an Android emulator (Bluestacks 5) on Windows 11</h3>
- <p>An Android emulator is a software that allows you to run Android apps and games on your PC. There are many Android emulators available online, but one of the most popular and reliable ones is <strong>Bluestacks 5</strong>. Bluestacks 5 is the latest version of the Bluestacks app player that offers improved performance, compatibility, and features for Windows 11 users.</p>
- <p>To download and install Bluestacks 5 on Windows 11, follow these steps:</p>
- <ol>
- <li>Go to the official website of Bluestacks at <a href="">https://www.bluestacks.com/</a></li>
- <li>Click on the <strong>Download Bluestacks 5</strong> button and wait for the installer file to download.</li>
- <li>Double-click on the installer file and follow the instructions on the screen to install Bluestacks 5 on your PC.</li>
- <li>Once the installation is complete, launch Bluestacks 5 from your desktop or start menu.</li>
- <li>Sign in with your Google account or create a new one if you don't have one.</li>
- <li>You are now ready to use Bluestacks 5 and access the Google Play Store.</li>
- </ol>
- <h3>The steps to download and install Clash Royale from the Google Play Store on Bluestacks 5</h3>
- <p>Now that you have Bluestacks 5 installed on your PC, you can easily download and install Clash Royale from the Google Play Store. Here are the steps to do so:</p>
- <ol>
- <li>On the Bluestacks home screen, click on the <strong>Google Play Store</strong> icon.</li>
- <li>In the search bar, type <strong>Clash Royale</strong> and hit enter.</li>
- <li>Select <strong>Clash Royale</strong> from the list of results and click on the <strong>Install</strong> button.</li>
- <li>Wait for the game to download and install on your PC.</li>
- <li>Once the installation is done, click on the <strong>Open</strong> button or go back to the Bluestacks home screen and click on the <strong>Clash Royale</strong> icon.</li>
- <li>You can now enjoy playing Clash Royale on your PC with Bluestacks 5.</li>
- </ol troops, such as Giant or Balloon. You can also use a spell card (such as Arrows or Fireball) to damage or eliminate multiple enemy troops or buildings at once. You can also use a high HP troop card (such as Knight or Valkyrie) to tank and distract enemy troops while your towers or other troops deal damage to them.</p>
- <p>By using buildings, spells, and high HP troops to defend your towers, you can prevent your opponent from gaining an elixir or tower advantage and turn the tide of the battle in your favor. You can also save your towers from being destroyed and losing the game.</p>
- <h4>Use a win condition card to target enemy towers</h4>
- <p>A fifth way to improve your gameplay in Clash Royale is to use a win condition card to target enemy towers. A win condition card is a card that can directly or indirectly deal damage to enemy towers and help you win the game. Some examples of win condition cards are Hog Rider, Royal Giant, Graveyard, Miner, Goblin Barrel, and X-Bow. These cards have different strengths and weaknesses, but they all share the same goal: to destroy enemy towers.</p>
- <p>By using a win condition card to target enemy towers, you can increase your chances of winning the game by dealing consistent and significant damage to your opponent's towers. You can also force your opponent to react and spend elixir to defend their towers, which can give you an elixir or tower advantage.</p>
- <h2>Conclusion</h2>
- <h3>A summary of the main points and a call to action for the readers to try out Clash Royale on Windows 11</h3>
- <p>In conclusion, Clash Royale is a fun and addictive game that you can enjoy on Windows 11 with the help of an Android emulator like Bluestacks 5. By playing Clash Royale on Windows 11, you can benefit from a larger screen, better graphics, and smoother controls. You can also improve your gameplay by following some tips and tricks, such as joining a clan, attacking in pairs, counting elixir, defending your towers, and using a win condition card.</p>
- <p>If you are interested in trying out Clash Royale on Windows 11, you can download and install Bluestacks 5 from their official website and then download and install Clash Royale from the Google Play Store on Bluestacks 5. You can then start playing Clash Royale on your PC and have a blast with your friends and foes.</p>
- <p>What are you waiting for? Download Clash Royale on Windows 11 today and join the millions of players who are already enjoying this amazing game!</p>
- <h2>FAQs</h2>
- <h3>What are the best cards in Clash Royale?</h3>
- <p>There is no definitive answer to this question, as different cards may suit different players, decks, strategies, and situations. However, some of the most popular and versatile cards in Clash Royale are:</p>
- <ul>
- <li>Mega Knight: A legendary card that costs 7 elixir and can deal massive damage with its jump and splash attacks. It can counter swarms, tanks, buildings, and ground troops effectively.</li>
- <li>Skeleton Dragons: A common card that costs 4 elixir and spawns two flying skeletons that shoot fireballs. It can deal decent damage to air and ground troops and buildings.</li>
- <li>Mother Witch: A legendary card that costs 4 elixir and shoots cursed bolts that turn enemy troops into hogs when they die. It can create a swarm of hogs that can overwhelm the enemy's defense.</li>
- <li>Royal Delivery: A rare card that costs 3 elixir and drops a crate that deals area damage and spawns a Royal Recruit. It can be used to surprise and counter enemy troops or buildings.</li>
- <li>Goblin Cage: A rare card that costs 4 elixir and spawns a building that releases a Goblin Brawler when destroyed. It can be used to lure and distract enemy troops or deal damage to enemy towers.</li>
- </ul <h3>How do I get more gems and gold in Clash Royale?</h3>
- <p>Gems and gold are two of the most important resources in Clash Royale, as they allow you to buy chests, cards, magic items, emotes, skins, and more. There are several ways to get more gems and gold in Clash Royale:</p>
- <ul>
- <li>Complete quests, achievements, challenges, tournaments, clan wars, seasonal events, special modes, ladder matches, etc. These activities can reward you with gems, gold, chests, magic items, etc.</li>
- <li>Open chests that you get from battles or quests. These chests can contain gems, gold, cards, magic items, etc.</li>
- <li>Donate or request cards from your clanmates. This can earn you gold and XP for each card you donate or request.</li>
- <li>Buy gems or gold from the shop with real money. This is the fastest but most expensive way to get more gems and gold in Clash Royale.</li>
- </ul>
- <h3>How do I join or create a clan in Clash Royale?</h3>
- <p>Joining or creating a clan in Clash Royale is a great way to interact with other players, share cards, chat, and participate in clan wars. To join or create a clan in Clash Royale, you need to reach at least level 1 in the game. You can then follow these steps:</p>
- <ol>
- <li>Tap on the <strong>Clan</strong> tab on the main screen.</li>
- <li>Tap on the <strong>Join a Clan</strong> button to browse or search for a clan that suits your preferences. You can filter the clans by name, location, trophy requirement, type, etc.</li>
- <li>Tap on the <strong>Request to Join</strong> button to send a request to the clan leader or co-leader. You can also write a message to introduce yourself and explain why you want to join the clan.</li>
- <li>Wait for the clan leader or co-leader to accept or reject your request. If they accept your request, you will become a member of the clan and be able to access the clan chat, shop, wars, etc.</li>
- <li>If you want to create your own clan instead of joining an existing one, you can tap on the <strong>Create a Clan</strong> button instead of the <strong>Join a Clan</strong> button. You will need to spend 1000 gold to create a clan.</li>
- <li>You can then choose a name, badge, location, type, trophy requirement, description, and tag for your clan. You can also invite your friends or family to join your clan or accept requests from other players who want to join your clan.</li>
- <li>You will become the leader of your clan and be able to manage it as you wish. You can promote or demote members, start or cancel clan wars, edit the clan settings, etc.</li>
- </ol>
- <h3>How do I change my name or avatar in Clash Royale?</h3>
- <p>Changing your name or avatar in Clash Royale is a simple and quick process that can help you personalize your profile and express your identity. To change your name or avatar in Clash Royale, follow these steps:</p>
- <ol>
- <li>Tap on your profile icon on the top left corner of the main screen.</li>
- <li>Tap on the <strong>Name Change</strong> button or the <strong>Edit Avatar</strong> button depending on what you want to change.</li>
- <li>If you want to change your name, you can enter a new name in the text box and tap on the <strong>Confirm</strong> button. You can only change your name once for free, so choose wisely. If you want to change your name again, you will need to spend 500 gems.</li>
- <li>If you want to change your avatar, you can choose from a variety of avatars that feature different characters, animals, objects, etc. You can also unlock more avatars by completing achievements, challenges, events, etc. Tap on the avatar that you like and tap on the <strong>Select</strong> button.</li>
- <li>Your name or avatar will be changed immediately and be visible to other players in the game.</li>
- </ol>
- <h3>How do I contact Supercell for support or feedback?</h3>
- <p>If you have any issues, questions, suggestions, or feedback regarding Clash Royale or any other Supercell game, you can contact Supercell for support or feedback through their official channels. Here are some ways to do so:</p>
- <ul>
- <li>You can use the in-game support feature by tapping on the <strong>Settings</strong> icon on the top right corner of the main screen and then tapping on the <strong>Help and Support</strong> button. You can then browse through the frequently asked questions (FAQs) or contact Supercell directly by tapping on the <strong>Contact Us</strong> button.</li>
- <li>You can visit the official website of Supercell at <a href="">https://supercell.com/en/</a> and click on the <strong>Contact Us</strong> link at the bottom of the page. You can then fill out a form with your details and message and submit it to Supercell.</li>
- <li>You can follow Supercell on their social media platforms such as Facebook, Twitter, Instagram, YouTube, Reddit, Discord, etc. You can then send them a message or comment on their posts with your feedback or inquiry.</li>
- </ul <p>Supercell is usually responsive and helpful when it comes to addressing their players' concerns and opinions. However, please be respectful and polite when contacting them and avoid spamming or abusing them.</p>
- <p>I</p> 197e85843d<br />
- <br />
- <br />
spaces/1phancelerku/anime-remove-background/Download Nguwe by Q-Mark TpZee Afriikan Papi - Amapiano Mp3 2022.md DELETED
@@ -1,83 +0,0 @@
1
- <br />
2
- <h1>How to Download Q Mark Nguwe Mp3</h1>
3
- <p>Q Mark Nguwe mp3 is a hit song by South African artists Q-Mark, TpZee, and Afriikan Papi. It is a love-themed track with a nostalgic eighties dance feel, a simple baseline, and smooth vocals. The song has been streamed millions of times on various platforms, such as YouTube, Spotify, Apple Music, and more. If you are a fan of this song and want to download it as an mp3 file, you might be wondering how to do it.</p>
4
- <h2>download q mark nguwe mp3</h2><br /><p><b><b>Download File</b> &#8230;&#8230;&#8230; <a href="https://jinyurl.com/2uNTZN">https://jinyurl.com/2uNTZN</a></b></p><br /><br />
5
- <p>Downloading mp3 files has many advantages. You can listen to your favorite music offline, without using data or Wi-Fi. You can also transfer the files to different devices, such as your phone, tablet, computer, or mp3 player. You can also create playlists, edit tags, and customize your music library.</p>
6
- <p>There are different ways to download mp3 files, depending on your device, budget, and preference. In this article, we will show you three main methods to download Q Mark Nguwe mp3: buying music on desktop with iTunes, downloading music for free from YouTube and SoundCloud, and downloading music from other websites or apps. Let's get started!</p>
7
- <h2>Method 1: Buying Music on Desktop with iTunes</h2>
8
- <p>If you have a Windows or Mac computer, you can use iTunes to buy and download Q Mark Nguwe mp3. iTunes is a software that allows you to manage your music library, sync your devices, and access the iTunes Store. Here are the steps to follow:</p>
9
- <p>download q mark nguwe mp3 free<br />
10
- download q mark nguwe mp3 song<br />
11
- download q mark nguwe mp3 2022<br />
12
- download q mark nguwe mp3 audio<br />
13
- download q mark nguwe mp3 music<br />
14
- download q mark nguwe mp3 online<br />
15
- download q mark nguwe mp3 portalmoznews<br />
16
- download q mark nguwe mp3 fakaza<br />
17
- download q mark nguwe mp3 zamusic<br />
18
- download q mark nguwe mp3 tubidy<br />
19
- download q mark nguwe mp3 waploaded<br />
20
- download q mark nguwe mp3 hiphopza<br />
21
- download q mark nguwe mp3 sahiphop<br />
22
- download q mark nguwe mp3 naijavibes<br />
23
- download q mark nguwe mp3 tooxclusive<br />
24
- download q mark nguwe mp3 justnaija<br />
25
- download q mark nguwe mp3 afrobeat<br />
26
- download q mark nguwe mp3 amapiano<br />
27
- download q mark nguwe mp3 gqom<br />
28
- download q mark nguwe mp3 house<br />
29
- download q mark nguwe mp3 kizomba<br />
30
- download q mark nguwe mp3 zouk<br />
31
- download q mark nguwe mp3 bongo flava<br />
32
- download q mark nguwe mp3 afro pop<br />
33
- download q mark nguwe mp3 r&b<br />
34
- download q mark nguwe mp3 hip hop<br />
35
- download q mark nguwe mp3 rap<br />
36
- download q mark nguwe mp3 reggae<br />
37
- download q mark nguwe mp3 dancehall<br />
38
- download q mark nguwe mp3 gospel<br />
39
- download q mark nguwe mp3 instrumental<br />
40
- download q mark nguwe mp3 remix<br />
41
- download q mark nguwe mp3 cover<br />
42
- download q mark nguwe mp3 video<br />
43
- download q mark nguwe mp3 lyrics<br />
44
- download q mark nguwe tpzee afriikan papi mp3 <br />
45
- download tpzee afriikan papi ft. q-mark -nguwe 2022.mp3 <br />
46
- how to download q-mark -nguwe 2022.mp3 <br />
47
- where to download tpzee afriikan papi ft. q-mark -nguwe 2022.mp3 <br />
48
- best site to download tpzee afriikan papi ft. q-mark -nguwe 2022.mp3</p>
49
- <ol>
50
- <li><strong>Install iTunes and sign in with Apple ID.</strong> If you are using a Mac, iTunes is already installed on your computer. If you are using Windows, you need to download and install iTunes from [17](http://www.apple.com/itunes/download). You also need to create an Apple ID account and enter payment information for it before you can buy music from iTunes.</li>
51
- <li><strong>Search for music and buy it with iTunes.</strong> Open iTunes and click Store at the top of the window. In the search bar, type in Q Mark Nguwe mp3 or any other song, album, or artist you want. Select the music you want to buy and click the price button next to it. Enter your Apple ID password or use Touch ID if you have a MacBook with a Touch Bar.</li>
52
- <li><strong>View and transfer the music files on Windows or Mac.</strong> After buying the music, it will be added to your iTunes library automatically. You can view the files by clicking Library at the top of the window. You can also transfer the files to different devices by connecting them to your computer with a USB cable or using iCloud Music Library if you have an Apple Music subscription.</li>
53
- </ol>
54
- <h2>Method 2: Downloading Music for Free from YouTube and SoundCloud</h2>
55
- <p>If you don't want to spend money on buying music, you can also download Q Mark Nguwe mp3 for free from YouTube or SoundCloud. These are two popular platforms that host millions of music videos and audio tracks. However, you need to use a third-party website or app to convert and download the mp3 file. Here are the steps to follow:</p>
56
- <ol>
57
- <li><strong>Find and copy the link of the music video or audio track.</strong> Go to YouTube or SoundCloud and search for Q Mark Nguwe mp3 or any other song you want. Select the video or track you want and copy the link from the address bar of your browser.</li>
58
- <li><strong>Use a third-party website or app to convert and download the mp3 file.</strong> There are many websites and apps that allow you to convert and download mp3 files from YouTube or SoundCloud, such as [16](https://ytmp3.cc/en13/), [15](https://www.4kdownload.com/products/product-youtubetomp3), [14](https://sclouddownloader.net/), and [13](https://soundcloudmp3.org/). Choose one that suits your needs and preferences, and paste the link you copied in the input box. Click the convert or download button and wait for the process to finish.</li>
59
- <li><strong>Check the quality and legality of the downloaded file.</strong> After downloading the mp3 file, you can check its quality by playing it on your device or using an audio analysis tool like [12](https://spek.cc/). You can also check its legality by reading the terms and conditions of the website or app you used, and the license of the original music. Some music may be protected by copyright laws, which means you cannot download or use it without permission from the owner.</li>
60
- </ol>
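- <p>For readers comfortable with scripting, the same convert-and-download step can be automated instead of done through a website. The sketch below is a minimal example using the open-source yt-dlp library; yt-dlp is not mentioned by the converter sites above, so treat this as one possible approach. It assumes yt-dlp and FFmpeg are already installed, and the URL is a placeholder for whatever link you copied in step 1.</p>
- <pre><code>
- # pip install yt-dlp   (FFmpeg must also be on your PATH)
- from yt_dlp import YoutubeDL
-
- url = "https://www.youtube.com/watch?v=VIDEO_ID"  # placeholder link from step 1
- options = {
-     "format": "bestaudio/best",       # grab the best available audio stream
-     "outtmpl": "%(title)s.%(ext)s",   # name the file after the video title
-     "postprocessors": [{              # re-encode the audio to mp3 via FFmpeg
-         "key": "FFmpegExtractAudio",
-         "preferredcodec": "mp3",
-         "preferredquality": "192",    # target bitrate in kbps
-     }],
- }
- with YoutubeDL(options) as ydl:
-     ydl.download([url])
- </code></pre>
- <p>Pass more URLs in the list to batch-convert several tracks. The legality caveat from step 3 applies unchanged: scripting the download does not change the copyright status of the track.</p>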
61
- <h2>Method 3: Downloading Music from Other Websites or Apps</h2>
62
- <p>If you are not satisfied with iTunes, YouTube, or SoundCloud, you can also download Q Mark Nguwe mp3 from other websites or apps that offer mp3 downloads. However, you need to be careful when choosing these sources, as some of them may be unreliable, unsafe, or illegal. Here are some tips to follow:</p>
63
- <ul>
64
- <li><strong>Search for reliable and safe websites or apps that offer mp3 downloads.</strong> You can use a search engine like Bing to find websites or apps that offer mp3 downloads. You can also use a tool like [11](https://www.virustotal.com/gui/home/url) to scan the URL of the website or app before visiting it. You can also read reviews and ratings from other users to see if they are trustworthy and secure.</li>
65
- <li><strong>Choose the best format and quality for your device and preference.</strong> Different websites or apps may offer different formats and qualities for mp3 downloads, such as 128 kbps, 192 kbps, 320 kbps, etc. The higher the bitrate, the better the sound quality, but also the larger the file size. You should choose a format and quality that matches your device's storage capacity and your listening preference (a quick size estimate follows this list).</li>
66
- <li><strong>Avoid malware and viruses when downloading mp3 files.</strong> Some websites or apps may try to trick you into downloading unwanted software, malware, or viruses along with the mp3 files. You should avoid clicking on pop-ups, ads, or suspicious links that appear on these sources. You should also use an antivirus software like [10](https://www.microsoft.com/en-us/windows/comprehensive-security) to scan your device regularly and remove any threats.</li>
67
- </ul>
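- <p>As a quick size estimate for the bitrates mentioned above: an mp3's size is roughly its bitrate multiplied by its duration. For example, a four-minute (240-second) track at 320 kbps works out to about 320,000 bits/s &times; 240 s &divide; 8 &asymp; 9.6 MB, while the same track at 128 kbps is only about 3.8 MB. The track length here is illustrative; the point is that raising the bitrate by 2.5x scales the file size by the same factor.</p>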
68
- <h2>Conclusion</h2>
69
- <p>In this article, we have shown you three main methods to download Q Mark Nguwe mp3: buying music on desktop with iTunes, downloading music for free from YouTube and SoundCloud, and downloading music from other websites or apps. Each method has its pros and cons, so you should choose the one that suits your needs and preferences best.</p>
70
- <p>We hope this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you!</p>
71
- <h2>Frequently Asked Questions</h2>
72
- <h4>What is Q Mark Nguwe mp3?</h4>
73
- <p>Q Mark Nguwe mp3 is a hit song by South African artists Q-Mark, TpZee, and Afriikan Papi. It is a love-themed track with a nostalgic eighties dance feel, a simple bassline, and smooth vocals.</p>
74
- <h4>Why should I download mp3 files?</h4>
75
- <p>Downloading mp3 files has many advantages. You can listen to your favorite music offline, without using data or Wi-Fi. You can also transfer the files to different devices, such as your phone, tablet, computer, or mp3 player. You can also create playlists, edit tags, and customize your music library.</p>
76
- <h4>How can I buy music on desktop with iTunes?</h4>
77
- <p>You can buy music on desktop with iTunes by installing iTunes on your Windows or Mac computer, signing in with your Apple ID account, searching for music and buying it with iTunes, and viewing and transferring the music files on Windows or Mac.</p>
78
- <h4>How can I download music for free from YouTube and SoundCloud?</h4>
79
- <p>You can download music for free from YouTube and SoundCloud by finding and copying the link of the music video or audio track, using a third-party website or app to convert and download the mp3 file, and checking the quality and legality of the downloaded file.</p>
80
- <h4>How can I download music from other websites or apps?</h4>
81
- <p>You can download music from other websites or apps by searching for reliable and safe websites or apps that offer mp3 downloads, choosing the best format and quality for your device and preference, and avoiding malware and viruses when downloading mp3 files.</p>
spaces/1phancelerku/anime-remove-background/Download Traffic Racer MOD APK for iOS and Enjoy Unlimited Money.md DELETED
@@ -1,124 +0,0 @@
1
- <br />
2
- <h1>Traffic Racer Mod APK for iOS: How to Install and Play</h1>
3
- <p>If you are looking for a fun and addictive racing game that will keep you entertained for hours, you might want to check out <strong>traffic racer mod apk</strong>. This is a modified version of the popular traffic racer game that offers unlimited money, unlocked cars, and other features that make the game more enjoyable. But what if you want to play this game on your iOS device? Is it possible to install and run traffic racer mod apk on iOS? In this article, we will answer these questions and show you how to install and play traffic racer mod apk on iOS devices. We will also tell you about the benefits and features of this game and some frequently asked questions.</p>
4
- <h2>What is Traffic Racer Mod APK?</h2>
5
- <p>Traffic Racer is a milestone in the genre of endless arcade racing. It is a game where you drive your car through highway traffic, earn cash, upgrade your car, and buy new ones. You can try to be one of the fastest drivers in the global leaderboards and enjoy the stunning 3D graphics and smooth car handling. The game has over 40 different cars, 5 detailed environments, and 5 game modes to choose from. You can also customize your car through paint and wheels and compete with other players online.</p>
7
- <p>Traffic Racer Mod APK is a modified version of the original game that gives you some extra features that are not available in the official version. For example, you can get unlimited money to buy any car you want, unlock all cars and levels, remove ads, and enjoy faster gameplay. These features make the game more fun and exciting, as you can drive any car you like and race without any limitations.</p>
8
- <h2>Why Would Someone Want to Play Traffic Racer Mod APK on iOS?</h2>
9
- <p>There are many reasons why someone would want to play traffic racer mod apk on iOS devices. Some of them are:</p>
10
- <ul>
11
- <li>They love racing games and want to try something new and different.</li>
12
- <li>They want to enjoy the benefits and features of the modded version without spending any money.</li>
13
- <li>They want to challenge themselves and compete with other players online.</li>
14
- <li>They want to have fun and kill some time.</li>
15
- </ul>
16
- <p>However, there is one problem: traffic racer mod apk is not available on the Apple App Store. This means that you cannot download and install it directly from there. So, how can you play this game on your iOS device? There are two methods that you can use:</p>
17
- <h2>How to Install Traffic Racer Mod APK on iOS Devices</h2>
18
- <h3>Method 1: Jailbreak Your Device and Use Cydia</h3>
19
- <p>The first method is to jailbreak your device and use Cydia. Jailbreaking is a process that allows you to modify the file system of your device and install custom applications that are not authorized by Apple. Cydia is an app store for jailbroken devices that lets you download and install various apps, tweaks, themes, and mods.</p>
20
- <p>To use this method, you need to follow these steps:</p>
21
- <ol>
22
- <li>Jailbreak your device using a tool like Checkra1n or Unc0ver. You can find tutorials online on how to do this.</li>
23
- <li>Open Cydia and add a source that has traffic racer mod apk. You can search online for such sources or use this one: [10](https://oceanofgamesu.com/traffic-racer-mod-apk-download).</li>
24
- <li>Search for traffic racer mod apk in the search bar and tap on the install button.</li>
25
- <li>Wait for the installation to finish and then launch the game from your home screen.</li>
26
- <li>Enjoy playing traffic racer mod apk on your iOS device.</li>
27
- </ol>
28
- <p>This method is easy and fast, but it has some drawbacks. First, you need to jailbreak your device, which can void your warranty and expose your device to security risks. Second, you need to find a reliable source that has traffic racer mod apk, which can be hard to do. Third, you may encounter some compatibility issues or bugs while playing the game.</p>
29
- <h3>Method 2: Find the IPA Equivalent and Use Cydia Impactor</h3>
30
- <p>The second method is to find the IPA equivalent of traffic racer mod apk and use Cydia Impactor. IPA is the file format for iOS applications that can be installed on your device using a computer. Cydia Impactor is a tool that allows you to sideload IPA files onto your device without jailbreaking it.</p>
31
- <p>To use this method, you need to follow these steps:</p>
81
- <ol>
82
- <li>Find the IPA equivalent of traffic racer mod apk. You can search online for such files or use this one: [9](https://iosninja.io/ipa-library/download-traffic-racer-hack-ipa-ios).</li>
83
- <li>Download Cydia Impactor from [8](https://cydiaimpactor.com) and install it on your computer.</li>
84
- <li>Connect your iOS device to your computer using a USB cable and launch Cydia Impactor.</li>
85
- <li>Drag and drop the IPA file onto Cydia Impactor and enter your Apple ID and password when prompted.</li>
86
- <li>Wait for the installation to finish and then trust the app from your device settings.</li>
87
- <li>Launch the game from your home screen and enjoy playing traffic racer mod apk on your iOS device.</li>
88
- </ol>
89
- <p>This method is safer and more reliable than the first one, but it has some limitations. First, you need to have a computer and a USB cable to perform this method. Second, you need to enter your Apple ID and password, which can be risky if you use a fake or hacked one. Third, you need to trust the app from your device settings, which can be revoked by Apple at any time.</p>
90
- <h2>Benefits and Features of Traffic Racer Game</h2>
91
- <p>Whether you use the first or the second method, you will be able to enjoy the benefits and features of traffic racer game on your iOS device. Some of them are:</p>
92
- <ul>
93
- <li><strong>Stunning 3D graphics and realistic car handling</strong>: The game has amazing 3D graphics that make you feel like you are driving in real life. The cars have realistic physics and sound effects that enhance the gameplay experience.</li>
94
- <li><strong>Over 40 different cars and 5 game modes to choose from</strong>: The game has a variety of cars that you can drive, from sedans and sports cars to trucks and buses. You can also choose from 5 different game modes, such as endless, two-way, time trial, police chase, and free ride.</li>
95
- <li><strong>Customization options and online leaderboards</strong>: The game allows you to customize your car through paint and wheels. You can also upgrade your car's speed, handling, and braking. You can also compete with other players online through the global leaderboards and see how you rank among them.</li>
96
- </ul>
97
- <h2>Conclusion</h2>
98
- <p>Traffic Racer Mod APK is a great racing game that you can play on your iOS device. It offers unlimited money, unlocked cars, and other features that make the game more fun and exciting. However, since it is not available on the App Store, you need to use either jailbreaking or sideloading methods to install it on your device. Both methods have their pros and cons, so you need to choose the one that suits you best. Once you install the game, you can enjoy its benefits and features and have a blast driving through highway traffic.</p>
99
- <h2>FAQs</h2>
100
- <h3>What are the risks of installing traffic racer mod apk on iOS devices?</h3>
101
- <p>The risks of installing traffic racer mod apk on iOS devices depend on the method that you use. If you use jailbreaking, you may void your warranty, expose your device to security risks, or encounter compatibility issues or bugs. If you use sideloading, you may risk your Apple ID and password, or lose access to the app if Apple revokes it.</p>
102
- <h3>How can I update traffic racer mod apk on iOS devices?</h3>
103
- <p>To update traffic racer mod apk on iOS devices, you need to follow the same steps that you used to install it. You need to find the latest version of the modded file (either apk or ipa) and install it using the same tool (either Cydia or Cydia Impactor). You may need to delete the previous version of the game before installing the new one.</p>
104
- <h3>How can I get unlimited money in traffic racer mod apk?</h3>
105
- <p>To get unlimited money in traffic racer mod apk, you do not need to do anything special. The modded version of the game already gives you unlimited money to buy and upgrade any car you want. You can also earn more money by playing the game and completing missions.</p>
106
- <h3>What are some tips and tricks for playing traffic racer game?</h3>
107
- <p>Some tips and tricks for playing traffic racer game are:</p>
108
- <ul>
109
- <li>Drive faster to get more scores and cash.</li>
110
- <li>Use the nitro boost to overtake other cars and avoid collisions.</li>
111
- <li>Drive in the opposite direction in two-way mode to get extra score and cash.</li>
112
- <li>Do not crash into other cars or obstacles, as this will damage your car and reduce your speed.</li>
113
- <li>Try different cars and game modes to find the one that suits your style.</li>
114
- </ul>
115
- <h3>What are some alternatives to traffic racer game?</h3>
116
- <p>If you are looking for some alternatives to traffic racer game, you can try these games:</p>
117
- <ul>
118
- <li><strong>Traffic Rider</strong>: This is a similar game where you ride a motorcycle instead of a car. You can enjoy the first-person view, realistic bike sounds, and over 30 different bikes.</li>
119
- <li><strong>Traffic Tour</strong>: This is another racing game where you drive through traffic, perform stunts, and challenge other players online. You can also customize your car, change the camera view, and use different controls.</li>
120
- <li><strong>Traffic Run</strong>: This is a casual game where you tap to control your car and avoid hitting other vehicles or obstacles. You can also collect coins, unlock new cars, and explore different environments.</li>
121
- </ul>
122
- <p>I hope you enjoyed this article and learned how to install and play traffic racer mod apk on iOS devices. If you have any questions or feedback, please leave a comment below. Thank you for reading!</p>
spaces/1phancelerku/anime-remove-background/Experience the Thrill of KOF M.U.G.E.N 2020 on Your Smartphone.md DELETED
@@ -1,135 +0,0 @@
1
- <br />
2
- <h1>KOF M.U.G.E.N 2020 Download APK: How to Play the Ultimate Fighting Game on Your Android Device</h1> <p>Do you love fighting games? Do you want to play one of the most popular and customizable fighting games on your Android device? If you answered yes to both questions, then you should definitely try KOF M.U.G.E.N 2020 APK.</p>
4
- <p>KOF M.U.G.E.N 2020 APK is a fan-made game that combines characters, stages, music, and gameplay from various SNK franchises such as The King of Fighters, Fatal Fury, Art of Fighting, Samurai Shodown, Metal Slug, and more. It is based on the M.U.G.E.N engine, which allows anyone to create their own fighting games with ease.</p>
5
- <p>In this article, we will tell you everything you need to know about KOF M.U.G.E.N 2020 APK, including what it is, how to download and install it on your Android device, how to customize and edit it according to your preferences, and why you should give it a try. We will also answer some frequently asked questions about KOF M.U.G.E.N 2020 APK at the end of this article.</p>
6
- <h2>What is KOF M.U.G.E.N 2020?</h2>
7
- <h3>A Brief History of KOF M.U.G.E.N</h3> <p>KOF M.U.G.E.N is a series of fan-made games that started in 2002 by a group of Brazilian fans who wanted to create their own version of The King of Fighters, a popular fighting game franchise by SNK. They used the M.U.G.E.N engine, a freeware game engine that allows anyone to create 2D fighting games with custom characters, stages, music, and gameplay.</p>
8
- <p>Over the years, KOF M.U.G.E.N has evolved and improved, adding more characters, stages, modes, and features from various SNK games and other sources. KOF M.U.G.E.N 2020 is the latest and most advanced version of the series, featuring over 200 characters, over 100 stages, and many options and settings to customize the game to your liking.</p>
49
- <h3>Features and Gameplay of KOF M.U.G.E.N 2020</h3>
50
- <p>KOF M.U.G.E.N 2020 is a 2D fighting game that follows the same basic rules and mechanics as The King of Fighters. You can choose from several modes, such as Arcade, Team Battle, Survival, Training, Watch, and more. You can also choose from different types of teams, such as Single, Simul, Turns, or Tag.</p>
51
- <p>The gameplay of KOF M.U.G.E.N 2020 is fast-paced and fluid, with smooth animations and responsive controls. You can perform various moves and combos with your characters, such as punches, kicks, throws, special moves, super moves, and ultimate moves. You can also use different systems and mechanics, such as Power Gauge, Max Mode, Guard Cancel, Counter Attack, Roll Escape, and more.</p>
52
- <p>KOF M.U.G.E.N 2020 also has many features that make it unique and fun to play. For example, you can adjust the difficulty level, the number of rounds, the time limit, the damage ratio, the life recovery rate, and other options. You can also enable or disable certain features, such as AI mode, debug mode, cheats mode, auto guard mode, and more. You can also change the screen resolution, the sound volume, the language, the input configuration, and other settings.</p>
53
- <h3>Characters and Stages of KOF M.U.G.E.N 2020</h3>
54
- <p>KOF M.U.G.E.N 2020 has a huge roster of over 200 characters from various SNK games and other sources. You can find characters from The King of Fighters series (such as Iori Yagami, Terry Bogard, Mai Shiranui, etc.), Fatal Fury series (such as Geese Howard, Andy Bogard, Kim Kaphwan, etc.), Art of Fighting series (such as Ryo Sakazaki, Robert Garcia, Yuri Sakazaki, etc.), Samurai Shodown series (such as Haohmaru, Nakoruru, Genjuro Kibagami, etc.), Metal Slug series (such as Marco Rossi, Fio Germi, Tarma Roving, etc.), and more. You can also find characters from other games and media, such as Street Fighter, Mortal Kombat, Dragon Ball, Naruto, Bleach, One Piece, Marvel, DC, and more.</p>
55
- <p>KOF M.U.G.E.N 2020 also has a large selection of stages that you can fight on. There are over 100 stages from various SNK games and other sources. You can find stages from The King of Fighters series (such as Esaka, Korea, China, etc.), Fatal Fury series (such as South Town, Pao Pao Cafe, Geese Tower, etc.), Art of Fighting series (such as Kyokugen Dojo, L'Amor Restaurant, Glass Hill Valley, etc.), Samurai Shodown series (such as Gairyu Isle, Amakusa Castle, Shimabara Hell Gate, etc.), Metal Slug series (such as Mission 1, Mission 2, Mission 3, etc.), and more. You can also find stages from other games and media, such as Street Fighter, Mortal Kombat, Dragon Ball, Naruto, Bleach, One Piece, Marvel, DC, and more.</p>
56
- <h2>How to Download and Install KOF M.U.G.E.N 2020 APK on Your Android Device</h2>
57
- <p>If you want to play KOF M.U.G.E.N 2020 APK on your Android device, you will need to download and install it first. Here are the requirements and compatibility information that you should know before downloading and installing KOF M.U.G.E.N 2020 APK:</p>
58
- <h3>Requirements and Compatibility</h3>
59
- <p>KOF M.U.G.E.N 2020 APK is a large file that requires a lot of storage space and memory to run smoothly. You will need at least 2 GB of free storage space on your Android device to download and install KOF M.U.G.E.N 2020 APK. You will also need at least 1 GB of RAM to play KOF M.U.G.E.N 2020 APK without lag or crashes.</p>
60
- <p>KOF M.U.G.E.N 2020 APK is compatible with most Android devices that run on Android 4.4 or higher. However, some devices may not be able to run KOF M.U.G.E.N 2020 APK properly due to hardware limitations or software issues. If you encounter any problems while playing KOF M.U.G.E.N 2020 APK on your Android device, you can try to lower the game settings or contact the developer for support.</p>
61
- <h3>Steps to Download and Install KOF M.U.G.E.N 2020 APK</h3>
62
- <p>Here are the steps that you need to follow to download and install KOF M.U.G.E.N 2020 APK on your Android device:</p>
63
- <ol>
64
- <li>Go to the official website of KOF M.U.G.E.N 2020 APK [here] and click on the download button.</li>
65
- <li>Wait for the download to finish and locate the file in your device's file manager.</li>
66
- <li>Tap on the file and allow the installation from unknown sources if prompted.</li>
67
- <li>Wait for the installation to complete and launch the game from your app drawer or home screen.</li>
68
- <li>Enjoy playing KOF M.U.G.E.N 2020 APK on your Android device!</li>
69
- </ol>
70
- <h3>Tips and Tricks to Enjoy KOF M.U.G.E.N 2020 APK</h3>
71
- <p>Once the game is installed on your device, there are some tips and tricks that you can use to enjoy KOF M.U.G.E.N 2020 APK even more. Here are some of them:</p>
72
- <ul>
73
- <li>Use the training mode to practice your moves and combos with different characters and learn their strengths and weaknesses.</li>
74
- <li>Use the watch mode to watch AI-controlled matches between different characters and learn from their strategies and tactics.</li>
75
- <li>Use the cheats mode to unlock all characters and stages, change the game speed, enable infinite power, and more.</li>
76
- <li>Use the debug mode to access hidden features and options, such as changing the character size, color, position, and more.</li>
77
- <li>Use the AI mode to make the game play itself and enjoy watching the action.</li>
78
- </ul>
79
- <h2>How to Customize and Edit KOF M.U.G.E.N 2020 APK</h2>
80
- <p>KOF M.U.G.E.N 2020 APK is a highly customizable and editable game that allows you to create your own fighting game experience. You can add or remove characters and stages, change the game settings and options, and even create your own characters and stages. Here are some ways that you can customize and edit KOF M.U.G.E.N 2020 APK:</p>
81
- <h3>How to Add or Remove Characters and Stages</h3>
82
- <p>KOF M.U.G.E.N 2020 APK comes with a large roster of characters and stages, but you can always add or remove them according to your preferences. You can download additional characters and stages from various websites, such as [this] or [this], or you can delete unwanted characters and stages from your device's storage. Here are the steps that you need to follow to add or remove characters and stages:</p>
83
- <ol>
84
- <li>Download the character or stage file that you want to add from a reliable source and extract it if it is compressed.</li>
85
- <li>Copy the character or stage folder to the chars or stages folder in your device's storage where KOF M.U.G.E.N 2020 APK is installed.</li>
86
- <li>Edit the select.def file in the data folder using a text editor app such as [this] or [this].</li>
87
- <li>Add the name of the character or stage folder to the select.def file under the appropriate section (such as kfm, bonus, hidden, etc.). For example, if you want to add a character named Ryu, you should write Ryu/Ryu.def under the kfm section.</li>
88
- <li>Save the select.def file and launch KOF M.U.G.E.N 2020 APK. You should see the new character or stage in the game.</li>
89
- <li>To remove a character or stage, simply delete its folder from the chars or stages folder and remove its name from the select.def file.</li>
90
- </ol>
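- <p>If you prefer to script step 4 instead of editing select.def by hand, the sketch below shows the idea in Python. It is a minimal sketch, not part of the game itself: the file path and the character name Ryu are placeholders, and it assumes the standard [Characters] section marker used by M.U.G.E.N select.def files. Make a backup of select.def before trying it.</p>
- <pre><code>
- # Minimal sketch: append one character entry to select.def
- from pathlib import Path
-
- select_def = Path("/storage/emulated/0/KOFMUGEN/data/select.def")  # placeholder path
- entry = "Ryu/Ryu.def"  # the folder/def file you copied into the chars folder
-
- lines = select_def.read_text(encoding="utf-8", errors="ignore").splitlines()
- # insert the entry right after the [Characters] section header
- for i, line in enumerate(lines):
-     if line.strip().lower() == "[characters]":
-         lines.insert(i + 1, entry)
-         break
- select_def.write_text("\n".join(lines) + "\n", encoding="utf-8")
- </code></pre>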
91
- <h3>How to Change the Game Settings and Options</h3>
92
- <p>KOF M.U.G.E.N 2020 APK has many settings and options that you can change to customize the game to your liking. You can change things such as the screen resolution, the sound volume, the language, the input configuration, and more. Here are some ways that you can change the game settings and options:</p>
93
- <ul>
94
- <li>To change the screen resolution, edit the mugen.cfg file in the data folder using a text editor app. Find the lines that say "GameWidth" and "GameHeight" and change their values to your desired resolution. For example, if you want to play in 1280x720 resolution, you should write GameWidth = 1280 and GameHeight = 720. Save the mugen.cfg file and launch KOF M.U.G.E.N 2020 APK (a scripted version of this edit is sketched after this list).</li>
95
- <li>To change the sound volume, go to the options menu in KOF M.U.G.E.N 2020 APK and adjust the sound volume slider for the master, music, and sound effects. You can also mute or unmute the sound by pressing the M key on your keyboard.</li>
96
- <li>To change the language, go to the options menu in KOF M.U.G.E.N 2020 APK and select the language option. You can choose from English, Spanish, Portuguese, French, and Japanese. You can also edit the system.def file in the data folder using a text editor app and change the value of the "language" parameter to your desired language code. For example, if you want to play in German, you should write language = "de". Save the system.def file and launch KOF M.U.G.E.N 2020 APK.</li>
97
- <li>To change the input configuration, go to the options menu in KOF M.U.G.E.N 2020 APK and select the input option. You can configure the buttons for each player and each mode, such as up, down, left, right, light punch, heavy punch, light kick, heavy kick, start, and select. You can also edit the mugen.cfg file in the data folder using a text editor app and change the values of the "Joystick" and "KeyConfig" parameters to your desired input settings. For example, if you want to use the A key for light punch, you should write KeyConfig[0].Button.A = a. Save the mugen.cfg file and launch KOF M.U.G.E.N 2020 APK.</li>
98
- </ul>
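- <p>The screen-resolution edit described in the first bullet can also be scripted. This is only a sketch with a placeholder file path; it rewrites the GameWidth and GameHeight values in mugen.cfg exactly as the manual instructions describe, using a regular expression so the surrounding lines stay untouched.</p>
- <pre><code>
- # Minimal sketch: set the resolution in mugen.cfg to 1280x720
- import re
-
- cfg_path = "/storage/emulated/0/KOFMUGEN/data/mugen.cfg"  # placeholder path
- with open(cfg_path, encoding="utf-8", errors="ignore") as f:
-     cfg = f.read()
-
- # replace only the numeric value after "GameWidth =" / "GameHeight ="
- cfg = re.sub(r"(?m)^(GameWidth\s*=\s*)\d+", r"\g<1>1280", cfg)
- cfg = re.sub(r"(?m)^(GameHeight\s*=\s*)\d+", r"\g<1>720", cfg)
-
- with open(cfg_path, "w", encoding="utf-8") as f:
-     f.write(cfg)
- </code></pre>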
99
- <h3>How to Create Your Own Characters and Stages</h3>
100
- <p>KOF M.U.G.E.N 2020 APK is not only a game that you can play, but also a game that you can create. You can create your own characters and stages using the M.U.G.E.N engine and add them to KOF M.U.G.E.N 2020 APK. However, this is not an easy task and requires a lot of time, effort, and knowledge. Here are some resources that you can use to learn how to create your own characters and stages:</p>
101
- <ul>
102
- <li>[This] is a tutorial that teaches you how to create your own character from scratch using Fighter Factory Studio, a tool that allows you to edit sprites, animations, sounds, and codes for your character.</li>
103
- <li>[This] is a tutorial that teaches you how to create your own stage from scratch using Stage Tool, a tool that allows you to edit images, sounds, and codes for your stage.</li>
104
- <li>[This] is a forum where you can find and download various resources for creating your own characters and stages, such as sprites, sounds, codes, templates, tools, tutorials, and more.</li>
105
- <li>[This] is a website where you can find and download various characters and stages that other people have created for M.U.G.E.N games.</li>
106
- </ul>
107
- <h2>Conclusion</h2>
108
- <p>KOF M.U.G.E.N 2020 APK is a fan-made game that offers a unique and enjoyable fighting game experience on your Android device. It has a huge roster of characters and stages from various SNK franchises and other sources. It has a fast-paced and fluid gameplay with smooth animations and responsive controls. It has many features and options that allow you to customize the game to your liking. It also allows you to create your own characters and stages using the M.U.G.E.N engine.</p>
109
- <p>If you are a fan of fighting games or SNK games, you should definitely try KOF M.U.G.E.N 2020 APK. It is free to download and easy to install on your Android device. It is fun to play alone or with friends. It is also a great way to express your creativity and imagination by creating your own characters and stages.</p>
110
- <p>So what are you waiting for? Download KOF M.U.G.E.N 2020 APK now and enjoy playing the ultimate fighting game on your Android device!</p>
111
- <h3>Why You Should Try KOF M.U.G.E.N 2020 APK</h3>
112
- <p>Here are some reasons why you should try KOF M.U.G.E.N 2020 APK:</p>
113
- <ul>
114
- <li>It is free to download and play.</li>
115
- <li>It has over 200 characters and over 100 stages from various SNK franchises and other sources.</li>
116
- <li>It has a fast-paced and fluid gameplay with smooth animations and responsive controls.</li>
117
- <li>It has many features and options that allow you to customize the game to your liking.</li>
118
- <li>It allows you to create your own characters and stages using the M.U.G.E.N engine.</li>
119
- <li>It is fun to play alone or with friends.</li>
120
- </ul>
121
- <h3>FAQs</h3> <p>Here are some frequently asked questions about KOF M.U.G.E.N 2020 APK:</p>
122
- <ol>
123
- <li>Is KOF M.U.G.E.N 2020 APK safe to download and install?</li>
124
- <p>Yes, KOF M.U.G.E.N 2020 APK is safe to download and install as long as you get it from the official website or a trusted source. However, you should always scan any file that you download with an antivirus app before installing it on your device.</p>
125
- <li>Is KOF M.U.G.E.N 2020 APK legal to play?</li>
126
- <p>KOF M.U.G.E.N 2020 APK is a fan-made game that is not affiliated with or endorsed by SNK or any other company. It is a non-profit game that is made for entertainment purposes only. It does not intend to infringe any copyrights or trademarks of SNK or any other company. However, you should always respect the rights and wishes of the original creators and owners of the characters and stages that are used in KOF M.U.G.E.N 2020 APK.</p>
127
- <li>How can I play KOF M.U.G.E.N 2020 APK with my friends?</li>
128
- <p>KOF M.U.G.E.N 2020 APK supports local multiplayer mode, which means that you can play with your friends on the same device using a split-screen or a gamepad. You can also play with your friends online using a third-party app such as [this] or [this], which allows you to create a virtual network and connect your devices over the internet.</p>
129
- <li>How can I update KOF M.U.G.E.N 2020 APK to the latest version?</li>
130
- <p>KOF M.U.G.E.N 2020 APK is constantly updated by the developer with new characters, stages, features, and bug fixes. You can check for updates on the official website or on the developer's social media pages. You can also enable the auto-update option in the game settings, which will notify you when a new update is available and download it automatically.</p>
131
- <li>How can I contact the developer of KOF M.U.G.E.N 2020 APK?</li>
132
- <p>If you have any questions, suggestions, feedback, or issues regarding KOF M.U.G.E.N 2020 APK, you can contact the developer by sending an email to [this] or by leaving a comment on the developer's YouTube channel [here]. The developer is very responsive and friendly and will try to help you as soon as possible.</p>
133
- </ol>
spaces/2hack2furious/anonymizer/app.py DELETED
@@ -1,87 +0,0 @@
1
- import io
- import modules
2
- import streamlit as st
3
- from streamlit_extras.let_it_rain import rain
4
-
5
- # Options
6
- DISCLAIMER = """
7
- *This app processes data using 2-anonymity, an implementation of the k-anonymity framework. While this is a great start to anonymizing your data, it is by no means perfect, and should be used with caution. For example, some sets of sensitive features which may clearly be identified by a human could be missed by our algorithm. Please keep this in mind.*
8
- """
9
- K = 2
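- # With k = 2 (2-anonymity), every released record should be indistinguishable
- # from at least one other record on its quasi-identifying columns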
10
-
11
- # Page Config
12
- st.set_page_config(layout="wide")
13
-
14
- ### FILE LOADER for sidebar
15
- with st.sidebar:
16
- st.header("🕵️ 2anonymity")
17
- st.markdown("*Clean and anonymize data*")
18
- with st.container() as upload:
19
- file = st.file_uploader(f"Upload dataset:", type=modules.SUPPORTED_TYPES, label_visibility="collapsed")
20
- df, (filename, extension), result = modules.load_file(file)
21
-
22
- ### MAIN
23
- if df is None: # Await file to be uploaded
24
- rain("🤠")
25
- else:
26
- ### PRE-TRANSFORM features for sidebar
27
- with st.sidebar:
28
- # Options for data loading
29
- with st.container() as loading_options:
30
- st.markdown("### Data loading options:")
31
- remove_duplicates = st.checkbox("Remove duplicate rows", value=True)
32
- drop_missing = st.checkbox("Remove rows with missing values", value=False)
33
-
34
- # Options for data optimization
35
- with st.container() as anonymizing_options:
36
- st.markdown("### Anonymizing options:")
37
- max_categorical_size = st.slider("Categorical Variable Threshold", min_value=2, max_value=200, value=50, step=1)
38
- bin_size = st.slider("Bin Size", min_value=2, max_value=200, value=20, step=1)
39
- redaction_selection = st.selectbox("Redaction strength", ["Low", "Medium", "High", "Extreme"])
40
- sensitivity_minimum = {"Low": 2, "Medium": 4, "High": 6, "Extreme": 12}[redaction_selection]
41
-
42
-
43
- ### DATA PREVIEW AND TRANSFORM
44
- # Preview data before transform
45
- with st.container() as before_data:
46
- s = df.style
47
- s = s.set_properties(**{'background-color': '#fce4e4'})
48
- st.dataframe(s)
49
-
50
- # Transform data
51
- df = modules.data_cleaner(df, drop_missing, remove_duplicates)
52
- df, unprocessed = modules.data_anonymizer(df, K, max_categorical_size, bin_size, sensitivity_minimum)
53
-
54
- # Preview data after before_data
55
- with st.container() as after_data:
56
- s = df.style
57
- s = s.set_properties(**{'background-color': '#e4fce4'})
58
- st.dataframe(s)
59
-
60
-
61
- ### POST-TRANSFORM features for sidebar
62
- with st.sidebar:
63
- # Options for download
64
- with st.container() as download_header:
65
- st.markdown("### Download options:")
66
- output_extension = st.selectbox("File type", [".csv", ".json", ".xlsx"])
67
- if unprocessed: st.markdown(f"Error encountered when processing columns {str(unprocessed)}")
68
-
69
- # Prepare file for download
70
- with st.container() as downloader:
71
- if output_extension == ".csv": output_file = df.to_csv().encode("utf-8")
72
- elif output_extension == ".json": output_file = df.to_json().encode("utf-8")
73
- elif output_extension == ".xlsx":
- # to_excel() writes to its target and returns None, so render into an
- # in-memory buffer instead (requires an Excel engine such as openpyxl)
- buffer = io.BytesIO()
- df.to_excel(buffer)
- output_file = buffer.getvalue()
74
- output_filename = f"""{filename.split(".")[:-1][0]}-clean{output_extension}"""
75
- st.download_button("Download", output_file, file_name=output_filename)
76
-
77
- # Add a disclaimer for data security
78
- with st.container() as disclaimer:
79
- st.markdown(
80
- f"""
81
- Disclaimer:
82
- {DISCLAIMER}
83
- """
84
- )
85
-
86
- # Attribution
87
- st.sidebar.markdown("Created by team #2hack2furious for the hackthethreat2023")
 
spaces/2ndelement/voicevox/test/test_full_context_label.py DELETED
@@ -1,404 +0,0 @@
1
- from copy import deepcopy
2
- from itertools import chain
3
- from unittest import TestCase
4
-
5
- from voicevox_engine.full_context_label import (
6
- AccentPhrase,
7
- BreathGroup,
8
- Mora,
9
- Phoneme,
10
- Utterance,
11
- )
12
-
13
-
14
- class TestBasePhonemes(TestCase):
15
- def setUp(self):
16
- super().setUp()
17
- # The result of pyopenjtalk.extract_fullcontext("こんにちは、ヒホです。")
18
- # To avoid depending on other libraries inside the test as much as possible,
19
- # and to keep the test contents transparent, the test cases are generated inline
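- # (Each entry below is an HTS-style full-context label: the leading
- # "p1^p2-p3+p4=p5" block encodes the phoneme and its neighbors, and the
- # /A: ... /K: fields carry accent, mora and phrase context.)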
20
- self.test_case_hello_hiho = [
21
- # sil (silence)
22
- "xx^xx-sil+k=o/A:xx+xx+xx/B:xx-xx_xx/C:xx_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
23
- + "/F:xx_xx#xx_xx@xx_xx|xx_xx/G:5_5%0_xx_xx/H:xx_xx/I:xx-xx"
24
- + "@xx+xx&xx-xx|xx+xx/J:1_5/K:2+2-9",
25
- # k
26
- "xx^sil-k+o=N/A:-4+1+5/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
27
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
28
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
29
- # o
30
- "sil^k-o+N=n/A:-4+1+5/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
31
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
32
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
33
- # N (the moraic nasal "ん")
34
- "k^o-N+n=i/A:-3+2+4/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
35
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
36
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
37
- # n
38
- "o^N-n+i=ch/A:-2+3+3/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
39
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
40
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
41
- # i
42
- "N^n-i+ch=i/A:-2+3+3/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
43
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
44
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
45
- # ch
46
- "n^i-ch+i=w/A:-1+4+2/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
47
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
48
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
49
- # i
50
- "i^ch-i+w=a/A:-1+4+2/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
51
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
52
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
53
- # w
54
- "ch^i-w+a=pau/A:0+5+1/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
55
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
56
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
57
- # a
58
- "i^w-a+pau=h/A:0+5+1/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
59
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
60
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
61
- # pau (pause at the comma)
62
- "w^a-pau+h=i/A:xx+xx+xx/B:09-xx_xx/C:xx_xx+xx/D:09+xx_xx/E:5_5!0_xx-xx"
63
- + "/F:xx_xx#xx_xx@xx_xx|xx_xx/G:4_1%0_xx_xx/H:1_5/I:xx-xx"
64
- + "@xx+xx&xx-xx|xx+xx/J:1_4/K:2+2-9",
65
- # h
66
- "a^pau-h+i=h/A:0+1+4/B:09-xx_xx/C:09_xx+xx/D:22+xx_xx/E:5_5!0_xx-0"
67
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
68
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
69
- # i
70
- "pau^h-i+h=o/A:0+1+4/B:09-xx_xx/C:09_xx+xx/D:22+xx_xx/E:5_5!0_xx-0"
71
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
72
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
73
- # h
74
- "h^i-h+o=d/A:1+2+3/B:09-xx_xx/C:22_xx+xx/D:10+7_2/E:5_5!0_xx-0"
75
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
76
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
77
- # o
78
- "i^h-o+d=e/A:1+2+3/B:09-xx_xx/C:22_xx+xx/D:10+7_2/E:5_5!0_xx-0"
79
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
80
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
81
- # d
82
- "h^o-d+e=s/A:2+3+2/B:22-xx_xx/C:10_7+2/D:xx+xx_xx/E:5_5!0_xx-0"
83
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
84
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
85
- # e
86
- "o^d-e+s=U/A:2+3+2/B:22-xx_xx/C:10_7+2/D:xx+xx_xx/E:5_5!0_xx-0"
87
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
88
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
89
- # s
90
- "d^e-s+U=sil/A:3+4+1/B:22-xx_xx/C:10_7+2/D:xx+xx_xx/E:5_5!0_xx-0"
91
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
92
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
93
- # U (devoiced vowel)
94
- "e^s-U+sil=xx/A:3+4+1/B:22-xx_xx/C:10_7+2/D:xx+xx_xx/E:5_5!0_xx-0"
95
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
96
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
97
- # sil (silence)
98
- "s^U-sil+xx=xx/A:xx+xx+xx/B:10-7_2/C:xx_xx+xx/D:xx+xx_xx/E:4_1!0_xx-xx"
99
- + "/F:xx_xx#xx_xx@xx_xx|xx_xx/G:xx_xx%xx_xx_xx/H:1_4/I:xx-xx"
100
- + "@xx+xx&xx-xx|xx+xx/J:xx_xx/K:2+2-9",
101
- ]
102
- self.phonemes_hello_hiho = [
103
- Phoneme.from_label(label) for label in self.test_case_hello_hiho
104
- ]
105
-
106
-
107
- class TestPhoneme(TestBasePhonemes):
108
- def test_phoneme(self):
109
- self.assertEqual(
110
- " ".join([phoneme.phoneme for phoneme in self.phonemes_hello_hiho]),
111
- "sil k o N n i ch i w a pau h i h o d e s U sil",
112
- )
113
-
114
- def test_is_pause(self):
115
- self.assertEqual(
116
- [phoneme.is_pause() for phoneme in self.phonemes_hello_hiho],
117
- [
118
- True, # sil
119
- False, # k
120
- False, # o
121
- False, # N
122
- False, # n
123
- False, # i
124
- False, # ch
125
- False, # i
126
- False, # w
127
- False, # a
128
- True, # pau
129
- False, # h
130
- False, # i
131
- False, # h
132
- False, # o
133
- False, # d
134
- False, # e
135
- False, # s
136
- False, # u
137
- True, # sil
138
- ],
139
- )
140
-
141
- def test_label(self) -> None:
142
- self.assertEqual(
143
- [phoneme.label for phoneme in self.phonemes_hello_hiho],
144
- self.test_case_hello_hiho,
145
- )
146
-
147
-
148
- class TestMora(TestBasePhonemes):
149
- def setUp(self) -> None:
150
- super().setUp()
151
- # contexts["a2"] == "1" ko
152
- self.mora_hello_1 = Mora(
153
- consonant=self.phonemes_hello_hiho[1], vowel=self.phonemes_hello_hiho[2]
154
- )
155
- # contexts["a2"] == "2" N
156
- self.mora_hello_2 = Mora(consonant=None, vowel=self.phonemes_hello_hiho[3])
157
- # contexts["a2"] == "3" ni
158
- self.mora_hello_3 = Mora(
159
- consonant=self.phonemes_hello_hiho[4], vowel=self.phonemes_hello_hiho[5]
160
- )
161
- # contexts["a2"] == "4" chi
162
- self.mora_hello_4 = Mora(
163
- consonant=self.phonemes_hello_hiho[6], vowel=self.phonemes_hello_hiho[7]
164
- )
165
- # contexts["a2"] == "5" wa
166
- self.mora_hello_5 = Mora(
167
- consonant=self.phonemes_hello_hiho[8], vowel=self.phonemes_hello_hiho[9]
168
- )
169
- # contexts["a2"] == "1" hi
170
- self.mora_hiho_1 = Mora(
171
- consonant=self.phonemes_hello_hiho[11], vowel=self.phonemes_hello_hiho[12]
172
- )
173
- # contexts["a2"] == "2" ho
174
- self.mora_hiho_2 = Mora(
175
- consonant=self.phonemes_hello_hiho[13], vowel=self.phonemes_hello_hiho[14]
176
- )
177
- # contexts["a2"] == "3" de
178
- self.mora_hiho_3 = Mora(
179
- consonant=self.phonemes_hello_hiho[15], vowel=self.phonemes_hello_hiho[16]
180
- )
181
- # contexts["a2"] == "1" sU
182
- self.mora_hiho_4 = Mora(
183
- consonant=self.phonemes_hello_hiho[17], vowel=self.phonemes_hello_hiho[18]
184
- )
185
-
186
- def assert_phonemes(self, mora: Mora, mora_str: str) -> None:
187
- self.assertEqual(
188
- "".join([phoneme.phoneme for phoneme in mora.phonemes]), mora_str
189
- )
190
-
191
- def assert_labels(self, mora: Mora, label_start: int, label_end: int) -> None:
192
- self.assertEqual(mora.labels, self.test_case_hello_hiho[label_start:label_end])
193
-
194
- def test_phonemes(self) -> None:
195
- self.assert_phonemes(self.mora_hello_1, "ko")
196
- self.assert_phonemes(self.mora_hello_2, "N")
197
- self.assert_phonemes(self.mora_hello_3, "ni")
198
- self.assert_phonemes(self.mora_hello_4, "chi")
199
- self.assert_phonemes(self.mora_hello_5, "wa")
200
- self.assert_phonemes(self.mora_hiho_1, "hi")
201
- self.assert_phonemes(self.mora_hiho_2, "ho")
202
- self.assert_phonemes(self.mora_hiho_3, "de")
203
- self.assert_phonemes(self.mora_hiho_4, "sU")
204
-
205
- def test_labels(self) -> None:
206
- self.assert_labels(self.mora_hello_1, 1, 3)
207
- self.assert_labels(self.mora_hello_2, 3, 4)
208
- self.assert_labels(self.mora_hello_3, 4, 6)
209
- self.assert_labels(self.mora_hello_4, 6, 8)
210
- self.assert_labels(self.mora_hello_5, 8, 10)
211
- self.assert_labels(self.mora_hiho_1, 11, 13)
212
- self.assert_labels(self.mora_hiho_2, 13, 15)
213
- self.assert_labels(self.mora_hiho_3, 15, 17)
214
- self.assert_labels(self.mora_hiho_4, 17, 19)
215
-
216
- def test_set_context(self):
217
- # deepcopy, since the values are rewritten and must not affect other tests
218
- mora_hello_1 = deepcopy(self.mora_hello_1)
219
- # rewrite "p3", the context that holds the phoneme
220
- mora_hello_1.set_context("p3", "a")
221
- self.assert_phonemes(mora_hello_1, "aa")
222
-
223
-
224
- class TestAccentPhrase(TestBasePhonemes):
225
- def setUp(self) -> None:
226
- super().setUp()
227
- # TODO: look for a natural, non-contrived example that raises ValueError
228
- # if no such example exists, leaving this as-is is fine
229
- self.accent_phrase_hello = AccentPhrase.from_phonemes(
230
- self.phonemes_hello_hiho[1:10]
231
- )
232
- self.accent_phrase_hiho = AccentPhrase.from_phonemes(
233
- self.phonemes_hello_hiho[11:19]
234
- )
235
-
236
- def test_accent(self):
237
- self.assertEqual(self.accent_phrase_hello.accent, 5)
238
- self.assertEqual(self.accent_phrase_hiho.accent, 1)
239
-
240
- def test_set_context(self):
241
- accent_phrase_hello = deepcopy(self.accent_phrase_hello)
242
- # rewrite "p3", the context that holds the phoneme
243
- accent_phrase_hello.set_context("p3", "a")
244
- self.assertEqual(
245
- "".join([phoneme.phoneme for phoneme in accent_phrase_hello.phonemes]),
246
- "aaaaaaaaa",
247
- )
248
-
249
- def test_phonemes(self):
250
- self.assertEqual(
251
- " ".join(
252
- [phoneme.phoneme for phoneme in self.accent_phrase_hello.phonemes]
253
- ),
254
- "k o N n i ch i w a",
255
- )
256
- self.assertEqual(
257
- " ".join([phoneme.phoneme for phoneme in self.accent_phrase_hiho.phonemes]),
258
- "h i h o d e s U",
259
- )
260
-
261
- def test_labels(self):
262
- self.assertEqual(
263
- self.accent_phrase_hello.labels, self.test_case_hello_hiho[1:10]
264
- )
265
- self.assertEqual(
266
- self.accent_phrase_hiho.labels, self.test_case_hello_hiho[11:19]
267
- )
268
-
269
- def test_merge(self):
270
- # The merged phrase reads 「こんにちはヒホです」
271
- # equivalent to the original input with the comma removed
272
- merged_accent_phrase = self.accent_phrase_hello.merge(self.accent_phrase_hiho)
273
- self.assertEqual(merged_accent_phrase.accent, 5)
274
- self.assertEqual(
275
- " ".join([phoneme.phoneme for phoneme in merged_accent_phrase.phonemes]),
276
- "k o N n i ch i w a h i h o d e s U",
277
- )
278
- self.assertEqual(
279
- merged_accent_phrase.labels,
280
- self.test_case_hello_hiho[1:10] + self.test_case_hello_hiho[11:19],
281
- )
282
-
283
-
284
- class TestBreathGroup(TestBasePhonemes):
285
- def setUp(self) -> None:
286
- super().setUp()
287
- self.breath_group_hello = BreathGroup.from_phonemes(
288
- self.phonemes_hello_hiho[1:10]
289
- )
290
- self.breath_group_hiho = BreathGroup.from_phonemes(
291
- self.phonemes_hello_hiho[11:19]
292
- )
293
-
294
- def test_set_context(self):
295
- # deepcopy, since the values are rewritten and must not affect other tests
296
- breath_group_hello = deepcopy(self.breath_group_hello)
297
- # rewrite "p3", the context that holds the phoneme
298
- breath_group_hello.set_context("p3", "a")
299
- self.assertEqual(
300
- "".join([phoneme.phoneme for phoneme in breath_group_hello.phonemes]),
301
- "aaaaaaaaa",
302
- )
303
-
304
- def test_phonemes(self):
305
- self.assertEqual(
306
- " ".join([phoneme.phoneme for phoneme in self.breath_group_hello.phonemes]),
307
- "k o N n i ch i w a",
308
- )
309
- self.assertEqual(
310
- " ".join([phoneme.phoneme for phoneme in self.breath_group_hiho.phonemes]),
311
- "h i h o d e s U",
312
- )
313
-
314
- def test_labels(self):
315
- self.assertEqual(
316
- self.breath_group_hello.labels, self.test_case_hello_hiho[1:10]
317
- )
318
- self.assertEqual(
319
- self.breath_group_hiho.labels, self.test_case_hello_hiho[11:19]
320
- )
321
-
322
-
323
- class TestUtterance(TestBasePhonemes):
324
- def setUp(self) -> None:
325
- super().setUp()
326
- self.utterance_hello_hiho = Utterance.from_phonemes(self.phonemes_hello_hiho)
327
-
328
- def test_phonemes(self):
329
- self.assertEqual(
330
- " ".join(
331
- [phoneme.phoneme for phoneme in self.utterance_hello_hiho.phonemes]
332
- ),
333
- "sil k o N n i ch i w a pau h i h o d e s U sil",
334
- )
335
- changed_utterance = Utterance.from_phonemes(self.utterance_hello_hiho.phonemes)
336
- self.assertEqual(len(changed_utterance.breath_groups), 2)
337
- accent_phrases = list(
338
- chain.from_iterable(
339
- breath_group.accent_phrases
340
- for breath_group in changed_utterance.breath_groups
341
- )
342
- )
343
- for prev, cent, post in zip(
344
- [None] + accent_phrases[:-1],
345
- accent_phrases,
346
- accent_phrases[1:] + [None],
347
- ):
348
- mora_num = len(cent.moras)
349
- accent = cent.accent
350
-
351
- if prev is not None:
352
- for phoneme in prev.phonemes:
353
- self.assertEqual(phoneme.contexts["g1"], str(mora_num))
354
- self.assertEqual(phoneme.contexts["g2"], str(accent))
355
-
356
- if post is not None:
357
- for phoneme in post.phonemes:
358
- self.assertEqual(phoneme.contexts["e1"], str(mora_num))
359
- self.assertEqual(phoneme.contexts["e2"], str(accent))
360
-
361
- for phoneme in cent.phonemes:
362
- self.assertEqual(
363
- phoneme.contexts["k2"],
364
- str(
365
- sum(
366
- [
367
- len(breath_group.accent_phrases)
368
- for breath_group in changed_utterance.breath_groups
369
- ]
370
- )
371
- ),
372
- )
373
-
374
- for prev, cent, post in zip(
375
- [None] + changed_utterance.breath_groups[:-1],
376
- changed_utterance.breath_groups,
377
- changed_utterance.breath_groups[1:] + [None],
378
- ):
379
- accent_phrase_num = len(cent.accent_phrases)
380
-
381
- if prev is not None:
382
- for phoneme in prev.phonemes:
383
- self.assertEqual(phoneme.contexts["j1"], str(accent_phrase_num))
384
-
385
- if post is not None:
386
- for phoneme in post.phonemes:
387
- self.assertEqual(phoneme.contexts["h1"], str(accent_phrase_num))
388
-
389
- for phoneme in cent.phonemes:
390
- self.assertEqual(phoneme.contexts["i1"], str(accent_phrase_num))
391
- self.assertEqual(
392
- phoneme.contexts["i5"],
393
- str(accent_phrases.index(cent.accent_phrases[0]) + 1),
394
- )
395
- self.assertEqual(
396
- phoneme.contexts["i6"],
397
- str(
398
- len(accent_phrases)
399
- - accent_phrases.index(cent.accent_phrases[0])
400
- ),
401
- )
402
-
403
- def test_labels(self):
404
- self.assertEqual(self.utterance_hello_hiho.labels, self.test_case_hello_hiho)
 
spaces/2ndelement/voicevox/voicevox_engine/setting/Setting.py DELETED
@@ -1,25 +0,0 @@
1
- from enum import Enum
2
- from typing import Optional
3
-
4
- from pydantic import BaseModel, Field
5
-
6
-
7
- class CorsPolicyMode(str, Enum):
8
- """
9
- CORSの許可モード
10
- """
11
-
12
- all = "all" # 全てのオリジンからのリクエストを許可
13
- localapps = "localapps" # ローカルアプリケーションからのリクエストを許可
14
-
15
-
16
- class Setting(BaseModel):
17
- """
18
- エンジンの設定情報
19
- """
20
-
21
- cors_policy_mode: CorsPolicyMode = Field(title="リソース共有ポリシー")
22
- allow_origin: Optional[str] = Field(title="許可するオリジン")
23
-
24
- class Config:
25
- use_enum_values = True
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
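For reference, a minimal usage sketch of this settings model (assuming the package is importable as `voicevox_engine.setting.Setting` and a pydantic-v1-style API); note how `use_enum_values` makes pydantic store the enum's string value rather than the enum member:

from voicevox_engine.setting.Setting import CorsPolicyMode, Setting

setting = Setting(cors_policy_mode=CorsPolicyMode.localapps, allow_origin=None)
# Because of `use_enum_values`, the field holds the plain string, not the enum member
print(setting.cors_policy_mode)  # -> "localapps"
print(setting.dict())            # -> {'cors_policy_mode': 'localapps', 'allow_origin': None}
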
spaces/52Hz/HWMNet_lowlight_enhancement/main_test_HWMNet.py DELETED
@@ -1,86 +0,0 @@
1
- import argparse
2
- import cv2
3
- import glob
4
- import numpy as np
5
- from collections import OrderedDict
6
- from skimage import img_as_ubyte
7
- import os
8
- import torch
9
- import requests
10
- from PIL import Image
11
- import torchvision.transforms.functional as TF
12
- import torch.nn.functional as F
13
- from natsort import natsorted
14
- from model.HWMNet import HWMNet
15
-
16
- def main():
17
- parser = argparse.ArgumentParser(description='Demo Low-light Image enhancement')
18
- parser.add_argument('--input_dir', default='test/', type=str, help='Input images')
19
- parser.add_argument('--result_dir', default='result/', type=str, help='Directory for results')
20
- parser.add_argument('--weights',
21
- default='experiments/pretrained_models/LOL_enhancement_HWMNet.pth', type=str,
22
- help='Path to weights')
23
-
24
- args = parser.parse_args()
25
-
26
- inp_dir = args.input_dir
27
- out_dir = args.result_dir
28
-
29
- os.makedirs(out_dir, exist_ok=True)
30
-
31
- files = natsorted(glob.glob(os.path.join(inp_dir, '*')))
32
-
33
- if len(files) == 0:
34
- raise Exception(f"No files found at {inp_dir}")
35
-
36
- device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
37
-
38
- # Load corresponding models architecture and weights
39
- model = HWMNet(in_chn=3, wf=96, depth=4)
40
- model = model.to(device)
41
- model.eval()
42
- load_checkpoint(model, args.weights)
43
-
44
-
45
- mul = 16
46
- for file_ in files:
47
- img = Image.open(file_).convert('RGB')
48
- input_ = TF.to_tensor(img).unsqueeze(0).to(device)
49
-
50
- # Pad the input if not_multiple_of 8
51
- h, w = input_.shape[2], input_.shape[3]
52
- H, W = ((h + mul) // mul) * mul, ((w + mul) // mul) * mul
53
- padh = H - h if h % mul != 0 else 0
54
- padw = W - w if w % mul != 0 else 0
55
- input_ = F.pad(input_, (0, padw, 0, padh), 'reflect')
56
- with torch.no_grad():
57
- restored = model(input_)
58
-
59
- restored = torch.clamp(restored, 0, 1)
60
- restored = restored[:, :, :h, :w]
61
- restored = restored.permute(0, 2, 3, 1).cpu().detach().numpy()
62
- restored = img_as_ubyte(restored[0])
63
-
64
- f = os.path.splitext(os.path.split(file_)[-1])[0]
65
- save_img((os.path.join(out_dir, f + '.png')), restored)
66
-
67
-
68
- def save_img(filepath, img):
69
- cv2.imwrite(filepath, cv2.cvtColor(img, cv2.COLOR_RGB2BGR))
70
-
71
-
72
- def load_checkpoint(model, weights):
73
- checkpoint = torch.load(weights, map_location=torch.device('cpu'))
74
- try:
75
- model.load_state_dict(checkpoint["state_dict"])
76
- except:
77
- state_dict = checkpoint["state_dict"]
78
- new_state_dict = OrderedDict()
79
- for k, v in state_dict.items():
80
- name = k[7:] # remove `module.`
81
- new_state_dict[name] = v
82
- model.load_state_dict(new_state_dict)
83
-
84
-
85
- if __name__ == '__main__':
86
- main()
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
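The pad-to-multiple arithmetic above is worth unpacking: `((h + mul) // mul) * mul` rounds strictly up to the *next* multiple (even when `h` is already one), and the `h % mul != 0` guard is what keeps already-aligned sizes unpadded. A standalone sketch with made-up dimensions:

def pad_amounts(h: int, w: int, mul: int = 16) -> tuple:
    # Round height/width up to the next multiple of `mul` ...
    H, W = ((h + mul) // mul) * mul, ((w + mul) // mul) * mul
    # ... but skip padding entirely when the size is already aligned,
    # since the rounding expression above would otherwise add a full `mul`.
    padh = H - h if h % mul != 0 else 0
    padw = W - w if w % mul != 0 else 0
    return padh, padw

print(pad_amounts(600, 400))  # (8, 0): 600 -> 608, while 400 is already a multiple of 16
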
spaces/52Hz/SRMNet_AWGN_denoising/README.md DELETED
@@ -1,37 +0,0 @@
1
- ---
2
- title: SRMNet_AWGN_denoising
3
- emoji: 🌪
4
- colorFrom: red
5
- colorTo: yellow
6
- sdk: gradio
7
- app_file: app.py
8
- pinned: false
9
- ---
10
-
11
- # Configuration
12
-
13
- `title`: _string_
14
- Display title for the Space
15
-
16
- `emoji`: _string_
17
- Space emoji (emoji-only character allowed)
18
-
19
- `colorFrom`: _string_
20
- Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
21
-
22
- `colorTo`: _string_
23
- Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
24
-
25
- `sdk`: _string_
26
- Can be either `gradio`, `streamlit`, or `static`
27
-
28
- `sdk_version` : _string_
29
- Only applicable for `streamlit` SDK.
30
- See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
31
-
32
- `app_file`: _string_
33
- Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
34
- Path is relative to the root of the repository.
35
-
36
- `pinned`: _boolean_
37
- Whether the Space stays on top of your list.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/rel_transformer.py DELETED
@@ -1,611 +0,0 @@
1
- import math
2
- import torch
3
- from torch import nn
4
- from torch.nn import functional as F
5
- from utils.hparams import hparams
6
- from modules.commons.common_layers import Embedding
7
- from utils.tts_utils import group_hidden_by_segs, expand_word2ph
8
-
9
- import transformers
10
-
11
- def convert_pad_shape(pad_shape):
12
- l = pad_shape[::-1]
13
- pad_shape = [item for sublist in l for item in sublist]
14
- return pad_shape
15
-
16
-
17
- def shift_1d(x):
18
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
19
- return x
20
-
21
-
22
- def sequence_mask(length, max_length=None):
23
- if max_length is None:
24
- max_length = length.max()
25
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
26
- return x.unsqueeze(0) < length.unsqueeze(1)
27
-
28
-
29
- class Encoder(nn.Module):
30
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0.,
31
- window_size=None, block_length=None, pre_ln=False, **kwargs):
32
- super().__init__()
33
- self.hidden_channels = hidden_channels
34
- self.filter_channels = filter_channels
35
- self.n_heads = n_heads
36
- self.n_layers = n_layers
37
- self.kernel_size = kernel_size
38
- self.p_dropout = p_dropout
39
- self.window_size = window_size
40
- self.block_length = block_length
41
- self.pre_ln = pre_ln
42
-
43
- self.drop = nn.Dropout(p_dropout)
44
- self.attn_layers = nn.ModuleList()
45
- self.norm_layers_1 = nn.ModuleList()
46
- self.ffn_layers = nn.ModuleList()
47
- self.norm_layers_2 = nn.ModuleList()
48
- for i in range(self.n_layers):
49
- self.attn_layers.append(
50
- MultiHeadAttention(hidden_channels, hidden_channels, n_heads, window_size=window_size,
51
- p_dropout=p_dropout, block_length=block_length))
52
- self.norm_layers_1.append(LayerNorm(hidden_channels))
53
- self.ffn_layers.append(
54
- FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
55
- self.norm_layers_2.append(LayerNorm(hidden_channels))
56
- if pre_ln:
57
- self.last_ln = LayerNorm(hidden_channels)
58
-
59
- def forward(self, x, x_mask):
60
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
61
- for i in range(self.n_layers):
62
- x = x * x_mask
63
- x_ = x
64
- if self.pre_ln:
65
- x = self.norm_layers_1[i](x)
66
- y = self.attn_layers[i](x, x, attn_mask)
67
- y = self.drop(y)
68
- x = x_ + y
69
- if not self.pre_ln:
70
- x = self.norm_layers_1[i](x)
71
-
72
- x_ = x
73
- if self.pre_ln:
74
- x = self.norm_layers_2[i](x)
75
- y = self.ffn_layers[i](x, x_mask)
76
- y = self.drop(y)
77
- x = x_ + y
78
- if not self.pre_ln:
79
- x = self.norm_layers_2[i](x)
80
- if self.pre_ln:
81
- x = self.last_ln(x)
82
- x = x * x_mask
83
- return x
84
-
85
-
86
- class MultiHeadAttention(nn.Module):
87
- def __init__(self, channels, out_channels, n_heads, window_size=None, heads_share=True, p_dropout=0.,
88
- block_length=None, proximal_bias=False, proximal_init=False):
89
- super().__init__()
90
- assert channels % n_heads == 0
91
-
92
- self.channels = channels
93
- self.out_channels = out_channels
94
- self.n_heads = n_heads
95
- self.window_size = window_size
96
- self.heads_share = heads_share
97
- self.block_length = block_length
98
- self.proximal_bias = proximal_bias
99
- self.p_dropout = p_dropout
100
- self.attn = None
101
-
102
- self.k_channels = channels // n_heads
103
- self.conv_q = nn.Conv1d(channels, channels, 1)
104
- self.conv_k = nn.Conv1d(channels, channels, 1)
105
- self.conv_v = nn.Conv1d(channels, channels, 1)
106
- if window_size is not None:
107
- n_heads_rel = 1 if heads_share else n_heads
108
- rel_stddev = self.k_channels ** -0.5
109
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
110
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
111
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
112
- self.drop = nn.Dropout(p_dropout)
113
-
114
- nn.init.xavier_uniform_(self.conv_q.weight)
115
- nn.init.xavier_uniform_(self.conv_k.weight)
116
- if proximal_init:
117
- self.conv_k.weight.data.copy_(self.conv_q.weight.data)
118
- self.conv_k.bias.data.copy_(self.conv_q.bias.data)
119
- nn.init.xavier_uniform_(self.conv_v.weight)
120
-
121
- def forward(self, x, c, attn_mask=None):
122
- q = self.conv_q(x)
123
- k = self.conv_k(c)
124
- v = self.conv_v(c)
125
-
126
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
127
-
128
- x = self.conv_o(x)
129
- return x
130
-
131
- def attention(self, query, key, value, mask=None):
132
- # reshape [b, d, t] -> [b, n_h, t, d_k]
133
- b, d, t_s, t_t = (*key.size(), query.size(2))
134
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
135
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
136
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
137
-
138
- scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(self.k_channels)
139
- if self.window_size is not None:
140
- assert t_s == t_t, "Relative attention is only available for self-attention."
141
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
142
- rel_logits = self._matmul_with_relative_keys(query, key_relative_embeddings)
143
- rel_logits = self._relative_position_to_absolute_position(rel_logits)
144
- scores_local = rel_logits / math.sqrt(self.k_channels)
145
- scores = scores + scores_local
146
- if self.proximal_bias:
147
- assert t_s == t_t, "Proximal bias is only available for self-attention."
148
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
149
- if mask is not None:
150
- scores = scores.masked_fill(mask == 0, -1e4)
151
- if self.block_length is not None:
152
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
153
- scores = scores * block_mask + -1e4 * (1 - block_mask)
154
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
155
- p_attn = self.drop(p_attn)
156
- output = torch.matmul(p_attn, value)
157
- if self.window_size is not None:
158
- relative_weights = self._absolute_position_to_relative_position(p_attn)
159
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
160
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
161
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
162
- return output, p_attn
163
-
164
- def _matmul_with_relative_values(self, x, y):
165
- """
166
- x: [b, h, l, m]
167
- y: [h or 1, m, d]
168
- ret: [b, h, l, d]
169
- """
170
- ret = torch.matmul(x, y.unsqueeze(0))
171
- return ret
172
-
173
- def _matmul_with_relative_keys(self, x, y):
174
- """
175
- x: [b, h, l, d]
176
- y: [h or 1, m, d]
177
- ret: [b, h, l, m]
178
- """
179
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
180
- return ret
181
-
182
- def _get_relative_embeddings(self, relative_embeddings, length):
183
- max_relative_position = 2 * self.window_size + 1
184
- # Pad first before slice to avoid using cond ops.
185
- pad_length = max(length - (self.window_size + 1), 0)
186
- slice_start_position = max((self.window_size + 1) - length, 0)
187
- slice_end_position = slice_start_position + 2 * length - 1
188
- if pad_length > 0:
189
- padded_relative_embeddings = F.pad(
190
- relative_embeddings,
191
- convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
192
- else:
193
- padded_relative_embeddings = relative_embeddings
194
- used_relative_embeddings = padded_relative_embeddings[:, slice_start_position:slice_end_position]
195
- return used_relative_embeddings
196
-
197
- def _relative_position_to_absolute_position(self, x):
198
- """
199
- x: [b, h, l, 2*l-1]
200
- ret: [b, h, l, l]
201
- """
202
- batch, heads, length, _ = x.size()
203
- # Concat columns of pad to shift from relative to absolute indexing.
204
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
205
-
206
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
207
- x_flat = x.view([batch, heads, length * 2 * length])
208
- x_flat = F.pad(x_flat, convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]))
209
-
210
- # Reshape and slice out the padded elements.
211
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[:, :, :length, length - 1:]
212
- return x_final
213
-
214
- def _absolute_position_to_relative_position(self, x):
215
- """
216
- x: [b, h, l, l]
217
- ret: [b, h, l, 2*l-1]
218
- """
219
- batch, heads, length, _ = x.size()
220
- # padd along column
221
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]))
222
- x_flat = x.view([batch, heads, length ** 2 + length * (length - 1)])
223
- # add 0's in the beginning that will skew the elements after reshape
224
- x_flat = F.pad(x_flat, convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
225
- x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
226
- return x_final
227
-
228
- def _attention_bias_proximal(self, length):
229
- """Bias for self-attention to encourage attention to close positions.
230
- Args:
231
- length: an integer scalar.
232
- Returns:
233
- a Tensor with shape [1, 1, length, length]
234
- """
235
- r = torch.arange(length, dtype=torch.float32)
236
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
237
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
238
-
239
-
240
- class FFN(nn.Module):
241
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None):
242
- super().__init__()
243
- self.in_channels = in_channels
244
- self.out_channels = out_channels
245
- self.filter_channels = filter_channels
246
- self.kernel_size = kernel_size
247
- self.p_dropout = p_dropout
248
- self.activation = activation
249
-
250
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2)
251
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, 1)
252
- self.drop = nn.Dropout(p_dropout)
253
-
254
- def forward(self, x, x_mask):
255
- x = self.conv_1(x * x_mask)
256
- if self.activation == "gelu":
257
- x = x * torch.sigmoid(1.702 * x)
258
- else:
259
- x = torch.relu(x)
260
- x = self.drop(x)
261
- x = self.conv_2(x * x_mask)
262
- return x * x_mask
263
-
264
-
265
- class LayerNorm(nn.Module):
266
- def __init__(self, channels, eps=1e-4):
267
- super().__init__()
268
- self.channels = channels
269
- self.eps = eps
270
-
271
- self.gamma = nn.Parameter(torch.ones(channels))
272
- self.beta = nn.Parameter(torch.zeros(channels))
273
-
274
- def forward(self, x):
275
- n_dims = len(x.shape)
276
- mean = torch.mean(x, 1, keepdim=True)
277
- variance = torch.mean((x - mean) ** 2, 1, keepdim=True)
278
-
279
- x = (x - mean) * torch.rsqrt(variance + self.eps)
280
-
281
- shape = [1, -1] + [1] * (n_dims - 2)
282
- x = x * self.gamma.view(*shape) + self.beta.view(*shape)
283
- return x
284
-
285
-
286
- class ConvReluNorm(nn.Module):
287
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
288
- super().__init__()
289
- self.in_channels = in_channels
290
- self.hidden_channels = hidden_channels
291
- self.out_channels = out_channels
292
- self.kernel_size = kernel_size
293
- self.n_layers = n_layers
294
- self.p_dropout = p_dropout
295
- assert n_layers > 1, "Number of layers should be larger than 0."
296
-
297
- self.conv_layers = nn.ModuleList()
298
- self.norm_layers = nn.ModuleList()
299
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size // 2))
300
- self.norm_layers.append(LayerNorm(hidden_channels))
301
- self.relu_drop = nn.Sequential(
302
- nn.ReLU(),
303
- nn.Dropout(p_dropout))
304
- for _ in range(n_layers - 1):
305
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size // 2))
306
- self.norm_layers.append(LayerNorm(hidden_channels))
307
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
308
- self.proj.weight.data.zero_()
309
- self.proj.bias.data.zero_()
310
-
311
- def forward(self, x, x_mask):
312
- x_org = x
313
- for i in range(self.n_layers):
314
- x = self.conv_layers[i](x * x_mask)
315
- x = self.norm_layers[i](x)
316
- x = self.relu_drop(x)
317
- x = x_org + self.proj(x)
318
- return x * x_mask
319
-
320
-
321
- class RelTransformerEncoder(nn.Module):
322
- def __init__(self,
323
- n_vocab,
324
- out_channels,
325
- hidden_channels,
326
- filter_channels,
327
- n_heads,
328
- n_layers,
329
- kernel_size,
330
- p_dropout=0.0,
331
- window_size=4,
332
- block_length=None,
333
- prenet=True,
334
- pre_ln=True,
335
- ):
336
-
337
- super().__init__()
338
-
339
- self.n_vocab = n_vocab
340
- self.out_channels = out_channels
341
- self.hidden_channels = hidden_channels
342
- self.filter_channels = filter_channels
343
- self.n_heads = n_heads
344
- self.n_layers = n_layers
345
- self.kernel_size = kernel_size
346
- self.p_dropout = p_dropout
347
- self.window_size = window_size
348
- self.block_length = block_length
349
- self.prenet = prenet
350
- if n_vocab > 0:
351
- self.emb = Embedding(n_vocab, hidden_channels, padding_idx=0)
352
-
353
- if prenet:
354
- self.pre = ConvReluNorm(hidden_channels, hidden_channels, hidden_channels,
355
- kernel_size=5, n_layers=3, p_dropout=0)
356
- self.encoder = Encoder(
357
- hidden_channels,
358
- filter_channels,
359
- n_heads,
360
- n_layers,
361
- kernel_size,
362
- p_dropout,
363
- window_size=window_size,
364
- block_length=block_length,
365
- pre_ln=pre_ln,
366
- )
367
-
368
- def forward(self, x, x_mask=None):
369
- if self.n_vocab > 0:
370
- x_lengths = (x > 0).long().sum(-1)
371
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
372
- else:
373
- x_lengths = (x.abs().sum(-1) > 0).long().sum(-1)
374
- x = torch.transpose(x, 1, -1) # [b, h, t]
375
- x_mask = torch.unsqueeze(sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
376
-
377
- if self.prenet:
378
- x = self.pre(x, x_mask)
379
- x = self.encoder(x, x_mask)
380
- return x.transpose(1, 2)
381
-
382
-
383
- class Pooler(nn.Module):
384
- """
385
- Parameter-free poolers to get the sentence embedding
386
- 'cls': [CLS] representation with BERT/RoBERTa's MLP pooler.
387
- 'cls_before_pooler': [CLS] representation without the original MLP pooler.
388
- 'avg': average of the last layers' hidden states at each token.
389
- 'avg_top2': average of the last two layers.
390
- 'avg_first_last': average of the first and the last layers.
391
- """
392
- def __init__(self, pooler_type):
393
- super().__init__()
394
- self.pooler_type = pooler_type
395
- assert self.pooler_type in ["cls", "cls_before_pooler", "avg", "avg_top2", "avg_first_last"], "unrecognized pooling type %s" % self.pooler_type
396
-
397
- def forward(self, attention_mask, outputs):
398
- last_hidden = outputs.last_hidden_state
399
- pooler_output = outputs.pooler_output
400
- hidden_states = outputs.hidden_states
401
-
402
- if self.pooler_type in ['cls_before_pooler', 'cls']:
403
- return last_hidden[:, 0]
404
- elif self.pooler_type == "avg":
405
- return ((last_hidden * attention_mask.unsqueeze(-1)).sum(1) / attention_mask.sum(-1).unsqueeze(-1))
406
- elif self.pooler_type == "avg_first_last":
407
- first_hidden = hidden_states[0]
408
- last_hidden = hidden_states[-1]
409
- pooled_result = ((first_hidden + last_hidden) / 2.0 * attention_mask.unsqueeze(-1)).sum(1) / attention_mask.sum(-1).unsqueeze(-1)
410
- return pooled_result
411
- elif self.pooler_type == "avg_top2":
412
- second_last_hidden = hidden_states[-2]
413
- last_hidden = hidden_states[-1]
414
- pooled_result = ((last_hidden + second_last_hidden) / 2.0 * attention_mask.unsqueeze(-1)).sum(1) / attention_mask.sum(-1).unsqueeze(-1)
415
- return pooled_result
416
- else:
417
- raise NotImplementedError
418
-
419
-
420
- class Similarity(nn.Module):
421
- """
422
- Dot product or cosine similarity
423
- """
424
-
425
- def __init__(self, temp):
426
- super().__init__()
427
- self.temp = temp
428
- self.cos = nn.CosineSimilarity(dim=-1)
429
- self.record = None
430
- self.pos_avg = 0.0
431
- self.neg_avg = 0.0
432
-
433
- def forward(self, x, y):
434
- sim = self.cos(x, y)
435
- self.record = sim.detach() # [64,64]
436
- min_size = min(self.record.shape[0], self.record.shape[1]) # 64
437
- num_item = self.record.shape[0] * self.record.shape[1] # 4096
438
- self.pos_avg = self.record.diag().sum() / min_size
439
- if num_item - min_size == 0:
440
- self.neg_avg = (self.record.sum() - self.record.diag().sum()) / 1
441
- return sim / self.temp
442
- if torch.any(torch.isnan(self.record)).item() is True:
443
- print("we got self.record has nan when compute neg_avg")
444
- if torch.any(torch.isnan(self.record.diag())).item() is True:
445
- print("we got self.record.diag() has nan when compute neg_avg")
446
- self.neg_avg = (self.record.sum() - self.record.diag().sum()) / (num_item - min_size)
447
-
448
- return sim / self.temp
449
-
450
-
451
- class BertPredictionHeadTransform(nn.Module):
452
- def __init__(self, hidden_size):
453
- super().__init__()
454
- self.dense = nn.Linear(hidden_size, hidden_size)
455
- self.transform_act_fn = F.gelu
456
- self.LayerNorm = nn.LayerNorm(hidden_size, eps=1e-12)
457
-
458
- def forward(self, hidden_states):
459
- hidden_states = self.dense(hidden_states)
460
- hidden_states = self.transform_act_fn(hidden_states)
461
- hidden_states = self.LayerNorm(hidden_states)
462
- return hidden_states
463
-
464
-
465
- class BertLMPredictionHead(nn.Module):
466
- def __init__(self, hid_dim, out_dim):
467
- super().__init__()
468
- self.transform = BertPredictionHeadTransform(hid_dim)
469
- self.decoder = nn.Linear(hid_dim, out_dim, bias=False)
470
- self.bias = nn.Parameter(torch.zeros(out_dim))
471
- self.decoder.bias = self.bias
472
-
473
- def forward(self, hidden_states):
474
- hidden_states = self.transform(hidden_states)
475
- hidden_states = self.decoder(hidden_states)
476
- return hidden_states
477
-
478
-
479
- # V2_2
480
- # change add to concat.
481
- # now support finetune BERT
482
- # grad_bert=0.1 & trainable_block_idx=0
483
- class BERTRelTransformerEncoder(nn.Module):
484
- def __init__(self,
485
- n_vocab,
486
- out_channels,
487
- hidden_channels,
488
- filter_channels,
489
- n_heads,
490
- n_layers,
491
- kernel_size,
492
- p_dropout=0.0,
493
- window_size=4,
494
- block_length=None,
495
- prenet=True,
496
- pre_ln=True,
497
- ):
498
-
499
- super().__init__()
500
-
501
- self.n_vocab = n_vocab
502
- self.out_channels = out_channels
503
- self.hidden_channels = hidden_channels
504
- self.filter_channels = filter_channels
505
- self.n_heads = n_heads
506
- self.n_layers = n_layers
507
- self.kernel_size = kernel_size
508
- self.p_dropout = p_dropout
509
- self.window_size = window_size
510
- self.block_length = block_length
511
- self.prenet = prenet
512
- if n_vocab > 0:
513
- self.emb = Embedding(n_vocab, hidden_channels, padding_idx=0)
514
-
515
- if prenet:
516
- self.pre = ConvReluNorm(hidden_channels, hidden_channels, hidden_channels,
517
- kernel_size=5, n_layers=3, p_dropout=0)
518
- self.encoder1 = Encoder(
519
- hidden_channels,
520
- filter_channels,
521
- n_heads,
522
- n_layers//2,
523
- kernel_size,
524
- p_dropout,
525
- window_size=window_size,
526
- block_length=block_length,
527
- pre_ln=pre_ln,
528
- )
529
-
530
- self.encoder2 = Encoder(
531
- hidden_channels,
532
- filter_channels,
533
- n_heads,
534
- n_layers - n_layers//2,
535
- kernel_size,
536
- p_dropout,
537
- window_size=window_size,
538
- block_length=block_length,
539
- pre_ln=pre_ln,
540
- )
541
-
542
- if hparams['ds_name'] in ['ljspeech', 'libritts', 'librispeech']:
543
- model_name = 'bert-base-uncased'
544
- elif hparams['ds_name'] in ['biaobei', 'wenetspeech']:
545
- model_name = 'bert-base-chinese'
546
- else:
547
- raise NotImplementedError()
548
-
549
- self.tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
550
- config = transformers.AutoConfig.from_pretrained(model_name)
551
- if hparams.get("load_bert_from_pretrained", True):
552
- print("Load BERT from pretrained model ...")
553
- self.bert = transformers.AutoModel.from_pretrained(model_name,config=config)
554
- trainable_start_block = hparams.get("bert_trainable_start_block", 0)
555
- else:
556
- print("Initialize BERT from scratch!")
557
- self.bert = transformers.BertModel(config=config)
558
- trainable_start_block = 0
559
-
560
- for k, v in self.bert.named_parameters():
561
- if 'embeddings' in k:
562
- v.requires_grad = False
563
- elif 'encoder.layer' in k:
564
- block_idx = int(k.split(".")[2])
565
- if block_idx < trainable_start_block:
566
- v.requires_grad = False
567
- else:
568
- v.requires_grad = True
569
- elif 'cls' in k:
570
- v.requires_grad = True
571
- else:
572
- print("Unhandled key: {}, set to requires_grad...".format(k))
573
- v.requires_grad = True
574
-
575
- self.bert_combine = nn.Sequential(*[
576
- nn.Conv1d(768 + hidden_channels, hidden_channels, 3, 1, 1),
577
- nn.ReLU(),
578
- ])
579
- self.pooler = Pooler("avg")
580
- self.sim = Similarity(temp=0.05)
581
-
582
- def forward(self, x, x_mask=None, bert_feats=None, ph2word=None, **kwargs):
583
- if self.n_vocab > 0:
584
- x_lengths = (x > 0).long().sum(-1)
585
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
586
- else:
587
- x_lengths = (x.abs().sum(-1) > 0).long().sum(-1)
588
- x = torch.transpose(x, 1, -1) # [b, h, t]
589
- x_mask = torch.unsqueeze(sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
590
-
591
- if self.prenet:
592
- x = self.pre(x, x_mask)
593
- x = self.encoder1(x, x_mask)
594
- bert_outputs = self.bert(bert_feats['bert_input_ids'],
595
- attention_mask=bert_feats['bert_attention_mask'],
596
- token_type_ids=bert_feats['bert_token_type_ids'],
597
- output_hidden_states=True)
598
- bert_num_blocks = hparams.get("bert_num_blocks", 12) # total 1+12blocks in bert
599
- bert_embedding = bert_outputs['hidden_states'][bert_num_blocks]
600
- # bert_embedding = bert_outputs['last_hidden_state']
601
- grad_bert = hparams.get("grad_bert", 0.1)
602
- bert_embedding = bert_embedding.detach() * (1-grad_bert) + bert_embedding * grad_bert
603
- bert_word_embedding, _ = group_hidden_by_segs(bert_embedding, bert_feats['bert_token2word'], bert_feats['bert_token2word'].max().item())
604
- bert_ph_embedding = expand_word2ph(bert_word_embedding, ph2word)
605
- bert_ph_embedding = bert_ph_embedding.transpose(1,2)
606
- x = torch.cat([x, bert_ph_embedding], dim=1)
607
- x = self.bert_combine(x)
608
- x = self.encoder2(x, x_mask)
609
- return x.transpose(1, 2)
610
-
611
-
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
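As a shape sanity check, here is a minimal sketch of driving `RelTransformerEncoder` with token IDs. It assumes the repository's `utils` and `modules` packages are on the import path; the hyper-parameter values are illustrative, not the project's actual configuration:

import torch
from modules.commons.rel_transformer import RelTransformerEncoder

# Token IDs in, per-position hidden states out; id 0 is treated as padding
# when computing lengths for the sequence mask.
enc = RelTransformerEncoder(
    n_vocab=100, out_channels=192, hidden_channels=192, filter_channels=768,
    n_heads=2, n_layers=6, kernel_size=3, p_dropout=0.1, window_size=4)
tokens = torch.randint(1, 100, (2, 37))  # [batch, time]
out = enc(tokens)                        # [batch, time, hidden_channels] -> torch.Size([2, 37, 192])
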
spaces/AIGC-Audio/AudioGPT/text_to_speech/egs/datasets/audio/lj/preprocess.py DELETED
@@ -1,9 +0,0 @@
1
- from text_to_speech.data_gen.tts.base_preprocess import BasePreprocessor
2
-
3
-
4
- class LJPreprocess(BasePreprocessor):
5
- def meta_data(self):
6
- for l in open(f'{self.raw_data_dir}/metadata.csv').readlines():
7
- item_name, _, txt = l.strip().split("|")
8
- wav_fn = f"{self.raw_data_dir}/wavs/{item_name}.wav"
9
- yield {'item_name': item_name, 'wav_fn': wav_fn, 'txt': txt}
 
 
 
 
 
 
 
 
 
 
spaces/AIMLApps/Botrite_wip/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: Botrite Wip
3
- emoji: 📈
4
- colorFrom: red
5
- colorTo: red
6
- sdk: gradio
7
- sdk_version: 3.37.0
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/AP123/IllusionDiffusion/user_history.py DELETED
@@ -1,423 +0,0 @@
1
- """
2
- User History is a plugin that you can add to your Spaces to cache generated images for your users.
3
-
4
- Key features:
5
- - 🤗 Sign in with Hugging Face
6
- - Save generated images with their metadata: prompts, timestamp, hyper-parameters, etc.
7
- - Export your history as zip.
8
- - Delete your history to respect privacy.
9
- - Compatible with Persistent Storage for long-term storage.
10
- - Admin panel to check configuration and disk usage .
11
-
12
- Useful links:
13
- - Demo: https://huggingface.co/spaces/Wauplin/gradio-user-history
14
- - README: https://huggingface.co/spaces/Wauplin/gradio-user-history/blob/main/README.md
15
- - Source file: https://huggingface.co/spaces/Wauplin/gradio-user-history/blob/main/user_history.py
16
- - Discussions: https://huggingface.co/spaces/Wauplin/gradio-user-history/discussions
17
- """
18
- import json
19
- import os
20
- import shutil
21
- import warnings
22
- from datetime import datetime
23
- from functools import cache
24
- from pathlib import Path
25
- from typing import Callable, Dict, List, Tuple
26
- from uuid import uuid4
27
-
28
- import gradio as gr
29
- import numpy as np
30
- import requests
31
- from filelock import FileLock
32
- from PIL.Image import Image
33
-
34
-
35
- def setup(folder_path: str | Path | None = None) -> None:
36
- user_history = _UserHistory()
37
- user_history.folder_path = _resolve_folder_path(folder_path)
38
- user_history.initialized = True
39
-
40
-
41
- def render() -> None:
42
- user_history = _UserHistory()
43
-
44
- # initialize with default config
45
- if not user_history.initialized:
46
- print("Initializing user history with default config. Use `user_history.setup(...)` to customize folder_path.")
47
- setup()
48
-
49
- # Render user history tab
50
- gr.Markdown(
51
- "## Your past generations\n\nLog in to keep a gallery of your previous generations. Your history will be saved"
52
- " and available on your next visit. Make sure to export your images from time to time as this gallery may be"
53
- " deleted in the future."
54
- )
55
-
56
- if os.getenv("SYSTEM") == "spaces" and not os.path.exists("/data"):
57
- gr.Markdown(
58
- "**⚠️ Persistent storage is disabled, meaning your history will be lost if the Space gets restarted."
59
- " Only the Space owner can setup a Persistent Storage. If you are not the Space owner, consider"
60
- " duplicating this Space to set your own storage.⚠️**"
61
- )
62
-
63
- with gr.Row():
64
- gr.LoginButton(min_width=250)
65
- gr.LogoutButton(min_width=250)
66
- refresh_button = gr.Button(
67
- "Refresh",
68
- icon="https://huggingface.co/spaces/Wauplin/gradio-user-history/resolve/main/assets/icon_refresh.png",
69
- )
70
- export_button = gr.Button(
71
- "Export",
72
- icon="https://huggingface.co/spaces/Wauplin/gradio-user-history/resolve/main/assets/icon_download.png",
73
- )
74
- delete_button = gr.Button(
75
- "Delete history",
76
- icon="https://huggingface.co/spaces/Wauplin/gradio-user-history/resolve/main/assets/icon_delete.png",
77
- )
78
-
79
- # "Export zip" row (hidden by default)
80
- with gr.Row():
81
- export_file = gr.File(file_count="single", file_types=[".zip"], label="Exported history", visible=False)
82
-
83
- # "Config deletion" row (hidden by default)
84
- with gr.Row():
85
- confirm_button = gr.Button("Confirm delete all history", variant="stop", visible=False)
86
- cancel_button = gr.Button("Cancel", visible=False)
87
-
88
- # Gallery
89
- gallery = gr.Gallery(
90
- label="Past images",
91
- show_label=True,
92
- elem_id="gallery",
93
- object_fit="contain",
94
- columns=5,
95
- height=600,
96
- preview=False,
97
- show_share_button=False,
98
- show_download_button=False,
99
- )
100
- gr.Markdown(
101
- "User history is powered by"
102
- " [Wauplin/gradio-user-history](https://huggingface.co/spaces/Wauplin/gradio-user-history). Integrate it to"
103
- " your own Space in just a few lines of code!"
104
- )
105
- gallery.attach_load_event(_fetch_user_history, every=None)
106
-
107
- # Interactions
108
- refresh_button.click(fn=_fetch_user_history, inputs=[], outputs=[gallery], queue=False)
109
- export_button.click(fn=_export_user_history, inputs=[], outputs=[export_file], queue=False)
110
-
111
- # Taken from https://github.com/gradio-app/gradio/issues/3324#issuecomment-1446382045
112
- delete_button.click(
113
- lambda: [gr.update(visible=True), gr.update(visible=True)],
114
- outputs=[confirm_button, cancel_button],
115
- queue=False,
116
- )
117
- cancel_button.click(
118
- lambda: [gr.update(visible=False), gr.update(visible=False)],
119
- outputs=[confirm_button, cancel_button],
120
- queue=False,
121
- )
122
- confirm_button.click(_delete_user_history).then(
123
- lambda: [gr.update(visible=False), gr.update(visible=False)],
124
- outputs=[confirm_button, cancel_button],
125
- queue=False,
126
- )
127
-
128
- # Admin section (only shown locally or when logged in as Space owner)
129
- _admin_section()
130
-
131
-
132
- def save_image(
133
- profile: gr.OAuthProfile | None,
134
- image: Image | np.ndarray | str | Path,
135
- label: str | None = None,
136
- metadata: Dict | None = None,
137
- ):
138
- # Ignore images from logged out users
139
- if profile is None:
140
- return
141
- username = profile["preferred_username"]
142
-
143
- # Ignore images if user history not used
144
- user_history = _UserHistory()
145
- if not user_history.initialized:
146
- warnings.warn(
147
- "User history is not set in Gradio demo. Saving image is ignored. You must use `user_history.render(...)`"
148
- " first."
149
- )
150
- return
151
-
152
- # Copy image to storage
153
- image_path = _copy_image(image, dst_folder=user_history._user_images_path(username))
154
-
155
- # Save new image + metadata
156
- if metadata is None:
157
- metadata = {}
158
- if "datetime" not in metadata:
159
- metadata["datetime"] = str(datetime.now())
160
- data = {"path": str(image_path), "label": label, "metadata": metadata}
161
- with user_history._user_lock(username):
162
- with user_history._user_jsonl_path(username).open("a") as f:
163
- f.write(json.dumps(data) + "\n")
164
-
165
-
166
- #############
167
- # Internals #
168
- #############
169
-
170
-
171
- class _UserHistory(object):
172
- _instance = None
173
- initialized: bool = False
174
- folder_path: Path
175
-
176
- def __new__(cls):
177
- # Using singleton pattern => we don't want to expose an object (more complex to use) but still want to keep
178
- # state between `render` and `save_image` calls.
179
- if cls._instance is None:
180
- cls._instance = super(_UserHistory, cls).__new__(cls)
181
- return cls._instance
182
-
183
- def _user_path(self, username: str) -> Path:
184
- path = self.folder_path / username
185
- path.mkdir(parents=True, exist_ok=True)
186
- return path
187
-
188
- def _user_lock(self, username: str) -> FileLock:
189
- """Ensure history is not corrupted if concurrent calls."""
190
- return FileLock(self.folder_path / f"{username}.lock") # lock outside of folder => better when exporting ZIP
191
-
192
- def _user_jsonl_path(self, username: str) -> Path:
193
- return self._user_path(username) / "history.jsonl"
194
-
195
- def _user_images_path(self, username: str) -> Path:
196
- path = self._user_path(username) / "images"
197
- path.mkdir(parents=True, exist_ok=True)
198
- return path
199
-
200
-
201
- def _fetch_user_history(profile: gr.OAuthProfile | None) -> List[Tuple[str, str]]:
202
- """Return saved history for that user, if it exists."""
203
- # Cannot load history for logged out users
204
- if profile is None:
205
- return []
206
- username = profile["preferred_username"]
207
-
208
- user_history = _UserHistory()
209
- if not user_history.initialized:
210
- warnings.warn("User history is not set in Gradio demo. You must use `user_history.render(...)` first.")
211
- return []
212
-
213
- with user_history._user_lock(username):
214
- # No file => no history saved yet
215
- jsonl_path = user_history._user_jsonl_path(username)
216
- if not jsonl_path.is_file():
217
- return []
218
-
219
- # Read history
220
- images = []
221
- for line in jsonl_path.read_text().splitlines():
222
- data = json.loads(line)
223
- images.append((data["path"], data["label"] or ""))
224
- return list(reversed(images))
225
-
226
-
227
- def _export_user_history(profile: gr.OAuthProfile | None) -> Dict | None:
228
- """Zip all history for that user, if it exists and return it as a downloadable file."""
229
- # Cannot load history for logged out users
230
- if profile is None:
231
- return None
232
- username = profile["preferred_username"]
233
-
234
- user_history = _UserHistory()
235
- if not user_history.initialized:
236
- warnings.warn("User history is not set in Gradio demo. You must use `user_history.render(...)` first.")
237
- return None
238
-
239
- # Zip history
240
- with user_history._user_lock(username):
241
- path = shutil.make_archive(
242
- str(_archives_path() / f"history_{username}"), "zip", user_history._user_path(username)
243
- )
244
-
245
- return gr.update(visible=True, value=path)
246
-
247
-
248
- def _delete_user_history(profile: gr.OAuthProfile | None) -> None:
249
- """Delete all history for that user."""
250
- # Cannot load history for logged out users
251
- if profile is None:
252
- return
253
- username = profile["preferred_username"]
254
-
255
- user_history = _UserHistory()
256
- if not user_history.initialized:
257
- warnings.warn("User history is not set in Gradio demo. You must use `user_history.render(...)` first.")
258
- return
259
-
260
- with user_history._user_lock(username):
261
- shutil.rmtree(user_history._user_path(username))
262
-
263
-
264
- ####################
265
- # Internal helpers #
266
- ####################
267
-
268
-
269
- def _copy_image(image: Image | np.ndarray | str | Path, dst_folder: Path) -> Path:
270
- """Copy image to the images folder."""
271
- # Already a path => copy it
272
- if isinstance(image, str):
273
- image = Path(image)
274
- if isinstance(image, Path):
275
- dst = dst_folder / f"{uuid4().hex}_{Path(image).name}" # keep file ext
276
- shutil.copyfile(image, dst)
277
- return dst
278
-
279
- # Still a Python object => serialize it
280
- if isinstance(image, np.ndarray):
281
- image = Image.fromarray(image)
282
- if isinstance(image, Image):
283
- dst = dst_folder / f"{uuid4().hex}.png"
284
- image.save(dst)
285
- return dst
286
-
287
- raise ValueError(f"Unsupported image type: {type(image)}")
288
-
289
-
290
- def _resolve_folder_path(folder_path: str | Path | None) -> Path:
291
- if folder_path is not None:
292
- return Path(folder_path).expanduser().resolve()
293
-
294
- if os.getenv("SYSTEM") == "spaces" and os.path.exists("/data"): # Persistent storage is enabled!
295
- return Path("/data") / "_user_history"
296
-
297
- # Not in a Space or Persistent storage not enabled => local folder
298
- return Path(__file__).parent / "_user_history"
299
-
300
-
301
- def _archives_path() -> Path:
302
- # Doesn't have to be on persistent storage as it's only used for download
303
- path = Path(__file__).parent / "_user_history_exports"
304
- path.mkdir(parents=True, exist_ok=True)
305
- return path
306
-
307
-
308
- #################
309
- # Admin section #
310
- #################
311
-
312
-
313
- def _admin_section() -> None:
314
- title = gr.Markdown()
315
- title.attach_load_event(_display_if_admin(), every=None)
316
-
317
-
318
- def _display_if_admin() -> Callable:
319
- def _inner(profile: gr.OAuthProfile | None) -> str:
320
- if profile is None:
321
- return ""
322
- if profile["preferred_username"] in _fetch_admins():
323
- return _admin_content()
324
- return ""
325
-
326
- return _inner
327
-
328
-
329
- def _admin_content() -> str:
330
- return f"""
331
- ## Admin section
332
-
333
- Running on **{os.getenv("SYSTEM", "local")}** (id: {os.getenv("SPACE_ID")}). {_get_msg_is_persistent_storage_enabled()}
334
-
335
- Admins: {', '.join(_fetch_admins())}
336
-
337
- {_get_nb_users()} user(s), {_get_nb_images()} image(s)
338
-
339
- ### Configuration
340
-
341
- History folder: *{_UserHistory().folder_path}*
342
-
343
- Exports folder: *{_archives_path()}*
344
-
345
- ### Disk usage
346
-
347
- {_disk_space_warning_message()}
348
- """
349
-
350
-
351
- def _get_nb_users() -> int:
352
- user_history = _UserHistory()
353
- if not user_history.initialized:
354
- return 0
355
- if user_history.folder_path is not None and user_history.folder_path.exists():
356
- return len([path for path in user_history.folder_path.iterdir() if path.is_dir()])
357
- return 0
358
-
359
-
360
- def _get_nb_images() -> int:
361
- user_history = _UserHistory()
362
- if not user_history.initialized:
363
- return 0
364
- if user_history.folder_path is not None and user_history.folder_path.exists():
365
- return len([path for path in user_history.folder_path.glob("*/images/*")])
366
- return 0
367
-
368
-
369
- def _get_msg_is_persistent_storage_enabled() -> str:
370
- if os.getenv("SYSTEM") == "spaces":
371
- if os.path.exists("/data"):
372
- return "Persistent storage is enabled."
373
- else:
374
- return (
375
- "Persistent storage is not enabled. This means that user histories will be deleted when the Space is"
376
- " restarted. Consider adding a Persistent Storage in your Space settings."
377
- )
378
- return ""
379
-
380
-
381
- def _disk_space_warning_message() -> str:
382
- user_history = _UserHistory()
383
- if not user_history.initialized:
384
- return ""
385
-
386
- message = ""
387
- if user_history.folder_path is not None:
388
- total, used, _ = _get_disk_usage(user_history.folder_path)
389
- message += f"History folder: **{used / 1e9 :.0f}/{total / 1e9 :.0f}GB** used ({100*used/total :.0f}%)."
390
-
391
- total, used, _ = _get_disk_usage(_archives_path())
392
- message += f"\n\nExports folder: **{used / 1e9 :.0f}/{total / 1e9 :.0f}GB** used ({100*used/total :.0f}%)."
393
-
394
- return f"{message.strip()}"
395
-
396
-
397
- def _get_disk_usage(path: Path) -> Tuple[int, int, int]:
398
- for path in [path] + list(path.parents): # first check target_dir, then each parents one by one
399
- try:
400
- return shutil.disk_usage(path)
401
- except OSError: # if doesn't exist or can't read => fail silently and try parent one
402
- pass
403
- return 0, 0, 0
404
-
405
-
406
- @cache
407
- def _fetch_admins() -> List[str]:
408
- # Running locally => fake user is admin
409
- if os.getenv("SYSTEM") != "spaces":
410
- return ["FakeGradioUser"]
411
-
412
- # Running in Space but no space_id => ???
413
- space_id = os.getenv("SPACE_ID")
414
- if space_id is None:
415
- return ["Unknown"]
416
-
417
- # Running in Space => try to fetch organization members
418
- # Otherwise, it's not an organization => namespace is the user
419
- namespace = space_id.split("/")[0]
420
- response = requests.get(f"https://huggingface.co/api/organizations/{namespace}/members")
421
- if response.status_code == 200:
422
- return sorted((member["user"] for member in response.json()), key=lambda x: x.lower())
423
- return [namespace]
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
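As a usage illustration, the sketch below wires the three public entry points (`setup`, `render`, `save_image`) into a Gradio app. It is a minimal sketch: `my_pipeline` is a hypothetical generation function standing in for whatever model the Space actually serves, and it assumes this file is importable as `user_history`:

import gradio as gr
import user_history

user_history.setup()  # optional; render() falls back to the default folder

def generate(prompt: str, profile: gr.OAuthProfile | None):
    image = my_pipeline(prompt)  # hypothetical: replace with the Space's real pipeline
    # Persist the result (a no-op for logged-out users, since profile is None)
    user_history.save_image(profile=profile, image=image, label=prompt)
    return image

with gr.Blocks() as demo:
    with gr.Tab("Generate"):
        prompt = gr.Textbox(label="Prompt")
        output = gr.Image()
        prompt.submit(generate, inputs=[prompt], outputs=[output])  # gr.OAuthProfile is injected by Gradio
    with gr.Tab("Past generations"):
        user_history.render()

demo.launch()
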
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/tcrp-plugin.js DELETED
@@ -1,34 +0,0 @@
1
- import TCRP from './tcrp.js';
2
-
3
- const Recorder = TCRP.Recorder;
4
- const Player = TCRP.Player;
5
-
6
- class TCRPPlugin extends Phaser.Plugins.BasePlugin {
7
- constructor(pluginManager) {
8
- super(pluginManager);
9
- }
10
-
11
- start() {
12
- var eventEmitter = this.game.events;
13
- eventEmitter.on('destroy', this.destroy, this);
14
- }
15
-
16
- addRecorder(parent, config) {
17
- return new Recorder(parent, config);
18
- }
19
-
20
- addPlayer(parent, config) {
21
- return new Player(parent, config);
22
- }
23
- }
24
-
25
- var methods = {
26
- runCommands: TCRP.RunCommands
27
- }
28
-
29
- Object.assign(
30
- TCRPPlugin.prototype,
31
- methods
32
- );
33
-
34
- export default TCRPPlugin;
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/clickoutside/Factory.d.ts DELETED
@@ -1,7 +0,0 @@
1
- // import * as Phaser from 'phaser';
2
- import ClickOutside from "./ClickOutside";
3
-
4
- export default function (
5
- gameObject: Phaser.GameObjects.GameObject,
6
- config?: ClickOutside.IConfig
7
- ): ClickOutside;
 
 
 
 
 
 
 
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthsizer/RunChildrenWrap.js DELETED
@@ -1,93 +0,0 @@
1
- import { GetDisplayWidth, GetDisplayHeight } from '../../../plugins/utils/size/GetDisplaySize.js';
2
-
3
- var RunChildrenWrap = function (lineWidth, out) {
4
- if (out === undefined) {
5
- out = {
6
- lines: [],
7
- width: 0,
8
- height: 0
9
- }
10
- } else {
11
- out.lines.length = 0;
12
- out.width = 0;
13
- out.height = 0;
14
- }
15
-
16
- var children = this.sizerChildren;
17
- var itemSpace = this.space.item,
18
- lineSpace = this.space.line,
19
- indentLeftOdd = this.space.indentLeftOdd,
20
- indentLeftEven = this.space.indentLeftEven,
21
- indentTopOdd = this.space.indentTopOdd,
22
- indentTopEven = this.space.indentTopEven;
23
- var child, childWidth, childHeight, remainder = 0, indentLeft;
24
- var lines = out.lines,
25
- lastLine = undefined,
26
- newLine;
27
- for (var i = 0, cnt = children.length; i < cnt; i++) {
28
- child = children[i];
29
- if (child === '\n') {
30
- child = undefined;
31
- childWidth = 0;
32
- newLine = true;
33
- } else {
34
- if (child.rexSizer.hidden) {
35
- continue;
36
- }
37
-
38
- if (child.isRexSizer) {
39
- child.layout(); // Use original size
40
- }
41
-
42
- childWidth = GetChildWidth(child);
43
- newLine = (remainder < childWidth) || (lastLine === undefined);
44
- }
45
- // New line
46
- if (newLine) {
47
- if (lastLine) {
48
- lastLine.width = lineWidth - (remainder + itemSpace);
49
- out.width = Math.max(out.width, lastLine.width);
50
- out.height += lastLine.height + lineSpace;
51
- }
52
-
53
- lastLine = {
54
- children: [],
55
- // width: 0,
56
- height: 0
57
- };
58
- lines.push(lastLine);
59
-
60
- var indentLeft = (lines.length % 2) ? indentLeftOdd : indentLeftEven;
61
- remainder = lineWidth - indentLeft;
62
- }
63
-
64
- remainder -= (childWidth + itemSpace);
65
- if (child) {
66
- lastLine.children.push(child);
67
- childHeight = GeChildHeight(child);
68
- lastLine.height = Math.max(lastLine.height, childHeight);
69
- }
70
- }
71
-
72
- if (lastLine) {
73
- lastLine.width = lineWidth - (remainder + itemSpace);
74
- out.width = Math.max(out.width, lastLine.width);
75
- out.height += lastLine.height;
76
- }
77
-
78
- out.height += Math.max(indentTopOdd, indentTopEven);
79
-
80
- return out;
81
- }
82
-
83
- var GetChildWidth = function (child) {
84
- var padding = child.rexSizer.padding;
85
- return GetDisplayWidth(child) + padding.left + padding.right;
86
- }
87
-
88
- var GeChildHeight = function (child) {
89
- var padding = child.rexSizer.padding;
90
- return GetDisplayHeight(child) + padding.top + padding.bottom;
91
- }
92
-
93
- export default RunChildrenWrap;
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/transforms.py DELETED
@@ -1,193 +0,0 @@
1
- import torch
2
- from torch.nn import functional as F
3
-
4
- import numpy as np
5
-
6
-
7
- DEFAULT_MIN_BIN_WIDTH = 1e-3
8
- DEFAULT_MIN_BIN_HEIGHT = 1e-3
9
- DEFAULT_MIN_DERIVATIVE = 1e-3
10
-
11
-
12
- def piecewise_rational_quadratic_transform(inputs,
13
- unnormalized_widths,
14
- unnormalized_heights,
15
- unnormalized_derivatives,
16
- inverse=False,
17
- tails=None,
18
- tail_bound=1.,
19
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
20
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
21
- min_derivative=DEFAULT_MIN_DERIVATIVE):
22
-
23
- if tails is None:
24
- spline_fn = rational_quadratic_spline
25
- spline_kwargs = {}
26
- else:
27
- spline_fn = unconstrained_rational_quadratic_spline
28
- spline_kwargs = {
29
- 'tails': tails,
30
- 'tail_bound': tail_bound
31
- }
32
-
33
- outputs, logabsdet = spline_fn(
34
- inputs=inputs,
35
- unnormalized_widths=unnormalized_widths,
36
- unnormalized_heights=unnormalized_heights,
37
- unnormalized_derivatives=unnormalized_derivatives,
38
- inverse=inverse,
39
- min_bin_width=min_bin_width,
40
- min_bin_height=min_bin_height,
41
- min_derivative=min_derivative,
42
- **spline_kwargs
43
- )
44
- return outputs, logabsdet
45
-
46
-
47
- def searchsorted(bin_locations, inputs, eps=1e-6):
48
- bin_locations[..., -1] += eps
49
- return torch.sum(
50
- inputs[..., None] >= bin_locations,
51
- dim=-1
52
- ) - 1
53
-
54
-
55
- def unconstrained_rational_quadratic_spline(inputs,
56
- unnormalized_widths,
57
- unnormalized_heights,
58
- unnormalized_derivatives,
59
- inverse=False,
60
- tails='linear',
61
- tail_bound=1.,
62
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
63
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
64
- min_derivative=DEFAULT_MIN_DERIVATIVE):
65
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
66
- outside_interval_mask = ~inside_interval_mask
67
-
68
- outputs = torch.zeros_like(inputs)
69
- logabsdet = torch.zeros_like(inputs)
70
-
71
- if tails == 'linear':
72
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
73
- constant = np.log(np.exp(1 - min_derivative) - 1)
74
- unnormalized_derivatives[..., 0] = constant
75
- unnormalized_derivatives[..., -1] = constant
76
-
77
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
78
- logabsdet[outside_interval_mask] = 0
79
- else:
80
- raise RuntimeError('{} tails are not implemented.'.format(tails))
81
-
82
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
83
- inputs=inputs[inside_interval_mask],
84
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
85
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
86
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
87
- inverse=inverse,
88
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
89
- min_bin_width=min_bin_width,
90
- min_bin_height=min_bin_height,
91
- min_derivative=min_derivative
92
- )
93
-
94
- return outputs, logabsdet
95
-
96
- def rational_quadratic_spline(inputs,
97
- unnormalized_widths,
98
- unnormalized_heights,
99
- unnormalized_derivatives,
100
- inverse=False,
101
- left=0., right=1., bottom=0., top=1.,
102
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
103
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
104
- min_derivative=DEFAULT_MIN_DERIVATIVE):
105
- if torch.min(inputs) < left or torch.max(inputs) > right:
106
- raise ValueError('Input to a transform is not within its domain')
107
-
108
-     num_bins = unnormalized_widths.shape[-1]
-
-     if min_bin_width * num_bins > 1.0:
-         raise ValueError('Minimal bin width too large for the number of bins')
-     if min_bin_height * num_bins > 1.0:
-         raise ValueError('Minimal bin height too large for the number of bins')
-
-     widths = F.softmax(unnormalized_widths, dim=-1)
-     widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
-     cumwidths = torch.cumsum(widths, dim=-1)
-     cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
-     cumwidths = (right - left) * cumwidths + left
-     cumwidths[..., 0] = left
-     cumwidths[..., -1] = right
-     widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
-     derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
-     heights = F.softmax(unnormalized_heights, dim=-1)
-     heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
-     cumheights = torch.cumsum(heights, dim=-1)
-     cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
-     cumheights = (top - bottom) * cumheights + bottom
-     cumheights[..., 0] = bottom
-     cumheights[..., -1] = top
-     heights = cumheights[..., 1:] - cumheights[..., :-1]
-
-     if inverse:
-         bin_idx = searchsorted(cumheights, inputs)[..., None]
-     else:
-         bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
-     input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
-     input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
-     input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
-     delta = heights / widths
-     input_delta = delta.gather(-1, bin_idx)[..., 0]
-
-     input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
-     input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
-     input_heights = heights.gather(-1, bin_idx)[..., 0]
-
-     if inverse:
-         a = (((inputs - input_cumheights) * (input_derivatives
-                                              + input_derivatives_plus_one
-                                              - 2 * input_delta)
-               + input_heights * (input_delta - input_derivatives)))
-         b = (input_heights * input_derivatives
-              - (inputs - input_cumheights) * (input_derivatives
-                                               + input_derivatives_plus_one
-                                               - 2 * input_delta))
-         c = -input_delta * (inputs - input_cumheights)
-
-         discriminant = b.pow(2) - 4 * a * c
-         assert (discriminant >= 0).all()
-
-         root = (2 * c) / (-b - torch.sqrt(discriminant))
-         outputs = root * input_bin_widths + input_cumwidths
-
-         theta_one_minus_theta = root * (1 - root)
-         denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
-                                      * theta_one_minus_theta)
-         derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
-                                                      + 2 * input_delta * theta_one_minus_theta
-                                                      + input_derivatives * (1 - root).pow(2))
-         logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
-         return outputs, -logabsdet
-     else:
-         theta = (inputs - input_cumwidths) / input_bin_widths
-         theta_one_minus_theta = theta * (1 - theta)
-
-         numerator = input_heights * (input_delta * theta.pow(2)
-                                      + input_derivatives * theta_one_minus_theta)
-         denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
-                                      * theta_one_minus_theta)
-         outputs = input_cumheights + numerator / denominator
-
-         derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
-                                                      + 2 * input_delta * theta_one_minus_theta
-                                                      + input_derivatives * (1 - theta).pow(2))
-         logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
-         return outputs, logabsdet
 
spaces/AlekseyKorshuk/model-evaluation/tabs/arena_side_by_side.py DELETED
@@ -1,240 +0,0 @@
- import time
-
- import gradio as gr
- import random
- from conversation import Conversation
- from utils import get_matchmaking
-
-
- def get_tab_arena_side_by_side(download_bot_config, get_bot_profile, model_mapping, client):
-     gr.Markdown("""
- # ⚔️ Chatbot Arena (side-by-side) ⚔️
- ## Rules
- * Chat with two models side-by-side and vote for which one is better!
- * You pick the models you want to chat with.
- * You can continue chatting and voting or click “Clear” to start a new round.
- """)
-     default_bot_id = "_bot_e21de304-6151-4a04-b025-4c553ae8cbca"
-     bot_config = download_bot_config(default_bot_id)
-     user_state = gr.State(
-         bot_config
-     )
-     with gr.Row():
-         bot_id = gr.Textbox(label="Chai bot ID", value=default_bot_id, interactive=True)
-         reload_bot_button = gr.Button("Reload bot")
-     bot_profile = gr.HTML(get_bot_profile(bot_config))
-     with gr.Accordion("Bot config:", open=False):
-         bot_config_text = gr.Markdown(f"# Memory\n{bot_config['memory']}\n# Prompt\n{bot_config['prompt']}\n")
-
-     with gr.Row():
-         values = list(model_mapping.keys())
-         first_message = (None, bot_config["firstMessage"])
-         height = 450
-         model_a_value, model_b_value = get_matchmaking(client, values, is_anonymous=False)
-         with gr.Column():
-             model_a = gr.Dropdown(values, value=model_a_value, label="Model A")
-             chatbot_a = gr.Chatbot([first_message])
-             chatbot_a.style(height=height)
-         with gr.Column():
-             model_b = gr.Dropdown(values, value=model_b_value, label="Model B")
-             chatbot_b = gr.Chatbot([first_message])
-             chatbot_b.style(height=height)
-
-     with gr.Row():
-         with gr.Column(scale=3):
-             msg = gr.Textbox(show_label=False, value="Hi there!", interactive=True)
-         with gr.Column(scale=3):
-             send = gr.Button("Send")
-     with gr.Row():
-         vote_a = gr.Button("👈 A is better", interactive=False)
-         vote_b = gr.Button("👉 B is better", interactive=False)
-         vote_tie = gr.Button("🤝 Tie", interactive=False)
-         vote_bad = gr.Button("💩 Both are bad", interactive=False)
-     with gr.Row():
-         regenerate = gr.Button("Regenerate", interactive=False)
-         clear = gr.Button("Clear")
-
-     with gr.Accordion("Generation parameters for model A", open=False):
-         model = model_mapping[model_a.value]
-         temperature_model_a = gr.Slider(minimum=0.0, maximum=1.0, value=model.generation_params["temperature"],
-                                         interactive=True, label="Temperature")
-         repetition_penalty_model_a = gr.Slider(minimum=0.0, maximum=2.0,
-                                                value=model.generation_params["repetition_penalty"],
-                                                interactive=True, label="Repetition penalty")
-         max_new_tokens_model_a = gr.Slider(minimum=1, maximum=512, value=model.generation_params["max_new_tokens"],
-                                            interactive=True, label="Max new tokens")
-         top_k_model_a = gr.Slider(minimum=1, maximum=100, value=model.generation_params["top_k"],
-                                   interactive=True, label="Top-K")
-         top_p_model_a = gr.Slider(minimum=0.0, maximum=1.0, value=model.generation_params["top_p"],
-                                   interactive=True, label="Top-P")
-
-     with gr.Accordion("Generation parameters for model B", open=False):
-         model = model_mapping[model_b.value]
-         temperature_model_b = gr.Slider(minimum=0.0, maximum=1.0, value=model.generation_params["temperature"],
-                                         interactive=True, label="Temperature")
-         repetition_penalty_model_b = gr.Slider(minimum=0.0, maximum=2.0,
-                                                value=model.generation_params["repetition_penalty"],
-                                                interactive=True, label="Repetition penalty")
-         max_new_tokens_model_b = gr.Slider(minimum=1, maximum=512, value=model.generation_params["max_new_tokens"],
-                                            interactive=True, label="Max new tokens")
-         top_k_model_b = gr.Slider(minimum=1, maximum=100, value=model.generation_params["top_k"],
-                                   interactive=True, label="Top-K")
-         top_p_model_b = gr.Slider(minimum=0.0, maximum=1.0, value=model.generation_params["top_p"],
-                                   interactive=True, label="Top-P")
-
-     def clear_chat(user_state):
-         return "", [(None, user_state["firstMessage"])], [(None, user_state["firstMessage"])]
-
-     def reload_bot(bot_id):
-         bot_config = download_bot_config(bot_id)
-         bot_profile = get_bot_profile(bot_config)
-         return bot_profile, [(None, bot_config["firstMessage"])], [(None, bot_config[
-             "firstMessage"])], bot_config, f"# Memory\n{bot_config['memory']}\n# Prompt\n{bot_config['prompt']}"
-
-     def get_generation_args(model_tag):
-         model = model_mapping[model_tag]
-         return (
-             model.generation_params["temperature"],
-             model.generation_params["repetition_penalty"],
-             model.generation_params["max_new_tokens"],
-             model.generation_params["top_k"],
-             model.generation_params["top_p"],
-         )
-
-     def respond(message, chat_history, user_state, model_tag,
-                 temperature, repetition_penalty, max_new_tokens, top_k, top_p):
-         custom_generation_params = {
-             'temperature': temperature,
-             'repetition_penalty': repetition_penalty,
-             'max_new_tokens': max_new_tokens,
-             'top_k': top_k,
-             'top_p': top_p,
-         }
-         conv = Conversation(user_state)
-         conv.set_chat_history(chat_history)
-         conv.add_user_message(message)
-         model = model_mapping[model_tag]
-         bot_message = model.generate_response(conv, custom_generation_params)
-         chat_history.append(
-             (message, bot_message)
-         )
-         return "", chat_history
-
-     def record_vote(user_state, vote,
-                     chat_history_a, model_tag_a,
-                     chat_history_b, model_tag_b):
-         if len(chat_history_a) < 2:
-             return
-         conv_a = Conversation(user_state)
-         conv_a.set_chat_history(chat_history_a)
-         conv_b = Conversation(user_state)
-         conv_b.set_chat_history(chat_history_b)
-         if "A is better" in vote:
-             vote_str = "model_a"
-         elif "B is better" in vote:
-             vote_str = "model_b"
-         elif "Tie" in vote:
-             vote_str = "tie"
-         else:
-             vote_str = "tie (bothbad)"
-         row = {
-             "timestamp": time.time(),
-             "bot_id": user_state["bot_id"],
-             "vote": vote_str,
-             "model_a": model_tag_a,
-             "model_b": model_tag_b,
-             "is_anonymous": int(False)
-         }
-         sheet = client.open("Chat Arena").sheet1
-         num_rows = len(sheet.get_all_records())
-         sheet.insert_row(list(row.values()), index=num_rows + 2)
-         return
-
-     def regenerate_response(chat_history, user_state, model_tag,
-                             temperature, repetition_penalty, max_new_tokens, top_k, top_p):
-         custom_generation_params = {
-             'temperature': temperature,
-             'repetition_penalty': repetition_penalty,
-             'max_new_tokens': max_new_tokens,
-             'top_k': top_k,
-             'top_p': top_p,
-         }
-         last_row = chat_history.pop(-1)
-         chat_history.append((last_row[0], None))
-         model = model_mapping[model_tag]
-         conv = Conversation(user_state)
-         conv.set_chat_history(chat_history)
-         bot_message = model.generate_response(conv, custom_generation_params)
-         chat_history[-1] = (last_row[0], bot_message)
-         return "", chat_history
-
-     def disable_voting():
-         return [gr.Button.update(interactive=False)] * 4
-
-     def enable_voting():
-         return [gr.Button.update(interactive=True)] * 4
-
-     def enable_send():
-         return [gr.Button.update(interactive=True), gr.Button.update(interactive=False)]
-
-     def enable_regenerate():
-         return gr.Button.update(interactive=True)
-
-     for vote in [vote_a, vote_b, vote_tie, vote_bad]:
-         vote.click(record_vote,
-                    [user_state, vote, chatbot_a, model_a, chatbot_b, model_b],
-                    None,
-                    queue=False)
-         vote.click(disable_voting, None, [vote_a, vote_b, vote_tie, vote_bad], queue=False)
-
-     model_a.change(get_generation_args, [model_a],
-                    [temperature_model_a, repetition_penalty_model_a, max_new_tokens_model_a, top_k_model_a,
-                     top_p_model_a], queue=False)
-     model_b.change(get_generation_args, [model_b],
-                    [temperature_model_b, repetition_penalty_model_b, max_new_tokens_model_b, top_k_model_b,
-                     top_p_model_b], queue=False)
-     reload_bot_button.click(reload_bot, [bot_id], [bot_profile, chatbot_a, chatbot_b, user_state, bot_config_text],
-                             queue=False)
-     clear.click(clear_chat, [user_state], [msg, chatbot_a, chatbot_b], queue=False)
-     model_a.change(clear_chat, [user_state], [msg, chatbot_a, chatbot_b], queue=False)
-     model_b.change(clear_chat, [user_state], [msg, chatbot_a, chatbot_b], queue=False)
-     clear.click(enable_send, None, [send, regenerate], queue=False)
-     reload_bot_button.click(enable_send, None, [send, regenerate], queue=False)
-
-     model_a.change(enable_voting, None, [vote_a, vote_b, vote_tie, vote_bad], queue=False)
-     model_b.change(enable_voting, None, [vote_a, vote_b, vote_tie, vote_bad], queue=False)
-     reload_bot_button.click(disable_voting, None, [vote_a, vote_b, vote_tie, vote_bad], queue=False)
-     send.click(enable_voting, None, [vote_a, vote_b, vote_tie, vote_bad], queue=False)
-     clear.click(disable_voting, None, [vote_a, vote_b, vote_tie, vote_bad], queue=False)
-     regenerate.click(enable_voting, None, [vote_a, vote_b, vote_tie, vote_bad], queue=False)
-     msg.submit(enable_voting, None, [vote_a, vote_b, vote_tie, vote_bad], queue=False)
-
-     send.click(respond,
-                [msg, chatbot_a, user_state, model_a, temperature_model_a, repetition_penalty_model_a,
-                 max_new_tokens_model_a, top_k_model_a, top_p_model_a], [msg, chatbot_a],
-                queue=False)
-     msg.submit(respond,
-                [msg, chatbot_a, user_state, model_a, temperature_model_a, repetition_penalty_model_a,
-                 max_new_tokens_model_a, top_k_model_a, top_p_model_a], [msg, chatbot_a],
-                queue=False)
-
-     send.click(respond,
-                [msg, chatbot_b, user_state, model_b, temperature_model_b, repetition_penalty_model_b,
-                 max_new_tokens_model_b, top_k_model_b, top_p_model_b], [msg, chatbot_b],
-                queue=False)
-     msg.submit(respond,
-                [msg, chatbot_b, user_state, model_b, temperature_model_b, repetition_penalty_model_b,
-                 max_new_tokens_model_b, top_k_model_b, top_p_model_b], [msg, chatbot_b],
-                queue=False)
-
-     send.click(enable_regenerate, None, [regenerate], queue=False)
-     msg.submit(enable_regenerate, None, [regenerate], queue=False)
-
-     regenerate.click(regenerate_response,
-                      [chatbot_a, user_state, model_a, temperature_model_a, repetition_penalty_model_a,
-                       max_new_tokens_model_a, top_k_model_a,
-                       top_p_model_a], [msg, chatbot_a], queue=False)
-     regenerate.click(regenerate_response,
-                      [chatbot_b, user_state, model_b, temperature_model_b, repetition_penalty_model_b,
-                       max_new_tokens_model_b, top_k_model_b,
-                       top_p_model_b], [msg, chatbot_b], queue=False)
 
spaces/AlekseyKorshuk/thin-plate-spline-motion-model/train.py DELETED
@@ -1,94 +0,0 @@
- from tqdm import trange
- import torch
- from torch.utils.data import DataLoader
- from logger import Logger
- from modules.model import GeneratorFullModel
- from torch.optim.lr_scheduler import MultiStepLR
- from torch.nn.utils import clip_grad_norm_
- from frames_dataset import DatasetRepeater
- import math
-
-
- def train(config, inpainting_network, kp_detector, bg_predictor, dense_motion_network, checkpoint, log_dir, dataset):
-     train_params = config['train_params']
-     optimizer = torch.optim.Adam(
-         [{'params': list(inpainting_network.parameters()) +
-                     list(dense_motion_network.parameters()) +
-                     list(kp_detector.parameters()), 'initial_lr': train_params['lr_generator']}],
-         lr=train_params['lr_generator'], betas=(0.5, 0.999), weight_decay=1e-4)
-
-     optimizer_bg_predictor = None
-     if bg_predictor:
-         optimizer_bg_predictor = torch.optim.Adam(
-             [{'params': bg_predictor.parameters(), 'initial_lr': train_params['lr_generator']}],
-             lr=train_params['lr_generator'], betas=(0.5, 0.999), weight_decay=1e-4)
-
-     if checkpoint is not None:
-         start_epoch = Logger.load_cpk(
-             checkpoint, inpainting_network=inpainting_network, dense_motion_network=dense_motion_network,
-             kp_detector=kp_detector, bg_predictor=bg_predictor,
-             optimizer=optimizer, optimizer_bg_predictor=optimizer_bg_predictor)
-         print('load success:', start_epoch)
-         start_epoch += 1
-     else:
-         start_epoch = 0
-
-     scheduler_optimizer = MultiStepLR(optimizer, train_params['epoch_milestones'], gamma=0.1,
-                                       last_epoch=start_epoch - 1)
-     if bg_predictor:
-         scheduler_bg_predictor = MultiStepLR(optimizer_bg_predictor, train_params['epoch_milestones'],
-                                              gamma=0.1, last_epoch=start_epoch - 1)
-
-     if 'num_repeats' in train_params and train_params['num_repeats'] != 1:
-         dataset = DatasetRepeater(dataset, train_params['num_repeats'])
-     dataloader = DataLoader(dataset, batch_size=train_params['batch_size'], shuffle=True,
-                             num_workers=train_params['dataloader_workers'], drop_last=True)
-
-     generator_full = GeneratorFullModel(kp_detector, bg_predictor, dense_motion_network, inpainting_network, train_params)
-
-     if torch.cuda.is_available():
-         generator_full = torch.nn.DataParallel(generator_full).cuda()
-
-     bg_start = train_params['bg_start']
-
-     with Logger(log_dir=log_dir, visualizer_params=config['visualizer_params'],
-                 checkpoint_freq=train_params['checkpoint_freq']) as logger:
-         for epoch in trange(start_epoch, train_params['num_epochs']):
-             for x in dataloader:
-                 if torch.cuda.is_available():
-                     x['driving'] = x['driving'].cuda()
-                     x['source'] = x['source'].cuda()
-
-                 losses_generator, generated = generator_full(x, epoch)
-                 loss_values = [val.mean() for val in losses_generator.values()]
-                 loss = sum(loss_values)
-                 loss.backward()
-
-                 clip_grad_norm_(kp_detector.parameters(), max_norm=10, norm_type=math.inf)
-                 clip_grad_norm_(dense_motion_network.parameters(), max_norm=10, norm_type=math.inf)
-                 if bg_predictor and epoch >= bg_start:
-                     clip_grad_norm_(bg_predictor.parameters(), max_norm=10, norm_type=math.inf)
-
-                 optimizer.step()
-                 optimizer.zero_grad()
-                 if bg_predictor and epoch >= bg_start:
-                     optimizer_bg_predictor.step()
-                     optimizer_bg_predictor.zero_grad()
-
-                 losses = {key: value.mean().detach().data.cpu().numpy() for key, value in losses_generator.items()}
-                 logger.log_iter(losses=losses)
-
-             scheduler_optimizer.step()
-             if bg_predictor:
-                 scheduler_bg_predictor.step()
-
-             model_save = {
-                 'inpainting_network': inpainting_network,
-                 'dense_motion_network': dense_motion_network,
-                 'kp_detector': kp_detector,
-                 'optimizer': optimizer,
-             }
-             if bg_predictor and epoch >= bg_start:
-                 model_save['bg_predictor'] = bg_predictor
-                 model_save['optimizer_bg_predictor'] = optimizer_bg_predictor
-
-             logger.log_epoch(epoch, model_save, inp=x, out=generated)
 
spaces/AlexWang/lama/bin/gen_outpainting_dataset.py DELETED
@@ -1,88 +0,0 @@
- #!/usr/bin/env python3
- import glob
- import logging
- import os
- import shutil
- import sys
- import traceback
-
- from saicinpainting.evaluation.data import load_image
- from saicinpainting.evaluation.utils import move_to_device
-
- os.environ['OMP_NUM_THREADS'] = '1'
- os.environ['OPENBLAS_NUM_THREADS'] = '1'
- os.environ['MKL_NUM_THREADS'] = '1'
- os.environ['VECLIB_MAXIMUM_THREADS'] = '1'
- os.environ['NUMEXPR_NUM_THREADS'] = '1'
-
- import cv2
- import hydra
- import numpy as np
- import torch
- import tqdm
- import yaml
- from omegaconf import OmegaConf
- from torch.utils.data._utils.collate import default_collate
-
- from saicinpainting.training.data.datasets import make_default_val_dataset
- from saicinpainting.training.trainers import load_checkpoint
- from saicinpainting.utils import register_debug_signal_handlers
-
- LOGGER = logging.getLogger(__name__)
-
-
- def main(args):
-     try:
-         if not args.indir.endswith('/'):
-             args.indir += '/'
-
-         for in_img in glob.glob(os.path.join(args.indir, '**', '*' + args.img_suffix), recursive=True):
-             if 'mask' in os.path.basename(in_img):
-                 continue
-
-             out_img_path = os.path.join(args.outdir, os.path.splitext(in_img[len(args.indir):])[0] + '.png')
-             out_mask_path = f'{os.path.splitext(out_img_path)[0]}_mask.png'
-
-             os.makedirs(os.path.dirname(out_img_path), exist_ok=True)
-
-             img = load_image(in_img)
-             height, width = img.shape[1:]
-             pad_h, pad_w = int(height * args.coef / 2), int(width * args.coef / 2)
-
-             mask = np.zeros((height, width), dtype='uint8')
-
-             if args.expand:
-                 img = np.pad(img, ((0, 0), (pad_h, pad_h), (pad_w, pad_w)))
-                 mask = np.pad(mask, ((pad_h, pad_h), (pad_w, pad_w)), mode='constant', constant_values=255)
-             else:
-                 mask[:pad_h] = 255
-                 mask[-pad_h:] = 255
-                 mask[:, :pad_w] = 255
-                 mask[:, -pad_w:] = 255
-
-             # img = np.pad(img, ((0, 0), (pad_h * 2, pad_h * 2), (pad_w * 2, pad_w * 2)), mode='symmetric')
-             # mask = np.pad(mask, ((pad_h * 2, pad_h * 2), (pad_w * 2, pad_w * 2)), mode='symmetric')
-
-             img = np.clip(np.transpose(img, (1, 2, 0)) * 255, 0, 255).astype('uint8')
-             img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
-             cv2.imwrite(out_img_path, img)
-
-             cv2.imwrite(out_mask_path, mask)
-     except KeyboardInterrupt:
-         LOGGER.warning('Interrupted by user')
-     except Exception as ex:
-         LOGGER.critical(f'Prediction failed due to {ex}:\n{traceback.format_exc()}')
-         sys.exit(1)
-
-
- if __name__ == '__main__':
-     import argparse
-
-     aparser = argparse.ArgumentParser()
-     aparser.add_argument('indir', type=str, help='Root directory with images')
-     aparser.add_argument('outdir', type=str, help='Where to store results')
-     aparser.add_argument('--img-suffix', type=str, default='.png', help='Input image extension')
-     aparser.add_argument('--expand', action='store_true', help='Generate mask by padding (true) or by cropping (false)')
-     aparser.add_argument('--coef', type=float, default=0.2, help='How much to crop/expand in order to get masks')
-
-     main(aparser.parse_args())
 
spaces/Ame42/UBTH/app.py DELETED
@@ -1,225 +0,0 @@
- # This is a sample Python script.
-
- # Press Shift+F10 to execute it or replace it with your code.
- import gradio as gr
- from utils import *
- from datetime import datetime
-
- doc_type = ipp
- prev_sht = None
- curr_sht = None
-
-
- def ui_builder():
-     with gr.Blocks() as demo:
-         err_view = gr.Textbox(label="Error found", visible=False)
-
-         with gr.Tab("Multiple files"):
-
-             def generate_all(d):
-                 try:
-                     d = [retrieve(dt) for dt in d if retrieve(dt) is not None]
-
-                     out = "All months.csv"
-
-                     merge_all(d).to_csv(out)
-
-                     return {
-                         err_view: gr.update(visible=False),
-                         out_file: gr.update(value=out, visible=True, label="Merged file")
-                     }
-                 except TypeError:
-                     return {
-                         err_view: gr.update(
-                             value="Please select a folder containing all the files you want to filter",
-                             visible=True
-                         ),
-                         out_file: gr.update(visible=False)
-                     }
-
-             # input ui
-             gr.Markdown('### See data that shows up in every month file in the chosen folder')
-             all_data = gr.File(label="Add a folder with all months", file_count="directory")
-
-             # output ui
-             output = gr.Markdown("## *Download your file", visible=False)
-             out_file = gr.File(value="Tutorial Guide.pdf", label="Learn to use this app", visible=True)
-             run = gr.Button("Generate file")
-
-             run.click(fn=generate_all, inputs=all_data, outputs=[err_view, out_file])
-         with gr.Tab("Compare two"):
-
-             def err_str(err):
-                 return f"""\
- [Faulty file]
- Check ••••• {
-                 os.path.split(
-                     os.path.splitext(
-                         err.get_file()
-                     )[0]
-                 )[1][:-8]
-                 }
-
- {err.get_message()}\
- """
-
-             def raise_error(msg: str) -> dict:
-                 return {
-                     err_view: gr.update(
-                         value=msg,
-                         visible=True
-                     ),
-                     b: gr.update(visible=False),
-                     f: gr.update(visible=False),
-                     s: gr.update(visible=False),
-                     prev_dis: gr.update(value=None),
-                     curr_dis: gr.update(value=None),
-                     files: gr.update(visible=False)
-                 }
-
-             def choose_type(event: gr.SelectData):
-                 global doc_type
-                 doc_type = event.value
-                 return {
-                     uploads: gr.update(visible=True)
-                 }
-
-             def check_prev(pr):
-                 try:
-                     shts = pd.ExcelFile(pr.name).sheet_names
-
-                     return {
-                         prev_sheet: gr.update(choices=shts),
-                         sheets: gr.update(visible=True)
-                     }
-                 except UnusualFileError as err:
-                     return raise_error(err_str(err))
-
-             def check_curr(cr):
-                 try:
-                     shts = pd.ExcelFile(cr.name).sheet_names
-
-                     return {
-                         curr_sheet: gr.update(choices=shts),
-                         sheets: gr.update(visible=True)
-                     }
-                 except UnusualFileError as err:
-                     return raise_error(err_str(err))
-
-             def sheet_prev(event: gr.SelectData, file):
-                 global prev_sht
-                 prev_sht = event.value
-                 name, ext = os.path.splitext(file.name)
-                 pr = get_raw(file.name, prev_sht, ext)
-                 return {
-                     data: gr.update(visible=True),
-                     outputs: gr.update(visible=True),
-                     prev_dis: gr.update(value=pr)
-                 }
-
-             def sheet_curr(event: gr.SelectData, file):
-                 global curr_sht
-                 curr_sht = event.value
-                 name, ext = os.path.splitext(file.name)
-                 cr = get_raw(file.name, curr_sht, ext)
-                 return {
-                     data: gr.update(visible=True),
-                     outputs: gr.update(visible=True),
-                     curr_dis: gr.update(value=cr)
-                 }
-
-             def generate(p, c, b_i, f_i, s_i):
-                 current_time = datetime.now()
-                 formatted_time = current_time.strftime('• %d-%m-%Y • %H.%M.%S')
-                 b_file, f_file, s_file = f"Present in both {formatted_time}.csv", f"Exits {formatted_time}.csv", \
-                     f"Entries {formatted_time}.csv"
-                 # extract info from UI results
-                 try:
-                     p_name, p_ext = os.path.splitext(p.name)
-                     c_name, c_ext = os.path.splitext(c.name)
-                     p = get_data(p.name, prev_sht, doc_type, p_ext)
-                     c = get_data(c.name, curr_sht, doc_type, c_ext)
-
-                     # process the data
-                     if p is None or c is None:
-                         return raise_error(f"Incompatible column names in either or both files. Make sure they "
-                                            f"conform to the standard.\n\nIPPIS: {ipp_col}\nGIFMIS: {gif_col}")
-                     elif p.columns[0] != c.columns[0]:
-                         return raise_error(f"You seem to be mixing {ipp} and {gif} files. This is not allowed")
-                     else:
-                         both_, p_merged, c_merged = merge_two(p, c, doc_type)
-
-                         clear_csv_trash()
-
-                         # save only the files the user requested
-                         if b_i:
-                             both_.to_csv(b_file, index=False)
-
-                         if f_i:
-                             p_merged.to_csv(f_file, index=False)
-
-                         if s_i:
-                             c_merged.to_csv(s_file, index=False)
-
-                         return {
-                             err_view: gr.update(visible=False),
-                             b: gr.update(value=b_file, visible=True) if b_i else gr.update(visible=False),
-                             f: gr.update(value=f_file, visible=True) if f_i else gr.update(visible=False),
-                             s: gr.update(value=s_file, visible=True) if s_i else gr.update(visible=False),
-                             prev_dis: gr.update(value=p),
-                             curr_dis: gr.update(value=c),
-                             files: gr.update(visible=True) if b_i or f_i or s_i else gr.update(visible=False)
-                         }
-                 except AttributeError:
-                     return raise_error("Please select both files below before generating files")
-                 except UnusualFileError as err:
-                     return raise_error(err_str(err))
-
-             # input ui
-             with gr.Blocks():
-                 ########################################################################################################
-                 type = gr.Radio([ipp, gif], label="Type", info="Choose a file type")
-                 ########################################################################################################
-                 with gr.Row(visible=False) as uploads:
-                     prev = gr.File(label="Previous month", file_types=['.csv', '.xls', '.xlsx'])
-                     curr = gr.File(label="Current month", file_types=['.csv', '.xls', '.xlsx'])
-                 ########################################################################################################
-                 with gr.Row(visible=False) as sheets:
-                     prev_sheet = gr.Radio(["N/A"], label="Sheets", info="Which sheet do you want to use?",
-                                           interactive=True)
-                     curr_sheet = gr.Radio(["N/A"], label="Sheets", info="Which sheet do you want to use?",
-                                           interactive=True)
-                 ########################################################################################################
-                 with gr.Row(visible=False) as data:
-                     prev_dis = gr.Dataframe(row_count=(5, "fixed"), col_count=(5, "fixed"), interactive=False)
-                     curr_dis = gr.Dataframe(row_count=(5, "fixed"), col_count=(5, "fixed"), interactive=False)
-                 ########################################################################################################
-                 with gr.Column(visible=False) as outputs:
-                     both = gr.Checkbox(label="See data that shows up in both months")
-                     first = gr.Checkbox(label="See data that's in the previous month but not in the current")
-                     second = gr.Checkbox(True, label="See data that's in the current month but not in the previous")
-                 ########################################################################################################
-             # output ui
-             with gr.Blocks():
-                 output = gr.Markdown("## *Download your files", visible=False)
-                 with gr.Row(visible=False) as files:
-                     b = gr.File(label="Both months", visible=False)
-                     f = gr.File(label="Previous month", visible=False)
-                     s = gr.File(label="Current month", visible=False)
-                 run = gr.Button("Generate files")
-
-             type.select(fn=choose_type, inputs=None, outputs=[uploads])
-             prev.upload(fn=check_prev, inputs=[prev], outputs=[prev_sheet, sheets])
-             curr.upload(fn=check_curr, inputs=[curr], outputs=[curr_sheet, sheets])
-             prev_sheet.select(fn=sheet_prev, inputs=[prev], outputs=[data, outputs, prev_dis])
-             curr_sheet.select(fn=sheet_curr, inputs=[curr], outputs=[data, outputs, curr_dis])
-             run.click(fn=generate, inputs=[prev, curr, both, first, second], outputs=[err_view, b, f, s, prev_dis,
-                                                                                      curr_dis, files])
-     demo.launch()
-
-
- # Press the green button in the gutter to run the script.
- if __name__ == '__main__':
-     ui_builder()
-
- # See PyCharm help at https://www.jetbrains.com/help/pycharm/
 
spaces/Amrrs/hubble-jwst-compare/app.py DELETED
@@ -1,53 +0,0 @@
- import streamlit as st
- from streamlit_image_comparison import image_comparison
-
- # set page config
- st.set_page_config(page_title="James Webb Space Telescope vs Hubble Telescope Images", layout="centered")
-
- st.title("James Webb vs Hubble Telescope Pictures")
-
- st.markdown("# Southern Nebula")
-
- # render image-comparison
- image_comparison(
-     img1="https://www.webbcompare.com/img/hubble/southern_nebula_700.jpg",
-     img2="https://www.webbcompare.com/img/webb/southern_nebula_700.jpg",
-     label1="Hubble",
-     label2="Webb"
- )
-
- st.markdown("# Galaxy Cluster SMACS 0723")
-
- # render image-comparison
- image_comparison(
-     img1="https://www.webbcompare.com/img/hubble/deep_field_700.jpg",
-     img2="https://www.webbcompare.com/img/webb/deep_field_700.jpg",
-     label1="Hubble",
-     label2="Webb"
- )
-
- st.markdown("# Carina Nebula")
-
- # render image-comparison
- image_comparison(
-     img1="https://www.webbcompare.com/img/hubble/carina_700.png",
-     img2="https://www.webbcompare.com/img/webb/carina_700.jpg",
-     label1="Hubble",
-     label2="Webb"
- )
-
- st.markdown("# Stephan's Quintet")
-
- # render image-comparison
- image_comparison(
-     img1="https://www.webbcompare.com/img/hubble/stephans_quintet_700.jpg",
-     img2="https://www.webbcompare.com/img/webb/stephans_quintet_700.jpg",
-     label1="Hubble",
-     label2="Webb"
- )
-
- st.caption("Inspiration Credit - https://www.webbcompare.com/")
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_scipy_objects.py DELETED
@@ -1,17 +0,0 @@
- # This file is autogenerated by the command `make fix-copies`, do not edit.
- from ..utils import DummyObject, requires_backends
-
-
- class LMSDiscreteScheduler(metaclass=DummyObject):
-     _backends = ["torch", "scipy"]
-
-     def __init__(self, *args, **kwargs):
-         requires_backends(self, ["torch", "scipy"])
-
-     @classmethod
-     def from_config(cls, *args, **kwargs):
-         requires_backends(cls, ["torch", "scipy"])
-
-     @classmethod
-     def from_pretrained(cls, *args, **kwargs):
-         requires_backends(cls, ["torch", "scipy"])
 
spaces/Andy1621/uniformer_image_detection/configs/dcn/faster_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco.py DELETED
@@ -1,15 +0,0 @@
- _base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
- model = dict(
-     pretrained='open-mmlab://resnext101_32x4d',
-     backbone=dict(
-         type='ResNeXt',
-         depth=101,
-         groups=32,
-         base_width=4,
-         num_stages=4,
-         out_indices=(0, 1, 2, 3),
-         frozen_stages=1,
-         norm_cfg=dict(type='BN', requires_grad=True),
-         style='pytorch',
-         dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False),
-         stage_with_dcn=(False, True, True, True)))
 
spaces/Andy1621/uniformer_image_detection/configs/empirical_attention/faster_rcnn_r50_fpn_attention_1111_1x_coco.py DELETED
@@ -1,13 +0,0 @@
- _base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
- model = dict(
-     backbone=dict(plugins=[
-         dict(
-             cfg=dict(
-                 type='GeneralizedAttention',
-                 spatial_range=-1,
-                 num_heads=8,
-                 attention_type='1111',
-                 kv_stride=2),
-             stages=(False, False, True, True),
-             position='after_conv2')
-     ]))
 
spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/clip/__init__.py DELETED
@@ -1 +0,0 @@
- from .clip import *
 
 
spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/__init__.py DELETED
@@ -1,2 +0,0 @@
- from ldm.modules.image_degradation.bsrgan import degradation_bsrgan_variant as degradation_fn_bsr
- from ldm.modules.image_degradation.bsrgan_light import degradation_bsrgan_variant as degradation_fn_bsr_light
 
spaces/AnticPan/Clothes2Human/app.py DELETED
@@ -1,62 +0,0 @@
- import os
- import json
- import requests
- import gradio as gr
- from util import base64_to_img, img_to_base64, resize_image
-
- url = os.getenv("REQUEST_URL")
- headers = {'Content-Type': 'application/json',
-            'Validation-Key': os.getenv("VALIDATION_KEY")}
- names = ["input_image", "prompt", "neg_prompt", "maxlen", "step", "cfg", "seed", "up", "down", "left", "right"]
-
-
- def run(*params):
-     params = {k: v for k, v in zip(names, params)}
-     image = params.pop("input_image")
-     image = resize_image(image)
-     params["image_base64"] = img_to_base64(image)
-     try:
-         response = requests.post(url, headers=headers, data=json.dumps(params), timeout=30)
-         if response.status_code != 200:
-             raise ValueError()
-         data = response.json()
-     except:
-         raise gr.Error("Fail to generate")
-     if data["code"] != 0:
-         raise gr.Error(data["message"])
-     result = base64_to_img(data["content"])
-     return result
-
-
- with gr.Blocks() as demo:
-     gr.Markdown("# SDXL inpainting for Clothes2Human")
-     with gr.Row().style(equal_height=True):
-         with gr.Column():
-             input_image = gr.Image(type="pil", height=300)
-         with gr.Column():
-             output_image = gr.Image(type="pil", height=300)
-
-     with gr.Row():
-         with gr.Column():
-             prompt = gr.Textbox(label="Prompt")
-             neg_prompt = gr.Textbox(label="Negative Prompt")
-
-             maxlen = gr.Slider(label="Max Edge Length", step=32, minimum=768, maximum=1536, value=1024)
-             step = gr.Slider(label="Step", minimum=20, maximum=70, value=50, step=1)
-
-         with gr.Column():
-             up = gr.Slider(label="Scale Up Image", minimum=-0.3, maximum=0.5, value=0, step=0.1)
-             down = gr.Slider(label="Scale Down Image", minimum=-0.3, maximum=0.5, value=0, step=0.1)
-             left = gr.Slider(label="Scale Left Image", minimum=-0.3, maximum=0.5, value=0, step=0.1)
-             right = gr.Slider(label="Scale Right Image", minimum=-0.3, maximum=0.5, value=0, step=0.1)
-         with gr.Column():
-             cfg = gr.Slider(label="CFG Scale", minimum=1.0, maximum=9.0, value=5.0, step=0.5)
-             seed = gr.Slider(label="Seed", minimum=-1, maximum=1000000, value=-1, step=1)
-             inpaint_button = gr.Button()
-
-     run_in = [input_image, prompt, neg_prompt, maxlen, step, cfg, seed, up, down, left, right]
-     inpaint_button.click(run, inputs=run_in, outputs=[output_image])
-
-     gr.Examples([["imgs/1.jpg", "A man wearing a white T-shirt stands on the beach", "", 1024, 50, 5.0, 333866, 0.3, 0.3, 0.1, 0.1],
-                  ["imgs/2.jpg", " woman wearing a blue dress stands in a park, asian race", "", 1280, 50, 5.0, 443652, 0.3, 0.3, 0.2, 0.2],
-                  ["imgs/3.jpg", "A woman wearing a white dress stands", "", 1280, 50, 5.0, 306728, -0.1, -0.2, 0, 0]],
-                 inputs=run_in, outputs=[output_image], fn=run, cache_examples=True)
-
- demo.queue(concurrency_count=2).launch()
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/padding.py DELETED
@@ -1,141 +0,0 @@
- from typing import cast, List, Optional, Tuple, TYPE_CHECKING, Union
-
- if TYPE_CHECKING:
-     from .console import (
-         Console,
-         ConsoleOptions,
-         RenderableType,
-         RenderResult,
-     )
- from .jupyter import JupyterMixin
- from .measure import Measurement
- from .style import Style
- from .segment import Segment
-
-
- PaddingDimensions = Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int, int]]
-
-
- class Padding(JupyterMixin):
-     """Draw space around content.
-
-     Example:
-         >>> print(Padding("Hello", (2, 4), style="on blue"))
-
-     Args:
-         renderable (RenderableType): String or other renderable.
-         pad (Union[int, Tuple[int]]): Padding for top, right, bottom, and left borders.
-             May be specified with 1, 2, or 4 integers (CSS style).
-         style (Union[str, Style], optional): Style for padding characters. Defaults to "none".
-         expand (bool, optional): Expand padding to fit available width. Defaults to True.
-     """
-
-     def __init__(
-         self,
-         renderable: "RenderableType",
-         pad: "PaddingDimensions" = (0, 0, 0, 0),
-         *,
-         style: Union[str, Style] = "none",
-         expand: bool = True,
-     ):
-         self.renderable = renderable
-         self.top, self.right, self.bottom, self.left = self.unpack(pad)
-         self.style = style
-         self.expand = expand
-
-     @classmethod
-     def indent(cls, renderable: "RenderableType", level: int) -> "Padding":
-         """Make padding instance to render an indent.
-
-         Args:
-             renderable (RenderableType): String or other renderable.
-             level (int): Number of characters to indent.
-
-         Returns:
-             Padding: A Padding instance.
-         """
-
-         return Padding(renderable, pad=(0, 0, 0, level), expand=False)
-
-     @staticmethod
-     def unpack(pad: "PaddingDimensions") -> Tuple[int, int, int, int]:
-         """Unpack padding specified in CSS style."""
-         if isinstance(pad, int):
-             return (pad, pad, pad, pad)
-         if len(pad) == 1:
-             _pad = pad[0]
-             return (_pad, _pad, _pad, _pad)
-         if len(pad) == 2:
-             pad_top, pad_right = cast(Tuple[int, int], pad)
-             return (pad_top, pad_right, pad_top, pad_right)
-         if len(pad) == 4:
-             top, right, bottom, left = cast(Tuple[int, int, int, int], pad)
-             return (top, right, bottom, left)
-         raise ValueError(f"1, 2 or 4 integers required for padding; {len(pad)} given")
-
-     def __repr__(self) -> str:
-         return f"Padding({self.renderable!r}, ({self.top},{self.right},{self.bottom},{self.left}))"
-
-     def __rich_console__(
-         self, console: "Console", options: "ConsoleOptions"
-     ) -> "RenderResult":
-         style = console.get_style(self.style)
-         if self.expand:
-             width = options.max_width
-         else:
-             width = min(
-                 Measurement.get(console, options, self.renderable).maximum
-                 + self.left
-                 + self.right,
-                 options.max_width,
-             )
-         render_options = options.update_width(width - self.left - self.right)
-         if render_options.height is not None:
-             render_options = render_options.update_height(
-                 height=render_options.height - self.top - self.bottom
-             )
-         lines = console.render_lines(
-             self.renderable, render_options, style=style, pad=True
-         )
-         _Segment = Segment
-
-         left = _Segment(" " * self.left, style) if self.left else None
-         right = (
-             [_Segment(f'{" " * self.right}', style), _Segment.line()]
-             if self.right
-             else [_Segment.line()]
-         )
-         blank_line: Optional[List[Segment]] = None
-         if self.top:
-             blank_line = [_Segment(f'{" " * width}\n', style)]
-             yield from blank_line * self.top
-         if left:
-             for line in lines:
-                 yield left
-                 yield from line
-                 yield from right
-         else:
-             for line in lines:
-                 yield from line
-                 yield from right
-         if self.bottom:
-             blank_line = blank_line or [_Segment(f'{" " * width}\n', style)]
-             yield from blank_line * self.bottom
-
-     def __rich_measure__(
-         self, console: "Console", options: "ConsoleOptions"
-     ) -> "Measurement":
-         max_width = options.max_width
-         extra_width = self.left + self.right
-         if max_width - extra_width < 1:
-             return Measurement(max_width, max_width)
-         measure_min, measure_max = Measurement.get(console, options, self.renderable)
-         measurement = Measurement(measure_min + extra_width, measure_max + extra_width)
-         measurement = measurement.with_maximum(max_width)
-         return measurement
-
-
- if __name__ == "__main__":  # pragma: no cover
-     from pip._vendor.rich import print
-
-     print(Padding("Hello, World", (2, 4), style="on blue"))
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/install_lib.py DELETED
@@ -1,238 +0,0 @@
- """distutils.command.install_lib
-
- Implements the Distutils 'install_lib' command
- (install all Python modules)."""
-
- import os
- import importlib.util
- import sys
-
- from distutils.core import Command
- from distutils.errors import DistutilsOptionError
-
-
- # Extension for Python source files.
- PYTHON_SOURCE_EXTENSION = ".py"
-
-
- class install_lib(Command):
-
-     description = "install all Python modules (extensions and pure Python)"
-
-     # The byte-compilation options are a tad confusing. Here are the
-     # possible scenarios:
-     #   1) no compilation at all (--no-compile --no-optimize)
-     #   2) compile .pyc only (--compile --no-optimize; default)
-     #   3) compile .pyc and "opt-1" .pyc (--compile --optimize)
-     #   4) compile "opt-1" .pyc only (--no-compile --optimize)
-     #   5) compile .pyc and "opt-2" .pyc (--compile --optimize-more)
-     #   6) compile "opt-2" .pyc only (--no-compile --optimize-more)
-     #
-     # The UI for this is two options, 'compile' and 'optimize'.
-     # 'compile' is strictly boolean, and only decides whether to
-     # generate .pyc files. 'optimize' is three-way (0, 1, or 2), and
-     # decides both whether to generate .pyc files and what level of
-     # optimization to use.
-
-     user_options = [
-         ('install-dir=', 'd', "directory to install to"),
-         ('build-dir=', 'b', "build directory (where to install from)"),
-         ('force', 'f', "force installation (overwrite existing files)"),
-         ('compile', 'c', "compile .py to .pyc [default]"),
-         ('no-compile', None, "don't compile .py files"),
-         (
-             'optimize=',
-             'O',
-             "also compile with optimization: -O1 for \"python -O\", "
-             "-O2 for \"python -OO\", and -O0 to disable [default: -O0]",
-         ),
-         ('skip-build', None, "skip the build steps"),
-     ]
-
-     boolean_options = ['force', 'compile', 'skip-build']
-     negative_opt = {'no-compile': 'compile'}
-
-     def initialize_options(self):
-         # let the 'install' command dictate our installation directory
-         self.install_dir = None
-         self.build_dir = None
-         self.force = 0
-         self.compile = None
-         self.optimize = None
-         self.skip_build = None
-
-     def finalize_options(self):
-         # Get all the information we need to install pure Python modules
-         # from the umbrella 'install' command -- build (source) directory,
-         # install (target) directory, and whether to compile .py files.
-         self.set_undefined_options(
-             'install',
-             ('build_lib', 'build_dir'),
-             ('install_lib', 'install_dir'),
-             ('force', 'force'),
-             ('compile', 'compile'),
-             ('optimize', 'optimize'),
-             ('skip_build', 'skip_build'),
-         )
-
-         if self.compile is None:
-             self.compile = True
-         if self.optimize is None:
-             self.optimize = False
-
-         if not isinstance(self.optimize, int):
-             try:
-                 self.optimize = int(self.optimize)
-                 if self.optimize not in (0, 1, 2):
-                     raise AssertionError
-             except (ValueError, AssertionError):
-                 raise DistutilsOptionError("optimize must be 0, 1, or 2")
-
-     def run(self):
-         # Make sure we have built everything we need first
-         self.build()
-
-         # Install everything: simply dump the entire contents of the build
-         # directory to the installation directory (that's the beauty of
-         # having a build directory!)
-         outfiles = self.install()
-
-         # (Optionally) compile .py to .pyc
-         if outfiles is not None and self.distribution.has_pure_modules():
-             self.byte_compile(outfiles)
-
-     # -- Top-level worker functions ------------------------------------
-     # (called from 'run()')
-
-     def build(self):
-         if not self.skip_build:
-             if self.distribution.has_pure_modules():
-                 self.run_command('build_py')
-             if self.distribution.has_ext_modules():
-                 self.run_command('build_ext')
-
-     def install(self):
-         if os.path.isdir(self.build_dir):
-             outfiles = self.copy_tree(self.build_dir, self.install_dir)
-         else:
-             self.warn(
-                 "'%s' does not exist -- no Python modules to install" % self.build_dir
-             )
-             return
-         return outfiles
-
-     def byte_compile(self, files):
-         if sys.dont_write_bytecode:
-             self.warn('byte-compiling is disabled, skipping.')
-             return
-
-         from distutils.util import byte_compile
-
-         # Get the "--root" directory supplied to the "install" command,
-         # and use it as a prefix to strip off the purported filename
-         # encoded in bytecode files. This is far from complete, but it
-         # should at least generate usable bytecode in RPM distributions.
-         install_root = self.get_finalized_command('install').root
-
-         if self.compile:
-             byte_compile(
-                 files,
-                 optimize=0,
-                 force=self.force,
-                 prefix=install_root,
-                 dry_run=self.dry_run,
-             )
-         if self.optimize > 0:
-             byte_compile(
-                 files,
-                 optimize=self.optimize,
-                 force=self.force,
-                 prefix=install_root,
-                 verbose=self.verbose,
-                 dry_run=self.dry_run,
-             )
-
-     # -- Utility methods -----------------------------------------------
-
-     def _mutate_outputs(self, has_any, build_cmd, cmd_option, output_dir):
-         if not has_any:
-             return []
-
-         build_cmd = self.get_finalized_command(build_cmd)
-         build_files = build_cmd.get_outputs()
-         build_dir = getattr(build_cmd, cmd_option)
-
-         prefix_len = len(build_dir) + len(os.sep)
-         outputs = []
-         for file in build_files:
-             outputs.append(os.path.join(output_dir, file[prefix_len:]))
-
-         return outputs
-
-     def _bytecode_filenames(self, py_filenames):
-         bytecode_files = []
-         for py_file in py_filenames:
-             # Since build_py handles package data installation, the
-             # list of outputs can contain more than just .py files.
-             # Make sure we only report bytecode for the .py files.
-             ext = os.path.splitext(os.path.normcase(py_file))[1]
-             if ext != PYTHON_SOURCE_EXTENSION:
-                 continue
-             if self.compile:
-                 bytecode_files.append(
-                     importlib.util.cache_from_source(py_file, optimization='')
-                 )
-             if self.optimize > 0:
-                 bytecode_files.append(
-                     importlib.util.cache_from_source(
-                         py_file, optimization=self.optimize
-                     )
-                 )
-
-         return bytecode_files
-
-     # -- External interface --------------------------------------------
-     # (called by outsiders)
-
-     def get_outputs(self):
-         """Return the list of files that would be installed if this command
-         were actually run. Not affected by the "dry-run" flag or whether
-         modules have actually been built yet.
-         """
-         pure_outputs = self._mutate_outputs(
-             self.distribution.has_pure_modules(),
-             'build_py',
-             'build_lib',
-             self.install_dir,
-         )
-         if self.compile:
-             bytecode_outputs = self._bytecode_filenames(pure_outputs)
-         else:
-             bytecode_outputs = []
-
-         ext_outputs = self._mutate_outputs(
-             self.distribution.has_ext_modules(),
-             'build_ext',
-             'build_lib',
-             self.install_dir,
-         )
-
-         return pure_outputs + bytecode_outputs + ext_outputs
-
-     def get_inputs(self):
-         """Get the list of files that are input to this command, ie. the
-         files that get installed as they are named in the build tree.
-         The files in this list correspond one-to-one to the output
-         filenames returned by 'get_outputs()'.
-         """
-         inputs = []
-
-         if self.distribution.has_pure_modules():
-             build_py = self.get_finalized_command('build_py')
-             inputs.extend(build_py.get_outputs())
-
-         if self.distribution.has_ext_modules():
-             build_ext = self.get_finalized_command('build_ext')
-             inputs.extend(build_ext.get_outputs())
-
-         return inputs
 
spaces/Atualli/yoloxTeste/checkYolox.sh DELETED
@@ -1,17 +0,0 @@
- #!/bin/sh
- export PATH=/home/atualli/.local/lib/python3.8/site-packages:$PATH
- cd ~/Projetos/huggingface/yoloxTeste
- SERVER=192.168.0.153
- PORT=8080
-
- if lsof -Pi :$PORT -sTCP:LISTEN -t >/dev/null ; then
-     echo "running"
- else
-     ./telegramCrise.sh "reiniciando_yolox_linux_192.168.0.153:8080"
-     pkill -f app.py
-     #rm -r /tmp/tmp1*.png
-     python app.py &
-     echo "not running"
- fi
 
spaces/AzumaSeren100/XuanShen-Bert-VITS2/text/__init__.py DELETED
@@ -1,28 +0,0 @@
- from text.symbols import *
-
-
- _symbol_to_id = {s: i for i, s in enumerate(symbols)}
-
-
- def cleaned_text_to_sequence(cleaned_text, tones, language):
-     '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
-     Args:
-         text: string to convert to a sequence
-     Returns:
-         List of integers corresponding to the symbols in the text
-     '''
-     phones = [_symbol_to_id[symbol] for symbol in cleaned_text]
-     tone_start = language_tone_start_map[language]
-     tones = [i + tone_start for i in tones]
-     lang_id = language_id_map[language]
-     lang_ids = [lang_id for i in phones]
-     return phones, tones, lang_ids
-
-
- def get_bert(norm_text, word2ph, language):
-     from .chinese_bert import get_bert_feature as zh_bert
-     from .english_bert_mock import get_bert_feature as en_bert
-     lang_bert_func_map = {
-         'ZH': zh_bert,
-         'EN': en_bert
-     }
-     bert = lang_bert_func_map[language](norm_text, word2ph)
-     return bert
 
spaces/Benson/text-generation/Examples/Cmo Descargar El Tiempo De Juego Del Proyecto En Steam.md DELETED
@@ -1,123 +0,0 @@
- <br />
- <h1>How to Download Project Playtime on Steam</h1>
- <p>Do you love horror games? Do you enjoy playing with your friends or with strangers online? Do you want to experience a thrilling, terrifying game that will keep you on the edge of your seat? If you answered yes to any of these questions, you should try Project Playtime, a free-to-play multiplayer horror game available on Steam. In this article, we will show you how to download and play Project Playtime on Steam, and give you some tips and tricks for surviving as a survivor or as the monster.</p>
- <h2>What Is Project Playtime?</h2>
- <p>Project Playtime is a multiplayer horror game in which six players try to build a giant toy while surviving a terrifying monster that roams the toy factory. A seventh player controls the monster and has only one goal: find and kill everyone. The game was released on December 12, 2022 by Mob Entertainment, an indie game studio based in Texas. It has received very positive reviews from players and critics alike, who praise its gameplay, graphics, sound design, and atmosphere.</p>
7
- <h3>¿Por qué deberías jugar Project Playtime? </h3>
8
- <p>Hay muchas razones por las que deberías jugar a Project Playtime si eres un fan de los juegos de terror. Estas son algunas de ellas:</p>
9
- <ul>
10
- <li>El juego es gratuito. No necesitas pagar nada para descargar y jugar el juego. También puedes ganar boletos jugando partidos, completando logros y abriendo cajas de juguetes. Puedes usar estos boletos para comprar cosméticos, beneficios, sabotajes y otros artículos en la tienda. </li>
11
- <li>El juego es multijugador. Puede jugar con sus amigos o unirse a grupos de presión al azar en línea. También puedes chatear con otros jugadores usando chat de voz o de texto. Puedes elegir jugar como un sobreviviente o un monstruo, cada uno con sus propios roles, habilidades y estrategias. </li>
12
-
13
- <li>El juego es divertido. El juego tiene mucho valor de repetición porque cada partido es diferente y desafiante. El juego tiene una gran variedad y opciones de personalización. Puedes elegir entre diferentes monstruos, sobrevivientes, beneficios, sabotajes, mapas, modos y configuraciones. También puedes desbloquear nuevos objetos y logros mientras juegas. </li>
14
- </ul>
15
- <p>Así que, si estás buscando un juego de terror que sea gratuito, multijugador y divertido, definitivamente deberías probar Project Playtime. </p>
16
- <h2>Cómo obtener una cuenta de Steam e instalar Steam</h2>
17
- <p>Antes de poder descargar y jugar Project Playtime en Steam, necesitas tener una cuenta de Steam e instalar el cliente de Steam en tu computadora. Estos son los pasos para hacerlo:</p>
18
- <ol>
19
- <li>Vaya a <a href=">https://store.steampowered.com/join/</a> y haga clic en el botón "Join Steam". </li>
20
- <li>Rellene su dirección de correo electrónico, contraseña, país y código captcha. De acuerdo con los términos del servicio y la política de privacidad. Haga clic en el botón "Continuar". </li>
21
- <li>Revisa tu correo electrónico para ver un código de verificación de Steam. Introduce el código en el sitio web y haz clic en el botón "Crear mi cuenta". </li>
22
- <li>Felicidades! Has creado tu cuenta de Steam. Ahora puedes iniciar sesión en Steam con tu correo electrónico y contraseña. </li>
23
- <li>Vaya a <a href=">https://store.steampowered.com/about/</a> y haga clic en el botón "Instalar vapor". </li>
24
- <li>Descargue el instalador de Steam para su sistema operativo (Windows, Mac o Linux). </li>
25
- <li>Ejecute el instalador y siga las instrucciones para instalar Steam en su computadora. </li>
26
- <li>Inicia Steam e inicia sesión con tu cuenta de Steam. </li>
27
- <li>Ahora estás listo para descargar y jugar Project Playtime en Steam.</li>
28
- </ol>
29
- <h2>Cómo encontrar y descargar Project Playtime en Steam</h2>
30
- <p>Ahora que tienes una cuenta de Steam e has instalado el cliente de Steam, puedes encontrar y descargar Project Playtime en Steam. Estos son los pasos para hacerlo:</p>
31
- <ol>
32
- <li>Abre el cliente de Steam y ve a la pestaña "Tienda". </li>
33
-
34
- <li>Verás la página del juego en la tienda de Steam. Haz clic en el botón "Jugar". </li>
35
- <li>Aparecerá una ventana emergente pidiéndole que instale Project Playtime. Haga clic en el botón "Next". </li>
36
- <li>Seleccione la carpeta de destino donde desea instalar el juego. Haga clic en el botón "Next". </li>
37
- <li> Se iniciará el proceso de descarga. Puede ver el progreso y la velocidad de la descarga en la pestaña "Descargas". </li>
38
- <li>Espere a que termine la descarga. Puede tomar algún tiempo dependiendo de su conexión a Internet y espacio en disco. </li>
39
- <li>Una vez que la descarga se haya completado, verá un mensaje que dice "Project Playtime ya está listo para jugar". Haga clic en el botón "Play". </li>
40
- <li>Has descargado e instalado correctamente el Project Playtime en Steam. ¡Disfruta! </li>
41
- </ol>
42
- <h2>Cómo jugar Project Playtime en Steam</h2>
43
- <p>Ahora que has descargado e instalado Project Playtime en Steam, puedes empezar a reproducirlo. Estos son los pasos para hacerlo:</p>
44
- <ol>
45
- <li>Iniciar tiempo de reproducción del proyecto desde la biblioteca de Steam o desde el acceso directo del escritorio. </li>
46
- <li>Verás el menú principal del juego. Puedes acceder a diferentes opciones como configuración, tienda, logros, perfil, etc.</li>
47
- <li>Para empezar a jugar, haga clic en el botón "Play". Verá dos opciones: "Quick Match" y "Custom Match". </li>
48
- <li>Si desea unirse a un lobby aleatorio en línea, haga clic en "Quick Match". Se le emparejará con otros jugadores en función de su región y preferencias. Puedes elegir jugar como sobreviviente o monstruo, o dejar que el juego decida por ti al azar. </li>
49
- <li>Si desea crear o unirse a un lobby privado con sus amigos, haga clic en "Custom Match". Puede crear un lobby y compartir su código, o introducir el código de un lobby existente para unirse. </li>
50
- <li>Una vez que estás en un lobby, puedes chatear con otros jugadores usando chat de voz o texto. También puedes cambiar la apariencia de tu personaje haciendo clic en el botón "Personalizar". Puedes equipar diferentes cosméticos, beneficios, sabotajes, etc. que hayas comprado o ganado en la tienda. También puede cambiar su rol haciendo clic en el botón "Rol". Puedes elegir jugar como sobreviviente o monstruo, o dejar que el juego decida por ti al azar. </li>
51
- <li>Cuando todo el mundo está listo, el anfitrión puede iniciar el partido haciendo clic en el botón "Inicio". El juego cargará el mapa y el modo que se seleccionaron. </li>
52
- <li>El partido comenzará con una corta escena que presenta la historia y el objetivo del juego. Los supervivientes aparecerán en un lugar aleatorio en la fábrica de juguetes. El monstruo aparecerá en una habitación oculta cercana. </li>
53
- <li>El objetivo de los supervivientes es encontrar y recoger seis piezas de juguete que están dispersas por el mapa. Necesitan llevarlos a una máquina de juguete gigante y montarlos juntos. También necesitan resolver rompecabezas, evitar trampas y esconderse del monstruo. Los sobrevivientes tienen una salud limitada, resistencia y batería de linterna. Pueden usar beneficios y sabotajes para ayudarles a escapar. </li>
54
- <li>El objetivo del monstruo es encontrar y matar a todos los supervivientes antes de que completen su objetivo. El monstruo puede usar diferentes habilidades, como correr, rugir, aplastar, etc. El monstruo también puede usar beneficios y sabotajes para obstaculizar el progreso de los sobrevivientes y atraparlos. </li>
55
- <li>El partido terminará cuando los supervivientes completen su objetivo y escapen, o cuando el monstruo mate a todos los supervivientes. El juego mostrará los resultados del partido, como quién ganó, quién murió, quién escapó, etc. El juego también otorgará entradas a cada jugador en función de su rendimiento. </li>
56
- <li> Puede jugar otro partido haciendo clic en el botón "Revancha", o volver al menú principal haciendo clic en el botón "Dejar". </li>
57
- </ol>
58
- <h2>Consejos y trucos para jugar Project Playtime</h2>
59
- <p>Tanto si juegas como sobreviviente como si juegas como monstruo, estos consejos y trucos te ayudarán a mejorar tu rendimiento en Project Playtime:</p>
60
- <h3>Cómo trabajar juntos como un sobreviviente</h3>
61
- <p>Como sobreviviente, necesitas cooperar con tus compañeros sobrevivientes para escapar del monstruo y completar tu objetivo. Aquí hay algunas maneras de trabajar juntos como un sobreviviente:</p>
62
- <ul>
63
- <li>Comunícate con tus compañeros de equipo mediante chat de voz o de texto. Puedes compartir información, advertirse, pedir ayuda, etc.</li>
64
- <li>Manténganse juntos tanto como sea posible. Es más probable que sobrevivan si tienen a alguien cuidando su espalda. También pueden ayudarse mutuamente con rompecabezas, trampas y sanación. </li>
65
- <li>Sepárate cuando sea necesario. A veces necesitas cubrir más terreno o distraer al monstruo. También puede utilizar diferentes ventajas y sabotajes para coordinar sus acciones. </li>
66
- <li>Sea consciente de su entorno. Necesita saber dónde están las piezas de juguete, rompecabezas, salidas, escondites, etc. También debe estar atento a pistas, ruidos, sombras, etc. que indiquen dónde está el monstruo. </li>
67
- <li>Sé inteligente y sigiloso. Debes evitar hacer ruido o dejar rastros que puedan alertar al monstruo. También necesita usar su linterna sabiamente y esconderse cuando sea necesario. </li>
68
- </ul>
69
- <h3>Cómo usar beneficios y sabotajes como sobreviviente</h3>
70
- <p>Como sobreviviente, puedes usar diferentes beneficios y sabotajes para obtener una ventaja sobre el monstruo. Aquí hay algunos ejemplos de ventajas y sabotajes que puedes usar como sobreviviente:</p>
71
- <ul>
72
- <li>Las ventajas son habilidades pasivas que le dan beneficios como el aumento de la salud, la resistencia, la velocidad, etc. Puede equipar hasta tres ventajas a la vez en el menú personalizado. </li>
73
- <li>Los sabotajes son habilidades activas que te permiten interactuar con objetos en el mapa como puertas, interruptores, respiraderos, etc. Puedes usar sabotajes para bloquear, atrapar o distraer al monstruo. Puede equipar hasta dos sabotajes a la vez en el menú personalizado. </li>
74
-
75
- <li>Algunos beneficios y sabotajes tienen tiempos de reutilización o cargos que limitan la frecuencia con la que puede usarlos. Por ejemplo, solo puedes usar el beneficio "Segunda Oportunidad" una vez por partido para revivirte después de ser derribado por el monstruo. Solo puedes usar el sabotaje "Firecracker" tres veces por partido para crear un ruido fuerte que atraiga o espante al monstruo. </li>
76
- <li>Puedes comprar nuevos beneficios y sabotajes en la tienda con entradas que ganas jugando partidos, completando logros y abriendo cajas de juguetes. </li>
77
- </ul>
78
- <h3>Cómo cazar sobrevivientes como un monstruo</h3>
79
- <p>Como monstruo, necesitas usar tus sentidos, habilidades y estrategias para encontrar y matar a todos los supervivientes antes de que escapen. Aquí hay algunas maneras de cazar sobrevivientes como un monstruo:</p>
81
- <ul>
82
- <li>Usa tu visión, oído y olor para localizar a los sobrevivientes. Puedes ver sus huellas, escuchar sus ruidos y oler su sangre. También puede ver sus rayos de linterna y siluetas. </li>
83
- <li>Usa tus habilidades para perseguir, atacar y matar a los supervivientes. Puedes correr, rugir, aplastar y morder. También puedes usar habilidades especiales que son únicas para cada monstruo. Por ejemplo, el Payaso puede lanzar pasteles que ciegan a los sobrevivientes, la Muñeca puede teletransportarse a las muñecas cercanas, y el Teddy puede transformarse en un oso gigante. </li>
84
- <li>Usa tus beneficios y sabotajes para obstaculizar, atrapar y asustar a los sobrevivientes. Puede equipar hasta tres ventajas y dos sabotajes a la vez en el menú personalizado. Los beneficios son habilidades pasivas que te dan beneficios como mayor velocidad, daño, salud, etc. Los sabotajes son habilidades activas que te permiten interactuar con objetos en el mapa como puertas, interruptores, respiraderos, etc. Puedes usar sabotajes para bloquear, atrapar o distraer a los sobrevivientes. </li>
85
-
86
- <li>Algunos beneficios y sabotajes tienen tiempos de reutilización o cargos que limitan la frecuencia con la que puede usarlos. Por ejemplo, solo puedes usar el beneficio "Rage" una vez por partido para aumentar tu daño y velocidad por un corto tiempo. Solo puedes usar el sabotaje "Blackout" tres veces por partido para apagar todas las luces del mapa durante unos segundos. </li>
87
- <li>Puedes comprar nuevos beneficios y sabotajes en la tienda con entradas que ganas jugando partidos, completando logros y abriendo cajas de juguetes. </li>
88
- </ul>
89
- <h2>Cómo personalizar tu personaje y jugabilidad en Project Playtime</h2>
90
- <p>Project Playtime te permite personalizar tu personaje y jugabilidad de varias maneras. Puede acceder a la tienda, comprar cosméticos, beneficios, sabotajes y otros artículos con boletos. También puedes cambiar la configuración, como gráficos, sonido, controles, etc. Estas son algunas formas de personalizar tu personaje y el modo de juego en Project Playtime:</p>
91
- <h3>Cómo ganar entradas en Project Playtime</h3>
92
- <p>Las entradas son la moneda de Project Playtime. Puedes utilizar las entradas para comprar artículos en la tienda. Aquí hay algunas maneras de ganar entradas en Project Playtime:</p>
93
- <ul>
94
- <li>Juega partidos. Ganarás entradas según tu rendimiento en cada partido. La cantidad de entradas que ganes dependerá de factores como tu rol, tu puntuación, el resultado de tu equipo, etc.</li>
95
- <li>Completa logros. Ganarás entradas por completar varios logros en el juego. Los logros son desafíos que requieren que hagas tareas específicas o alcances ciertos hitos en el juego. Por ejemplo, puedes ganar un logro por escapar como sobreviviente 10 veces o matar a 50 sobrevivientes como monstruo. </li>
96
- <li>Abre cajas de juguetes. Ganarás entradas por abrir cajas de juguetes que encuentres en el mapa o que recibas como recompensa. Las cajas de juguetes son contenedores que contienen artículos aleatorios como cosméticos, beneficios, sabotajes, etc. Puedes abrir cajas de juguetes haciendo clic en ellas en la tienda o en tu inventario. </li>
97
- </ul>
98
- <h3>Cómo gastar entradas en Project Playtime</h3>
99
- <p>Puedes gastar tus entradas en la tienda para comprar varios artículos. Aquí hay algunas maneras de gastar entradas en Project Playtime:</p>
100
- <ul>
101
- <li>Navegar por la tienda. Puede acceder a la tienda haciendo clic en el botón "Tienda" en el menú principal o en el lobby. Puede navegar por diferentes categorías de artículos como cosméticos, beneficios, sabotajes, etc. Puede ver el nombre, la descripción, el precio y la vista previa de cada artículo. También puede filtrar elementos por rol, rareza, tipo, etc.</li>
102
- <li>Comprar artículos. Puede comprar artículos haciendo clic en el botón "Comprar" junto al artículo que desee. Verá una ventana de confirmación que le pedirá que confirme su compra. También puede ver cuántas entradas tiene y cuántas entradas gastará. Haga clic en el botón "Confirmar" para completar su compra. </li>
103
- <li>Equipar artículos. Puede equipar artículos haciendo clic en el botón "Personalizar" en el vestíbulo o en la tienda. Puede ver la apariencia y las estadísticas de su personaje, así como los artículos que ha comprado o ganado en su inventario. Puede equipar los elementos arrastrándolos y soltándolos en las ranuras correspondientes. Puede equipar hasta tres cosméticos, tres beneficios y dos sabotajes a la vez. También puede desequipar elementos arrastrándolos y soltándolos en la papelera. </li>
104
- <li>Usa elementos. Puedes usar elementos jugando con tu personaje personalizado. Puedes ver los elementos equipados en el menú del partido o en la pantalla del juego. Puede usar beneficios y sabotajes presionando los botones o teclas correspondientes. </li>
105
- </ul>
106
- <h2>Conclusión</h2>
107
- <p>Project Playtime es un juego de terror multijugador gratuito que puedes descargar y jugar en Steam. En este artículo te hemos mostrado cómo crear una cuenta de Steam, instalar el cliente, descargar el juego, jugarlo y personalizarlo. Esperamos que esta guía te haya sido útil. ¡Diviértete! </p>
108
- <h2>Preguntas frecuentes</h2>
109
- <p>Aquí hay algunas preguntas frecuentes sobre Project Playtime:</p>
110
- <ul>
111
- <li><b>Q: ¿Es gratuito Project Playtime? </b></li>
112
- <li>A: Sí, Project Playtime es gratuito. No necesitas pagar nada para descargar y jugar el juego. </li>
113
- <li><b>Q: ¿Es multijugador Project Playtime? </b></li>
114
- <li>A: Sí, Project Playtime es multijugador. Puedes jugar con tus amigos o unirte a lobbies aleatorios en línea. </li>
115
- <li><b>Q: ¿Es Project Playtime un juego de terror? </b></li>
116
- <li>A: Sí, Project Playtime es un juego de terror. El juego tiene una atmósfera oscura y espeluznante que te hará sentir incómodo y asustado. </li>
117
- <li><b>Q: ¿Cuántos jugadores pueden jugar Project Playtime? </b></li>
118
- <li>A: Project Playtime admite hasta siete jugadores por partido. Seis jugadores juegan como supervivientes y un jugador juega como un monstruo. </li>
119
- <li><b>Q: ¿Cuánto dura un partido en Project Playtime? </b></li>
120
- <li>A: Un partido en Project Playtime dura de 10 a 15 minutos, dependiendo del mapa, el modo, la dificultad y las habilidades de los jugadores. </li>
121
- </ul>
 
spaces/Benson/text-generation/Examples/Descargar 3d Fondo De Pantalla En Vivo.md DELETED
@@ -1,94 +0,0 @@
1
-
2
- <h1>Descargar 3D Live Wallpaper: Cómo hacer que su escritorio cobre vida</h1>
3
- <p>¿Quieres darle vida a tu escritorio con algunas imágenes impresionantes? ¿Quieres hacer tu computadora más personalizada e interactiva? ¿Quieres divertirte y entretenerte mientras trabajas o estudias? Si respondiste sí a cualquiera de estas preguntas, entonces deberías intentar descargar fondos de escritorio en 3D. </p>
5
- <p>3D live wallpaper es un tipo de papel pintado animado que utiliza gráficos tridimensionales para crear escenas realistas e inmersivas en su pantalla. A diferencia de los fondos de pantalla estáticos, los fondos de pantalla en vivo en 3D pueden moverse, cambiar y reaccionar a sus acciones. Puedes elegir entre una variedad de temas y estilos, como naturaleza, fantasía, ciencia ficción, anime, películas, juegos y más. También puede crear su propio fondo de pantalla en vivo en 3D utilizando imágenes, vídeos, sitios web o aplicaciones. </p>
6
- <p>En este artículo, le mostraremos cómo descargar fondos de escritorio en vivo en 3D para su escritorio y cómo usarlo de manera efectiva. También compartiremos algunos de los beneficios de usar fondos de escritorio en vivo en 3D y responderemos algunas preguntas comunes al respecto. Al final de este artículo, podrás hacer que tu escritorio cobre vida con un increíble fondo de pantalla en vivo en 3D. </p>
7
- <h2>¿Qué son los fondos de pantalla en vivo en 3D? </h2>
8
- <p>Como su nombre indica, 3D live wallpaper es un tipo de fondo de pantalla que utiliza gráficos tridimensionales para crear escenas dinámicas y realistas en la pantalla. A diferencia de los fondos de pantalla normales, que son solo imágenes que permanecen inmóviles en su fondo, el fondo de pantalla en vivo en 3D puede moverse, cambiar e interactuar con su ratón o teclado. Por ejemplo, puede tener un fondo de pantalla en vivo en 3D de un bosque que cambia con las estaciones, o un fondo de pantalla en vivo en 3D de una nave espacial que vuela por el espacio. </p>
9
-
10
- <p>Algunos ejemplos de temas populares de fondos de escritorio en vivo en 3D son:</p>
12
- <ul>
13
- <li>Naturaleza: Usted puede tener un fondo de pantalla en vivo 3D de una cascada, una montaña, una playa, un bosque, o cualquier otro paisaje natural que te gusta. </li>
14
- <li>Fantasía: Puedes tener un fondo de pantalla en vivo en 3D de un dragón, un unicornio, un hada, un castillo o cualquier otra criatura de fantasía o escenario que te guste. </li>
15
- <li>Ciencia ficción: Puedes tener un fondo de pantalla en 3D de una nave espacial, un robot, un planeta alienígena, una ciudad futurista o cualquier otro elemento de ciencia ficción que te guste. </li>
16
- <li>Anime: Puede tener un fondo de pantalla en 3D en vivo de su personaje o escena de anime favorito de un espectáculo de anime o película. </li>
17
- <li>Películas: Puedes tener un fondo de pantalla en 3D de tu personaje o escena de película favorita de una película que te guste. </li>
18
- <li>Juegos: Puedes tener un fondo de pantalla en 3D en vivo de tu personaje o escena favorita de un juego que te guste. </li>
19
- </ul>
20
- <p>Estos son solo algunos de los ejemplos de temas de fondos de escritorio en vivo en 3D que puedes encontrar en línea. Hay muchas más opciones y categorías que puedes explorar y descargar. </p>
21
- <h2>¿Por qué utilizar fondos de pantalla en vivo 3D? </h2>
22
- <p>Ahora que sabes lo que son los fondos de pantalla en vivo en 3D, es posible que te estés preguntando por qué deberías usarlos en tu escritorio. Estos son algunos de los beneficios de usar fondos de pantalla en vivo en 3D:</p>
23
- <ul>
24
- <li>Personalización: Puede personalizar su escritorio con fondos de pantalla en vivo en 3D que se adapten a sus preferencias, personalidad, estado de ánimo o intereses. También puede crear sus propios fondos de pantalla en 3D usando sus propias imágenes, videos, sitios web o aplicaciones. </li>
25
- <li>Interactividad: Puedes interactuar con tus fondos de pantalla en 3D usando tu ratón o teclado. También puede ajustar la configuración y las características de sus fondos de pantalla en vivo en 3D para hacerlos más receptivos e interactivos. </li>
26
- <li>Entretenimiento: Puede disfrutar viendo sus fondos de pantalla en vivo en 3D a medida que se mueven, cambian y reaccionan a sus acciones. También puede divertirse jugando con sus fondos de pantalla en vivo en 3D, ya que ofrecen varios efectos y animaciones. </li>
27
- </ul>
28
-
29
- <h2>¿Cómo descargar fondos de pantalla en vivo en 3D? </h2>
30
- <p>Si desea descargar fondos de pantalla en vivo en 3D para su escritorio, tiene varias fuentes y métodos para elegir. Estos son algunos de los más populares y confiables:</p>
31
- <h3>MoeWalls</h3>
32
- <p>MoeWalls es un sitio web que ofrece populares fondos de pantalla en vivo gratuitos, fondos de pantalla animados y videos para su escritorio. Puede navegar a través de varias categorías y géneros de fondos de pantalla en vivo en 3D, como anime, juegos, películas, naturaleza, fantasía, ciencia ficción y más. También puede buscar palabras clave específicas o títulos de fondos de pantalla en 3D que desea descargar. </p>
33
- <p>Para descargar fondos de pantalla en vivo en 3D de MoeWalls, debe seguir estos pasos:</p>
34
- <ol>
35
- <li>Ir a <a href=">MoeWalls.com</a>. </li>
36
- <li>Seleccione la categoría o género de fondo de pantalla en vivo en 3D que desea descargar. </li>
37
- <li>Elija el fondo de pantalla en vivo en 3D que desee de la lista de resultados. </li>
38
- <li>Haga clic en el botón de descarga debajo de la vista previa del fondo de pantalla en vivo 3D. </li>
39
- <li>Guarde el archivo en su computadora. </li>
40
- </ol>
41
- <p>A continuación, puede utilizar el archivo como fondo de escritorio o utilizar un software como Wallpaper Engine para ejecutarlo como fondo de pantalla en vivo. </p>
42
- <h3>Motor de fondos de pantalla</h3>
43
- <p>Wallpaper Engine es un software que le permite utilizar fondos de pantalla en vivo en el escritorio de Windows. Puede utilizar varios tipos de fondos de pantalla en vivo, incluyendo animaciones 3D y 2D, sitios web, videos y aplicaciones. También puede crear sus propios fondos de pantalla en vivo utilizando el editor de Wallpaper Engine.</p>
44
- <p>Para descargar fondos de pantalla en vivo en 3D de Wallpaper Engine, debe seguir estos pasos:</p>
45
- <ol>
46
- <li>Ve a <a href=">Steam</a> y compra Wallpaper Engine por $4.99. </li>
47
- <li>Instalar Wallpaper Engine en su computadora. </li>
48
- <li>Inicie Wallpaper Engine y haga clic en la pestaña Taller. </li>
49
- <li>Seleccione la categoría o género de fondo de pantalla en vivo en 3D que desea descargar. </li>
50
- <li>Elija el fondo de pantalla en vivo en 3D que desee de la lista de resultados. </li>
51
- <li>Haga clic en el botón "Suscribirse" junto al fondo de pantalla en vivo 3D que desee. </li>
52
- <li>El fondo de pantalla en vivo 3D se descargará y se agregará a su biblioteca de Wallpaper Engine. </li>
53
- </ol>
54
- <p>A continuación, puede seleccionar el fondo de pantalla en vivo 3D de su biblioteca y aplicarlo como fondo de escritorio. </p>
55
- <h3>Pexels Videos</h3>
56
- <p>Pexels Videos es un sitio web que proporciona videos de stock gratuitos para uso personal y comercial. Usted puede encontrar varios tipos de vídeos en Pexels Videos, incluyendo fondos de escritorio videos 3D. Estos son videos que están diseñados para ser utilizados como fondos de escritorio con gráficos 3D realistas e inmersivos. </p>
57
- <p>Para descargar videos de escritorio en 3D de Pexels Videos, debe seguir estos pasos:</p>
58
- <ol>
59
- <li>Ir a <a href="">Pexels.com/videos</a>. </li>
60
- <li>Escriba "fondo de escritorio 3d" en el cuadro de búsqueda y pulse enter. </li>
61
- <li>Elija el vídeo que desee de la lista de resultados. </li>
62
- <li>Haga clic en el botón de descarga debajo de la vista previa del video. </li>
63
- <li>Guarde el archivo en su computadora. </li>
64
- </ol>
65
- <p>A continuación, puede utilizar el archivo como fondo de escritorio o utilizar un software como Wallpaper Engine para ejecutarlo como fondo de pantalla en vivo. </p>
66
- <h2>¿Cómo usar fondos de pantalla en vivo en 3D? </h2>
67
- <p>Una vez que haya descargado fondos de pantalla en vivo en 3D para su escritorio, es posible que desee saber cómo usarlos de manera efectiva. Aquí hay algunos consejos y trucos sobre cómo utilizar fondos de pantalla en vivo 3D en su escritorio:</p>
68
- <ul>
69
- <li>Configurarlos como fondo: Puede establecer fondos de pantalla en vivo 3D como fondo de escritorio haciendo clic derecho en el archivo y seleccionando "Establecer como fondo de escritorio". Alternativamente, puede utilizar un software como Wallpaper Engine para aplicar fondos de pantalla en vivo 3D como fondo de escritorio. Wallpaper Engine también le permite personalizar la configuración y las características de sus fondos de pantalla en vivo en 3D, tales como resolución, velocidad de fotogramas, sonido, rendimiento y más. </li>
70
- <li>Ajustándolos según sus necesidades: Puede ajustar la configuración de sus fondos de pantalla en vivo 3D, como la resolución, la velocidad de fotogramas y el sonido, para equilibrar la calidad visual y el consumo de recursos. </li>
71
- <li>Pausándolos cuando sea necesario: Puede pausar sus fondos de pantalla en vivo en 3D cuando necesite centrarse en otras tareas o ahorrar batería. Puede hacer esto haciendo clic derecho en el escritorio y seleccionando "Pausa" o "Detener" en el menú. Alternativamente, puede usar un software como Wallpaper Engine para pausar sus fondos de pantalla en vivo 3D automáticamente cuando está usando una aplicación de pantalla completa o cuando su computadora está inactiva. </li>
72
- </ul>
73
- <p>El uso de fondos de pantalla en vivo en 3D puede mejorar su experiencia de escritorio y hacerla más agradable. Sin embargo, también debe tener en cuenta los posibles inconvenientes de usar fondos de pantalla en vivo en 3D, como el aumento del uso de CPU y GPU, el consumo de batería y la distracción. También debe asegurarse de que su computadora cumple con los requisitos mínimos para ejecutar fondos de pantalla en vivo 3D sin problemas y sin retrasos. </p>
74
- <h2>Conclusión</h2>
75
- <p>En conclusión, 3D live wallpaper es un tipo de papel pintado animado que utiliza gráficos tridimensionales para crear escenas realistas e inmersivas en la pantalla. Puede descargar fondos de escritorio en vivo en 3D de varias fuentes en línea, como MoeWalls, Wallpaper Engine y Pexels Videos. También puede usar fondos de escritorio en vivo en 3D de manera efectiva ajustando los ajustes y pausándolos cuando sea necesario. </p>
76
- <p>Si quieres hacer que tu escritorio cobre vida con un increíble fondo de pantalla en vivo en 3D, deberías intentar descargar algunos de ellos hoy. Usted se sorprenderá por lo mucho que pueden transformar su escritorio en un entorno impresionante e interactivo. También tendrás diversión y entretenimiento mientras trabajas o estudias con tu fondo de pantalla en 3D. </p>
77
- <p>Entonces, ¿qué estás esperando? ¡Descarga un fondo de pantalla en vivo 3D ahora y disfruta! </p>
78
- <h2>Preguntas frecuentes</h2>
79
- <p>Aquí están algunas de las preguntas y respuestas más frecuentes sobre el fondo de pantalla en vivo 3D:</p>
80
- <ol>
81
- <li><b>¿Cuál es la diferencia entre el papel pintado en vivo en 3D y el papel pintado normal? </b><br>
82
-
83
- <li><b>¿Cuánto cuesta descargar fondos de escritorio en 3D? </b><br>
84
- Depende de la fuente y el tipo de fondo de pantalla en vivo 3D que desea descargar. Algunos de ellos son gratuitos, mientras que algunos de ellos requieren una cuota o una suscripción. Por ejemplo, MoeWalls y Pexels Videos ofrecen fondos de escritorio en vivo en 3D gratis, mientras que Wallpaper Engine cuesta $4.99. </li>
85
- <li><b>¿El uso de fondos de escritorio en vivo en 3D afecta el rendimiento de mi computadora? </b><br>
86
- Depende de la calidad y la complejidad del fondo de pantalla en vivo 3D que está utilizando. Algunos de ellos pueden consumir más recursos de CPU y GPU que otros, lo que puede afectar el rendimiento de su computadora. Puede reducir este impacto reduciendo la resolución o la velocidad de fotogramas de su fondo de pantalla 3D en vivo o deteniéndolo cuando no esté en uso. </li>
87
- <li><b>¿Puedo crear mi propio fondo de pantalla en vivo en 3D? </b><br>
88
- Sí, puede crear su propio fondo de pantalla en vivo en 3D utilizando imágenes, videos, sitios web o aplicaciones. Puede utilizar un software como Wallpaper Engine para crear y editar su propio fondo de pantalla en vivo 3D utilizando su editor incorporado. </li>
89
- <li><b>¿Puedo usar fondos de escritorio en vivo en 3D en otros dispositivos además de mi escritorio? </b><br>
90
- Sí, puede usar fondos de escritorio en vivo en 3D en otros dispositivos, como computadoras portátiles, tabletas, teléfonos inteligentes o televisores inteligentes. Sin embargo, es posible que necesite utilizar diferentes fuentes o métodos para descargar y aplicar fondos de escritorio en vivo 3D en diferentes dispositivos. Por ejemplo, puedes usar aplicaciones como 3D Wallpaper Parallax o Live Wallpapers 3D/4K para tu smartphone o tablet. </li>
91
- </ol>
92
- <p>Espero que este artículo haya respondido a sus preguntas y le haya ayudado a aprender más sobre el fondo de pantalla en vivo en 3D. Si tiene alguna otra pregunta o comentario, no dude en dejarlos a continuación. ¡Gracias por leer y que tenga un gran día! </p>
 
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/commands/index.py DELETED
@@ -1,139 +0,0 @@
1
- import logging
2
- from optparse import Values
3
- from typing import Any, Iterable, List, Optional, Union
4
-
5
- from pip._vendor.packaging.version import LegacyVersion, Version
6
-
7
- from pip._internal.cli import cmdoptions
8
- from pip._internal.cli.req_command import IndexGroupCommand
9
- from pip._internal.cli.status_codes import ERROR, SUCCESS
10
- from pip._internal.commands.search import print_dist_installation_info
11
- from pip._internal.exceptions import CommandError, DistributionNotFound, PipError
12
- from pip._internal.index.collector import LinkCollector
13
- from pip._internal.index.package_finder import PackageFinder
14
- from pip._internal.models.selection_prefs import SelectionPreferences
15
- from pip._internal.models.target_python import TargetPython
16
- from pip._internal.network.session import PipSession
17
- from pip._internal.utils.misc import write_output
18
-
19
- logger = logging.getLogger(__name__)
20
-
21
-
22
- class IndexCommand(IndexGroupCommand):
23
- """
24
- Inspect information available from package indexes.
25
- """
26
-
27
- ignore_require_venv = True
28
- usage = """
29
- %prog versions <package>
30
- """
31
-
32
- def add_options(self) -> None:
33
- cmdoptions.add_target_python_options(self.cmd_opts)
34
-
35
- self.cmd_opts.add_option(cmdoptions.ignore_requires_python())
36
- self.cmd_opts.add_option(cmdoptions.pre())
37
- self.cmd_opts.add_option(cmdoptions.no_binary())
38
- self.cmd_opts.add_option(cmdoptions.only_binary())
39
-
40
- index_opts = cmdoptions.make_option_group(
41
- cmdoptions.index_group,
42
- self.parser,
43
- )
44
-
45
- self.parser.insert_option_group(0, index_opts)
46
- self.parser.insert_option_group(0, self.cmd_opts)
47
-
48
- def run(self, options: Values, args: List[str]) -> int:
49
- handlers = {
50
- "versions": self.get_available_package_versions,
51
- }
52
-
53
- logger.warning(
54
- "pip index is currently an experimental command. "
55
- "It may be removed/changed in a future release "
56
- "without prior warning."
57
- )
58
-
59
- # Determine action
60
- if not args or args[0] not in handlers:
61
- logger.error(
62
- "Need an action (%s) to perform.",
63
- ", ".join(sorted(handlers)),
64
- )
65
- return ERROR
66
-
67
- action = args[0]
68
-
69
- # Error handling happens here, not in the action-handlers.
70
- try:
71
- handlers[action](options, args[1:])
72
- except PipError as e:
73
- logger.error(e.args[0])
74
- return ERROR
75
-
76
- return SUCCESS
77
-
78
- def _build_package_finder(
79
- self,
80
- options: Values,
81
- session: PipSession,
82
- target_python: Optional[TargetPython] = None,
83
- ignore_requires_python: Optional[bool] = None,
84
- ) -> PackageFinder:
85
- """
86
- Create a package finder appropriate to the index command.
87
- """
88
- link_collector = LinkCollector.create(session, options=options)
89
-
90
- # Pass allow_yanked=False to ignore yanked versions.
91
- selection_prefs = SelectionPreferences(
92
- allow_yanked=False,
93
- allow_all_prereleases=options.pre,
94
- ignore_requires_python=ignore_requires_python,
95
- )
96
-
97
- return PackageFinder.create(
98
- link_collector=link_collector,
99
- selection_prefs=selection_prefs,
100
- target_python=target_python,
101
- )
102
-
103
- def get_available_package_versions(self, options: Values, args: List[Any]) -> None:
104
- if len(args) != 1:
105
- raise CommandError("You need to specify exactly one argument")
106
-
107
- target_python = cmdoptions.make_target_python(options)
108
- query = args[0]
109
-
110
- with self._build_session(options) as session:
111
- finder = self._build_package_finder(
112
- options=options,
113
- session=session,
114
- target_python=target_python,
115
- ignore_requires_python=options.ignore_requires_python,
116
- )
117
-
118
- versions: Iterable[Union[LegacyVersion, Version]] = (
119
- candidate.version for candidate in finder.find_all_candidates(query)
120
- )
121
-
122
- if not options.pre:
123
- # Remove prereleases
124
- versions = (
125
- version for version in versions if not version.is_prerelease
126
- )
127
- versions = set(versions)
128
-
129
- if not versions:
130
- raise DistributionNotFound(
131
- "No matching distribution found for {}".format(query)
132
- )
133
-
134
- formatted_versions = [str(ver) for ver in sorted(versions, reverse=True)]
135
- latest = formatted_versions[0]
136
-
137
- write_output("{} ({})".format(query, latest))
138
- write_output("Available versions: {}".format(", ".join(formatted_versions)))
139
- print_dist_installation_info(query, latest)
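The version listing above boils down to: collect candidate versions, drop prereleases unless `--pre` was given, then sort newest-first. A minimal standalone sketch of that filtering, assuming the `packaging` library (which pip vendors) is installed:

```python
# Sketch of the filtering done by `pip index versions` above, using the
# standalone `packaging` library instead of pip's vendored copy.
from packaging.version import Version

def available_versions(raw_versions, allow_prereleases=False):
    versions = {Version(v) for v in raw_versions}
    if not allow_prereleases:
        # Mirrors the `if not options.pre` branch in the command above.
        versions = {v for v in versions if not v.is_prerelease}
    return [str(v) for v in sorted(versions, reverse=True)]

print(available_versions(["1.0", "1.1", "2.0rc1"]))                   # ['1.1', '1.0']
print(available_versions(["1.0", "2.0rc1"], allow_prereleases=True))  # ['2.0rc1', '1.0']
```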
 
spaces/Big-Web/MMSD/env/Lib/site-packages/s3transfer/compat.py DELETED
@@ -1,94 +0,0 @@
1
- # Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2
- #
3
- # Licensed under the Apache License, Version 2.0 (the "License"). You
4
- # may not use this file except in compliance with the License. A copy of
5
- # the License is located at
6
- #
7
- # http://aws.amazon.com/apache2.0/
8
- #
9
- # or in the "license" file accompanying this file. This file is
10
- # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11
- # ANY KIND, either express or implied. See the License for the specific
12
- # language governing permissions and limitations under the License.
13
- import errno
14
- import inspect
15
- import os
16
- import socket
17
- import sys
18
-
19
- from botocore.compat import six
20
-
21
- if sys.platform.startswith('win'):
22
- def rename_file(current_filename, new_filename):
23
- try:
24
- os.remove(new_filename)
25
- except OSError as e:
26
- if not e.errno == errno.ENOENT:
27
- # We only want to ignore trying to remove
28
- # a file that does not exist. If it fails
29
- # for any other reason we should be propagating
30
- # that exception.
31
- raise
32
- os.rename(current_filename, new_filename)
33
- else:
34
- rename_file = os.rename
35
-
36
-
37
- def accepts_kwargs(func):
38
- return inspect.getfullargspec(func)[2]
39
-
40
-
41
- # In python 3, socket.error is OSError, which is too general
42
- # for what we want (i.e FileNotFoundError is a subclass of OSError).
43
- # In python 3, all the socket related errors are in a newly created
44
- # ConnectionError.
45
- SOCKET_ERROR = ConnectionError
46
- MAXINT = None
47
-
48
-
49
- def seekable(fileobj):
50
- """Backwards compat function to determine if a fileobj is seekable
51
-
52
- :param fileobj: The file-like object to determine if seekable
53
-
54
- :returns: True, if seekable. False, otherwise.
55
- """
56
- # If the fileobj has a seekable attr, try calling the seekable()
57
- # method on it.
58
- if hasattr(fileobj, 'seekable'):
59
- return fileobj.seekable()
60
- # If there is no seekable attr, check if the object can be seeked
61
- # or telled. If it can, try to seek to the current position.
62
- elif hasattr(fileobj, 'seek') and hasattr(fileobj, 'tell'):
63
- try:
64
- fileobj.seek(0, 1)
65
- return True
66
- except OSError:
67
- # If an io related error was thrown then it is not seekable.
68
- return False
69
- # Else, the fileobj is not seekable
70
- return False
71
-
72
-
73
- def readable(fileobj):
74
- """Determines whether or not a file-like object is readable.
75
-
76
- :param fileobj: The file-like object to determine if readable
77
-
78
- :returns: True, if readable. False otherwise.
79
- """
80
- if hasattr(fileobj, 'readable'):
81
- return fileobj.readable()
82
-
83
- return hasattr(fileobj, 'read')
84
-
85
-
86
- def fallocate(fileobj, size):
87
- if hasattr(os, 'posix_fallocate'):
88
- os.posix_fallocate(fileobj.fileno(), 0, size)
89
- else:
90
- fileobj.truncate(size)
91
-
92
-
93
- # Import at end of file to avoid circular dependencies
94
- from multiprocessing.managers import BaseManager # noqa: F401,E402
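The `seekable` helper above has two paths: the `.seekable()` fast path and the `seek(0, 1)` probe fallback. A small sketch exercising both branches (the `TellOnly` class is a made-up stand-in, not part of s3transfer):

```python
import io

from s3transfer.compat import readable, seekable

class TellOnly:
    """File-like object with seek/tell but no .seekable() method;
    its failing seek probe forces the fallback branch above."""

    def seek(self, offset, whence=0):
        raise OSError("underlying stream is not seekable")

    def tell(self):
        return 0

print(seekable(io.BytesIO(b"data")))  # True: BytesIO exposes .seekable()
print(seekable(TellOnly()))           # False: the seek(0, 1) probe raises OSError
print(readable(io.BytesIO(b"data")))  # True: BytesIO exposes .readable()
print(readable(object()))             # False: neither readable() nor read()
```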
 
spaces/Branon/Proxy/Dockerfile DELETED
@@ -1,11 +0,0 @@
1
- FROM node:18-bullseye-slim
2
- RUN apt-get update && \
3
- apt-get install -y git
4
- RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
5
- WORKDIR /app
6
- RUN npm install
7
- COPY Dockerfile greeting.md* .env* ./
8
- RUN npm run build
9
- EXPOSE 7860
10
- ENV NODE_ENV=production
11
- CMD [ "npm", "start" ]
 
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/demo/README.md DELETED
@@ -1,8 +0,0 @@
1
-
2
- ## Detectron2 Demo
3
-
4
- We provide a command line tool to run a simple demo of builtin models.
5
- The usage is explained in [GETTING_STARTED.md](../GETTING_STARTED.md).
6
-
7
- See our [blog post](https://ai.facebook.com/blog/-detectron2-a-pytorch-based-modular-object-detection-library-)
8
- for a high-quality demo generated with this tool.
 
spaces/CVPR/LIVE/pybind11/include/pybind11/detail/init.h DELETED
@@ -1,336 +0,0 @@
1
- /*
2
- pybind11/detail/init.h: init factory function implementation and support code.
3
-
4
- Copyright (c) 2017 Jason Rhinelander <[email protected]>
5
-
6
- All rights reserved. Use of this source code is governed by a
7
- BSD-style license that can be found in the LICENSE file.
8
- */
9
-
10
- #pragma once
11
-
12
- #include "class.h"
13
-
14
- PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE)
15
- PYBIND11_NAMESPACE_BEGIN(detail)
16
-
17
- template <>
18
- class type_caster<value_and_holder> {
19
- public:
20
- bool load(handle h, bool) {
21
- value = reinterpret_cast<value_and_holder *>(h.ptr());
22
- return true;
23
- }
24
-
25
- template <typename> using cast_op_type = value_and_holder &;
26
- operator value_and_holder &() { return *value; }
27
- static constexpr auto name = _<value_and_holder>();
28
-
29
- private:
30
- value_and_holder *value = nullptr;
31
- };
32
-
33
- PYBIND11_NAMESPACE_BEGIN(initimpl)
34
-
35
- inline void no_nullptr(void *ptr) {
36
- if (!ptr) throw type_error("pybind11::init(): factory function returned nullptr");
37
- }
38
-
39
- // Implementing functions for all forms of py::init<...> and py::init(...)
40
- template <typename Class> using Cpp = typename Class::type;
41
- template <typename Class> using Alias = typename Class::type_alias;
42
- template <typename Class> using Holder = typename Class::holder_type;
43
-
44
- template <typename Class> using is_alias_constructible = std::is_constructible<Alias<Class>, Cpp<Class> &&>;
45
-
46
- // Takes a Cpp pointer and returns true if it actually is a polymorphic Alias instance.
47
- template <typename Class, enable_if_t<Class::has_alias, int> = 0>
48
- bool is_alias(Cpp<Class> *ptr) {
49
- return dynamic_cast<Alias<Class> *>(ptr) != nullptr;
50
- }
51
- // Failing fallback version of the above for a no-alias class (always returns false)
52
- template <typename /*Class*/>
53
- constexpr bool is_alias(void *) { return false; }
54
-
55
- // Constructs and returns a new object; if the given arguments don't map to a constructor, we fall
56
- // back to brace aggregate initialization so that aggregate initialization can be used with
57
- // py::init, e.g. `py::init<int, int>` to initialize a `struct T { int a; int b; }`. For
58
- // non-aggregate types, we need to use an ordinary T(...) constructor (invoking as `T{...}` usually
59
- // works, but will not do the expected thing when `T` has an `initializer_list<T>` constructor).
60
- template <typename Class, typename... Args, detail::enable_if_t<std::is_constructible<Class, Args...>::value, int> = 0>
61
- inline Class *construct_or_initialize(Args &&...args) { return new Class(std::forward<Args>(args)...); }
62
- template <typename Class, typename... Args, detail::enable_if_t<!std::is_constructible<Class, Args...>::value, int> = 0>
63
- inline Class *construct_or_initialize(Args &&...args) { return new Class{std::forward<Args>(args)...}; }
64
-
65
- // Attempts to constructs an alias using a `Alias(Cpp &&)` constructor. This allows types with
66
- // an alias to provide only a single Cpp factory function as long as the Alias can be
67
- // constructed from an rvalue reference of the base Cpp type. This means that Alias classes
68
- // can, when appropriate, simply define a `Alias(Cpp &&)` constructor rather than needing to
69
- // inherit all the base class constructors.
70
- template <typename Class>
71
- void construct_alias_from_cpp(std::true_type /*is_alias_constructible*/,
72
- value_and_holder &v_h, Cpp<Class> &&base) {
73
- v_h.value_ptr() = new Alias<Class>(std::move(base));
74
- }
75
- template <typename Class>
76
- [[noreturn]] void construct_alias_from_cpp(std::false_type /*!is_alias_constructible*/,
77
- value_and_holder &, Cpp<Class> &&) {
78
- throw type_error("pybind11::init(): unable to convert returned instance to required "
79
- "alias class: no `Alias<Class>(Class &&)` constructor available");
80
- }
81
-
82
- // Error-generating fallback for factories that don't match one of the below construction
83
- // mechanisms.
84
- template <typename Class>
85
- void construct(...) {
86
- static_assert(!std::is_same<Class, Class>::value /* always false */,
87
- "pybind11::init(): init function must return a compatible pointer, "
88
- "holder, or value");
89
- }
90
-
91
- // Pointer return v1: the factory function returns a class pointer for a registered class.
92
- // If we don't need an alias (because this class doesn't have one, or because the final type is
93
- // inherited on the Python side) we can simply take over ownership. Otherwise we need to try to
94
- // construct an Alias from the returned base instance.
95
- template <typename Class>
96
- void construct(value_and_holder &v_h, Cpp<Class> *ptr, bool need_alias) {
97
- no_nullptr(ptr);
98
- if (Class::has_alias && need_alias && !is_alias<Class>(ptr)) {
99
- // We're going to try to construct an alias by moving the cpp type. Whether or not
100
- // that succeeds, we still need to destroy the original cpp pointer (either the
101
- // moved away leftover, if the alias construction works, or the value itself if we
102
- // throw an error), but we can't just call `delete ptr`: it might have a special
103
- // deleter, or might be shared_from_this. So we construct a holder around it as if
104
- // it was a normal instance, then steal the holder away into a local variable; thus
105
- // the holder and destruction happens when we leave the C++ scope, and the holder
106
- // class gets to handle the destruction however it likes.
107
- v_h.value_ptr() = ptr;
108
- v_h.set_instance_registered(true); // To prevent init_instance from registering it
109
- v_h.type->init_instance(v_h.inst, nullptr); // Set up the holder
110
- Holder<Class> temp_holder(std::move(v_h.holder<Holder<Class>>())); // Steal the holder
111
- v_h.type->dealloc(v_h); // Destroys the moved-out holder remains, resets value ptr to null
112
- v_h.set_instance_registered(false);
113
-
114
- construct_alias_from_cpp<Class>(is_alias_constructible<Class>{}, v_h, std::move(*ptr));
115
- } else {
116
- // Otherwise the type isn't inherited, so we don't need an Alias
117
- v_h.value_ptr() = ptr;
118
- }
119
- }
120
-
121
- // Pointer return v2: a factory that always returns an alias instance ptr. We simply take over
122
- // ownership of the pointer.
123
- template <typename Class, enable_if_t<Class::has_alias, int> = 0>
124
- void construct(value_and_holder &v_h, Alias<Class> *alias_ptr, bool) {
125
- no_nullptr(alias_ptr);
126
- v_h.value_ptr() = static_cast<Cpp<Class> *>(alias_ptr);
127
- }
128
-
129
- // Holder return: copy its pointer, and move or copy the returned holder into the new instance's
130
- // holder. This also handles types like std::shared_ptr<T> and std::unique_ptr<T> where T is a
131
- // derived type (through those holder's implicit conversion from derived class holder constructors).
132
- template <typename Class>
133
- void construct(value_and_holder &v_h, Holder<Class> holder, bool need_alias) {
134
- auto *ptr = holder_helper<Holder<Class>>::get(holder);
135
- no_nullptr(ptr);
136
- // If we need an alias, check that the held pointer is actually an alias instance
137
- if (Class::has_alias && need_alias && !is_alias<Class>(ptr))
138
- throw type_error("pybind11::init(): construction failed: returned holder-wrapped instance "
139
- "is not an alias instance");
140
-
141
- v_h.value_ptr() = ptr;
142
- v_h.type->init_instance(v_h.inst, &holder);
143
- }
144
-
145
- // return-by-value version 1: returning a cpp class by value. If the class has an alias and an
146
- // alias is required the alias must have an `Alias(Cpp &&)` constructor so that we can construct
147
- // the alias from the base when needed (i.e. because of Python-side inheritance). When we don't
148
- // need it, we simply move-construct the cpp value into a new instance.
149
- template <typename Class>
150
- void construct(value_and_holder &v_h, Cpp<Class> &&result, bool need_alias) {
151
- static_assert(std::is_move_constructible<Cpp<Class>>::value,
152
- "pybind11::init() return-by-value factory function requires a movable class");
153
- if (Class::has_alias && need_alias)
154
- construct_alias_from_cpp<Class>(is_alias_constructible<Class>{}, v_h, std::move(result));
155
- else
156
- v_h.value_ptr() = new Cpp<Class>(std::move(result));
157
- }
158
-
159
- // return-by-value version 2: returning a value of the alias type itself. We move-construct an
160
- // Alias instance (even if no Python-side inheritance is involved). This is intended for
161
- // cases where Alias initialization is always desired.
162
- template <typename Class>
163
- void construct(value_and_holder &v_h, Alias<Class> &&result, bool) {
164
- static_assert(std::is_move_constructible<Alias<Class>>::value,
165
- "pybind11::init() return-by-alias-value factory function requires a movable alias class");
166
- v_h.value_ptr() = new Alias<Class>(std::move(result));
167
- }
168
-
169
- // Implementing class for py::init<...>()
170
- template <typename... Args>
171
- struct constructor {
172
- template <typename Class, typename... Extra, enable_if_t<!Class::has_alias, int> = 0>
173
- static void execute(Class &cl, const Extra&... extra) {
174
- cl.def("__init__", [](value_and_holder &v_h, Args... args) {
175
- v_h.value_ptr() = construct_or_initialize<Cpp<Class>>(std::forward<Args>(args)...);
176
- }, is_new_style_constructor(), extra...);
177
- }
178
-
179
- template <typename Class, typename... Extra,
180
- enable_if_t<Class::has_alias &&
181
- std::is_constructible<Cpp<Class>, Args...>::value, int> = 0>
182
- static void execute(Class &cl, const Extra&... extra) {
183
- cl.def("__init__", [](value_and_holder &v_h, Args... args) {
184
- if (Py_TYPE(v_h.inst) == v_h.type->type)
185
- v_h.value_ptr() = construct_or_initialize<Cpp<Class>>(std::forward<Args>(args)...);
186
- else
187
- v_h.value_ptr() = construct_or_initialize<Alias<Class>>(std::forward<Args>(args)...);
188
- }, is_new_style_constructor(), extra...);
189
- }
190
-
191
- template <typename Class, typename... Extra,
192
- enable_if_t<Class::has_alias &&
193
- !std::is_constructible<Cpp<Class>, Args...>::value, int> = 0>
194
- static void execute(Class &cl, const Extra&... extra) {
195
- cl.def("__init__", [](value_and_holder &v_h, Args... args) {
196
- v_h.value_ptr() = construct_or_initialize<Alias<Class>>(std::forward<Args>(args)...);
197
- }, is_new_style_constructor(), extra...);
198
- }
199
- };
200
-
201
- // Implementing class for py::init_alias<...>()
202
- template <typename... Args> struct alias_constructor {
203
- template <typename Class, typename... Extra,
204
- enable_if_t<Class::has_alias && std::is_constructible<Alias<Class>, Args...>::value, int> = 0>
205
- static void execute(Class &cl, const Extra&... extra) {
206
- cl.def("__init__", [](value_and_holder &v_h, Args... args) {
207
- v_h.value_ptr() = construct_or_initialize<Alias<Class>>(std::forward<Args>(args)...);
208
- }, is_new_style_constructor(), extra...);
209
- }
210
- };
211
-
212
- // Implementation class for py::init(Func) and py::init(Func, AliasFunc)
213
- template <typename CFunc, typename AFunc = void_type (*)(),
214
- typename = function_signature_t<CFunc>, typename = function_signature_t<AFunc>>
215
- struct factory;
216
-
217
- // Specialization for py::init(Func)
218
- template <typename Func, typename Return, typename... Args>
219
- struct factory<Func, void_type (*)(), Return(Args...)> {
220
- remove_reference_t<Func> class_factory;
221
-
222
- factory(Func &&f) : class_factory(std::forward<Func>(f)) { }
223
-
224
- // The given class either has no alias or has no separate alias factory;
225
- // this always constructs the class itself. If the class is registered with an alias
226
- // type and an alias instance is needed (i.e. because the final type is a Python class
227
- // inheriting from the C++ type) the returned value needs to either already be an alias
228
- // instance, or the alias needs to be constructible from a `Class &&` argument.
229
- template <typename Class, typename... Extra>
230
- void execute(Class &cl, const Extra &...extra) && {
231
- #if defined(PYBIND11_CPP14)
232
- cl.def("__init__", [func = std::move(class_factory)]
233
- #else
234
- auto &func = class_factory;
235
- cl.def("__init__", [func]
236
- #endif
237
- (value_and_holder &v_h, Args... args) {
238
- construct<Class>(v_h, func(std::forward<Args>(args)...),
239
- Py_TYPE(v_h.inst) != v_h.type->type);
240
- }, is_new_style_constructor(), extra...);
241
- }
242
- };
243
-
244
- // Specialization for py::init(Func, AliasFunc)
245
- template <typename CFunc, typename AFunc,
246
- typename CReturn, typename... CArgs, typename AReturn, typename... AArgs>
247
- struct factory<CFunc, AFunc, CReturn(CArgs...), AReturn(AArgs...)> {
248
- static_assert(sizeof...(CArgs) == sizeof...(AArgs),
249
- "pybind11::init(class_factory, alias_factory): class and alias factories "
250
- "must have identical argument signatures");
251
- static_assert(all_of<std::is_same<CArgs, AArgs>...>::value,
252
- "pybind11::init(class_factory, alias_factory): class and alias factories "
253
- "must have identical argument signatures");
254
-
255
- remove_reference_t<CFunc> class_factory;
256
- remove_reference_t<AFunc> alias_factory;
257
-
258
- factory(CFunc &&c, AFunc &&a)
259
- : class_factory(std::forward<CFunc>(c)), alias_factory(std::forward<AFunc>(a)) { }
260
-
261
- // The class factory is called when the `self` type passed to `__init__` is the direct
262
- // class (i.e. not inherited), the alias factory when `self` is a Python-side subtype.
263
- template <typename Class, typename... Extra>
264
- void execute(Class &cl, const Extra&... extra) && {
265
- static_assert(Class::has_alias, "The two-argument version of `py::init()` can "
266
- "only be used if the class has an alias");
267
- #if defined(PYBIND11_CPP14)
268
- cl.def("__init__", [class_func = std::move(class_factory), alias_func = std::move(alias_factory)]
269
- #else
270
- auto &class_func = class_factory;
271
- auto &alias_func = alias_factory;
272
- cl.def("__init__", [class_func, alias_func]
273
- #endif
274
- (value_and_holder &v_h, CArgs... args) {
275
- if (Py_TYPE(v_h.inst) == v_h.type->type)
276
- // If the instance type equals the registered type we don't have inheritance, so
277
- // don't need the alias and can construct using the class function:
278
- construct<Class>(v_h, class_func(std::forward<CArgs>(args)...), false);
279
- else
280
- construct<Class>(v_h, alias_func(std::forward<CArgs>(args)...), true);
281
- }, is_new_style_constructor(), extra...);
282
- }
283
- };
284
-
285
- /// Set just the C++ state. Same as `__init__`.
286
- template <typename Class, typename T>
287
- void setstate(value_and_holder &v_h, T &&result, bool need_alias) {
288
- construct<Class>(v_h, std::forward<T>(result), need_alias);
289
- }
290
-
291
- /// Set both the C++ and Python states
292
- template <typename Class, typename T, typename O,
293
- enable_if_t<std::is_convertible<O, handle>::value, int> = 0>
294
- void setstate(value_and_holder &v_h, std::pair<T, O> &&result, bool need_alias) {
295
- construct<Class>(v_h, std::move(result.first), need_alias);
296
- setattr((PyObject *) v_h.inst, "__dict__", result.second);
297
- }
298
-
299
- /// Implementation for py::pickle(GetState, SetState)
300
- template <typename Get, typename Set,
301
- typename = function_signature_t<Get>, typename = function_signature_t<Set>>
302
- struct pickle_factory;
303
-
304
- template <typename Get, typename Set,
305
- typename RetState, typename Self, typename NewInstance, typename ArgState>
306
- struct pickle_factory<Get, Set, RetState(Self), NewInstance(ArgState)> {
307
- static_assert(std::is_same<intrinsic_t<RetState>, intrinsic_t<ArgState>>::value,
308
- "The type returned by `__getstate__` must be the same "
309
- "as the argument accepted by `__setstate__`");
310
-
311
- remove_reference_t<Get> get;
312
- remove_reference_t<Set> set;
313
-
314
- pickle_factory(Get get, Set set)
315
- : get(std::forward<Get>(get)), set(std::forward<Set>(set)) { }
316
-
317
- template <typename Class, typename... Extra>
318
- void execute(Class &cl, const Extra &...extra) && {
319
- cl.def("__getstate__", std::move(get));
320
-
321
- #if defined(PYBIND11_CPP14)
322
- cl.def("__setstate__", [func = std::move(set)]
323
- #else
324
- auto &func = set;
325
- cl.def("__setstate__", [func]
326
- #endif
327
- (value_and_holder &v_h, ArgState state) {
328
- setstate<Class>(v_h, func(std::forward<ArgState>(state)),
329
- Py_TYPE(v_h.inst) != v_h.type->type);
330
- }, is_new_style_constructor(), extra...);
331
- }
332
- };
333
-
334
- PYBIND11_NAMESPACE_END(initimpl)
335
- PYBIND11_NAMESPACE_END(detail)
336
- PYBIND11_NAMESPACE_END(pybind11)
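The `pickle_factory` above simply installs `__getstate__`/`__setstate__` on the bound type, so the protocol it targets is ordinary Python pickling. A pure-Python sketch of the same get/set pairing (no pybind11 required; `Pet` is a placeholder class, not part of the library):

```python
import pickle

class Pet:
    def __init__(self, name):
        self.name = name

    def __getstate__(self):
        # Plays the role of the Get callable passed to py::pickle(Get, Set).
        return (self.name,)

    def __setstate__(self, state):
        # Plays the role of the Set callable, which rebuilds the C++ state
        # in the pybind11 case; __init__ is bypassed on unpickling.
        self.name = state[0]

clone = pickle.loads(pickle.dumps(Pet("Molly")))
print(clone.name)  # Molly
```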
 
spaces/CVPR/LIVE/thrust/thrust/distance.h DELETED
@@ -1,77 +0,0 @@
1
- /*
2
- * Copyright 2008-2013 NVIDIA Corporation
3
- *
4
- * Licensed under the Apache License, Version 2.0 (the "License");
5
- * you may not use this file except in compliance with the License.
6
- * You may obtain a copy of the License at
7
- *
8
- * http://www.apache.org/licenses/LICENSE-2.0
9
- *
10
- * Unless required by applicable law or agreed to in writing, software
11
- * distributed under the License is distributed on an "AS IS" BASIS,
12
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- * See the License for the specific language governing permissions and
14
- * limitations under the License.
15
- */
16
-
17
-
18
- /*! \file distance.h
19
- * \brief Computes the size of a range
20
- */
21
-
22
- #pragma once
23
-
24
- #include <thrust/detail/config.h>
25
- #include <thrust/iterator/iterator_traits.h>
26
-
27
- namespace thrust
28
- {
29
-
30
-
31
- /*! \addtogroup iterators
32
- * \{
33
- */
34
-
35
- /*! \p distance finds the distance between \p first and \p last, i.e. the
36
- * number of times that \p first must be incremented until it is equal to
37
- * \p last.
38
- *
39
- * \param first The beginning of an input range of interest.
40
- * \param last The end of an input range of interest.
41
- * \return The distance between the beginning and end of the input range.
42
- *
43
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>.
44
- *
45
- * \pre If \c InputIterator meets the requirements of random access iterator, \p last shall be reachable from \p first or
46
- * \p first shall be reachable from \p last; otherwise, \p last shall be reachable from \p first.
47
- *
48
- * The following code snippet demonstrates how to use \p distance to compute
49
- * the distance to one iterator from another.
50
- *
51
- * \code
52
- * #include <thrust/distance.h>
53
- * #include <thrust/device_vector.h>
54
- * ...
55
- * thrust::device_vector<int> vec(13);
56
- * thrust::device_vector<int>::iterator iter1 = vec.begin();
57
- * thrust::device_vector<int>::iterator iter2 = iter1 + 7;
58
- *
59
- * int d = thrust::distance(iter1, iter2);
60
- *
61
- * // d is 7
62
- * \endcode
63
- *
64
- * \see http://www.sgi.com/tech/stl/distance.html
65
- */
66
- template<typename InputIterator>
67
- inline __host__ __device__
68
- typename thrust::iterator_traits<InputIterator>::difference_type
69
- distance(InputIterator first, InputIterator last);
70
-
71
- /*! \} // end iterators
72
- */
73
-
74
- } // end thrust
75
-
76
- #include <thrust/detail/distance.inl>
77
-
 
spaces/CVPR/WALT/mmdet/models/dense_heads/vfnet_head.py DELETED
@@ -1,794 +0,0 @@
1
- import numpy as np
2
- import torch
3
- import torch.nn as nn
4
- from mmcv.cnn import ConvModule, Scale, bias_init_with_prob, normal_init
5
- from mmcv.ops import DeformConv2d
6
- from mmcv.runner import force_fp32
7
-
8
- from mmdet.core import (bbox2distance, bbox_overlaps, build_anchor_generator,
9
- build_assigner, build_sampler, distance2bbox,
10
- multi_apply, multiclass_nms, reduce_mean)
11
- from ..builder import HEADS, build_loss
12
- from .atss_head import ATSSHead
13
- from .fcos_head import FCOSHead
14
-
15
- INF = 1e8
16
-
17
-
18
- @HEADS.register_module()
19
- class VFNetHead(ATSSHead, FCOSHead):
20
- """Head of `VarifocalNet (VFNet): An IoU-aware Dense Object
21
- Detector.<https://arxiv.org/abs/2008.13367>`_.
22
-
23
- The VFNet predicts IoU-aware classification scores which mix the
24
- object presence confidence and object localization accuracy as the
25
- detection score. It is built on the FCOS architecture and uses ATSS
26
- for defining positive/negative training examples. The VFNet is trained
27
- with Varifocal Loss and employs star-shaped deformable convolution to
28
- extract features for a bbox.
29
-
30
- Args:
31
- num_classes (int): Number of categories excluding the background
32
- category.
33
- in_channels (int): Number of channels in the input feature map.
34
- regress_ranges (tuple[tuple[int, int]]): Regress range of multiple
35
- level points.
36
- center_sampling (bool): If true, use center sampling. Default: False.
37
- center_sample_radius (float): Radius of center sampling. Default: 1.5.
38
- sync_num_pos (bool): If true, synchronize the number of positive
39
- examples across GPUs. Default: True
40
- gradient_mul (float): The multiplier to gradients from bbox refinement
41
- and recognition. Default: 0.1.
42
- bbox_norm_type (str): The bbox normalization type, 'reg_denom' or
43
- 'stride'. Default: reg_denom
44
- loss_cls_fl (dict): Config of focal loss.
45
- use_vfl (bool): If true, use varifocal loss for training.
46
- Default: True.
47
- loss_cls (dict): Config of varifocal loss.
48
- loss_bbox (dict): Config of localization loss, GIoU Loss.
49
- loss_bbox (dict): Config of localization refinement loss, GIoU Loss.
50
- norm_cfg (dict): dictionary to construct and config norm layer.
51
- Default: norm_cfg=dict(type='GN', num_groups=32,
52
- requires_grad=True).
53
- use_atss (bool): If true, use ATSS to define positive/negative
54
- examples. Default: True.
55
- anchor_generator (dict): Config of anchor generator for ATSS.
56
-
57
- Example:
58
- >>> self = VFNetHead(11, 7)
59
- >>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]]
60
- >>> cls_score, bbox_pred, bbox_pred_refine= self.forward(feats)
61
- >>> assert len(cls_score) == len(self.scales)
62
- """ # noqa: E501
63
-
64
- def __init__(self,
65
- num_classes,
66
- in_channels,
67
- regress_ranges=((-1, 64), (64, 128), (128, 256), (256, 512),
68
- (512, INF)),
69
- center_sampling=False,
70
- center_sample_radius=1.5,
71
- sync_num_pos=True,
72
- gradient_mul=0.1,
73
- bbox_norm_type='reg_denom',
74
- loss_cls_fl=dict(
75
- type='FocalLoss',
76
- use_sigmoid=True,
77
- gamma=2.0,
78
- alpha=0.25,
79
- loss_weight=1.0),
80
- use_vfl=True,
81
- loss_cls=dict(
82
- type='VarifocalLoss',
83
- use_sigmoid=True,
84
- alpha=0.75,
85
- gamma=2.0,
86
- iou_weighted=True,
87
- loss_weight=1.0),
88
- loss_bbox=dict(type='GIoULoss', loss_weight=1.5),
89
- loss_bbox_refine=dict(type='GIoULoss', loss_weight=2.0),
90
- norm_cfg=dict(type='GN', num_groups=32, requires_grad=True),
91
- use_atss=True,
92
- anchor_generator=dict(
93
- type='AnchorGenerator',
94
- ratios=[1.0],
95
- octave_base_scale=8,
96
- scales_per_octave=1,
97
- center_offset=0.0,
98
- strides=[8, 16, 32, 64, 128]),
99
- **kwargs):
100
- # dcn base offsets, adapted from reppoints_head.py
101
- self.num_dconv_points = 9
102
- self.dcn_kernel = int(np.sqrt(self.num_dconv_points))
103
- self.dcn_pad = int((self.dcn_kernel - 1) / 2)
104
- dcn_base = np.arange(-self.dcn_pad,
105
- self.dcn_pad + 1).astype(np.float64)
106
- dcn_base_y = np.repeat(dcn_base, self.dcn_kernel)
107
- dcn_base_x = np.tile(dcn_base, self.dcn_kernel)
108
- dcn_base_offset = np.stack([dcn_base_y, dcn_base_x], axis=1).reshape(
109
- (-1))
110
- self.dcn_base_offset = torch.tensor(dcn_base_offset).view(1, -1, 1, 1)
111
-
112
- super(FCOSHead, self).__init__(
113
- num_classes, in_channels, norm_cfg=norm_cfg, **kwargs)
114
- self.regress_ranges = regress_ranges
115
- self.reg_denoms = [
116
- regress_range[-1] for regress_range in regress_ranges
117
- ]
118
- self.reg_denoms[-1] = self.reg_denoms[-2] * 2
119
- self.center_sampling = center_sampling
120
- self.center_sample_radius = center_sample_radius
121
- self.sync_num_pos = sync_num_pos
122
- self.bbox_norm_type = bbox_norm_type
123
- self.gradient_mul = gradient_mul
124
- self.use_vfl = use_vfl
125
- if self.use_vfl:
126
- self.loss_cls = build_loss(loss_cls)
127
- else:
128
- self.loss_cls = build_loss(loss_cls_fl)
129
- self.loss_bbox = build_loss(loss_bbox)
130
- self.loss_bbox_refine = build_loss(loss_bbox_refine)
131
-
132
- # for getting ATSS targets
133
- self.use_atss = use_atss
134
- self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False)
135
- self.anchor_generator = build_anchor_generator(anchor_generator)
136
- self.anchor_center_offset = anchor_generator['center_offset']
137
- self.num_anchors = self.anchor_generator.num_base_anchors[0]
138
- self.sampling = False
139
- if self.train_cfg:
140
- self.assigner = build_assigner(self.train_cfg.assigner)
141
- sampler_cfg = dict(type='PseudoSampler')
142
- self.sampler = build_sampler(sampler_cfg, context=self)
143
-
144
- def _init_layers(self):
145
- """Initialize layers of the head."""
146
- super(FCOSHead, self)._init_cls_convs()
147
- super(FCOSHead, self)._init_reg_convs()
148
- self.relu = nn.ReLU(inplace=True)
149
- self.vfnet_reg_conv = ConvModule(
150
- self.feat_channels,
151
- self.feat_channels,
152
- 3,
153
- stride=1,
154
- padding=1,
155
- conv_cfg=self.conv_cfg,
156
- norm_cfg=self.norm_cfg,
157
- bias=self.conv_bias)
158
- self.vfnet_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1)
159
- self.scales = nn.ModuleList([Scale(1.0) for _ in self.strides])
160
-
161
- self.vfnet_reg_refine_dconv = DeformConv2d(
162
- self.feat_channels,
163
- self.feat_channels,
164
- self.dcn_kernel,
165
- 1,
166
- padding=self.dcn_pad)
167
- self.vfnet_reg_refine = nn.Conv2d(self.feat_channels, 4, 3, padding=1)
168
- self.scales_refine = nn.ModuleList([Scale(1.0) for _ in self.strides])
169
-
170
- self.vfnet_cls_dconv = DeformConv2d(
171
- self.feat_channels,
172
- self.feat_channels,
173
- self.dcn_kernel,
174
- 1,
175
- padding=self.dcn_pad)
176
- self.vfnet_cls = nn.Conv2d(
177
- self.feat_channels, self.cls_out_channels, 3, padding=1)
178
-
179
- def init_weights(self):
180
- """Initialize weights of the head."""
181
- for m in self.cls_convs:
182
- if isinstance(m.conv, nn.Conv2d):
183
- normal_init(m.conv, std=0.01)
184
- for m in self.reg_convs:
185
- if isinstance(m.conv, nn.Conv2d):
186
- normal_init(m.conv, std=0.01)
187
- normal_init(self.vfnet_reg_conv.conv, std=0.01)
188
- normal_init(self.vfnet_reg, std=0.01)
189
- normal_init(self.vfnet_reg_refine_dconv, std=0.01)
190
- normal_init(self.vfnet_reg_refine, std=0.01)
191
- normal_init(self.vfnet_cls_dconv, std=0.01)
192
- bias_cls = bias_init_with_prob(0.01)
193
- normal_init(self.vfnet_cls, std=0.01, bias=bias_cls)
194
-
195
- def forward(self, feats):
196
- """Forward features from the upstream network.
197
-
198
- Args:
199
- feats (tuple[Tensor]): Features from the upstream network, each is
200
- a 4D-tensor.
201
-
202
- Returns:
203
- tuple:
204
- cls_scores (list[Tensor]): Box iou-aware scores for each scale
205
- level, each is a 4D-tensor, the channel number is
206
- num_points * num_classes.
207
- bbox_preds (list[Tensor]): Box offsets for each
208
- scale level, each is a 4D-tensor, the channel number is
209
- num_points * 4.
210
- bbox_preds_refine (list[Tensor]): Refined Box offsets for
211
- each scale level, each is a 4D-tensor, the channel
212
- number is num_points * 4.
213
- """
214
- return multi_apply(self.forward_single, feats, self.scales,
215
- self.scales_refine, self.strides, self.reg_denoms)
216
-
217
- def forward_single(self, x, scale, scale_refine, stride, reg_denom):
218
- """Forward features of a single scale level.
219
-
220
- Args:
221
- x (Tensor): FPN feature maps of the specified stride.
222
- scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize
223
- the bbox prediction.
224
- scale_refine (:obj: `mmcv.cnn.Scale`): Learnable scale module to
225
- resize the refined bbox prediction.
226
- stride (int): The corresponding stride for feature maps,
227
- used to normalize the bbox prediction when
228
- bbox_norm_type = 'stride'.
229
- reg_denom (int): The corresponding regression range for feature
230
- maps, only used to normalize the bbox prediction when
231
- bbox_norm_type = 'reg_denom'.
232
-
233
- Returns:
234
- tuple: iou-aware cls scores for each box, bbox predictions and
235
- refined bbox predictions of input feature maps.
236
- """
237
- cls_feat = x
238
- reg_feat = x
239
-
240
- for cls_layer in self.cls_convs:
241
- cls_feat = cls_layer(cls_feat)
242
-
243
- for reg_layer in self.reg_convs:
244
- reg_feat = reg_layer(reg_feat)
245
-
246
- # predict the bbox_pred of different level
247
- reg_feat_init = self.vfnet_reg_conv(reg_feat)
248
- if self.bbox_norm_type == 'reg_denom':
249
- bbox_pred = scale(
250
- self.vfnet_reg(reg_feat_init)).float().exp() * reg_denom
251
- elif self.bbox_norm_type == 'stride':
252
- bbox_pred = scale(
253
- self.vfnet_reg(reg_feat_init)).float().exp() * stride
254
- else:
255
- raise NotImplementedError
256
-
257
- # compute star deformable convolution offsets
258
- # converting dcn_offset to reg_feat.dtype thus VFNet can be
259
- # trained with FP16
260
- dcn_offset = self.star_dcn_offset(bbox_pred, self.gradient_mul,
261
- stride).to(reg_feat.dtype)
262
-
263
- # refine the bbox_pred
264
- reg_feat = self.relu(self.vfnet_reg_refine_dconv(reg_feat, dcn_offset))
265
- bbox_pred_refine = scale_refine(
266
- self.vfnet_reg_refine(reg_feat)).float().exp()
267
- bbox_pred_refine = bbox_pred_refine * bbox_pred.detach()
268
-
269
- # predict the iou-aware cls score
270
- cls_feat = self.relu(self.vfnet_cls_dconv(cls_feat, dcn_offset))
271
- cls_score = self.vfnet_cls(cls_feat)
272
-
273
- return cls_score, bbox_pred, bbox_pred_refine
274
-
275
- def star_dcn_offset(self, bbox_pred, gradient_mul, stride):
276
- """Compute the star deformable conv offsets.
277
-
278
- Args:
279
- bbox_pred (Tensor): Predicted bbox distance offsets (l, r, t, b).
280
- gradient_mul (float): Gradient multiplier.
281
- stride (int): The corresponding stride for feature maps,
282
- used to project the bbox onto the feature map.
283
-
284
- Returns:
285
- dcn_offsets (Tensor): The offsets for deformable convolution.
286
- """
287
- dcn_base_offset = self.dcn_base_offset.type_as(bbox_pred)
288
- bbox_pred_grad_mul = (1 - gradient_mul) * bbox_pred.detach() + \
289
- gradient_mul * bbox_pred
290
- # map to the feature map scale
291
- bbox_pred_grad_mul = bbox_pred_grad_mul / stride
292
- N, C, H, W = bbox_pred.size()
293
-
294
- x1 = bbox_pred_grad_mul[:, 0, :, :]
295
- y1 = bbox_pred_grad_mul[:, 1, :, :]
296
- x2 = bbox_pred_grad_mul[:, 2, :, :]
297
- y2 = bbox_pred_grad_mul[:, 3, :, :]
298
- bbox_pred_grad_mul_offset = bbox_pred.new_zeros(
299
- N, 2 * self.num_dconv_points, H, W)
300
- bbox_pred_grad_mul_offset[:, 0, :, :] = -1.0 * y1 # -y1
301
- bbox_pred_grad_mul_offset[:, 1, :, :] = -1.0 * x1 # -x1
302
- bbox_pred_grad_mul_offset[:, 2, :, :] = -1.0 * y1 # -y1
303
- bbox_pred_grad_mul_offset[:, 4, :, :] = -1.0 * y1 # -y1
304
- bbox_pred_grad_mul_offset[:, 5, :, :] = x2 # x2
305
- bbox_pred_grad_mul_offset[:, 7, :, :] = -1.0 * x1 # -x1
306
- bbox_pred_grad_mul_offset[:, 11, :, :] = x2 # x2
307
- bbox_pred_grad_mul_offset[:, 12, :, :] = y2 # y2
308
- bbox_pred_grad_mul_offset[:, 13, :, :] = -1.0 * x1 # -x1
309
- bbox_pred_grad_mul_offset[:, 14, :, :] = y2 # y2
310
- bbox_pred_grad_mul_offset[:, 16, :, :] = y2 # y2
311
- bbox_pred_grad_mul_offset[:, 17, :, :] = x2 # x2
312
- dcn_offset = bbox_pred_grad_mul_offset - dcn_base_offset
313
-
314
- return dcn_offset
315
-
316
- @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'bbox_preds_refine'))
317
- def loss(self,
318
- cls_scores,
319
- bbox_preds,
320
- bbox_preds_refine,
321
- gt_bboxes,
322
- gt_labels,
323
- img_metas,
324
- gt_bboxes_ignore=None):
325
- """Compute loss of the head.
326
-
327
- Args:
328
- cls_scores (list[Tensor]): Box iou-aware scores for each scale
329
- level, each is a 4D-tensor, the channel number is
330
- num_points * num_classes.
331
- bbox_preds (list[Tensor]): Box offsets for each
332
- scale level, each is a 4D-tensor, the channel number is
333
- num_points * 4.
334
- bbox_preds_refine (list[Tensor]): Refined Box offsets for
335
- each scale level, each is a 4D-tensor, the channel
336
- number is num_points * 4.
337
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
338
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
339
- gt_labels (list[Tensor]): class indices corresponding to each box
340
- img_metas (list[dict]): Meta information of each image, e.g.,
341
- image size, scaling factor, etc.
342
- gt_bboxes_ignore (None | list[Tensor]): specify which bounding
343
- boxes can be ignored when computing the loss.
344
- Default: None.
345
-
346
- Returns:
347
- dict[str, Tensor]: A dictionary of loss components.
348
- """
349
- assert len(cls_scores) == len(bbox_preds) == len(bbox_preds_refine)
350
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
351
- all_level_points = self.get_points(featmap_sizes, bbox_preds[0].dtype,
352
- bbox_preds[0].device)
353
- labels, label_weights, bbox_targets, bbox_weights = self.get_targets(
354
- cls_scores, all_level_points, gt_bboxes, gt_labels, img_metas,
355
- gt_bboxes_ignore)
356
-
357
- num_imgs = cls_scores[0].size(0)
358
- # flatten cls_scores, bbox_preds and bbox_preds_refine
359
- flatten_cls_scores = [
360
- cls_score.permute(0, 2, 3,
361
- 1).reshape(-1,
362
- self.cls_out_channels).contiguous()
363
- for cls_score in cls_scores
364
- ]
365
- flatten_bbox_preds = [
366
- bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4).contiguous()
367
- for bbox_pred in bbox_preds
368
- ]
369
- flatten_bbox_preds_refine = [
370
- bbox_pred_refine.permute(0, 2, 3, 1).reshape(-1, 4).contiguous()
371
- for bbox_pred_refine in bbox_preds_refine
372
- ]
373
- flatten_cls_scores = torch.cat(flatten_cls_scores)
374
- flatten_bbox_preds = torch.cat(flatten_bbox_preds)
375
- flatten_bbox_preds_refine = torch.cat(flatten_bbox_preds_refine)
376
- flatten_labels = torch.cat(labels)
377
- flatten_bbox_targets = torch.cat(bbox_targets)
378
- # repeat points to align with bbox_preds
379
- flatten_points = torch.cat(
380
- [points.repeat(num_imgs, 1) for points in all_level_points])
381
-
382
- # FG cat_id: [0, num_classes - 1], BG cat_id: num_classes
383
- bg_class_ind = self.num_classes
384
- pos_inds = torch.where(
385
- ((flatten_labels >= 0) & (flatten_labels < bg_class_ind)) > 0)[0]
386
- num_pos = len(pos_inds)
387
-
388
- pos_bbox_preds = flatten_bbox_preds[pos_inds]
389
- pos_bbox_preds_refine = flatten_bbox_preds_refine[pos_inds]
390
- pos_labels = flatten_labels[pos_inds]
391
-
392
- # sync num_pos across all gpus
393
- if self.sync_num_pos:
394
- num_pos_avg_per_gpu = reduce_mean(
395
- pos_inds.new_tensor(num_pos).float()).item()
396
- num_pos_avg_per_gpu = max(num_pos_avg_per_gpu, 1.0)
397
- else:
398
- num_pos_avg_per_gpu = num_pos
399
-
400
- if num_pos > 0:
401
- pos_bbox_targets = flatten_bbox_targets[pos_inds]
402
- pos_points = flatten_points[pos_inds]
403
-
404
- pos_decoded_bbox_preds = distance2bbox(pos_points, pos_bbox_preds)
405
- pos_decoded_target_preds = distance2bbox(pos_points,
406
- pos_bbox_targets)
407
- iou_targets_ini = bbox_overlaps(
408
- pos_decoded_bbox_preds,
409
- pos_decoded_target_preds.detach(),
410
- is_aligned=True).clamp(min=1e-6)
411
- bbox_weights_ini = iou_targets_ini.clone().detach()
412
- iou_targets_ini_avg_per_gpu = reduce_mean(
413
- bbox_weights_ini.sum()).item()
414
- bbox_avg_factor_ini = max(iou_targets_ini_avg_per_gpu, 1.0)
415
- loss_bbox = self.loss_bbox(
416
- pos_decoded_bbox_preds,
417
- pos_decoded_target_preds.detach(),
418
- weight=bbox_weights_ini,
419
- avg_factor=bbox_avg_factor_ini)
420
-
421
- pos_decoded_bbox_preds_refine = \
422
- distance2bbox(pos_points, pos_bbox_preds_refine)
423
- iou_targets_rf = bbox_overlaps(
424
- pos_decoded_bbox_preds_refine,
425
- pos_decoded_target_preds.detach(),
426
- is_aligned=True).clamp(min=1e-6)
427
- bbox_weights_rf = iou_targets_rf.clone().detach()
428
- iou_targets_rf_avg_per_gpu = reduce_mean(
429
- bbox_weights_rf.sum()).item()
430
- bbox_avg_factor_rf = max(iou_targets_rf_avg_per_gpu, 1.0)
431
- loss_bbox_refine = self.loss_bbox_refine(
432
- pos_decoded_bbox_preds_refine,
433
- pos_decoded_target_preds.detach(),
434
- weight=bbox_weights_rf,
435
- avg_factor=bbox_avg_factor_rf)
436
-
437
- # build IoU-aware cls_score targets
438
- if self.use_vfl:
439
- pos_ious = iou_targets_rf.clone().detach()
440
- cls_iou_targets = torch.zeros_like(flatten_cls_scores)
441
- cls_iou_targets[pos_inds, pos_labels] = pos_ious
442
- else:
443
- loss_bbox = pos_bbox_preds.sum() * 0
444
- loss_bbox_refine = pos_bbox_preds_refine.sum() * 0
445
- if self.use_vfl:
446
- cls_iou_targets = torch.zeros_like(flatten_cls_scores)
447
-
448
- if self.use_vfl:
449
- loss_cls = self.loss_cls(
450
- flatten_cls_scores,
451
- cls_iou_targets,
452
- avg_factor=num_pos_avg_per_gpu)
453
- else:
454
- loss_cls = self.loss_cls(
455
- flatten_cls_scores,
456
- flatten_labels,
457
- weight=label_weights,
458
- avg_factor=num_pos_avg_per_gpu)
459
-
460
- return dict(
461
- loss_cls=loss_cls,
462
- loss_bbox=loss_bbox,
463
- loss_bbox_rf=loss_bbox_refine)
464
-
465
- @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'bbox_preds_refine'))
466
- def get_bboxes(self,
467
- cls_scores,
468
- bbox_preds,
469
- bbox_preds_refine,
470
- img_metas,
471
- cfg=None,
472
- rescale=None,
473
- with_nms=True):
474
- """Transform network outputs for a batch into bbox predictions.
475
-
476
- Args:
477
- cls_scores (list[Tensor]): Box iou-aware scores for each scale
478
- level with shape (N, num_points * num_classes, H, W).
479
- bbox_preds (list[Tensor]): Box offsets for each scale
480
- level with shape (N, num_points * 4, H, W).
481
- bbox_preds_refine (list[Tensor]): Refined Box offsets for
482
- each scale level with shape (N, num_points * 4, H, W).
483
- img_metas (list[dict]): Meta information of each image, e.g.,
484
- image size, scaling factor, etc.
485
- cfg (mmcv.Config): Test / postprocessing configuration,
486
- if None, test_cfg would be used. Default: None.
487
- rescale (bool): If True, return boxes in original image space.
488
- Default: False.
489
- with_nms (bool): If True, do nms before returning boxes.
490
- Default: True.
491
-
492
- Returns:
493
- list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple.
494
- The first item is an (n, 5) tensor, where the first 4 columns
495
- are bounding box positions (tl_x, tl_y, br_x, br_y) and the
496
- 5-th column is a score between 0 and 1. The second item is a
497
- (n,) tensor where each item is the predicted class label of
498
- the corresponding box.
499
- """
500
- assert len(cls_scores) == len(bbox_preds) == len(bbox_preds_refine)
501
- num_levels = len(cls_scores)
502
-
503
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
504
- mlvl_points = self.get_points(featmap_sizes, bbox_preds[0].dtype,
505
- bbox_preds[0].device)
506
- result_list = []
507
- for img_id in range(len(img_metas)):
508
- cls_score_list = [
509
- cls_scores[i][img_id].detach() for i in range(num_levels)
510
- ]
511
- bbox_pred_list = [
512
- bbox_preds_refine[i][img_id].detach()
513
- for i in range(num_levels)
514
- ]
515
- img_shape = img_metas[img_id]['img_shape']
516
- scale_factor = img_metas[img_id]['scale_factor']
517
- det_bboxes = self._get_bboxes_single(cls_score_list,
518
- bbox_pred_list, mlvl_points,
519
- img_shape, scale_factor, cfg,
520
- rescale, with_nms)
521
- result_list.append(det_bboxes)
522
- return result_list
523
-
524
- def _get_bboxes_single(self,
525
- cls_scores,
526
- bbox_preds,
527
- mlvl_points,
528
- img_shape,
529
- scale_factor,
530
- cfg,
531
- rescale=False,
532
- with_nms=True):
533
- """Transform outputs for a single batch item into bbox predictions.
534
-
535
- Args:
536
- cls_scores (list[Tensor]): Box iou-aware scores for a single scale
537
- level with shape (num_points * num_classes, H, W).
538
- bbox_preds (list[Tensor]): Box offsets for a single scale
539
- level with shape (num_points * 4, H, W).
540
- mlvl_points (list[Tensor]): Box reference for a single scale level
541
- with shape (num_total_points, 4).
542
- img_shape (tuple[int]): Shape of the input image,
543
- (height, width, 3).
544
- scale_factor (ndarray): Scale factor of the image arrange as
545
- (w_scale, h_scale, w_scale, h_scale).
546
- cfg (mmcv.Config | None): Test / postprocessing configuration,
547
- if None, test_cfg would be used.
548
- rescale (bool): If True, return boxes in original image space.
549
- Default: False.
550
- with_nms (bool): If True, do nms before returning boxes.
551
- Default: True.
552
-
553
- Returns:
554
- tuple(Tensor):
555
- det_bboxes (Tensor): BBox predictions in shape (n, 5), where
556
- the first 4 columns are bounding box positions
557
- (tl_x, tl_y, br_x, br_y) and the 5-th column is a score
558
- between 0 and 1.
559
- det_labels (Tensor): A (n,) tensor where each item is the
560
- predicted class label of the corresponding box.
561
- """
562
- cfg = self.test_cfg if cfg is None else cfg
563
- assert len(cls_scores) == len(bbox_preds) == len(mlvl_points)
564
- mlvl_bboxes = []
565
- mlvl_scores = []
566
- for cls_score, bbox_pred, points in zip(cls_scores, bbox_preds,
567
- mlvl_points):
568
- assert cls_score.size()[-2:] == bbox_pred.size()[-2:]
569
- scores = cls_score.permute(1, 2, 0).reshape(
570
- -1, self.cls_out_channels).contiguous().sigmoid()
571
- bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4).contiguous()
572
-
573
- nms_pre = cfg.get('nms_pre', -1)
574
- if 0 < nms_pre < scores.shape[0]:
575
- max_scores, _ = scores.max(dim=1)
576
- _, topk_inds = max_scores.topk(nms_pre)
577
- points = points[topk_inds, :]
578
- bbox_pred = bbox_pred[topk_inds, :]
579
- scores = scores[topk_inds, :]
580
- bboxes = distance2bbox(points, bbox_pred, max_shape=img_shape)
581
- mlvl_bboxes.append(bboxes)
582
- mlvl_scores.append(scores)
583
- mlvl_bboxes = torch.cat(mlvl_bboxes)
584
- if rescale:
585
- mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor)
586
- mlvl_scores = torch.cat(mlvl_scores)
587
- padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1)
588
- # remind that we set FG labels to [0, num_class-1] since mmdet v2.0
589
- # BG cat_id: num_class
590
- mlvl_scores = torch.cat([mlvl_scores, padding], dim=1)
591
- if with_nms:
592
- det_bboxes, det_labels = multiclass_nms(mlvl_bboxes, mlvl_scores,
593
- cfg.score_thr, cfg.nms,
594
- cfg.max_per_img)
595
- return det_bboxes, det_labels
596
- else:
597
- return mlvl_bboxes, mlvl_scores
598
-
599
- def _get_points_single(self,
600
- featmap_size,
601
- stride,
602
- dtype,
603
- device,
604
- flatten=False):
605
- """Get points according to feature map sizes."""
606
- h, w = featmap_size
607
- x_range = torch.arange(
608
- 0, w * stride, stride, dtype=dtype, device=device)
609
- y_range = torch.arange(
610
- 0, h * stride, stride, dtype=dtype, device=device)
611
- y, x = torch.meshgrid(y_range, x_range)
612
- # to be compatible with anchor points in ATSS
613
- if self.use_atss:
614
- points = torch.stack(
615
- (x.reshape(-1), y.reshape(-1)), dim=-1) + \
616
- stride * self.anchor_center_offset
617
- else:
618
- points = torch.stack(
619
- (x.reshape(-1), y.reshape(-1)), dim=-1) + stride // 2
620
- return points
621
-
622
- def get_targets(self, cls_scores, mlvl_points, gt_bboxes, gt_labels,
623
- img_metas, gt_bboxes_ignore):
624
- """A wrapper for computing ATSS and FCOS targets for points in multiple
625
- images.
626
-
627
- Args:
628
- cls_scores (list[Tensor]): Box iou-aware scores for each scale
629
- level with shape (N, num_points * num_classes, H, W).
630
- mlvl_points (list[Tensor]): Points of each fpn level, each has
631
- shape (num_points, 2).
632
- gt_bboxes (list[Tensor]): Ground truth bboxes of each image,
633
- each has shape (num_gt, 4).
634
- gt_labels (list[Tensor]): Ground truth labels of each box,
635
- each has shape (num_gt,).
636
- img_metas (list[dict]): Meta information of each image, e.g.,
637
- image size, scaling factor, etc.
638
- gt_bboxes_ignore (None | Tensor): Ground truth bboxes to be
639
- ignored, shape (num_ignored_gts, 4).
640
-
641
- Returns:
642
- tuple:
643
- labels_list (list[Tensor]): Labels of each level.
644
- label_weights (Tensor/None): Label weights of all levels.
645
- bbox_targets_list (list[Tensor]): Regression targets of each
646
- level, (l, t, r, b).
647
- bbox_weights (Tensor/None): Bbox weights of all levels.
648
- """
649
- if self.use_atss:
650
- return self.get_atss_targets(cls_scores, mlvl_points, gt_bboxes,
651
- gt_labels, img_metas,
652
- gt_bboxes_ignore)
653
- else:
654
- self.norm_on_bbox = False
655
- return self.get_fcos_targets(mlvl_points, gt_bboxes, gt_labels)
656
-
657
- def _get_target_single(self, *args, **kwargs):
658
- """Avoid ambiguity in multiple inheritance."""
659
- if self.use_atss:
660
- return ATSSHead._get_target_single(self, *args, **kwargs)
661
- else:
662
- return FCOSHead._get_target_single(self, *args, **kwargs)
663
-
664
- def get_fcos_targets(self, points, gt_bboxes_list, gt_labels_list):
665
- """Compute FCOS regression and classification targets for points in
666
- multiple images.
667
-
668
- Args:
669
- points (list[Tensor]): Points of each fpn level, each has shape
670
- (num_points, 2).
671
- gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image,
672
- each has shape (num_gt, 4).
673
- gt_labels_list (list[Tensor]): Ground truth labels of each box,
674
- each has shape (num_gt,).
675
-
676
- Returns:
677
- tuple:
678
- labels (list[Tensor]): Labels of each level.
679
- label_weights: None, to be compatible with ATSS targets.
680
- bbox_targets (list[Tensor]): BBox targets of each level.
681
- bbox_weights: None, to be compatible with ATSS targets.
682
- """
683
- labels, bbox_targets = FCOSHead.get_targets(self, points,
684
- gt_bboxes_list,
685
- gt_labels_list)
686
- label_weights = None
687
- bbox_weights = None
688
- return labels, label_weights, bbox_targets, bbox_weights
689
-
690
- def get_atss_targets(self,
691
- cls_scores,
692
- mlvl_points,
693
- gt_bboxes,
694
- gt_labels,
695
- img_metas,
696
- gt_bboxes_ignore=None):
697
- """A wrapper for computing ATSS targets for points in multiple images.
698
-
699
- Args:
700
- cls_scores (list[Tensor]): Box iou-aware scores for each scale
701
- level with shape (N, num_points * num_classes, H, W).
702
- mlvl_points (list[Tensor]): Points of each fpn level, each has
703
- shape (num_points, 2).
704
- gt_bboxes (list[Tensor]): Ground truth bboxes of each image,
705
- each has shape (num_gt, 4).
706
- gt_labels (list[Tensor]): Ground truth labels of each box,
707
- each has shape (num_gt,).
708
- img_metas (list[dict]): Meta information of each image, e.g.,
709
- image size, scaling factor, etc.
710
- gt_bboxes_ignore (None | Tensor): Ground truth bboxes to be
711
- ignored, shape (num_ignored_gts, 4). Default: None.
712
-
713
- Returns:
714
- tuple:
715
- labels_list (list[Tensor]): Labels of each level.
716
- label_weights (Tensor): Label weights of all levels.
717
- bbox_targets_list (list[Tensor]): Regression targets of each
718
- level, (l, t, r, b).
719
- bbox_weights (Tensor): Bbox weights of all levels.
720
- """
721
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
722
- assert len(featmap_sizes) == self.anchor_generator.num_levels
723
-
724
- device = cls_scores[0].device
725
- anchor_list, valid_flag_list = self.get_anchors(
726
- featmap_sizes, img_metas, device=device)
727
- label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1
728
-
729
- cls_reg_targets = ATSSHead.get_targets(
730
- self,
731
- anchor_list,
732
- valid_flag_list,
733
- gt_bboxes,
734
- img_metas,
735
- gt_bboxes_ignore_list=gt_bboxes_ignore,
736
- gt_labels_list=gt_labels,
737
- label_channels=label_channels,
738
- unmap_outputs=True)
739
- if cls_reg_targets is None:
740
- return None
741
-
742
- (anchor_list, labels_list, label_weights_list, bbox_targets_list,
743
- bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets
744
-
745
- bbox_targets_list = [
746
- bbox_targets.reshape(-1, 4) for bbox_targets in bbox_targets_list
747
- ]
748
-
749
- num_imgs = len(img_metas)
750
- # transform bbox_targets (x1, y1, x2, y2) into (l, t, r, b) format
751
- bbox_targets_list = self.transform_bbox_targets(
752
- bbox_targets_list, mlvl_points, num_imgs)
753
-
754
- labels_list = [labels.reshape(-1) for labels in labels_list]
755
- label_weights_list = [
756
- label_weights.reshape(-1) for label_weights in label_weights_list
757
- ]
758
- bbox_weights_list = [
759
- bbox_weights.reshape(-1) for bbox_weights in bbox_weights_list
760
- ]
761
- label_weights = torch.cat(label_weights_list)
762
- bbox_weights = torch.cat(bbox_weights_list)
763
- return labels_list, label_weights, bbox_targets_list, bbox_weights
764
-
765
- def transform_bbox_targets(self, decoded_bboxes, mlvl_points, num_imgs):
766
- """Transform bbox_targets (x1, y1, x2, y2) into (l, t, r, b) format.
767
-
768
- Args:
769
- decoded_bboxes (list[Tensor]): Regression targets of each level,
770
- in the form of (x1, y1, x2, y2).
771
- mlvl_points (list[Tensor]): Points of each fpn level, each has
772
- shape (num_points, 2).
773
- num_imgs (int): the number of images in a batch.
774
-
775
- Returns:
776
- bbox_targets (list[Tensor]): Regression targets of each level in
777
- the form of (l, t, r, b).
778
- """
779
- # TODO: Re-implemented in Class PointCoder
780
- assert len(decoded_bboxes) == len(mlvl_points)
781
- num_levels = len(decoded_bboxes)
782
- mlvl_points = [points.repeat(num_imgs, 1) for points in mlvl_points]
783
- bbox_targets = []
784
- for i in range(num_levels):
785
- bbox_target = bbox2distance(mlvl_points[i], decoded_bboxes[i])
786
- bbox_targets.append(bbox_target)
787
-
788
- return bbox_targets
789
-
790
- def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
791
- missing_keys, unexpected_keys, error_msgs):
792
- """Override the method in the parent class to avoid changing para's
793
- name."""
794
- pass
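For readers skimming this deleted head, its distinctive piece is `star_dcn_offset`: it turns each predicted box (distances l, t, r, b from a point, see the method above) into offsets that bend a regular 3x3 deformable-conv sampling grid into a nine-point "star" spanning the box. Below is a minimal standalone numpy sketch of that index layout, not code from the repository; `star_offsets` and the single-location simplification are illustrative assumptions (the real head applies this per pixel on 4D tensors).

```python
# Minimal sketch of the star-shaped DCN offset layout used above.
import numpy as np

num_dconv_points = 9
dcn_kernel = int(np.sqrt(num_dconv_points))  # 3x3 sampling grid
dcn_pad = (dcn_kernel - 1) // 2

# base offsets of a regular 3x3 grid, interleaved as (y, x) row by row
base = np.arange(-dcn_pad, dcn_pad + 1).astype(np.float64)
dcn_base_offset = np.stack(
    [np.repeat(base, dcn_kernel), np.tile(base, dcn_kernel)], axis=1).reshape(-1)

def star_offsets(l, t, r, b, stride):
    """Offsets moving the 9 sampling points onto corners, edge midpoints,
    and center of the predicted box (distances in pixels from the point)."""
    x1, y1, x2, y2 = l / stride, t / stride, r / stride, b / stride
    target = np.zeros(2 * num_dconv_points)
    # same index pattern as star_dcn_offset above, (y, x) pairs
    target[0], target[1] = -y1, -x1   # top-left corner
    target[2] = -y1                   # top-center
    target[4], target[5] = -y1, x2    # top-right
    target[7] = -x1                   # mid-left
    # indices 8/9 stay zero: the center point does not move
    target[11] = x2                   # mid-right
    target[12], target[13] = y2, -x1  # bottom-left
    target[14] = y2                   # bottom-center
    target[16], target[17] = y2, x2   # bottom-right
    return target - dcn_base_offset   # DCN offsets are relative to the grid

print(star_offsets(l=8, t=8, r=16, b=16, stride=8).reshape(9, 2))
```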
spaces/CVPR/lama-example/models/ade20k/segm_lib/utils/data/__init__.py DELETED
@@ -1,3 +0,0 @@
-
- from .dataset import Dataset, TensorDataset, ConcatDataset
- from .dataloader import DataLoader
-
spaces/CVPR/regionclip-demo/detectron2/data/__init__.py DELETED
@@ -1,19 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates.
- from . import transforms  # isort:skip
-
- from .build import (
-     build_batch_data_loader,
-     build_detection_test_loader,
-     build_detection_train_loader,
-     get_detection_dataset_dicts,
-     load_proposals_into_dataset,
-     print_instances_class_histogram,
- )
- from .catalog import DatasetCatalog, MetadataCatalog, Metadata
- from .common import DatasetFromList, MapDataset
- from .dataset_mapper import DatasetMapper
-
- # ensure the builtin datasets are registered
- from . import datasets, samplers  # isort:skip
-
- __all__ = [k for k in globals().keys() if not k.startswith("_")]
-
spaces/ChrisPreston/diff-svc_minato_aqua/modules/vocoders/__init__.py DELETED
@@ -1 +0,0 @@
- from modules.vocoders import nsf_hifigan
-
spaces/CofAI/chat/client/css/dropdown.css DELETED
@@ -1,10 +0,0 @@
- .dropdown {
-     border: 1px solid var(--conversations);
- }
-
- @media screen and (max-width: 990px) {
-     .dropdown {
-         padding: 4px 8px;
-         font-size: 0.75rem;
-     }
- }
-
spaces/Covert1107/sd-diffusers-webui/modules/safe.py DELETED
@@ -1,188 +0,0 @@
- # this code is adapted from the script contributed by anon from /h/
- # modified, from https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/6cff4401824299a983c8e13424018efc347b4a2b/modules/safe.py
-
- import io
- import pickle
- import collections
- import sys
- import traceback
-
- import torch
- import numpy
- import _codecs
- import zipfile
- import re
-
-
- # PyTorch 1.13 and later have _TypedStorage renamed to TypedStorage
- TypedStorage = torch.storage.TypedStorage if hasattr(torch.storage, 'TypedStorage') else torch.storage._TypedStorage
-
-
- def encode(*args):
-     out = _codecs.encode(*args)
-     return out
-
-
- class RestrictedUnpickler(pickle.Unpickler):
-     extra_handler = None
-
-     def persistent_load(self, saved_id):
-         assert saved_id[0] == 'storage'
-         return TypedStorage()
-
-     def find_class(self, module, name):
-         if self.extra_handler is not None:
-             res = self.extra_handler(module, name)
-             if res is not None:
-                 return res
-
-         if module == 'collections' and name == 'OrderedDict':
-             return getattr(collections, name)
-         if module == 'torch._utils' and name in ['_rebuild_tensor_v2', '_rebuild_parameter', '_rebuild_device_tensor_from_numpy']:
-             return getattr(torch._utils, name)
-         if module == 'torch' and name in ['FloatStorage', 'HalfStorage', 'IntStorage', 'LongStorage', 'DoubleStorage', 'ByteStorage', 'float32']:
-             return getattr(torch, name)
-         if module == 'torch.nn.modules.container' and name in ['ParameterDict']:
-             return getattr(torch.nn.modules.container, name)
-         if module == 'numpy.core.multiarray' and name in ['scalar', '_reconstruct']:
-             return getattr(numpy.core.multiarray, name)
-         if module == 'numpy' and name in ['dtype', 'ndarray']:
-             return getattr(numpy, name)
-         if module == '_codecs' and name == 'encode':
-             return encode
-         if module == "pytorch_lightning.callbacks" and name == 'model_checkpoint':
-             import pytorch_lightning.callbacks
-             return pytorch_lightning.callbacks.model_checkpoint
-         if module == "pytorch_lightning.callbacks.model_checkpoint" and name == 'ModelCheckpoint':
-             import pytorch_lightning.callbacks.model_checkpoint
-             return pytorch_lightning.callbacks.model_checkpoint.ModelCheckpoint
-         if module == "__builtin__" and name == 'set':
-             return set
-
-         # Forbid everything else.
-         raise Exception(f"global '{module}/{name}' is forbidden")
-
-
- # Regular expression that accepts 'dirname/version', 'dirname/data.pkl', and 'dirname/data/<number>'
- allowed_zip_names_re = re.compile(r"^([^/]+)/((data/\d+)|version|(data\.pkl))$")
- data_pkl_re = re.compile(r"^([^/]+)/data\.pkl$")
-
- def check_zip_filenames(filename, names):
-     for name in names:
-         if allowed_zip_names_re.match(name):
-             continue
-
-         raise Exception(f"bad file inside {filename}: {name}")
-
-
- def check_pt(filename, extra_handler):
-     try:
-
-         # new pytorch format is a zip file
-         with zipfile.ZipFile(filename) as z:
-             check_zip_filenames(filename, z.namelist())
-
-             # find filename of data.pkl in zip file: '<directory name>/data.pkl'
-             data_pkl_filenames = [f for f in z.namelist() if data_pkl_re.match(f)]
-             if len(data_pkl_filenames) == 0:
-                 raise Exception(f"data.pkl not found in {filename}")
-             if len(data_pkl_filenames) > 1:
-                 raise Exception(f"Multiple data.pkl found in {filename}")
-             with z.open(data_pkl_filenames[0]) as file:
-                 unpickler = RestrictedUnpickler(file)
-                 unpickler.extra_handler = extra_handler
-                 unpickler.load()
-
-     except zipfile.BadZipfile:
-
-         # if it's not a zip file, it's an old pytorch format, with five objects written to pickle
-         with open(filename, "rb") as file:
-             unpickler = RestrictedUnpickler(file)
-             unpickler.extra_handler = extra_handler
-             for i in range(5):
-                 unpickler.load()
-
-
- def load(filename, *args, **kwargs):
-     return load_with_extra(filename, extra_handler=global_extra_handler, *args, **kwargs)
-
-
- def load_with_extra(filename, extra_handler=None, *args, **kwargs):
-     """
-     this function is intended to be used by extensions that want to load models with
-     some extra classes in them that the usual unpickler would find suspicious.
-
-     Use the extra_handler argument to specify a function that takes module and field name as text,
-     and returns that field's value:
-
-     ```python
-     def extra(module, name):
-         if module == 'collections' and name == 'OrderedDict':
-             return collections.OrderedDict
-
-         return None
-
-     safe.load_with_extra('model.pt', extra_handler=extra)
-     ```
-
-     The alternative to this is just to use safe.unsafe_torch_load('model.pt'), which as the name implies is
-     definitely unsafe.
-     """
-
-     try:
-         check_pt(filename, extra_handler)
-
-     except pickle.UnpicklingError:
-         print(f"Error verifying pickled file from {filename}:", file=sys.stderr)
-         print(traceback.format_exc(), file=sys.stderr)
-         print("The file is most likely corrupted.", file=sys.stderr)
-         return None
-
-     except Exception:
-         print(f"Error verifying pickled file from {filename}:", file=sys.stderr)
-         print(traceback.format_exc(), file=sys.stderr)
-         print("\nThe file may be malicious, so the program is not going to read it.", file=sys.stderr)
-         print("You can skip this check with --disable-safe-unpickle commandline argument.\n\n", file=sys.stderr)
-         return None
-
-     return unsafe_torch_load(filename, *args, **kwargs)
-
-
- class Extra:
-     """
-     A class for temporarily setting the global handler for when you can't explicitly call load_with_extra
-     (because it's not your code making the torch.load call). The intended use is like this:
-
-     ```
-     import torch
-     from modules import safe
-
-     def handler(module, name):
-         if module == 'torch' and name in ['float64', 'float16']:
-             return getattr(torch, name)
-
-         return None
-
-     with safe.Extra(handler):
-         x = torch.load('model.pt')
-     ```
-     """
-
-     def __init__(self, handler):
-         self.handler = handler
-
-     def __enter__(self):
-         global global_extra_handler
-
-         assert global_extra_handler is None, 'already inside an Extra() block'
-         global_extra_handler = self.handler
-
-     def __exit__(self, exc_type, exc_val, exc_tb):
-         global global_extra_handler
-
-         global_extra_handler = None
-
-
- unsafe_torch_load = torch.load
- torch.load = load
- global_extra_handler = None
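The protection in this deleted module hinges on overriding `pickle.Unpickler.find_class`: unpickling can only resolve globals the override explicitly returns, so a checkpoint that references anything else fails before any code runs. The standalone sketch below shows the same whitelist pattern; it is not code from the file, and `DemoRestrictedUnpickler` and both payloads are illustrative.

```python
# Minimal sketch of the find_class whitelisting idea used by RestrictedUnpickler.
import collections
import io
import pickle

class DemoRestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # allow a single harmless global, mirroring the whitelist style above
        if module == 'collections' and name == 'OrderedDict':
            return collections.OrderedDict
        raise pickle.UnpicklingError(f"global '{module}/{name}' is forbidden")

# a payload that only uses whitelisted globals loads normally
safe_payload = pickle.dumps(collections.OrderedDict(a=1))
print(DemoRestrictedUnpickler(io.BytesIO(safe_payload)).load())

# a payload referencing any other global is rejected at load time
evil_payload = pickle.dumps(print)  # pickling a builtin stores only its name
try:
    DemoRestrictedUnpickler(io.BytesIO(evil_payload)).load()
except pickle.UnpicklingError as e:
    print("blocked:", e)
```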