parquet-converter committed on
Commit 8825b56 · 1 Parent(s): ba4f0b9

Update parquet files (step 79 of 476)

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Guide to Fixing Microsoft Office 2021 Error Code 0-2054 (0) on Your Device.md +0 -22
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Como Baixar e Instalar FIFA 22 Verso Crackeada em Portugus.md +0 -28
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gujarati Kaps Fonts 150 Varity Of Gujarati Fonts Rar Download Free High-Quality Fonts for Windows and Mac.md +0 -155
  4. spaces/1gistliPinn/ChatGPT4/Examples/Foxit Advanced Pdf Editor 310 Serial Number A Powerful and Easy-to-Use PDF Editor.md +0 -6
  5. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Simulator 2 Mod APK Unlimited Money and All Cars Unlocked for Free.md +0 -92
  6. spaces/1phancelerku/anime-remove-background/Download Go Go by BTS and Join the ARMY - The Biggest Fan Community in the World.md +0 -126
  7. spaces/52Hz/SRMNet_thesis/WT/__init__.py +0 -1
  8. spaces/6Eternal9/ChatGPT4/README.md +0 -14
  9. spaces/AIConsultant/MusicGen/tests/data/test_audio_dataset.py +0 -352
  10. spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/ldm.py +0 -715
  11. spaces/AIWaves/SOP_Generation-single/Memory/base_Memory.py +0 -32
  12. spaces/AchyuthGamer/OpenGPT/client/css/label.css +0 -16
  13. spaces/AchyuthGamer/text-to-speech-client/README.md +0 -10
  14. spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/bsrgan_light.py +0 -651
  15. spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/describer/basic.py +0 -16
  16. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/utils/LayoutChild.js +0 -20
  17. spaces/AiMimicry/sovits-models/inference/infer_tool.py +0 -324
  18. spaces/Aki004/herta-so-vits/flask_api_full_song.py +0 -55
  19. spaces/Albertha/qwe123/start.sh +0 -8
  20. spaces/Alfasign/Einfach.Stable_DiffPomrpter/app.py +0 -52
  21. spaces/Aloento/9Nine-PITS/text/frontend/zh_normalization/constants.py +0 -62
  22. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_vq_diffusion_to_diffusers.py +0 -925
  23. spaces/Andy1621/uniformer_image_detection/configs/cornernet/README.md +0 -33
  24. spaces/Anonymous-sub/Rerender/ControlNet/tutorial_dataset.py +0 -39
  25. spaces/Apex-X/nono/roop/capturer.py +0 -22
  26. spaces/Ashish17/Ashish_Open_Chat_AI_17/README.md +0 -12
  27. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/__init__.py +0 -25
  28. spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/__init__.py +0 -25
  29. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/transforms/custom_augmentation_impl.py +0 -63
  30. spaces/AzinZ/vitscn/text/__init__.py +0 -54
  31. spaces/Banbri/zcvzcv/src/app/interface/maintenance/index.tsx +0 -20
  32. spaces/Benson/text-generation/Examples/Aparcamiento De Coches Multijugador Apk Skachat.md +0 -75
  33. spaces/Benson/text-generation/Examples/Apk Caso Penal Con Trampa.md +0 -68
  34. spaces/BilalSardar/Remove_Text_for_Image/README.md +0 -12
  35. spaces/CVPR/LIVE/pydiffvg_tensorflow/render_tensorflow.py +0 -664
  36. spaces/CVPR/LIVE/thrust/dependencies/cub/README.md +0 -189
  37. spaces/CVPR/LIVE/thrust/thrust/iterator/detail/minimum_category.h +0 -52
  38. spaces/CVPR/MonoScene/monoscene/modules.py +0 -194
  39. spaces/CVPR/drawings-to-human/main.py +0 -3
  40. spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/__init__.py +0 -35
  41. spaces/CarlDennis/Lovelive-VITS-JPZH/README.md +0 -13
  42. spaces/ChallengeHub/Chinese-LangChain/tests/test_duckduckgo_search.py +0 -16
  43. spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/andrew_alpha/0_object_detection_model/GroundingDINO_SwinT_OGC.cfg.py +0 -43
  44. spaces/CikeyQI/Yunzai/Yunzai/plugins/other/restart.js +0 -122
  45. spaces/CikeyQI/meme-api/meme_generator/memes/ascension/__init__.py +0 -35
  46. spaces/Cong723/gpt-academic-public/docs/README_RS.md +0 -291
  47. spaces/Cropinky/esrgan/realesrgan/models/__init__.py +0 -10
  48. spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/proc.py +0 -51
  49. spaces/Cyril666/my_abi/utils.py +0 -304
  50. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-2908e8a9.css +0 -1
spaces/1acneusushi/gradio-2dmoleculeeditor/data/A Guide to Fixing Microsoft Office 2021 Error Code 0-2054 (0) on Your Device.md DELETED
@@ -1,22 +0,0 @@
1
-
2
- <h1>How to Fix Microsoft Office 2021 Error Code 0-2054 (0)</h1>
3
- <p>Microsoft Office 2021 is the latest version of the popular productivity suite that offers many new features and improvements. However, some users may encounter an error code 0-2054 (0) when trying to install or update Office 2021 on their devices. This error can prevent the installation or update process from completing successfully and cause frustration for the users.</p>
4
- <p>Fortunately, there are some possible solutions that can help you fix this error and enjoy Office 2021 without any issues. Here are some of them:</p>
5
- <h2>microsoft office 2021 error code 0-2054 (0)</h2><br /><p><b><b>Download</b> ===== <a href="https://byltly.com/2uKxwh">https://byltly.com/2uKxwh</a></b></p><br /><br />
6
- <ol>
7
- <li><b>Uninstall any previous versions of Office</b>. Sometimes, the error code 0-2054 (0) can occur if you have an older version of Office installed on your device, such as Office 365 or Office 2019. To avoid any conflicts, you should uninstall any previous versions of Office using the <a href="https://support.microsoft.com/en-us/office/uninstall-office-from-a-pc-9dd49b83-264a-477a-8fcc-2fdf5dbf61d8">Office uninstall tool</a> or the <a href="https://support.microsoft.com/en-us/topic/uninstall-office-from-a-pc-9dd49b83-264a-477a-8fcc-2fdf5dbf61d8">Control Panel</a>. Make sure to restart your device after uninstalling Office.</li>
8
- <li><b>Disable any firewall, proxy, or antivirus software</b>. Another possible cause of the error code 0-2054 (0) is that some firewall, proxy, or antivirus software may block the installation or update of Office 2021 as a security measure. To avoid this, you should temporarily disable any firewall, proxy, or antivirus software that you have on your device and try to install or update Office 2021 again. Remember to enable them back after you finish the installation or update.</li>
9
- <li><b>Use the Office Deployment Tool</b>. The Office Deployment Tool (ODT) is a tool that allows you to download and install Office 2021 offline using a configuration file. This can help you avoid any network-related issues that may cause the error code 0-2054 (0). To use the ODT, you need to follow these steps:</li>
10
- <ul>
11
- <li>Download the <a href="https://www.microsoft.com/en-us/download/details.aspx?id=49117">Office Deployment Tool</a> and run it to extract the setup.exe file and the configuration.xml file.</li>
12
- <li>Edit the configuration.xml file using a text editor such as Notepad and specify the parameters for your Office 2021 installation or update. You can use the <a href="https://config.office.com/">Office Customization Tool</a> to generate a configuration file based on your preferences.</li>
13
- <li>Save and close the configuration.xml file and place it in the same folder as the setup.exe file.</li>
14
- <li>Open a Command Prompt window as an administrator and navigate to the folder where the setup.exe and configuration.xml files are located.</li>
15
- <li>Type <code>setup.exe /download configuration.xml</code> and press Enter to download the Office 2021 installation files.</li>
16
- <li>Type <code>setup.exe /configure configuration.xml</code> and press Enter to install or update Office 2021 using the configuration file.</li>
17
- </ul>
18
- <li><b>Contact Microsoft support</b>. If none of the above solutions work for you, you may need to contact Microsoft support for further assistance. You can visit the <a href="https://support.microsoft.com/en-us/contactus/">Microsoft support website</a> and choose the option that best suits your situation. You can also post your question on the <a href="https://answers.microsoft.com/en-us/msoffice/forum/all">Microsoft Community forum</a> and get help from other users who may have faced similar issues.</li>
19
- </ol>
20
- <p>We hope that this article has helped you fix the error code 0-2054 (0) for Office 2021 and enjoy its features without any problems. If you have any feedback or suggestions, please let us know in the comments below.</p> ddb901b051<br />
21
- <br />
22
- <br />
 
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Como Baixar e Instalar FIFA 22 Verso Crackeada em Portugus.md DELETED
@@ -1,28 +0,0 @@
1
- <br />
2
- <h1>How to Download FIFA 22 Cracked Version in Portuguese</h1>
3
- <p>If you are a fan of soccer games, you might be interested in downloading FIFA 22, the latest installment of the popular franchise from EA Sports. However, if you don't want to pay for the game or you want to play it in Portuguese, you might be looking for a cracked version that bypasses the DRM protection and allows you to change the language settings.</p>
4
- <h2>fifa 22 download pc crackeado português</h2><br /><p><b><b>Download File</b> --->>> <a href="https://byltly.com/2uKzPO">https://byltly.com/2uKzPO</a></b></p><br /><br />
5
- <p>In this article, we will show you how to download FIFA 22 cracked version in Portuguese using a reliable torrent site and a simple patch. Follow these steps and enjoy the game for free!</p>
6
- <ol>
7
- <li>Go to <a href="https://www.skidrowreloaded.com/">Skidrow Reloaded</a>, one of the best torrent sites for cracked games. Search for FIFA 22 and download the torrent file.</li>
8
- <li>Open the torrent file with your preferred torrent client, such as <a href="https://www.utorrent.com/">uTorrent</a> or <a href="https://www.bittorrent.com/">BitTorrent</a>. Choose a folder to save the game files and start the download.</li>
9
- <li>Once the download is complete, extract the game files using a program like <a href="https://www.win-rar.com/">WinRAR</a> or <a href="https://www.7-zip.org/">7-Zip</a>. You will find a folder called FIFA.22-CPY, which contains the cracked version of the game.</li>
10
- <li>Run the setup.exe file and follow the installation instructions. Make sure to uncheck any additional software or toolbars that might be offered during the installation.</li>
11
- <li>After the installation is done, copy the contents of the CPY folder (which contains the crack) and paste them into the game folder, replacing the original files.</li>
12
- <li>To change the language to Portuguese, open the CPY.ini file with a text editor like <a href="https://notepad-plus-plus.org/">Notepad++</a>. Find the line that says Language=english and change it to Language=brazilian. Save and close the file.</li>
13
- <li>Now you can launch the game from the desktop shortcut or the FIFA22.exe file. Enjoy playing FIFA 22 cracked version in Portuguese!</li>
14
- </ol>
15
- <p>Note: This method is for educational purposes only. We do not condone piracy or illegal downloading of games. If you like FIFA 22, please support the developers and buy the game from <a href="https://www.ea.com/games/fifa/fifa-22/buy/pc">the official website</a>.</p><p>If you are wondering what FIFA 22 has to offer in terms of new features and modes, here are some highlights that you can expect from the game:</p>
16
- <p></p>
17
- <ul>
18
- <li><b>HyperMotion Technology</b>: This is a new gameplay technology that uses advanced machine learning and real-life motion capture data from 22 professional players to create more realistic animations, movements, and interactions on the pitch. HyperMotion Technology is only available on PlayStation 5, Xbox Series X|S, and Stadia.</li>
19
- <li><b>Goalkeeper Rewrite</b>: The goalkeepers have been completely revamped with a new system that improves their shot-stopping abilities, decision-making skills, and personality. You will notice more variety and consistency in how they react to different situations and scenarios.</li>
20
- <li><b>New Attacking Tactics</b>: You can now customize your team's offensive style with more options and control over how they build up play, create chances, and finish. You can also adjust your defensive shape and intensity to counter your opponent's tactics.</li>
21
- <li><b>Career Mode</b>: You can create your own club from scratch and lead them to glory in Career Mode, choosing everything from the name, logo, kit, stadium, and fanbase. You can also enjoy a more immersive Player Career experience that lets you interact with your manager, teammates, and media, as well as participate in training sessions and matches.</li>
22
- <li><b>VOLTA FOOTBALL</b>: VOLTA FOOTBALL returns with more flair and style on the street football playgrounds around the world. You can customize your avatar with new outfits, hairstyles, tattoos, and emotes, as well as unlock new items and rewards as you progress. You can also play with your friends online or offline in various modes and formats.</li>
23
- <li><b>FIFA 22 Ultimate Team</b>: FUT 22 introduces FUT Heroes, which are iconic players from the past who have made a lasting impact on their clubs or leagues. You can also enjoy a redesigned Division Rivals and FUT Champions system that makes it easier to compete and progress against other players. Additionally, you can personalize your club with more customization options for your badge, stadium, kits, and celebrations.</li>
24
- <li><b>Pro Clubs</b>: Pro Clubs lets you create and join a team of up to 11 players online and play matches against other clubs. You can customize your Virtual Pro's appearance, attributes, traits, and positions, as well as track your progress and achievements with a new player growth system. You can also find new teammates and opponents with a streamlined social play feature.</li>
25
- </ul>
26
- <p>These are just some of the new features and modes that FIFA 22 has to offer. If you want to learn more about the game, you can visit <a href="https://www.ea.com/games/fifa/fifa-22">the official website</a> or watch the <a href="https://www.youtube.com/watch?v=o1igaMv46SY">official trailer</a>.</p> ddb901b051<br />
27
- <br />
28
- <br />
 
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gujarati Kaps Fonts 150 Varity Of Gujarati Fonts Rar Download Free High-Quality Fonts for Windows and Mac.md DELETED
@@ -1,155 +0,0 @@
1
- <br />
2
- <h1>Gujarati Kaps Fonts: A Guide to Download and Use 150+ Stylish Fonts for Photoshop</h1>
3
- <p>If you are looking for some unique and elegant fonts for your Gujarati designs, you might want to check out the Gujarati Kaps fonts. These are a collection of 150+ stylish fonts that are specially designed for Photoshop and other graphic design software. In this article, we will show you what are Gujarati Kaps fonts, how to download them, and how to use them in Photoshop. Let's get started!</p>
4
- <h2>Gujarati Kaps Fonts 150 Varity Of Gujarati Fonts Rar</h2><br /><p><b><b>DOWNLOAD</b> >> <a href="https://byltly.com/2uKvRk">https://byltly.com/2uKvRk</a></b></p><br /><br />
5
- <h2>What are Gujarati Kaps Fonts?</h2>
6
- <p>Gujarati Kaps fonts are a type of Gujarati fonts that have a distinctive style and flair. They are created by Kapilbhai Dave, a professional graphic designer and font creator from Gujarat. He has been making fonts since 1998 and has developed over 5000 fonts in various languages.</p>
7
- <h3>The origin and features of Kaps fonts</h3>
8
- <p>Kapilbhai Dave started making fonts as a hobby when he was studying at the National Institute of Design in Ahmedabad. He was inspired by the calligraphy and typography of different cultures and regions. He wanted to create fonts that would reflect the beauty and diversity of Gujarati language and culture.</p>
9
- <p>He named his fonts as Kaps, which is derived from his own name. He also added numbers to his fonts, such as Kap 1, Kap 2, Kap 3, etc., to indicate the order of creation. He used various tools and techniques to make his fonts, such as pen, brush, ink, paper, scanner, computer, software, etc.</p>
10
- <p>Kapilbhai Dave's fonts have some common features that make them stand out from other Gujarati fonts. Some of these features are:</p>
11
- <ul>
12
- <li>They have a smooth and flowing curve that gives them a natural and organic look.</li>
13
- <li>They have a balanced and harmonious proportion that makes them easy to read and pleasing to the eye.</li>
14
- <li>They have a creative and artistic flair that adds character and personality to the text.</li>
15
- <li>They have a variety of styles and weights that suit different purposes and moods.</li>
16
- <li>They have a high-quality and professional finish that makes them suitable for print and digital media.</li>
17
- </ul>
18
- <h3>The benefits and applications of Kaps fonts</h3>
19
- <p>Kapilbhai Dave's fonts have many benefits and applications for designers and users alike. Some of these benefits are:</p>
20
- <ul>
21
- <li>They enhance the aesthetic appeal and visual impact of the design.</li>
22
- <li>They convey the message and tone of the content more effectively.</li>
23
- <li>They attract the attention and interest of the audience more easily.</li>
24
- <li>They express the identity and culture of the brand or organization more authentically.</li>
25
- <li>They add value and uniqueness to the product or service more convincingly.</li>
26
- </ul>
27
- <p>Kapilbhai Dave's fonts can be used for various purposes and projects, such as:</p>
28
- <ul>
29
- <li>Wedding invitations, brochures, pamphlets, flyers, posters, banners, etc.</li>
30
- <li>Logos, slogans, headlines, titles, captions, etc.</li>
31
- <li>Books, magazines, newspapers, newsletters, etc.</li>
32
- <li>Websites, blogs, social media posts, etc.</li>
33
- <li>Videos, animations, presentations, etc.</li>
34
- </ul>
35
- <h2>How to download Gujarati Kaps Fonts?</h2>
36
- <p>If you want to use Kapilbhai Dave's fonts in your designs, you need to download them first. There are many websites that offer his fonts for free or for a fee. However, one of the easiest ways to download his fonts is from 4shared.com. This is a file-sharing website that allows you to download files from other users. Here are the steps to download Gujarati Kaps Fonts from 4shared.com:</p>
37
- <h3>The steps to download the fonts from 4shared.com</h3>
38
- <ol>
39
- <li>Go to <a href="https://www.free-fonts.com/gujarati-kaps">https://www.free-fonts.com/gujarati-kaps</a> This is a web page that has a link to download 150+ KAP Gujarati Fonts from 4shared.com.</li>
40
- <li>Click on the link that says "Download gujarati kaps fonts (150 varity of gujarati fonts).rar from 4shared.com". This will take you to another web page that has the file name "Gujarati KAPS Fonts (150 varity of gujarati fonts).rar".</li>
41
- <li>Click on the green button that says "Download". This will start downloading the file to your computer. The file size is about 5 MB.</li>
42
- <li>Wait for the download to finish. You can check the progress of the download on your browser or on your download manager.</li>
43
- </ol>
44
- <h3>The steps to unzip and install the fonts on Windows</h3>
45
- <ol>
46
- <li>Locate the file "Gujarati KAPS Fonts (150 varity of gujarati fonts).rar" on your computer. It should be in your Downloads folder or wherever you saved it.</li>
47
- <li>Right-click on the file and select "Extract Here" or "Extract All". This will unzip or extract the file into a folder with the same name.</li>
48
- <li>Open the folder "Gujarati KAPS Fonts (150 varity of gujarati fonts)". You will see many subfolders with names like "KAP-01", "KAP-02", "KAP-03", etc. Each subfolder contains one or more font files with extensions like ".ttf", ".otf", ".fon", etc.</li>
49
- <li>Select all the font files that you want to install. You can use Ctrl+A to select all or Ctrl+click to select multiple files.</li>
50
- <li>Right-click on the selected files and select "Install". This will install the fonts on your computer. You may need administrator permission or password to do this.</li>
51
- <li>Wait for the installation to finish. You can check if the installation was successful by going to Control Panel > Fonts or by opening any software that uses fonts like Word or Photoshop.</li>
52
- </ol>
53
- <h2>How to use Gujarati Kaps Fonts in Photoshop?</h2>
54
- <p>Now that you have downloaded and installed Gujarati Kaps Fonts on your computer, you can use them in Photoshop or any other graphic design software. Here are some steps to use Gujarati Kaps Fonts in Photoshop:</p>
55
- <h3>The steps to select and apply the fonts in Photoshop</h3>
56
- <ol>
57
- <li>Open Photoshop and create a new document or open an existing one.</li>
58
- <li>Select the Text tool (T) from the toolbar or press T on your keyboard.</li>
59
- <li>Click on the document where you want to add text or select an existing text layer.</li>
60
- <li>In the Options bar at the top of your screen, click on the Font drop-down menu. This will show you all the available fonts on your computer.</li>
61
- <li>Scroll down until you find the font name that starts with "KAP". You will see many options like "KAP-01", "KAP-02", "KAP-03", etc. These are all different styles of Gujarati Kaps Fonts. You can also type "KAP" in the search box to filter out other fonts.</li>
62
- <li>Select the font style that you like and click on it. You will see a preview of the font on your text.</li>
63
- <li>Adjust the font size, color, alignment, and other settings as you wish. You can also use the Character panel (Window > Character) or the Paragraph panel (Window > Paragraph) for more options.</li>
64
- <li>Repeat the steps for any other text layers that you want to apply Gujarati Kaps Fonts to.</li>
65
- </ol>
66
- <h3>The tips and tricks to create stunning designs with Kaps fonts</h3>
67
- <p>Gujarati Kaps Fonts are versatile and expressive fonts that can help you create stunning designs with Photoshop. Here are some tips and tricks to make the most of them:</p>
68
- <p>How to download 150+ KAP Gujarati Fonts for Photoshop[^1^]<br />
69
- Free Stylish Gujarati Fonts For Photoshop - YouTube[^1^]<br />
70
- Download Gujarati files - TraDownload[^2^]<br />
71
- Download kap Fonts - Search Free Fonts[^2^]<br />
72
- Gujarati Kaps Free Font - Free Fonts search and download[^2^]<br />
73
- Gujarati kaps fonts (150 varity of gujarati fonts).rar from 4shared.com[^2^]<br />
74
- Gujarati Kaps Fonts 150 Varity Of Gujarati Fonts Rar Download[^3^]<br />
75
- Lián Types - The best website for free high-quality Gujarati Kap fonts[^3^]<br />
76
- Gujarati Kaps Fonts 150 Varity Of Gujarati Fonts Rar | Peatix<br />
77
- gujarati kaps fonts 150 varity of gujarati fonts rar is a lightweight and easy to use program<br />
78
- Gujarati KAPS Fonts (150 varity of gujarati fonts).rar Download<br />
79
- Direct link Gujarati KAPS Fonts (150 varity of gujarati fonts).rar 4shared for all<br />
80
- Kap 127 to Unicode | Unicode to Kap 127 | Gujarati Font Converter<br />
81
- Apart from Kap 127 to Unicode conversion, this unique program converts non Unicode fonts into Gujarati Unicode text and vice versa<br />
82
- 34 Professional Gujarati Kaps Fonts to Download - Typograph<br />
83
- Shree Gujarati 1110 Italic Modular InfoTech - Most popular font for professional printout<br />
84
- Fonts - 4shared - minifolder with various gujarati fonts and software<br />
85
- Indica Summit Scrip Gujarati + Hindi Multi Typing Software.rar from 4shared.com<br />
86
- MultiFont_KBE.zip - a collection of multiple fonts for different languages<br />
87
- TBIL 4.1.rar - a tool for transliteration and script conversion of Indian languages<br />
88
- akruti 6.0 indian language typing software for desk top publishing.zip from 4shared.com<br />
89
- gujarati and clip art font (terafonts).rar - a set of fonts with clip art symbols for gujarati language<br />
90
- gujarati font aa_post script font.rar - a post script font for gujarati language<br />
91
- How to install gujarati kaps fonts on windows 10 - tutorial video<br />
92
- How to use gujarati kaps fonts in kinemaster or picsart pixellab - tutorial video<br />
93
- How to create wedding invitations, brouchers and pamphlets in gujarati language using kaps fonts<br />
94
- How to download and use free stylish gujarati fonts for Microsoft Word<br />
95
- How to convert gujarati kaps fonts to unicode online<br />
96
- How to type in gujarati using kaps fonts on your smartphone<br />
97
- How to customize and edit your own kaps fonts using FontForge<br />
98
- How to design logos and banners using kaps fonts in Adobe Illustrator<br />
99
- How to make your own clip art symbols for gujarati language using terafonts<br />
100
- How to write beautiful calligraphy using kaps fonts in Adobe Photoshop<br />
101
- How to print high-quality documents using shree gujarati 1110 italic modular infotech font<br />
102
- How to translate text from english to gujarati using tbil 4.1 tool<br />
103
- How to type in multiple languages using multifont_kbe.zip software<br />
104
- How to learn gujarati language using indica summit scrip gujarati + hindi multi typing software<br />
105
- How to create professional desktop publishing projects using akruti 6.0 indian language typing software<br />
106
- How to make your own post script font for gujarati language using FontLab Studio<br />
107
- How to share your gujarati kaps fonts with others using 4shared.com<br />
108
- How to find more free and high-quality gujarati kap fonts on lián types website<br />
109
- How to compare different types of gujarati kaps fonts using typograph website<br />
110
- How to write mathematical expressions in gujarati using kaps fonts and LaTeX<br />
111
- How to create memes and stickers using kaps fonts and clip art symbols<br />
112
- How to make your own font family using kaps fonts and FontStruct<br />
113
- How to embed kaps fonts in your website or blog using CSS<br />
114
- How to create animated gifs and videos using kaps fonts and GIMP<br />
115
- How to generate QR codes and barcodes using kaps fonts and online tools<br />
116
- How to create crossword puzzles and word games using kaps fonts and Excel<br />
117
- How to make your own handwriting font using kaps fonts and Scanahand</p>
118
- <ul>
119
- <li>Use contrast and hierarchy to create visual interest and clarity. You can mix different styles and weights of Kaps fonts to create contrast and hierarchy. For example, you can use a bold or heavy style for headlines and a light or regular style for body text. You can also use different colors or sizes to emphasize important words or phrases.</li>
120
- <li>Use kerning and tracking to adjust the spacing between letters and words. Kerning is the adjustment of the space between individual letters, while tracking is the adjustment of the space between groups of letters or words. You can use these tools to fine-tune the appearance and readability of your text. To access these tools, select your text layer and go to Window > Character. Then use the sliders or input boxes for kerning and tracking.</li>
121
- <li>Use leading to adjust the spacing between lines of text. Leading is the vertical space between lines of text. You can use this tool to control the density and flow of your text. To access this tool, select your text layer and go to Window > Character. Then use the slider or input box for leading.</li>
122
- <li>Use alignment and justification to arrange your text in different ways. Alignment is the horizontal position of your text relative to its margins or edges. Justification is the adjustment of the space between words to make them fit evenly across a line. You can use these tools to create different effects and layouts for your text. To access these tools, select your text layer and go to Window > Paragraph. Then use the buttons for alignment and justification.</li>
123
- <li>Use ligatures and alternates to add some flair and variety to your text. Ligatures are special characters that combine two or more letters into one glyph, such as "fi" or "fl". Alternates are different versions of a letter that have a different shape or style, such as "a" or "g". You can use these tools to make your text more unique and dynamic. To access these tools, select your text layer and go to Window > Glyphs. Then browse through the glyphs panel and double-click on any ligature or alternate that you want to use.</li>
124
- </ul>
125
- <h2>Conclusion</h2>
126
- <p>Gujarati Kaps Fonts are a great choice for anyone who wants to create beautiful and professional designs with Gujarati text. They are easy to download, install, and use in Photoshop or any other graphic design software. They have a wide range of styles and weights that can suit any purpose and mood. They have a smooth and flowing curve that gives them a natural and organic look. They have a balanced and harmonious proportion that makes them easy to read and pleasing to the eye. They have a creative and artistic flair that adds character and personality to the text.</p>
127
- <p>If you are interested in using Gujarati Kaps Fonts in your designs, you can follow the steps and tips that we have shared in this article. You can also experiment with different combinations and settings to find your own style and voice. We hope that this article has inspired you to try out Gujarati Kaps Fonts and create stunning designs with them.</p>
128
- <p>Do you have any questions or comments about Gujarati Kaps Fonts? Do you have any suggestions or feedback for us? Let us know in the comments below!</p>
129
- <h2>FAQs</h2>
130
- <h4>Q1: How many Kaps fonts are there in total?</h4>
131
- <p>A1: According to Kapilbhai Dave's website, there are over 5000 fonts in total, including Gujarati, Hindi, English, Sanskrit, Marathi, Bengali, Tamil, Telugu, Malayalam, Kannada, Punjabi, Oriya, Assamese, Nepali, Tibetan, Arabic, Persian, Urdu, Sindhi, Pashto, Balochi, Kurdish, Hebrew, Greek, Russian, Mongolian, Chinese, Japanese, Korean, Thai, Lao, Khmer, Vietnamese, Burmese, Sinhala, and more.</p>
132
- <h4>Q2: Are Kaps fonts free to use?</h4>
133
- <p>A2: It depends on where you download them from and what you use them for. Some websites offer Kaps fonts for free for personal or non-commercial use only. Others may charge a fee for commercial use or for the full version of the fonts. You should always check the license terms and conditions before downloading and using any font. You should also respect the intellectual property and rights of the font creator.</p>
134
- <h4>Q3: Can I use Kaps fonts in other software besides Photoshop?</h4>
135
- <p>A3: Yes, you can use Kaps fonts in any software that supports TrueType, OpenType, or other font formats. However, some software may have different features and options for using fonts than Photoshop. For example, some software may not support ligatures or alternates, or may have different ways of accessing them. You should always check the documentation and help files of your software to learn how to use fonts effectively.</p>
136
- <h4>Q4: How can I preview the fonts before downloading them?</h4>
137
- <p>A4: One way to preview the fonts before downloading them is to use online font preview tools. These are websites that allow you to type in some text and see how it looks with different fonts. Some examples of online font preview tools are:</p>
138
- <ul>
139
- <li><a href="https://www.fontsquirrel.com/matcherator">Font Squirrel Matcherator</a>: This tool allows you to upload an image of a font and find similar or matching fonts.</li>
140
- <li><a href="https://www.myfonts.com/WhatTheFont/">MyFonts WhatTheFont</a>: This tool allows you to upload an image of a font and identify it.</li>
141
- <li><a href="https://wordmark.it/">Wordmark.it</a>: This tool allows you to type in some text and see how it looks with all the fonts installed on your computer.</li>
142
- <li><a href="https://www.dafont.com/">DaFont</a>: This website allows you to browse through thousands of free fonts and see how they look with custom text.</li>
143
- </ul>
144
- <h4>Q5: Where can I find more resources and tutorials on Kaps fonts?</h4>
145
- <p>A5: If you want to learn more about Kaps fonts and how to use them in your designs, you can check out some of these resources and tutorials:</p>
146
- <ul>
147
- <li><a href="https://www.youtube.com/watch?v=BQBwR7ZKuCU">How to download 150+ KAP Gujarati Fonts? | Free Stylish Gujarati Fonts For Photoshop</a>: This is a video tutorial that shows you how to download and install Kaps fonts from 4shared.com.</li>
148
- <li><a href="https://www.creativebloq.com/typography/design-your-own-typeface-8133919">Font design: 17 brilliant tips to create your own typeface</a>: This is an article that gives you some tips and advice on how to design your own font.</li>
149
- <li><a href="https://visme.co/blog/top-fonts-2020/">Top Fonts For 2020 To Create Outstanding Designs</a>: This is an article that showcases some of the best fonts for 2020, including Kaps fonts.</li>
150
- <li><a href="https://glyphsapp.com/learn/creating-an-all-caps-font">Creating an all-caps font | Glyphs</a>: This is a tutorial that shows you how to create an all-caps font using Glyphs, a professional font editor.</li>
151
- <li><a href="https://justcreative.com/all-caps-fonts/">46+ Stunning ALL CAPS Fonts to Make a Statement</a>: This is an article that features some of the most stunning all-caps fonts available online.</li>
152
- </ul>
153
- </p> 0a6ba089eb<br />
154
- <br />
155
- <br />
 
 
spaces/1gistliPinn/ChatGPT4/Examples/Foxit Advanced Pdf Editor 310 Serial Number A Powerful and Easy-to-Use PDF Editor.md DELETED
@@ -1,6 +0,0 @@
1
- <h2>Foxit Advanced Pdf Editor 310 Serial Number</h2><br /><p><b><b>DOWNLOAD</b> &#10022; <a href="https://imgfil.com/2uxYce">https://imgfil.com/2uxYce</a></b></p><br /><br />
2
-
3
- aaccfb2cb3<br />
4
- <br />
5
- <br />
6
- <p></p>
 
 
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Simulator 2 Mod APK Unlimited Money and All Cars Unlocked for Free.md DELETED
@@ -1,92 +0,0 @@
1
- <br />
2
- <h1>Car Simulator 2 All Cars Unlocked APK: A Realistic and Fun Racing Game</h1>
3
- <p>If you are a fan of racing games, you might have heard of Car Simulator 2, a popular simulation game that lets you drive various cars in a realistic world. But did you know that you can download Car Simulator 2 all cars unlocked apk and enjoy the game with more features and benefits? In this article, we will tell you what Car Simulator 2 is, why you should download the modded version, and how to do it. Read on to find out more.</p>
4
- <h2>What is Car Simulator 2?</h2>
5
- <p>Car Simulator 2 is a simulation game developed by Oppana Games FZC LLC. It is available for Android devices and has over 10 million downloads on Google Play Store. The game has impressive graphics and physics that make you feel like you are driving a real car. You can explore a vast open world with different locations, such as cities, deserts, mountains, and highways. You can also choose from a variety of cars, ranging from sports cars, muscle cars, SUVs, trucks, and more. You can customize your car with different colors, wheels, spoilers, and other accessories.</p>
6
- <h2>car simulator 2 all cars unlocked apk</h2><br /><p><b><b>Download</b> &#10084; <a href="https://urlin.us/2uT0Wb">https://urlin.us/2uT0Wb</a></b></p><br /><br />
7
- <p>The game has different modes that you can play solo or with your friends online. You can participate in races, missions, daily challenges, and events to earn money and gold. You can also join clubs and compete with other players on the leaderboard. The game is fun and addictive, as you can experience realistic driving scenarios, such as traffic, police, accidents, weather, and more.</p>
8
- <h2>Why download Car Simulator 2 all cars unlocked apk?</h2>
9
- <p>While Car Simulator 2 is a free game, it has some limitations that might affect your enjoyment. For example, you need to spend money and gold to buy new cars or upgrade your existing ones. You also need to unlock new locations by completing certain tasks or reaching certain levels. Moreover, some cars and locations are only available through in-app purchases that require real money.</p>
10
- <p>That is why downloading Car Simulator 2 all cars unlocked apk is a good idea. This is a modded version of the game that gives you unlimited money and gold. You can use them to buy any car or location you want without any restrictions. You can also access all the features and content of the game without spending a dime. This way, you can have more fun and freedom in the game.</p>
11
- <h2>How to download and install Car Simulator 2 all cars unlocked apk?</h2>
12
- <p>Downloading and installing Car Simulator 2 all cars unlocked apk is easy and simple. Just follow these steps:</p>
13
- <ol>
14
- <li>Download the apk file from a trusted source. You can use this link to get the latest version of the modded game.</li>
15
- <li>Enable unknown sources in your device settings. This will allow you to install apps from sources other than Google Play Store.</li>
16
- <li>Install the apk file by tapping on it and following the instructions.</li>
17
- <li>Launch the game and enjoy.</li>
18
- </ol>
19
- <h2>Conclusion</h2>
20
- <p>Car Simulator 2 is a realistic and fun racing game that lets you drive various cars in a vast open world. You can play different modes, missions, challenges, and events with your friends online. You can also customize your car with different colors, wheels, spoilers, and other accessories.</p>
21
- <p>car simulator 2 mod apk unlimited money and gold<br />
22
- car simulator 2 hack apk download for android<br />
23
- car simulator 2 latest version mod apk<br />
24
- car simulator 2 realistic driving game mod apk<br />
25
- car simulator 2 multiplayer racing game mod apk<br />
26
- car simulator 2 free download with all cars unlocked<br />
27
- car simulator 2 apk + obb data file<br />
28
- car simulator 2 gameplay features and tips<br />
29
- car simulator 2 best cars to drive and customize<br />
30
- car simulator 2 how to unlock all locations and missions<br />
31
- car simulator 2 cheats and tricks for android<br />
32
- car simulator 2 review and rating by users<br />
33
- car simulator 2 online mode with friends and strangers<br />
34
- car simulator 2 offline mode without internet connection<br />
35
- car simulator 2 new update and patch notes<br />
36
- car simulator 2 alternatives and similar games<br />
37
- car simulator 2 system requirements and compatibility<br />
38
- car simulator 2 bugs and issues fix guide<br />
39
- car simulator 2 support and contact information<br />
40
- car simulator 2 mod menu with unlimited resources<br />
41
- car simulator 2 no ads and in-app purchases<br />
42
- car simulator 2 premium version with extra benefits<br />
43
- car simulator 2 how to install and run on pc<br />
44
- car simulator 2 how to backup and restore data<br />
45
- car simulator 2 how to play with controller or keyboard<br />
46
- car simulator 2 how to earn money and gold fast<br />
47
- car simulator 2 how to upgrade and repair cars<br />
48
- car simulator 2 how to change camera and view angle<br />
49
- car simulator 2 how to switch between day and night mode<br />
50
- car simulator 2 how to use nitro and drift skills<br />
51
- car simulator 2 how to join and create clubs<br />
52
- car simulator 2 how to participate in tournaments and events<br />
53
- car simulator 2 how to rank up and level up<br />
54
- car simulator 2 how to unlock achievements and rewards<br />
55
- car simulator 2 how to customize your avatar and profile<br />
56
- car simulator 2 pros and cons of the game<br />
57
- car simulator 2 frequently asked questions and answers<br />
58
- car simulator 2 feedback and suggestions from players<br />
59
- car simulator 2 fan art and wallpapers download<br />
60
- car simulator 2 mod apk safe and virus free download link</p>
61
- <p>If you want to enjoy the game with more features and benefits, you should download Car Simulator 2 all cars unlocked apk. This is a modded version of the game that gives you unlimited money and gold. You can use them to buy any car or location you want without any restrictions. You can also access all the features and content of the game without spending a dime. This way, you can have more fun and freedom in the game.</p>
62
- <h2>FAQs</h2>
63
- <p>Here are some frequently asked questions about Car Simulator 2 all cars unlocked apk:</p>
64
- <table>
65
- <tr>
66
- <th>Question</th>
67
- <th>Answer</th>
68
- </tr>
69
- <tr>
70
- <td>Is Car Simulator 2 all cars unlocked apk safe to download and install?</td>
71
- <td>Yes, it is safe as long as you download it from a trusted source. However, you should always scan the apk file with an antivirus before installing it.</td>
72
- </tr>
73
- <tr>
74
- <td>Will I get banned for using Car Simulator 2 all cars unlocked apk?</td>
75
- <td>No, you will not get banned for using the modded version of the game. The game does not have any anti-cheat system or online verification. You can play the game offline or online without any problems.</td>
76
- </tr>
77
- <tr>
78
- <td>Can I update Car Simulator 2 all cars unlocked apk?</td>
79
- <td>No, you cannot update the modded version of the game. If you want to get the latest updates and features of the game, you will have to download and install the original version from Google Play Store.</td>
80
- </tr>
81
- <tr>
82
- <td>Can I play Car Simulator 2 all cars unlocked apk with my friends online?</td>
83
- <td>Yes, you can play the game with your friends online. You can join clubs, races, missions, and events with other players who have the same version of the game.</td>
84
- </tr>
85
- <tr>
86
- <td>What are the minimum requirements to play Car Simulator 2 all cars unlocked apk?</td>
87
- <td>The minimum requirements to play the game are Android 4.4 or higher, 1 GB of RAM, and 300 MB of free storage space.</td>
88
- </tr>
89
- </table>
90
- <p>I hope this article has helped you learn more about Car Simulator 2 all cars unlocked apk. If you have any questions or feedback, please leave a comment below. Thank you for reading and happy gaming!</p> 197e85843d<br />
91
- <br />
92
- <br />
 
 
spaces/1phancelerku/anime-remove-background/Download Go Go by BTS and Join the ARMY - The Biggest Fan Community in the World.md DELETED
@@ -1,126 +0,0 @@
1
-
2
- <h1>Download Go Go by BTS: A Guide for ARMYs</h1>
3
- <p>Are you a fan of BTS, the global sensation and phenomenon in the music industry? If so, you probably have heard of their hit song "Go Go", a catchy and upbeat track that showcases their charisma and talent. But have you downloaded it yet? If not, you are missing out on a lot of fun and excitement. In this article, we will tell you everything you need to know about "Go Go" by BTS, and why you should download it right now.</p>
4
- <h2>download go go by bts</h2><br /><p><b><b>Download Zip</b> &gt; <a href="https://jinyurl.com/2uNOEQ">https://jinyurl.com/2uNOEQ</a></b></p><br /><br />
5
- <h2>What is Go Go by BTS?</h2>
6
- <p>"Go Go" is a song by BTS, a seven-member South Korean boy band that has taken over the world with their music, message, and style. The song was released on September 18, 2017, as part of their fifth mini album "Love Yourself: Her". It is the eighth track on the album, and also appears as the fourth track on their second compilation album "Love Yourself: Answer".</p>
7
- <p>The song is a fusion of trap, hip hop, and EDM genres, with a catchy chorus and playful lyrics. The song is about living in the moment and enjoying life without worrying too much about the future or money. The song also reflects the youth culture and attitude of BTS and their fans, who are often called ARMYs.</p>
8
- <h2>Why you should download Go Go by BTS?</h2>
9
- <p>There are many reasons why you should download "Go Go" by BTS. Here are some of them:</p>
10
- <h3>How to support BTS by downloading Go Go?</h3>
11
- <p>One of the best ways to support BTS is by downloading their songs legally and ethically. By doing so, you are showing your appreciation and respect for their hard work and creativity. You are also helping them achieve more recognition and success in the music industry. Downloading their songs also contributes to their chart rankings, awards nominations, and sales records.</p>
12
- <p>There are many platforms and methods to download "Go Go" by BTS legally and ethically. Some of them are:</p>
13
- <ul>
14
- <li>Buying or streaming the song from official online music stores or services, such as iTunes, Spotify, Amazon Music, YouTube Music, etc.</li>
15
- <li>Purchasing or downloading the song from official physical albums or CDs, such as "Love Yourself: Her" or "Love Yourself: Answer".</li>
16
- <li>Using official fan club memberships or subscriptions to access exclusive content or benefits related to the song or BTS.</li>
17
- </ul>
18
- <h3>How to enjoy Go Go by BTS?</h3>
19
- <p>Another reason why you should download "Go Go" by BTS is because it is a fun and enjoyable song that will make you happy and energetic. There are many ways to listen to and appreciate the song, such as:</p>
20
- <ul>
21
- <li>Watching the music video of "Go Go" on YouTube or other platforms. The music video features BTS performing the song in colorful outfits and settings, with hilarious expressions and gestures. The music video also has some references and parodies of popular culture and memes.</li>
22
- <li>Learning the choreography of "Go Go" from online tutorials or videos. The chore ography of "Go Go" is very catchy and fun, with some moves inspired by the "Gwiyomi" song and the "Dame Tu Cosita" dance. You can learn the dance steps and practice them with your friends or alone.</li>
23
- <li>Singing along to "Go Go" with the lyrics or karaoke versions. The lyrics of "Go Go" are very witty and humorous, with some wordplay and slang. You can sing along to the song and express your feelings and thoughts about life and money.</li>
24
- </ul>
25
- <h3>How to join the ARMY fandom with Go Go?</h3>
26
- <p>A third reason why you should download "Go Go" by BTS is because it will help you connect with other fans of the song and BTS, who are known as ARMYs. ARMYs are one of the most loyal and passionate fandoms in the world, who support and love BTS unconditionally. There are many communities and activities to join the ARMY fandom with "Go Go", such as:</p>
27
- <ul>
28
- <li>Following BTS and their official accounts on social media, such as Twitter, Instagram, Facebook, Weverse, etc. You can interact with BTS and other ARMYs by liking, commenting, sharing, or posting about "Go Go" or other BTS-related topics.</li>
29
- <li>Participating in fan projects or events related to "Go Go" or BTS, such as streaming parties, hashtag campaigns, fan art contests, charity donations, etc. You can show your appreciation and support for BTS and their music by joining these projects or events.</li>
30
- <li>Attending concerts or fan meetings of BTS where they perform "Go Go" or other songs live. You can experience the amazing performance and energy of BTS and their fans by attending these concerts or fan meetings.</li>
31
- </ul>
32
- <h2>Where to download Go Go by BTS?</h2>
33
- <p>Now that you know why you should download "Go Go" by BTS, you might be wondering where to download it from. There are many sources and sites to download the song, but not all of them are reliable or convenient. To help you choose the best option for you, we have prepared a comparison table of the best sources and sites to download "Go Go" by BTS, based on quality, price, and convenience.</p>
34
- <p>download go go by bts mp3<br />
35
- download go go by bts lyrics<br />
36
- download go go by bts video<br />
37
- download go go by bts live performance<br />
38
- download go go by bts dance practice<br />
39
- download go go by bts instrumental<br />
40
- download go go by bts ringtone<br />
41
- download go go by bts album<br />
42
- download go go by bts boomplay<br />
43
- download go go by bts internet archive<br />
44
- download go go by bts m countdown<br />
45
- download go go by bts english version<br />
46
- download go go by bts remix<br />
47
- download go go by bts acoustic cover<br />
48
- download go go by bts karaoke<br />
49
- download go go by bts reaction<br />
50
- download go go by bts piano sheet music<br />
51
- download go go by bts guitar chords<br />
52
- download go go by bts spotify<br />
53
- download go go by bts apple music<br />
54
- download go go by bts soundcloud<br />
55
- download go go by bts amazon music<br />
56
- download go go by bts youtube music<br />
57
- download go go by bts tiktok<br />
58
- download go go by bts 320kbps<br />
59
- download go go by bts flac<br />
60
- download go go by bts wav<br />
61
- download go go by bts zip file<br />
62
- download gogo song of BTS <br />
63
- how to download gogo song of BTS</p>
64
- <table>
65
- <tr>
66
- <th>Source/Site</th>
67
- <th>Quality</th>
68
- <th>Price</th>
69
- <th>Convenience</th>
70
- </tr>
71
- <tr>
72
- <td>iTunes</td>
73
- <td>High</td>
74
- <td>$1.29 per song</td>
75
- <td>Easy to use, compatible with Apple devices</td>
76
- </tr>
77
- <tr>
78
- <td>Spotify</td>
79
- <td>High</td>
80
- <td>$9.99 per month for premium subscription</td>
81
- <td>Easy to use, compatible with various devices, offers offline mode</td>
82
- </tr>
83
- <tr>
84
- <td>Amazon Music</td>
85
- <td>High</td>
86
- <td>$0.99 per song or $7.99 per month for unlimited subscription</td>
87
- <td>Easy to use, compatible with various devices, offers offline mode</td>
88
- </tr>
89
- <tr>
90
- <td>YouTube Music</td>
91
- <td>Medium</td>
92
- <td>$11.99 per month for premium subscription</td>
93
- <td>Easy to use, compatible with various devices, offers offline mode and music video access</td>
94
- </tr>
95
- <tr>
96
- <td>"Love Yourself: Her" album</td>
97
- <td>High</td><td>$19.99 per album (includes 9 songs)</td><td>Requires physical purchase or delivery, offers additional content such as photobook and photocard</td></tr><tr><td>"Love Yourself: Answer" album</td><td>High</td><td>$24.99 per album (includes 26 songs)</td><td>Requires physical purchase or delivery, offers additional content such as photobook and photocard</td></tr></table><h2>Conclusion</h2><p>In conclusion, "Go Go" by BTS is a great song that you should download right now. It is a fun and upbeat song that will make you happy and energetic. It is also a way to support BTS and their music, enjoy their performance and style, and join their fandom and community. You can download the song from various sources and sites, depending on your preference and budget. So what are you waiting for? Go go go and download "Go Go" by BTS today!</p><h4>Frequently Asked Questions (FAQs)</h4><p>Here are some of the most common questions that people have about "Go Go" by BTS:</p><ol><li><b>What does "yolo yolo yolo yo" mean in the chorus of "Go Go"?</b></li><p>This is a repetition of the acronym "YOLO", which stands for "You Only Live Once". It is a popular phrase that expresses the idea of living in the present and enjoying life without regrets. In the context of the song, it means that BTS and their fans are having fun and spending money without worrying about the future or saving up.</p>
98
- <li><b>What is the meaning of the money gun gesture in the "Go Go" choreography?</b></li>
99
- <p>This is a gesture that mimics shooting money from a toy gun, which is often used by rappers or celebrities to show off their wealth and status. In the context of the song, it is a sarcastic and ironic gesture that mocks the materialistic and consumerist culture of society. It also shows that BTS and their fans are not obsessed with money or fame, but rather value happiness and freedom.</p>
100
- <li><b>What are some of the references and parodies in the "Go Go" music video?</b></li>
101
- <p>There are many references and parodies in the "Go Go" music video, such as:</p>
102
- <ul>
103
- <li>The opening scene where BTS are lying on a pile of money and wearing masks is a reference to the movie "The Purge", which is a dystopian thriller about a night where all crimes are legal.</li>
104
- <li>The scene where BTS are dancing on a yacht and wearing Hawaiian shirts is a parody of the song "Gangnam Style" by Psy, which is a viral hit that mocks the lavish lifestyle of Seoul's elite.</li>
105
- <li>The scene where BTS are playing video games and eating snacks is a reference to the popular online game "PlayerUnknown's Battlegrounds", which is a survival shooter game where players compete against each other.</li>
106
- <li>The scene where BTS are wearing animal onesies and dancing with inflatable toys is a parody of the song "Dame Tu Cosita" by El Chombo, which is a viral hit that features an alien dancing to a catchy tune.</li>
107
- </ul>
108
- <li><b>What are some of the wordplay and slang in the "Go Go" lyrics?</b></li>
109
- <p>There are some wordplay and slang in the "Go Go" lyrics, such as:</p>
110
- <ul>
111
- <li>The phrase "dallyeoga go go" means "run go go", but it also sounds like "dalla ga go go", which means "be different go go". This plays on the double meaning of the word "dallyeoga", which can mean either "run" or "be different".</li>
112
- <li>The phrase "jeonbu da nae baee" means "it's all my money", but it also sounds like "jeonbu da nae bae", which means "it's all my boat". This plays on the homophony of the words "baee" and "bae", which can mean either "money" or "boat".</li>
113
- <li>The word "doljikgu" means "honesty", but it also sounds like "dollar jikgu", which means "dollar direct hire". This plays on the similarity of the words "doljikgu" and "dollar jikgu", which can mean either "honesty" or "dollar direct hire".</li>
114
- <li>The word "jjajeungna" means "annoyed", but it also sounds like "jjajangmyeon", which is a popular Korean noodle dish with black bean sauce. This plays on the similarity of the words "jjajeungna" and "jjajangmyeon", which can mean either "annoyed" or "annoyed" or "jjajangmyeon".</li>
115
- </ul>
116
- <li><b>What are some of the awards and achievements of "Go Go" by BTS?</b></li>
117
- <p>"Go Go" by BTS is a very successful and popular song that has won many awards and achievements, such as:</p>
118
- <ul>
119
- <li>It peaked at number 3 on the Billboard World Digital Songs chart, and number 71 on the Billboard Canadian Hot 100 chart.</li>
120
- <li>It sold over 200,000 digital downloads and over 100 million streams worldwide.</li>
121
- <li>It won the Best Dance Performance award at the 2017 Mnet Asian Music Awards, and the Best Music Video award at the 2018 Seoul Music Awards.</li>
122
- <li>It was nominated for the Song of the Year award at the 2018 Golden Disc Awards, and the Best Pop Song award at the 2018 Korean Music Awards.</li>
123
- <li>It was performed by BTS at various shows and events, such as the 2017 American Music Awards, the 2017 Mnet Asian Music Awards, the 2017 Melon Music Awards, the 2018 Seoul Music Awards, and the 2018 Lotte Family Concert.</li>
124
- </ul></p> 401be4b1e0<br />
125
- <br />
126
- <br />
 
 
spaces/52Hz/SRMNet_thesis/WT/__init__.py DELETED
@@ -1 +0,0 @@
1
- from .transform import *
 
 
spaces/6Eternal9/ChatGPT4/README.md DELETED
@@ -1,14 +0,0 @@
1
- ---
2
- title: Chat-with-GPT4
3
- emoji: 🚀
4
- colorFrom: red
5
- colorTo: indigo
6
- sdk: gradio
7
- sdk_version: 3.21.0
8
- app_file: app.py
9
- pinned: false
10
- license: mit
11
- duplicated_from: ysharma/ChatGPT4
12
- ---
13
-
14
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
spaces/AIConsultant/MusicGen/tests/data/test_audio_dataset.py DELETED
@@ -1,352 +0,0 @@
1
- # Copyright (c) Meta Platforms, Inc. and affiliates.
2
- # All rights reserved.
3
- #
4
- # This source code is licensed under the license found in the
5
- # LICENSE file in the root directory of this source tree.
6
-
7
- from functools import partial
8
- from itertools import product
9
- import json
10
- import math
11
- import os
12
- import random
13
- import typing as tp
14
-
15
- import pytest
16
- import torch
17
- from torch.utils.data import DataLoader
18
-
19
- from audiocraft.data.audio_dataset import (
20
- AudioDataset,
21
- AudioMeta,
22
- _get_audio_meta,
23
- load_audio_meta,
24
- save_audio_meta
25
- )
26
- from audiocraft.data.zip import PathInZip
27
-
28
- from ..common_utils import TempDirMixin, get_white_noise, save_wav
29
-
30
-
31
- class TestAudioMeta(TempDirMixin):
32
-
33
- def test_get_audio_meta(self):
34
- sample_rates = [8000, 16_000]
35
- channels = [1, 2]
36
- duration = 1.
37
- for sample_rate, ch in product(sample_rates, channels):
38
- n_frames = int(duration * sample_rate)
39
- wav = get_white_noise(ch, n_frames)
40
- path = self.get_temp_path('sample.wav')
41
- save_wav(path, wav, sample_rate)
42
- m = _get_audio_meta(path, minimal=True)
43
- assert m.path == path, 'path does not match'
44
- assert m.sample_rate == sample_rate, 'sample rate does not match'
45
- assert m.duration == duration, 'duration does not match'
46
- assert m.amplitude is None
47
- assert m.info_path is None
48
-
49
- def test_save_audio_meta(self):
50
- audio_meta = [
51
- AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')),
52
- AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json'))
53
- ]
54
- empty_audio_meta = []
55
- for idx, meta in enumerate([audio_meta, empty_audio_meta]):
56
- path = self.get_temp_path(f'data_{idx}_save.jsonl')
57
- save_audio_meta(path, meta)
58
- with open(path, 'r') as f:
59
- lines = f.readlines()
60
- read_meta = [AudioMeta.from_dict(json.loads(line)) for line in lines]
61
- assert len(read_meta) == len(meta)
62
- for m, read_m in zip(meta, read_meta):
63
- assert m == read_m
64
-
65
- def test_load_audio_meta(self):
66
- try:
67
- import dora
68
- except ImportError:
69
- dora = None # type: ignore
70
-
71
- audio_meta = [
72
- AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')),
73
- AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json'))
74
- ]
75
- empty_meta = []
76
- for idx, meta in enumerate([audio_meta, empty_meta]):
77
- path = self.get_temp_path(f'data_{idx}_load.jsonl')
78
- with open(path, 'w') as f:
79
- for m in meta:
80
- json_str = json.dumps(m.to_dict()) + '\n'
81
- f.write(json_str)
82
- read_meta = load_audio_meta(path)
83
- assert len(read_meta) == len(meta)
84
- for m, read_m in zip(meta, read_meta):
85
- if dora:
86
- m.path = dora.git_save.to_absolute_path(m.path)
87
- assert m == read_m, f'original={m}, read={read_m}'
88
-
89
-
90
- class TestAudioDataset(TempDirMixin):
91
-
92
- def _create_audio_files(self,
93
- root_name: str,
94
- num_examples: int,
95
- durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.),
96
- sample_rate: int = 16_000,
97
- channels: int = 1):
98
- root_dir = self.get_temp_dir(root_name)
99
- for i in range(num_examples):
100
- if isinstance(durations, float):
101
- duration = durations
102
- elif isinstance(durations, tuple) and len(durations) == 1:
103
- duration = durations[0]
104
- elif isinstance(durations, tuple) and len(durations) == 2:
105
- duration = random.uniform(durations[0], durations[1])
106
- else:
107
- assert False
108
- n_frames = int(duration * sample_rate)
109
- wav = get_white_noise(channels, n_frames)
110
- path = os.path.join(root_dir, f'example_{i}.wav')
111
- save_wav(path, wav, sample_rate)
112
- return root_dir
113
-
114
- def _create_audio_dataset(self,
115
- root_name: str,
116
- total_num_examples: int,
117
- durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.),
118
- sample_rate: int = 16_000,
119
- channels: int = 1,
120
- segment_duration: tp.Optional[float] = None,
121
- num_examples: int = 10,
122
- shuffle: bool = True,
123
- return_info: bool = False):
124
- root_dir = self._create_audio_files(root_name, total_num_examples, durations, sample_rate, channels)
125
- dataset = AudioDataset.from_path(root_dir,
126
- minimal_meta=True,
127
- segment_duration=segment_duration,
128
- num_samples=num_examples,
129
- sample_rate=sample_rate,
130
- channels=channels,
131
- shuffle=shuffle,
132
- return_info=return_info)
133
- return dataset
134
-
135
- def test_dataset_full(self):
136
- total_examples = 10
137
- min_duration, max_duration = 1., 4.
138
- sample_rate = 16_000
139
- channels = 1
140
- dataset = self._create_audio_dataset(
141
- 'dset', total_examples, durations=(min_duration, max_duration),
142
- sample_rate=sample_rate, channels=channels, segment_duration=None)
143
- assert len(dataset) == total_examples
144
- assert dataset.sample_rate == sample_rate
145
- assert dataset.channels == channels
146
- for idx in range(len(dataset)):
147
- sample = dataset[idx]
148
- assert sample.shape[0] == channels
149
- assert sample.shape[1] <= int(max_duration * sample_rate)
150
- assert sample.shape[1] >= int(min_duration * sample_rate)
151
-
152
- def test_dataset_segment(self):
153
- total_examples = 10
154
- num_samples = 20
155
- min_duration, max_duration = 1., 4.
156
- segment_duration = 1.
157
- sample_rate = 16_000
158
- channels = 1
159
- dataset = self._create_audio_dataset(
160
- 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate,
161
- channels=channels, segment_duration=segment_duration, num_examples=num_samples)
162
- assert len(dataset) == num_samples
163
- assert dataset.sample_rate == sample_rate
164
- assert dataset.channels == channels
165
- for idx in range(len(dataset)):
166
- sample = dataset[idx]
167
- assert sample.shape[0] == channels
168
- assert sample.shape[1] == int(segment_duration * sample_rate)
169
-
170
- def test_dataset_equal_audio_and_segment_durations(self):
171
- total_examples = 1
172
- num_samples = 2
173
- audio_duration = 1.
174
- segment_duration = 1.
175
- sample_rate = 16_000
176
- channels = 1
177
- dataset = self._create_audio_dataset(
178
- 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate,
179
- channels=channels, segment_duration=segment_duration, num_examples=num_samples)
180
- assert len(dataset) == num_samples
181
- assert dataset.sample_rate == sample_rate
182
- assert dataset.channels == channels
183
- for idx in range(len(dataset)):
184
- sample = dataset[idx]
185
- assert sample.shape[0] == channels
186
- assert sample.shape[1] == int(segment_duration * sample_rate)
187
- # the random seek_time adds variability on audio read
188
- sample_1 = dataset[0]
189
- sample_2 = dataset[1]
190
- assert not torch.allclose(sample_1, sample_2)
191
-
192
- def test_dataset_samples(self):
193
- total_examples = 1
194
- num_samples = 2
195
- audio_duration = 1.
196
- segment_duration = 1.
197
- sample_rate = 16_000
198
- channels = 1
199
-
200
- create_dataset = partial(
201
- self._create_audio_dataset,
202
- 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate,
203
- channels=channels, segment_duration=segment_duration, num_examples=num_samples,
204
- )
205
-
206
- dataset = create_dataset(shuffle=True)
207
- # when shuffle = True, we have different inputs for the same index across epoch
208
- sample_1 = dataset[0]
209
- sample_2 = dataset[0]
210
- assert not torch.allclose(sample_1, sample_2)
211
-
212
- dataset_noshuffle = create_dataset(shuffle=False)
213
- # when shuffle = False, we have same inputs for the same index across epoch
214
- sample_1 = dataset_noshuffle[0]
215
- sample_2 = dataset_noshuffle[0]
216
- assert torch.allclose(sample_1, sample_2)
217
-
218
- def test_dataset_return_info(self):
219
- total_examples = 10
220
- num_samples = 20
221
- min_duration, max_duration = 1., 4.
222
- segment_duration = 1.
223
- sample_rate = 16_000
224
- channels = 1
225
- dataset = self._create_audio_dataset(
226
- 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate,
227
- channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True)
228
- assert len(dataset) == num_samples
229
- assert dataset.sample_rate == sample_rate
230
- assert dataset.channels == channels
231
- for idx in range(len(dataset)):
232
- sample, segment_info = dataset[idx]
233
- assert sample.shape[0] == channels
234
- assert sample.shape[1] == int(segment_duration * sample_rate)
235
- assert segment_info.sample_rate == sample_rate
236
- assert segment_info.total_frames == int(segment_duration * sample_rate)
237
- assert segment_info.n_frames <= int(segment_duration * sample_rate)
238
- assert segment_info.seek_time >= 0
239
-
240
- def test_dataset_return_info_no_segment_duration(self):
241
- total_examples = 10
242
- num_samples = 20
243
- min_duration, max_duration = 1., 4.
244
- segment_duration = None
245
- sample_rate = 16_000
246
- channels = 1
247
- dataset = self._create_audio_dataset(
248
- 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate,
249
- channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True)
250
- assert len(dataset) == total_examples
251
- assert dataset.sample_rate == sample_rate
252
- assert dataset.channels == channels
253
- for idx in range(len(dataset)):
254
- sample, segment_info = dataset[idx]
255
- assert sample.shape[0] == channels
256
- assert sample.shape[1] == segment_info.total_frames
257
- assert segment_info.sample_rate == sample_rate
258
- assert segment_info.n_frames <= segment_info.total_frames
259
-
260
- def test_dataset_collate_fn(self):
261
- total_examples = 10
262
- num_samples = 20
263
- min_duration, max_duration = 1., 4.
264
- segment_duration = 1.
265
- sample_rate = 16_000
266
- channels = 1
267
- dataset = self._create_audio_dataset(
268
- 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate,
269
- channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=False)
270
- batch_size = 4
271
- dataloader = DataLoader(
272
- dataset,
273
- batch_size=batch_size,
274
- num_workers=0
275
- )
276
- for idx, batch in enumerate(dataloader):
277
- assert batch.shape[0] == batch_size
278
-
279
- @pytest.mark.parametrize("segment_duration", [1.0, None])
280
- def test_dataset_with_meta_collate_fn(self, segment_duration):
281
- total_examples = 10
282
- num_samples = 20
283
- min_duration, max_duration = 1., 4.
284
- # segment_duration is supplied by the parametrize decorator above
285
- sample_rate = 16_000
286
- channels = 1
287
- dataset = self._create_audio_dataset(
288
- 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate,
289
- channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True)
290
- batch_size = 4
291
- dataloader = DataLoader(
292
- dataset,
293
- batch_size=batch_size,
294
- collate_fn=dataset.collater,
295
- num_workers=0
296
- )
297
- for idx, batch in enumerate(dataloader):
298
- wav, infos = batch
299
- assert wav.shape[0] == batch_size
300
- assert len(infos) == batch_size
301
-
302
- @pytest.mark.parametrize("segment_duration,sample_on_weight,sample_on_duration,a_hist,b_hist,c_hist", [
303
- [1, True, True, 0.5, 0.5, 0.0],
304
- [1, False, True, 0.25, 0.5, 0.25],
305
- [1, True, False, 0.666, 0.333, 0.0],
306
- [1, False, False, 0.333, 0.333, 0.333],
307
- [None, False, False, 0.333, 0.333, 0.333]])
308
- def test_sample_with_weight(self, segment_duration, sample_on_weight, sample_on_duration, a_hist, b_hist, c_hist):
309
- random.seed(1234)
310
- rng = torch.Generator()
311
- rng.manual_seed(1234)
312
-
313
- def _get_histogram(dataset, repetitions=20_000):
314
- counts = {file_meta.path: 0. for file_meta in meta}
315
- for _ in range(repetitions):
316
- file_meta = dataset.sample_file(0, rng)
317
- counts[file_meta.path] += 1
318
- return {name: count / repetitions for name, count in counts.items()}
319
-
320
- meta = [
321
- AudioMeta(path='a', duration=5, sample_rate=1, weight=2),
322
- AudioMeta(path='b', duration=10, sample_rate=1, weight=None),
323
- AudioMeta(path='c', duration=5, sample_rate=1, weight=0),
324
- ]
325
- dataset = AudioDataset(
326
- meta, segment_duration=segment_duration, sample_on_weight=sample_on_weight,
327
- sample_on_duration=sample_on_duration)
328
- hist = _get_histogram(dataset)
329
- assert math.isclose(hist['a'], a_hist, abs_tol=0.01)
330
- assert math.isclose(hist['b'], b_hist, abs_tol=0.01)
331
- assert math.isclose(hist['c'], c_hist, abs_tol=0.01)
332
-
333
- def test_meta_duration_filter_all(self):
334
- meta = [
335
- AudioMeta(path='a', duration=5, sample_rate=1, weight=2),
336
- AudioMeta(path='b', duration=10, sample_rate=1, weight=None),
337
- AudioMeta(path='c', duration=5, sample_rate=1, weight=0),
338
- ]
339
- try:
340
- AudioDataset(meta, segment_duration=11, min_segment_ratio=1)
341
- assert False
342
- except AssertionError:
343
- assert True
344
-
345
- def test_meta_duration_filter_long(self):
346
- meta = [
347
- AudioMeta(path='a', duration=5, sample_rate=1, weight=2),
348
- AudioMeta(path='b', duration=10, sample_rate=1, weight=None),
349
- AudioMeta(path='c', duration=5, sample_rate=1, weight=0),
350
- ]
351
- dataset = AudioDataset(meta, segment_duration=None, min_segment_ratio=1, max_audio_duration=7)
352
- assert len(dataset) == 2
 
 
spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/ldm.py DELETED
@@ -1,715 +0,0 @@
1
- import os
2
-
3
- import torch
4
- import numpy as np
5
- from tqdm import tqdm
6
- from audioldm.utils import default, instantiate_from_config, save_wave
7
- from audioldm.latent_diffusion.ddpm import DDPM
8
- from audioldm.variational_autoencoder.distributions import DiagonalGaussianDistribution
9
- from audioldm.latent_diffusion.util import noise_like
10
- from audioldm.latent_diffusion.ddim import DDIMSampler
11
- from einops import rearrange
12
-
13
- def disabled_train(self, mode=True):
14
- """Overwrite model.train with this function to make sure train/eval mode
15
- does not change anymore."""
16
- return self
17
-
18
- class LatentDiffusion(DDPM):
19
- """main class"""
20
-
21
- def __init__(
22
- self,
23
- device="cuda",
24
- first_stage_config=None,
25
- cond_stage_config=None,
26
- num_timesteps_cond=None,
27
- cond_stage_key="image",
28
- cond_stage_trainable=False,
29
- concat_mode=True,
30
- cond_stage_forward=None,
31
- conditioning_key=None,
32
- scale_factor=1.0,
33
- scale_by_std=False,
34
- base_learning_rate=None,
35
- *args,
36
- **kwargs,
37
- ):
38
- self.device = device
39
- self.learning_rate = base_learning_rate
40
- self.num_timesteps_cond = default(num_timesteps_cond, 1)
41
- self.scale_by_std = scale_by_std
42
- assert self.num_timesteps_cond <= kwargs["timesteps"]
43
- # for backwards compatibility after implementation of DiffusionWrapper
44
- if conditioning_key is None:
45
- conditioning_key = "concat" if concat_mode else "crossattn"
46
- if cond_stage_config == "__is_unconditional__":
47
- conditioning_key = None
48
- ckpt_path = kwargs.pop("ckpt_path", None)
49
- ignore_keys = kwargs.pop("ignore_keys", [])
50
- super().__init__(conditioning_key=conditioning_key, *args, **kwargs)
51
- self.concat_mode = concat_mode
52
- self.cond_stage_trainable = cond_stage_trainable
53
- self.cond_stage_key = cond_stage_key
54
- self.cond_stage_key_orig = cond_stage_key
55
- try:
56
- self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1
57
- except:
58
- self.num_downs = 0
59
- if not scale_by_std:
60
- self.scale_factor = scale_factor
61
- else:
62
- self.register_buffer("scale_factor", torch.tensor(scale_factor))
63
- self.instantiate_first_stage(first_stage_config)
64
- self.instantiate_cond_stage(cond_stage_config)
65
- self.cond_stage_forward = cond_stage_forward
66
- self.clip_denoised = False
67
-
68
- def make_cond_schedule(
69
- self,
70
- ):
71
- self.cond_ids = torch.full(
72
- size=(self.num_timesteps,),
73
- fill_value=self.num_timesteps - 1,
74
- dtype=torch.long,
75
- )
76
- ids = torch.round(
77
- torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)
78
- ).long()
79
- self.cond_ids[: self.num_timesteps_cond] = ids
80
-
81
- def register_schedule(
82
- self,
83
- given_betas=None,
84
- beta_schedule="linear",
85
- timesteps=1000,
86
- linear_start=1e-4,
87
- linear_end=2e-2,
88
- cosine_s=8e-3,
89
- ):
90
- super().register_schedule(
91
- given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s
92
- )
93
-
94
- self.shorten_cond_schedule = self.num_timesteps_cond > 1
95
- if self.shorten_cond_schedule:
96
- self.make_cond_schedule()
97
-
98
- def instantiate_first_stage(self, config):
99
- model = instantiate_from_config(config)
100
- self.first_stage_model = model.eval()
101
- self.first_stage_model.train = disabled_train
102
- for param in self.first_stage_model.parameters():
103
- param.requires_grad = False
104
-
105
- def instantiate_cond_stage(self, config):
106
- if not self.cond_stage_trainable:
107
- if config == "__is_first_stage__":
108
- print("Using first stage also as cond stage.")
109
- self.cond_stage_model = self.first_stage_model
110
- elif config == "__is_unconditional__":
111
- print(f"Training {self.__class__.__name__} as an unconditional model.")
112
- self.cond_stage_model = None
113
- # self.be_unconditional = True
114
- else:
115
- model = instantiate_from_config(config)
116
- self.cond_stage_model = model.eval()
117
- self.cond_stage_model.train = disabled_train
118
- for param in self.cond_stage_model.parameters():
119
- param.requires_grad = False
120
- else:
121
- assert config != "__is_first_stage__"
122
- assert config != "__is_unconditional__"
123
- model = instantiate_from_config(config)
124
- self.cond_stage_model = model
125
- self.cond_stage_model = self.cond_stage_model.to(self.device)
126
-
127
- def get_first_stage_encoding(self, encoder_posterior):
128
- if isinstance(encoder_posterior, DiagonalGaussianDistribution):
129
- z = encoder_posterior.sample()
130
- elif isinstance(encoder_posterior, torch.Tensor):
131
- z = encoder_posterior
132
- else:
133
- raise NotImplementedError(
134
- f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented"
135
- )
136
- return self.scale_factor * z
137
-
138
- def get_learned_conditioning(self, c):
139
- if self.cond_stage_forward is None:
140
- if hasattr(self.cond_stage_model, "encode") and callable(
141
- self.cond_stage_model.encode
142
- ):
143
- c = self.cond_stage_model.encode(c)
144
- if isinstance(c, DiagonalGaussianDistribution):
145
- c = c.mode()
146
- else:
147
- if len(c) == 1:
148
- c = self.cond_stage_model([c[0], c[0]])
149
- c = c[0:1]
150
- else:
151
- c = self.cond_stage_model(c)
152
- else:
153
- assert hasattr(self.cond_stage_model, self.cond_stage_forward)
154
- c = getattr(self.cond_stage_model, self.cond_stage_forward)(c)
155
- return c
156
-
157
- @torch.no_grad()
158
- def get_input(
159
- self,
160
- batch,
161
- k,
162
- return_first_stage_encode=True,
163
- return_first_stage_outputs=False,
164
- force_c_encode=False,
165
- cond_key=None,
166
- return_original_cond=False,
167
- bs=None,
168
- ):
169
- x = super().get_input(batch, k)
170
-
171
- if bs is not None:
172
- x = x[:bs]
173
-
174
- x = x.to(self.device)
175
-
176
- if return_first_stage_encode:
177
- encoder_posterior = self.encode_first_stage(x)
178
- z = self.get_first_stage_encoding(encoder_posterior).detach()
179
- else:
180
- z = None
181
-
182
- if self.model.conditioning_key is not None:
183
- if cond_key is None:
184
- cond_key = self.cond_stage_key
185
- if cond_key != self.first_stage_key:
186
- if cond_key in ["caption", "coordinates_bbox"]:
187
- xc = batch[cond_key]
188
- elif cond_key == "class_label":
189
- xc = batch
190
- else:
191
- # [bs, 1, 527]
192
- xc = super().get_input(batch, cond_key)
193
- if type(xc) == torch.Tensor:
194
- xc = xc.to(self.device)
195
- else:
196
- xc = x
197
- if not self.cond_stage_trainable or force_c_encode:
198
- if isinstance(xc, dict) or isinstance(xc, list):
199
- c = self.get_learned_conditioning(xc)
200
- else:
201
- c = self.get_learned_conditioning(xc.to(self.device))
202
- else:
203
- c = xc
204
-
205
- if bs is not None:
206
- c = c[:bs]
207
-
208
- else:
209
- c = None
210
- xc = None
211
- if self.use_positional_encodings:
212
- pos_x, pos_y = self.compute_latent_shifts(batch)
213
- c = {"pos_x": pos_x, "pos_y": pos_y}
214
- out = [z, c]
215
- if return_first_stage_outputs:
216
- xrec = self.decode_first_stage(z)
217
- out.extend([x, xrec])
218
- if return_original_cond:
219
- out.append(xc)
220
- return out
221
-
222
- @torch.no_grad()
223
- def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False):
224
- if predict_cids:
225
- if z.dim() == 4:
226
- z = torch.argmax(z.exp(), dim=1).long()
227
- z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None)
228
- z = rearrange(z, "b h w c -> b c h w").contiguous()
229
-
230
- z = 1.0 / self.scale_factor * z
231
- return self.first_stage_model.decode(z)
232
-
233
- def mel_spectrogram_to_waveform(self, mel):
234
- # Mel: [bs, 1, t-steps, fbins]
235
- if len(mel.size()) == 4:
236
- mel = mel.squeeze(1)
237
- mel = mel.permute(0, 2, 1)
238
- waveform = self.first_stage_model.vocoder(mel)
239
- waveform = waveform.cpu().detach().numpy()
240
- return waveform
241
-
242
- @torch.no_grad()
243
- def encode_first_stage(self, x):
244
- return self.first_stage_model.encode(x)
245
-
246
- def apply_model(self, x_noisy, t, cond, return_ids=False):
247
-
248
- if isinstance(cond, dict):
249
- # hybrid case, cond is expected to be a dict
250
- pass
251
- else:
252
- if not isinstance(cond, list):
253
- cond = [cond]
254
- if self.model.conditioning_key == "concat":
255
- key = "c_concat"
256
- elif self.model.conditioning_key == "crossattn":
257
- key = "c_crossattn"
258
- else:
259
- key = "c_film"
260
-
261
- cond = {key: cond}
262
-
263
- x_recon = self.model(x_noisy, t, **cond)
264
-
265
- if isinstance(x_recon, tuple) and not return_ids:
266
- return x_recon[0]
267
- else:
268
- return x_recon
269
-
270
- def p_mean_variance(
271
- self,
272
- x,
273
- c,
274
- t,
275
- clip_denoised: bool,
276
- return_codebook_ids=False,
277
- quantize_denoised=False,
278
- return_x0=False,
279
- score_corrector=None,
280
- corrector_kwargs=None,
281
- ):
282
- t_in = t
283
- model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids)
284
-
285
- if score_corrector is not None:
286
- assert self.parameterization == "eps"
287
- model_out = score_corrector.modify_score(
288
- self, model_out, x, t, c, **corrector_kwargs
289
- )
290
-
291
- if return_codebook_ids:
292
- model_out, logits = model_out
293
-
294
- if self.parameterization == "eps":
295
- x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)
296
- elif self.parameterization == "x0":
297
- x_recon = model_out
298
- else:
299
- raise NotImplementedError()
300
-
301
- if clip_denoised:
302
- x_recon.clamp_(-1.0, 1.0)
303
- if quantize_denoised:
304
- x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon)
305
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(
306
- x_start=x_recon, x_t=x, t=t
307
- )
308
- if return_codebook_ids:
309
- return model_mean, posterior_variance, posterior_log_variance, logits
310
- elif return_x0:
311
- return model_mean, posterior_variance, posterior_log_variance, x_recon
312
- else:
313
- return model_mean, posterior_variance, posterior_log_variance
314
-
315
- @torch.no_grad()
316
- def p_sample(
317
- self,
318
- x,
319
- c,
320
- t,
321
- clip_denoised=False,
322
- repeat_noise=False,
323
- return_codebook_ids=False,
324
- quantize_denoised=False,
325
- return_x0=False,
326
- temperature=1.0,
327
- noise_dropout=0.0,
328
- score_corrector=None,
329
- corrector_kwargs=None,
330
- ):
331
- b, *_, device = *x.shape, x.device
332
- outputs = self.p_mean_variance(
333
- x=x,
334
- c=c,
335
- t=t,
336
- clip_denoised=clip_denoised,
337
- return_codebook_ids=return_codebook_ids,
338
- quantize_denoised=quantize_denoised,
339
- return_x0=return_x0,
340
- score_corrector=score_corrector,
341
- corrector_kwargs=corrector_kwargs,
342
- )
343
- if return_codebook_ids:
344
- raise DeprecationWarning("Support dropped.")
345
- model_mean, _, model_log_variance, logits = outputs
346
- elif return_x0:
347
- model_mean, _, model_log_variance, x0 = outputs
348
- else:
349
- model_mean, _, model_log_variance = outputs
350
-
351
- noise = noise_like(x.shape, device, repeat_noise) * temperature
352
- if noise_dropout > 0.0:
353
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
354
- # no noise when t == 0
355
- nonzero_mask = (
356
- (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))).contiguous()
357
- )
358
-
359
- if return_codebook_ids:
360
- return model_mean + nonzero_mask * (
361
- 0.5 * model_log_variance
362
- ).exp() * noise, logits.argmax(dim=1)
363
- if return_x0:
364
- return (
365
- model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise,
366
- x0,
367
- )
368
- else:
369
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
370
-
371
- @torch.no_grad()
372
- def progressive_denoising(
373
- self,
374
- cond,
375
- shape,
376
- verbose=True,
377
- callback=None,
378
- quantize_denoised=False,
379
- img_callback=None,
380
- mask=None,
381
- x0=None,
382
- temperature=1.0,
383
- noise_dropout=0.0,
384
- score_corrector=None,
385
- corrector_kwargs=None,
386
- batch_size=None,
387
- x_T=None,
388
- start_T=None,
389
- log_every_t=None,
390
- ):
391
- if not log_every_t:
392
- log_every_t = self.log_every_t
393
- timesteps = self.num_timesteps
394
- if batch_size is not None:
395
- b = batch_size if batch_size is not None else shape[0]
396
- shape = [batch_size] + list(shape)
397
- else:
398
- b = batch_size = shape[0]
399
- if x_T is None:
400
- img = torch.randn(shape, device=self.device)
401
- else:
402
- img = x_T
403
- intermediates = []
404
- if cond is not None:
405
- if isinstance(cond, dict):
406
- cond = {
407
- key: cond[key][:batch_size]
408
- if not isinstance(cond[key], list)
409
- else list(map(lambda x: x[:batch_size], cond[key]))
410
- for key in cond
411
- }
412
- else:
413
- cond = (
414
- [c[:batch_size] for c in cond]
415
- if isinstance(cond, list)
416
- else cond[:batch_size]
417
- )
418
-
419
- if start_T is not None:
420
- timesteps = min(timesteps, start_T)
421
- iterator = (
422
- tqdm(
423
- reversed(range(0, timesteps)),
424
- desc="Progressive Generation",
425
- total=timesteps,
426
- )
427
- if verbose
428
- else reversed(range(0, timesteps))
429
- )
430
- if type(temperature) == float:
431
- temperature = [temperature] * timesteps
432
-
433
- for i in iterator:
434
- ts = torch.full((b,), i, device=self.device, dtype=torch.long)
435
- if self.shorten_cond_schedule:
436
- assert self.model.conditioning_key != "hybrid"
437
- tc = self.cond_ids[ts].to(cond.device)
438
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
439
-
440
- img, x0_partial = self.p_sample(
441
- img,
442
- cond,
443
- ts,
444
- clip_denoised=self.clip_denoised,
445
- quantize_denoised=quantize_denoised,
446
- return_x0=True,
447
- temperature=temperature[i],
448
- noise_dropout=noise_dropout,
449
- score_corrector=score_corrector,
450
- corrector_kwargs=corrector_kwargs,
451
- )
452
- if mask is not None:
453
- assert x0 is not None
454
- img_orig = self.q_sample(x0, ts)
455
- img = img_orig * mask + (1.0 - mask) * img
456
-
457
- if i % log_every_t == 0 or i == timesteps - 1:
458
- intermediates.append(x0_partial)
459
- if callback:
460
- callback(i)
461
- if img_callback:
462
- img_callback(img, i)
463
- return img, intermediates
464
-
465
- @torch.no_grad()
466
- def p_sample_loop(
467
- self,
468
- cond,
469
- shape,
470
- return_intermediates=False,
471
- x_T=None,
472
- verbose=True,
473
- callback=None,
474
- timesteps=None,
475
- quantize_denoised=False,
476
- mask=None,
477
- x0=None,
478
- img_callback=None,
479
- start_T=None,
480
- log_every_t=None,
481
- ):
482
-
483
- if not log_every_t:
484
- log_every_t = self.log_every_t
485
- device = self.betas.device
486
- b = shape[0]
487
- if x_T is None:
488
- img = torch.randn(shape, device=device)
489
- else:
490
- img = x_T
491
-
492
- intermediates = [img]
493
- if timesteps is None:
494
- timesteps = self.num_timesteps
495
-
496
- if start_T is not None:
497
- timesteps = min(timesteps, start_T)
498
- iterator = (
499
- tqdm(reversed(range(0, timesteps)), desc="Sampling t", total=timesteps)
500
- if verbose
501
- else reversed(range(0, timesteps))
502
- )
503
-
504
- if mask is not None:
505
- assert x0 is not None
506
- assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match
507
-
508
- for i in iterator:
509
- ts = torch.full((b,), i, device=device, dtype=torch.long)
510
- if self.shorten_cond_schedule:
511
- assert self.model.conditioning_key != "hybrid"
512
- tc = self.cond_ids[ts].to(cond.device)
513
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
514
-
515
- img = self.p_sample(
516
- img,
517
- cond,
518
- ts,
519
- clip_denoised=self.clip_denoised,
520
- quantize_denoised=quantize_denoised,
521
- )
522
- if mask is not None:
523
- img_orig = self.q_sample(x0, ts)
524
- img = img_orig * mask + (1.0 - mask) * img
525
-
526
- if i % log_every_t == 0 or i == timesteps - 1:
527
- intermediates.append(img)
528
- if callback:
529
- callback(i)
530
- if img_callback:
531
- img_callback(img, i)
532
-
533
- if return_intermediates:
534
- return img, intermediates
535
- return img
536
-
537
- @torch.no_grad()
538
- def sample(
539
- self,
540
- cond,
541
- batch_size=16,
542
- return_intermediates=False,
543
- x_T=None,
544
- verbose=True,
545
- timesteps=None,
546
- quantize_denoised=False,
547
- mask=None,
548
- x0=None,
549
- shape=None,
550
- **kwargs,
551
- ):
552
- if shape is None:
553
- shape = (batch_size, self.channels, self.latent_t_size, self.latent_f_size)
554
- if cond is not None:
555
- if isinstance(cond, dict):
556
- cond = {
557
- key: cond[key][:batch_size]
558
- if not isinstance(cond[key], list)
559
- else list(map(lambda x: x[:batch_size], cond[key]))
560
- for key in cond
561
- }
562
- else:
563
- cond = (
564
- [c[:batch_size] for c in cond]
565
- if isinstance(cond, list)
566
- else cond[:batch_size]
567
- )
568
- return self.p_sample_loop(
569
- cond,
570
- shape,
571
- return_intermediates=return_intermediates,
572
- x_T=x_T,
573
- verbose=verbose,
574
- timesteps=timesteps,
575
- quantize_denoised=quantize_denoised,
576
- mask=mask,
577
- x0=x0,
578
- **kwargs,
579
- )
580
-
581
- @torch.no_grad()
582
- def sample_log(
583
- self,
584
- cond,
585
- batch_size,
586
- ddim,
587
- ddim_steps,
588
- unconditional_guidance_scale=1.0,
589
- unconditional_conditioning=None,
590
- use_plms=False,
591
- mask=None,
592
- **kwargs,
593
- ):
594
-
595
- if mask is not None:
596
- shape = (self.channels, mask.size()[-2], mask.size()[-1])
597
- else:
598
- shape = (self.channels, self.latent_t_size, self.latent_f_size)
599
-
600
- intermediate = None
601
- if ddim and not use_plms:
602
- # print("Use ddim sampler")
603
-
604
- ddim_sampler = DDIMSampler(self)
605
- samples, intermediates = ddim_sampler.sample(
606
- ddim_steps,
607
- batch_size,
608
- shape,
609
- cond,
610
- verbose=False,
611
- unconditional_guidance_scale=unconditional_guidance_scale,
612
- unconditional_conditioning=unconditional_conditioning,
613
- mask=mask,
614
- **kwargs,
615
- )
616
-
617
- else:
618
- # print("Use DDPM sampler")
619
- samples, intermediates = self.sample(
620
- cond=cond,
621
- batch_size=batch_size,
622
- return_intermediates=True,
623
- unconditional_guidance_scale=unconditional_guidance_scale,
624
- mask=mask,
625
- unconditional_conditioning=unconditional_conditioning,
626
- **kwargs,
627
- )
628
-
629
- return samples, intermediates
630
-
631
-
632
- @torch.no_grad()
633
- def generate_sample(
634
- self,
635
- batchs,
636
- ddim_steps=200,
637
- ddim_eta=1.0,
638
- x_T=None,
639
- n_candidate_gen_per_text=1,
640
- unconditional_guidance_scale=1.0,
641
- unconditional_conditioning=None,
642
- name="waveform",
643
- use_plms=False,
644
- save=False,
645
- **kwargs,
646
- ):
647
- # Generate n_candidate_gen_per_text times and select the best
648
- # Batch: audio, text, fnames
649
- assert x_T is None
650
- try:
651
- batchs = iter(batchs)
652
- except TypeError:
653
- raise ValueError("The first input argument should be an iterable object")
654
-
655
- if use_plms:
656
- assert ddim_steps is not None
657
- use_ddim = ddim_steps is not None
658
- # waveform_save_path = os.path.join(self.get_log_dir(), name)
659
- # os.makedirs(waveform_save_path, exist_ok=True)
660
- # print("Waveform save path: ", waveform_save_path)
661
-
662
- with self.ema_scope("Generate"):
663
- for batch in batchs:
664
- z, c = self.get_input(
665
- batch,
666
- self.first_stage_key,
667
- return_first_stage_outputs=False,
668
- force_c_encode=True,
669
- return_original_cond=False,
670
- bs=None,
671
- )
672
- text = super().get_input(batch, "text")
673
-
674
- # Generate multiple samples
675
- batch_size = z.shape[0] * n_candidate_gen_per_text
676
- c = torch.cat([c] * n_candidate_gen_per_text, dim=0)
677
- text = text * n_candidate_gen_per_text
678
-
679
- if unconditional_guidance_scale != 1.0:
680
- unconditional_conditioning = (
681
- self.cond_stage_model.get_unconditional_condition(batch_size)
682
- )
683
-
684
- samples, _ = self.sample_log(
685
- cond=c,
686
- batch_size=batch_size,
687
- x_T=x_T,
688
- ddim=use_ddim,
689
- ddim_steps=ddim_steps,
690
- eta=ddim_eta,
691
- unconditional_guidance_scale=unconditional_guidance_scale,
692
- unconditional_conditioning=unconditional_conditioning,
693
- use_plms=use_plms,
694
- )
695
-
696
- mel = self.decode_first_stage(samples)
697
-
698
- waveform = self.mel_spectrogram_to_waveform(mel)
699
-
700
- if(waveform.shape[0] > 1):
701
- similarity = self.cond_stage_model.cos_similarity(
702
- torch.FloatTensor(waveform).squeeze(1), text
703
- )
704
-
705
- best_index = []
706
- for i in range(z.shape[0]):
707
- candidates = similarity[i :: z.shape[0]]
708
- max_index = torch.argmax(candidates).item()
709
- best_index.append(i + max_index * z.shape[0])
710
-
711
- waveform = waveform[best_index]
712
- # print("Similarity between generated audio and text", similarity)
713
- # print("Choose the following indexes:", best_index)
714
-
715
- return waveform
 
 
spaces/AIWaves/SOP_Generation-single/Memory/base_Memory.py DELETED
@@ -1,32 +0,0 @@
1
- from Prompt import *
2
- class Memory:
3
- def __init__(self,role,name,content) -> None:
4
- self.send_role = role
5
- self.send_name = name
6
- self.content = content
7
-
8
- def get_gpt_message(self,role):
9
- return {"role":role,"content":self.content}
10
-
11
- @classmethod
12
- def get_chat_history(self,messages,agent_name =None):
13
- """
14
- Concatenate a list of Memory messages into a single chat-history string
15
- input :
16
- messages(list) : list of memory(Memory)
17
- Return :
18
- chat_history(str) : the concatenated chat history string
19
- """
20
- chat_history = ""
21
- for message in messages:
22
- name,role,content = message.send_name,message.send_role,message.content
23
- if agent_name and agent_name==name:
24
- name = "you"
25
- chat_history += eval(Single_message)
26
- chat_history = eval(Chat_total_message)
27
- return chat_history
28
-
29
- def get_query(self):
30
- "Return : query(str):last sentence"
31
- name,role,content = self.send_name,self.send_role,self.content
32
- return eval(Single_message)
 
 
spaces/AchyuthGamer/OpenGPT/client/css/label.css DELETED
@@ -1,16 +0,0 @@
1
- label {
2
- cursor: pointer;
3
- text-indent: -9999px;
4
- width: 50px;
5
- height: 30px;
6
- backdrop-filter: blur(20px);
7
- -webkit-backdrop-filter: blur(20px);
8
- background-color: var(--blur-bg);
9
- border-radius: var(--border-radius-1);
10
- border: 1px solid var(--blur-border);
11
- display: block;
12
- border-radius: 100px;
13
- position: relative;
14
- overflow: hidden;
15
- transition: 0.33s;
16
- }
 
 
spaces/AchyuthGamer/text-to-speech-client/README.md DELETED
@@ -1,10 +0,0 @@
1
- ---
2
- title: Text To Speech Client
3
- emoji: 👀
4
- colorFrom: red
5
- colorTo: red
6
- sdk: static
7
- pinned: false
8
- ---
9
-
10
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/bsrgan_light.py DELETED
@@ -1,651 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
- import numpy as np
3
- import cv2
4
- import torch
5
-
6
- from functools import partial
7
- import random
8
- from scipy import ndimage
9
- import scipy
10
- import scipy.stats as ss
11
- from scipy.interpolate import interp2d
12
- from scipy.linalg import orth
13
- import albumentations
14
-
15
- import ldm.modules.image_degradation.utils_image as util
16
-
17
- """
18
- # --------------------------------------------
19
- # Super-Resolution
20
- # --------------------------------------------
21
- #
22
- # Kai Zhang ([email protected])
23
- # https://github.com/cszn
24
- # From 2019/03--2021/08
25
- # --------------------------------------------
26
- """
27
-
28
- def modcrop_np(img, sf):
29
- '''
30
- Args:
31
- img: numpy image, WxH or WxHxC
32
- sf: scale factor
33
- Return:
34
- cropped image
35
- '''
36
- w, h = img.shape[:2]
37
- im = np.copy(img)
38
- return im[:w - w % sf, :h - h % sf, ...]
39
-
40
-
41
- """
42
- # --------------------------------------------
43
- # anisotropic Gaussian kernels
44
- # --------------------------------------------
45
- """
46
-
47
-
48
- def analytic_kernel(k):
49
- """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)"""
50
- k_size = k.shape[0]
51
- # Calculate the big kernels size
52
- big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2))
53
- # Loop over the small kernel to fill the big one
54
- for r in range(k_size):
55
- for c in range(k_size):
56
- big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k
57
- # Crop the edges of the big kernel to ignore very small values and increase run time of SR
58
- crop = k_size // 2
59
- cropped_big_k = big_k[crop:-crop, crop:-crop]
60
- # Normalize to 1
61
- return cropped_big_k / cropped_big_k.sum()
62
-
63
-
64
- def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6):
65
- """ generate an anisotropic Gaussian kernel
66
- Args:
67
- ksize : e.g., 15, kernel size
68
- theta : [0, pi], rotation angle range
69
- l1 : [0.1,50], scaling of eigenvalues
70
- l2 : [0.1,l1], scaling of eigenvalues
71
- If l1 = l2, will get an isotropic Gaussian kernel.
72
- Returns:
73
- k : kernel
74
- """
75
-
76
- v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.]))
77
- V = np.array([[v[0], v[1]], [v[1], -v[0]]])
78
- D = np.array([[l1, 0], [0, l2]])
79
- Sigma = np.dot(np.dot(V, D), np.linalg.inv(V))
80
- k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize)
81
-
82
- return k
83
-
84
-
85
- def gm_blur_kernel(mean, cov, size=15):
86
- center = size / 2.0 + 0.5
87
- k = np.zeros([size, size])
88
- for y in range(size):
89
- for x in range(size):
90
- cy = y - center + 1
91
- cx = x - center + 1
92
- k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov)
93
-
94
- k = k / np.sum(k)
95
- return k
96
-
97
-
98
- def shift_pixel(x, sf, upper_left=True):
99
- """shift pixel for super-resolution with different scale factors
100
- Args:
101
- x: WxHxC or WxH
102
- sf: scale factor
103
- upper_left: shift direction
104
- """
105
- h, w = x.shape[:2]
106
- shift = (sf - 1) * 0.5
107
- xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0)
108
- if upper_left:
109
- x1 = xv + shift
110
- y1 = yv + shift
111
- else:
112
- x1 = xv - shift
113
- y1 = yv - shift
114
-
115
- x1 = np.clip(x1, 0, w - 1)
116
- y1 = np.clip(y1, 0, h - 1)
117
-
118
- if x.ndim == 2:
119
- x = interp2d(xv, yv, x)(x1, y1)
120
- if x.ndim == 3:
121
- for i in range(x.shape[-1]):
122
- x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1)
123
-
124
- return x
125
-
126
-
127
- def blur(x, k):
128
- '''
129
- x: image, NxcxHxW
130
- k: kernel, Nx1xhxw
131
- '''
132
- n, c = x.shape[:2]
133
- p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2
134
- x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate')
135
- k = k.repeat(1, c, 1, 1)
136
- k = k.view(-1, 1, k.shape[2], k.shape[3])
137
- x = x.view(1, -1, x.shape[2], x.shape[3])
138
- x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c)
139
- x = x.view(n, c, x.shape[2], x.shape[3])
140
-
141
- return x
142
-
143
-
144
- def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0):
145
- """"
146
- # modified version of https://github.com/assafshocher/BlindSR_dataset_generator
147
- # Kai Zhang
148
- # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var
149
- # max_var = 2.5 * sf
150
- """
151
- # Set random eigen-vals (lambdas) and angle (theta) for COV matrix
152
- lambda_1 = min_var + np.random.rand() * (max_var - min_var)
153
- lambda_2 = min_var + np.random.rand() * (max_var - min_var)
154
- theta = np.random.rand() * np.pi # random theta
155
- noise = -noise_level + np.random.rand(*k_size) * noise_level * 2
156
-
157
- # Set COV matrix using Lambdas and Theta
158
- LAMBDA = np.diag([lambda_1, lambda_2])
159
- Q = np.array([[np.cos(theta), -np.sin(theta)],
160
- [np.sin(theta), np.cos(theta)]])
161
- SIGMA = Q @ LAMBDA @ Q.T
162
- INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :]
163
-
164
- # Set expectation position (shifting kernel for aligned image)
165
- MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2)
166
- MU = MU[None, None, :, None]
167
-
168
- # Create meshgrid for Gaussian
169
- [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1]))
170
- Z = np.stack([X, Y], 2)[:, :, :, None]
171
-
172
- # Calculate Gaussian for every pixel of the kernel
173
- ZZ = Z - MU
174
- ZZ_t = ZZ.transpose(0, 1, 3, 2)
175
- raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise)
176
-
177
- # shift the kernel so it will be centered
178
- # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor)
179
-
180
- # Normalize the kernel and return
181
- # kernel = raw_kernel_centered / np.sum(raw_kernel_centered)
182
- kernel = raw_kernel / np.sum(raw_kernel)
183
- return kernel
184
-
185
-
186
- def fspecial_gaussian(hsize, sigma):
187
- hsize = [hsize, hsize]
188
- siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0]
189
- std = sigma
190
- [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1))
191
- arg = -(x * x + y * y) / (2 * std * std)
192
- h = np.exp(arg)
193
- h[h < np.finfo(float).eps * h.max()] = 0
194
- sumh = h.sum()
195
- if sumh != 0:
196
- h = h / sumh
197
- return h
198
-
199
-
200
- def fspecial_laplacian(alpha):
201
- alpha = max([0, min([alpha, 1])])
202
- h1 = alpha / (alpha + 1)
203
- h2 = (1 - alpha) / (alpha + 1)
204
- h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]]
205
- h = np.array(h)
206
- return h
207
-
208
-
209
- def fspecial(filter_type, *args, **kwargs):
210
- '''
211
- python code from:
212
- https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py
213
- '''
214
- if filter_type == 'gaussian':
215
- return fspecial_gaussian(*args, **kwargs)
216
- if filter_type == 'laplacian':
217
- return fspecial_laplacian(*args, **kwargs)
218
-
219
-
220
- """
221
- # --------------------------------------------
222
- # degradation models
223
- # --------------------------------------------
224
- """
225
-
226
-
227
- def bicubic_degradation(x, sf=3):
228
- '''
229
- Args:
230
- x: HxWxC image, [0, 1]
231
- sf: down-scale factor
232
- Return:
233
- bicubicly downsampled LR image
234
- '''
235
- x = util.imresize_np(x, scale=1 / sf)
236
- return x
237
-
238
-
239
- def srmd_degradation(x, k, sf=3):
240
- ''' blur + bicubic downsampling
241
- Args:
242
- x: HxWxC image, [0, 1]
243
- k: hxw, double
244
- sf: down-scale factor
245
- Return:
246
- downsampled LR image
247
- Reference:
248
- @inproceedings{zhang2018learning,
249
- title={Learning a single convolutional super-resolution network for multiple degradations},
250
- author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},
251
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
252
- pages={3262--3271},
253
- year={2018}
254
- }
255
- '''
256
- x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror'
257
- x = bicubic_degradation(x, sf=sf)
258
- return x
259
-
260
-
261
- def dpsr_degradation(x, k, sf=3):
262
- ''' bicubic downsampling + blur
263
- Args:
264
- x: HxWxC image, [0, 1]
265
- k: hxw, double
266
- sf: down-scale factor
267
- Return:
268
- downsampled LR image
269
- Reference:
270
- @inproceedings{zhang2019deep,
271
- title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels},
272
- author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},
273
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
274
- pages={1671--1681},
275
- year={2019}
276
- }
277
- '''
278
- x = bicubic_degradation(x, sf=sf)
279
- x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
280
- return x
281
-
282
-
283
- def classical_degradation(x, k, sf=3):
284
- ''' blur + downsampling
285
- Args:
286
- x: HxWxC image, [0, 1]/[0, 255]
287
- k: hxw, double
288
- sf: down-scale factor
289
- Return:
290
- downsampled LR image
291
- '''
292
- x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
293
- # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2))
294
- st = 0
295
- return x[st::sf, st::sf, ...]
296
-
297
-
298
- def add_sharpening(img, weight=0.5, radius=50, threshold=10):
299
- """USM sharpening. borrowed from real-ESRGAN
300
- Input image: I; Blurry image: B.
301
- 1. K = I + weight * (I - B)
302
- 2. Mask = 1 if abs(I - B) > threshold, else: 0
303
- 3. Blur mask:
304
- 4. Out = Mask * K + (1 - Mask) * I
305
- Args:
306
- img (Numpy array): Input image, HWC, BGR; float32, [0, 1].
307
- weight (float): Sharp weight. Default: 0.5.
308
- radius (float): Kernel size of Gaussian blur. Default: 50.
309
- threshold (int): Residual threshold for the sharpening mask. Default: 10.
310
- """
311
- if radius % 2 == 0:
312
- radius += 1
313
- blur = cv2.GaussianBlur(img, (radius, radius), 0)
314
- residual = img - blur
315
- mask = np.abs(residual) * 255 > threshold
316
- mask = mask.astype('float32')
317
- soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0)
318
-
319
- K = img + weight * residual
320
- K = np.clip(K, 0, 1)
321
- return soft_mask * K + (1 - soft_mask) * img
322
-
323
-
324
- def add_blur(img, sf=4):
325
- wd2 = 4.0 + sf
326
- wd = 2.0 + 0.2 * sf
327
-
328
- wd2 = wd2/4
329
- wd = wd/4
330
-
331
- if random.random() < 0.5:
332
- l1 = wd2 * random.random()
333
- l2 = wd2 * random.random()
334
- k = anisotropic_Gaussian(ksize=random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2)
335
- else:
336
- k = fspecial('gaussian', random.randint(2, 4) + 3, wd * random.random())
337
- img = ndimage.convolve(img, np.expand_dims(k, axis=2), mode='mirror')
338
-
339
- return img
340
-
341
-
342
- def add_resize(img, sf=4):
343
- rnum = np.random.rand()
344
- if rnum > 0.8: # up
345
- sf1 = random.uniform(1, 2)
346
- elif rnum < 0.7: # down
347
- sf1 = random.uniform(0.5 / sf, 1)
348
- else:
349
- sf1 = 1.0
350
- img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3]))
351
- img = np.clip(img, 0.0, 1.0)
352
-
353
- return img
354
-
355
-
356
- # def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
357
- # noise_level = random.randint(noise_level1, noise_level2)
358
- # rnum = np.random.rand()
359
- # if rnum > 0.6: # add color Gaussian noise
360
- # img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
361
- # elif rnum < 0.4: # add grayscale Gaussian noise
362
- # img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
363
- # else: # add noise
364
- # L = noise_level2 / 255.
365
- # D = np.diag(np.random.rand(3))
366
- # U = orth(np.random.rand(3, 3))
367
- # conv = np.dot(np.dot(np.transpose(U), D), U)
368
- # img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
369
- # img = np.clip(img, 0.0, 1.0)
370
- # return img
371
-
372
- def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
373
- noise_level = random.randint(noise_level1, noise_level2)
374
- rnum = np.random.rand()
375
- if rnum > 0.6: # add color Gaussian noise
376
- img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
377
- elif rnum < 0.4: # add grayscale Gaussian noise
378
- img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
379
- else: # add noise
380
- L = noise_level2 / 255.
381
- D = np.diag(np.random.rand(3))
382
- U = orth(np.random.rand(3, 3))
383
- conv = np.dot(np.dot(np.transpose(U), D), U)
384
- img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
385
- img = np.clip(img, 0.0, 1.0)
386
- return img
387
-
388
-
389
- def add_speckle_noise(img, noise_level1=2, noise_level2=25):
390
- noise_level = random.randint(noise_level1, noise_level2)
391
- img = np.clip(img, 0.0, 1.0)
392
- rnum = random.random()
393
- if rnum > 0.6:
394
- img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
395
- elif rnum < 0.4:
396
- img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
397
- else:
398
- L = noise_level2 / 255.
399
- D = np.diag(np.random.rand(3))
400
- U = orth(np.random.rand(3, 3))
401
- conv = np.dot(np.dot(np.transpose(U), D), U)
402
- img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
403
- img = np.clip(img, 0.0, 1.0)
404
- return img
405
-
406
-
407
- def add_Poisson_noise(img):
408
- img = np.clip((img * 255.0).round(), 0, 255) / 255.
409
- vals = 10 ** (2 * random.random() + 2.0) # [2, 4]
410
- if random.random() < 0.5:
411
- img = np.random.poisson(img * vals).astype(np.float32) / vals
412
- else:
413
- img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114])
414
- img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255.
415
- noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray
416
- img += noise_gray[:, :, np.newaxis]
417
- img = np.clip(img, 0.0, 1.0)
418
- return img
419
-
420
-
421
- def add_JPEG_noise(img):
422
- quality_factor = random.randint(80, 95)
423
- img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR)
424
- result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor])
425
- img = cv2.imdecode(encimg, 1)
426
- img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB)
427
- return img
428
-
429
-
430
- def random_crop(lq, hq, sf=4, lq_patchsize=64):
431
- h, w = lq.shape[:2]
432
- rnd_h = random.randint(0, h - lq_patchsize)
433
- rnd_w = random.randint(0, w - lq_patchsize)
434
- lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :]
435
-
436
- rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf)
437
- hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :]
438
- return lq, hq
439
-
440
-
441
- def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None):
442
- """
443
- This is the degradation model of BSRGAN from the paper
444
- "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
445
- ----------
446
- img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf)
447
- sf: scale factor
448
- isp_model: camera ISP model
449
- Returns
450
- -------
451
- img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]
452
- hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]
453
- """
454
- isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25
455
- sf_ori = sf
456
-
457
- h1, w1 = img.shape[:2]
458
- img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
459
- h, w = img.shape[:2]
460
-
461
- if h < lq_patchsize * sf or w < lq_patchsize * sf:
462
- raise ValueError(f'img size ({h1}X{w1}) is too small!')
463
-
464
- hq = img.copy()
465
-
466
- if sf == 4 and random.random() < scale2_prob: # downsample1
467
- if np.random.rand() < 0.5:
468
- img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])),
469
- interpolation=random.choice([1, 2, 3]))
470
- else:
471
- img = util.imresize_np(img, 1 / 2, True)
472
- img = np.clip(img, 0.0, 1.0)
473
- sf = 2
474
-
475
- shuffle_order = random.sample(range(7), 7)
476
- idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)
477
- if idx1 > idx2: # keep downsample3 last
478
- shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]
479
-
480
- for i in shuffle_order:
481
-
482
- if i == 0:
483
- img = add_blur(img, sf=sf)
484
-
485
- elif i == 1:
486
- img = add_blur(img, sf=sf)
487
-
488
- elif i == 2:
489
- a, b = img.shape[1], img.shape[0]
490
- # downsample2
491
- if random.random() < 0.75:
492
- sf1 = random.uniform(1, 2 * sf)
493
- img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])),
494
- interpolation=random.choice([1, 2, 3]))
495
- else:
496
- k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))
497
- k_shifted = shift_pixel(k, sf)
498
- k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel
499
- img = ndimage.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror')
500
- img = img[0::sf, 0::sf, ...] # nearest downsampling
501
- img = np.clip(img, 0.0, 1.0)
502
-
503
- elif i == 3:
504
- # downsample3
505
- img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))
506
- img = np.clip(img, 0.0, 1.0)
507
-
508
- elif i == 4:
509
- # add Gaussian noise
510
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=8)
511
-
512
- elif i == 5:
513
- # add JPEG noise
514
- if random.random() < jpeg_prob:
515
- img = add_JPEG_noise(img)
516
-
517
- elif i == 6:
518
- # add processed camera sensor noise
519
- if random.random() < isp_prob and isp_model is not None:
520
- with torch.no_grad():
521
- img, hq = isp_model.forward(img.copy(), hq)
522
-
523
- # add final JPEG compression noise
524
- img = add_JPEG_noise(img)
525
-
526
- # random crop
527
- img, hq = random_crop(img, hq, sf_ori, lq_patchsize)
528
-
529
- return img, hq
530
-
531
-
532
- # todo no isp_model?
533
- def degradation_bsrgan_variant(image, sf=4, isp_model=None, up=False):
534
- """
535
- This is the degradation model of BSRGAN from the paper
536
- "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
537
- ----------
538
- sf: scale factor
539
- isp_model: camera ISP model
540
- Returns
541
- -------
542
- example: a dict with key "image" holding the degraded low-quality image as uint8,
543
- resized back to the original resolution when `up=True`
544
- """
545
- image = util.uint2single(image)
546
- isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25
547
- sf_ori = sf
548
-
549
- h1, w1 = image.shape[:2]
550
- image = image.copy()[:h1 - h1 % sf, :w1 - w1 % sf, ...] # mod crop (axis 0 is height, axis 1 is width)
551
- h, w = image.shape[:2]
552
-
553
- hq = image.copy()
554
-
555
- if sf == 4 and random.random() < scale2_prob: # downsample1
556
- if np.random.rand() < 0.5:
557
- image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])),
558
- interpolation=random.choice([1, 2, 3]))
559
- else:
560
- image = util.imresize_np(image, 1 / 2, True)
561
- image = np.clip(image, 0.0, 1.0)
562
- sf = 2
563
-
564
- shuffle_order = random.sample(range(7), 7)
565
- idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)
566
- if idx1 > idx2: # keep downsample3 last
567
- shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]
568
-
569
- for i in shuffle_order:
570
-
571
- if i == 0:
572
- image = add_blur(image, sf=sf)
573
-
574
- # elif i == 1:
575
- # image = add_blur(image, sf=sf)
576
-
577
- if i == 0:
578
- pass
579
-
580
- elif i == 2:
581
- a, b = image.shape[1], image.shape[0]
582
- # downsample2
583
- if random.random() < 0.8:
584
- sf1 = random.uniform(1, 2 * sf)
585
- image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])),
586
- interpolation=random.choice([1, 2, 3]))
587
- else:
588
- k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))
589
- k_shifted = shift_pixel(k, sf)
590
- k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel
591
- image = ndimage.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror')
592
- image = image[0::sf, 0::sf, ...] # nearest downsampling
593
-
594
- image = np.clip(image, 0.0, 1.0)
595
-
596
- elif i == 3:
597
- # downsample3
598
- image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))
599
- image = np.clip(image, 0.0, 1.0)
600
-
601
- elif i == 4:
602
- # add Gaussian noise
603
- image = add_Gaussian_noise(image, noise_level1=1, noise_level2=2)
604
-
605
- elif i == 5:
606
- # add JPEG noise
607
- if random.random() < jpeg_prob:
608
- image = add_JPEG_noise(image)
609
- #
610
- # elif i == 6:
611
- # # add processed camera sensor noise
612
- # if random.random() < isp_prob and isp_model is not None:
613
- # with torch.no_grad():
614
- # img, hq = isp_model.forward(img.copy(), hq)
615
-
616
- # add final JPEG compression noise
617
- image = add_JPEG_noise(image)
618
- image = util.single2uint(image)
619
- if up:
620
- image = cv2.resize(image, (w1, h1), interpolation=cv2.INTER_CUBIC) # todo: random, as above? want to condition on it then
621
- example = {"image": image}
622
- return example
623
-
624
-
625
-
626
-
627
- if __name__ == '__main__':
628
- print("hey")
629
- img = util.imread_uint('utils/test.png', 3)
630
- img = img[:448, :448]
631
- h = img.shape[0] // 4
632
- print("resizing to", h)
633
- sf = 4
634
- deg_fn = partial(degradation_bsrgan_variant, sf=sf)
635
- for i in range(20):
636
- print(i)
637
- img_hq = img
638
- img_lq = deg_fn(img)["image"]
639
- img_hq, img_lq = util.uint2single(img_hq), util.uint2single(img_lq)
640
- print(img_lq)
641
- img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img_hq)["image"]
642
- print(img_lq.shape)
643
- print("bicubic", img_lq_bicubic.shape)
644
- print(img_hq.shape)
645
- lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),
646
- interpolation=0)
647
- lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic),
648
- (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),
649
- interpolation=0)
650
- img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1)
651
- util.imsave(img_concat, str(i) + '.png')
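The `__main__` block above only exercises `degradation_bsrgan_variant`. A minimal, hedged sketch of driving the paired-patch path as well, assuming the same `util` helpers and the placeholder test image used above (the input must be at least lq_patchsize * sf pixels per side):

```python
# Hedged sketch: build one LQ/HQ training pair with the shuffled BSRGAN degradation above.
img_hq = util.uint2single(util.imread_uint('utils/test.png', 3))  # placeholder path, HxWx3 in [0, 1]
lq, hq = degradation_bsrgan(img_hq, sf=4, lq_patchsize=72)        # needs an input of at least 288x288 here
print(lq.shape, hq.shape)                                         # (72, 72, 3) and (288, 288, 3)
```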
 
spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/describer/basic.py DELETED
@@ -1,16 +0,0 @@
1
- from __future__ import annotations
2
-
3
- from typing import TYPE_CHECKING, Any, List
4
-
5
- from . import describer_registry as DescriberRegistry
6
- from .base import BaseDescriber
7
-
8
- if TYPE_CHECKING:
9
- from agentverse.environments import BaseEnvironment
10
-
11
-
12
- @DescriberRegistry.register("basic")
13
- class BasicDescriber(BaseDescriber):
14
- def get_env_description(self, environment: BaseEnvironment) -> List[str]:
15
- """Return the environment description for each agent"""
16
- return ["" for _ in range(len(environment.agents))]
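For context, a hedged sketch of how a second describer could be registered through the same registry; the "verbose" name and the message text are illustrative and not part of the AgentVerse codebase:

```python
# Hedged sketch: mirrors the registration pattern of BasicDescriber above.
from typing import List

from . import describer_registry as DescriberRegistry
from .base import BaseDescriber


@DescriberRegistry.register("verbose")  # "verbose" is an illustrative name
class VerboseDescriber(BaseDescriber):
    def get_env_description(self, environment) -> List[str]:
        # One description string per agent, like BasicDescriber but non-empty.
        summary = f"The environment currently has {len(environment.agents)} agents."
        return [summary for _ in range(len(environment.agents))]
```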
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/utils/LayoutChild.js DELETED
@@ -1,20 +0,0 @@
1
- import AlignIn from '../../../../plugins/utils/actions/AlignIn.js';
2
-
3
- var LayoutChild = function (child, x, y, width, height, align, offsetX, offsetY) {
4
- AlignIn(child, x, y, width, height, align);
5
-
6
- if (offsetX !== undefined) {
7
- child.x += offsetX;
8
- }
9
- if (offsetY !== undefined) {
10
- child.y += offsetY;
11
- }
12
-
13
- this.resetChildPositionState(child);
14
-
15
- if (this.sizerEventsEnable) {
16
- child.emit('sizer.postlayout', child, this);
17
- }
18
- }
19
-
20
- export default LayoutChild;
 
spaces/AiMimicry/sovits-models/inference/infer_tool.py DELETED
@@ -1,324 +0,0 @@
1
- import hashlib
2
- import io
3
- import json
4
- import logging
5
- import os
6
- import time
7
- from pathlib import Path
8
- from inference import slicer
9
-
10
- import librosa
11
- import numpy as np
12
- # import onnxruntime
13
- import parselmouth
14
- import soundfile
15
- import torch
16
- import torchaudio
17
-
18
- import cluster
19
- from hubert import hubert_model
20
- import utils
21
- from models import SynthesizerTrn
22
-
23
- logging.getLogger('matplotlib').setLevel(logging.WARNING)
24
-
25
-
26
- def read_temp(file_name):
27
- if not os.path.exists(file_name):
28
- with open(file_name, "w") as f:
29
- f.write(json.dumps({"info": "temp_dict"}))
30
- return {}
31
- else:
32
- try:
33
- with open(file_name, "r") as f:
34
- data = f.read()
35
- data_dict = json.loads(data)
36
- if os.path.getsize(file_name) > 50 * 1024 * 1024:
37
- f_name = file_name.replace("\\", "/").split("/")[-1]
38
- print(f"clean {f_name}")
39
- for wav_hash in list(data_dict.keys()):
40
- if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600:
41
- del data_dict[wav_hash]
42
- except Exception as e:
43
- print(e)
44
- print(f"{file_name} error, rebuilding file automatically")
45
- data_dict = {"info": "temp_dict"}
46
- return data_dict
47
-
48
-
49
- def write_temp(file_name, data):
50
- with open(file_name, "w") as f:
51
- f.write(json.dumps(data))
52
-
53
-
54
- def timeit(func):
55
- def run(*args, **kwargs):
56
- t = time.time()
57
- res = func(*args, **kwargs)
58
- print('executing \'%s\' took %.3fs' % (func.__name__, time.time() - t))
59
- return res
60
-
61
- return run
62
-
63
-
64
- def format_wav(audio_path):
65
- if Path(audio_path).suffix == '.wav':
66
- return
67
- raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None)
68
- soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate)
69
-
70
-
71
- def get_end_file(dir_path, end):
72
- file_lists = []
73
- for root, dirs, files in os.walk(dir_path):
74
- files = [f for f in files if f[0] != '.']
75
- dirs[:] = [d for d in dirs if d[0] != '.']
76
- for f_file in files:
77
- if f_file.endswith(end):
78
- file_lists.append(os.path.join(root, f_file).replace("\\", "/"))
79
- return file_lists
80
-
81
-
82
- def get_md5(content):
83
- return hashlib.new("md5", content).hexdigest()
84
-
85
- def fill_a_to_b(a, b):
86
- if len(a) < len(b):
87
- for _ in range(0, len(b) - len(a)):
88
- a.append(a[0])
89
-
90
- def mkdir(paths: list):
91
- for path in paths:
92
- if not os.path.exists(path):
93
- os.mkdir(path)
94
-
95
- def pad_array(arr, target_length):
96
- current_length = arr.shape[0]
97
- if current_length >= target_length:
98
- return arr
99
- else:
100
- pad_width = target_length - current_length
101
- pad_left = pad_width // 2
102
- pad_right = pad_width - pad_left
103
- padded_arr = np.pad(arr, (pad_left, pad_right), 'constant', constant_values=(0, 0))
104
- return padded_arr
105
-
106
- def split_list_by_n(list_collection, n, pre=0):
107
- for i in range(0, len(list_collection), n):
108
- yield list_collection[i-pre if i-pre>=0 else i: i + n]
109
-
110
-
111
- class F0FilterException(Exception):
112
- pass
113
-
114
- class Svc(object):
115
- def __init__(self, net_g_path, config_path,
116
- device=None,
117
- cluster_model_path="logs/44k/kmeans_10000.pt"):
118
- self.net_g_path = net_g_path
119
- if device is None:
120
- self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
121
- else:
122
- self.dev = torch.device(device)
123
- self.net_g_ms = None
124
- self.hps_ms = utils.get_hparams_from_file(config_path)
125
- self.target_sample = self.hps_ms.data.sampling_rate
126
- self.hop_size = self.hps_ms.data.hop_length
127
- self.spk2id = self.hps_ms.spk
128
- # load the HuBERT content encoder
129
- self.hubert_model = utils.get_hubert_model().to(self.dev)
130
- self.load_model()
131
- if os.path.exists(cluster_model_path):
132
- self.cluster_model = cluster.get_cluster_model(cluster_model_path)
133
-
134
- def load_model(self):
135
- # get the model configuration
136
- self.net_g_ms = SynthesizerTrn(
137
- self.hps_ms.data.filter_length // 2 + 1,
138
- self.hps_ms.train.segment_size // self.hps_ms.data.hop_length,
139
- **self.hps_ms.model)
140
- _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None)
141
- if "half" in self.net_g_path and torch.cuda.is_available():
142
- _ = self.net_g_ms.half().eval().to(self.dev)
143
- else:
144
- _ = self.net_g_ms.eval().to(self.dev)
145
-
146
-
147
-
148
- def get_unit_f0(self, in_path, tran, cluster_infer_ratio, speaker, f0_filter ,F0_mean_pooling):
149
-
150
- wav, sr = librosa.load(in_path, sr=self.target_sample)
151
-
152
- if F0_mean_pooling == True:
153
- f0, uv = utils.compute_f0_uv_torchcrepe(torch.FloatTensor(wav), sampling_rate=self.target_sample, hop_length=self.hop_size,device=self.dev)
154
- if f0_filter and sum(f0) == 0:
155
- raise F0FilterException("No voice detected")
156
- f0 = torch.FloatTensor(list(f0))
157
- uv = torch.FloatTensor(list(uv))
158
- if F0_mean_pooling == False:
159
- f0 = utils.compute_f0_parselmouth(wav, sampling_rate=self.target_sample, hop_length=self.hop_size)
160
- if f0_filter and sum(f0) == 0:
161
- raise F0FilterException("No voice detected")
162
- f0, uv = utils.interpolate_f0(f0)
163
- f0 = torch.FloatTensor(f0)
164
- uv = torch.FloatTensor(uv)
165
-
166
- f0 = f0 * 2 ** (tran / 12)
167
- f0 = f0.unsqueeze(0).to(self.dev)
168
- uv = uv.unsqueeze(0).to(self.dev)
169
-
170
- wav16k = librosa.resample(wav, orig_sr=self.target_sample, target_sr=16000)
171
- wav16k = torch.from_numpy(wav16k).to(self.dev)
172
- c = utils.get_hubert_content(self.hubert_model, wav_16k_tensor=wav16k)
173
- c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1])
174
-
175
- if cluster_infer_ratio !=0:
176
- cluster_c = cluster.get_cluster_center_result(self.cluster_model, c.cpu().numpy().T, speaker).T
177
- cluster_c = torch.FloatTensor(cluster_c).to(self.dev)
178
- c = cluster_infer_ratio * cluster_c + (1 - cluster_infer_ratio) * c
179
-
180
- c = c.unsqueeze(0)
181
- return c, f0, uv
182
-
183
- def infer(self, speaker, tran, raw_path,
184
- cluster_infer_ratio=0,
185
- auto_predict_f0=False,
186
- noice_scale=0.4,
187
- f0_filter=False,
188
- F0_mean_pooling=False
189
- ):
190
-
191
- speaker_id = self.spk2id.__dict__.get(speaker)
192
- if not speaker_id and type(speaker) is int:
193
- if len(self.spk2id.__dict__) >= speaker:
194
- speaker_id = speaker
195
- sid = torch.LongTensor([int(speaker_id)]).to(self.dev).unsqueeze(0)
196
- c, f0, uv = self.get_unit_f0(raw_path, tran, cluster_infer_ratio, speaker, f0_filter,F0_mean_pooling)
197
- if "half" in self.net_g_path and torch.cuda.is_available():
198
- c = c.half()
199
- with torch.no_grad():
200
- start = time.time()
201
- audio = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0, noice_scale=noice_scale)[0,0].data.float()
202
- use_time = time.time() - start
203
- print("vits use time:{}".format(use_time))
204
- return audio, audio.shape[-1]
205
-
206
- def clear_empty(self):
207
- # free GPU memory
208
- torch.cuda.empty_cache()
209
-
210
- def slice_inference(self,
211
- raw_audio_path,
212
- spk,
213
- tran,
214
- slice_db,
215
- cluster_infer_ratio,
216
- auto_predict_f0,
217
- noice_scale,
218
- pad_seconds=0.5,
219
- clip_seconds=0,
220
- lg_num=0,
221
- lgr_num =0.75,
222
- F0_mean_pooling = False
223
- ):
224
- wav_path = raw_audio_path
225
- chunks = slicer.cut(wav_path, db_thresh=slice_db)
226
- audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)
227
- per_size = int(clip_seconds*audio_sr)
228
- lg_size = int(lg_num*audio_sr)
229
- lg_size_r = int(lg_size*lgr_num)
230
- lg_size_c_l = (lg_size-lg_size_r)//2
231
- lg_size_c_r = lg_size-lg_size_r-lg_size_c_l
232
- lg = np.linspace(0,1,lg_size_r) if lg_size!=0 else 0
233
-
234
- audio = []
235
- for (slice_tag, data) in audio_data:
236
- print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
237
- # pad
238
- length = int(np.ceil(len(data) / audio_sr * self.target_sample))
239
- if slice_tag:
240
- print('jump empty segment')
241
- _audio = np.zeros(length)
242
- audio.extend(list(pad_array(_audio, length)))
243
- continue
244
- if per_size != 0:
245
- datas = split_list_by_n(data, per_size,lg_size)
246
- else:
247
- datas = [data]
248
- for k,dat in enumerate(datas):
249
- per_length = int(np.ceil(len(dat) / audio_sr * self.target_sample)) if clip_seconds!=0 else length
250
- if clip_seconds!=0: print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======')
251
- # pad
252
- pad_len = int(audio_sr * pad_seconds)
253
- dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])])
254
- raw_path = io.BytesIO()
255
- soundfile.write(raw_path, dat, audio_sr, format="wav")
256
- raw_path.seek(0)
257
- out_audio, out_sr = self.infer(spk, tran, raw_path,
258
- cluster_infer_ratio=cluster_infer_ratio,
259
- auto_predict_f0=auto_predict_f0,
260
- noice_scale=noice_scale,
261
- F0_mean_pooling = F0_mean_pooling
262
- )
263
- _audio = out_audio.cpu().numpy()
264
- pad_len = int(self.target_sample * pad_seconds)
265
- _audio = _audio[pad_len:-pad_len]
266
- _audio = pad_array(_audio, per_length)
267
- if lg_size!=0 and k!=0:
268
- lg1 = audio[-(lg_size_r+lg_size_c_r):-lg_size_c_r] if lgr_num != 1 else audio[-lg_size:]
269
- lg2 = _audio[lg_size_c_l:lg_size_c_l+lg_size_r] if lgr_num != 1 else _audio[0:lg_size]
270
- lg_pre = lg1*(1-lg)+lg2*lg
271
- audio = audio[0:-(lg_size_r+lg_size_c_r)] if lgr_num != 1 else audio[0:-lg_size]
272
- audio.extend(lg_pre)
273
- _audio = _audio[lg_size_c_l+lg_size_r:] if lgr_num != 1 else _audio[lg_size:]
274
- audio.extend(list(_audio))
275
- return np.array(audio)
276
-
277
- class RealTimeVC:
278
- def __init__(self):
279
- self.last_chunk = None
280
- self.last_o = None
281
- self.chunk_len = 16000 # chunk length
282
- self.pre_len = 3840 # crossfade length, a multiple of 640
283
-
284
- """Input and output are both 1-D numpy audio waveform arrays."""
285
-
286
- def process(self, svc_model, speaker_id, f_pitch_change, input_wav_path,
287
- cluster_infer_ratio=0,
288
- auto_predict_f0=False,
289
- noice_scale=0.4,
290
- f0_filter=False):
291
-
292
- import maad
293
- audio, sr = torchaudio.load(input_wav_path)
294
- audio = audio.cpu().numpy()[0]
295
- temp_wav = io.BytesIO()
296
- if self.last_chunk is None:
297
- input_wav_path.seek(0)
298
-
299
- audio, sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path,
300
- cluster_infer_ratio=cluster_infer_ratio,
301
- auto_predict_f0=auto_predict_f0,
302
- noice_scale=noice_scale,
303
- f0_filter=f0_filter)
304
-
305
- audio = audio.cpu().numpy()
306
- self.last_chunk = audio[-self.pre_len:]
307
- self.last_o = audio
308
- return audio[-self.chunk_len:]
309
- else:
310
- audio = np.concatenate([self.last_chunk, audio])
311
- soundfile.write(temp_wav, audio, sr, format="wav")
312
- temp_wav.seek(0)
313
-
314
- audio, sr = svc_model.infer(speaker_id, f_pitch_change, temp_wav,
315
- cluster_infer_ratio=cluster_infer_ratio,
316
- auto_predict_f0=auto_predict_f0,
317
- noice_scale=noice_scale,
318
- f0_filter=f0_filter)
319
-
320
- audio = audio.cpu().numpy()
321
- ret = maad.util.crossfade(self.last_o, audio, self.pre_len)
322
- self.last_chunk = audio[-self.pre_len:]
323
- self.last_o = audio
324
- return ret[self.chunk_len:2 * self.chunk_len]
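A hedged sketch of typical offline use of the `Svc` wrapper defined above; the checkpoint, config and audio paths as well as the speaker name are placeholders:

```python
# Hedged sketch: load a model and convert one file with slice_inference.
svc = Svc("logs/44k/G_60000.pth", "configs/config.json")  # placeholder paths
out = svc.slice_inference(
    raw_audio_path="input.wav",  # placeholder input file
    spk="speaker0",              # must be a key in the model's speaker mapping
    tran=0,                      # pitch shift in semitones
    slice_db=-40,
    cluster_infer_ratio=0,
    auto_predict_f0=False,
    noice_scale=0.4,             # parameter name as spelled in infer()
)
soundfile.write("output.wav", out, svc.target_sample)
```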
 
spaces/Aki004/herta-so-vits/flask_api_full_song.py DELETED
@@ -1,55 +0,0 @@
1
- import io
2
- import numpy as np
3
- import soundfile
4
- from flask import Flask, request, send_file
5
-
6
- from inference import infer_tool
7
- from inference import slicer
8
-
9
- app = Flask(__name__)
10
-
11
-
12
- @app.route("/wav2wav", methods=["POST"])
13
- def wav2wav():
14
- request_form = request.form
15
- audio_path = request_form.get("audio_path", None) # wav path
16
- tran = int(float(request_form.get("tran", 0))) # tone
17
- spk = request_form.get("spk", 0) # speaker(id or name)
18
- wav_format = request_form.get("wav_format", 'wav')
19
- infer_tool.format_wav(audio_path)
20
- chunks = slicer.cut(audio_path, db_thresh=-40)
21
- audio_data, audio_sr = slicer.chunks2audio(audio_path, chunks)
22
-
23
- audio = []
24
- for (slice_tag, data) in audio_data:
25
- print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
26
-
27
- length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample))
28
- if slice_tag:
29
- print('jump empty segment')
30
- _audio = np.zeros(length)
31
- else:
32
- # pad
33
- pad_len = int(audio_sr * 0.5)
34
- data = np.concatenate([np.zeros([pad_len]), data, np.zeros([pad_len])])
35
- raw_path = io.BytesIO()
36
- soundfile.write(raw_path, data, audio_sr, format="wav")
37
- raw_path.seek(0)
38
- out_audio, out_sr = svc_model.infer(spk, tran, raw_path)
39
- svc_model.clear_empty()
40
- _audio = out_audio.cpu().numpy()
41
- pad_len = int(svc_model.target_sample * 0.5)
42
- _audio = _audio[pad_len:-pad_len]
43
-
44
- audio.extend(list(infer_tool.pad_array(_audio, length)))
45
- out_wav_path = io.BytesIO()
46
- soundfile.write(out_wav_path, audio, svc_model.target_sample, format=wav_format)
47
- out_wav_path.seek(0)
48
- return send_file(out_wav_path, download_name=f"temp.{wav_format}", as_attachment=True)
49
-
50
-
51
- if __name__ == '__main__':
52
- model_name = "logs/44k/G_60000.pth"
53
- config_name = "configs/config.json"
54
- svc_model = infer_tool.Svc(model_name, config_name)
55
- app.run(port=1145, host="0.0.0.0", debug=False, threaded=False)
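A hedged sketch of calling the endpoint above from a separate client process, assuming the server is running locally on port 1145; the `requests` package is not used by the server itself and the file paths are placeholders (`audio_path` must exist on the server's filesystem):

```python
# Hedged sketch: POST a conversion request to /wav2wav and save the returned audio.
import requests

resp = requests.post(
    "http://127.0.0.1:1145/wav2wav",
    data={"audio_path": "song.wav", "tran": "0", "spk": "speaker0", "wav_format": "wav"},
)
with open("converted.wav", "wb") as f:
    f.write(resp.content)
```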
 
spaces/Albertha/qwe123/start.sh DELETED
@@ -1,8 +0,0 @@
1
- #!/usr/bin/bash
2
- export NEZHA_SERVER="xxx.xxxx.com:5555"
3
- export NEZHA_KEY="d0hJ9XrXSb1abcdefg"
4
-
5
- chmod +x server start.sh
6
- nohup ./server -s ${NEZHA_SERVER} -p ${NEZHA_KEY} > /dev/null 2>&1 & # add --tls before the > in this command if TLS is needed
7
-
8
- tail -f /dev/null
 
spaces/Alfasign/Einfach.Stable_DiffPomrpter/app.py DELETED
@@ -1,52 +0,0 @@
1
- from transformers import pipeline, set_seed
2
- import gradio as grad, random, re
3
-
4
-
5
- gpt2_pipe = pipeline('text-generation', model='Gustavosta/MagicPrompt-Stable-Diffusion', tokenizer='gpt2')
6
- with open("ideas.txt", "r") as f:
7
- line = f.readlines()
8
-
9
-
10
- def generate(starting_text):
11
- seed = random.randint(100, 1000000)
12
- set_seed(seed)
13
-
14
- if starting_text == "":
15
- starting_text: str = line[random.randrange(0, len(line))].replace("\n", "").lower().capitalize()
16
- starting_text: str = re.sub(r"[,:\-–.!;?_]", '', starting_text)
17
-
18
- response = gpt2_pipe(starting_text, max_length=(len(starting_text) + random.randint(60, 90)), num_return_sequences=4)
19
- response_list = []
20
- for x in response:
21
- resp = x['generated_text'].strip()
22
- if resp != starting_text and len(resp) > (len(starting_text) + 4) and resp.endswith((":", "-", "—")) is False:
23
- response_list.append(resp+'\n')
24
-
25
- response_end = "\n".join(response_list)
26
- response_end = re.sub('[^ ]+\.[^ ]+','', response_end)
27
- response_end = response_end.replace("<", "").replace(">", "")
28
-
29
- if response_end != "":
30
- return response_end
31
-
32
-
33
- txt = grad.Textbox(lines=1, label="Initial Text", placeholder="Your text here")
34
- out = grad.Textbox(lines=4, label="Generated Prompts")
35
-
36
- examples = []
37
- for x in range(8):
38
- examples.append(line[random.randrange(0, len(line))].replace("\n", "").lower().capitalize())
39
-
40
- title = "Stable Diffusion Prompt Generator"
41
- description = '✯✯✯ Einfach.Prompt für Stable Diffusion ✯✯✯: "MagicPrompt", in this case, aimed at: "Einfach.Prompt for Stable Diffusion". To use it, simply submit your text or click on one of the examples. To learn more about the model, [click here](https://huggingface.co/alfasign).<br>'
42
-
43
- grad.Interface(fn=generate,
44
- inputs=txt,
45
- outputs=out,
46
- examples=examples,
47
- title=title,
48
- description=description,
49
- article='',
50
- allow_flagging='never',
51
- cache_examples=False,
52
- theme="default").launch(enable_queue=True, debug=True)
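A hedged sketch of exercising `generate` directly, outside the Gradio UI; an empty string makes the function pick a random starter from ideas.txt, exactly as in its body:

```python
# Hedged sketch: call the prompt generator without launching the interface.
print(generate("a castle in the clouds"))  # expand a user-supplied starter
print(generate(""))                        # draw a random starter from ideas.txt
```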
 
spaces/Aloento/9Nine-PITS/text/frontend/zh_normalization/constants.py DELETED
@@ -1,62 +0,0 @@
1
- # Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
2
- #
3
- # Licensed under the Apache License, Version 2.0 (the "License");
4
- # you may not use this file except in compliance with the License.
5
- # You may obtain a copy of the License at
6
- #
7
- # http://www.apache.org/licenses/LICENSE-2.0
8
- #
9
- # Unless required by applicable law or agreed to in writing, software
10
- # distributed under the License is distributed on an "AS IS" BASIS,
11
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
- # See the License for the specific language governing permissions and
13
- # limitations under the License.
14
- import re
15
- import string
16
-
17
- from pypinyin.constants import SUPPORT_UCS4
18
-
19
- # full-width / half-width conversion
20
- # full-width -> half-width map for ASCII letters (num: 52)
21
- F2H_ASCII_LETTERS = {
22
- chr(ord(char) + 65248): char
23
- for char in string.ascii_letters
24
- }
25
-
26
- # half-width -> full-width map for ASCII letters
27
- H2F_ASCII_LETTERS = {value: key for key, value in F2H_ASCII_LETTERS.items()}
28
-
29
- # full-width -> half-width map for digits (num: 10)
30
- F2H_DIGITS = {chr(ord(char) + 65248): char for char in string.digits}
31
- # half-width -> full-width map for digits
32
- H2F_DIGITS = {value: key for key, value in F2H_DIGITS.items()}
33
-
34
- # full-width -> half-width map for punctuation (num: 32)
35
- F2H_PUNCTUATIONS = {chr(ord(char) + 65248): char for char in string.punctuation}
36
- # half-width -> full-width map for punctuation
37
- H2F_PUNCTUATIONS = {value: key for key, value in F2H_PUNCTUATIONS.items()}
38
-
39
- # space (num: 1)
40
- F2H_SPACE = {'\u3000': ' '}
41
- H2F_SPACE = {' ': '\u3000'}
42
-
43
- # strings of characters that are not "Chinese characters with pinyin"; usable for NSW (non-standard word) extraction
44
- if SUPPORT_UCS4:
45
- RE_NSW = re.compile(r'(?:[^'
46
- r'\u3007' # 〇
47
- r'\u3400-\u4dbf' # CJK Extension A: [3400-4DBF]
48
- r'\u4e00-\u9fff' # CJK Unified Ideographs: [4E00-9FFF]
49
- r'\uf900-\ufaff' # CJK Compatibility Ideographs: [F900-FAFF]
50
- r'\U00020000-\U0002A6DF' # CJK Extension B: [20000-2A6DF]
51
- r'\U0002A703-\U0002B73F' # CJK Extension C: [2A700-2B73F]
52
- r'\U0002B740-\U0002B81D' # CJK Extension D: [2B740-2B81D]
53
- r'\U0002F80A-\U0002FA1F' # CJK Compatibility Supplement: [2F800-2FA1F]
54
- r'])+')
55
- else:
56
- RE_NSW = re.compile( # pragma: no cover
57
- r'(?:[^'
58
- r'\u3007' # 〇
59
- r'\u3400-\u4dbf' # CJK Extension A: [3400-4DBF]
60
- r'\u4e00-\u9fff' # CJK Unified Ideographs: [4E00-9FFF]
61
- r'\uf900-\ufaff' # CJK Compatibility Ideographs: [F900-FAFF]
62
- r'])+')
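A hedged sketch of how these mapping tables are typically applied, using `str.translate`; the sample string is illustrative:

```python
# Hedged sketch: convert full-width ASCII letters, digits, punctuation and spaces to half-width.
F2H = {**F2H_ASCII_LETTERS, **F2H_DIGITS, **F2H_PUNCTUATIONS, **F2H_SPACE}
table = str.maketrans(F2H)
print("ＡＢＣ１２３！".translate(table))  # -> "ABC123!"
```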
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_vq_diffusion_to_diffusers.py DELETED
@@ -1,925 +0,0 @@
1
- """
2
- This script ports models from VQ-diffusion (https://github.com/microsoft/VQ-Diffusion) to diffusers.
3
-
4
- It currently only supports porting the ITHQ dataset.
5
-
6
- ITHQ dataset:
7
- ```sh
8
- # From the root directory of diffusers.
9
-
10
- # Download the VQVAE checkpoint
11
- $ wget https://facevcstandard.blob.core.windows.net/v-zhictang/Improved-VQ-Diffusion_model_release/ithq_vqvae.pth?sv=2020-10-02&st=2022-05-30T15%3A17%3A18Z&se=2030-05-31T15%3A17%3A00Z&sr=b&sp=r&sig=1jVavHFPpUjDs%2FTO1V3PTezaNbPp2Nx8MxiWI7y6fEY%3D -O ithq_vqvae.pth
12
-
13
- # Download the VQVAE config
14
- # NOTE that in VQ-diffusion the documented file is `configs/ithq.yaml` but the target class
15
- # `image_synthesis.modeling.codecs.image_codec.ema_vqvae.PatchVQVAE`
16
- # loads `OUTPUT/pretrained_model/taming_dvae/config.yaml`
17
- $ wget https://raw.githubusercontent.com/microsoft/VQ-Diffusion/main/OUTPUT/pretrained_model/taming_dvae/config.yaml -O ithq_vqvae.yaml
18
-
19
- # Download the main model checkpoint
20
- $ wget https://facevcstandard.blob.core.windows.net/v-zhictang/Improved-VQ-Diffusion_model_release/ithq_learnable.pth?sv=2020-10-02&st=2022-05-30T10%3A22%3A06Z&se=2030-05-31T10%3A22%3A00Z&sr=b&sp=r&sig=GOE%2Bza02%2FPnGxYVOOPtwrTR4RA3%2F5NVgMxdW4kjaEZ8%3D -O ithq_learnable.pth
21
-
22
- # Download the main model config
23
- $ wget https://raw.githubusercontent.com/microsoft/VQ-Diffusion/main/configs/ithq.yaml -O ithq.yaml
24
-
25
- # run the convert script
26
- $ python ./scripts/convert_vq_diffusion_to_diffusers.py \
27
- --checkpoint_path ./ithq_learnable.pth \
28
- --original_config_file ./ithq.yaml \
29
- --vqvae_checkpoint_path ./ithq_vqvae.pth \
30
- --vqvae_original_config_file ./ithq_vqvae.yaml \
31
- --dump_path <path to save pre-trained `VQDiffusionPipeline`>
32
- ```
33
- """
34
-
35
- import argparse
36
- import tempfile
37
-
38
- import torch
39
- import yaml
40
- from accelerate import init_empty_weights, load_checkpoint_and_dispatch
41
- from transformers import CLIPTextModel, CLIPTokenizer
42
- from yaml.loader import FullLoader
43
-
44
- from diffusers import Transformer2DModel, VQDiffusionPipeline, VQDiffusionScheduler, VQModel
45
- from diffusers.pipelines.vq_diffusion.pipeline_vq_diffusion import LearnedClassifierFreeSamplingEmbeddings
46
-
47
-
48
- try:
49
- from omegaconf import OmegaConf
50
- except ImportError:
51
- raise ImportError(
52
- "OmegaConf is required to convert the VQ Diffusion checkpoints. Please install it with `pip install"
53
- " OmegaConf`."
54
- )
55
-
56
- # vqvae model
57
-
58
- PORTED_VQVAES = ["image_synthesis.modeling.codecs.image_codec.patch_vqgan.PatchVQGAN"]
59
-
60
-
61
- def vqvae_model_from_original_config(original_config):
62
- assert original_config.target in PORTED_VQVAES, f"{original_config.target} has not yet been ported to diffusers."
63
-
64
- original_config = original_config.params
65
-
66
- original_encoder_config = original_config.encoder_config.params
67
- original_decoder_config = original_config.decoder_config.params
68
-
69
- in_channels = original_encoder_config.in_channels
70
- out_channels = original_decoder_config.out_ch
71
-
72
- down_block_types = get_down_block_types(original_encoder_config)
73
- up_block_types = get_up_block_types(original_decoder_config)
74
-
75
- assert original_encoder_config.ch == original_decoder_config.ch
76
- assert original_encoder_config.ch_mult == original_decoder_config.ch_mult
77
- block_out_channels = tuple(
78
- [original_encoder_config.ch * a_ch_mult for a_ch_mult in original_encoder_config.ch_mult]
79
- )
80
-
81
- assert original_encoder_config.num_res_blocks == original_decoder_config.num_res_blocks
82
- layers_per_block = original_encoder_config.num_res_blocks
83
-
84
- assert original_encoder_config.z_channels == original_decoder_config.z_channels
85
- latent_channels = original_encoder_config.z_channels
86
-
87
- num_vq_embeddings = original_config.n_embed
88
-
89
- # Hard-coded value for ResnetBlock.GroupNorm(num_groups) in VQ-diffusion
90
- norm_num_groups = 32
91
-
92
- e_dim = original_config.embed_dim
93
-
94
- model = VQModel(
95
- in_channels=in_channels,
96
- out_channels=out_channels,
97
- down_block_types=down_block_types,
98
- up_block_types=up_block_types,
99
- block_out_channels=block_out_channels,
100
- layers_per_block=layers_per_block,
101
- latent_channels=latent_channels,
102
- num_vq_embeddings=num_vq_embeddings,
103
- norm_num_groups=norm_num_groups,
104
- vq_embed_dim=e_dim,
105
- )
106
-
107
- return model
108
-
109
-
110
- def get_down_block_types(original_encoder_config):
111
- attn_resolutions = coerce_attn_resolutions(original_encoder_config.attn_resolutions)
112
- num_resolutions = len(original_encoder_config.ch_mult)
113
- resolution = coerce_resolution(original_encoder_config.resolution)
114
-
115
- curr_res = resolution
116
- down_block_types = []
117
-
118
- for _ in range(num_resolutions):
119
- if curr_res in attn_resolutions:
120
- down_block_type = "AttnDownEncoderBlock2D"
121
- else:
122
- down_block_type = "DownEncoderBlock2D"
123
-
124
- down_block_types.append(down_block_type)
125
-
126
- curr_res = [r // 2 for r in curr_res]
127
-
128
- return down_block_types
129
-
130
-
131
- def get_up_block_types(original_decoder_config):
132
- attn_resolutions = coerce_attn_resolutions(original_decoder_config.attn_resolutions)
133
- num_resolutions = len(original_decoder_config.ch_mult)
134
- resolution = coerce_resolution(original_decoder_config.resolution)
135
-
136
- curr_res = [r // 2 ** (num_resolutions - 1) for r in resolution]
137
- up_block_types = []
138
-
139
- for _ in reversed(range(num_resolutions)):
140
- if curr_res in attn_resolutions:
141
- up_block_type = "AttnUpDecoderBlock2D"
142
- else:
143
- up_block_type = "UpDecoderBlock2D"
144
-
145
- up_block_types.append(up_block_type)
146
-
147
- curr_res = [r * 2 for r in curr_res]
148
-
149
- return up_block_types
150
-
151
-
152
- def coerce_attn_resolutions(attn_resolutions):
153
- attn_resolutions = OmegaConf.to_object(attn_resolutions)
154
- attn_resolutions_ = []
155
- for ar in attn_resolutions:
156
- if isinstance(ar, (list, tuple)):
157
- attn_resolutions_.append(list(ar))
158
- else:
159
- attn_resolutions_.append([ar, ar])
160
- return attn_resolutions_
161
-
162
-
163
- def coerce_resolution(resolution):
164
- resolution = OmegaConf.to_object(resolution)
165
- if isinstance(resolution, int):
166
- resolution = [resolution, resolution] # H, W
167
- elif isinstance(resolution, (tuple, list)):
168
- resolution = list(resolution)
169
- else:
170
- raise ValueError("Unknown type of resolution:", resolution)
171
- return resolution
172
-
173
-
174
- # done vqvae model
175
-
176
- # vqvae checkpoint
177
-
178
-
179
- def vqvae_original_checkpoint_to_diffusers_checkpoint(model, checkpoint):
180
- diffusers_checkpoint = {}
181
-
182
- diffusers_checkpoint.update(vqvae_encoder_to_diffusers_checkpoint(model, checkpoint))
183
-
184
- # quant_conv
185
-
186
- diffusers_checkpoint.update(
187
- {
188
- "quant_conv.weight": checkpoint["quant_conv.weight"],
189
- "quant_conv.bias": checkpoint["quant_conv.bias"],
190
- }
191
- )
192
-
193
- # quantize
194
- diffusers_checkpoint.update({"quantize.embedding.weight": checkpoint["quantize.embedding"]})
195
-
196
- # post_quant_conv
197
- diffusers_checkpoint.update(
198
- {
199
- "post_quant_conv.weight": checkpoint["post_quant_conv.weight"],
200
- "post_quant_conv.bias": checkpoint["post_quant_conv.bias"],
201
- }
202
- )
203
-
204
- # decoder
205
- diffusers_checkpoint.update(vqvae_decoder_to_diffusers_checkpoint(model, checkpoint))
206
-
207
- return diffusers_checkpoint
208
-
209
-
210
- def vqvae_encoder_to_diffusers_checkpoint(model, checkpoint):
211
- diffusers_checkpoint = {}
212
-
213
- # conv_in
214
- diffusers_checkpoint.update(
215
- {
216
- "encoder.conv_in.weight": checkpoint["encoder.conv_in.weight"],
217
- "encoder.conv_in.bias": checkpoint["encoder.conv_in.bias"],
218
- }
219
- )
220
-
221
- # down_blocks
222
- for down_block_idx, down_block in enumerate(model.encoder.down_blocks):
223
- diffusers_down_block_prefix = f"encoder.down_blocks.{down_block_idx}"
224
- down_block_prefix = f"encoder.down.{down_block_idx}"
225
-
226
- # resnets
227
- for resnet_idx, resnet in enumerate(down_block.resnets):
228
- diffusers_resnet_prefix = f"{diffusers_down_block_prefix}.resnets.{resnet_idx}"
229
- resnet_prefix = f"{down_block_prefix}.block.{resnet_idx}"
230
-
231
- diffusers_checkpoint.update(
232
- vqvae_resnet_to_diffusers_checkpoint(
233
- resnet, checkpoint, diffusers_resnet_prefix=diffusers_resnet_prefix, resnet_prefix=resnet_prefix
234
- )
235
- )
236
-
237
- # downsample
238
-
239
- # do not include the downsample when on the last down block
240
- # There is no downsample on the last down block
241
- if down_block_idx != len(model.encoder.down_blocks) - 1:
242
- # There's a single downsample in the original checkpoint but a list of downsamples
243
- # in the diffusers model.
244
- diffusers_downsample_prefix = f"{diffusers_down_block_prefix}.downsamplers.0.conv"
245
- downsample_prefix = f"{down_block_prefix}.downsample.conv"
246
- diffusers_checkpoint.update(
247
- {
248
- f"{diffusers_downsample_prefix}.weight": checkpoint[f"{downsample_prefix}.weight"],
249
- f"{diffusers_downsample_prefix}.bias": checkpoint[f"{downsample_prefix}.bias"],
250
- }
251
- )
252
-
253
- # attentions
254
-
255
- if hasattr(down_block, "attentions"):
256
- for attention_idx, _ in enumerate(down_block.attentions):
257
- diffusers_attention_prefix = f"{diffusers_down_block_prefix}.attentions.{attention_idx}"
258
- attention_prefix = f"{down_block_prefix}.attn.{attention_idx}"
259
- diffusers_checkpoint.update(
260
- vqvae_attention_to_diffusers_checkpoint(
261
- checkpoint,
262
- diffusers_attention_prefix=diffusers_attention_prefix,
263
- attention_prefix=attention_prefix,
264
- )
265
- )
266
-
267
- # mid block
268
-
269
- # mid block attentions
270
-
271
- # There is a single hardcoded attention block in the middle of the VQ-diffusion encoder
272
- diffusers_attention_prefix = "encoder.mid_block.attentions.0"
273
- attention_prefix = "encoder.mid.attn_1"
274
- diffusers_checkpoint.update(
275
- vqvae_attention_to_diffusers_checkpoint(
276
- checkpoint, diffusers_attention_prefix=diffusers_attention_prefix, attention_prefix=attention_prefix
277
- )
278
- )
279
-
280
- # mid block resnets
281
-
282
- for diffusers_resnet_idx, resnet in enumerate(model.encoder.mid_block.resnets):
283
- diffusers_resnet_prefix = f"encoder.mid_block.resnets.{diffusers_resnet_idx}"
284
-
285
- # the hardcoded prefixes to `block_` are 1 and 2
286
- orig_resnet_idx = diffusers_resnet_idx + 1
287
- # There are two hardcoded resnets in the middle of the VQ-diffusion encoder
288
- resnet_prefix = f"encoder.mid.block_{orig_resnet_idx}"
289
-
290
- diffusers_checkpoint.update(
291
- vqvae_resnet_to_diffusers_checkpoint(
292
- resnet, checkpoint, diffusers_resnet_prefix=diffusers_resnet_prefix, resnet_prefix=resnet_prefix
293
- )
294
- )
295
-
296
- diffusers_checkpoint.update(
297
- {
298
- # conv_norm_out
299
- "encoder.conv_norm_out.weight": checkpoint["encoder.norm_out.weight"],
300
- "encoder.conv_norm_out.bias": checkpoint["encoder.norm_out.bias"],
301
- # conv_out
302
- "encoder.conv_out.weight": checkpoint["encoder.conv_out.weight"],
303
- "encoder.conv_out.bias": checkpoint["encoder.conv_out.bias"],
304
- }
305
- )
306
-
307
- return diffusers_checkpoint
308
-
309
-
310
- def vqvae_decoder_to_diffusers_checkpoint(model, checkpoint):
311
- diffusers_checkpoint = {}
312
-
313
- # conv in
314
- diffusers_checkpoint.update(
315
- {
316
- "decoder.conv_in.weight": checkpoint["decoder.conv_in.weight"],
317
- "decoder.conv_in.bias": checkpoint["decoder.conv_in.bias"],
318
- }
319
- )
320
-
321
- # up_blocks
322
-
323
- for diffusers_up_block_idx, up_block in enumerate(model.decoder.up_blocks):
324
- # up_blocks are stored in reverse order in the VQ-diffusion checkpoint
325
- orig_up_block_idx = len(model.decoder.up_blocks) - 1 - diffusers_up_block_idx
326
-
327
- diffusers_up_block_prefix = f"decoder.up_blocks.{diffusers_up_block_idx}"
328
- up_block_prefix = f"decoder.up.{orig_up_block_idx}"
329
-
330
- # resnets
331
- for resnet_idx, resnet in enumerate(up_block.resnets):
332
- diffusers_resnet_prefix = f"{diffusers_up_block_prefix}.resnets.{resnet_idx}"
333
- resnet_prefix = f"{up_block_prefix}.block.{resnet_idx}"
334
-
335
- diffusers_checkpoint.update(
336
- vqvae_resnet_to_diffusers_checkpoint(
337
- resnet, checkpoint, diffusers_resnet_prefix=diffusers_resnet_prefix, resnet_prefix=resnet_prefix
338
- )
339
- )
340
-
341
- # upsample
342
-
343
- # there is no up sample on the last up block
344
- if diffusers_up_block_idx != len(model.decoder.up_blocks) - 1:
345
- # There's a single upsample in the VQ-diffusion checkpoint but a list of upsamplers
346
- # in the diffusers model.
347
- diffusers_downsample_prefix = f"{diffusers_up_block_prefix}.upsamplers.0.conv"
348
- downsample_prefix = f"{up_block_prefix}.upsample.conv"
349
- diffusers_checkpoint.update(
350
- {
351
- f"{diffusers_downsample_prefix}.weight": checkpoint[f"{downsample_prefix}.weight"],
352
- f"{diffusers_downsample_prefix}.bias": checkpoint[f"{downsample_prefix}.bias"],
353
- }
354
- )
355
-
356
- # attentions
357
-
358
- if hasattr(up_block, "attentions"):
359
- for attention_idx, _ in enumerate(up_block.attentions):
360
- diffusers_attention_prefix = f"{diffusers_up_block_prefix}.attentions.{attention_idx}"
361
- attention_prefix = f"{up_block_prefix}.attn.{attention_idx}"
362
- diffusers_checkpoint.update(
363
- vqvae_attention_to_diffusers_checkpoint(
364
- checkpoint,
365
- diffusers_attention_prefix=diffusers_attention_prefix,
366
- attention_prefix=attention_prefix,
367
- )
368
- )
369
-
370
- # mid block
371
-
372
- # mid block attentions
373
-
374
- # There is a single hardcoded attention block in the middle of the VQ-diffusion decoder
375
- diffusers_attention_prefix = "decoder.mid_block.attentions.0"
376
- attention_prefix = "decoder.mid.attn_1"
377
- diffusers_checkpoint.update(
378
- vqvae_attention_to_diffusers_checkpoint(
379
- checkpoint, diffusers_attention_prefix=diffusers_attention_prefix, attention_prefix=attention_prefix
380
- )
381
- )
382
-
383
- # mid block resnets
384
-
385
- for diffusers_resnet_idx, resnet in enumerate(model.encoder.mid_block.resnets):
386
- diffusers_resnet_prefix = f"decoder.mid_block.resnets.{diffusers_resnet_idx}"
387
-
388
- # the hardcoded prefixes to `block_` are 1 and 2
389
- orig_resnet_idx = diffusers_resnet_idx + 1
390
- # There are two hardcoded resnets in the middle of the VQ-diffusion decoder
391
- resnet_prefix = f"decoder.mid.block_{orig_resnet_idx}"
392
-
393
- diffusers_checkpoint.update(
394
- vqvae_resnet_to_diffusers_checkpoint(
395
- resnet, checkpoint, diffusers_resnet_prefix=diffusers_resnet_prefix, resnet_prefix=resnet_prefix
396
- )
397
- )
398
-
399
- diffusers_checkpoint.update(
400
- {
401
- # conv_norm_out
402
- "decoder.conv_norm_out.weight": checkpoint["decoder.norm_out.weight"],
403
- "decoder.conv_norm_out.bias": checkpoint["decoder.norm_out.bias"],
404
- # conv_out
405
- "decoder.conv_out.weight": checkpoint["decoder.conv_out.weight"],
406
- "decoder.conv_out.bias": checkpoint["decoder.conv_out.bias"],
407
- }
408
- )
409
-
410
- return diffusers_checkpoint
411
-
412
-
413
- def vqvae_resnet_to_diffusers_checkpoint(resnet, checkpoint, *, diffusers_resnet_prefix, resnet_prefix):
414
- rv = {
415
- # norm1
416
- f"{diffusers_resnet_prefix}.norm1.weight": checkpoint[f"{resnet_prefix}.norm1.weight"],
417
- f"{diffusers_resnet_prefix}.norm1.bias": checkpoint[f"{resnet_prefix}.norm1.bias"],
418
- # conv1
419
- f"{diffusers_resnet_prefix}.conv1.weight": checkpoint[f"{resnet_prefix}.conv1.weight"],
420
- f"{diffusers_resnet_prefix}.conv1.bias": checkpoint[f"{resnet_prefix}.conv1.bias"],
421
- # norm2
422
- f"{diffusers_resnet_prefix}.norm2.weight": checkpoint[f"{resnet_prefix}.norm2.weight"],
423
- f"{diffusers_resnet_prefix}.norm2.bias": checkpoint[f"{resnet_prefix}.norm2.bias"],
424
- # conv2
425
- f"{diffusers_resnet_prefix}.conv2.weight": checkpoint[f"{resnet_prefix}.conv2.weight"],
426
- f"{diffusers_resnet_prefix}.conv2.bias": checkpoint[f"{resnet_prefix}.conv2.bias"],
427
- }
428
-
429
- if resnet.conv_shortcut is not None:
430
- rv.update(
431
- {
432
- f"{diffusers_resnet_prefix}.conv_shortcut.weight": checkpoint[f"{resnet_prefix}.nin_shortcut.weight"],
433
- f"{diffusers_resnet_prefix}.conv_shortcut.bias": checkpoint[f"{resnet_prefix}.nin_shortcut.bias"],
434
- }
435
- )
436
-
437
- return rv
438
-
439
-
440
- def vqvae_attention_to_diffusers_checkpoint(checkpoint, *, diffusers_attention_prefix, attention_prefix):
441
- return {
442
- # group_norm
443
- f"{diffusers_attention_prefix}.group_norm.weight": checkpoint[f"{attention_prefix}.norm.weight"],
444
- f"{diffusers_attention_prefix}.group_norm.bias": checkpoint[f"{attention_prefix}.norm.bias"],
445
- # query
446
- f"{diffusers_attention_prefix}.query.weight": checkpoint[f"{attention_prefix}.q.weight"][:, :, 0, 0],
447
- f"{diffusers_attention_prefix}.query.bias": checkpoint[f"{attention_prefix}.q.bias"],
448
- # key
449
- f"{diffusers_attention_prefix}.key.weight": checkpoint[f"{attention_prefix}.k.weight"][:, :, 0, 0],
450
- f"{diffusers_attention_prefix}.key.bias": checkpoint[f"{attention_prefix}.k.bias"],
451
- # value
452
- f"{diffusers_attention_prefix}.value.weight": checkpoint[f"{attention_prefix}.v.weight"][:, :, 0, 0],
453
- f"{diffusers_attention_prefix}.value.bias": checkpoint[f"{attention_prefix}.v.bias"],
454
- # proj_attn
455
- f"{diffusers_attention_prefix}.proj_attn.weight": checkpoint[f"{attention_prefix}.proj_out.weight"][
456
- :, :, 0, 0
457
- ],
458
- f"{diffusers_attention_prefix}.proj_attn.bias": checkpoint[f"{attention_prefix}.proj_out.bias"],
459
- }
460
-
461
-
462
- # done vqvae checkpoint
463
-
464
- # transformer model
465
-
466
- PORTED_DIFFUSIONS = ["image_synthesis.modeling.transformers.diffusion_transformer.DiffusionTransformer"]
467
- PORTED_TRANSFORMERS = ["image_synthesis.modeling.transformers.transformer_utils.Text2ImageTransformer"]
468
- PORTED_CONTENT_EMBEDDINGS = ["image_synthesis.modeling.embeddings.dalle_mask_image_embedding.DalleMaskImageEmbedding"]
469
-
470
-
471
- def transformer_model_from_original_config(
472
- original_diffusion_config, original_transformer_config, original_content_embedding_config
473
- ):
474
- assert (
475
- original_diffusion_config.target in PORTED_DIFFUSIONS
476
- ), f"{original_diffusion_config.target} has not yet been ported to diffusers."
477
- assert (
478
- original_transformer_config.target in PORTED_TRANSFORMERS
479
- ), f"{original_transformer_config.target} has not yet been ported to diffusers."
480
- assert (
481
- original_content_embedding_config.target in PORTED_CONTENT_EMBEDDINGS
482
- ), f"{original_content_embedding_config.target} has not yet been ported to diffusers."
483
-
484
- original_diffusion_config = original_diffusion_config.params
485
- original_transformer_config = original_transformer_config.params
486
- original_content_embedding_config = original_content_embedding_config.params
487
-
488
- inner_dim = original_transformer_config["n_embd"]
489
-
490
- n_heads = original_transformer_config["n_head"]
491
-
492
- # VQ-Diffusion gives dimension of the multi-headed attention layers as the
493
- # number of attention heads times the sequence length (the dimension) of a
494
- # single head. We want to specify our attention blocks with those values
495
- # specified separately
496
- assert inner_dim % n_heads == 0
497
- d_head = inner_dim // n_heads
498
-
499
- depth = original_transformer_config["n_layer"]
500
- context_dim = original_transformer_config["condition_dim"]
501
-
502
- num_embed = original_content_embedding_config["num_embed"]
503
- # the number of embeddings in the transformer includes the mask embedding.
504
- # the content embedding (the vqvae) does not include the mask embedding.
505
- num_embed = num_embed + 1
506
-
507
- height = original_transformer_config["content_spatial_size"][0]
508
- width = original_transformer_config["content_spatial_size"][1]
509
-
510
- assert width == height, "width has to be equal to height"
511
- dropout = original_transformer_config["resid_pdrop"]
512
- num_embeds_ada_norm = original_diffusion_config["diffusion_step"]
513
-
514
- model_kwargs = {
515
- "attention_bias": True,
516
- "cross_attention_dim": context_dim,
517
- "attention_head_dim": d_head,
518
- "num_layers": depth,
519
- "dropout": dropout,
520
- "num_attention_heads": n_heads,
521
- "num_vector_embeds": num_embed,
522
- "num_embeds_ada_norm": num_embeds_ada_norm,
523
- "norm_num_groups": 32,
524
- "sample_size": width,
525
- "activation_fn": "geglu-approximate",
526
- }
527
-
528
- model = Transformer2DModel(**model_kwargs)
529
- return model
530
-
531
-
532
- # done transformer model
533
-
534
- # transformer checkpoint
535
-
536
-
537
- def transformer_original_checkpoint_to_diffusers_checkpoint(model, checkpoint):
538
- diffusers_checkpoint = {}
539
-
540
- transformer_prefix = "transformer.transformer"
541
-
542
- diffusers_latent_image_embedding_prefix = "latent_image_embedding"
543
- latent_image_embedding_prefix = f"{transformer_prefix}.content_emb"
544
-
545
- # DalleMaskImageEmbedding
546
- diffusers_checkpoint.update(
547
- {
548
- f"{diffusers_latent_image_embedding_prefix}.emb.weight": checkpoint[
549
- f"{latent_image_embedding_prefix}.emb.weight"
550
- ],
551
- f"{diffusers_latent_image_embedding_prefix}.height_emb.weight": checkpoint[
552
- f"{latent_image_embedding_prefix}.height_emb.weight"
553
- ],
554
- f"{diffusers_latent_image_embedding_prefix}.width_emb.weight": checkpoint[
555
- f"{latent_image_embedding_prefix}.width_emb.weight"
556
- ],
557
- }
558
- )
559
-
560
- # transformer blocks
561
- for transformer_block_idx, transformer_block in enumerate(model.transformer_blocks):
562
- diffusers_transformer_block_prefix = f"transformer_blocks.{transformer_block_idx}"
563
- transformer_block_prefix = f"{transformer_prefix}.blocks.{transformer_block_idx}"
564
-
565
- # ada norm block
566
- diffusers_ada_norm_prefix = f"{diffusers_transformer_block_prefix}.norm1"
567
- ada_norm_prefix = f"{transformer_block_prefix}.ln1"
568
-
569
- diffusers_checkpoint.update(
570
- transformer_ada_norm_to_diffusers_checkpoint(
571
- checkpoint, diffusers_ada_norm_prefix=diffusers_ada_norm_prefix, ada_norm_prefix=ada_norm_prefix
572
- )
573
- )
574
-
575
- # attention block
576
- diffusers_attention_prefix = f"{diffusers_transformer_block_prefix}.attn1"
577
- attention_prefix = f"{transformer_block_prefix}.attn1"
578
-
579
- diffusers_checkpoint.update(
580
- transformer_attention_to_diffusers_checkpoint(
581
- checkpoint, diffusers_attention_prefix=diffusers_attention_prefix, attention_prefix=attention_prefix
582
- )
583
- )
584
-
585
- # ada norm block
586
- diffusers_ada_norm_prefix = f"{diffusers_transformer_block_prefix}.norm2"
587
- ada_norm_prefix = f"{transformer_block_prefix}.ln1_1"
588
-
589
- diffusers_checkpoint.update(
590
- transformer_ada_norm_to_diffusers_checkpoint(
591
- checkpoint, diffusers_ada_norm_prefix=diffusers_ada_norm_prefix, ada_norm_prefix=ada_norm_prefix
592
- )
593
- )
594
-
595
- # attention block
596
- diffusers_attention_prefix = f"{diffusers_transformer_block_prefix}.attn2"
597
- attention_prefix = f"{transformer_block_prefix}.attn2"
598
-
599
- diffusers_checkpoint.update(
600
- transformer_attention_to_diffusers_checkpoint(
601
- checkpoint, diffusers_attention_prefix=diffusers_attention_prefix, attention_prefix=attention_prefix
602
- )
603
- )
604
-
605
- # norm block
606
- diffusers_norm_block_prefix = f"{diffusers_transformer_block_prefix}.norm3"
607
- norm_block_prefix = f"{transformer_block_prefix}.ln2"
608
-
609
- diffusers_checkpoint.update(
610
- {
611
- f"{diffusers_norm_block_prefix}.weight": checkpoint[f"{norm_block_prefix}.weight"],
612
- f"{diffusers_norm_block_prefix}.bias": checkpoint[f"{norm_block_prefix}.bias"],
613
- }
614
- )
615
-
616
- # feedforward block
617
- diffusers_feedforward_prefix = f"{diffusers_transformer_block_prefix}.ff"
618
- feedforward_prefix = f"{transformer_block_prefix}.mlp"
619
-
620
- diffusers_checkpoint.update(
621
- transformer_feedforward_to_diffusers_checkpoint(
622
- checkpoint,
623
- diffusers_feedforward_prefix=diffusers_feedforward_prefix,
624
- feedforward_prefix=feedforward_prefix,
625
- )
626
- )
627
-
628
- # to logits
629
-
630
- diffusers_norm_out_prefix = "norm_out"
631
- norm_out_prefix = f"{transformer_prefix}.to_logits.0"
632
-
633
- diffusers_checkpoint.update(
634
- {
635
- f"{diffusers_norm_out_prefix}.weight": checkpoint[f"{norm_out_prefix}.weight"],
636
- f"{diffusers_norm_out_prefix}.bias": checkpoint[f"{norm_out_prefix}.bias"],
637
- }
638
- )
639
-
640
- diffusers_out_prefix = "out"
641
- out_prefix = f"{transformer_prefix}.to_logits.1"
642
-
643
- diffusers_checkpoint.update(
644
- {
645
- f"{diffusers_out_prefix}.weight": checkpoint[f"{out_prefix}.weight"],
646
- f"{diffusers_out_prefix}.bias": checkpoint[f"{out_prefix}.bias"],
647
- }
648
- )
649
-
650
- return diffusers_checkpoint
651
-
652
-
653
- def transformer_ada_norm_to_diffusers_checkpoint(checkpoint, *, diffusers_ada_norm_prefix, ada_norm_prefix):
654
- return {
655
- f"{diffusers_ada_norm_prefix}.emb.weight": checkpoint[f"{ada_norm_prefix}.emb.weight"],
656
- f"{diffusers_ada_norm_prefix}.linear.weight": checkpoint[f"{ada_norm_prefix}.linear.weight"],
657
- f"{diffusers_ada_norm_prefix}.linear.bias": checkpoint[f"{ada_norm_prefix}.linear.bias"],
658
- }
659
-
660
-
661
- def transformer_attention_to_diffusers_checkpoint(checkpoint, *, diffusers_attention_prefix, attention_prefix):
662
- return {
663
- # key
664
- f"{diffusers_attention_prefix}.to_k.weight": checkpoint[f"{attention_prefix}.key.weight"],
665
- f"{diffusers_attention_prefix}.to_k.bias": checkpoint[f"{attention_prefix}.key.bias"],
666
- # query
667
- f"{diffusers_attention_prefix}.to_q.weight": checkpoint[f"{attention_prefix}.query.weight"],
668
- f"{diffusers_attention_prefix}.to_q.bias": checkpoint[f"{attention_prefix}.query.bias"],
669
- # value
670
- f"{diffusers_attention_prefix}.to_v.weight": checkpoint[f"{attention_prefix}.value.weight"],
671
- f"{diffusers_attention_prefix}.to_v.bias": checkpoint[f"{attention_prefix}.value.bias"],
672
- # linear out
673
- f"{diffusers_attention_prefix}.to_out.0.weight": checkpoint[f"{attention_prefix}.proj.weight"],
674
- f"{diffusers_attention_prefix}.to_out.0.bias": checkpoint[f"{attention_prefix}.proj.bias"],
675
- }
676
-
677
-
678
- def transformer_feedforward_to_diffusers_checkpoint(checkpoint, *, diffusers_feedforward_prefix, feedforward_prefix):
679
- return {
680
- f"{diffusers_feedforward_prefix}.net.0.proj.weight": checkpoint[f"{feedforward_prefix}.0.weight"],
681
- f"{diffusers_feedforward_prefix}.net.0.proj.bias": checkpoint[f"{feedforward_prefix}.0.bias"],
682
- f"{diffusers_feedforward_prefix}.net.2.weight": checkpoint[f"{feedforward_prefix}.2.weight"],
683
- f"{diffusers_feedforward_prefix}.net.2.bias": checkpoint[f"{feedforward_prefix}.2.bias"],
684
- }
685
-
686
-
687
- # done transformer checkpoint
688
-
689
-
690
- def read_config_file(filename):
691
- # The yaml file contains annotations that certain values should
692
- # loaded as tuples. By default, OmegaConf will panic when reading
693
- # these. Instead, we can manually read the yaml with the FullLoader and then
694
- # construct the OmegaConf object.
695
- with open(filename) as f:
696
- original_config = yaml.load(f, FullLoader)
697
-
698
- return OmegaConf.create(original_config)
699
-
700
-
701
- # We take separate arguments for the vqvae because the ITHQ vqvae config file
702
- # is separate from the config file for the rest of the model.
703
- if __name__ == "__main__":
704
- parser = argparse.ArgumentParser()
705
-
706
- parser.add_argument(
707
- "--vqvae_checkpoint_path",
708
- default=None,
709
- type=str,
710
- required=True,
711
- help="Path to the vqvae checkpoint to convert.",
712
- )
713
-
714
- parser.add_argument(
715
- "--vqvae_original_config_file",
716
- default=None,
717
- type=str,
718
- required=True,
719
- help="The YAML config file corresponding to the original architecture for the vqvae.",
720
- )
721
-
722
- parser.add_argument(
723
- "--checkpoint_path", default=None, type=str, required=True, help="Path to the checkpoint to convert."
724
- )
725
-
726
- parser.add_argument(
727
- "--original_config_file",
728
- default=None,
729
- type=str,
730
- required=True,
731
- help="The YAML config file corresponding to the original architecture.",
732
- )
733
-
734
- parser.add_argument("--dump_path", default=None, type=str, required=True, help="Path to the output model.")
735
-
736
- parser.add_argument(
737
- "--checkpoint_load_device",
738
- default="cpu",
739
- type=str,
740
- required=False,
741
- help="The device passed to `map_location` when loading checkpoints.",
742
- )
743
-
744
- # See link for how ema weights are always selected
745
- # https://github.com/microsoft/VQ-Diffusion/blob/3c98e77f721db7c787b76304fa2c96a36c7b00af/inference_VQ_Diffusion.py#L65
746
- parser.add_argument(
747
- "--no_use_ema",
748
- action="store_true",
749
- required=False,
750
- help=(
751
- "Set to not use the ema weights from the original VQ-Diffusion checkpoint. You probably do not want to set"
752
- " it as the original VQ-Diffusion always uses the ema weights when loading models."
753
- ),
754
- )
755
-
756
- args = parser.parse_args()
757
-
758
- use_ema = not args.no_use_ema
759
-
760
- print(f"loading checkpoints to {args.checkpoint_load_device}")
761
-
762
- checkpoint_map_location = torch.device(args.checkpoint_load_device)
763
-
764
- # vqvae_model
765
-
766
- print(f"loading vqvae, config: {args.vqvae_original_config_file}, checkpoint: {args.vqvae_checkpoint_path}")
767
-
768
- vqvae_original_config = read_config_file(args.vqvae_original_config_file).model
769
- vqvae_checkpoint = torch.load(args.vqvae_checkpoint_path, map_location=checkpoint_map_location)["model"]
770
-
771
- with init_empty_weights():
772
- vqvae_model = vqvae_model_from_original_config(vqvae_original_config)
773
-
774
- vqvae_diffusers_checkpoint = vqvae_original_checkpoint_to_diffusers_checkpoint(vqvae_model, vqvae_checkpoint)
775
-
776
- with tempfile.NamedTemporaryFile() as vqvae_diffusers_checkpoint_file:
777
- torch.save(vqvae_diffusers_checkpoint, vqvae_diffusers_checkpoint_file.name)
778
- del vqvae_diffusers_checkpoint
779
- del vqvae_checkpoint
780
- load_checkpoint_and_dispatch(vqvae_model, vqvae_diffusers_checkpoint_file.name, device_map="auto")
781
-
782
- print("done loading vqvae")
783
-
784
- # done vqvae_model
785
-
786
- # transformer_model
787
-
788
- print(
789
- f"loading transformer, config: {args.original_config_file}, checkpoint: {args.checkpoint_path}, use ema:"
790
- f" {use_ema}"
791
- )
792
-
793
- original_config = read_config_file(args.original_config_file).model
794
-
795
- diffusion_config = original_config.params.diffusion_config
796
- transformer_config = original_config.params.diffusion_config.params.transformer_config
797
- content_embedding_config = original_config.params.diffusion_config.params.content_emb_config
798
-
799
- pre_checkpoint = torch.load(args.checkpoint_path, map_location=checkpoint_map_location)
800
-
801
- if use_ema:
802
- if "ema" in pre_checkpoint:
803
- checkpoint = {}
804
- for k, v in pre_checkpoint["model"].items():
805
- checkpoint[k] = v
806
-
807
- for k, v in pre_checkpoint["ema"].items():
808
- # The ema weights are only used on the transformer. To mimic their key as if they came
809
- # from the state_dict for the top level model, we prefix with an additional "transformer."
810
- # See the source linked in the args.use_ema config for more information.
811
- checkpoint[f"transformer.{k}"] = v
812
- else:
813
- print("attempted to load ema weights but no ema weights are specified in the loaded checkpoint.")
814
- checkpoint = pre_checkpoint["model"]
815
- else:
816
- checkpoint = pre_checkpoint["model"]
817
-
818
- del pre_checkpoint
819
-
820
- with init_empty_weights():
821
- transformer_model = transformer_model_from_original_config(
822
- diffusion_config, transformer_config, content_embedding_config
823
- )
824
-
825
- diffusers_transformer_checkpoint = transformer_original_checkpoint_to_diffusers_checkpoint(
826
- transformer_model, checkpoint
827
- )
828
-
829
- # classifier free sampling embeddings interlude
830
-
831
- # The learned embeddings are stored on the transformer in the original VQ-diffusion. We store them on a separate
832
- # model, so we pull them off the checkpoint before the checkpoint is deleted.
833
-
834
- learnable_classifier_free_sampling_embeddings = diffusion_config.params.learnable_cf
835
-
836
- if learnable_classifier_free_sampling_embeddings:
837
- learned_classifier_free_sampling_embeddings_embeddings = checkpoint["transformer.empty_text_embed"]
838
- else:
839
- learned_classifier_free_sampling_embeddings_embeddings = None
840
-
841
- # done classifier free sampling embeddings interlude
842
-
843
- with tempfile.NamedTemporaryFile() as diffusers_transformer_checkpoint_file:
844
- torch.save(diffusers_transformer_checkpoint, diffusers_transformer_checkpoint_file.name)
845
- del diffusers_transformer_checkpoint
846
- del checkpoint
847
- load_checkpoint_and_dispatch(transformer_model, diffusers_transformer_checkpoint_file.name, device_map="auto")
848
-
849
- print("done loading transformer")
850
-
851
- # done transformer_model
852
-
853
- # text encoder
854
-
855
- print("loading CLIP text encoder")
856
-
857
- clip_name = "openai/clip-vit-base-patch32"
858
-
859
- # The original VQ-Diffusion specifies the pad value by the int used in the
860
- # returned tokens. Each model uses `0` as the pad value. The transformers clip api
861
- # specifies the pad value via the token before it has been tokenized. The `!` pad
862
- # token is the same as padding with the `0` pad value.
863
- pad_token = "!"
864
-
865
- tokenizer_model = CLIPTokenizer.from_pretrained(clip_name, pad_token=pad_token, device_map="auto")
866
-
867
- assert tokenizer_model.convert_tokens_to_ids(pad_token) == 0
868
-
869
- text_encoder_model = CLIPTextModel.from_pretrained(
870
- clip_name,
871
- # `CLIPTextModel` does not support device_map="auto"
872
- # device_map="auto"
873
- )
874
-
875
- print("done loading CLIP text encoder")
876
-
877
- # done text encoder
878
-
879
- # scheduler
880
-
881
- scheduler_model = VQDiffusionScheduler(
882
- # the scheduler has the same number of embeddings as the transformer
883
- num_vec_classes=transformer_model.num_vector_embeds
884
- )
885
-
886
- # done scheduler
887
-
888
- # learned classifier free sampling embeddings
889
-
890
- with init_empty_weights():
891
- learned_classifier_free_sampling_embeddings_model = LearnedClassifierFreeSamplingEmbeddings(
892
- learnable_classifier_free_sampling_embeddings,
893
- hidden_size=text_encoder_model.config.hidden_size,
894
- length=tokenizer_model.model_max_length,
895
- )
896
-
897
- learned_classifier_free_sampling_checkpoint = {
898
- "embeddings": learned_classifier_free_sampling_embeddings_embeddings.float()
899
- }
900
-
901
- with tempfile.NamedTemporaryFile() as learned_classifier_free_sampling_checkpoint_file:
902
- torch.save(learned_classifier_free_sampling_checkpoint, learned_classifier_free_sampling_checkpoint_file.name)
903
- del learned_classifier_free_sampling_checkpoint
904
- del learned_classifier_free_sampling_embeddings_embeddings
905
- load_checkpoint_and_dispatch(
906
- learned_classifier_free_sampling_embeddings_model,
907
- learned_classifier_free_sampling_checkpoint_file.name,
908
- device_map="auto",
909
- )
910
-
911
- # done learned classifier free sampling embeddings
912
-
913
- print(f"saving VQ diffusion model, path: {args.dump_path}")
914
-
915
- pipe = VQDiffusionPipeline(
916
- vqvae=vqvae_model,
917
- transformer=transformer_model,
918
- tokenizer=tokenizer_model,
919
- text_encoder=text_encoder_model,
920
- learned_classifier_free_sampling_embeddings=learned_classifier_free_sampling_embeddings_model,
921
- scheduler=scheduler_model,
922
- )
923
- pipe.save_pretrained(args.dump_path)
924
-
925
- print("done writing VQ diffusion model")
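The deleted converter script above ends by assembling and saving a `VQDiffusionPipeline`. As a hedged follow-up sketch (every file path below is a placeholder, not an artifact shipped with this repo), the saved pipeline can be reloaded with `diffusers` and sampled from:

```python
# Hypothetical follow-up after running the converter, e.g.
#   python convert_vq_diffusion_to_diffusers.py \
#       --vqvae_checkpoint_path ithq_vqvae.pth \
#       --vqvae_original_config_file ithq_vqvae.yaml \
#       --checkpoint_path ithq_learnable.pth \
#       --original_config_file ithq.yaml \
#       --dump_path ./vq-diffusion-ithq
# The directory written by pipe.save_pretrained() can then be reloaded:
from diffusers import VQDiffusionPipeline

pipe = VQDiffusionPipeline.from_pretrained("./vq-diffusion-ithq")
pipe = pipe.to("cuda")  # optional, if a GPU is available

output = pipe("a teddy bear playing in the pool", num_inference_steps=100)
output.images[0].save("sample.png")
```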
spaces/Andy1621/uniformer_image_detection/configs/cornernet/README.md DELETED
@@ -1,33 +0,0 @@
1
- # CornerNet
2
-
3
- ## Introduction
4
-
5
- [ALGORITHM]
6
-
7
- ```latex
8
- @inproceedings{law2018cornernet,
9
- title={Cornernet: Detecting objects as paired keypoints},
10
- author={Law, Hei and Deng, Jia},
11
- booktitle={15th European Conference on Computer Vision, ECCV 2018},
12
- pages={765--781},
13
- year={2018},
14
- organization={Springer Verlag}
15
- }
16
- ```
17
-
18
- ## Results and models
19
-
20
- | Backbone | Batch Size | Step/Total Epochs | Mem (GB) | Inf time (fps) | box AP | Config | Download |
21
- | :-------------: | :--------: |:----------------: | :------: | :------------: | :----: | :------: | :--------: |
22
- | HourglassNet-104 | [10 x 5](./cornernet_hourglass104_mstest_10x5_210e_coco.py) | 180/210 | 13.9 | 4.2 | 41.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cornernet/cornernet_hourglass104_mstest_10x5_210e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_10x5_210e_coco/cornernet_hourglass104_mstest_10x5_210e_coco_20200824_185720-5fefbf1c.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_10x5_210e_coco/cornernet_hourglass104_mstest_10x5_210e_coco_20200824_185720.log.json) |
23
- | HourglassNet-104 | [8 x 6](./cornernet_hourglass104_mstest_8x6_210e_coco.py) | 180/210 | 15.9 | 4.2 | 41.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cornernet/cornernet_hourglass104_mstest_8x6_210e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_8x6_210e_coco/cornernet_hourglass104_mstest_8x6_210e_coco_20200825_150618-79b44c30.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_8x6_210e_coco/cornernet_hourglass104_mstest_8x6_210e_coco_20200825_150618.log.json) |
24
- | HourglassNet-104 | [32 x 3](./cornernet_hourglass104_mstest_32x3_210e_coco.py) | 180/210 | 9.5 | 3.9 | 40.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cornernet/cornernet_hourglass104_mstest_32x3_210e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_32x3_210e_coco/cornernet_hourglass104_mstest_32x3_210e_coco_20200819_203110-1efaea91.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_32x3_210e_coco/cornernet_hourglass104_mstest_32x3_210e_coco_20200819_203110.log.json) |
25
-
26
- Note:
27
-
28
- - TTA setting is single-scale and `flip=True`.
29
- - Experiments with `images_per_gpu=6` are conducted on Tesla V100-SXM2-32GB, `images_per_gpu=3` are conducted on GeForce GTX 1080 Ti.
30
- - Here are the descriptions of each experiment setting:
31
- - 10 x 5: 10 GPUs with 5 images per gpu. This is the same setting as that reported in the original paper.
32
- - 8 x 6: 8 GPUs with 6 images per gpu. The total batchsize is similar to paper and only need 1 node to train.
33
- - 32 x 3: 32 GPUs with 3 images per gpu. The default setting for 1080TI and need 4 nodes to train.
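For context on how the removed configs above would have been consumed, here is a hedged inference sketch using MMDetection's 2.x Python API. The config path (relative to the mmdetection repo root) and the checkpoint URL come from the table above; the demo image path is a placeholder.

```python
# Minimal single-image inference sketch (mmdet 2.x assumed).
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/cornernet/cornernet_hourglass104_mstest_10x5_210e_coco.py'
checkpoint_file = ('http://download.openmmlab.com/mmdetection/v2.0/cornernet/'
                   'cornernet_hourglass104_mstest_10x5_210e_coco/'
                   'cornernet_hourglass104_mstest_10x5_210e_coco_20200824_185720-5fefbf1c.pth')

model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'demo/demo.jpg')            # placeholder image path
model.show_result('demo/demo.jpg', result, out_file='cornernet_result.jpg')
```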
spaces/Anonymous-sub/Rerender/ControlNet/tutorial_dataset.py DELETED
@@ -1,39 +0,0 @@
1
- import json
2
- import cv2
3
- import numpy as np
4
-
5
- from torch.utils.data import Dataset
6
-
7
-
8
- class MyDataset(Dataset):
9
- def __init__(self):
10
- self.data = []
11
- with open('./training/fill50k/prompt.json', 'rt') as f:
12
- for line in f:
13
- self.data.append(json.loads(line))
14
-
15
- def __len__(self):
16
- return len(self.data)
17
-
18
- def __getitem__(self, idx):
19
- item = self.data[idx]
20
-
21
- source_filename = item['source']
22
- target_filename = item['target']
23
- prompt = item['prompt']
24
-
25
- source = cv2.imread('./training/fill50k/' + source_filename)
26
- target = cv2.imread('./training/fill50k/' + target_filename)
27
-
28
- # Do not forget that OpenCV read images in BGR order.
29
- source = cv2.cvtColor(source, cv2.COLOR_BGR2RGB)
30
- target = cv2.cvtColor(target, cv2.COLOR_BGR2RGB)
31
-
32
- # Normalize source images to [0, 1].
33
- source = source.astype(np.float32) / 255.0
34
-
35
- # Normalize target images to [-1, 1].
36
- target = (target.astype(np.float32) / 127.5) - 1.0
37
-
38
- return dict(jpg=target, txt=prompt, hint=source)
39
-
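The deleted tutorial dataset returns `dict(jpg=..., txt=..., hint=...)` items, so a minimal usage sketch (assuming the `./training/fill50k` layout described in the code) is simply to wrap it in a `DataLoader`:

```python
from torch.utils.data import DataLoader
from tutorial_dataset import MyDataset

dataset = MyDataset()
print(len(dataset))          # number of prompt/source/target triples in prompt.json

loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=2)
batch = next(iter(loader))
print(batch['jpg'].shape)    # target images, float32 in [-1, 1]
print(batch['hint'].shape)   # source/control images, float32 in [0, 1]
print(batch['txt'][:2])      # the first two prompts in the batch
```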
spaces/Apex-X/nono/roop/capturer.py DELETED
@@ -1,22 +0,0 @@
1
- from typing import Optional
2
- import cv2
3
-
4
- from roop.typing import Frame
5
-
6
-
7
- def get_video_frame(video_path: str, frame_number: int = 0) -> Optional[Frame]:
8
- capture = cv2.VideoCapture(video_path)
9
- frame_total = capture.get(cv2.CAP_PROP_FRAME_COUNT)
10
- capture.set(cv2.CAP_PROP_POS_FRAMES, min(frame_total, frame_number - 1))
11
- has_frame, frame = capture.read()
12
- capture.release()
13
- if has_frame:
14
- return frame
15
- return None
16
-
17
-
18
- def get_video_frame_total(video_path: str) -> int:
19
- capture = cv2.VideoCapture(video_path)
20
- video_frame_total = int(capture.get(cv2.CAP_PROP_FRAME_COUNT))
21
- capture.release()
22
- return video_frame_total
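A short usage sketch for the two helpers in the removed `capturer.py`; `'input.mp4'` is a placeholder path and the import path follows the repo layout shown above:

```python
import cv2
from roop.capturer import get_video_frame, get_video_frame_total

total = get_video_frame_total('input.mp4')                       # placeholder video path
frame = get_video_frame('input.mp4', frame_number=min(10, total))
if frame is not None:
    cv2.imwrite('frame_10.png', frame)                           # frames come back in BGR, as cv2 expects
print(f'{total} frames in total')
```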
spaces/Ashish17/Ashish_Open_Chat_AI_17/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: Ashish Open Chat AI 17
3
- emoji: 📚
4
- colorFrom: red
5
- colorTo: indigo
6
- sdk: gradio
7
- sdk_version: 3.39.0
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/__init__.py DELETED
@@ -1,25 +0,0 @@
1
- """distutils.command
2
-
3
- Package containing implementation of all the standard Distutils
4
- commands."""
5
-
6
- __all__ = [ # noqa: F822
7
- 'build',
8
- 'build_py',
9
- 'build_ext',
10
- 'build_clib',
11
- 'build_scripts',
12
- 'clean',
13
- 'install',
14
- 'install_lib',
15
- 'install_headers',
16
- 'install_scripts',
17
- 'install_data',
18
- 'sdist',
19
- 'register',
20
- 'bdist',
21
- 'bdist_dumb',
22
- 'bdist_rpm',
23
- 'check',
24
- 'upload',
25
- ]
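This removed `__init__.py` only enumerates command names; each entry corresponds to a `Command` subclass defined in its own module. A quick illustration against the stdlib `distutils` copy that this vendored package mirrors (only on Python versions that still ship `distutils`):

```python
# Sketch: the package declares the names; the classes live in per-command modules.
from distutils.command import __all__ as command_names
from distutils.command.build_py import build_py

print(command_names[:5])       # ['build', 'build_py', 'build_ext', 'build_clib', 'build_scripts']
print(build_py.description)    # one-line help string shown by `setup.py --help-commands`
```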
spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/__init__.py DELETED
@@ -1,25 +0,0 @@
1
- from .factory import (
2
- list_models,
3
- create_model,
4
- create_model_and_transforms,
5
- add_model_config,
6
- )
7
- from .loss import ClipLoss, gather_features, LPLoss, lp_gather_features, LPMetrics
8
- from .model import (
9
- CLAP,
10
- CLAPTextCfg,
11
- CLAPVisionCfg,
12
- CLAPAudioCfp,
13
- convert_weights_to_fp16,
14
- trace_model,
15
- )
16
- from .openai import load_openai_model, list_openai_models
17
- from .pretrained import (
18
- list_pretrained,
19
- list_pretrained_tag_models,
20
- list_pretrained_model_tags,
21
- get_pretrained_url,
22
- download_pretrained,
23
- )
24
- from .tokenizer import SimpleTokenizer, tokenize
25
- from .transform import image_transform
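The removed `__init__.py` above simply re-exports the CLAP fork of the open_clip API. A hedged usage sketch follows; the import path is inferred from the file location in this repo, and the text strings are placeholders:

```python
from models.CLAP.open_clip import list_models, tokenize

print(list_models())                                    # available architecture names
tokens = tokenize(["a dog barking", "rain on a roof"])  # LongTensor of token ids
print(tokens.shape)                                     # (batch, context_length)
```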
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/transforms/custom_augmentation_impl.py DELETED
@@ -1,63 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
3
- # Modified by Xingyi Zhou
4
- """
5
- Implement many useful :class:`Augmentation`.
6
- """
7
- import numpy as np
8
- import sys
9
- from fvcore.transforms.transform import (
10
- BlendTransform,
11
- CropTransform,
12
- HFlipTransform,
13
- NoOpTransform,
14
- Transform,
15
- VFlipTransform,
16
- )
17
- from PIL import Image
18
-
19
- from detectron2.data.transforms.augmentation import Augmentation
20
- from .custom_transform import EfficientDetResizeCropTransform
21
-
22
- __all__ = [
23
- "EfficientDetResizeCrop",
24
- ]
25
-
26
-
27
- class EfficientDetResizeCrop(Augmentation):
28
- """
29
- Scale the shorter edge to the given size, with a limit of `max_size` on the longer edge.
30
- If `max_size` is reached, then downscale so that the longer edge does not exceed max_size.
31
- """
32
-
33
- def __init__(
34
- self, size, scale, interp=Image.BILINEAR
35
- ):
36
- """
37
- Args:
38
- """
39
- super().__init__()
40
- self.target_size = (size, size)
41
- self.scale = scale
42
- self.interp = interp
43
-
44
- def get_transform(self, img):
45
- # Select a random scale factor.
46
- scale_factor = np.random.uniform(*self.scale)
47
- scaled_target_height = scale_factor * self.target_size[0]
48
- scaled_target_width = scale_factor * self.target_size[1]
49
- # Recompute the accurate scale_factor using rounded scaled image size.
50
- width, height = img.shape[1], img.shape[0]
51
- img_scale_y = scaled_target_height / height
52
- img_scale_x = scaled_target_width / width
53
- img_scale = min(img_scale_y, img_scale_x)
54
-
55
- # Select non-zero random offset (x, y) if scaled image is larger than target size
56
- scaled_h = int(height * img_scale)
57
- scaled_w = int(width * img_scale)
58
- offset_y = scaled_h - self.target_size[0]
59
- offset_x = scaled_w - self.target_size[1]
60
- offset_y = int(max(0.0, float(offset_y)) * np.random.uniform(0, 1))
61
- offset_x = int(max(0.0, float(offset_x)) * np.random.uniform(0, 1))
62
- return EfficientDetResizeCropTransform(
63
- scaled_h, scaled_w, offset_y, offset_x, img_scale, self.target_size, self.interp)
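A usage sketch for the removed augmentation (detectron2/fvcore assumed). The import path follows the file location shown above, and `apply_image` is provided by `EfficientDetResizeCropTransform` in `custom_transform.py`, which is not part of this diff:

```python
import numpy as np
from centernet.data.transforms.custom_augmentation_impl import EfficientDetResizeCrop

aug = EfficientDetResizeCrop(size=640, scale=(0.1, 2.0))
image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # placeholder image

transform = aug.get_transform(image)     # samples a random scale and crop offset
resized = transform.apply_image(image)   # apply the same transform to the pixels
print(resized.shape)                     # resized-and-cropped image
```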
spaces/AzinZ/vitscn/text/__init__.py DELETED
@@ -1,54 +0,0 @@
1
- """ from https://github.com/keithito/tacotron """
2
- from text import cleaners
3
- from text.symbols import symbols
4
-
5
-
6
- # Mappings from symbol to numeric ID and vice versa:
7
- _symbol_to_id = {s: i for i, s in enumerate(symbols)}
8
- _id_to_symbol = {i: s for i, s in enumerate(symbols)}
9
-
10
-
11
- def text_to_sequence(text, cleaner_names):
12
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
13
- Args:
14
- text: string to convert to a sequence
15
- cleaner_names: names of the cleaner functions to run the text through
16
- Returns:
17
- List of integers corresponding to the symbols in the text
18
- '''
19
- sequence = []
20
-
21
- clean_text = _clean_text(text, cleaner_names)
22
- for symbol in clean_text:
23
- symbol_id = _symbol_to_id[symbol]
24
- sequence += [symbol_id]
25
- return sequence
26
-
27
-
28
- def cleaned_text_to_sequence(cleaned_text):
29
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
30
- Args:
31
- text: string to convert to a sequence
32
- Returns:
33
- List of integers corresponding to the symbols in the text
34
- '''
35
- sequence = [_symbol_to_id[symbol] for symbol in cleaned_text]
36
- return sequence
37
-
38
-
39
- def sequence_to_text(sequence):
40
- '''Converts a sequence of IDs back to a string'''
41
- result = ''
42
- for symbol_id in sequence:
43
- s = _id_to_symbol[symbol_id]
44
- result += s
45
- return result
46
-
47
-
48
- def _clean_text(text, cleaner_names):
49
- for name in cleaner_names:
50
- cleaner = getattr(cleaners, name)
51
- if not cleaner:
52
- raise Exception('Unknown cleaner: %s' % name)
53
- text = cleaner(text)
54
- return text
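A round-trip sketch for the removed text front-end. Note that `"basic_cleaners"` is a hypothetical cleaner name; the cleaners actually available are whatever `text/cleaners.py` defines in this project.

```python
from text import text_to_sequence, sequence_to_text

ids = text_to_sequence("ni hao shi jie", ["basic_cleaners"])  # cleaner name is an assumption
print(ids)                     # symbol ids, values depend on the symbol set
print(sequence_to_text(ids))   # should reproduce the cleaned input string
```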
spaces/Banbri/zcvzcv/src/app/interface/maintenance/index.tsx DELETED
@@ -1,20 +0,0 @@
1
- import { fonts } from "@/lib/fonts"
2
- import { cn } from "@/lib/utils"
3
-
4
- export function Maintenance() {
5
- return (
6
- <div className="z-20 fixed inset-0 w-screen h-screen bg-white text-stone-800 flex flex-col items-center justify-center">
7
- <div className={cn(
8
- fonts.actionman.className,
9
- "text-center"
10
- )}>
11
- <p className="text-4xl">🚧 Maintenance in progress 🚧</p>
12
- <p className="text-3xl mt-12 mb-8">See the <a
13
- href="https://huggingface.co/spaces/jbilcke-hf/ai-comic-factory/discussions/339"
14
- className="underline text-yellow-500"
15
- >announcement here</a> <img src="/quick-and-dirty-emoji.png" className="inline w-10 h-10"></img></p>
16
- <p className="text-2xl">This shouldn&apos;t last long, so stay tuned!</p>
17
- </div>
18
- </div>
19
- )
20
- }
spaces/Benson/text-generation/Examples/Aparcamiento De Coches Multijugador Apk Skachat.md DELETED
@@ -1,75 +0,0 @@
1
-
2
- <h1>Aparcamiento de coches multijugador APK Skachat: Una guía para descargar y jugar el juego en su PC</h1>
3
- <p>Si usted está buscando un juego de simulación de estacionamiento de coches realista y divertido, es posible que desee probar Parking Multijugador. Este juego es desarrollado por olzhass y tiene más de 100 millones de descargas en Google Play Store. Pero ¿qué pasa si desea jugar en su PC en lugar de su dispositivo móvil? En este artículo, le mostraremos cómo descargar e instalar Aparcamiento de coches multijugador APK Skachat en su PC utilizando dos emuladores populares de Android: BlueStacks y NoxPlayer. También te daremos algunos consejos sobre cómo jugar el juego en tu PC y disfrutar de sus características. </p>
4
- <h2>¿Qué es el Aparcamiento Multijugador? </h2>
5
- <p>Car Parking Multiplayer es un juego de simulación que te permite experimentar la emoción de aparcar varios coches en diferentes escenarios. Puede elegir entre más de 100 coches con interiores reales y personalizarlos con afinación, vinilos y partes del cuerpo. También puede explorar un mundo abierto con estaciones de servicio y servicios de automóviles reales, competir contra otros jugadores en carreras multijugador, intercambiar coches con otros jugadores, chatear con amigos e incluso jugar roles como oficial de policía. </p>
6
- <h2>aparcamiento de coches multijugador apk skachat</h2><br /><p><b><b>DOWNLOAD</b> &raquo; <a href="https://bltlly.com/2v6L8k">https://bltlly.com/2v6L8k</a></b></p><br /><br />
7
- <h3>Características del juego</h3>
8
- <p>Algunas de las características de Aparcamiento multijugador son:</p>
9
- <ul>
10
- <li> Modo multijugador de mundo abierto con caminar gratis, chat de voz, lista de amigos y modo policía. </li>
11
- <li>82 desafíos de estacionamiento y conducción en la vida real con diferentes vehículos, como remolques, camionetas, camiones, autos deportivos y autos clásicos. </li>
12
- <li>Gráficos de alta calidad y efectos de sonido con física realista y sistema de daños. </li>
13
- <li> Personalización del coche con suspensión ajustable, ángulo de rueda, ajuste del motor, turbo, caja de cambios, escape y visual auto tungs. </li>
14
- <li>Entornos altamente detallados con edificios con interior, ciclo día-noche, efectos meteorológicos y sistema de tráfico. </li>
15
- </ul>
16
- <h3>Requisitos y compatibilidad</h3>
17
-
18
- <p>Para jugar Car Parking Multijugador en su PC, es necesario tener un equipo con Windows o Mac con al menos 4 GB de RAM y 5 GB de espacio en disco libre. También es necesario descargar e instalar un emulador de Android como BlueStacks o NoxPlayer que puede ejecutar el juego sin problemas en su PC. Explicaremos cómo hacerlo en la siguiente sección. </p>
19
- <h2>Cómo descargar e instalar Aparcamiento de coches multijugador APK Skachat en su PC? </h2>
20
- <p>Aparcamiento de coches multijugador APK Skachat es una versión modificada del juego original que le permite descargar e instalar de forma gratuita sin restricciones. Sin embargo, ya que no está disponible en las tiendas de aplicaciones oficiales como Google Play Store o Apple App Store, debe usar una fuente de terceros para obtenerlo. Una de las fuentes más fiables es APKPure.com, donde se puede encontrar la última versión de Aparcamiento Multijugador APK Skachat junto con su información de archivos y comentarios de los usuarios. </p>
21
- <p>Para descargar e instalar Aparcamiento de coches multijugador APK Skachat en su PC usando BlueStacks o NoxPlayer emulador, siga estos pasos:</p>
22
- <h3>Usando el emulador de BlueStacks</h3>
23
- <ol>
24
- <li>Descargar e instalar el emulador BlueStacks desde su sitio web oficial[ 3 ] . </li>
25
- <li>Inicie BlueStacks e inicie sesión con su cuenta de Google o cree una nueva. </li>
26
- <li>Abra la aplicación del navegador en BlueStacks y vaya a APKPure.com. Buscar Aparcamiento de coches multijugador APK Skachat y descargarlo en su PC.</li>
27
- <li>Busque el archivo descargado en su PC y haga clic derecho en él. Elija "Abrir con" y seleccione BlueStacks como el emulador. </li>
28
- <li>Espere a que el proceso de instalación se complete y luego abra el juego desde la pantalla de inicio de BlueStacks. </li>
29
- </ol>
30
- <h3>Usando el emulador de NoxPlayer</h3>
31
- <ol>
32
- <li>Descargar e instalar el emulador NoxPlayer desde su sitio web oficial. </li>
33
- <li>Inicie NoxPlayer e inicie sesión con su cuenta de Google o cree una nueva. </li>
34
-
35
- <li>Arrastre y suelte el archivo descargado a la ventana NoxPlayer y espere a que se complete el proceso de instalación. </li>
36
- <li>Abre el juego desde la pantalla de inicio de NoxPlayer y disfruta. </li>
37
- </ol>
38
- <h2>¿Cómo se juega Aparcamiento de coches multijugador en su PC? </h2>
39
- <p>Una vez que haya descargado e instalado Aparcamiento Multijugador APK Skachat en su PC utilizando BlueStacks o NoxPlayer emulador, puede comenzar a jugar el juego en su PC. Aquí hay algunos consejos sobre cómo jugar el juego en su PC:</p>
40
- <h3>Controles y ajustes</h3>
41
- <p>Puedes usar el teclado y el ratón para controlar el juego en tu PC. También puedes personalizar la asignación de teclas según tus preferencias. Para ello, haga clic en el icono del teclado en la esquina inferior derecha de la pantalla del emulador y elija "Controles del juego". A continuación, puede arrastrar y soltar las teclas de los botones correspondientes en la pantalla del juego. También puede ajustar la sensibilidad, la transparencia y el tamaño de las teclas. Para guardar la configuración, haga clic en "Guardar" y luego en "Cerrar". </p>
42
- <p>También puede cambiar la configuración del juego como gráficos, sonido, idioma, cámara, etc. haciendo clic en el icono de engranaje en la esquina superior derecha de la pantalla del juego. A continuación, puede elegir entre baja, media, alta o ultra calidad gráfica, habilitar o desactivar efectos de sonido y música, seleccionar su idioma preferido, cambiar entre diferentes modos de cámara, etc. Para aplicar los cambios, haga clic en "OK". </p>
43
- <p></p>
44
- <h3>Consejos y trucos</h3>
45
- <p>Aquí hay algunos consejos y trucos para ayudarle a jugar Car Parking Multijugador mejor en su PC:</p>
46
- <ul>
47
- <li>Utilice el mini-mapa en la esquina superior izquierda de la pantalla del juego para navegar por el mundo abierto. También puede acercar o alejar usando la rueda del ratón. </li>
48
- <li>Utilice el icono de la gasolinera en el mini-mapa para encontrar la gasolinera más cercana donde puede repostar su coche. También puede utilizar el icono de servicio de automóvil para encontrar el servicio de automóvil más cercano donde puede reparar su automóvil o cambiar sus piezas. </li>
49
-
50
- <li>Utilice el icono de menú en la esquina inferior izquierda de la pantalla de juego para acceder a varias opciones como el modo multijugador, intercambio de coches, garaje, perfil, configuración, etc.</li>
51
- <li>Utilice el icono de estacionamiento en la esquina inferior derecha de la pantalla de juego para iniciar un desafío de estacionamiento. Puedes elegir entre diferentes niveles de dificultad y ubicaciones. También puede ver su progreso y logros haciendo clic en el icono de trofeo al lado. </li>
52
- <li>Utilice el icono de carrera en la esquina inferior derecha de la pantalla del juego para iniciar un desafío de carreras multijugador. Puede elegir entre diferentes modos, como carrera de arrastre, carrera de deriva, carrera de circuito, etc. También puede ver su clasificación y recompensas haciendo clic en el icono de copa al lado. </li>
53
- </ul>
54
- <h2>Conclusión</h2>
55
- <p>Car Parking Multiplayer es un divertido y realista juego de simulación de aparcamiento que puedes jugar en tu PC usando un emulador de Android como BlueStacks o NoxPlayer. Puede descargar e instalar Aparcamiento de coches multijugador APK Skachat de forma gratuita desde APKPure.com y disfrutar de sus características tales como modo multijugador mundo abierto, personalización del coche, alta-altográficos de calidad, etc. Esperamos que este artículo le ha ayudado a aprender a descargar y jugar Aparcamiento de coches multijugador APK Skachat en su PC. Si tiene alguna pregunta o comentario, háganoslo saber en los comentarios a continuación. </p>
56
- <h2>Preguntas frecuentes</h2>
57
- <p>Aquí hay algunas preguntas frecuentes acerca de Aparcamiento de coches multijugador APK Skachat:</p>
58
- <h4> ¿Es seguro para descargar aparcamiento multijugador APK Skachat? </h4>
59
- <p <p>Sí, Aparcamiento de coches multijugador APK Skachat es seguro para descargar siempre y cuando se utiliza una fuente de confianza como APKPure.com. Sin embargo, siempre debe tener cuidado al descargar e instalar cualquier archivo APK de fuentes desconocidas, ya que pueden contener malware o virus que pueden dañar su dispositivo o datos. También debe comprobar la información del archivo y las opiniones de los usuarios antes de descargar e instalar cualquier archivo APK. </p>
60
- <h4>¿Cuáles son las ventajas de jugar Car Parking Multijugador en PC? </h4>
61
-
62
- <ul>
63
- <li> Puede disfrutar de una pantalla más grande y una mejor calidad de gráficos en su PC.</li>
64
- <li> Puede utilizar el teclado y el ratón para controlar el juego con mayor facilidad y precisión en su PC.</li>
65
- <li>Puede ahorrar su batería y espacio de almacenamiento en su dispositivo móvil jugando el juego en su PC.</li>
66
- <li>Puedes jugar el juego sin interrupciones o distracciones de llamadas telefónicas, mensajes, notificaciones, etc. en tu PC.</li>
67
- </ul>
68
- <h4>¿Puedo jugar Aparcamiento de coches multijugador fuera de línea? </h4>
69
- <p>No, no puedes jugar Car Parking Multijugador sin conexión, ya que requiere una conexión a Internet para acceder a algunas de sus características como el modo multijugador, chat en línea, intercambio de coches, etc. Sin embargo, todavía se puede jugar el juego en una solamodo de reproductor sin conexión a Internet mediante la elección de la opción sin conexión del menú. </p>
70
- <h4>¿Cómo puedo actualizar Aparcamiento de coches multijugador APK Skachat? </h4>
71
- <p>Para actualizar Aparcamiento Multijugador APK Skachat, es necesario descargar e instalar la última versión del archivo APK de APKPure.com o cualquier otra fuente confiable. También puedes buscar actualizaciones de la configuración del juego haciendo clic en el icono del engranaje y luego elegir "Buscar actualizaciones". Si hay una nueva versión disponible, puedes descargarla e instalarla desde allí. </p>
72
- <h4>¿Cómo puedo contactar al desarrollador de Car Parking Multijugador? </h4>
73
- <p>Si tienes alguna pregunta, sugerencia, comentario, o problemas con respecto a Car Parking Multijugador, puede ponerse en contacto con el desarrollador del juego enviando un correo electrónico a [email protected] o visitando su página de Facebook. También puede unirse a su servidor de discordia para chatear con otros jugadores y obtener apoyo de los moderadores. </p> 64aa2da5cf<br />
74
- <br />
75
- <br />
spaces/Benson/text-generation/Examples/Apk Caso Penal Con Trampa.md DELETED
@@ -1,68 +0,0 @@
1
-
2
- <h1>Cómo descargar e instalar caso penal APK con Cheat</h1>
3
- <p>Si te gusta jugar juegos de detectives en tu dispositivo Android, es posible que haya oído hablar de Criminal Case. Es un popular juego de objetos ocultos donde tienes que investigar casos de asesinato, encontrar pistas, interrogar a los sospechosos y atrapar a los asesinos. Pero lo que si quieres hacer el juego más divertido y fácil? Ahí es donde Criminal Case APK con engaño entra en juego. En este artículo, te mostraremos cómo descargar e instalar esta versión modificada del juego que te da energía ilimitada, pistas, estrellas y más. También te explicaremos qué es un archivo APK, cómo instalarlo en tu dispositivo, cómo jugar a Criminal Case con trucos y cuáles son los pros y los contras de usarlo. </p>
4
- <h2> ¿Qué es un archivo APK y cómo instalarlo en Android</h2>
5
- <p>Un archivo APK es un archivo de paquete de Android que contiene todos los archivos y el código necesario para ejecutar una aplicación en su dispositivo Android. Es similar a un archivo EXE en Windows o un archivo DMG en Mac. Los archivos APK se utilizan generalmente para distribuir aplicaciones que no están disponibles en Google Play Store, o para actualizar aplicaciones antes de su lanzamiento oficial. También puedes usar archivos APK para instalar versiones modificadas o hackeadas de aplicaciones que ofrecen características o beneficios adicionales. </p>
6
- <h2>apk caso penal con trampa</h2><br /><p><b><b>Download</b> &#9999; <a href="https://bltlly.com/2v6LMp">https://bltlly.com/2v6LMp</a></b></p><br /><br />
7
- <p>Para instalar un archivo APK en tu dispositivo Android, necesitas hacer dos cosas. Primero, necesitas habilitar fuentes desconocidas en la configuración de tu dispositivo. Esto le permitirá instalar aplicaciones desde fuentes distintas de Google Play Store. Para hacer esto, ve a Configuración > Aplicaciones > Menú > Acceso especial > Instalar aplicaciones desconocidas. Luego, selecciona la aplicación de tu navegador (como Chrome) y activa la opción Permitir desde esta fuente. </p>
8
-
9
- <h2>¿Qué es el caso penal APK con Cheat</h2>
10
- <p>Caso Penal APK con trampa es una versión modificada de Caso Penal que le da acceso a recursos ilimitados y características que pueden ayudarle a resolver los casos más rápido y más fácil. Algunas de las características incluyen:</p>
11
- <ul>
12
- <li>Energía ilimitada: Puedes reproducir tantas escenas como quieras sin quedarte sin energía. </li>
13
- <li>Pistas ilimitadas: Puedes usar pistas para encontrar objetos más rápido y ganar más puntos. </li>
14
- <li>Estrellas ilimitadas: puedes usar estrellas para desbloquear nuevas escenas, examinar pistas, interrogar sospechosos y arrestar asesinos. </li>
15
- <li>Análisis instantáneo: No tienes que esperar a los resultados de laboratorio o informes. Puedes obtenerlos al instante. </li>
16
- <li>Saltar escenas y minijuegos: Puedes saltarte cualquier escena o mini-juego que no quieras jugar. </li>
17
- <li>No hay anuncios: Puedes disfrutar del juego sin interrupciones ni distracciones. </li>
18
- </ul>
19
- <p>Con estas características, usted puede tener más diversión y emoción jugando Criminal Case. También puedes ahorrar tiempo y dinero al no tener que comprar energía o pistas con dinero real. </p>
20
- <h2>Cómo descargar caso penal APK con Cheat</h2>
21
- <p>Para descargar Criminal Case APK con cheat, es necesario seguir estos pasos:</p> <p>1. Ir a un sitio web que ofrece APK Criminal Case con cheat. Puede utilizar la aplicación de su navegador para buscar estos sitios web, o puede utilizar uno de los siguientes enlaces:</p>
22
- <tabla>
23
- <tr>
24
- <th>Sitio web</th>
25
- <th>URL</th>
26
- </tr>
27
- <tr>
28
- <td>Filehippo</td>
29
- <td><a href="( 1 )">Descargar caso penal APK 2.39 para Android - Filehippo.com</a></td>
30
- </tr>
31
- <tr>
32
- <td>APKCombo</td>
33
- <td><a href="( 2 )">Criminal Case APK (Android Game) - Descarga gratuita - APKCombo</a></td>
34
- </tr>
35
- </tabla>
36
- <p>2. Elija la versión de Criminal Case APK con truco que desea descargar. Asegúrese de que es compatible con su dispositivo y tiene las características que desea. </p>
37
-
38
- <p>4. Una vez descargado el archivo, búsquelo en su dispositivo usando la aplicación del navegador o una aplicación de administrador de archivos. Toque el archivo para instalarlo. Es posible que necesite aceptar algunas ventanas emergentes o permisos antes de instalar el archivo. </p>
39
- <p></p>
40
- <p>5. Después de que la instalación se haya completado, puede iniciar el juego desde el cajón de la aplicación o la pantalla de inicio. ¡Disfrute jugando Criminal Case con trucos! </p>
41
- <h2>Cómo Jugar Caso Criminal con Cheat</h2>
42
- <p>Jugar a Criminal Case con trucos es similar a jugar la versión original del juego, excepto que tienes acceso a recursos ilimitados y características que pueden hacer el juego más fácil y más divertido. Aquí hay algunos consejos y trucos sobre cómo jugar Criminal Case con cheat:</p>
43
- <ul>
44
- <li>Para usar energía ilimitada, toca el icono de energía en la esquina superior derecha de la pantalla. Puedes recargar tu energía tantas veces como quieras sin esperar ni pagar. </li>
45
- <li>Para usar pistas ilimitadas, toque el icono de pista en la esquina inferior derecha de la pantalla durante una escena. Puedes usar pistas tantas veces como quieras sin perder puntos o estrellas. </li>
46
- <li>Para usar estrellas ilimitadas, toca el icono de estrella en la esquina superior izquierda de la pantalla. Puedes usar estrellas tantas veces como quieras para desbloquear nuevas escenas, examinar pistas, interrogar sospechosos y arrestar asesinos. </li>
47
- <li> Para utilizar el análisis instantáneo, toque el icono de análisis en la esquina inferior izquierda de la pantalla durante una escena. Puedes obtener resultados instantáneos sin esperar ni pagar. </li>
48
- <li>Para saltar escenas y minijuegos, toque el icono de salto en la esquina superior derecha de la pantalla durante una escena o un mini-juego. Puedes saltarte cualquier escena o mini-juego que no quieras jugar sin perder puntos o estrellas. </li>
49
- <li>Para eliminar anuncios, toque el icono de configuración en la esquina superior derecha de la pantalla. Luego, toque la opción de eliminar anuncios y confirme su elección. Puedes disfrutar del juego sin interrupciones ni distracciones. </li>
50
- </ul>
51
-
52
- <h2> Pros y contras de usar APK caso penal con trampa</h2>
53
- <p>El uso de APK Caso Penal con trampa tiene sus pros y sus contras. Aquí están algunos de ellos:</p>
54
- | Pros | Contras | | -- | -- - | | Usted puede tener más diversión y emoción jugando Caso Criminal | Usted puede perder algo del desafío y emoción de jugar Caso Criminal | | | Usted puede ahorrar tiempo y dinero por no tener que comprar energía o pistas con dinero real | Usted puede encontrar algunos errores o errores que pueden afectar su rendimiento del juego | | Puede probar diferentes características y opciones que no están disponibles en la versión original del juego | Puede violar algunos términos y condiciones del desarrollador del juego o Google Play Store | | Puede compartir sus logros y progresos con sus amigos y otros jugadores | Usted puede correr el riesgo de perder sus datos de juego o cuenta si desinstalar o actualizar el juego | <p>Usted debe sopesar estos pros y contras antes de decidir si desea utilizar Criminal Case APK con trampa o no. En última instancia, depende de su preferencia personal y estilo de juego. </p>
55
- <h2>Conclusión</h2>
56
- <p>Criminal Case es un divertido y adictivo juego de objetos ocultos que te permite jugar como detective y resolver casos de asesinato. Pero si quieres hacer el juego más divertido y fácil, se puede tratar de usar Caso Penal APK con trampa. Esta es una versión modificada del juego que te da energía ilimitada, pistas, estrellas y más. Puede descargar e instalar esta versión desde un sitio web de buena reputación y disfrutar jugando Criminal Case con trampa. </p>
57
- <h3>Preguntas frecuentes</h3>
58
- <p>Aquí hay algunas preguntas y respuestas frecuentes sobre Caso Penal APK con trampa:</p>
59
- <ol>
60
-
61
- <li><b> ¿Es legal usar Caso Penal APK con trampa? </b><br>El uso de APK Caso Penal con trampa puede no ser legal en algunos países o regiones, ya que puede violar algunos términos y condiciones del desarrollador del juego o Google Play Store. Usted debe comprobar las leyes y reglamentos de su ubicación antes de usar Caso Penal APK con trampa. También debe respetar los derechos e intereses del desarrollador del juego y otros jugadores, y no utilizar Criminal Case APK con engaño para cualquier propósito malicioso o fraudulento. </li>
62
- <li><b>Se Caso Penal APK con trucos de trabajo en mi dispositivo? </b><br>Caso Penal APK con trampa debe funcionar en la mayoría de los dispositivos Android que soportan la versión original de Caso Penal. Sin embargo, algunos dispositivos pueden no ser compatibles con Criminal Case APK con trampa, o pueden experimentar algunos problemas o errores al usarlo. Usted debe comprobar la compatibilidad y los requisitos de Caso Penal APK con tramposo antes de descargar e instalar en su dispositivo. También debe actualizar el software y la configuración del dispositivo para garantizar un rendimiento óptimo. </li>
63
- <li><b>¿Puedo jugar APK Caso Penal con tramposo en línea o fuera de línea? </b><br>Puedes jugar APK Caso Penal con tramposo tanto en línea como fuera de línea. Sin embargo, algunas características y funciones pueden requerir una conexión a Internet para funcionar correctamente, como sincronizar los datos y la cuenta del juego, acceder a nuevos casos y actualizaciones o interactuar con otros jugadores. Usted debe asegurarse de que tiene una conexión a Internet estable y segura al jugar Caso Penal APK con trampa en línea. </li>
64
-
65
- </ol>
66
- <p>Espero que este artículo le ha ayudado a aprender más acerca de Caso Penal APK con trampa y cómo descargar e instalar en su dispositivo Android. Si tiene alguna pregunta o comentario, por favor deje un comentario abajo. ¡Gracias por leer! </p> 64aa2da5cf<br />
67
- <br />
68
- <br />
spaces/BilalSardar/Remove_Text_for_Image/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: Remove Text For Image
3
- emoji: 👀
4
- colorFrom: gray
5
- colorTo: yellow
6
- sdk: gradio
7
- sdk_version: 3.47.1
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
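The removed README only declares Space metadata (`sdk: gradio`, `app_file: app.py`). A purely illustrative placeholder `app.py` compatible with that frontmatter might look like the following; the real Space's text-removal logic is not part of this diff:

```python
import gradio as gr
from PIL import Image

def remove_text(image: Image.Image) -> Image.Image:
    # placeholder: a real implementation would detect and inpaint text regions
    return image

demo = gr.Interface(fn=remove_text, inputs=gr.Image(type="pil"), outputs=gr.Image(type="pil"))

if __name__ == "__main__":
    demo.launch()
```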
spaces/CVPR/LIVE/pydiffvg_tensorflow/render_tensorflow.py DELETED
@@ -1,664 +0,0 @@
1
- import os
2
- import tensorflow as tf
3
- import diffvg
4
- import pydiffvg_tensorflow as pydiffvg
5
- import time
6
- from enum import IntEnum
7
- import warnings
8
-
9
- print_timing = False
10
- __EMPTY_TENSOR = tf.constant([])
11
-
12
- def is_empty_tensor(tensor):
13
- return tf.equal(tf.size(tensor), 0)
14
-
15
- def set_print_timing(val):
16
- global print_timing
17
- print_timing=val
18
-
19
- class OutputType(IntEnum):
20
- color = 1
21
- sdf = 2
22
-
23
- class ShapeType:
24
- __shapetypes = [
25
- diffvg.ShapeType.circle,
26
- diffvg.ShapeType.ellipse,
27
- diffvg.ShapeType.path,
28
- diffvg.ShapeType.rect
29
- ]
30
-
31
- @staticmethod
32
- def asTensor(type):
33
- for i in range(len(ShapeType.__shapetypes)):
34
- if ShapeType.__shapetypes[i] == type:
35
- return tf.constant(i)
36
-
37
- @staticmethod
38
- def asShapeType(index: tf.Tensor):
39
- if is_empty_tensor(index):
40
- return None
41
- try:
42
- type = ShapeType.__shapetypes[index]
43
- except IndexError:
44
- print(f'{index} is out of range: [0, {len(ShapeType.__shapetypes)})')
45
- import sys
46
- sys.exit()
47
- else:
48
- return type
49
-
50
- class ColorType:
51
- __colortypes = [
52
- diffvg.ColorType.constant,
53
- diffvg.ColorType.linear_gradient,
54
- diffvg.ColorType.radial_gradient
55
- ]
56
-
57
- @staticmethod
58
- def asTensor(type):
59
- for i in range(len(ColorType.__colortypes)):
60
- if ColorType.__colortypes[i] == type:
61
- return tf.constant(i)
62
-
63
- @staticmethod
64
- def asColorType(index: tf.Tensor):
65
- if is_empty_tensor(index):
66
- return None
67
- try:
68
- type = ColorType.__colortypes[index]
69
- except IndexError:
70
- print(f'{index} is out of range: [0, {len(ColorType.__colortypes)})')
71
- import sys
72
- sys.exit()
73
- else:
74
- return type
75
-
76
- class FilterType:
77
- __filtertypes = [
78
- diffvg.FilterType.box,
79
- diffvg.FilterType.tent,
80
- diffvg.FilterType.hann
81
- ]
82
-
83
- @staticmethod
84
- def asTensor(type):
85
- for i in range(len(FilterType.__filtertypes)):
86
- if FilterType.__filtertypes[i] == type:
87
- return tf.constant(i)
88
-
89
- @staticmethod
90
- def asFilterType(index: tf.Tensor):
91
- if is_empty_tensor(index):
92
- return None
93
- try:
94
- type = FilterType.__filtertypes[index]
95
- except IndexError:
96
- print(f'{index} is out of range: [0, {len(FilterType.__filtertypes)})')
97
- import sys
98
- sys.exit()
99
- else:
100
- return type
101
-
102
- def serialize_scene(canvas_width,
103
- canvas_height,
104
- shapes,
105
- shape_groups,
106
- filter = pydiffvg.PixelFilter(type = diffvg.FilterType.box,
107
- radius = tf.constant(0.5)),
108
- output_type = OutputType.color,
109
- use_prefiltering = False):
110
- """
111
- Given a list of shapes, convert them to a linear list of argument,
112
- so that we can use it in TF.
113
- """
114
- with tf.device('/device:cpu:' + str(pydiffvg.get_cpu_device_id())):
115
- num_shapes = len(shapes)
116
- num_shape_groups = len(shape_groups)
117
- args = []
118
- args.append(tf.constant(canvas_width))
119
- args.append(tf.constant(canvas_height))
120
- args.append(tf.constant(num_shapes))
121
- args.append(tf.constant(num_shape_groups))
122
- args.append(tf.constant(output_type))
123
- args.append(tf.constant(use_prefiltering))
124
- for shape in shapes:
125
- if isinstance(shape, pydiffvg.Circle):
126
- args.append(ShapeType.asTensor(diffvg.ShapeType.circle))
127
- args.append(tf.identity(shape.radius))
128
- args.append(tf.identity(shape.center))
129
- elif isinstance(shape, pydiffvg.Ellipse):
130
- args.append(ShapeType.asTensor(diffvg.ShapeType.ellipse))
131
- args.append(tf.identity(shape.radius))
132
- args.append(tf.identity(shape.center))
133
- elif isinstance(shape, pydiffvg.Path):
134
- assert(shape.points.shape[1] == 2)
135
- args.append(ShapeType.asTensor(diffvg.ShapeType.path))
136
- args.append(tf.identity(shape.num_control_points))
137
- args.append(tf.identity(shape.points))
138
- args.append(tf.constant(shape.is_closed))
139
- args.append(tf.constant(shape.use_distance_approx))
140
- elif isinstance(shape, pydiffvg.Polygon):
141
- assert(shape.points.shape[1] == 2)
142
- args.append(ShapeType.asTensor(diffvg.ShapeType.path))
143
- if shape.is_closed:
144
- args.append(tf.zeros(shape.points.shape[0], dtype = tf.int32))
145
- else:
146
- args.append(tf.zeros(shape.points.shape[0] - 1, dtype = tf.int32))
147
- args.append(tf.identity(shape.points))
148
- args.append(tf.constant(shape.is_closed))
149
- elif isinstance(shape, pydiffvg.Rect):
150
- args.append(ShapeType.asTensor(diffvg.ShapeType.rect))
151
- args.append(tf.identity(shape.p_min))
152
- args.append(tf.identity(shape.p_max))
153
- else:
154
- assert(False)
155
- args.append(tf.identity(shape.stroke_width))
156
-
157
- for shape_group in shape_groups:
158
- args.append(tf.identity(shape_group.shape_ids))
159
- # Fill color
160
- if shape_group.fill_color is None:
161
- args.append(__EMPTY_TENSOR)
162
- elif tf.is_tensor(shape_group.fill_color):
163
- args.append(ColorType.asTensor(diffvg.ColorType.constant))
164
- args.append(tf.identity(shape_group.fill_color))
165
- elif isinstance(shape_group.fill_color, pydiffvg.LinearGradient):
166
- args.append(ColorType.asTensor(diffvg.ColorType.linear_gradient))
167
- args.append(tf.identity(shape_group.fill_color.begin))
168
- args.append(tf.identity(shape_group.fill_color.end))
169
- args.append(tf.identity(shape_group.fill_color.offsets))
170
- args.append(tf.identity(shape_group.fill_color.stop_colors))
171
- elif isinstance(shape_group.fill_color, pydiffvg.RadialGradient):
172
- args.append(ColorType.asTensor(diffvg.ColorType.radial_gradient))
173
- args.append(tf.identity(shape_group.fill_color.center))
174
- args.append(tf.identity(shape_group.fill_color.radius))
175
- args.append(tf.identity(shape_group.fill_color.offsets))
176
- args.append(tf.identity(shape_group.fill_color.stop_colors))
177
-
178
- if shape_group.fill_color is not None:
179
- # go through the underlying shapes and check if they are all closed
180
- for shape_id in shape_group.shape_ids:
181
- if isinstance(shapes[shape_id], pydiffvg.Path):
182
- if not shapes[shape_id].is_closed:
183
- warnings.warn("Detected non-closed paths with fill color. This might causes unexpected results.", Warning)
184
-
185
- # Stroke color
186
- if shape_group.stroke_color is None:
187
- args.append(__EMPTY_TENSOR)
188
- elif tf.is_tensor(shape_group.stroke_color):
189
- args.append(tf.constant(0))
190
- args.append(tf.identity(shape_group.stroke_color))
191
- elif isinstance(shape_group.stroke_color, pydiffvg.LinearGradient):
192
- args.append(ColorType.asTensor(diffvg.ColorType.linear_gradient))
193
- args.append(tf.identity(shape_group.stroke_color.begin))
194
- args.append(tf.identity(shape_group.stroke_color.end))
195
- args.append(tf.identity(shape_group.stroke_color.offsets))
196
- args.append(tf.identity(shape_group.stroke_color.stop_colors))
197
- elif isinstance(shape_group.stroke_color, pydiffvg.RadialGradient):
198
- args.append(ColorType.asTensor(diffvg.ColorType.radial_gradient))
199
- args.append(tf.identity(shape_group.stroke_color.center))
200
- args.append(tf.identity(shape_group.stroke_color.radius))
201
- args.append(tf.identity(shape_group.stroke_color.offsets))
202
- args.append(tf.identity(shape_group.stroke_color.stop_colors))
203
- args.append(tf.constant(shape_group.use_even_odd_rule))
204
- # Transformation
205
- args.append(tf.identity(shape_group.shape_to_canvas))
206
- args.append(FilterType.asTensor(filter.type))
207
- args.append(tf.constant(filter.radius))
208
- return args
209
-
210
- class Context: pass
211
-
212
- def forward(width,
213
- height,
214
- num_samples_x,
215
- num_samples_y,
216
- seed,
217
- *args):
218
- """
219
- Forward rendering pass: given a serialized scene and output an image.
220
- """
221
- # Unpack arguments
222
- with tf.device('/device:cpu:' + str(pydiffvg.get_cpu_device_id())):
223
- current_index = 0
224
- canvas_width = int(args[current_index])
225
- current_index += 1
226
- canvas_height = int(args[current_index])
227
- current_index += 1
228
- num_shapes = int(args[current_index])
229
- current_index += 1
230
- num_shape_groups = int(args[current_index])
231
- current_index += 1
232
- output_type = OutputType(int(args[current_index]))
233
- current_index += 1
234
- use_prefiltering = bool(args[current_index])
235
- current_index += 1
236
- shapes = []
237
- shape_groups = []
238
- shape_contents = [] # Important to avoid GC deleting the shapes
239
- color_contents = [] # Same as above
240
- for shape_id in range(num_shapes):
241
- shape_type = ShapeType.asShapeType(args[current_index])
242
- current_index += 1
243
- if shape_type == diffvg.ShapeType.circle:
244
- radius = args[current_index]
245
- current_index += 1
246
- center = args[current_index]
247
- current_index += 1
248
- shape = diffvg.Circle(float(radius),
249
- diffvg.Vector2f(float(center[0]), float(center[1])))
250
- elif shape_type == diffvg.ShapeType.ellipse:
251
- radius = args[current_index]
252
- current_index += 1
253
- center = args[current_index]
254
- current_index += 1
255
- shape = diffvg.Ellipse(diffvg.Vector2f(float(radius[0]), float(radius[1])),
256
- diffvg.Vector2f(float(center[0]), float(center[1])))
257
- elif shape_type == diffvg.ShapeType.path:
258
- num_control_points = args[current_index]
259
- current_index += 1
260
- points = args[current_index]
261
- current_index += 1
262
- is_closed = args[current_index]
263
- current_index += 1
264
- use_distance_approx = args[current_index]
265
- current_index += 1
266
- shape = diffvg.Path(diffvg.int_ptr(pydiffvg.data_ptr(num_control_points)),
267
- diffvg.float_ptr(pydiffvg.data_ptr(points)),
268
- diffvg.float_ptr(0), # thickness
269
- num_control_points.shape[0],
270
- points.shape[0],
271
- is_closed,
272
- use_distance_approx)
273
- elif shape_type == diffvg.ShapeType.rect:
274
- p_min = args[current_index]
275
- current_index += 1
276
- p_max = args[current_index]
277
- current_index += 1
278
- shape = diffvg.Rect(diffvg.Vector2f(float(p_min[0]), float(p_min[1])),
279
- diffvg.Vector2f(float(p_max[0]), float(p_max[1])))
280
- else:
281
- assert(False)
282
- stroke_width = args[current_index]
283
- current_index += 1
284
- shapes.append(diffvg.Shape(\
285
- shape_type, shape.get_ptr(), float(stroke_width)))
286
- shape_contents.append(shape)
287
-
288
- for shape_group_id in range(num_shape_groups):
289
- shape_ids = args[current_index]
290
- current_index += 1
291
- fill_color_type = ColorType.asColorType(args[current_index])
292
- current_index += 1
293
- if fill_color_type == diffvg.ColorType.constant:
294
- color = args[current_index]
295
- current_index += 1
296
- fill_color = diffvg.Constant(\
297
- diffvg.Vector4f(color[0], color[1], color[2], color[3]))
298
- elif fill_color_type == diffvg.ColorType.linear_gradient:
299
- beg = args[current_index]
300
- current_index += 1
301
- end = args[current_index]
302
- current_index += 1
303
- offsets = args[current_index]
304
- current_index += 1
305
- stop_colors = args[current_index]
306
- current_index += 1
307
- assert(offsets.shape[0] == stop_colors.shape[0])
308
- fill_color = diffvg.LinearGradient(diffvg.Vector2f(float(beg[0]), float(beg[1])),
309
- diffvg.Vector2f(float(end[0]), float(end[1])),
310
- offsets.shape[0],
311
- diffvg.float_ptr(pydiffvg.data_ptr(offsets)),
312
- diffvg.float_ptr(pydiffvg.data_ptr(stop_colors)))
313
- elif fill_color_type == diffvg.ColorType.radial_gradient:
314
- center = args[current_index]
315
- current_index += 1
316
- radius = args[current_index]
317
- current_index += 1
318
- offsets = args[current_index]
319
- current_index += 1
320
- stop_colors = args[current_index]
321
- current_index += 1
322
- assert(offsets.shape[0] == stop_colors.shape[0])
323
- fill_color = diffvg.RadialGradient(diffvg.Vector2f(float(center[0]), float(center[1])),
324
- diffvg.Vector2f(float(radius[0]), float(radius[1])),
325
- offsets.shape[0],
326
- diffvg.float_ptr(pydiffvg.data_ptr(offsets)),
327
- diffvg.float_ptr(pydiffvg.data_ptr(stop_colors)))
328
- elif fill_color_type is None:
329
- fill_color = None
330
- else:
331
- assert(False)
332
-
333
- stroke_color_type = ColorType.asColorType(args[current_index])
334
- current_index += 1
335
- if stroke_color_type == diffvg.ColorType.constant:
336
- color = args[current_index]
337
- current_index += 1
338
- stroke_color = diffvg.Constant(\
339
- diffvg.Vector4f(float(color[0]),
340
- float(color[1]),
341
- float(color[2]),
342
- float(color[3])))
343
- elif stroke_color_type == diffvg.ColorType.linear_gradient:
344
- beg = args[current_index]
345
- current_index += 1
346
- end = args[current_index]
347
- current_index += 1
348
- offsets = args[current_index]
349
- current_index += 1
350
- stop_colors = args[current_index]
351
- current_index += 1
352
- assert(offsets.shape[0] == stop_colors.shape[0])
353
- stroke_color = diffvg.LinearGradient(\
354
- diffvg.Vector2f(float(beg[0]), float(beg[1])),
355
- diffvg.Vector2f(float(end[0]), float(end[1])),
356
- offsets.shape[0],
357
- diffvg.float_ptr(pydiffvg.data_ptr(offsets)),
358
- diffvg.float_ptr(stop_colors.data_ptr()))
359
- elif stroke_color_type == diffvg.ColorType.radial_gradient:
360
- center = args[current_index]
361
- current_index += 1
362
- radius = args[current_index]
363
- current_index += 1
364
- offsets = args[current_index]
365
- current_index += 1
366
- stop_colors = args[current_index]
367
- current_index += 1
368
- assert(offsets.shape[0] == stop_colors.shape[0])
369
- stroke_color = diffvg.RadialGradient(\
370
- diffvg.Vector2f(float(center[0]), float(center[1])),
371
- diffvg.Vector2f(float(radius[0]), float(radius[1])),
372
- offsets.shape[0],
373
- diffvg.float_ptr(pydiffvg.data_ptr(offsets)),
374
- diffvg.float_ptr(pydiffvg.data_ptr(stop_colors)))
375
- elif stroke_color_type is None:
376
- stroke_color = None
377
- else:
378
- assert(False)
379
- use_even_odd_rule = bool(args[current_index])
380
- current_index += 1
381
- shape_to_canvas = args[current_index]
382
- current_index += 1
383
-
384
- if fill_color is not None:
385
- color_contents.append(fill_color)
386
- if stroke_color is not None:
387
- color_contents.append(stroke_color)
388
- shape_groups.append(diffvg.ShapeGroup(\
389
- diffvg.int_ptr(pydiffvg.data_ptr(shape_ids)),
390
- shape_ids.shape[0],
391
- diffvg.ColorType.constant if fill_color_type is None else fill_color_type,
392
- diffvg.void_ptr(0) if fill_color is None else fill_color.get_ptr(),
393
- diffvg.ColorType.constant if stroke_color_type is None else stroke_color_type,
394
- diffvg.void_ptr(0) if stroke_color is None else stroke_color.get_ptr(),
395
- use_even_odd_rule,
396
- diffvg.float_ptr(pydiffvg.data_ptr(shape_to_canvas))))
397
-
398
- filter_type = FilterType.asFilterType(args[current_index])
399
- current_index += 1
400
- filter_radius = args[current_index]
401
- current_index += 1
402
- filt = diffvg.Filter(filter_type, filter_radius)
403
-
404
- device_name = pydiffvg.get_device_name()
405
- device_spec = tf.DeviceSpec.from_string(device_name)
406
- use_gpu = device_spec.device_type == 'GPU'
407
- gpu_index = device_spec.device_index if device_spec.device_index is not None else 0
408
-
409
- start = time.time()
410
- scene = diffvg.Scene(canvas_width,
411
- canvas_height,
412
- shapes,
413
- shape_groups,
414
- filt,
415
- use_gpu,
416
- gpu_index)
417
- time_elapsed = time.time() - start
418
- global print_timing
419
- if print_timing:
420
- print('Scene construction, time: %.5f s' % time_elapsed)
421
-
422
- with tf.device(device_name):
423
- if output_type == OutputType.color:
424
- rendered_image = tf.zeros((int(height), int(width), 4), dtype = tf.float32)
425
- else:
426
- assert(output_type == OutputType.sdf)
427
- rendered_image = tf.zeros((int(height), int(width), 1), dtype = tf.float32)
428
-
429
- start = time.time()
430
- diffvg.render(scene,
431
- diffvg.float_ptr(0), # background image
432
- diffvg.float_ptr(pydiffvg.data_ptr(rendered_image) if output_type == OutputType.color else 0),
433
- diffvg.float_ptr(pydiffvg.data_ptr(rendered_image) if output_type == OutputType.sdf else 0),
434
- width,
435
- height,
436
- int(num_samples_x),
437
- int(num_samples_y),
438
- seed,
439
- diffvg.float_ptr(0), # d_background_image
440
- diffvg.float_ptr(0), # d_render_image
441
- diffvg.float_ptr(0), # d_render_sdf
442
- diffvg.float_ptr(0), # d_translation
443
- use_prefiltering,
444
- diffvg.float_ptr(0), # eval_positions
445
- 0 ) # num_eval_positions (automatically set to entire raster)
446
- time_elapsed = time.time() - start
447
- if print_timing:
448
- print('Forward pass, time: %.5f s' % time_elapsed)
449
-
450
- ctx = Context()
451
- ctx.scene = scene
452
- ctx.shape_contents = shape_contents
453
- ctx.color_contents = color_contents
454
- ctx.filter = filt
455
- ctx.width = width
456
- ctx.height = height
457
- ctx.num_samples_x = num_samples_x
458
- ctx.num_samples_y = num_samples_y
459
- ctx.seed = seed
460
- ctx.output_type = output_type
461
- ctx.use_prefiltering = use_prefiltering
462
- return rendered_image, ctx
463
-
464
- @tf.custom_gradient
465
- def render(*x):
466
- """
467
- The main TensorFlow interface of C++ diffvg.
468
- """
469
- assert(tf.executing_eagerly())
470
- if pydiffvg.get_use_gpu() and os.environ.get('TF_FORCE_GPU_ALLOW_GROWTH') != 'true':
471
- print('******************** WARNING ********************')
472
- print('Tensorflow by default allocates all GPU memory,')
473
- print('causing huge amount of page faults when rendering.')
474
- print('Please set the environment variable TF_FORCE_GPU_ALLOW_GROWTH to true,')
475
- print('so that Tensorflow allocates memory on demand.')
476
- print('*************************************************')
477
-
478
- width = x[0]
479
- height = x[1]
480
- num_samples_x = x[2]
481
- num_samples_y = x[3]
482
- seed = x[4]
483
- args = x[5:]
484
- img, ctx = forward(width, height, num_samples_x, num_samples_y, seed, *args)
485
-
486
- def backward(grad_img):
487
- scene = ctx.scene
488
- width = ctx.width
489
- height = ctx.height
490
- num_samples_x = ctx.num_samples_x
491
- num_samples_y = ctx.num_samples_y
492
- seed = ctx.seed
493
- output_type = ctx.output_type
494
- use_prefiltering = ctx.use_prefiltering
495
-
496
- start = time.time()
497
- with tf.device(pydiffvg.get_device_name()):
498
- diffvg.render(scene,
499
- diffvg.float_ptr(0), # background_image
500
- diffvg.float_ptr(0), # render_image
501
- diffvg.float_ptr(0), # render_sdf
502
- width,
503
- height,
504
- num_samples_x,
505
- num_samples_y,
506
- seed,
507
- diffvg.float_ptr(0), # d_background_image
508
- diffvg.float_ptr(pydiffvg.data_ptr(grad_img) if output_type == OutputType.color else 0),
509
- diffvg.float_ptr(pydiffvg.data_ptr(grad_img) if output_type == OutputType.sdf else 0),
510
- diffvg.float_ptr(0), # d_translation
511
- use_prefiltering,
512
- diffvg.float_ptr(0), # eval_positions
513
- 0 ) # num_eval_positions (automatically set to entire raster)
514
- time_elapsed = time.time() - start
515
- global print_timing
516
- if print_timing:
517
- print('Backward pass, time: %.5f s' % time_elapsed)
518
-
519
- with tf.device('/device:cpu:' + str(pydiffvg.get_cpu_device_id())):
520
- d_args = []
521
- d_args.append(None) # width
522
- d_args.append(None) # height
523
- d_args.append(None) # num_samples_x
524
- d_args.append(None) # num_samples_y
525
- d_args.append(None) # seed
526
- d_args.append(None) # canvas_width
527
- d_args.append(None) # canvas_height
528
- d_args.append(None) # num_shapes
529
- d_args.append(None) # num_shape_groups
530
- d_args.append(None) # output_type
531
- d_args.append(None) # use_prefiltering
532
- for shape_id in range(scene.num_shapes):
533
- d_args.append(None) # type
534
- d_shape = scene.get_d_shape(shape_id)
535
- if d_shape.type == diffvg.ShapeType.circle:
536
- d_circle = d_shape.as_circle()
537
- radius = tf.constant(d_circle.radius)
538
- d_args.append(radius)
539
- c = d_circle.center
540
- c = tf.constant((c.x, c.y))
541
- d_args.append(c)
542
- elif d_shape.type == diffvg.ShapeType.ellipse:
543
- d_ellipse = d_shape.as_ellipse()
544
- r = d_ellipse.radius
545
- r = tf.constant((d_ellipse.radius.x, d_ellipse.radius.y))
546
- d_args.append(r)
547
- c = d_ellipse.center
548
- c = tf.constant((c.x, c.y))
549
- d_args.append(c)
550
- elif d_shape.type == diffvg.ShapeType.path:
551
- d_path = d_shape.as_path()
552
- points = tf.zeros((d_path.num_points, 2), dtype=tf.float32)
553
- d_path.copy_to(diffvg.float_ptr(pydiffvg.data_ptr(points)),diffvg.float_ptr(0))
554
- d_args.append(None) # num_control_points
555
- d_args.append(points)
556
- d_args.append(None) # is_closed
557
- d_args.append(None) # use_distance_approx
558
- elif d_shape.type == diffvg.ShapeType.rect:
559
- d_rect = d_shape.as_rect()
560
- p_min = tf.constant((d_rect.p_min.x, d_rect.p_min.y))
561
- p_max = tf.constant((d_rect.p_max.x, d_rect.p_max.y))
562
- d_args.append(p_min)
563
- d_args.append(p_max)
564
- else:
565
- assert(False)
566
- w = tf.constant((d_shape.stroke_width))
567
- d_args.append(w)
568
-
569
- for group_id in range(scene.num_shape_groups):
570
- d_shape_group = scene.get_d_shape_group(group_id)
571
- d_args.append(None) # shape_ids
572
- d_args.append(None) # fill_color_type
573
- if d_shape_group.has_fill_color():
574
- if d_shape_group.fill_color_type == diffvg.ColorType.constant:
575
- d_constant = d_shape_group.fill_color_as_constant()
576
- c = d_constant.color
577
- d_args.append(tf.constant((c.x, c.y, c.z, c.w)))
578
- elif d_shape_group.fill_color_type == diffvg.ColorType.linear_gradient:
579
- d_linear_gradient = d_shape_group.fill_color_as_linear_gradient()
580
- beg = d_linear_gradient.begin
581
- d_args.append(tf.constant((beg.x, beg.y)))
582
- end = d_linear_gradient.end
583
- d_args.append(tf.constant((end.x, end.y)))
584
- offsets = tf.zeros((d_linear_gradient.num_stops), dtype=tf.float32)
585
- stop_colors = tf.zeros((d_linear_gradient.num_stops, 4), dtype=tf.float32)
586
- # HACK: tensorflow's eager mode uses a cache to store scalar
587
- # constants to avoid memory copy. If we pass scalar tensors
588
- # into the C++ code and modify them, we would corrupt the
589
- # cache, causing incorrect result in future scalar constant
590
- # creations. Thus we force tensorflow to copy by plusing a zero.
591
- # (also see https://github.com/tensorflow/tensorflow/issues/11186
592
- # for more discussion regarding copying tensors)
593
- if offsets.shape.num_elements() == 1:
594
- offsets = offsets + 0
595
- d_linear_gradient.copy_to(\
596
- diffvg.float_ptr(pydiffvg.data_ptr(offsets)),
597
- diffvg.float_ptr(pydiffvg.data_ptr(stop_colors)))
598
- d_args.append(offsets)
599
- d_args.append(stop_colors)
600
- elif d_shape_group.fill_color_type == diffvg.ColorType.radial_gradient:
601
- d_radial_gradient = d_shape_group.fill_color_as_radial_gradient()
602
- center = d_radial_gradient.center
603
- d_args.append(tf.constant((center.x, center.y)))
604
- radius = d_radial_gradient.radius
605
- d_args.append(tf.constant((radius.x, radius.y)))
606
- offsets = tf.zeros((d_radial_gradient.num_stops))
607
- if offsets.shape.num_elements() == 1:
608
- offsets = offsets + 0
609
- stop_colors = tf.zeros((d_radial_gradient.num_stops, 4))
610
- d_radial_gradient.copy_to(\
611
- diffvg.float_ptr(pydiffvg.data_ptr(offsets)),
612
- diffvg.float_ptr(pydiffvg.data_ptr(stop_colors)))
613
- d_args.append(offsets)
614
- d_args.append(stop_colors)
615
- else:
616
- assert(False)
617
- d_args.append(None) # stroke_color_type
618
- if d_shape_group.has_stroke_color():
619
- if d_shape_group.stroke_color_type == diffvg.ColorType.constant:
620
- d_constant = d_shape_group.stroke_color_as_constant()
621
- c = d_constant.color
622
- d_args.append(tf.constant((c.x, c.y, c.z, c.w)))
623
- elif d_shape_group.stroke_color_type == diffvg.ColorType.linear_gradient:
624
- d_linear_gradient = d_shape_group.stroke_color_as_linear_gradient()
625
- beg = d_linear_gradient.begin
626
- d_args.append(tf.constant((beg.x, beg.y)))
627
- end = d_linear_gradient.end
628
- d_args.append(tf.constant((end.x, end.y)))
629
- offsets = tf.zeros((d_linear_gradient.num_stops))
630
- stop_colors = tf.zeros((d_linear_gradient.num_stops, 4))
631
- if offsets.shape.num_elements() == 1:
632
- offsets = offsets + 0
633
- d_linear_gradient.copy_to(\
634
- diffvg.float_ptr(pydiffvg.data_ptr(offsets)),
635
- diffvg.float_ptr(pydiffvg.data_ptr(stop_colors)))
636
- d_args.append(offsets)
637
- d_args.append(stop_colors)
638
- elif d_shape_group.stroke_color_type == diffvg.ColorType.radial_gradient:
639
- d_radial_gradient = d_shape_group.stroke_color_as_radial_gradient()
640
- center = d_radial_gradient.center
641
- d_args.append(tf.constant((center.x, center.y)))
642
- radius = d_radial_gradient.radius
643
- d_args.append(tf.constant((radius.x, radius.y)))
644
- offsets = tf.zeros((d_radial_gradient.num_stops))
645
- stop_colors = tf.zeros((d_radial_gradient.num_stops, 4))
646
- if offsets.shape.num_elements() == 1:
647
- offsets = offsets + 0
648
- d_radial_gradient.copy_to(\
649
- diffvg.float_ptr(pydiffvg.data_ptr(offsets)),
650
- diffvg.float_ptr(pydiffvg.data_ptr(stop_colors)))
651
- d_args.append(offsets)
652
- d_args.append(stop_colors)
653
- else:
654
- assert(False)
655
- d_args.append(None) # use_even_odd_rule
656
- d_shape_to_canvas = tf.zeros((3, 3), dtype = tf.float32)
657
- d_shape_group.copy_to(diffvg.float_ptr(pydiffvg.data_ptr(d_shape_to_canvas)))
658
- d_args.append(d_shape_to_canvas)
659
- d_args.append(None) # filter_type
660
- d_args.append(tf.constant(scene.get_d_filter_radius()))
661
-
662
- return d_args
663
-
664
- return img, backward
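For orientation, the sketch below shows one way the `render` entry point above could be driven end to end. It is a minimal, hedged example: `Circle`, `ShapeGroup`, and `serialize_scene` are assumed to be provided elsewhere in the `pydiffvg_tensorflow` package (they are not defined in this file), and the exact argument packing may differ from the real helper.

```python
# Hedged usage sketch -- NOT part of the deleted file above.
# Assumptions: pydiffvg_tensorflow exposes Circle, ShapeGroup and a
# serialize_scene helper that flattens the scene into the positional
# argument list that forward() unpacks; adjust names to the real API.
import os
os.environ.setdefault('TF_FORCE_GPU_ALLOW_GROWTH', 'true')  # see the warning printed by render()

import tensorflow as tf
import pydiffvg_tensorflow as pydiffvg

canvas_width, canvas_height = 256, 256
circle = pydiffvg.Circle(radius=tf.constant(40.0),
                         center=tf.constant([128.0, 128.0]))
group = pydiffvg.ShapeGroup(shape_ids=tf.constant([0], dtype=tf.int32),
                            fill_color=tf.constant([0.3, 0.6, 0.3, 1.0]))

# serialize_scene (assumed helper) produces the *args consumed by forward()
scene_args = pydiffvg.serialize_scene(canvas_width, canvas_height, [circle], [group])
img = pydiffvg.render(tf.constant(canvas_width),   # width
                      tf.constant(canvas_height),  # height
                      tf.constant(2),              # num_samples_x
                      tf.constant(2),              # num_samples_y
                      tf.constant(0),              # seed
                      *scene_args)
```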
 
spaces/CVPR/LIVE/thrust/dependencies/cub/README.md DELETED
@@ -1,189 +0,0 @@
1
- <hr>
2
- <h3>About CUB</h3>
3
-
4
- CUB provides state-of-the-art, reusable software components for every layer
5
- of the CUDA programming model:
6
- - [<b><em>Device-wide primitives</em></b>](https://nvlabs.github.com/cub/group___device_module.html)
7
- - Sort, prefix scan, reduction, histogram, etc.
8
- - Compatible with CUDA dynamic parallelism
9
- - [<b><em>Block-wide "collective" primitives</em></b>](https://nvlabs.github.com/cub/group___block_module.html)
10
- - I/O, sort, prefix scan, reduction, histogram, etc.
11
- - Compatible with arbitrary thread block sizes and types
12
- - [<b><em>Warp-wide "collective" primitives</em></b>](https://nvlabs.github.com/cub/group___warp_module.html)
13
- - Warp-wide prefix scan, reduction, etc.
14
- - Safe and architecture-specific
15
- - [<b><em>Thread and resource utilities</em></b>](https://nvlabs.github.com/cub/group___thread_module.html)
16
- - PTX intrinsics, device reflection, texture-caching iterators, caching memory allocators, etc.
17
-
18
- ![Orientation of collective primitives within the CUDA software stack](http://nvlabs.github.com/cub/cub_overview.png)
19
-
20
- CUB is included in the NVIDIA HPC SDK and the CUDA Toolkit.
21
-
22
- We recommend the [CUB Project Website](http://nvlabs.github.com/cub) for further information and examples.
23
-
24
- <br><hr>
25
- <h3>A Simple Example</h3>
26
-
27
- ```C++
28
- #include <cub/cub.cuh>
29
-
30
- // Block-sorting CUDA kernel
31
- __global__ void BlockSortKernel(int *d_in, int *d_out)
32
- {
33
- using namespace cub;
34
-
35
- // Specialize BlockRadixSort, BlockLoad, and BlockStore for 128 threads
36
- // owning 16 integer items each
37
- typedef BlockRadixSort<int, 128, 16> BlockRadixSort;
38
- typedef BlockLoad<int, 128, 16, BLOCK_LOAD_TRANSPOSE> BlockLoad;
39
- typedef BlockStore<int, 128, 16, BLOCK_STORE_TRANSPOSE> BlockStore;
40
-
41
- // Allocate shared memory
42
- __shared__ union {
43
- typename BlockRadixSort::TempStorage sort;
44
- typename BlockLoad::TempStorage load;
45
- typename BlockStore::TempStorage store;
46
- } temp_storage;
47
-
48
- int block_offset = blockIdx.x * (128 * 16); // OffsetT for this block's segment
49
-
50
- // Obtain a segment of 2048 consecutive keys that are blocked across threads
51
- int thread_keys[16];
52
- BlockLoad(temp_storage.load).Load(d_in + block_offset, thread_keys);
53
- __syncthreads();
54
-
55
- // Collectively sort the keys
56
- BlockRadixSort(temp_storage.sort).Sort(thread_keys);
57
- __syncthreads();
58
-
59
- // Store the sorted segment
60
- BlockStore(temp_storage.store).Store(d_out + block_offset, thread_keys);
61
- }
62
- ```
63
-
64
- Each thread block uses `cub::BlockRadixSort` to collectively sort
65
- its own input segment. The class is specialized by the
66
- data type being sorted, by the number of threads per block, by the number of
67
- keys per thread, and implicitly by the targeted compilation architecture.
68
-
69
- The `cub::BlockLoad` and `cub::BlockStore` classes are similarly specialized.
70
- Furthermore, to provide coalesced accesses to device memory, these primitives are
71
- configured to access memory using a striped access pattern (where consecutive threads
72
- simultaneously access consecutive items) and then <em>transpose</em> the keys into
73
- a [<em>blocked arrangement</em>](index.html#sec4sec3) of elements across threads.
74
-
75
- Once specialized, these classes expose opaque `TempStorage` member types.
76
- The thread block uses these storage types to statically allocate the union of
77
- shared memory needed by the thread block. (Alternatively these storage types
78
- could be aliased to global memory allocations).
79
-
80
- <br><hr>
81
- <h3>Releases</h3>
82
-
83
- CUB is distributed with the NVIDIA HPC SDK and the CUDA Toolkit in addition
84
- to GitHub.
85
-
86
- See the [changelog](CHANGELOG.md) for details about specific releases.
87
-
88
- | CUB Release | Included In |
89
- | ------------------------- | --------------------------------------- |
90
- | 1.9.10-1 | NVIDIA HPC SDK 20.7 & CUDA Toolkit 11.1 |
91
- | 1.9.10 | NVIDIA HPC SDK 20.5 |
92
- | 1.9.9 | CUDA Toolkit 11.0 |
93
- | 1.9.8-1 | NVIDIA HPC SDK 20.3 |
94
- | 1.9.8 | CUDA Toolkit 11.0 Early Access |
96
- | 1.8.0 | |
97
- | 1.7.5 | Thrust 1.9.2 |
98
- | 1.7.4 | Thrust 1.9.1-2 |
99
- | 1.7.3 | |
100
- | 1.7.2 | |
101
- | 1.7.1 | |
102
- | 1.7.0 | Thrust 1.9.0-5 |
103
- | 1.6.4 | |
104
- | 1.6.3 | |
105
- | 1.6.2 (previously 1.5.5) | |
106
- | 1.6.1 (previously 1.5.4) | |
107
- | 1.6.0 (previously 1.5.3) | |
108
- | 1.5.2 | |
109
- | 1.5.1 | |
110
- | 1.5.0 | |
111
- | 1.4.1 | |
112
- | 1.4.0 | |
113
- | 1.3.2 | |
114
- | 1.3.1 | |
115
- | 1.3.0 | |
116
- | 1.2.3 | |
117
- | 1.2.2 | |
118
- | 1.2.0 | |
119
- | 1.1.1 | |
120
- | 1.0.2 | |
121
- | 1.0.1 | |
122
- | 0.9.4 | |
123
- | 0.9.2 | |
124
- | 0.9.1 | |
125
- | 0.9.0 | |
126
-
127
- <br><hr>
128
- <h3>Development Process</h3>
129
-
130
- CUB uses the [CMake build system](https://cmake.org/) to build unit tests,
131
- examples, and header tests. To build CUB as a developer, the following
132
- recipe should be followed:
133
-
134
- ```
135
- # Clone CUB repo from github:
136
- git clone https://github.com/thrust/cub.git
137
- cd cub
138
-
139
- # Create build directory:
140
- mkdir build
141
- cd build
142
-
143
- # Configure -- use one of the following:
144
- cmake .. # Command line interface.
145
- ccmake .. # ncurses GUI (Linux only)
146
- cmake-gui # Graphical UI, set source/build directories in the app
147
-
148
- # Build:
149
- cmake --build . -j <num jobs> # invokes make (or ninja, etc)
150
-
151
- # Run tests and examples:
152
- ctest
153
- ```
154
-
155
- By default, the C++14 standard is targeted, but this can be changed in CMake.
156
- More information on configuring your CUB build and creating a pull request is
157
- found in [CONTRIBUTING.md](CONTRIBUTING.md).
158
-
159
- <br><hr>
160
- <h3>Open Source License</h3>
161
-
162
- CUB is available under the "New BSD" open-source license:
163
-
164
- ```
165
- Copyright (c) 2010-2011, Duane Merrill. All rights reserved.
166
- Copyright (c) 2011-2018, NVIDIA CORPORATION. All rights reserved.
167
-
168
- Redistribution and use in source and binary forms, with or without
169
- modification, are permitted provided that the following conditions are met:
170
- * Redistributions of source code must retain the above copyright
171
- notice, this list of conditions and the following disclaimer.
172
- * Redistributions in binary form must reproduce the above copyright
173
- notice, this list of conditions and the following disclaimer in the
174
- documentation and/or other materials provided with the distribution.
175
- * Neither the name of the NVIDIA CORPORATION nor the
176
- names of its contributors may be used to endorse or promote products
177
- derived from this software without specific prior written permission.
178
-
179
- THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
180
- ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
181
- WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
182
- DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
183
- DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
184
- (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
185
- LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
186
- ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
187
- (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
188
- SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
189
- ```
 
spaces/CVPR/LIVE/thrust/thrust/iterator/detail/minimum_category.h DELETED
@@ -1,52 +0,0 @@
1
- /*
2
- * Copyright 2008-2013 NVIDIA Corporation
3
- *
4
- * Licensed under the Apache License, Version 2.0 (the "License");
5
- * you may not use this file except in compliance with the License.
6
- * You may obtain a copy of the License at
7
- *
8
- * http://www.apache.org/licenses/LICENSE-2.0
9
- *
10
- * Unless required by applicable law or agreed to in writing, software
11
- * distributed under the License is distributed on an "AS IS" BASIS,
12
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- * See the License for the specific language governing permissions and
14
- * limitations under the License.
15
- */
16
-
17
- #pragma once
18
-
19
- #include <thrust/detail/type_traits/minimum_type.h>
20
-
21
- namespace thrust
22
- {
23
-
24
- namespace detail
25
- {
26
-
27
- template<typename T1,
28
- typename T2 = minimum_type_detail::any_conversion,
29
- typename T3 = minimum_type_detail::any_conversion,
30
- typename T4 = minimum_type_detail::any_conversion,
31
- typename T5 = minimum_type_detail::any_conversion,
32
- typename T6 = minimum_type_detail::any_conversion,
33
- typename T7 = minimum_type_detail::any_conversion,
34
- typename T8 = minimum_type_detail::any_conversion,
35
- typename T9 = minimum_type_detail::any_conversion,
36
- typename T10 = minimum_type_detail::any_conversion,
37
- typename T11 = minimum_type_detail::any_conversion,
38
- typename T12 = minimum_type_detail::any_conversion,
39
- typename T13 = minimum_type_detail::any_conversion,
40
- typename T14 = minimum_type_detail::any_conversion,
41
- typename T15 = minimum_type_detail::any_conversion,
42
- typename T16 = minimum_type_detail::any_conversion>
43
- struct minimum_category
44
- : minimum_type<T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16>
45
- {
46
- }; // end minimum_category
47
-
48
- } // end detail
49
-
50
- } // end thrust
51
-
52
-
 
spaces/CVPR/MonoScene/monoscene/modules.py DELETED
@@ -1,194 +0,0 @@
1
- import torch
2
- import torch.nn as nn
3
- from monoscene.DDR import Bottleneck3D
4
-
5
-
6
- class ASPP(nn.Module):
7
- """
8
- ASPP 3D
9
- Adapt from https://github.com/cv-rits/LMSCNet/blob/main/LMSCNet/models/LMSCNet.py#L7
10
- """
11
-
12
- def __init__(self, planes, dilations_conv_list):
13
- super().__init__()
14
-
15
- # ASPP Block
16
- self.conv_list = dilations_conv_list
17
- self.conv1 = nn.ModuleList(
18
- [
19
- nn.Conv3d(
20
- planes, planes, kernel_size=3, padding=dil, dilation=dil, bias=False
21
- )
22
- for dil in dilations_conv_list
23
- ]
24
- )
25
- self.bn1 = nn.ModuleList(
26
- [nn.BatchNorm3d(planes) for dil in dilations_conv_list]
27
- )
28
- self.conv2 = nn.ModuleList(
29
- [
30
- nn.Conv3d(
31
- planes, planes, kernel_size=3, padding=dil, dilation=dil, bias=False
32
- )
33
- for dil in dilations_conv_list
34
- ]
35
- )
36
- self.bn2 = nn.ModuleList(
37
- [nn.BatchNorm3d(planes) for dil in dilations_conv_list]
38
- )
39
- self.relu = nn.ReLU()
40
-
41
- def forward(self, x_in):
42
-
43
- y = self.bn2[0](self.conv2[0](self.relu(self.bn1[0](self.conv1[0](x_in)))))
44
- for i in range(1, len(self.conv_list)):
45
- y += self.bn2[i](self.conv2[i](self.relu(self.bn1[i](self.conv1[i](x_in)))))
46
- x_in = self.relu(y + x_in) # modified
47
-
48
- return x_in
49
-
50
-
51
- class SegmentationHead(nn.Module):
52
- """
53
- 3D Segmentation heads to retrieve semantic segmentation at each scale.
54
- Formed by Dim expansion, Conv3D, ASPP block, Conv3D.
55
- Taken from https://github.com/cv-rits/LMSCNet/blob/main/LMSCNet/models/LMSCNet.py#L7
56
- """
57
-
58
- def __init__(self, inplanes, planes, nbr_classes, dilations_conv_list):
59
- super().__init__()
60
-
61
- # First convolution
62
- self.conv0 = nn.Conv3d(inplanes, planes, kernel_size=3, padding=1, stride=1)
63
-
64
- # ASPP Block
65
- self.conv_list = dilations_conv_list
66
- self.conv1 = nn.ModuleList(
67
- [
68
- nn.Conv3d(
69
- planes, planes, kernel_size=3, padding=dil, dilation=dil, bias=False
70
- )
71
- for dil in dilations_conv_list
72
- ]
73
- )
74
- self.bn1 = nn.ModuleList(
75
- [nn.BatchNorm3d(planes) for dil in dilations_conv_list]
76
- )
77
- self.conv2 = nn.ModuleList(
78
- [
79
- nn.Conv3d(
80
- planes, planes, kernel_size=3, padding=dil, dilation=dil, bias=False
81
- )
82
- for dil in dilations_conv_list
83
- ]
84
- )
85
- self.bn2 = nn.ModuleList(
86
- [nn.BatchNorm3d(planes) for dil in dilations_conv_list]
87
- )
88
- self.relu = nn.ReLU()
89
-
90
- self.conv_classes = nn.Conv3d(
91
- planes, nbr_classes, kernel_size=3, padding=1, stride=1
92
- )
93
-
94
- def forward(self, x_in):
95
-
96
- # Convolution to go from inplanes to planes features...
97
- x_in = self.relu(self.conv0(x_in))
98
-
99
- y = self.bn2[0](self.conv2[0](self.relu(self.bn1[0](self.conv1[0](x_in)))))
100
- for i in range(1, len(self.conv_list)):
101
- y += self.bn2[i](self.conv2[i](self.relu(self.bn1[i](self.conv1[i](x_in)))))
102
- x_in = self.relu(y + x_in) # modified
103
-
104
- x_in = self.conv_classes(x_in)
105
-
106
- return x_in
107
-
108
-
109
- class ProcessKitti(nn.Module):
110
- def __init__(self, feature, norm_layer, bn_momentum, dilations=[1, 2, 3]):
111
- super(ProcessKitti, self).__init__()
112
- self.main = nn.Sequential(
113
- *[
114
- Bottleneck3D(
115
- feature,
116
- feature // 4,
117
- bn_momentum=bn_momentum,
118
- norm_layer=norm_layer,
119
- dilation=[i, i, i],
120
- )
121
- for i in dilations
122
- ]
123
- )
124
-
125
- def forward(self, x):
126
- return self.main(x)
127
-
128
-
129
- class Process(nn.Module):
130
- def __init__(self, feature, norm_layer, bn_momentum, dilations=[1, 2, 3]):
131
- super(Process, self).__init__()
132
- self.main = nn.Sequential(
133
- *[
134
- Bottleneck3D(
135
- feature,
136
- feature // 4,
137
- bn_momentum=bn_momentum,
138
- norm_layer=norm_layer,
139
- dilation=[i, i, i],
140
- )
141
- for i in dilations
142
- ]
143
- )
144
-
145
- def forward(self, x):
146
- return self.main(x)
147
-
148
-
149
- class Upsample(nn.Module):
150
- def __init__(self, in_channels, out_channels, norm_layer, bn_momentum):
151
- super(Upsample, self).__init__()
152
- self.main = nn.Sequential(
153
- nn.ConvTranspose3d(
154
- in_channels,
155
- out_channels,
156
- kernel_size=3,
157
- stride=2,
158
- padding=1,
159
- dilation=1,
160
- output_padding=1,
161
- ),
162
- norm_layer(out_channels, momentum=bn_momentum),
163
- nn.ReLU(),
164
- )
165
-
166
- def forward(self, x):
167
- return self.main(x)
168
-
169
-
170
- class Downsample(nn.Module):
171
- def __init__(self, feature, norm_layer, bn_momentum, expansion=8):
172
- super(Downsample, self).__init__()
173
- self.main = Bottleneck3D(
174
- feature,
175
- feature // 4,
176
- bn_momentum=bn_momentum,
177
- expansion=expansion,
178
- stride=2,
179
- downsample=nn.Sequential(
180
- nn.AvgPool3d(kernel_size=2, stride=2),
181
- nn.Conv3d(
182
- feature,
183
- int(feature * expansion / 4),
184
- kernel_size=1,
185
- stride=1,
186
- bias=False,
187
- ),
188
- norm_layer(int(feature * expansion / 4), momentum=bn_momentum),
189
- ),
190
- norm_layer=norm_layer,
191
- )
192
-
193
- def forward(self, x):
194
- return self.main(x)
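As a quick sanity check of the blocks defined above, the hedged sketch below instantiates `SegmentationHead` on a dummy voxel feature volume; the channel count, class count, and grid size are illustrative values only, not taken from any MonoScene config.

```python
# Hedged usage sketch -- illustrative shapes, not from MonoScene configs.
import torch

head = SegmentationHead(inplanes=64, planes=32, nbr_classes=20,
                        dilations_conv_list=[1, 2, 3])
x = torch.randn(1, 64, 16, 16, 16)   # (batch, channels, X, Y, Z) dummy voxel features
logits = head(x)                     # spatial size preserved -> (1, 20, 16, 16, 16)
print(logits.shape)
```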
 
spaces/CVPR/drawings-to-human/main.py DELETED
@@ -1,3 +0,0 @@
1
- import subprocess
2
-
3
- subprocess.run(["make", "build-all"], shell=False)
 
 
 
 
spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/__init__.py DELETED
@@ -1,35 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates.
2
- from .box_head import ROI_BOX_HEAD_REGISTRY, build_box_head, FastRCNNConvFCHead
3
- from .keypoint_head import (
4
- ROI_KEYPOINT_HEAD_REGISTRY,
5
- build_keypoint_head,
6
- BaseKeypointRCNNHead,
7
- KRCNNConvDeconvUpsampleHead,
8
- )
9
- from .mask_head import (
10
- ROI_MASK_HEAD_REGISTRY,
11
- build_mask_head,
12
- BaseMaskRCNNHead,
13
- MaskRCNNConvUpsampleHead,
14
- )
15
- from .roi_heads import (
16
- ROI_HEADS_REGISTRY,
17
- ROIHeads,
18
- Res5ROIHeads,
19
- StandardROIHeads,
20
- build_roi_heads,
21
- select_foreground_proposals,
22
- )
23
- from .clip_roi_heads import (
24
- CLIPRes5ROIHeads,
25
- CLIPSwinROIHeads,
26
- PretrainRes5ROIHeads,
27
- CLIPStandardROIHeads,
28
- )
29
- from .cascade_rcnn import CascadeROIHeads
30
- from .rotated_fast_rcnn import RROIHeads
31
- from .fast_rcnn import FastRCNNOutputLayers
32
-
33
- from . import cascade_rcnn # isort:skip
34
-
35
- __all__ = list(globals().keys())
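The registry exported above is normally consumed through `build_roi_heads`; the hedged sketch below shows the stock detectron2 pattern. `get_cfg` and `build_backbone` come from detectron2 itself, and the RegionCLIP-specific heads listed here may require additional config keys beyond the defaults.

```python
# Hedged sketch of registry-based construction (standard detectron2 usage;
# the RegionCLIP-specific heads above may need extra config entries).
from detectron2.config import get_cfg
from detectron2.modeling import build_backbone
from detectron2.modeling.roi_heads import build_roi_heads

cfg = get_cfg()                                   # cfg.MODEL.ROI_HEADS.NAME selects the registered class
backbone = build_backbone(cfg)                    # provides the input ShapeSpec dict
roi_heads = build_roi_heads(cfg, backbone.output_shape())
print(type(roi_heads).__name__)                   # e.g. Res5ROIHeads with the default config
```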
 
spaces/CarlDennis/Lovelive-VITS-JPZH/README.md DELETED
@@ -1,13 +0,0 @@
1
- ---
2
- title: Lovelive VITS JPZH
3
- emoji: 📈
4
- colorFrom: purple
5
- colorTo: yellow
6
- sdk: gradio
7
- sdk_version: 3.4.1
8
- app_file: app.py
9
- pinned: false
10
- license: cc-by-nc-3.0
11
- ---
12
-
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/ChallengeHub/Chinese-LangChain/tests/test_duckduckgo_search.py DELETED
@@ -1,16 +0,0 @@
1
- from duckduckgo_search import ddg
2
- from duckduckgo_search.utils import SESSION
3
-
4
-
5
- SESSION.proxies = {
6
- "http": f"socks5h://localhost:7890",
7
- "https": f"socks5h://localhost:7890"
8
- }
9
- r = ddg("马保国")
10
- print(r[:2])
11
- """
12
- [{'title': '马保国 - 维基百科,自由的百科全书', 'href': 'https://zh.wikipedia.org/wiki/%E9%A9%AC%E4%BF%9D%E5%9B%BD', 'body': '马保国(1951年 — ) ,男,籍贯 山东 临沂,出生及长大于河南,中国大陆太极拳师,自称"浑元形意太极门掌门人" 。 马保国因2017年约战mma格斗家徐晓冬首次出现
13
- 大众视野中。 2020年5月,马保国在对阵民间武术爱好者王庆民的比赛中,30秒内被连续高速击倒三次,此事件成为了持续多日的社交 ...'}, {'title': '馬保國的主页 - 抖音', 'href': 'https://www.douyin.com/user/MS4wLjABAAAAW0E1ziOvxgUh3VVv5FE6xmoo3w5WtZalfphYZKj4mCg', 'body': '6.3万. #马马国教扛打功 最近有几个人模芳我动作,很危险啊,不可以的,朋友们不要受伤了。. 5.3万. #马保国直播带货榜第一 朋友们周末愉快,本周六早上湿点,我本人在此号进行第一次带货直播,活到老,学到老,越活越年轻。. 7.0万. #马保国击破红牛罐 昨天 ...'}]
14
-
15
-
16
- """
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/andrew_alpha/0_object_detection_model/GroundingDINO_SwinT_OGC.cfg.py DELETED
@@ -1,43 +0,0 @@
1
- batch_size = 1
2
- modelname = "groundingdino"
3
- backbone = "swin_T_224_1k"
4
- position_embedding = "sine"
5
- pe_temperatureH = 20
6
- pe_temperatureW = 20
7
- return_interm_indices = [1, 2, 3]
8
- backbone_freeze_keywords = None
9
- enc_layers = 6
10
- dec_layers = 6
11
- pre_norm = False
12
- dim_feedforward = 2048
13
- hidden_dim = 256
14
- dropout = 0.0
15
- nheads = 8
16
- num_queries = 900
17
- query_dim = 4
18
- num_patterns = 0
19
- num_feature_levels = 4
20
- enc_n_points = 4
21
- dec_n_points = 4
22
- two_stage_type = "standard"
23
- two_stage_bbox_embed_share = False
24
- two_stage_class_embed_share = False
25
- transformer_activation = "relu"
26
- dec_pred_bbox_embed_share = True
27
- dn_box_noise_scale = 1.0
28
- dn_label_noise_ratio = 0.5
29
- dn_label_coef = 1.0
30
- dn_bbox_coef = 1.0
31
- embed_init_tgt = True
32
- dn_labelbook_size = 2000
33
- max_text_len = 256
34
- text_encoder_type = "bert-base-uncased"
35
- use_text_enhancer = True
36
- use_fusion_layer = True
37
- use_checkpoint = True
38
- use_transformer_ckpt = True
39
- use_text_cross_attention = True
40
- text_dropout = 0.0
41
- fusion_dropout = 0.0
42
- fusion_droppath = 0.1
43
- sub_sentence_present = True
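A flat config module like the one above is typically loaded and turned into a model as sketched below. The import paths (`groundingdino.util.slconfig`, `groundingdino.models`) and the checkpoint filename are assumptions about the upstream GroundingDINO layout, not something defined in this Space; adjust them if the vendored copy differs.

```python
# Hedged sketch -- assumes the upstream GroundingDINO package layout;
# the checkpoint path is a hypothetical placeholder.
import torch
from groundingdino.util.slconfig import SLConfig
from groundingdino.models import build_model

args = SLConfig.fromfile("GroundingDINO_SwinT_OGC.cfg.py")   # the flat config above
args.device = "cuda" if torch.cuda.is_available() else "cpu"
model = build_model(args)

ckpt = torch.load("groundingdino_swint_ogc.pth", map_location="cpu")  # hypothetical weights file
model.load_state_dict(ckpt["model"], strict=False)
model.eval()
```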
 
spaces/CikeyQI/Yunzai/Yunzai/plugins/other/restart.js DELETED
@@ -1,122 +0,0 @@
1
- import plugin from '../../lib/plugins/plugin.js'
2
- import { createRequire } from 'module'
3
-
4
- const require = createRequire(import.meta.url)
5
- const { exec } = require('child_process')
6
-
7
- export class Restart extends plugin {
8
- constructor (e = '') {
9
- super({
10
- name: '重启',
11
- dsc: '#重启',
12
- event: 'message',
13
- priority: 10,
14
- rule: [{
15
- reg: '^#重启$',
16
- fnc: 'restart',
17
- permission: 'master'
18
- }, {
19
- reg: '^#(停机|关机)$',
20
- fnc: 'stop',
21
- permission: 'master'
22
- }]
23
- })
24
-
25
- if (e) this.e = e
26
-
27
- this.key = 'Yz:restart'
28
- }
29
-
30
- async init () {
31
- let restart = await redis.get(this.key)
32
- if (restart) {
33
- restart = JSON.parse(restart)
34
- let time = restart.time || new Date().getTime()
35
- time = (new Date().getTime() - time) / 1000
36
-
37
- let msg = `重启成功:耗时${time.toFixed(2)}秒`
38
-
39
- if (restart.isGroup)
40
- Bot.sendGroupMsg(restart.bot_id, restart.id, msg)
41
- else
42
- Bot.sendFriendMsg(restart.bot_id, restart.id, msg)
43
-
44
- redis.del(this.key)
45
- }
46
- }
47
-
48
- async restart () {
49
- await this.e.reply('开始执行重启,请稍等...')
50
- logger.mark(`${this.e.logFnc} 开始执行重启,请稍等...`)
51
-
52
- let data = JSON.stringify({
53
- isGroup: !!this.e.isGroup,
54
- id: this.e.isGroup ? this.e.group_id : this.e.user_id,
55
- bot_id: this.e.self_id,
56
- time: new Date().getTime()
57
- })
58
-
59
- let npm = await this.checkPnpm()
60
-
61
- try {
62
- await redis.set(this.key, data, { EX: 120 })
63
- let cm = `${npm} start`
64
- if (process.argv[1].includes('pm2')) {
65
- cm = `${npm} run restart`
66
- }
67
-
68
- exec(cm, { windowsHide: true }, (error, stdout, stderr) => {
69
- if (error) {
70
- redis.del(this.key)
71
- this.e.reply(`操作失败!\n${error.stack}`)
72
- logger.error(`重启失败\n${error.stack}`)
73
- } else if (stdout) {
74
- logger.mark('重启成功,运行已由前台转为后台')
75
- logger.mark(`查看日志请用命令:${npm} run log`)
76
- logger.mark(`停止后台运行命令:${npm} stop`)
77
- process.exit()
78
- }
79
- })
80
- } catch (error) {
81
- redis.del(this.key)
82
- let e = error.stack ?? error
83
- this.e.reply(`操作失败!\n${e}`)
84
- }
85
-
86
- return true
87
- }
88
-
89
- async checkPnpm () {
90
- let npm = 'npm'
91
- let ret = await this.execSync('pnpm -v')
92
- if (ret.stdout) npm = 'pnpm'
93
- return npm
94
- }
95
-
96
- async execSync (cmd) {
97
- return new Promise((resolve, reject) => {
98
- exec(cmd, { windowsHide: true }, (error, stdout, stderr) => {
99
- resolve({ error, stdout, stderr })
100
- })
101
- })
102
- }
103
-
104
- async stop () {
105
- if (!process.argv[1].includes('pm2')) {
106
- logger.mark('关机成功,已停止运行')
107
- await this.e.reply('关机成功,已停止运行')
108
- process.exit()
109
- }
110
-
111
- logger.mark('关机成功,已停止运行')
112
- await this.e.reply('关机成功,已停止运行')
113
-
114
- let npm = await this.checkPnpm()
115
- exec(`${npm} stop`, { windowsHide: true }, (error, stdout, stderr) => {
116
- if (error) {
117
- this.e.reply(`操作失败!\n${error.stack}`)
118
- logger.error(`关机失败\n${error.stack}`)
119
- }
120
- })
121
- }
122
- }
 
spaces/CikeyQI/meme-api/meme_generator/memes/ascension/__init__.py DELETED
@@ -1,35 +0,0 @@
1
- from pathlib import Path
2
- from typing import List
3
-
4
- from pil_utils import BuildImage
5
-
6
- from meme_generator import add_meme
7
- from meme_generator.exception import TextOverLength
8
-
9
- img_dir = Path(__file__).parent / "images"
10
-
11
-
12
- def ascension(images, texts: List[str], args):
13
- frame = BuildImage.open(img_dir / "0.png")
14
- text = f"你原本应该要去地狱的,但因为你生前{texts[0]},我们就当作你已经服完刑期了"
15
- try:
16
- frame.draw_text(
17
- (40, 30, 482, 135),
18
- text,
19
- allow_wrap=True,
20
- max_fontsize=50,
21
- min_fontsize=20,
22
- )
23
- except ValueError:
24
- raise TextOverLength(texts[0])
25
- return frame.save_jpg()
26
-
27
-
28
- add_meme(
29
- "ascension",
30
- ascension,
31
- min_texts=1,
32
- max_texts=1,
33
- default_texts=["学的是机械"],
34
- keywords=["升天"],
35
- )
 
spaces/Cong723/gpt-academic-public/docs/README_RS.md DELETED
@@ -1,291 +0,0 @@
1
- > **Note**
2
- >
3
- > Этот файл самовыражения автоматически генерируется модулем перевода markdown в этом проекте и может быть не на 100% правильным.
4
- >
5
-
6
- # <img src="logo.png" width="40" > ChatGPT Academic Optimization
7
-
8
- **Если вам понравился этот проект, пожалуйста, поставьте ему звезду. Если вы придумали более полезные академические ярлыки или функциональные плагины, не стесняйтесь создавать запросы на изменение или пул-запросы. Мы также имеем [README на английском языке](docs/README_EN.md), переведенный этим же проектом.
9
-
10
- > **Примечание**
11
- >
12
- > 1. Пожалуйста, обратите внимание, что только функциональные плагины (кнопки) с **красным цветом** могут читать файлы, некоторые из которых находятся в **выпадающем меню** плагинов. Кроме того, мы приветствуем и обрабатываем любые новые плагины с **наивысшим приоритетом**!
13
- >
14
- > 2. Функции каждого файла в этом проекте подробно описаны в собственном анализе [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) . При повторных итерациях вы также можете вызывать обновленный отчет функций проекта, щелкнув соответствующий функциональный плагин GPT. Часто задаваемые вопросы собраны в [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98) .
15
-
16
- <div align="center">
17
-
18
- Функция | Описание
19
- --- | ---
20
- Редактирование одним кликом | Поддержка редактирования одним кликом, поиск грамматических ошибок в академических статьях
21
- Переключение языков "Английский-Китайский" одним кликом | Одним кликом переключайте языки "Английский-Китайский"
22
- Разъяснение программного кода одним кликом | Вы можете правильно отобразить и объяснить программный код.
23
- [Настраиваемые сочетания клавиш](https://www.bilibili.com/video/BV14s4y1E7jN) | Поддержка настраиваемых сочетаний клавиш
24
- [Настройка сервера-прокси](https://www.bilibili.com/video/BV1rc411W7Dr) | Поддержка настройки сервера-прокси
25
- Модульный дизайн | Поддержка настраиваемых функциональных плагинов высших порядков и функциональных плагинов, поддерживающих [горячее обновление](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
26
- [Автоанализ программы](https://www.bilibili.com/video/BV1cj411A7VW) | [Функциональный плагин] [Прочтение в один клик](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) кода программы проекта
27
- [Анализ программы](https://www.bilibili.com/video/BV1cj411A7VW) | [Функциональный плагин] Один клик для проанализирования дерева других проектов Python/C/C++/Java/Lua/...
28
- Чтение статей| [Функциональный плагин] Одним кликом прочитайте весь латех (LaTex) текст статьи и сгенерируйте краткое описание
29
- Перевод и редактирование всех статей из LaTex | [Функциональный плагин] Перевод или редактирование LaTex-статьи всего одним нажатием кнопки
30
- Генерация комментариев в пакетном режиме | [Функциональный плагин] Одним кликом сгенерируйте комментарии к функциям в пакетном режиме
31
- Генерация отчетов пакета CHAT | [Функциональный плагин] Автоматически создавайте сводные отчеты после выполнения
32
- [Помощник по arxiv](https://www.bilibili.com/video/BV1LM4y1279X) | [Функциональный плагин] Введите URL статьи arxiv, чтобы легко перевести резюме и загрузить PDF-файл
33
- [Перевод полного текста статьи в формате PDF](https://www.bilibili.com/video/BV1KT411x7Wn) | [Функциональный плагин] Извлеките заголовок статьи, резюме и переведите весь текст статьи (многопоточно)
34
- [Помощник интеграции Google Scholar](https://www.bilibili.com/video/BV19L411U7ia) | [Функциональный плагин] Дайте GPT выбрать для вас интересные статьи на любой странице поиска Google Scholar.
35
- Отображение формул/изображений/таблиц | Одновременно отображается tex-форма и рендер-форма формул, поддержка формул, высокоскоростных кодов
36
- Поддержка функциональных плагинов многопоточности | Поддержка многопоточной работы с плагинами, обрабатывайте огромные объемы текста или программы одним кликом
37
- Запуск темной темы gradio[подробнее](https://github.com/binary-husky/chatgpt_academic/issues/173) | Добавьте / ?__dark-theme=true в конец URL браузера, чтобы переключиться на темную тему.
38
- [Поддержка нескольких моделей LLM](https://www.bilibili.com/video/BV1wT411p7yf), поддержка API2D | Находиться между GPT3.5, GPT4 и [清华ChatGLM](https://github.com/THUDM/ChatGLM-6B) должно быть очень приятно, не так ли?
39
- Альтернатива huggingface без использования научной сети [Онлайн-эксперимент](https://huggingface.co/spaces/qingxu98/gpt-academic) | Войдите в систему, скопируйте пространство [этот пространственный URL](https://huggingface.co/spaces/qingxu98/gpt-academic)
40
- …… | ……
41
-
42
-
43
- </div>
44
-
45
- - Новый интерфейс (вы можете изменить настройку LAYOUT в config.py, чтобы переключаться между "горизонтальным расположением" и "вертикальным расположением")
46
- <div align="center">
47
- <img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
48
- </div>
49
-
50
-
51
52
-
53
- - Все кнопки генерируются динамически путем чтения functional.py и могут быть легко настроены под пользовательские потребности, освобождая буфер обмена.
54
- <div align="center">
55
- <img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
56
- </div>
57
-
58
- - Редактирование/корректирование
59
- <div align="center">
60
- <img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
61
- </div>
62
-
63
- - Если вывод содержит формулы, они отображаются одновременно как в формате tex, так и в рендеринговом формате для удобства копирования и чтения.
64
- <div align="center">
65
- <img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
66
- </div>
67
-
68
- - Лень смотреть код проекта? Просто покажите chatgpt.
69
- <div align="center">
70
- <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
71
- </div>
72
-
73
- - Несколько моделей больших языковых моделей смешиваются (ChatGLM + OpenAI-GPT3.5 + [API2D] (https://api2d.com/) -GPT4)
74
- <div align="center">
75
- <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
76
- </div>
77
-
78
- Несколько моделей больших языковых моделей смешиваются в [бета-версии huggingface] (https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta) (huggingface-версия не поддерживает chatglm).
79
-
80
-
81
- ---
82
-
83
- ## Установка - Метод 1: Запуск (Windows, Linux или MacOS)
84
-
85
- 1. Скачайте проект
86
- ```sh
87
- git clone https://github.com/binary-husky/chatgpt_academic.git
88
- cd chatgpt_academic
89
- ```
90
-
91
- 2. Настройка API_KEY и настройки прокси
92
-
93
- В файле `config.py` настройте зарубежный прокси и OpenAI API KEY, пояснения ниже
94
- ```
95
- 1. Если вы находитесь в Китае, вам нужно настроить зарубежный прокси, чтобы использовать OpenAI API. Пожалуйста, внимательно прочитайте config.py для получения инструкций (1. Измените USE_PROXY на True; 2. Измените прокси в соответствии с инструкциями).
96
- 2. Настройка API KEY OpenAI. Вам необходимо зарегистрироваться на сайте OpenAI и получить API KEY. После получения API KEY настройте его в файле config.py.
97
- 3. Вопросы, связанные с сетевыми проблемами (тайм-аут сети, прокси не работает), можно найти здесь: https://github.com/binary-husky/chatgpt_academic/issues/1
98
- ```
99
- (Примечание: при запуске программы будет проверяться наличие конфиденциального файла конфигурации с именем `config_private.py` и использоваться в нем конфигурация параметров, которая перезаписывает параметры с такими же именами в `config.py`. Поэтому, если вы понимаете логику чтения нашей конфигурации, мы настоятельно рекомендуем вам создать новый файл конфигурации с именем `config_private.py` рядом с `config.py` и переместить (скопировать) настройки из `config.py` в `config_private.py`. `config_private.py` не подвергается контролю git, что делает конфиденциальную информацию более безопасной.)
100
-
101
-
102
- 3. Установить зависимости
103
- ```sh
104
- # (Выбор 1) Рекомендуется
105
- python -m pip install -r requirements.txt
106
-
107
- # (Выбор 2) Если вы используете anaconda, то шаги будут аналогичны:
108
- # (Шаг 2.1) conda create -n gptac_venv python=3.11
109
- # (Шаг 2.2) conda activate gptac_venv
110
- # (Шаг 2.3) python -m pip install -r requirements.txt
111
-
112
- # Примечание: используйте официальный источник pip или источник pip.aliyun.com. Другие источники pip могут вызывать проблемы. временный метод замены источника:
113
- # python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
114
- ```
115
-
116
- Если требуется поддержка TUNA ChatGLM, необходимо установить дополнительные зависимости (если вы неудобны с python, необходимо иметь хорошую конфигурацию компьютера):
117
- ```sh
118
- python -m pip install -r request_llm/requirements_chatglm.txt
119
- ```
120
-
121
- 4. Запустите
122
- ```sh
123
- python main.py
124
- ```
125
-
126
- 5. Тестовые функции плагина
127
- ```
128
- - Тестирвоание анализа проекта Python
129
- В основной области введите `./crazy_functions/test_project/python/dqn` , а затем нажмите "Анализировать весь проект Python"
130
- - Тестирование самостоятельного чтения кода
131
- Щелкните " [Демонстрационный режим многопоточности] Проанализируйте сам проект (расшифровка источника кода)"
132
- - Тестирование функций шаблонного плагина (вы можете использовать эту функцию как шаблон для более сложных функций, требующих ответа от gpt в связи с тем, что произошло сегодня в истории)
133
- Щелкните " [Функции шаблонного плагина] День в истории"
134
- - На нижней панели дополнительные функции для выбора
135
- ```
136
-
137
- ## Установка - Метод 2: Использование docker (Linux)
138
-
139
-
140
- 1. Только ChatGPT (рекомендуется для большинства пользователей):
141
- ``` sh
142
- # Скачать проект
143
- git clone https://github.com/binary-husky/chatgpt_academic.git
144
- cd chatgpt_academic
145
- # Настроить прокси за границей и OpenAI API KEY
146
- Отредактируйте файл config.py в любом текстовом редакторе.
147
- # Установка
148
- docker build -t gpt-academic .
149
- # Запустить
150
- docker run --rm -it --net=host gpt-academic
151
-
152
- # Проверка функциональности плагина
153
- ## Проверка шаблонной функции плагина (требуется, чтобы gpt ответил, что произошло "в истории на этот день"), вы можете использовать эту функцию в качестве шаблона для реализации более сложных функций.
154
- Нажмите "[Шаблонный демонстрационный плагин] История на этот день".
155
- ## Тест абстрактного резюме для проекта на Latex
156
- В области ввода введите ./crazy_functions/test_project/latex/attention, а затем нажмите "Чтение реферата о тезисах статьи на LaTeX".
157
- ## Тестовый анализ проекта на Python
158
- Введите в область ввода ./crazy_functions/test_project/python/dqn, затем нажмите "Проанализировать весь проект на Python".
159
-
160
- Выбирайте больше функциональных плагинов в нижнем выпадающем меню.
161
- ```
162
-
163
- 2. ChatGPT + ChatGLM (требуется глубокое знание Docker и достаточно мощное компьютерное оборудование):
164
-
165
- ``` sh
166
- # Изменение Dockerfile
167
- cd docs && nano Dockerfile+ChatGLM
168
- # Как построить | Как запустить (Dockerfile+ChatGLM в пути docs, сначала перейдите в папку с помощью cd docs)
169
- docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM .
170
- # Как запустить | Как запустить (2) я хочу войти в контейнер и сделать какие-то настройки до запуска:
171
- docker run --rm -it --net=host --gpus=all gpt-academic bash
172
- ```
173
-
174
-
175
- ## Установка-Метод 3: Другие способы развертывания
176
-
177
- 1. Развертывание на удаленном облачном сервере
178
- Пожалуйста, посетите [Deploy Wiki-1] (https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
179
-
180
- 2. Использование WSL2 (Windows Subsystem for Linux)
181
- Пожалуйста, посетите [Deploy Wiki-2] (https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
182
-
183
-
184
- ## Установка-Настройки прокси
185
- ### Метод 1: Обычный способ
186
- [Конфигурация прокси] (https://github.com/binary-husky/chatgpt_academic/issues/1)
187
-
188
- ### Метод 2: Руководство новичка
189
- [Руководство новичка] (https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)
190
-
191
-
192
- ---
193
-
194
- ## Настройка новой удобной кнопки (настройка быстрой клавиши для научной работы)
195
- Откройте `core_functional.py` любым текстовым редактором, добавьте элементы, как показано ниже, затем перезапустите программу. (Если кнопка уже успешно добавлена и видна, то префикс и суффикс поддерживают горячее изменение, чтобы они оказались в действии, не нужно перезапускать программу.)
196
- например
197
- ```
198
- "Супер анг-рус": {
199
- # Префикс, будет добавлен перед вашим вводом. Например, используется для описания ваших потребностей, таких как перевод, кодинг, редактирование и т. д.
200
- "Prefix": "Пожалуйста, переведите этот фрагмент на русский язык, а затем создайте пошаговую таблицу в markdown, чтобы объяснить все специализированные термины, которые встречаются в тексте:\n\n",
201
-
202
- # Суффикс, будет добавлен после вашего ввода. Например, совместно с префиксом можно обрамить ваш ввод в кавычки.
203
- "Suffix": "",
204
- },
205
- ```
206
- <div align="center">
207
- <img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
208
- </div>
209
-
210
- ---
211
-
212
-
213
- ## Демонстрация некоторых возможностей
214
-
215
- ### Отображение изображений:
216
-
217
- <div align="center">
218
- <img src="https://user-images.githubusercontent.com/96192199/228737599-bf0a9d9c-1808-4f43-ae15-dfcc7af0f295.png" width="800" >
219
- </div>
220
-
221
-
222
- ### Если программа может понимать и разбирать сама себя:
223
-
224
- <div align="center">
225
- <img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="800" >
226
- </div>
227
-
228
- <div align="center">
229
- <img src="https://user-images.githubusercontent.com/96192199/226936618-9b487e4b-ab5b-4b6e-84c6-16942102e917.png" width="800" >
230
- </div>
231
-
232
-
233
- ### Анализ других проектов на Python/Cpp:
234
- <div align="center">
235
- <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="800" >
236
- </div>
237
-
238
- <div align="center">
239
- <img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="800" >
240
- </div>
241
-
242
- ### Генерация понимания и абстрактов с помощью Латех статей в один клик
243
- <div align="center">
244
- <img src="https://user-images.githubusercontent.com/96192199/227504406-86ab97cd-f208-41c3-8e4a-7000e51cf980.png" width="800" >
245
- </div>
246
-
247
- ### Автоматическое создание отчетов
248
- <div align="center">
249
- <img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
250
- <img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
251
- <img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
252
- </div>
253
-
254
- ### Модульный дизайн функций
255
- <div align="center">
256
- <img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
257
- <img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
258
- </div>
259
-
260
-
261
- ### Трансляция исходного кода на английский язык
262
-
263
- <div align="center">
264
- <img src="https://user-images.githubusercontent.com/96192199/229720562-fe6c3508-6142-4635-a83d-21eb3669baee.png" height="400" >
265
- </div>
266
-
267
- ## Todo и планирование версий:
268
- - version 3.2+ (todo): функция плагины поддерживают более многочисленные интерфейсы параметров
269
- - version 3.1: поддержка одновременного опроса нескольких моделей gpt! Поддержка api2d, поддержка балансировки нагрузки множества apikey.
270
- - version 3.0: поддержка chatglm и других маленьких llm
271
- - version 2.6: реструктурировал структуру плагинов, повысил интерактивность, добавил больше плагинов
272
- - version 2.5: само обновление, решение проблемы слишком длинного текста и переполнения токена при переводе всего проекта исходного кода
273
- - version 2.4: (1) добавлена функция перевода всего PDF-документа; (2) добавлена функция изменения положения входной области; (3) добавлена опция вертикального макета; (4) оптимизация функций многопоточности плагина.
274
- - version 2.3: улучшение многопоточной интерактивности
275
- - version 2.2: функция плагинов поддерживает горячую перезагрузку
276
- - version 2.1: блочная раскладка
277
- - version 2.0: модульный дизайн функций плагина
278
- - version 1.0: основные функции
279
-
280
- ## Ссылки на изучение и обучение
281
-
282
- ```
283
- В коде использовано много хороших дизайнерских решений из других отличных проектов, в том числе:
284
-
285
- # Project1: использование многих приемов из ChuanhuChatGPT
286
- https://github.com/GaiZhenbiao/ChuanhuChatGPT
287
-
288
- # Project2: ChatGLM-6B в Тхуде:
289
- https://github.com/THUDM/ChatGLM-6B
290
- ```
291
-
 
spaces/Cropinky/esrgan/realesrgan/models/__init__.py DELETED
@@ -1,10 +0,0 @@
1
- import importlib
2
- from basicsr.utils import scandir
3
- from os import path as osp
4
-
5
- # automatically scan and import model modules for registry
6
- # scan all the files that end with '_model.py' under the model folder
7
- model_folder = osp.dirname(osp.abspath(__file__))
8
- model_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')]
9
- # import all the model modules
10
- _model_modules = [importlib.import_module(f'realesrgan.models.{file_name}') for file_name in model_filenames]
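
The auto-import above is only half of the registration pattern: each scanned `*_model.py` is expected to register its model class at import time, presumably via basicsr's `MODEL_REGISTRY` as the project's own model modules do. A minimal sketch of such a module (the class name and body are illustrative, not part of the original file):

```python
# hypothetical realesrgan/models/example_model.py -- illustrative sketch only
from basicsr.models.sr_model import SRModel
from basicsr.utils.registry import MODEL_REGISTRY


@MODEL_REGISTRY.register()  # importing this module is enough to add the class to the registry
class ExampleModel(SRModel):
    """Placeholder model; a real one would override the training/validation logic."""
    pass
```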
 
 
 
 
 
 
 
 
 
 
 
spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/proc.py DELETED
@@ -1,51 +0,0 @@
- # Small helpers around multiprocessing pools and shell-level process management.
- import multiprocessing
-
- def cpu_count():
-     return multiprocessing.cpu_count()
-
- def get_pool(processes):
-     pool = multiprocessing.Pool(processes=processes)
-     return pool
-
- def wait_for_pool(pool):
-     pool.close()
-     pool.join()
-
- def set_proc_name(name):
-     import setproctitle
-     setproctitle.setproctitle(name)
-
- def kill(pid):
-     # accepts a single PID, a list of PIDs, or a `ps aux` grep pattern
-     import util  # imported lazily; this module is itself part of the `util` package
-     if isinstance(pid, list):
-         for p in pid:
-             kill(p)
-     elif isinstance(pid, int):
-         cmd = 'kill -9 %d' % (pid)
-         print(cmd)
-         print(util.cmd.cmd(cmd))
-     elif isinstance(pid, str):
-         pids = get_pid(pid)
-         kill(pids)
-     else:
-         raise ValueError('Not supported parameter type: %s' % type(pid))
-
- def ps_aux_grep(pattern):
-     import util
-     cmd = 'ps aux|grep %s' % (pattern)
-     return util.cmd.cmd(cmd)
-
-
- def get_pid(pattern):
-     # parse `ps aux | grep <pattern>` output and return the matching PIDs
-     import util
-     cmd = 'ps aux|grep %s' % (pattern)
-     results = util.cmd.cmd(cmd)
-     results = util.str.split(results, '\n')
-     pids = []
-     for result in results:
-         info = result.split()
-         if len(info) > 0:
-             pid = int(info[1])
-             pids.append(pid)
-     return pids
-
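
A brief usage sketch for the helpers above; the `'python'` pattern and process name are made-up examples, and `util.cmd.cmd` is the project's own shell wrapper:

```python
# look up PIDs whose `ps aux` line matches a pattern, then kill them
pids = get_pid('python')   # e.g. [1234, 5678]
kill(pids)                 # kill() accepts an int PID, a str pattern, or a list of ints

# give the current process a recognisable name in `ps` / `top`
set_proc_name('word-eval-worker')
```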
 
 
spaces/Cyril666/my_abi/utils.py DELETED
@@ -1,304 +0,0 @@
1
- import logging
2
- import os
3
- import time
4
-
5
- import cv2
6
- import numpy as np
7
- import torch
8
- import yaml
9
- from matplotlib import colors
10
- from matplotlib import pyplot as plt
11
- from torch import Tensor, nn
12
- from torch.utils.data import ConcatDataset
13
-
14
- class CharsetMapper(object):
15
- """A simple class to map ids into strings.
16
-
17
- It works only when the character set is 1:1 mapping between individual
18
- characters and individual ids.
19
- """
20
-
21
- def __init__(self,
22
- filename='',
23
- max_length=30,
24
- null_char=u'\u2591'):
25
- """Creates a lookup table.
26
-
27
- Args:
28
- filename: Path to charset file which maps characters to ids.
29
- max_sequence_length: The max length of ids and string.
30
- null_char: A unicode character used to replace '<null>' character.
31
- the default value is a light shade block '░'.
32
- """
33
- self.null_char = null_char
34
- self.max_length = max_length
35
-
36
- self.label_to_char = self._read_charset(filename)
37
- self.char_to_label = dict(map(reversed, self.label_to_char.items()))
38
- self.num_classes = len(self.label_to_char)
39
-
40
- def _read_charset(self, filename):
41
- """Reads a charset definition from a tab separated text file.
42
-
43
- Args:
44
- filename: a path to the charset file.
45
-
46
- Returns:
47
- a dictionary with keys equal to character codes and values - unicode
48
- characters.
49
- """
50
- import re
51
- pattern = re.compile(r'(\d+)\t(.+)')
52
- charset = {}
53
- self.null_label = 0
54
- charset[self.null_label] = self.null_char
55
- with open(filename, 'r') as f:
56
- for i, line in enumerate(f):
57
- m = pattern.match(line)
58
- assert m, f'Incorrect charset file. line #{i}: {line}'
59
- label = int(m.group(1)) + 1
60
- char = m.group(2)
61
- charset[label] = char
62
- return charset
63
-
64
- def trim(self, text):
65
- assert isinstance(text, str)
66
- return text.replace(self.null_char, '')
67
-
68
- def get_text(self, labels, length=None, padding=True, trim=False):
69
- """ Returns a string corresponding to a sequence of character ids.
70
- """
71
- length = length if length else self.max_length
72
- labels = [l.item() if isinstance(l, Tensor) else int(l) for l in labels]
73
- if padding:
74
- labels = labels + [self.null_label] * (length-len(labels))
75
- text = ''.join([self.label_to_char[label] for label in labels])
76
- if trim: text = self.trim(text)
77
- return text
78
-
79
- def get_labels(self, text, length=None, padding=True, case_sensitive=False):
80
- """ Returns the labels of the corresponding text.
81
- """
82
- length = length if length else self.max_length
83
- if padding:
84
- text = text + self.null_char * (length - len(text))
85
- if not case_sensitive:
86
- text = text.lower()
87
- labels = [self.char_to_label[char] for char in text]
88
- return labels
89
-
90
- def pad_labels(self, labels, length=None):
91
- length = length if length else self.max_length
92
-
93
- return labels + [self.null_label] * (length - len(labels))
94
-
95
- @property
96
- def digits(self):
97
- return '0123456789'
98
-
99
- @property
100
- def digit_labels(self):
101
- return self.get_labels(self.digits, padding=False)
102
-
103
- @property
104
- def alphabets(self):
105
- all_chars = list(self.char_to_label.keys())
106
- valid_chars = []
107
- for c in all_chars:
108
- if c in 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ':
109
- valid_chars.append(c)
110
- return ''.join(valid_chars)
111
-
112
- @property
113
- def alphabet_labels(self):
114
- return self.get_labels(self.alphabets, padding=False)
115
-
116
-
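
To make the mapping concrete, here is a small usage sketch. The charset path and its contents are hypothetical; the real file is tab-separated `id<TAB>char` lines, as `_read_charset` expects:

```python
# charset_36.txt might contain lines such as "0\ta", "1\tb", ..., "35\t9"
charset = CharsetMapper('data/charset_36.txt', max_length=25)

labels = charset.get_labels('hello')          # lower-cased, padded with the null label to length 25
text = charset.get_text(labels, trim=True)    # null characters stripped again
assert text == 'hello'
```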
117
- class Timer(object):
118
- """A simple timer."""
119
- def __init__(self):
120
- self.data_time = 0.
121
- self.data_diff = 0.
122
- self.data_total_time = 0.
123
- self.data_call = 0
124
- self.running_time = 0.
125
- self.running_diff = 0.
126
- self.running_total_time = 0.
127
- self.running_call = 0
128
-
129
- def tic(self):
130
- self.start_time = time.time()
131
- self.running_time = self.start_time
132
-
133
- def toc_data(self):
134
- self.data_time = time.time()
135
- self.data_diff = self.data_time - self.running_time
136
- self.data_total_time += self.data_diff
137
- self.data_call += 1
138
-
139
- def toc_running(self):
140
- self.running_time = time.time()
141
- self.running_diff = self.running_time - self.data_time
142
- self.running_total_time += self.running_diff
143
- self.running_call += 1
144
-
145
- def total_time(self):
146
- return self.data_total_time + self.running_total_time
147
-
148
- def average_time(self):
149
- return self.average_data_time() + self.average_running_time()
150
-
151
- def average_data_time(self):
152
- return self.data_total_time / (self.data_call or 1)
153
-
154
- def average_running_time(self):
155
- return self.running_total_time / (self.running_call or 1)
156
-
157
-
158
- class Logger(object):
159
- _handle = None
160
- _root = None
161
-
162
- @staticmethod
163
- def init(output_dir, name, phase):
164
- format = '[%(asctime)s %(filename)s:%(lineno)d %(levelname)s {}] ' \
165
- '%(message)s'.format(name)
166
- logging.basicConfig(level=logging.INFO, format=format)
167
-
168
- try: os.makedirs(output_dir)
169
- except: pass
170
- config_path = os.path.join(output_dir, f'{phase}.txt')
171
- Logger._handle = logging.FileHandler(config_path)
172
- Logger._root = logging.getLogger()
173
-
174
- @staticmethod
175
- def enable_file():
176
- if Logger._handle is None or Logger._root is None:
177
- raise Exception('Invoke Logger.init() first!')
178
- Logger._root.addHandler(Logger._handle)
179
-
180
- @staticmethod
181
- def disable_file():
182
- if Logger._handle is None or Logger._root is None:
183
- raise Exception('Invoke Logger.init() first!')
184
- Logger._root.removeHandler(Logger._handle)
185
-
186
-
187
- class Config(object):
188
-
189
- def __init__(self, config_path, host=True):
190
- def __dict2attr(d, prefix=''):
191
- for k, v in d.items():
192
- if isinstance(v, dict):
193
- __dict2attr(v, f'{prefix}{k}_')
194
- else:
195
- if k == 'phase':
196
- assert v in ['train', 'test']
197
- if k == 'stage':
198
- assert v in ['pretrain-vision', 'pretrain-language',
199
- 'train-semi-super', 'train-super']
200
- self.__setattr__(f'{prefix}{k}', v)
201
-
202
- assert os.path.exists(config_path), '%s does not exists!' % config_path
203
- with open(config_path) as file:
204
- config_dict = yaml.load(file, Loader=yaml.FullLoader)
205
- with open('configs/template.yaml') as file:
206
- default_config_dict = yaml.load(file, Loader=yaml.FullLoader)
207
- __dict2attr(default_config_dict)
208
- __dict2attr(config_dict)
209
- self.global_workdir = os.path.join(self.global_workdir, self.global_name)
210
-
211
- def __getattr__(self, item):
212
- attr = self.__dict__.get(item)
213
- if attr is None:
214
- attr = dict()
215
- prefix = f'{item}_'
216
- for k, v in self.__dict__.items():
217
- if k.startswith(prefix):
218
- n = k.replace(prefix, '')
219
- attr[n] = v
220
- return attr if len(attr) > 0 else None
221
- else:
222
- return attr
223
-
224
- def __repr__(self):
225
- str = 'ModelConfig(\n'
226
- for i, (k, v) in enumerate(sorted(vars(self).items())):
227
- str += f'\t({i}): {k} = {v}\n'
228
- str += ')'
229
- return str
230
-
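
A sketch of how the flattening in `Config` behaves; the YAML keys below are illustrative, and the example assumes the repository's `configs/template.yaml` is present, since `__init__` always merges it first:

```python
# demo.yaml (hypothetical):
#   global:
#     name: demo
#     workdir: workdir
#   training:
#     epochs: 10
config = Config('configs/demo.yaml')
config.global_name   # 'demo'            -- nested keys are flattened into prefixed attributes
config.training      # {'epochs': 10}    -- __getattr__ regroups attributes sharing a prefix
```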
231
- def blend_mask(image, mask, alpha=0.5, cmap='jet', color='b', color_alpha=1.0):
232
- # normalize mask
233
- mask = (mask-mask.min()) / (mask.max() - mask.min() + np.finfo(float).eps)
234
- if mask.shape != image.shape:
235
- mask = cv2.resize(mask,(image.shape[1], image.shape[0]))
236
- # get color map
237
- color_map = plt.get_cmap(cmap)
238
- mask = color_map(mask)[:,:,:3]
239
- # convert float to uint8
240
- mask = (mask * 255).astype(dtype=np.uint8)
241
-
242
- # set the basic color
243
- basic_color = np.array(colors.to_rgb(color)) * 255
244
- basic_color = np.tile(basic_color, [image.shape[0], image.shape[1], 1])
245
- basic_color = basic_color.astype(dtype=np.uint8)
246
- # blend with basic color
247
- blended_img = cv2.addWeighted(image, color_alpha, basic_color, 1-color_alpha, 0)
248
- # blend with mask
249
- blended_img = cv2.addWeighted(blended_img, alpha, mask, 1-alpha, 0)
250
-
251
- return blended_img
252
-
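
For illustration, `blend_mask` can overlay a coarse attention or heat map on an image; the shapes below are arbitrary:

```python
import numpy as np

image = np.random.randint(0, 255, (64, 128, 3), dtype=np.uint8)  # H x W x 3 input image
attn = np.random.rand(8, 16)                                     # coarse attention / heat map
overlay = blend_mask(image, attn, alpha=0.5, cmap='jet')         # H x W x 3 uint8 visualisation
```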
253
- def onehot(label, depth, device=None):
254
- """
255
- Args:
256
- label: shape (n1, n2, ..., )
257
- depth: a scalar
258
-
259
- Returns:
260
- onehot: (n1, n2, ..., depth)
261
- """
262
- if not isinstance(label, torch.Tensor):
263
- label = torch.tensor(label, device=device)
264
- onehot = torch.zeros(label.size() + torch.Size([depth]), device=device)
265
- onehot = onehot.scatter_(-1, label.unsqueeze(-1), 1)
266
-
267
- return onehot
268
-
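
A quick check of the shapes documented above:

```python
import torch

labels = torch.tensor([2, 0, 1])   # shape (3,)
oh = onehot(labels, depth=4)       # shape (3, 4)
# oh[0] == tensor([0., 0., 1., 0.])
```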
269
- class MyDataParallel(nn.DataParallel):
270
-
271
- def gather(self, outputs, target_device):
272
- r"""
273
- Gathers tensors from different GPUs on a specified device
274
- (-1 means the CPU).
275
- """
276
- def gather_map(outputs):
277
- out = outputs[0]
278
- if isinstance(out, (str, int, float)):
279
- return out
280
- if isinstance(out, list) and isinstance(out[0], str):
281
- return [o for out in outputs for o in out]
282
- if isinstance(out, torch.Tensor):
283
- return torch.nn.parallel._functions.Gather.apply(target_device, self.dim, *outputs)
284
- if out is None:
285
- return None
286
- if isinstance(out, dict):
287
- if not all((len(out) == len(d) for d in outputs)):
288
- raise ValueError('All dicts must have the same number of keys')
289
- return type(out)(((k, gather_map([d[k] for d in outputs]))
290
- for k in out))
291
- return type(out)(map(gather_map, zip(*outputs)))
292
-
293
- # Recursive function calls like this create reference cycles.
294
- # Setting the function to None clears the refcycle.
295
- try:
296
- res = gather_map(outputs)
297
- finally:
298
- gather_map = None
299
- return res
300
-
301
-
302
- class MyConcatDataset(ConcatDataset):
303
- def __getattr__(self, k):
304
- return getattr(self.datasets[0], k)
 
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-2908e8a9.css DELETED
@@ -1 +0,0 @@
1
- .gradio-bokeh.svelte-1fe5ixn.svelte-1fe5ixn{display:flex;justify-content:center}.layout.svelte-1fe5ixn.svelte-1fe5ixn{display:flex;flex-direction:column;justify-content:center;align-items:center;width:var(--size-full);height:var(--size-full);color:var(--body-text-color)}.altair.svelte-1fe5ixn.svelte-1fe5ixn{display:flex;flex-direction:column;justify-content:center;align-items:center;width:var(--size-full);height:var(--size-full)}.caption.svelte-1fe5ixn.svelte-1fe5ixn{font-size:var(--text-sm)}.matplotlib.svelte-1fe5ixn img.svelte-1fe5ixn{object-fit:contain}