parquet-converter committed
Commit 8a9e6a6 · 1 Parent(s): f66ebde

Update parquet files (step 43 of 249)

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. spaces/0x90e/ESRGAN-MANGA/util.py +0 -6
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crackear Photoshop Uma Soluo ou um Problema? Descubra os Prs e Contras.md +0 -14
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Download Arcgis 10.8 Full Crack.md +0 -17
  4. spaces/1gistliPinn/ChatGPT4/Examples/Electude-motor-diagnosis-descargar.md +0 -36
  5. spaces/1line/AutoGPT/autogpt/commands/write_tests.py +0 -31
  6. spaces/1line/AutoGPT/autogpt/speech/macos_tts.py +0 -21
  7. spaces/1phancelerku/anime-remove-background/Enjoy Unlimited VK Music Download with the Best Tools.md +0 -127
  8. spaces/801artistry/RVC801/Applio-RVC-Fork/utils/clonerepo_experimental.py +0 -253
  9. spaces/AIConsultant/MusicGen/audiocraft/solvers/base.py +0 -631
  10. spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/__init__.py +0 -0
  11. spaces/AIGC-Audio/Make_An_Audio/wav_evaluation/models/__init__.py +0 -3
  12. spaces/AIGText/GlyphControl/transfer.py +0 -26
  13. spaces/AIGuardians/SummarizeWikipediaDocument/README.md +0 -13
  14. spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_32xb64-warmup_in1k.py +0 -4
  15. spaces/Ababababababbababa/Arabic_poem_classifier/app.py +0 -36
  16. spaces/Adapter/T2I-Adapter/experiments/README.md +0 -0
  17. spaces/AkitoP/umamusume_bert_vits2/models.py +0 -986
  18. spaces/Akshat231/super_space/app.py +0 -122
  19. spaces/AkshayKollimarala/MYAIVOICESPEECH/app.py +0 -164
  20. spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_769x769_80k_cityscapes.py +0 -9
  21. spaces/Araby/BRATArA/app.py +0 -43
  22. spaces/Arijit-hazra/my-image-captioner/app.py +0 -50
  23. spaces/Armandoliv/t5-summarize-app-scitldr/README.md +0 -12
  24. spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/app.py +0 -129
  25. spaces/Audio-AGI/WavJourney/README.md +0 -111
  26. spaces/AutoLLM/AutoAgents/autoagents/spaces/app.py +0 -153
  27. spaces/BAAI/vid2vid-zero/Dockerfile +0 -57
  28. spaces/Bala2-03-2003/MygenvioceAI/app.py +0 -164
  29. spaces/BartPoint/VoiceChange/app_multi.py +0 -469
  30. spaces/Benson/text-generation/Examples/Aparcamiento De Coches Multijugador Sudfrica Edicin Apk Descargar.md +0 -134
  31. spaces/Benson/text-generation/Examples/Asfalto Nitro 9 Leyendas Mod Apk.md +0 -76
  32. spaces/Benson/text-generation/Examples/Camioneros De Europa 3 Apk 36.6.md +0 -76
  33. spaces/Benson/text-generation/Examples/Descargar Estrellas Pelea Hack Ios.md +0 -80
  34. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/cli/cmdoptions.py +0 -1074
  35. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/markup.py +0 -246
  36. spaces/Big-Web/MMSD/env/Scripts/deactivate.bat +0 -22
  37. spaces/CAMP-ViL/Xplainer/inference.py +0 -116
  38. spaces/CVPR/LIVE/thrust/thrust/iterator/detail/iterator_facade_category.h +0 -253
  39. spaces/CVPR/WALT/mmdet/models/necks/channel_mapper.py +0 -74
  40. spaces/CVPR/regionclip-demo/datasets/README.md +0 -140
  41. spaces/CarlDennis/HYTTS/text/ger_to_ipa.py +0 -397
  42. spaces/ChrisCaviar/ControlNet-v1-1/app_normal.py +0 -104
  43. spaces/CikeyQI/meme-api/meme_generator/memes/knock/__init__.py +0 -28
  44. spaces/Clebersla/RVC_V2_Huggingface_Version/i18n.py +0 -28
  45. spaces/Codecooker/rvcapi/src/mdx.py +0 -287
  46. spaces/Cvandi/remake/scripts/generate_meta_info_pairdata.py +0 -49
  47. spaces/DEVINKofficial/Onodofthenorth-SD_PixelArt_SpriteSheet_Generator/app.py +0 -3
  48. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/GdImageFile.py +0 -97
  49. spaces/Dana19/ImageRecognition_FaceCount/README.md +0 -13
  50. spaces/DemoLou/moe-tts/text/thai.py +0 -44
spaces/0x90e/ESRGAN-MANGA/util.py DELETED
@@ -1,6 +0,0 @@
- import os
-
- def is_google_colab():
-     if os.getenv("COLAB_RELEASE_TAG"):
-         return True
-     return False
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crackear Photoshop Uma Soluo ou um Problema? Descubra os Prs e Contras.md DELETED
@@ -1,14 +0,0 @@
- <br />
- <h1>How to Crack Photoshop: Risks and Alternatives</h1>
- <p>Photoshop is a widely used software for photo and image editing. However, the software can be quite expensive for some people. Some people may resort to Photoshop cracks to use the software for free. Although this may seem like a good idea at first, it is important to know the risks associated with cracked software. In this article, we will explain what Photoshop cracks are, why they are illegal and dangerous, and what are some better alternatives to get Photoshop legally and safely.</p>
- <h2>What is a Photoshop Crack?</h2>
- <p>A Photoshop crack is a software program used to bypass the activation process of Adobe Photoshop software. Cracks are usually created by third-party individuals or organizations and are not endorsed by Adobe. Using a cracked version of Photoshop can result in various risks, such as viruses, malware, or spyware installed on your computer. Moreover, cracked software may not receive updates from the manufacturer, which means that you may miss important security patches. It is important to note that Adobe does not support the use of cracked software and may take legal action against individuals or organizations that distribute cracks.</p>
- <h2>como crackear photoshop</h2><br /><p><b><b>Download</b> &rArr; <a href="https://byltly.com/2uKvIs">https://byltly.com/2uKvIs</a></b></p><br /><br />
- <h2>Why is Using Photoshop Crack Illegal and Dangerous?</h2>
- <p>There are some reasons why using Photoshop crack is illegal and dangerous. First of all, cracked software is illegal. By using a cracked version of Photoshop, you are breaking the law and may be subject to penalties. Additionally, cracked software is usually unstable and full of viruses. This can cause your computer to crash or worse, infect other computers with malware. Furthermore, Photoshop crack may also ban you from certain websites and online forums. Many websites have policies against using cracked software and will ban users who are caught doing so. Sometimes, your IP address may be blacklisted, preventing you from accessing certain websites or online services.</p>
- <h2>What are Some Alternatives to Photoshop Crack?</h2>
- <p>If you want to use Photoshop legally and safely, there are some alternatives to Photoshop crack that you can consider. One option is to use the free trial version of Photoshop that Adobe offers on its website. The trial version allows you to use all the features of Photoshop for 7 days without any cost. This way, you can test the software before deciding whether to buy it or not. Another option is to use Adobe's subscription plan, which gives you access to Photoshop and other Adobe products for a monthly or yearly fee. The subscription plan also includes cloud storage, online services, and regular updates. You can choose from different plans depending on your needs and budget.</p>
- <h2>Conclusion</h2>
- <p>Photoshop crack is not a good idea if you want to use Photoshop for photo and image editing. Cracked software is illegal, dangerous, and unreliable. It can expose you to various risks such as viruses, malware, spyware, legal issues, and bans. Instead of using Photoshop crack, you should consider using the free trial version or the subscription plan that Adobe offers on its website. These alternatives are legal, safe, and reliable. They also provide you with the best features and performance that Photoshop can offer.</p> ddb901b051<br />
- <br />
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Download Arcgis 10.8 Full Crack.md DELETED
@@ -1,17 +0,0 @@
- <br />
- <h1>How to Free Download ArcGIS 10.8 Full Crack and Install It on Your PC</h1>
- <p>ArcGIS is a powerful software that allows you to create, analyze, and visualize geographic data. It is widely used by professionals and students in various fields such as geography, urban planning, environmental science, engineering, and more. However, ArcGIS is not a free software and you need to purchase a license to use it.</p>
- <p>But what if you want to use ArcGIS for free without paying anything? Is there a way to free download ArcGIS 10.8 full crack and install it on your PC? The answer is yes, but you need to be careful and follow some steps to avoid any malware or viruses. Here is how you can do it:</p>
- <h2>free download arcgis 10.8 full crack</h2><br /><p><b><b>Download File</b> &#10027;&#10027;&#10027; <a href="https://byltly.com/2uKxxw">https://byltly.com/2uKxxw</a></b></p><br /><br />
- <ol>
- <li>First, you need to download the ArcGIS 10.8 setup file from a reliable source. You can use this link: <a href="https://www.esri.com/en-us/industries/overview">https://www.esri.com/en-us/industries/overview</a>. Click on the download button and choose the version 10.8 from the list.</li>
- <li>Next, you need to download the ArcGIS 10.8 crack file from another source. You can use this link: <a href="https://crackdaily.com/arcgis-crack/">https://crackdaily.com/arcgis-crack/</a>. Scroll down and click on the green download button.</li>
- <li>After downloading both files, you need to disable your antivirus software temporarily. This is because the crack file may be detected as a virus by some antivirus programs, but it is actually safe to use.</li>
- <li>Then, you need to install ArcGIS 10.8 by running the setup file. Follow the instructions and complete the installation process.</li>
- <li>Next, you need to copy the crack file and paste it in the ArcGIS installation folder. The default location is C:\Program Files (x86)\ArcGIS or C:\Program Files\ArcGIS depending on your system architecture.</li>
- <li>After that, you need to run the crack file as administrator. Click on the crack button and wait for it to finish.</li>
- <li>Finally, you need to restart your computer and enjoy using ArcGIS 10.8 full crack for free.</li>
- </ol>
- <p>Note: This method is only for educational purposes and we do not recommend using cracked software. If you like ArcGIS and want to support its development, please buy a license from its official website: <a href="https://www.esri.com/en-us/home">https://www.esri.com/en-us/home</a>.</p> ddb901b051<br />
- <br />
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Electude-motor-diagnosis-descargar.md DELETED
@@ -1,36 +0,0 @@
- <h2>electude-motor-diagnosis-descargar</h2><br /><p><b><b>DOWNLOAD</b> &#10038; <a href="https://imgfil.com/2uxXXn">https://imgfil.com/2uxXXn</a></b></p><br /><br />
- <br />
- -navegador-motor-diagnosis-e-letras/
-
- Thu, 05 Apr 2018 12:16:26 +0000
-
- motor-diagnosis-descargar-navegador-motor-diagnosis-e-letras/Siemens announces UFER Enterprise Platform for the industrial internet of things
-
- Siemens has announced UFER, an enterprise software platform that combines artificial intelligence and the industrial internet of things to help companies process vast amounts of information generated by industrial sensors and machines to transform business operations in production and the environment.
-
- Siemens AG’s UFER platform will go into pilot use in July 2018. The pilot is targeted at the automotive industry, where UFER will work together with Siemens’ new PlantWise system for the machine-to-machine industrial internet. PlantWise is expected to be released commercially in 2019.
-
- UFER has four main purposes:
-
- Reducing the risk of accidents and operational disruptions by processing data from industrial sensors and machines to make it available for analysis at a moment’s notice.
-
- Organizing data from multiple sources and producing a common, actionable view of plant operations.
-
- Enabling businesses to become more productive by using data from machines, sensors and the internet of things to optimize production and service processes.
-
- Reducing costs by optimizing maintenance processes, improving the efficiency of plant operations, and capturing data that can be used to develop new products or services.
-
- UFER will be developed in collaboration with IBM, leading international manufacturing company, as well as a number of smaller partner companies in a selected industry sector.
-
- UFER’s main focus is on the automotive industry. This industry has one of the most complex IT infrastructures in the world, including:
-
- Production lines for which plant operations require close monitoring, such as those for powertrain components and fuel injection systems;
-
- Networks of machines that capture and analyze plant operations data, such as the multi-sensor networks of forklifts, cranes and conveyor belts;
-
- PlantWise, which is ready for the commercial release in 2019, will feed UFER with production data from the plant, such as production rate and inventory information, and process it to make it available for analysis at a moment’s notice.
-
- The U 4fefd39f24<br />
- <br />
- <br />
- <p></p>
spaces/1line/AutoGPT/autogpt/commands/write_tests.py DELETED
@@ -1,31 +0,0 @@
- """A module that contains a function to generate test cases for the submitted code."""
- from __future__ import annotations
-
- import json
-
- from autogpt.llm_utils import call_ai_function
-
-
- def write_tests(code: str, focus: list[str]) -> str:
-     """
-     A function that takes in code and focus topics and returns a response from create
-     chat completion api call.
-
-     Parameters:
-         focus (list): A list of suggestions around what needs to be improved.
-         code (str): Code for test cases to be generated against.
-     Returns:
-         A result string from create chat completion. Test cases for the submitted code
-         in response.
-     """
-
-     function_string = (
-         "def create_test_cases(code: str, focus: Optional[str] = None) -> str:"
-     )
-     args = [code, json.dumps(focus)]
-     description_string = (
-         "Generates test cases for the existing code, focusing on"
-         " specific areas if required."
-     )
-
-     return call_ai_function(function_string, args, description_string)
spaces/1line/AutoGPT/autogpt/speech/macos_tts.py DELETED
@@ -1,21 +0,0 @@
- """ MacOS TTS Voice. """
- import os
-
- from autogpt.speech.base import VoiceBase
-
-
- class MacOSTTS(VoiceBase):
-     """MacOS TTS Voice."""
-
-     def _setup(self) -> None:
-         pass
-
-     def _speech(self, text: str, voice_index: int = 0) -> bool:
-         """Play the given text."""
-         if voice_index == 0:
-             os.system(f'say "{text}"')
-         elif voice_index == 1:
-             os.system(f'say -v "Ava (Premium)" "{text}"')
-         else:
-             os.system(f'say -v Samantha "{text}"')
-         return True
spaces/1phancelerku/anime-remove-background/Enjoy Unlimited VK Music Download with the Best Tools.md DELETED
@@ -1,127 +0,0 @@
- <br />
- <h1>How to Download VK Music in 2023</h1>
- <p>VK, also known as VKontakte, is a popular social media platform in Russia and Europe. It offers a wide range of media content, including movies, videos, photos, and music. Many users enjoy listening to music on VK, as it has a large collection of songs from various genres and artists. However, sometimes you may want to download music from VK to your device, so you can listen to it offline, without ads, or with other players. How can you do that?</p>
- <p>In this article, we will show you four available ways to download music from VK in 2023. You can choose the one that suits your needs and preferences best. Let's get started!</p>
- <h2>download vk music</h2><br /><p><b><b>Download File</b> &#10084;&#10084;&#10084; <a href="https://jinyurl.com/2uNUpF">https://jinyurl.com/2uNUpF</a></b></p><br /><br />
- <h2>Method 1: Use TunesKit Audio Capture to Record and Download VK Music</h2>
- <p>TunesKit Audio Capture is a powerful audio recorder for Windows and Mac that can capture any sound from your computer. It can record and download VK music and other streaming audios from any programs and websites. It can also save the recordings in any format, including MP3, WAV, FLAC, AAC, etc. It preserves the original audio quality and ID3 tags of the VK music. Moreover, it supports batch recording and editing of multiple audio tracks.</p>
- <p>To use TunesKit Audio Capture to download music from VK, you need to follow these steps:</p>
- <ol>
- <li>Download and install TunesKit Audio Capture on your computer.</li>
- <li>Launch the program and check if there is a browser on the program list. If not, you can add it by drag-and-drop.</li>
- <li>Go to the VK website and find the music you want to download.</li>
- <li>Click the Format button and select MP3 or any other format you prefer.</li>
- <li>Play the music on VK and TunesKit Audio Capture will start recording it automatically.</li>
- <li>When the music ends, click the Stop button and edit the audio if needed.</li>
- <li>Save the audio file to your computer and enjoy it offline.</li>
- </ol>
- <h2>Method 2: Use VK Music Downloader Extension for Chrome to Download VK Music</h2>
- <p>VK Music Downloader is a free extension for Chrome that helps you download your music on VK.com. It saves the original name of the soundtrack and allows you to download all playlists and groups of songs at once. It has no ads and the code is open and not obfuscated. However, it does not support batch downloading of songs.</p>
- <p>To use VK Music Downloader extension for Chrome to download music from VK, you need to follow these steps:</p>
- <ol>
- <li>Add the extension to your Chrome browser from [6](https://chrome.google.com/webstore/detail/%D1%81%D0%BA%D0%B0%D1%87%D0%B0%D1%82%D1%8C-%D0%BC%D1%83%D0%B7%D1%8B%D0%BA%D1%83-%D1%81-%D0%B2%D0%BA/bgmpjmdignpongmfjpgaikghaajeidid?hl=en).</li>
- <li>Go to the VK website and find the music you want to download.</li>
- <li>Click on the green arrow icon next to the song title and select Download.</li>
- <li>Save the audio file to your computer and enjoy it offline.</li>
- </ol>
- <h2>Method 3: Use SaveFrom.net Online Service to Download VK Music</h2>
- <p>SaveFrom.net is an online service that allows you to download videos and audios from various websites, including YouTube, Facebook, Instagram, Vimeo, and VK. It supports various formats, such as MP4, MP3, WEBM, etc. It is easy to use and does not require any installation or registration.</p>
- <p>To use SaveFrom.net online service to download music from VK, you need to follow these steps:</p>
- <ol>
- <li>Go to [11](https://en
-savefrom.net/) and paste the URL of the VK music you want to download.</li>
- <li>Click on the Download button and choose the format and quality you prefer.</li>
- <li>Save the audio file to your computer and enjoy it offline.</li>
- </ol>
- <h2>Method 4: Use Music Downloader for VK Extension for Chrome to Download VK Music</h2>
- <p>Music Downloader for VK is another free extension for Chrome that enables you to download music from VK.com. It adds a download button to each song on the VK website and allows you to download multiple songs at once. It also supports downloading music from other websites, such as SoundCloud, Bandcamp, YouTube, etc. However, it may not work with some songs due to copyright issues.</p>
- <p>How to download music from VKontakte<br />
- VK music downloader Chrome extension<br />
- VK MP3 downloader online<br />
- Best VK music downloader for Windows and Mac<br />
- Download VK music to iPhone or Android<br />
- VK music downloader app for PC<br />
- Free VK music downloader software<br />
- Download VK music playlist in one click<br />
- Download VK music with original quality and ID3 tags<br />
- Download VK music in MP3, AAC, FLAC, WAV, M4A, or M4B format<br />
- Download VK music without registration or login<br />
- Download VK music with subtitles or lyrics<br />
- Download VK music videos and convert to audio<br />
- Download VK music offline and play without internet<br />
- Download VK music from private or public groups<br />
- Download VK music by genre, artist, album, or song name<br />
- Download VK music faster and safer<br />
- Download VK music legally and ethically<br />
- Download VK music without ads or malware<br />
- Download VK music with high speed and stability<br />
- Download unlimited VK music for free<br />
- Download multiple VK music tracks simultaneously<br />
- Download VK music and transfer to iTunes or Spotify<br />
- Download VK music and burn to CD or DVD<br />
- Download VK music and edit with audio editor<br />
- Download VK music and set as ringtone or alarm<br />
- Download VK music and share with friends or family<br />
- Download VK music and sync to cloud storage or devices<br />
- Download VK music and enjoy on any player or device<br />
- Download VK music and create your own playlist or library<br />
- Tips and tricks for downloading VK music easily and efficiently<br />
- Reviews and ratings of the best VK music downloaders in 2023<br />
- Comparison of different methods to download VK music online or offline<br />
- Pros and cons of various types of VK music downloaders for different needs<br />
- FAQs and solutions for downloading VK music from Vkontakte<br />
- How to download HD or 4K VK music videos from Vkontakte<br />
- How to download live or streaming VK music from Vkontakte<br />
- How to download podcasts or audiobooks from Vkontakte<br />
- How to download radio or DJ mixes from Vkontakte<br />
- How to download karaoke or instrumental tracks from Vkontakte<br />
- How to download remixes or covers from Vkontakte<br />
- How to download soundtracks or background music from Vkontakte<br />
- How to download classical or jazz music from Vkontakte<br />
- How to download rock or metal music from Vkontakte<br />
- How to download pop or dance music from Vkontakte<br />
- How to download rap or hip hop music from Vkontakte<br />
- How to download country or folk music from Vkontakte<br />
- How to download reggae or ska music from Vkontakte<br />
- How to download electronic or ambient music from Vkontakte</p>
- <p>To use Music Downloader for VK extension for Chrome to download music from VK, you need to follow these steps:</p>
- <ol>
- <li>Add the extension to your Chrome browser from [10](https://chrome.google.com/webstore/detail/music-downloader-for-vk/ahkohdihdjccebcfgjgffmpdjjknhgla?hl=en).</li>
- <li>Go to the VK website and find the music you want to download.</li>
- <li>Click on the download button next to the song title and select Download.</li>
- <li>Save the audio file to your computer and enjoy it offline.</li>
- </ol>
- <h2>Conclusion: How to Download VK Music in 2023</h2>
- <p>In conclusion, we have shown you four available ways to download music from VK in 2023. You can use TunesKit Audio Capture, VK Music Downloader, SaveFrom.net, or Music Downloader for VK to get your favorite songs from VK.com. Each method has its own advantages and disadvantages, so you can choose the one that works best for you. We recommend TunesKit Audio Capture as the best method, as it can record and download any sound from your computer with high quality and ID3 tags. It also supports batch recording and editing of multiple audio tracks.</p>
- <p>We hope this article has helped you learn how to download music from VK in 2023. If you have any questions or suggestions, please feel free to leave a comment below. Thank you for reading!</p>
- <h2>FAQs: How to Download VK Music in 2023</h2>
- <h3>Q1: Is it legal to download music from VK?</h3>
- <p>A1: It depends on the source and the purpose of downloading music from VK. If the music is uploaded by the original artist or authorized by them, then it is legal to download it for personal use. However, if the music is pirated or infringes on someone else's rights, then it is illegal to download it. You should always respect the intellectual property rights of the creators and follow the terms of service of VK.com.</p>
- <h3>Q2: How can I download music from VK on my Android phone?</h3>
- <p>A2: You can use an app called SnapTube to download music from VK on your Android phone. SnapTube is a video and music downloader that supports various websites, including YouTube, Facebook, Instagram, VK, etc. You can download SnapTube from [9](https://www.snaptubeapp.com/). To use SnapTube to download music from VK, you need to follow these steps:</p>
- <ol>
- <li>Open SnapTube and select VK from the list of supported sites.</li>
- <li>Login with your VK account and find the music you want to download.</li>
- <li>Tap on the Download button at the bottom right corner of the screen and choose MP3 or any other format you prefer.</li>
- <li>Save the audio file to your phone and enjoy it offline.</li>
- </ol>
- <h3>Q3: How can I download music from VK on my iPhone?</h3>
- <p>A3: You can use an app called Documents by Readdle to download music from VK on your iPhone. Documents by Readdle is a file manager and media player that also has a built-in browser and downloader. You can download Documents by Readdle from [8](https://apps.apple.com/us/app/documents-by-readdle/id364901807). To use Documents by Readdle to download music from VK, you need to follow these steps:</p>
- <ol>
- <li>Open Documents by Readdle and tap on the Browser icon at the bottom right corner of the screen.</li>
- <li>Go to [7](https://en-savefrom.net/) and paste the URL of the VK music you want to download.</li>
- <li>Tap on the Download button and choose MP3 or any other format you prefer.</li>
- <li>Save the audio file to your iPhone and enjoy it offline.</li>
- </ol>
- <h3>Q4: How can I transfer downloaded music from my computer to my phone?</ <h3>Q4: How can I transfer downloaded music from my computer to my phone?</h3>
- <p>A4: There are different ways to transfer downloaded music from your computer to your phone, depending on the type of your phone and the software you use. Here are some common methods:</p>
- <ul>
- <li>Use a USB cable to connect your phone to your computer and copy the music files to your phone's storage or SD card.</li>
- <li>Use a cloud service, such as Google Drive, Dropbox, or iCloud, to upload the music files from your computer and download them to your phone.</li>
- <li>Use a wireless transfer app, such as AirDroid, Shareit, or Xender, to send the music files from your computer to your phone via Wi-Fi or Bluetooth.</li>
- <li>Use a music streaming app, such as Spotify, Apple Music, or Amazon Music, to sync the music files from your computer to your phone.</li>
- </ul>
- <h3>Q5: How can I play downloaded music from VK on my phone?</h3>
- <p>A5: You can play downloaded music from VK on your phone using any music player app that supports the format of the audio files. For example, you can use VLC, MX Player, Poweramp, or Musicolet to play MP3, WAV, FLAC, or AAC files. You can also use the default music player app on your phone or the Documents by Readdle app if you downloaded the music using it.</p>
- <h4></h4></p> 401be4b1e0<br />
- <br />
- <br />
spaces/801artistry/RVC801/Applio-RVC-Fork/utils/clonerepo_experimental.py DELETED
@@ -1,253 +0,0 @@
- import os
- import subprocess
- import shutil
- from concurrent.futures import ThreadPoolExecutor, as_completed
- from tqdm.notebook import tqdm
- from pathlib import Path
- import requests
-
- def run_script():
-     def run_cmd(cmd):
-         process = subprocess.run(cmd, shell=True, check=True, text=True)
-         return process.stdout
-
-     # Change the current directory to /content/
-     os.chdir('/content/')
-     print("Changing dir to /content/")
-
-     # Your function to edit the file
-     def edit_file(file_path):
-         temp_file_path = "/tmp/temp_file.py"
-         changes_made = False
-         with open(file_path, "r") as file, open(temp_file_path, "w") as temp_file:
-             previous_line = ""
-             second_previous_line = ""
-             for line in file:
-                 new_line = line.replace("value=160", "value=128")
-                 if new_line != line:
-                     print("Replaced 'value=160' with 'value=128'")
-                     changes_made = True
-                     line = new_line
-
-                 new_line = line.replace("crepe hop length: 160", "crepe hop length: 128")
-                 if new_line != line:
-                     print("Replaced 'crepe hop length: 160' with 'crepe hop length: 128'")
-                     changes_made = True
-                     line = new_line
-
-                 new_line = line.replace("value=0.88", "value=0.75")
-                 if new_line != line:
-                     print("Replaced 'value=0.88' with 'value=0.75'")
-                     changes_made = True
-                     line = new_line
-
-                 if "label=i18n(\"输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络\")" in previous_line and "value=1," in line:
-                     new_line = line.replace("value=1,", "value=0.25,")
-                     if new_line != line:
-                         print("Replaced 'value=1,' with 'value=0.25,' based on the condition")
-                         changes_made = True
-                         line = new_line
-
-                 if "label=i18n(\"总训练轮数total_epoch\")" in previous_line and "value=20," in line:
-                     new_line = line.replace("value=20,", "value=500,")
-                     if new_line != line:
-                         print("Replaced 'value=20,' with 'value=500,' based on the condition for DEFAULT EPOCH")
-                         changes_made = True
-                         line = new_line
-
-                 if 'choices=["pm", "harvest", "dio", "crepe", "crepe-tiny", "mangio-crepe", "mangio-crepe-tiny"], # Fork Feature. Add Crepe-Tiny' in previous_line:
-                     if 'value="pm",' in line:
-                         new_line = line.replace('value="pm",', 'value="mangio-crepe",')
-                         if new_line != line:
-                             print("Replaced 'value=\"pm\",' with 'value=\"mangio-crepe\",' based on the condition")
-                             changes_made = True
-                             line = new_line
-
-                 new_line = line.replace('label=i18n("输入训练文件夹路径"), value="E:\\\\语音音频+标注\\\\米津玄师\\\\src"', 'label=i18n("输入训练文件夹路径"), value="/content/dataset/"')
-                 if new_line != line:
-                     print("Replaced 'label=i18n(\"输入训练文件夹路径\"), value=\"E:\\\\语音音频+标注\\\\米津玄师\\\\src\"' with 'label=i18n(\"输入训练文件夹路径\"), value=\"/content/dataset/\"'")
-                     changes_made = True
-                     line = new_line
-
-                 if 'label=i18n("是否仅保存最新的ckpt文件以节省硬盘空间"),' in second_previous_line:
-                     if 'value=i18n("否"),' in line:
-                         new_line = line.replace('value=i18n("否"),', 'value=i18n("是"),')
-                         if new_line != line:
-                             print("Replaced 'value=i18n(\"否\"),' with 'value=i18n(\"是\"),' based on the condition for SAVE ONLY LATEST")
-                             changes_made = True
-                             line = new_line
-
-                 if 'label=i18n("是否在每次保存时间点将最终小模型保存至weights文件夹"),' in second_previous_line:
-                     if 'value=i18n("否"),' in line:
-                         new_line = line.replace('value=i18n("否"),', 'value=i18n("是"),')
-                         if new_line != line:
-                             print("Replaced 'value=i18n(\"否\"),' with 'value=i18n(\"是\"),' based on the condition for SAVE SMALL WEIGHTS")
-                             changes_made = True
-                             line = new_line
-
-                 temp_file.write(line)
-                 second_previous_line = previous_line
-                 previous_line = line
-
-         # After finished, we replace the original file with the temp one
-         import shutil
-         shutil.move(temp_file_path, file_path)
-
-         if changes_made:
-             print("Changes made and file saved successfully.")
-         else:
-             print("No changes were needed.")
-
-     # Define the repo path
-     repo_path = '/content/Applio-RVC-Fork'
-
-     def copy_all_files_in_directory(src_dir, dest_dir):
-         # Iterate over all files in source directory
-         for item in Path(src_dir).glob('*'):
-             if item.is_file():
-                 # Copy each file to destination directory
-                 shutil.copy(item, dest_dir)
-             else:
-                 # If it's a directory, make a new directory in the destination and copy the files recursively
-                 new_dest = Path(dest_dir) / item.name
-                 new_dest.mkdir(exist_ok=True)
-                 copy_all_files_in_directory(str(item), str(new_dest))
-
-     def clone_and_copy_repo(repo_path):
-         # New repository link
-         new_repo_link = "https://github.com/IAHispano/Applio-RVC-Fork/"
-         # Temporary path to clone the repository
-         temp_repo_path = "/content/temp_Applio-RVC-Fork"
-         # New folder name
-         new_folder_name = "Applio-RVC-Fork"
-
-         # Clone the latest code from the new repository to a temporary location
-         run_cmd(f"git clone {new_repo_link} {temp_repo_path}")
-         os.chdir(temp_repo_path)
-
-         run_cmd(f"git checkout 3fa4dad3d8961e5ca2522e9e12c0b4ddb71ad402")
-         run_cmd(f"git checkout f9e606c279cb49420597519b0a83b92be81e42e4")
-         run_cmd(f"git checkout 9e305588844c5442d58add1061b29beeca89d679")
-         run_cmd(f"git checkout bf92dc1eb54b4f28d6396a4d1820a25896cc9af8")
-         run_cmd(f"git checkout c3810e197d3cb98039973b2f723edf967ecd9e61")
-         run_cmd(f"git checkout a33159efd134c2413b0afe26a76b7dc87926d2de")
-         run_cmd(f"git checkout 24e251fb62c662e39ac5cf9253cc65deb9be94ec")
-         run_cmd(f"git checkout ad5667d3017e93232dba85969cddac1322ba2902")
-         run_cmd(f"git checkout ce9715392cf52dd5a0e18e00d1b5e408f08dbf27")
-         run_cmd(f"git checkout 7c7da3f2ac68f3bd8f3ad5ca5c700f18ab9f90eb")
-         run_cmd(f"git checkout 4ac395eab101955e8960b50d772c26f592161764")
-         run_cmd(f"git checkout b15b358702294c7375761584e5276c811ffab5e8")
-         run_cmd(f"git checkout 1501793dc490982db9aca84a50647764caa66e51")
-         run_cmd(f"git checkout 21f7faf57219c75e6ba837062350391a803e9ae2")
-         run_cmd(f"git checkout b5eb689fbc409b49f065a431817f822f554cebe7")
-         run_cmd(f"git checkout 7e02fae1ebf24cb151bf6cbe787d06734aa65862")
-         run_cmd(f"git checkout 6aea5ea18ed0b9a1e03fa5d268d6bc3c616672a9")
-         run_cmd(f"git checkout f0f9b25717e59116473fb42bd7f9252cfc32b398")
-         run_cmd(f"git checkout b394de424088a81fc081224bc27338a8651ad3b2")
-         run_cmd(f"git checkout f1999406a88b80c965d2082340f5ea2bfa9ab67a")
-         run_cmd(f"git checkout d98a0fa8dc715308dfc73eac5c553b69c6ee072b")
-         run_cmd(f"git checkout d73267a415fb0eba98477afa43ef71ffd82a7157")
-         run_cmd(f"git checkout 1a03d01356ae79179e1fb8d8915dc9cc79925742")
-         run_cmd(f"git checkout 81497bb3115e92c754300c9b3992df428886a3e9")
-         run_cmd(f"git checkout c5af1f8edcf79cb70f065c0110e279e78e48caf9")
-         run_cmd(f"git checkout cdb3c90109387fa4dfa92f53c3864c71170ffc77")
-
-         # Edit the file here, before copying
-         #edit_file(f"{temp_repo_path}/infer-web.py")
-
-         # Copy all files from the cloned repository to the existing path
-         copy_all_files_in_directory(temp_repo_path, repo_path)
-         print(f"Copying all {new_folder_name} files from GitHub.")
-
-         # Change working directory back to /content/
-         os.chdir('/content/')
-         print("Changed path back to /content/")
-
-         # Remove the temporary cloned repository
-         shutil.rmtree(temp_repo_path)
-
-     # Call the function
-     clone_and_copy_repo(repo_path)
-
-     # Download the credentials file for RVC archive sheet
-     os.makedirs('/content/Applio-RVC-Fork/stats/', exist_ok=True)
-     run_cmd("wget -q https://cdn.discordapp.com/attachments/945486970883285045/1114717554481569802/peppy-generator-388800-07722f17a188.json -O /content/Applio-RVC-Fork/stats/peppy-generator-388800-07722f17a188.json")
-
-     # Forcefully delete any existing torchcrepe dependencies downloaded from an earlier run just in case
-     shutil.rmtree('/content/Applio-RVC-Fork/torchcrepe', ignore_errors=True)
-     shutil.rmtree('/content/torchcrepe', ignore_errors=True)
-
-     # Download the torchcrepe folder from the maxrmorrison/torchcrepe repository
-     run_cmd("git clone https://github.com/maxrmorrison/torchcrepe.git")
-     shutil.move('/content/torchcrepe/torchcrepe', '/content/Applio-RVC-Fork/')
-     shutil.rmtree('/content/torchcrepe', ignore_errors=True) # Delete the torchcrepe repository folder
-
-     # Change the current directory to /content/Applio-RVC-Fork
-     os.chdir('/content/Applio-RVC-Fork')
-     os.makedirs('pretrained', exist_ok=True)
-     os.makedirs('uvr5_weights', exist_ok=True)
-
- def download_file(url, filepath):
-     response = requests.get(url, stream=True)
-     response.raise_for_status()
-
-     with open(filepath, "wb") as file:
-         for chunk in response.iter_content(chunk_size=8192):
-             if chunk:
-                 file.write(chunk)
-
- def download_pretrained_models():
-     pretrained_models = {
-         "pretrained": [
-             "D40k.pth",
-             "G40k.pth",
-             "f0D40k.pth",
-             "f0G40k.pth"
-         ],
-         "pretrained_v2": [
-             "D40k.pth",
-             "G40k.pth",
-             "f0D40k.pth",
-             "f0G40k.pth",
-             "f0G48k.pth",
-             "f0D48k.pth"
-         ],
-         "uvr5_weights": [
-             "HP2-人声vocals+非人声instrumentals.pth",
-             "HP5-主旋律人声vocals+其他instrumentals.pth",
-             "VR-DeEchoNormal.pth",
-             "VR-DeEchoDeReverb.pth",
-             "VR-DeEchoAggressive.pth",
-             "HP5_only_main_vocal.pth",
-             "HP3_all_vocals.pth",
-             "HP2_all_vocals.pth"
-         ]
-     }
-     part2 = "I"
-     base_url = "https://huggingface.co/lj1995/VoiceConversionWebU" + part2 + "/resolve/main/"
-     base_path = "/content/Applio-RVC-Fork/"
-     base_pathm = base_path
-
-     # Calculate total number of files to download
-     total_files = sum(len(files) for files in pretrained_models.values()) + 1 # +1 for hubert_base.pt
-
-     with tqdm(total=total_files, desc="Downloading files") as pbar:
-         for folder, models in pretrained_models.items():
-             folder_path = os.path.join(base_path, folder)
-             os.makedirs(folder_path, exist_ok=True)
-             for model in models:
-                 url = base_url + folder + "/" + model
-                 filepath = os.path.join(folder_path, model)
-                 download_file(url, filepath)
-                 pbar.update()
-
-         # Download hubert_base.pt to the base path
-         hubert_url = base_url + "hubert_base.pt"
-         hubert_filepath = os.path.join(base_pathm, "hubert_base.pt")
-         download_file(hubert_url, hubert_filepath)
-         pbar.update()
- def clone_repository(run_download):
-     with ThreadPoolExecutor(max_workers=2) as executor:
-         executor.submit(run_script)
-         if run_download:
-             executor.submit(download_pretrained_models)
spaces/AIConsultant/MusicGen/audiocraft/solvers/base.py DELETED
@@ -1,631 +0,0 @@
1
- # Copyright (c) Meta Platforms, Inc. and affiliates.
2
- # All rights reserved.
3
- #
4
- # This source code is licensed under the license found in the
5
- # LICENSE file in the root directory of this source tree.
6
-
7
- from abc import ABC, abstractmethod
8
- from contextlib import contextmanager
9
- from pathlib import Path
10
- import typing as tp
11
-
12
- import flashy
13
- import omegaconf
14
- import torch
15
- from torch import nn
16
-
17
- from .. import optim
18
- from ..optim import fsdp
19
- from ..utils import checkpoint
20
- from ..utils.autocast import TorchAutocast
21
- from ..utils.best_state import BestStateDictManager
22
- from ..utils.deadlock import DeadlockDetect
23
- from ..utils.profiler import Profiler
24
- from ..utils.utils import copy_state, dict_from_config, model_hash, with_rank_rng
25
-
26
-
27
- class StandardSolver(ABC, flashy.BaseSolver):
28
- """Standard solver for AudioCraft.
29
-
30
- The standard solver implements a base training loop with the following stages:
31
- train, valid, evaluate and generate that are expected to be all defined for
32
- solvers in AudioCraft. It also provides a nice default management of Dora history replay,
33
- checkpoint management across epoch, and logging configuration.
34
-
35
- AudioCraft solvers must inherit from the StandardSolver and define the methods
36
- associated to each stage as well as the show, build_model and build_dataloaders methods.
37
- """
38
- def __init__(self, cfg: omegaconf.DictConfig):
39
- super().__init__()
40
- self.logger.info(f"Instantiating solver {self.__class__.__name__} for XP {self.xp.sig}")
41
- self.logger.info(f"All XP logs are stored in {self.xp.folder}")
42
- self.cfg = cfg
43
- self.device = cfg.device
44
- self.model: nn.Module
45
- self._continue_best_source_keys = ['best_state', 'fsdp_best_state']
46
- self._fsdp_modules: tp.List[fsdp.FSDP] = []
47
- self._ema_sources: nn.ModuleDict = nn.ModuleDict()
48
- self.ema: tp.Optional[optim.ModuleDictEMA] = None
49
- self.dataloaders: tp.Dict[str, torch.utils.data.DataLoader] = dict()
50
- self._log_updates = self.cfg.logging.get('log_updates', 10)
51
- if self.cfg.logging.log_tensorboard:
52
- self.init_tensorboard(**self.cfg.get('tensorboard'))
53
- if self.cfg.logging.log_wandb and self:
54
- self.init_wandb(**self.cfg.get('wandb'))
55
- # keep a copy of the best performing state for stateful objects
56
- # used for evaluation and generation stages
57
- dtype_best: tp.Optional[torch.dtype] = None
58
- if self.cfg.fsdp.use:
59
- dtype_best = getattr(torch, self.cfg.fsdp.param_dtype) # type: ignore
60
- assert isinstance(dtype_best, torch.dtype)
61
- elif self.cfg.autocast:
62
- dtype_best = getattr(torch, self.cfg.autocast_dtype) # type: ignore
63
- assert isinstance(dtype_best, torch.dtype)
64
- self.best_state: BestStateDictManager = BestStateDictManager(dtype=dtype_best)
65
- # Hacky support for keeping a copy of the full best state in rank0.
66
- self.fsdp_best_state: tp.Dict[str, tp.Any] = {}
67
- self.register_stateful('best_state', 'fsdp_best_state') # register best_state object to keep it in state_dict
68
- self._new_best_state: bool = False # should save a new checkpoint
69
- # instantiate datasets and appropriate number of updates per epoch
70
- self.build_dataloaders()
71
- if self.cfg.execute_only is None:
72
- assert 'train' in self.dataloaders, "The train dataset split must be provided."
73
- assert 'valid' in self.dataloaders, "The valid dataset split must be provided."
74
- self.train_updates_per_epoch = len(self.dataloaders['train']) if 'train' in self.dataloaders else 0
75
- if self.cfg.optim.updates_per_epoch:
76
- self.train_updates_per_epoch = self.cfg.optim.updates_per_epoch
77
- self.total_updates = self.train_updates_per_epoch * self.cfg.optim.epochs
78
- # instantiate model & exponential moving average on the model
79
- self.build_model()
80
- self.logger.info("Model hash: %s", model_hash(self.model))
81
- assert 'model' in self.stateful.sources, \
82
- "Please register the model to stateful with self.register_stateful('model') in build_model."
83
- self.profiler = Profiler(self.model, **self.cfg.profiler)
84
- self.initialize_ema()
85
- self.register_stateful('ema')
86
- assert self.ema is None or 'ema' in self.stateful.sources, \
87
- "Please register the ema to stateful with self.register_stateful('ema') in build_model."
88
- self.deadlock_detect = DeadlockDetect(**self.cfg.deadlock)
89
- # basic statistics on the trained model
90
- model_size = sum(p.numel() for p in self.model.parameters() if p.requires_grad) / 1e6
91
- # one copy of grad, one copy of momentum, one copy of denominator and model weights.
92
- # and 4 bytes for each float!
93
- mem_usage = model_size * 4 * 4 / 1000
94
- self.logger.info("Model size: %.2f M params", model_size)
95
- self.logger.info("Base memory usage, with model, grad and optim: %.2f GB", mem_usage)
96
-
97
- @property
98
- def autocast(self):
99
- """Convenient autocast (or not) using the solver configuration."""
100
- return TorchAutocast(enabled=self.cfg.autocast, device_type=self.device, dtype=self.autocast_dtype)
101
-
102
- def _get_state_source(self, name) -> flashy.state.StateDictSource:
103
- # Internal utility to get a state source from the solver
104
- return self.stateful.sources[name]
105
-
106
- @property
107
- def best_metric_name(self) -> tp.Optional[str]:
108
- """Metric name used to identify the best state. This metric should be stored in the metrics
109
- used on the stage for best state identification (most likely, `valid`). If None, then
110
- no best state is saved.
111
- """
112
- return None
113
-
114
- def register_best_state(self, *args: str):
115
- """Register state sources in `BestStateDictManager` to keep their best states along with their
116
- latest states. The best state will be used at evaluation stages instead of the latest states.
117
-
118
- Shortcut around `BestStateDictManager.register` method. You can pass any number of
119
- attribute, included nested attributes and those will be included into the checkpoints
120
- and automatically restored when `BaseSolver.restore` is called.
121
- """
122
- for name in args:
123
- state_source = self._get_state_source(name)
124
- assert name in self.stateful.sources, "Registered states in best should be registered in stateful first!"
125
- self.best_state.register(name, state_source)
126
-
127
- def register_ema(self, *args: str):
128
- """Register state sources for exponential moving average.
129
-
130
- The registered sources are used to instantiate a ModuleDictEMA instance.
131
- The ModuleDictEMA keeps a `nn.ModuleDict` module that is updated when self.ema.step() is called
132
- and swapped with the original state sources with self.swap_ema_state() method.
133
-
134
- Usage:
135
- self.register_ema('model')
136
- """
137
- assert self.ema is None, "Cannot register state source to already instantiated EMA."
138
- for name in args:
139
- self._ema_sources[name] = getattr(self, name)
140
-
141
- def wrap_with_fsdp(self, model: torch.nn.Module, *args, **kwargs):
142
- model = fsdp.wrap_with_fsdp(self.cfg.fsdp, model, *args, **kwargs)
143
- if isinstance(model, fsdp.FSDP):
144
- self._fsdp_modules.append(model)
145
- return model
146
-
147
- def update_best_state_from_stage(self, stage_name: str = 'valid'):
148
- """Update latest best state based on pending metrics of a given stage. This method relies
149
- on the `BestStateDictManager.update` method to update the best state_dict with latest weights
150
- if the registered states happen to match to the best performing setup.
151
- """
152
- if self.best_metric_name is None:
153
- # when no best metric is defined, the last state is always the best
154
- self._new_best_state = True
155
- self.logger.info("Updating best state with current state.")
156
- else:
157
- assert stage_name in self._pending_metrics, f"Metrics for stage {stage_name} not found."
158
- assert self.best_metric_name in self._pending_metrics[stage_name], \
159
- f"Best metric not found in {stage_name} metrics. Cannot register best state"
160
- current_score = self._pending_metrics[stage_name][self.best_metric_name]
161
- all_best_metric_scores = [
162
- past_metrics[stage_name][self.best_metric_name]
163
- for past_metrics in self.history
164
- ]
165
- all_best_metric_scores.append(current_score)
166
- best_score = min(all_best_metric_scores)
167
- self._new_best_state = current_score == best_score
168
- if self._new_best_state:
169
- old_best = min(all_best_metric_scores[:-1] + [float('inf')])
170
- self.logger.info(
171
- f"New best state with {self.best_metric_name}={current_score:.3f} (was {old_best:.3f})")
172
-
173
- if self._new_best_state:
174
- if self.cfg.fsdp.use:
175
- # this will give an empty state dict on all ranks but the rank 0
176
- # which will have a copy in memory of the full model.
177
- with fsdp.switch_to_full_state_dict(self._fsdp_modules):
178
- for name in self.best_state.states.keys():
179
- state_source = self._get_state_source(name)
180
- self.best_state.update(name, state_source)
181
- # we save to a different dict.
182
- self.fsdp_best_state.update(self.best_state.state_dict())
183
- # We cannot efficiently load fsdp_best_state when using FSDP,
184
- # so we have do do a second pass, with the local shards.
185
- for name in self.best_state.states.keys():
186
- state_source = self._get_state_source(name)
187
- self.best_state.update(name, state_source)
188
-
189
- def _load_new_state_dict(self, state_dict: dict) -> dict:
190
- old_states = {}
191
- for name, new_state in state_dict.items():
192
- state_source = self._get_state_source(name)
193
- old_states[name] = copy_state(state_source.state_dict())
194
- state_source.load_state_dict(new_state)
195
- return old_states
196
-
197
- @contextmanager
198
- def swap_best_state(self):
199
- self.logger.debug(f"Swapping to best state for: {', '.join(self.best_state.state_dict().keys())}")
200
- old_states = self._load_new_state_dict(self.best_state.state_dict())
201
- try:
202
- yield
203
- finally:
204
- self.logger.debug("Swapping back from best to original state")
205
- for name, old_state in old_states.items():
206
- state_source = self._get_state_source(name)
207
- state_source.load_state_dict(old_state)
208
-
209
- @contextmanager
210
- def swap_ema_state(self):
211
- if self.ema is None:
212
- yield
213
- else:
214
- ema_state_dict = self.ema.state_dict()['state']
215
- self.logger.debug(f"Swapping to EMA state for: {', '.join(ema_state_dict.keys())}")
216
- old_states = self._load_new_state_dict(ema_state_dict)
217
- try:
218
- yield
219
- finally:
220
- self.logger.debug("Swapping back from EMA state to original state")
221
- for name, old_state in old_states.items():
222
- state_source = self._get_state_source(name)
223
- state_source.load_state_dict(old_state)
224
-
225
- @property
226
- def is_training(self):
227
- return self.current_stage == 'train'
228
-
229
- def log_model_summary(self, model: nn.Module):
230
- """Log model summary, architecture and size of the model."""
231
- self.logger.info(model)
232
- mb = sum(p.numel() for p in model.parameters()) * 4 / 2 ** 20
233
- self.logger.info("Size: %.1f MB", mb)
234
-
235
- @abstractmethod
236
- def build_model(self):
237
- """Method to implement to initialize model."""
238
- ...
239
-
240
- def initialize_ema(self):
241
- """Initialize exponential moving average with the registered sources.
242
- EMA object is created if the optim.ema.model.decay value is non-null.
243
- """
244
- from .builders import get_ema
245
- self.ema = get_ema(self._ema_sources, self.cfg.optim.ema)
246
- if self.ema is None:
247
- self.logger.info('No EMA on the model.')
248
- else:
249
- assert self.cfg.optim.ema.updates > 0
250
- self.logger.info(
251
- f'Initializing EMA on the model with decay = {self.ema.decay}'
252
- f' every {self.cfg.optim.ema.updates} updates'
253
- )
254
-
255
- @abstractmethod
256
- def build_dataloaders(self):
257
- """Method to implement to initialize dataloaders."""
258
- ...
259
-
260
- @abstractmethod
261
- def show(self):
262
- """Method to log any information without running the job."""
263
- ...
264
-
265
- @property
266
- def log_updates(self):
267
- # convenient access to log updates
268
- return self._log_updates
269
-
270
- def checkpoint_path(self, **kwargs):
271
- kwargs.setdefault('use_fsdp', self.cfg.fsdp.use)
272
- return self.folder / checkpoint.checkpoint_name(**kwargs)
273
-
274
- def epoch_checkpoint_path(self, epoch: int, **kwargs):
275
- kwargs.setdefault('use_fsdp', self.cfg.fsdp.use)
276
- return self.folder / checkpoint.checkpoint_name(str(epoch), **kwargs)
277
-
278
- def checkpoint_path_with_name(self, name: str, **kwargs):
279
- kwargs.setdefault('use_fsdp', self.cfg.fsdp.use)
280
- return self.folder / checkpoint.checkpoint_name(name=name, **kwargs)
281
-
282
- def save_checkpoints(self):
283
- """Save checkpoint, optionally keeping a copy for a given epoch."""
284
- is_sharded = self.cfg.fsdp.use
285
- if not flashy.distrib.is_rank_zero() and not is_sharded:
286
- return
287
- self.logger.info("Model hash: %s", model_hash(self.model))
288
- state = self.state_dict()
289
- epoch = self.epoch - 1 # pushing metrics will increase the epoch in Flashy, so we do -1 here
290
-
291
- # save minimal state_dict as new checkpoint every X epoch
292
- if self.cfg.checkpoint.save_every:
293
- if epoch % self.cfg.checkpoint.save_every == 0:
294
- minimal_state = state
295
- if self.cfg.checkpoint.keep_every_states is not None and len(self.cfg.checkpoint.keep_every_states) > 0:
296
- minimal_state = {
297
- name: source for name, source in state.items()
298
- if name in self.cfg.checkpoint.keep_every_states
299
- }
300
- epoch_checkpoint_path = self.epoch_checkpoint_path(epoch)
301
- checkpoint.save_checkpoint(minimal_state, epoch_checkpoint_path, is_sharded)
302
-
303
- # save checkpoint as latest checkpoint
304
- if self.cfg.checkpoint.save_last:
305
- last_checkpoint_path = self.checkpoint_path()
306
- checkpoint.save_checkpoint(state, last_checkpoint_path, is_sharded)
307
-
308
- # flush any stale checkpoint to reduce disk footprint
309
- checkpoint.flush_stale_checkpoints(self.checkpoint_path())
310
-
311
- def load_from_pretrained(self, name: str) -> dict:
312
- raise NotImplementedError("Solver does not provide a way to load pretrained models.")
313
-
314
- def load_checkpoints(self, load_best: bool = False, ignore_state_keys: tp.List[str] = []) -> tp.Optional[dict]:
315
- """Load last checkpoint or the one specified in continue_from.
316
-
317
- Args:
318
- load_best (bool): Whether to load from best state dict or not.
319
- Best state dict is always used when not loading the current xp.
320
- ignore_state_keys (list of str): List of sources to ignore when loading the state, e.g. `optimizer`.
321
- Returns:
322
- state (dict, optional): The loaded state dictionary.
323
- """
324
- # load checkpoints from xp folder or cfg.continue_from
325
- is_sharded = self.cfg.fsdp.use
326
- load_from_path: tp.Optional[Path] = None
327
- checkpoint_source: tp.Optional[checkpoint.CheckpointSource] = None
328
-
329
- if load_best:
330
- self.logger.info("Trying to load state_dict from best state.")
331
-
332
- state: tp.Optional[dict] = None
333
- rank0_checkpoint_path = self.checkpoint_path(use_fsdp=False)
334
- current_checkpoint_path = self.checkpoint_path()
335
- _pretrained_prefix = '//pretrained/'
336
- continue_pretrained = (self.cfg.continue_from or '').startswith(_pretrained_prefix)
337
- if rank0_checkpoint_path.exists():
338
- self.logger.info(f"Loading existing checkpoint: {current_checkpoint_path}")
339
- load_from_path = current_checkpoint_path
340
- checkpoint.check_sharded_checkpoint(current_checkpoint_path, rank0_checkpoint_path)
341
- checkpoint_source = checkpoint.CheckpointSource.CURRENT_XP
342
- elif self.cfg.continue_from and not continue_pretrained:
343
- self.logger.info(f"Continuing from provided checkpoint: {self.cfg.continue_from}")
344
- # we're always continuing from consolidated checkpoints: self.cfg.use_fsdp and not continue_best
345
- load_from_path = checkpoint.resolve_checkpoint_path(self.cfg.continue_from, use_fsdp=False)
346
- if load_from_path is None:
347
- self.logger.error('Could not resolve the continue_from checkpoint %s', self.cfg.continue_from)
348
- raise RuntimeError(f'Could not resolve continue_from checkpoint {self.cfg.continue_from}')
349
- checkpoint_source = checkpoint.CheckpointSource.OTHER
350
-
351
- if load_from_path is not None:
352
- state = checkpoint.load_checkpoint(load_from_path, is_sharded)
353
- elif continue_pretrained:
354
- self.logger.info("Loading a pretrained model. Ignoring 'load_best' and 'ignore_state_keys' params.")
355
- state = self.load_from_pretrained(self.cfg.continue_from[len(_pretrained_prefix):])
356
- checkpoint_source = checkpoint.CheckpointSource.PRETRAINED
357
- load_best = True
358
-
359
- # checkpoints are not from the current xp, we only retrieve the best state
360
- if checkpoint_source is not None and checkpoint_source != checkpoint.CheckpointSource.CURRENT_XP:
361
- assert state is not None
362
- self.logger.info("Checkpoint source is not the current xp: Load state_dict from best state.")
363
- load_best = True
364
- state = {key: state[key] for key in self._continue_best_source_keys if key in state}
365
- # loaded checkpoints are FSDP checkpoints: we're reading the best state
366
- # from FSDP and we drop the regular best_state
367
- if 'fsdp_best_state' in state and state['fsdp_best_state']:
368
- state.pop('best_state', None)
369
- self.logger.info("... Loaded checkpoint has FSDP best state")
370
- # FSDP is enabled in the solver, if the loaded checkpoints do not have FSDP support
371
- # then we're initializing FSDP best state with the regular best state
372
- elif self.cfg.fsdp.use:
373
- if 'fsdp_best_state' not in state or not state['fsdp_best_state']:
374
- # we swap non-FSDP checkpoints best_state to FSDP-compatible best state
375
- state['fsdp_best_state'] = state.pop('best_state')
376
- self.logger.info("... Loaded checkpoint does not have FSDP best state. Use regular best state")
377
-
378
- if state is not None:
379
- if load_best:
380
- self.logger.info("Ignoring keys when loading best %r", ignore_state_keys)
381
- for key in set(ignore_state_keys):
382
- if key in state:
383
- state.pop(key)
384
- has_best_state = 'best_state' in state or 'fsdp_best_state' in state
385
- assert has_best_state, ("Trying to load best state but neither 'best_state'",
386
- " or 'fsdp_best_state' found in checkpoints.")
387
- self.load_state_dict(state)
388
-
389
- # for FSDP, let's make extra sure nothing bad happened with out of sync
390
- # checkpoints across workers.
391
- epoch = float(self.epoch)
392
- avg_epoch = flashy.distrib.average_metrics({'epoch': epoch})['epoch']
393
- if avg_epoch != epoch:
394
- raise RuntimeError(
395
- f"Inconsistent loading of checkpoints happened, our epoch is {epoch} "
396
- f"but average of epochs is {avg_epoch}, at least one gpu must have a "
397
- "different epoch number.")
398
-
399
- # on load_best, properly reinitialize state_dict, best states and ema
400
- # otherwise we load from the current xp and don't alter anything
401
- if load_best:
402
- self.logger.info("Loading state_dict from best state.")
403
- if not self.cfg.fsdp.use and self.fsdp_best_state:
404
- # loading from an FSDP checkpoint but with FSDP deactivated
405
- self.logger.info("... Loading from FSDP best state dict.")
406
- self.best_state.load_state_dict(self.fsdp_best_state)
407
-
408
- # if load_best, we permanently override the regular state_dict with the best state
409
- if self.cfg.fsdp.use:
410
- self.logger.info("FSDP is used, loading from FSDP best state.")
411
- with fsdp.switch_to_full_state_dict(self._fsdp_modules):
412
- # this might be really fragile but okay for now.
413
- self.load_state_dict(self.fsdp_best_state)
414
- else:
415
- # we permanently swap the stateful objects to their best state
416
- self._load_new_state_dict(self.best_state.state_dict())
417
-
418
- # the EMA modules should also be instantiated with best state.
419
- # the easiest way to do so is to reinitialize a new EMA with best state loaded.
420
- if self.ema is not None:
421
- self.logger.info("Re-initializing EMA from best state")
422
- self.initialize_ema()
423
-
424
- if self.cfg.fsdp.use:
425
- self.logger.info("Re-initializing best state after using FSDP best state.")
426
- for name in self.best_state.states.keys():
427
- state_source = self._get_state_source(name)
428
- self.best_state.update(name, state_source)
429
-
430
- return state
431
-
432
- def restore(self, load_best: bool = False, replay_metrics: bool = False,
433
- ignore_state_keys: tp.List[str] = []) -> bool:
434
- """Restore the status of a solver for a given xp.
435
-
436
- Args:
437
- load_best (bool): if `True`, load the best state from the checkpoint.
438
- replay_metrics (bool): if `True`, logs all the metrics from past epochs.
439
- ignore_state_keys (list of str): list of sources to ignore when loading the state, e.g. `optimizer`.
440
- """
441
- self.logger.info("Restoring weights and history.")
442
- restored_checkpoints = self.load_checkpoints(load_best, ignore_state_keys)
443
-
444
- self.logger.info("Model hash: %s", model_hash(self.model))
445
-
446
- if replay_metrics and len(self.history) > 0:
447
- self.logger.info("Replaying past metrics...")
448
- for epoch, stages in enumerate(self.history):
449
- for stage_name, metrics in stages.items():
450
- # We manually log the metrics summary to the result logger
451
- # as we don't want to add them to the pending metrics
452
- self.result_logger._log_summary(stage_name, metrics, step=epoch + 1, step_name='epoch',
453
- formatter=self.get_formatter(stage_name))
454
- return restored_checkpoints is not None
455
-
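A hedged usage sketch of `restore` (a constructed solver instance is assumed, so the calls are shown as comments):

# restored = solver.restore(load_best=True, replay_metrics=True,
#                           ignore_state_keys=['optimizer'])
# if not restored:
#     ...  # no checkpoint resolved; training starts from scratch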
456
- def commit(self, save_checkpoints: bool = True):
457
- """Commit metrics to dora and save checkpoints at the end of an epoch."""
458
- # we override commit to introduce more complex checkpoint saving behaviors
459
- self.history.append(self._pending_metrics) # This will increase self.epoch
460
- if save_checkpoints:
461
- self.save_checkpoints()
462
- self._start_epoch()
463
- if flashy.distrib.is_rank_zero():
464
- self.xp.link.update_history(self.history)
465
-
466
- def run_epoch(self):
467
- """Run a single epoch with all stages.
468
-
469
- Metrics for a given stage are stored in _pending_metrics and committed by the solver afterwards.
470
- Children solvers can extend this method with custom behavior, e.g.:
471
-
472
- def run_epoch(self):
473
- ... # custom code
474
- super().run_epoch()
475
- ... # custom code
476
- """
477
- self.run_stage('train', self.train)
478
- with torch.no_grad():
479
- with self.swap_ema_state():
480
- self.run_stage('valid', self.valid)
481
- # the best state is updated with EMA states if available
482
- self.update_best_state_from_stage('valid')
483
- with self.swap_best_state():
484
- if self.should_run_stage('evaluate'):
485
- self.run_stage('evaluate', self.evaluate)
486
- if self.should_run_stage('generate'):
487
- self.run_stage('generate', with_rank_rng()(self.generate))
488
-
489
- def run(self):
490
- """Training loop."""
491
- assert len(self.state_dict()) > 0
492
- self.restore(replay_metrics=True) # load checkpoint and replay history
493
- self.log_hyperparams(dict_from_config(self.cfg))
494
- for epoch in range(self.epoch, self.cfg.optim.epochs + 1):
495
- if self.should_stop_training():
496
- return
497
- self.run_epoch()
498
- # Commit will send the metrics to Dora and save checkpoints by default.
499
- self.commit()
500
-
501
- def should_stop_training(self) -> bool:
502
- """Check whether we should stop training or not."""
503
- return self.epoch > self.cfg.optim.epochs
504
-
505
- def should_run_stage(self, stage_name) -> bool:
506
- """Check whether we want to run the specified stages."""
507
- stage_every = self.cfg[stage_name].get('every', None)
508
- is_last_epoch = self.epoch == self.cfg.optim.epochs
509
- is_epoch_every = (stage_every and self.epoch % stage_every == 0)
510
- return is_last_epoch or is_epoch_every
511
-
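A small sketch of this scheduling rule, assuming a hypothetical config with evaluate.every = 10 and optim.epochs = 100:

epochs, every = 100, 10
run_at = [e for e in range(1, epochs + 1) if e == epochs or (every and e % every == 0)]
# -> [10, 20, ..., 100]; the last epoch always runs the stage, even when 'every' is unset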
512
- @abstractmethod
513
- def run_step(self, idx: int, batch: tp.Any, metrics: dict):
514
- """Perform one training or valid step on a given batch."""
515
- ...
516
-
517
- def common_train_valid(self, dataset_split: str, **kwargs: tp.Any):
518
- """Common logic for train and valid stages."""
519
- self.model.train(self.is_training)
520
-
521
- loader = self.dataloaders[dataset_split]
522
- # get a different order for distributed training, otherwise this will get ignored
523
- if flashy.distrib.world_size() > 1 \
524
- and isinstance(loader.sampler, torch.utils.data.distributed.DistributedSampler):
525
- loader.sampler.set_epoch(self.epoch)
526
- updates_per_epoch = self.train_updates_per_epoch if self.is_training else len(loader)
527
- if self.cfg.benchmark_no_load:
528
- self.logger.warning("Fake loading for benchmarking: re-using first batch")
529
- batch = next(iter(loader))
530
- loader = [batch] * updates_per_epoch # type: ignore
531
- lp = self.log_progress(self.current_stage, loader, total=updates_per_epoch, updates=self.log_updates)
532
- average = flashy.averager() # epoch wise average
533
- instant_average = flashy.averager() # average between two logging
534
- metrics: dict = {}
535
-
536
- with self.profiler, self.deadlock_detect: # profiler will only run for the first 20 updates.
537
- for idx, batch in enumerate(lp):
538
- self.deadlock_detect.update('batch')
539
- if idx >= updates_per_epoch:
540
- break
541
- metrics = {}
542
- metrics = self.run_step(idx, batch, metrics)
543
- self.deadlock_detect.update('step')
544
- # run EMA step
545
- if self.ema is not None and self.is_training and (idx + 1) % self.cfg.optim.ema.updates == 0:
546
- self.logger.debug("EMA model step")
547
- self.ema.step()
548
- self.deadlock_detect.update('ema')
549
- self.profiler.step()
550
- instant_metrics = instant_average(metrics)
551
- if lp.update(**instant_metrics):
552
- instant_average = flashy.averager() # reset averager between two logging
553
- metrics = average(metrics) # epoch wise average
554
- self.deadlock_detect.update('end_batch')
555
-
556
- metrics = flashy.distrib.average_metrics(metrics, updates_per_epoch)
557
- return metrics
558
-
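A toy sketch of the two-averager pattern above (flashy.averager returns a callable that keeps a running mean of the metric dicts it is fed, as in the loop body):

import flashy

epoch_avg = flashy.averager()
instant_avg = flashy.averager()
for step, loss in enumerate([4.0, 2.0, 3.0, 1.0]):
    logged = instant_avg({'loss': loss})        # mean since the last log line
    epoch_metrics = epoch_avg({'loss': loss})   # mean over the whole epoch
    if (step + 1) % 2 == 0:                     # pretend we log every 2 updates
        print(logged['loss'])                   # 3.0, then 2.0
        instant_avg = flashy.averager()         # reset, as done after lp.update() above
print(epoch_metrics['loss'])                    # 2.5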
559
- def train(self):
560
- """Train stage."""
561
- return self.common_train_valid('train')
562
-
563
- def valid(self):
564
- """Valid stage."""
565
- return self.common_train_valid('valid')
566
-
567
- @abstractmethod
568
- def evaluate(self):
569
- """Evaluate stage."""
570
- ...
571
-
572
- @abstractmethod
573
- def generate(self):
574
- """Generate stage."""
575
- ...
576
-
577
- def run_one_stage(self, stage_name: str):
578
- """Run only the specified stage.
579
- This method is useful to only generate samples from a trained experiment
580
- or rerun the validation or evaluation stages.
581
- """
582
- fn = {
583
- 'generate': with_rank_rng()(self.generate),
584
- 'evaluate': self.evaluate,
585
- 'valid': self.valid,
586
- }
587
- if stage_name not in fn:
588
- raise ValueError(f'Trying to run stage {stage_name} is not supported.')
589
- assert len(self.state_dict()) > 0
590
- self._start_epoch()
591
- with torch.no_grad(), self.swap_best_state():
592
- self.run_stage(stage_name, fn[stage_name])
593
- if not self.cfg.execute_inplace:
594
- self.commit(save_checkpoints=False)
595
-
596
- @staticmethod
597
- def get_eval_solver_from_sig(sig: str, dtype: tp.Optional[str] = None,
598
- device: tp.Optional[str] = None, autocast: bool = True,
599
- batch_size: tp.Optional[int] = None,
600
- override_cfg: tp.Optional[tp.Union[dict, omegaconf.DictConfig]] = None,
601
- **kwargs):
602
- """Mostly a convenience function around audiocraft.train.get_solver_from_sig,
603
- populating all the proper parameters, deactivating EMA and FSDP, and loading the best state,
604
- basically all you need to get a solver ready to "play" with in single GPU mode
605
- and with minimal memory overhead.
606
-
607
- Args:
608
- sig (str): signature to load.
609
- dtype (str or None): potential dtype, as a string, e.g. 'float16'.
610
- device (str or None): potential device, as a string, e.g. 'cuda'.
- autocast (bool): whether to run inference with autocast enabled.
- batch_size (int or None): optional batch size to override the dataset config with.
611
- override_cfg (dict or omegaconf.DictConfig or None): optional extra config overrides merged on top of the defaults above.
612
- """
613
- from audiocraft import train
614
- our_override_cfg: tp.Dict[str, tp.Any] = {'optim': {'ema': {'use': False}}}
615
- our_override_cfg['autocast'] = autocast
616
- if dtype is not None:
617
- our_override_cfg['dtype'] = dtype
618
- if device is not None:
619
- our_override_cfg['device'] = device
620
- if batch_size is not None:
621
- our_override_cfg['dataset'] = {'batch_size': batch_size}
622
- if override_cfg is None:
623
- override_cfg = {}
624
- override_cfg = omegaconf.OmegaConf.merge(
625
- omegaconf.DictConfig(override_cfg), omegaconf.DictConfig(our_override_cfg)) # type: ignore
626
- solver = train.get_solver_from_sig(
627
- sig, override_cfg=override_cfg,
628
- load_best=True, disable_fsdp=True,
629
- ignore_state_keys=['optimizer', 'ema'], **kwargs)
630
- solver.model.eval()
631
- return solver
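A hedged usage sketch of this helper (the signature string is a placeholder, and the StandardSolver class name is assumed from context):

from audiocraft.solvers.base import StandardSolver  # class name assumed

solver = StandardSolver.get_eval_solver_from_sig(
    'a1b2c3d4',                       # placeholder dora XP signature
    dtype='float16', device='cuda', batch_size=4)
# EMA and FSDP are disabled, the best state is loaded, and solver.model is in eval mode.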
 
spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/__init__.py DELETED
File without changes
spaces/AIGC-Audio/Make_An_Audio/wav_evaluation/models/__init__.py DELETED
@@ -1,3 +0,0 @@
1
- from . import clap
2
- from . import audio
3
- from . import utils
 
spaces/AIGText/GlyphControl/transfer.py DELETED
@@ -1,26 +0,0 @@
1
- from omegaconf import OmegaConf
2
- from scripts.rendertext_tool import Render_Text, load_model_from_config
3
- import torch
4
-
5
- # cfg = OmegaConf.load("other_configs/config_ema.yaml")
6
- # model = load_model_from_config(cfg, "model_states.pt", verbose=True)
7
- # model = load_model_from_config(cfg, "mp_rank_00_model_states.pt", verbose=True)
8
-
9
- cfg = OmegaConf.load("other_configs/config_ema_unlock.yaml")
10
- epoch_idx = 39
11
- model = load_model_from_config(cfg, "epoch={:0>6d}.ckpt".format(epoch_idx), verbose=True)
12
-
13
- from pytorch_lightning.callbacks import ModelCheckpoint
14
- with model.ema_scope("store ema weights"):
15
- model_sd = model.state_dict()
16
- store_sd = {}
17
- for key in model_sd:
18
- if "ema" in key:
19
- continue
20
- store_sd[key] = model_sd[key]
21
- file_content = {
22
- 'state_dict': store_sd
23
- }
24
- torch.save(file_content, f"textcaps5K_epoch_{epoch_idx+1}_model_wo_ema.ckpt")
25
- print("has stored the transfered ckpt.")
26
- print("trial ends!")
 
spaces/AIGuardians/SummarizeWikipediaDocument/README.md DELETED
@@ -1,13 +0,0 @@
1
- ---
2
- title: Summaraize
3
- emoji: 🏢
4
- colorFrom: green
5
- colorTo: yellow
6
- sdk: gradio
7
- sdk_version: 3.12.0
8
- app_file: app.py
9
- pinned: false
10
- license: apache-2.0
11
- ---
12
-
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_32xb64-warmup_in1k.py DELETED
@@ -1,4 +0,0 @@
1
- _base_ = [
2
- '../_base_/models/resnet50.py', '../_base_/datasets/imagenet_bs64.py',
3
- '../_base_/schedules/imagenet_bs2048.py', '../_base_/default_runtime.py'
4
- ]
 
spaces/Ababababababbababa/Arabic_poem_classifier/app.py DELETED
@@ -1,36 +0,0 @@
1
- import gradio as gr
2
-
3
- description = "التعرف على خاصيات البيت الشعري"
4
- title = """هذا البرنامج يقوم بالتعرف على مختلف خاصيات البيت من الشعر.
5
- يمكنكم إختيار الخاصية من بين:
6
- - التعرف على البحر
7
- - التعرف على الروي
8
- - التعرف على الموضوع"""
9
-
10
- examples = [["سَلو قَلبي غَداةَ سَلا وَثابا لَعَلَّ عَلى الجَمالِ لَهُ عِتابا"], ["قفا نبك من ذِكرى حبيب ومنزلِ بسِقطِ اللِّوى بينَ الدَّخول فحَوْملِ"]]
11
-
12
-
13
- meter = gr.Interface.load("huggingface/Yah216/Arabic_poem_meter_3",
14
- description="من فضلك، أدخل البيت الشعري الذي تود التعرف عليه",
15
- examples=examples, title = "التعرف على البحر",
16
- inputs = gr.inputs.Textbox(lines = 3, label = "البيت")
17
-
18
- )
19
- rawiy = gr.Interface.load("huggingface/Yah216/Poem_Qafiyah_Detection",
20
- title ="التعرف على الروي",
21
- examples=examples,
22
- description="من فضلك، أدخل البيت الشعري الذي تود التعرف عليه",
23
- inputs = gr.inputs.Textbox(lines = 3, label = "البيت")
24
-
25
- )
26
- subject = gr.Interface.load(
27
- "huggingface/zenkri/autotrain-Arabic_Poetry_by_Subject-920730230",
28
- title="التعرف على الموضوع",
29
- examples=examples,
30
- description="من فضلك، أدخل البيت الشعري الذي تود التعرف عليه",
31
- inputs = gr.inputs.Textbox(lines = 3, label = "البيت")
32
-
33
- )
34
- demo = gr.TabbedInterface([meter, rawiy, subject], ["التعرف على البحر","التعرف على الروي","التعرف على الموضوع"])
35
- demo.launch()
36
-
 
spaces/Adapter/T2I-Adapter/experiments/README.md DELETED
File without changes
spaces/AkitoP/umamusume_bert_vits2/models.py DELETED
@@ -1,986 +0,0 @@
1
- import math
2
- import torch
3
- from torch import nn
4
- from torch.nn import functional as F
5
-
6
- import commons
7
- import modules
8
- import attentions
9
- import monotonic_align
10
-
11
- from torch.nn import Conv1d, ConvTranspose1d, Conv2d
12
- from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
13
-
14
- from commons import init_weights, get_padding
15
- from text import symbols, num_tones, num_languages
16
-
17
-
18
- class DurationDiscriminator(nn.Module): # vits2
19
- def __init__(
20
- self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0
21
- ):
22
- super().__init__()
23
-
24
- self.in_channels = in_channels
25
- self.filter_channels = filter_channels
26
- self.kernel_size = kernel_size
27
- self.p_dropout = p_dropout
28
- self.gin_channels = gin_channels
29
-
30
- self.drop = nn.Dropout(p_dropout)
31
- self.conv_1 = nn.Conv1d(
32
- in_channels, filter_channels, kernel_size, padding=kernel_size // 2
33
- )
34
- self.norm_1 = modules.LayerNorm(filter_channels)
35
- self.conv_2 = nn.Conv1d(
36
- filter_channels, filter_channels, kernel_size, padding=kernel_size // 2
37
- )
38
- self.norm_2 = modules.LayerNorm(filter_channels)
39
- self.dur_proj = nn.Conv1d(1, filter_channels, 1)
40
-
41
- self.pre_out_conv_1 = nn.Conv1d(
42
- 2 * filter_channels, filter_channels, kernel_size, padding=kernel_size // 2
43
- )
44
- self.pre_out_norm_1 = modules.LayerNorm(filter_channels)
45
- self.pre_out_conv_2 = nn.Conv1d(
46
- filter_channels, filter_channels, kernel_size, padding=kernel_size // 2
47
- )
48
- self.pre_out_norm_2 = modules.LayerNorm(filter_channels)
49
-
50
- if gin_channels != 0:
51
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
52
-
53
- self.output_layer = nn.Sequential(nn.Linear(filter_channels, 1), nn.Sigmoid())
54
-
55
- def forward_probability(self, x, x_mask, dur, g=None):
56
- dur = self.dur_proj(dur)
57
- x = torch.cat([x, dur], dim=1)
58
- x = self.pre_out_conv_1(x * x_mask)
59
- x = torch.relu(x)
60
- x = self.pre_out_norm_1(x)
61
- x = self.drop(x)
62
- x = self.pre_out_conv_2(x * x_mask)
63
- x = torch.relu(x)
64
- x = self.pre_out_norm_2(x)
65
- x = self.drop(x)
66
- x = x * x_mask
67
- x = x.transpose(1, 2)
68
- output_prob = self.output_layer(x)
69
- return output_prob
70
-
71
- def forward(self, x, x_mask, dur_r, dur_hat, g=None):
72
- x = torch.detach(x)
73
- if g is not None:
74
- g = torch.detach(g)
75
- x = x + self.cond(g)
76
- x = self.conv_1(x * x_mask)
77
- x = torch.relu(x)
78
- x = self.norm_1(x)
79
- x = self.drop(x)
80
- x = self.conv_2(x * x_mask)
81
- x = torch.relu(x)
82
- x = self.norm_2(x)
83
- x = self.drop(x)
84
-
85
- output_probs = []
86
- for dur in [dur_r, dur_hat]:
87
- output_prob = self.forward_probability(x, x_mask, dur, g)
88
- output_probs.append(output_prob)
89
-
90
- return output_probs
91
-
92
-
93
- class TransformerCouplingBlock(nn.Module):
94
- def __init__(
95
- self,
96
- channels,
97
- hidden_channels,
98
- filter_channels,
99
- n_heads,
100
- n_layers,
101
- kernel_size,
102
- p_dropout,
103
- n_flows=4,
104
- gin_channels=0,
105
- share_parameter=False,
106
- ):
107
- super().__init__()
108
- self.channels = channels
109
- self.hidden_channels = hidden_channels
110
- self.kernel_size = kernel_size
111
- self.n_layers = n_layers
112
- self.n_flows = n_flows
113
- self.gin_channels = gin_channels
114
-
115
- self.flows = nn.ModuleList()
116
-
117
- self.wn = (
118
- attentions.FFT(
119
- hidden_channels,
120
- filter_channels,
121
- n_heads,
122
- n_layers,
123
- kernel_size,
124
- p_dropout,
125
- isflow=True,
126
- gin_channels=self.gin_channels,
127
- )
128
- if share_parameter
129
- else None
130
- )
131
-
132
- for i in range(n_flows):
133
- self.flows.append(
134
- modules.TransformerCouplingLayer(
135
- channels,
136
- hidden_channels,
137
- kernel_size,
138
- n_layers,
139
- n_heads,
140
- p_dropout,
141
- filter_channels,
142
- mean_only=True,
143
- wn_sharing_parameter=self.wn,
144
- gin_channels=self.gin_channels,
145
- )
146
- )
147
- self.flows.append(modules.Flip())
148
-
149
- def forward(self, x, x_mask, g=None, reverse=False):
150
- if not reverse:
151
- for flow in self.flows:
152
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
153
- else:
154
- for flow in reversed(self.flows):
155
- x = flow(x, x_mask, g=g, reverse=reverse)
156
- return x
157
-
158
-
159
- class StochasticDurationPredictor(nn.Module):
160
- def __init__(
161
- self,
162
- in_channels,
163
- filter_channels,
164
- kernel_size,
165
- p_dropout,
166
- n_flows=4,
167
- gin_channels=0,
168
- ):
169
- super().__init__()
170
- filter_channels = in_channels # it needs to be removed from future version.
171
- self.in_channels = in_channels
172
- self.filter_channels = filter_channels
173
- self.kernel_size = kernel_size
174
- self.p_dropout = p_dropout
175
- self.n_flows = n_flows
176
- self.gin_channels = gin_channels
177
-
178
- self.log_flow = modules.Log()
179
- self.flows = nn.ModuleList()
180
- self.flows.append(modules.ElementwiseAffine(2))
181
- for i in range(n_flows):
182
- self.flows.append(
183
- modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)
184
- )
185
- self.flows.append(modules.Flip())
186
-
187
- self.post_pre = nn.Conv1d(1, filter_channels, 1)
188
- self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
189
- self.post_convs = modules.DDSConv(
190
- filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout
191
- )
192
- self.post_flows = nn.ModuleList()
193
- self.post_flows.append(modules.ElementwiseAffine(2))
194
- for i in range(4):
195
- self.post_flows.append(
196
- modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)
197
- )
198
- self.post_flows.append(modules.Flip())
199
-
200
- self.pre = nn.Conv1d(in_channels, filter_channels, 1)
201
- self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
202
- self.convs = modules.DDSConv(
203
- filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout
204
- )
205
- if gin_channels != 0:
206
- self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
207
-
208
- def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
209
- x = torch.detach(x)
210
- x = self.pre(x)
211
- if g is not None:
212
- g = torch.detach(g)
213
- x = x + self.cond(g)
214
- x = self.convs(x, x_mask)
215
- x = self.proj(x) * x_mask
216
-
217
- if not reverse:
218
- flows = self.flows
219
- assert w is not None
220
-
221
- logdet_tot_q = 0
222
- h_w = self.post_pre(w)
223
- h_w = self.post_convs(h_w, x_mask)
224
- h_w = self.post_proj(h_w) * x_mask
225
- e_q = (
226
- torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype)
227
- * x_mask
228
- )
229
- z_q = e_q
230
- for flow in self.post_flows:
231
- z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
232
- logdet_tot_q += logdet_q
233
- z_u, z1 = torch.split(z_q, [1, 1], 1)
234
- u = torch.sigmoid(z_u) * x_mask
235
- z0 = (w - u) * x_mask
236
- logdet_tot_q += torch.sum(
237
- (F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2]
238
- )
239
- logq = (
240
- torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q**2)) * x_mask, [1, 2])
241
- - logdet_tot_q
242
- )
243
-
244
- logdet_tot = 0
245
- z0, logdet = self.log_flow(z0, x_mask)
246
- logdet_tot += logdet
247
- z = torch.cat([z0, z1], 1)
248
- for flow in flows:
249
- z, logdet = flow(z, x_mask, g=x, reverse=reverse)
250
- logdet_tot = logdet_tot + logdet
251
- nll = (
252
- torch.sum(0.5 * (math.log(2 * math.pi) + (z**2)) * x_mask, [1, 2])
253
- - logdet_tot
254
- )
255
- return nll + logq # [b]
256
- else:
257
- flows = list(reversed(self.flows))
258
- flows = flows[:-2] + [flows[-1]] # remove a useless vflow
259
- z = (
260
- torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype)
261
- * noise_scale
262
- )
263
- for flow in flows:
264
- z = flow(z, x_mask, g=x, reverse=reverse)
265
- z0, z1 = torch.split(z, [1, 1], 1)
266
- logw = z0
267
- return logw
268
-
269
-
270
- class DurationPredictor(nn.Module):
271
- def __init__(
272
- self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0
273
- ):
274
- super().__init__()
275
-
276
- self.in_channels = in_channels
277
- self.filter_channels = filter_channels
278
- self.kernel_size = kernel_size
279
- self.p_dropout = p_dropout
280
- self.gin_channels = gin_channels
281
-
282
- self.drop = nn.Dropout(p_dropout)
283
- self.conv_1 = nn.Conv1d(
284
- in_channels, filter_channels, kernel_size, padding=kernel_size // 2
285
- )
286
- self.norm_1 = modules.LayerNorm(filter_channels)
287
- self.conv_2 = nn.Conv1d(
288
- filter_channels, filter_channels, kernel_size, padding=kernel_size // 2
289
- )
290
- self.norm_2 = modules.LayerNorm(filter_channels)
291
- self.proj = nn.Conv1d(filter_channels, 1, 1)
292
-
293
- if gin_channels != 0:
294
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
295
-
296
- def forward(self, x, x_mask, g=None):
297
- x = torch.detach(x)
298
- if g is not None:
299
- g = torch.detach(g)
300
- x = x + self.cond(g)
301
- x = self.conv_1(x * x_mask)
302
- x = torch.relu(x)
303
- x = self.norm_1(x)
304
- x = self.drop(x)
305
- x = self.conv_2(x * x_mask)
306
- x = torch.relu(x)
307
- x = self.norm_2(x)
308
- x = self.drop(x)
309
- x = self.proj(x * x_mask)
310
- return x * x_mask
311
-
312
-
313
- class TextEncoder(nn.Module):
314
- def __init__(
315
- self,
316
- n_vocab,
317
- out_channels,
318
- hidden_channels,
319
- filter_channels,
320
- n_heads,
321
- n_layers,
322
- kernel_size,
323
- p_dropout,
324
- gin_channels=0,
325
- ):
326
- super().__init__()
327
- self.n_vocab = n_vocab
328
- self.out_channels = out_channels
329
- self.hidden_channels = hidden_channels
330
- self.filter_channels = filter_channels
331
- self.n_heads = n_heads
332
- self.n_layers = n_layers
333
- self.kernel_size = kernel_size
334
- self.p_dropout = p_dropout
335
- self.gin_channels = gin_channels
336
- self.emb = nn.Embedding(len(symbols), hidden_channels)
337
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5)
338
- self.tone_emb = nn.Embedding(num_tones, hidden_channels)
339
- nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels**-0.5)
340
- self.language_emb = nn.Embedding(num_languages, hidden_channels)
341
- nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels**-0.5)
342
- self.bert_proj = nn.Conv1d(1024, hidden_channels, 1)
343
- self.ja_bert_proj = nn.Conv1d(768, hidden_channels, 1)
344
-
345
- self.encoder = attentions.Encoder(
346
- hidden_channels,
347
- filter_channels,
348
- n_heads,
349
- n_layers,
350
- kernel_size,
351
- p_dropout,
352
- gin_channels=self.gin_channels,
353
- )
354
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
355
-
356
- def forward(self, x, x_lengths, tone, language, bert, ja_bert, g=None):
357
- bert_emb = self.bert_proj(bert).transpose(1, 2)
358
- ja_bert_emb = self.ja_bert_proj(ja_bert).transpose(1, 2)
359
- x = (
360
- self.emb(x)
361
- + self.tone_emb(tone)
362
- + self.language_emb(language)
363
- + bert_emb
364
- + ja_bert_emb
365
- ) * math.sqrt(
366
- self.hidden_channels
367
- ) # [b, t, h]
368
- x = torch.transpose(x, 1, -1) # [b, h, t]
369
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
370
- x.dtype
371
- )
372
-
373
- x = self.encoder(x * x_mask, x_mask, g=g)
374
- stats = self.proj(x) * x_mask
375
-
376
- m, logs = torch.split(stats, self.out_channels, dim=1)
377
- return x, m, logs, x_mask
378
-
379
-
380
- class ResidualCouplingBlock(nn.Module):
381
- def __init__(
382
- self,
383
- channels,
384
- hidden_channels,
385
- kernel_size,
386
- dilation_rate,
387
- n_layers,
388
- n_flows=4,
389
- gin_channels=0,
390
- ):
391
- super().__init__()
392
- self.channels = channels
393
- self.hidden_channels = hidden_channels
394
- self.kernel_size = kernel_size
395
- self.dilation_rate = dilation_rate
396
- self.n_layers = n_layers
397
- self.n_flows = n_flows
398
- self.gin_channels = gin_channels
399
-
400
- self.flows = nn.ModuleList()
401
- for i in range(n_flows):
402
- self.flows.append(
403
- modules.ResidualCouplingLayer(
404
- channels,
405
- hidden_channels,
406
- kernel_size,
407
- dilation_rate,
408
- n_layers,
409
- gin_channels=gin_channels,
410
- mean_only=True,
411
- )
412
- )
413
- self.flows.append(modules.Flip())
414
-
415
- def forward(self, x, x_mask, g=None, reverse=False):
416
- if not reverse:
417
- for flow in self.flows:
418
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
419
- else:
420
- for flow in reversed(self.flows):
421
- x = flow(x, x_mask, g=g, reverse=reverse)
422
- return x
423
-
424
-
425
- class PosteriorEncoder(nn.Module):
426
- def __init__(
427
- self,
428
- in_channels,
429
- out_channels,
430
- hidden_channels,
431
- kernel_size,
432
- dilation_rate,
433
- n_layers,
434
- gin_channels=0,
435
- ):
436
- super().__init__()
437
- self.in_channels = in_channels
438
- self.out_channels = out_channels
439
- self.hidden_channels = hidden_channels
440
- self.kernel_size = kernel_size
441
- self.dilation_rate = dilation_rate
442
- self.n_layers = n_layers
443
- self.gin_channels = gin_channels
444
-
445
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
446
- self.enc = modules.WN(
447
- hidden_channels,
448
- kernel_size,
449
- dilation_rate,
450
- n_layers,
451
- gin_channels=gin_channels,
452
- )
453
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
454
-
455
- def forward(self, x, x_lengths, g=None):
456
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
457
- x.dtype
458
- )
459
- x = self.pre(x) * x_mask
460
- x = self.enc(x, x_mask, g=g)
461
- stats = self.proj(x) * x_mask
462
- m, logs = torch.split(stats, self.out_channels, dim=1)
463
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
464
- return z, m, logs, x_mask
465
-
466
-
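The sampling line above is the Gaussian reparameterization trick; a minimal standalone sketch:

import torch

m, logs = torch.zeros(2, 4), torch.full((2, 4), -1.0)
z = m + torch.randn_like(m) * torch.exp(logs)  # z ~ N(m, exp(logs)**2), differentiable in m and logs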
467
- class Generator(torch.nn.Module):
468
- def __init__(
469
- self,
470
- initial_channel,
471
- resblock,
472
- resblock_kernel_sizes,
473
- resblock_dilation_sizes,
474
- upsample_rates,
475
- upsample_initial_channel,
476
- upsample_kernel_sizes,
477
- gin_channels=0,
478
- ):
479
- super(Generator, self).__init__()
480
- self.num_kernels = len(resblock_kernel_sizes)
481
- self.num_upsamples = len(upsample_rates)
482
- self.conv_pre = Conv1d(
483
- initial_channel, upsample_initial_channel, 7, 1, padding=3
484
- )
485
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
486
-
487
- self.ups = nn.ModuleList()
488
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
489
- self.ups.append(
490
- weight_norm(
491
- ConvTranspose1d(
492
- upsample_initial_channel // (2**i),
493
- upsample_initial_channel // (2 ** (i + 1)),
494
- k,
495
- u,
496
- padding=(k - u) // 2,
497
- )
498
- )
499
- )
500
-
501
- self.resblocks = nn.ModuleList()
502
- for i in range(len(self.ups)):
503
- ch = upsample_initial_channel // (2 ** (i + 1))
504
- for j, (k, d) in enumerate(
505
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
506
- ):
507
- self.resblocks.append(resblock(ch, k, d))
508
-
509
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
510
- self.ups.apply(init_weights)
511
-
512
- if gin_channels != 0:
513
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
514
-
515
- def forward(self, x, g=None):
516
- x = self.conv_pre(x)
517
- if g is not None:
518
- x = x + self.cond(g)
519
-
520
- for i in range(self.num_upsamples):
521
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
522
- x = self.ups[i](x)
523
- xs = None
524
- for j in range(self.num_kernels):
525
- if xs is None:
526
- xs = self.resblocks[i * self.num_kernels + j](x)
527
- else:
528
- xs += self.resblocks[i * self.num_kernels + j](x)
529
- x = xs / self.num_kernels
530
- x = F.leaky_relu(x)
531
- x = self.conv_post(x)
532
- x = torch.tanh(x)
533
-
534
- return x
535
-
536
- def remove_weight_norm(self):
537
- print("Removing weight norm...")
538
- for layer in self.ups:
539
- remove_weight_norm(layer)
540
- for layer in self.resblocks:
541
- layer.remove_weight_norm()
542
-
543
-
544
- class DiscriminatorP(torch.nn.Module):
545
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
546
- super(DiscriminatorP, self).__init__()
547
- self.period = period
548
- self.use_spectral_norm = use_spectral_norm
549
- norm_f = weight_norm if use_spectral_norm is False else spectral_norm
550
- self.convs = nn.ModuleList(
551
- [
552
- norm_f(
553
- Conv2d(
554
- 1,
555
- 32,
556
- (kernel_size, 1),
557
- (stride, 1),
558
- padding=(get_padding(kernel_size, 1), 0),
559
- )
560
- ),
561
- norm_f(
562
- Conv2d(
563
- 32,
564
- 128,
565
- (kernel_size, 1),
566
- (stride, 1),
567
- padding=(get_padding(kernel_size, 1), 0),
568
- )
569
- ),
570
- norm_f(
571
- Conv2d(
572
- 128,
573
- 512,
574
- (kernel_size, 1),
575
- (stride, 1),
576
- padding=(get_padding(kernel_size, 1), 0),
577
- )
578
- ),
579
- norm_f(
580
- Conv2d(
581
- 512,
582
- 1024,
583
- (kernel_size, 1),
584
- (stride, 1),
585
- padding=(get_padding(kernel_size, 1), 0),
586
- )
587
- ),
588
- norm_f(
589
- Conv2d(
590
- 1024,
591
- 1024,
592
- (kernel_size, 1),
593
- 1,
594
- padding=(get_padding(kernel_size, 1), 0),
595
- )
596
- ),
597
- ]
598
- )
599
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
600
-
601
- def forward(self, x):
602
- fmap = []
603
-
604
- # 1d to 2d
605
- b, c, t = x.shape
606
- if t % self.period != 0: # pad first
607
- n_pad = self.period - (t % self.period)
608
- x = F.pad(x, (0, n_pad), "reflect")
609
- t = t + n_pad
610
- x = x.view(b, c, t // self.period, self.period)
611
-
612
- for layer in self.convs:
613
- x = layer(x)
614
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
615
- fmap.append(x)
616
- x = self.conv_post(x)
617
- fmap.append(x)
618
- x = torch.flatten(x, 1, -1)
619
-
620
- return x, fmap
621
-
622
-
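A toy illustration of the 1d-to-2d fold in DiscriminatorP.forward above, with assumed values period=5 and t=13:

import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 13)                 # (b, c, t), t not divisible by the period
period = 5
n_pad = period - (x.shape[-1] % period)   # 2
x = F.pad(x, (0, n_pad), "reflect")       # t -> 15
x = x.view(1, 1, -1, period)              # (1, 1, 3, 5): one column per in-period offset
print(x.shape)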
623
- class DiscriminatorS(torch.nn.Module):
624
- def __init__(self, use_spectral_norm=False):
625
- super(DiscriminatorS, self).__init__()
626
- norm_f = weight_norm if use_spectral_norm is False else spectral_norm
627
- self.convs = nn.ModuleList(
628
- [
629
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
630
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
631
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
632
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
633
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
634
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
635
- ]
636
- )
637
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
638
-
639
- def forward(self, x):
640
- fmap = []
641
-
642
- for layer in self.convs:
643
- x = layer(x)
644
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
645
- fmap.append(x)
646
- x = self.conv_post(x)
647
- fmap.append(x)
648
- x = torch.flatten(x, 1, -1)
649
-
650
- return x, fmap
651
-
652
-
653
- class MultiPeriodDiscriminator(torch.nn.Module):
654
- def __init__(self, use_spectral_norm=False):
655
- super(MultiPeriodDiscriminator, self).__init__()
656
- periods = [2, 3, 5, 7, 11]
657
-
658
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
659
- discs = discs + [
660
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
661
- ]
662
- self.discriminators = nn.ModuleList(discs)
663
-
664
- def forward(self, y, y_hat):
665
- y_d_rs = []
666
- y_d_gs = []
667
- fmap_rs = []
668
- fmap_gs = []
669
- for i, d in enumerate(self.discriminators):
670
- y_d_r, fmap_r = d(y)
671
- y_d_g, fmap_g = d(y_hat)
672
- y_d_rs.append(y_d_r)
673
- y_d_gs.append(y_d_g)
674
- fmap_rs.append(fmap_r)
675
- fmap_gs.append(fmap_g)
676
-
677
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
678
-
679
-
680
- class ReferenceEncoder(nn.Module):
681
- """
682
- inputs --- [N, Ty/r, n_mels*r] mels
683
- outputs --- [N, ref_enc_gru_size]
684
- """
685
-
686
- def __init__(self, spec_channels, gin_channels=0):
687
- super().__init__()
688
- self.spec_channels = spec_channels
689
- ref_enc_filters = [32, 32, 64, 64, 128, 128]
690
- K = len(ref_enc_filters)
691
- filters = [1] + ref_enc_filters
692
- convs = [
693
- weight_norm(
694
- nn.Conv2d(
695
- in_channels=filters[i],
696
- out_channels=filters[i + 1],
697
- kernel_size=(3, 3),
698
- stride=(2, 2),
699
- padding=(1, 1),
700
- )
701
- )
702
- for i in range(K)
703
- ]
704
- self.convs = nn.ModuleList(convs)
705
- # self.wns = nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)]) # noqa: E501
706
-
707
- out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K)
708
- self.gru = nn.GRU(
709
- input_size=ref_enc_filters[-1] * out_channels,
710
- hidden_size=256 // 2,
711
- batch_first=True,
712
- )
713
- self.proj = nn.Linear(128, gin_channels)
714
-
715
- def forward(self, inputs, mask=None):
716
- N = inputs.size(0)
717
- out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs]
718
- for conv in self.convs:
719
- out = conv(out)
720
- # out = wn(out)
721
- out = F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K]
722
-
723
- out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K]
724
- T = out.size(1)
725
- N = out.size(0)
726
- out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K]
727
-
728
- self.gru.flatten_parameters()
729
- memory, out = self.gru(out) # out --- [1, N, 128]
730
-
731
- return self.proj(out.squeeze(0))
732
-
733
- def calculate_channels(self, L, kernel_size, stride, pad, n_convs):
734
- for i in range(n_convs):
735
- L = (L - kernel_size + 2 * pad) // stride + 1
736
- return L
737
-
738
-
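A worked check of calculate_channels: with spec_channels=128 and the six stride-2 convs above, the frequency axis shrinks 128 -> 64 -> 32 -> 16 -> 8 -> 4 -> 2, so the GRU input size is ref_enc_filters[-1] * 2 = 256:

def conv_out_len(L, kernel_size=3, stride=2, pad=1, n_convs=6):
    for _ in range(n_convs):
        L = (L - kernel_size + 2 * pad) // stride + 1
    return L

assert conv_out_len(128) == 2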
739
- class SynthesizerTrn(nn.Module):
740
- """
741
- Synthesizer for Training
742
- """
743
-
744
- def __init__(
745
- self,
746
- n_vocab,
747
- spec_channels,
748
- segment_size,
749
- inter_channels,
750
- hidden_channels,
751
- filter_channels,
752
- n_heads,
753
- n_layers,
754
- kernel_size,
755
- p_dropout,
756
- resblock,
757
- resblock_kernel_sizes,
758
- resblock_dilation_sizes,
759
- upsample_rates,
760
- upsample_initial_channel,
761
- upsample_kernel_sizes,
762
- n_speakers=256,
763
- gin_channels=256,
764
- use_sdp=True,
765
- n_flow_layer=4,
766
- n_layers_trans_flow=6,
767
- flow_share_parameter=False,
768
- use_transformer_flow=True,
769
- **kwargs
770
- ):
771
- super().__init__()
772
- self.n_vocab = n_vocab
773
- self.spec_channels = spec_channels
774
- self.inter_channels = inter_channels
775
- self.hidden_channels = hidden_channels
776
- self.filter_channels = filter_channels
777
- self.n_heads = n_heads
778
- self.n_layers = n_layers
779
- self.kernel_size = kernel_size
780
- self.p_dropout = p_dropout
781
- self.resblock = resblock
782
- self.resblock_kernel_sizes = resblock_kernel_sizes
783
- self.resblock_dilation_sizes = resblock_dilation_sizes
784
- self.upsample_rates = upsample_rates
785
- self.upsample_initial_channel = upsample_initial_channel
786
- self.upsample_kernel_sizes = upsample_kernel_sizes
787
- self.segment_size = segment_size
788
- self.n_speakers = n_speakers
789
- self.gin_channels = gin_channels
790
- self.n_layers_trans_flow = n_layers_trans_flow
791
- self.use_spk_conditioned_encoder = kwargs.get(
792
- "use_spk_conditioned_encoder", True
793
- )
794
- self.use_sdp = use_sdp
795
- self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False)
796
- self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01)
797
- self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6)
798
- self.current_mas_noise_scale = self.mas_noise_scale_initial
799
- if self.use_spk_conditioned_encoder and gin_channels > 0:
800
- self.enc_gin_channels = gin_channels
801
- self.enc_p = TextEncoder(
802
- n_vocab,
803
- inter_channels,
804
- hidden_channels,
805
- filter_channels,
806
- n_heads,
807
- n_layers,
808
- kernel_size,
809
- p_dropout,
810
- gin_channels=self.enc_gin_channels,
811
- )
812
- self.dec = Generator(
813
- inter_channels,
814
- resblock,
815
- resblock_kernel_sizes,
816
- resblock_dilation_sizes,
817
- upsample_rates,
818
- upsample_initial_channel,
819
- upsample_kernel_sizes,
820
- gin_channels=gin_channels,
821
- )
822
- self.enc_q = PosteriorEncoder(
823
- spec_channels,
824
- inter_channels,
825
- hidden_channels,
826
- 5,
827
- 1,
828
- 16,
829
- gin_channels=gin_channels,
830
- )
831
- if use_transformer_flow:
832
- self.flow = TransformerCouplingBlock(
833
- inter_channels,
834
- hidden_channels,
835
- filter_channels,
836
- n_heads,
837
- n_layers_trans_flow,
838
- 5,
839
- p_dropout,
840
- n_flow_layer,
841
- gin_channels=gin_channels,
842
- share_parameter=flow_share_parameter,
843
- )
844
- else:
845
- self.flow = ResidualCouplingBlock(
846
- inter_channels,
847
- hidden_channels,
848
- 5,
849
- 1,
850
- n_flow_layer,
851
- gin_channels=gin_channels,
852
- )
853
- self.sdp = StochasticDurationPredictor(
854
- hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels
855
- )
856
- self.dp = DurationPredictor(
857
- hidden_channels, 256, 3, 0.5, gin_channels=gin_channels
858
- )
859
-
860
- if n_speakers > 1:
861
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
862
- else:
863
- self.ref_enc = ReferenceEncoder(spec_channels, gin_channels)
864
-
865
- def forward(self, x, x_lengths, y, y_lengths, sid, tone, language, bert, ja_bert):
866
- if self.n_speakers > 0:
867
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
868
- else:
869
- g = self.ref_enc(y.transpose(1, 2)).unsqueeze(-1)
870
- x, m_p, logs_p, x_mask = self.enc_p(
871
- x, x_lengths, tone, language, bert, ja_bert, g=g
872
- )
873
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
874
- z_p = self.flow(z, y_mask, g=g)
875
-
876
- with torch.no_grad():
877
- # negative cross-entropy
878
- s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
879
- neg_cent1 = torch.sum(
880
- -0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True
881
- ) # [b, 1, t_s]
882
- neg_cent2 = torch.matmul(
883
- -0.5 * (z_p**2).transpose(1, 2), s_p_sq_r
884
- ) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
885
- neg_cent3 = torch.matmul(
886
- z_p.transpose(1, 2), (m_p * s_p_sq_r)
887
- ) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
888
- neg_cent4 = torch.sum(
889
- -0.5 * (m_p**2) * s_p_sq_r, [1], keepdim=True
890
- ) # [b, 1, t_s]
891
- neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
892
- if self.use_noise_scaled_mas:
893
- epsilon = (
894
- torch.std(neg_cent)
895
- * torch.randn_like(neg_cent)
896
- * self.current_mas_noise_scale
897
- )
898
- neg_cent = neg_cent + epsilon
899
-
900
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
901
- attn = (
902
- monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1))
903
- .unsqueeze(1)
904
- .detach()
905
- )
906
-
907
- w = attn.sum(2)
908
-
909
- l_length_sdp = self.sdp(x, x_mask, w, g=g)
910
- l_length_sdp = l_length_sdp / torch.sum(x_mask)
911
-
912
- logw_ = torch.log(w + 1e-6) * x_mask
913
- logw = self.dp(x, x_mask, g=g)
914
- l_length_dp = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(
915
- x_mask
916
- ) # for averaging
917
-
918
- l_length = l_length_dp + l_length_sdp
919
-
920
- # expand prior
921
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
922
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
923
-
924
- z_slice, ids_slice = commons.rand_slice_segments(
925
- z, y_lengths, self.segment_size
926
- )
927
- o = self.dec(z_slice, g=g)
928
- return (
929
- o,
930
- l_length,
931
- attn,
932
- ids_slice,
933
- x_mask,
934
- y_mask,
935
- (z, z_p, m_p, logs_p, m_q, logs_q),
936
- (x, logw, logw_),
937
- )
938
-
939
- def infer(
940
- self,
941
- x,
942
- x_lengths,
943
- sid,
944
- tone,
945
- language,
946
- bert,
947
- ja_bert,
948
- noise_scale=0.667,
949
- length_scale=1,
950
- noise_scale_w=0.8,
951
- max_len=None,
952
- sdp_ratio=0,
953
- y=None,
954
- ):
955
- # x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert)
956
- # g = self.gst(y)
957
- if self.n_speakers > 0:
958
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
959
- else:
960
- g = self.ref_enc(y.transpose(1, 2)).unsqueeze(-1)
961
- x, m_p, logs_p, x_mask = self.enc_p(
962
- x, x_lengths, tone, language, bert, ja_bert, g=g
963
- )
964
- logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (
965
- sdp_ratio
966
- ) + self.dp(x, x_mask, g=g) * (1 - sdp_ratio)
967
- w = torch.exp(logw) * x_mask * length_scale
968
- w_ceil = torch.ceil(w)
969
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
970
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(
971
- x_mask.dtype
972
- )
973
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
974
- attn = commons.generate_path(w_ceil, attn_mask)
975
-
976
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(
977
- 1, 2
978
- ) # [b, t', t], [b, t, d] -> [b, d, t']
979
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(
980
- 1, 2
981
- ) # [b, t', t], [b, t, d] -> [b, d, t']
982
-
983
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
984
- z = self.flow(z_p, y_mask, g=g, reverse=True)
985
- o = self.dec((z * y_mask)[:, :, :max_len], g=g)
986
- return o, attn, y_mask, (z, z_p, m_p, logs_p)
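A small sketch of the duration-to-length step inside infer (the log-duration values are illustrative and the x_mask multiplication is omitted):

import torch

logw = torch.tensor([[[0.0, 0.7, 1.1]]])             # predicted log-durations per phone
w_ceil = torch.ceil(torch.exp(logw) * 1.2)           # length_scale=1.2 slows speech down
y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
print(w_ceil, y_lengths)                             # tensor([[[2., 3., 4.]]]) -> 9 frames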
 
spaces/Akshat231/super_space/app.py DELETED
@@ -1,122 +0,0 @@
1
- ## THIS IS FOR SUPER-RESOLUTION
2
-
3
- import gradio as gr
4
- from PIL import Image
5
- import tensorflow as tf
6
- import tensorflow_hub as hub
7
- import numpy as np
8
- import requests
9
- import cv2
10
- from tensorflow.python.keras.layers import Add, Conv2D, Input, Lambda
11
- from tensorflow.python.keras.models import Model
12
-
13
-
14
- super_resolution='./weights.h5'
15
-
16
-
17
- pre_mean = np.array([0.4488, 0.4371, 0.4040]) * 255
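- # per-channel RGB mean (these values match the DIV2K dataset mean used by EDSR-style models)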
18
-
19
-
20
- #HELPER FUN
21
- def normalize(x, rgb_mean=pre_mean):
22
- return (x - rgb_mean) / 127.5
23
-
24
- #HELPER FUN
25
- def pixel_shuffle(scale):
26
- return lambda x: tf.nn.depth_to_space(x, scale)
27
-
28
- #HELPER FUN
29
- def denormalize(x, rgb_mean=pre_mean):
30
- return x * 127.5 + rgb_mean
31
-
32
-
33
- #MAIN FUN
34
- def res_block(x_in, filters, scaling):
35
- x = Conv2D(filters, 3, padding='same', activation='relu')(x_in)
36
- x = Conv2D(filters, 3, padding='same')(x)
37
- x =tf.keras.layers.LeakyReLU(alpha = 0.01)(x)
38
- x = tf.keras.layers.BatchNormalization()(x)
39
- if scaling:
40
- x = Lambda(lambda t: t * scaling)(x)
41
- x = Add()([x_in, x])
42
- return x
43
-
44
-
45
-
46
- #HELPER FUN
47
- def upsample(x, scale, num_filters):
48
- def upsample_1(x, factor, **kwargs):
49
- x = Conv2D(num_filters * (factor ** 2), 3, padding='same', **kwargs)(x)
50
- return Lambda(pixel_shuffle(scale=factor))(x)
51
-
52
- if scale == 2:
53
- x = upsample_1(x, 2, name='conv2d_1_scale_2')
54
- elif scale == 3:
55
- x = upsample_1(x, 3, name='conv2d_1_scale_3')
56
- elif scale == 4:
57
- x = upsample_1(x, 2, name='conv2d_1_scale_2')
58
- x = upsample_1(x, 2, name='conv2d_2_scale_2')
59
-
60
- return x
61
-
62
- #MAIN FUN
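A quick shape check of the pixel-shuffle upsampling above: depth_to_space with factor 2 turns (h, w, 4c) into (2h, 2w, c), which is why the scale-4 branch applies it twice:

import tensorflow as tf

x = tf.zeros((1, 32, 32, 64 * 4))        # num_filters * factor**2, as in upsample_1
print(tf.nn.depth_to_space(x, 2).shape)  # (1, 64, 64, 64)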
63
- def super_res(scale, num_filters=64, num_res_blocks=8, res_block_scaling=None):
64
- x_in = Input(shape=(None, None, 3))
65
- x = Lambda(normalize)(x_in)
66
- x = b = Conv2D(num_filters, 3, padding='same')(x)
67
-
68
- for i in range(num_res_blocks):
69
- b = res_block(b, num_filters, res_block_scaling)
70
- b = Conv2D(num_filters, 3, padding='same')(b)
71
- x = Add()([x, b])
72
-
73
- x = upsample(x, scale, num_filters)
74
- x = Conv2D(3, 3, padding='same')(x)
75
-
76
- x = Lambda(denormalize)(x)
77
- return Model(x_in, x, name="super_res")
78
-
79
-
80
-
81
-
82
- def load_image(path):
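- # accepts either a PIL image or a NumPy array from Gradio; np.array normalizes both to an array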
83
- return np.array(path)
84
-
85
-
86
-
87
-
88
- def resolve(model, lr_batch):
89
- lr_batch = tf.cast(lr_batch, tf.float32)
90
- sr_batch = model(lr_batch)
91
- sr_batch = tf.clip_by_value(sr_batch, 0, 255)
92
- sr_batch = tf.round(sr_batch)
93
- sr_batch = tf.cast(sr_batch, tf.uint8)
94
- return sr_batch
95
-
96
-
97
-
98
- def resolve_single(model, lr):
99
- return resolve(model, tf.expand_dims(lr, axis=0))[0]
100
-
101
-
102
-
103
- model=super_res(scale=4, num_res_blocks=16)
104
-
105
-
106
- model.load_weights(super_resolution)
107
-
108
-
109
- def predict_image(image):
110
- lr=load_image(image)
111
- sr = resolve_single(model, lr)
112
- numpy_array = sr.numpy()
113
- ima = Image.fromarray(numpy_array)
114
- return ima
115
-
116
-
117
-
118
- image=gr.inputs.Image()
119
-
120
- irface=gr.Interface(fn=predict_image, inputs=image, outputs=image,interpretation='default')
121
-
122
- irface.launch()
 
spaces/AkshayKollimarala/MYAIVOICESPEECH/app.py DELETED
@@ -1,164 +0,0 @@
1
- import os
2
- import re
3
- import requests
4
- import json
5
- import gradio as gr
6
- from langchain.chat_models import ChatOpenAI
7
- from langchain import LLMChain, PromptTemplate
8
- from langchain.memory import ConversationBufferMemory
9
-
10
- OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
11
- PLAY_HT_API_KEY=os.getenv('PLAY_HT_API_KEY')
12
- PLAY_HT_USER_ID=os.getenv('PLAY_HT_USER_ID')
13
-
14
- PLAY_HT_VOICE_ID=os.getenv('PLAY_HT_VOICE_ID')
15
- play_ht_api_get_audio_url = "https://play.ht/api/v2/tts"
16
-
17
-
18
- template = """You are a helpful assistant to answer user queries.
19
- {chat_history}
20
- User: {user_message}
21
- Chatbot:"""
22
-
23
- prompt = PromptTemplate(
24
- input_variables=["chat_history", "user_message"], template=template
25
- )
26
-
27
- memory = ConversationBufferMemory(memory_key="chat_history")
28
-
29
- llm_chain = LLMChain(
30
- llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"),
31
- prompt=prompt,
32
- verbose=True,
33
- memory=memory,
34
- )
35
-
36
- headers = {
37
- "accept": "text/event-stream",
38
- "content-type": "application/json",
39
- "AUTHORIZATION": "Bearer "+ PLAY_HT_API_KEY,
40
- "X-USER-ID": PLAY_HT_USER_ID
41
- }
42
-
43
-
44
- def get_payload(text):
45
- return {
46
- "text": text,
47
- "voice": PLAY_HT_VOICE_ID,
48
- "quality": "medium",
49
- "output_format": "mp3",
50
- "speed": 1,
51
- "sample_rate": 24000,
52
- "seed": None,
53
- "temperature": None
54
- }
55
-
56
- def get_generated_audio(text):
57
- payload = get_payload(text)
58
- generated_response = {}
59
- try:
60
- response = requests.post(play_ht_api_get_audio_url, json=payload, headers=headers)
61
- response.raise_for_status()
62
- generated_response["type"]= 'SUCCESS'
63
- generated_response["response"] = response.text
64
- except requests.exceptions.RequestException as e:
65
- generated_response["type"]= 'ERROR'
66
- try:
67
- response_text = json.loads(response.text)
68
- if response_text['error_message']:
69
- generated_response["response"] = response_text['error_message']
70
- else:
71
- generated_response["response"] = response.text
72
- except Exception as e:
73
- generated_response["response"] = response.text
74
- except Exception as e:
75
- generated_response["type"]= 'ERROR'
76
- generated_response["response"] = response.text
77
- return generated_response
78
-
79
- def extract_urls(text):
80
- # Define the regex pattern for URLs
81
- url_pattern = r'https?://(?:[-\w.]|(?:%[\da-fA-F]{2}))+[/\w\.-]*'
82
-
83
- # Find all occurrences of URLs in the text
84
- urls = re.findall(url_pattern, text)
85
-
86
- return urls
87
-
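A quick check of extract_urls on a Play.ht-style event stream (the payload below is illustrative, not a real response):

sample = 'event: completed\ndata: {"url": "https://example-results.s3.amazonaws.com/abc123.mp3"}'
print(extract_urls(sample))  # ['https://example-results.s3.amazonaws.com/abc123.mp3']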
88
- def get_audio_reply_for_question(text):
89
- generated_audio_event = get_generated_audio(text)
90
- #From get_generated_audio, you will get events in a string format, from that we need to extract the url
91
- final_response = {
92
- "audio_url": '',
93
- "message": ''
94
- }
95
- if generated_audio_event["type"] == 'SUCCESS':
96
- audio_urls = extract_urls(generated_audio_event["response"])
97
- if len(audio_urls) == 0:
98
- final_response['message'] = "No audio file link found in generated event"
99
- else:
100
- final_response['audio_url'] = audio_urls[-1]
101
- else:
102
- final_response['message'] = generated_audio_event['response']
103
- return final_response
104
-
105
- def download_url(url):
106
- try:
107
- # Send a GET request to the URL to fetch the content
108
- final_response = {
109
- 'content':'',
110
- 'error':''
111
- }
112
- response = requests.get(url)
113
- # Check if the request was successful (status code 200)
114
- if response.status_code == 200:
115
- final_response['content'] = response.content
116
- else:
117
- final_response['error'] = f"Failed to download the URL. Status code: {response.status_code}"
118
- except Exception as e:
119
- final_response['error'] = f"Failed to download the URL. Error: {e}"
120
- return final_response
121
-
122
- def get_filename_from_url(url):
123
- # Use os.path.basename() to extract the file name from the URL
124
- file_name = os.path.basename(url)
125
- return file_name
126
-
127
- def get_text_response(user_message):
128
- response = llm_chain.predict(user_message = user_message)
129
- return response
130
-
131
- def get_text_response_and_audio_response(user_message):
132
- response = get_text_response(user_message) # Getting the reply from Open AI
133
- audio_reply_for_question_response = get_audio_reply_for_question(response)
134
- final_response = {
135
- 'output_file_path': '',
136
- 'message':''
137
- }
138
- audio_url = audio_reply_for_question_response['audio_url']
139
- if audio_url:
140
- output_file_path=get_filename_from_url(audio_url)
141
- download_url_response = download_url(audio_url)
142
- audio_content = download_url_response['content']
143
- if audio_content:
144
- with open(output_file_path, "wb") as audio_file:
145
- audio_file.write(audio_content)
146
- final_response['output_file_path'] = output_file_path
147
- else:
148
- final_response['message'] = download_url_response['error']
149
- else:
150
- final_response['message'] = audio_reply_for_question_response['message']
151
- return final_response
152
-
153
- def chat_bot_response(message, history):
154
- text_and_audio_response = get_text_response_and_audio_response(message)
155
- output_file_path = text_and_audio_response['output_file_path']
156
- if output_file_path:
157
- return (text_and_audio_response['output_file_path'],)
158
- else:
159
- return text_and_audio_response['message']
160
-
161
- demo = gr.ChatInterface(chat_bot_response,examples=["How are you doing?","What are your interests?","Which places do you like to visit?"])
162
-
163
- if __name__ == "__main__":
164
- demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_769x769_80k_cityscapes.py DELETED
@@ -1,9 +0,0 @@
- _base_ = [
-     '../_base_/models/deeplabv3_r50-d8.py',
-     '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
-     '../_base_/schedules/schedule_80k.py'
- ]
- model = dict(
-     decode_head=dict(align_corners=True),
-     auxiliary_head=dict(align_corners=True),
-     test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513)))
spaces/Araby/BRATArA/app.py DELETED
@@ -1,43 +0,0 @@
- import streamlit as st
- from transformers import GPT2TokenizerFast, AutoModelForCausalLM
- from arabert.preprocess import ArabertPreprocessor
-
- # Load the model and tokenizer
-
- model_name = "malmarjeh/gpt2"
- tokenizer = GPT2TokenizerFast.from_pretrained("aubmindlab/aragpt2-base")
- model = AutoModelForCausalLM.from_pretrained(model_name)
- preprocessor = ArabertPreprocessor(model_name=model_name)
-
- # Streamlit UI
- st.title('Arabic Text Summarizer | By M.Araby')
- text = st.text_area("Paste your Arabic text here:")
-
- if st.button('Summarize'):
-     if text:
-         # Preprocess and tokenize the input text
-         processed_text = preprocessor.preprocess(text)
-         formatted_text = '\n النص: ' + processed_text + ' \n الملخص: \n '
-         tokenizer.add_special_tokens({'pad_token': '<pad>'})
-         tokens = tokenizer.batch_encode_plus([formatted_text], return_tensors='pt', padding='max_length',
-                                              max_length=150)
-
-         # Generate the summary
-         output = model.generate(
-             input_ids=tokens['input_ids'],
-             repetition_penalty=2.0,
-             num_beams=5,
-             max_length=600,
-             pad_token_id=tokenizer.pad_token_id,
-             eos_token_id=tokenizer.eos_token_id,
-             bos_token_id=tokenizer.bos_token_id,
-         )
-
-         # Decode and display the summarized text
-         result = tokenizer.decode(output[0][150:], skip_special_tokens=True).strip()
-         st.subheader("Original Text Input")
-         st.write(text)
-         st.subheader("Summarized Text Idea")
-         st.write(result)
-     else:
-         st.warning("Please enter Arabic text to summarize.")
spaces/Arijit-hazra/my-image-captioner/app.py DELETED
@@ -1,50 +0,0 @@
- import re
- import string
- import gradio as gr
- import tensorflow as tf
- from load_model import build
-
- IMG_SHAPE = (224, 224, 3)
-
-
- def custom_standardization(s):
-     s = tf.strings.lower(s)
-     s = tf.strings.regex_replace(s, f'[{re.escape(string.punctuation)}]', '')
-     s = tf.strings.join(['[START]', s, '[END]'], separator=' ')
-     return s
-
- model = build()
-
- rescale = lambda image: tf.image.resize(tf.convert_to_tensor(image), IMG_SHAPE[:-1])
-
- def single_img_transcribe(image, temperature=1):
-     initial = model.word_to_index([['[START]']])  # (batch, sequence)
-     img_features = model.feature_extractor(rescale(image)[tf.newaxis, ...])
-
-     tokens = initial  # (batch, sequence)
-     for n in range(50):
-         preds = model((img_features, tokens)).numpy()  # (batch, sequence, vocab)
-         preds = preds[:, -1, :]  # (batch, vocab)
-         if temperature == 0:
-             next = tf.argmax(preds, axis=-1)[:, tf.newaxis]  # (batch, 1)
-         else:
-             next = tf.random.categorical(preds / temperature, num_samples=1)  # (batch, 1)
-         tokens = tf.concat([tokens, next], axis=1)  # (batch, sequence)
-
-         if next[0] == model.word_to_index('[END]'):
-             break
-
-     words = model.index_to_word(tokens[0, 1:-1])
-     result = tf.strings.reduce_join(words, axis=-1, separator=' ')
-     return result.numpy().decode()
-
- def img_transcribes(image):
-     result = []
-     for t in [0, 0.5, 1]:
-         result.append(single_img_transcribe(image, t))
-     return result
-
- gr.Interface(fn=img_transcribes,
-              inputs=gr.Image(type="pil"),
-              outputs=["text", "text", "text"]
-              ).launch()
spaces/Armandoliv/t5-summarize-app-scitldr/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: T5 Summarize App Scitldr
- emoji: 💻
- colorFrom: red
- colorTo: green
- sdk: gradio
- sdk_version: 3.2
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/app.py DELETED
@@ -1,129 +0,0 @@
- import argparse
- from functools import partial
- import cv2
- import requests
- import os
- from io import BytesIO
- from PIL import Image
- import numpy as np
- from pathlib import Path
- import gradio as gr
-
- import warnings
-
- import torch
-
- os.system("python setup.py build develop --user")
- os.system("pip install packaging==21.3")
- warnings.filterwarnings("ignore")
-
-
- from groundingdino.models import build_model
- from groundingdino.util.slconfig import SLConfig
- from groundingdino.util.utils import clean_state_dict
- from groundingdino.util.inference import annotate, load_image, predict
- import groundingdino.datasets.transforms as T
-
- from huggingface_hub import hf_hub_download
-
-
-
- # Use this config to evaluate the GLIP-T model
- config_file = "groundingdino/config/GroundingDINO_SwinT_OGC.py"
- ckpt_repo_id = "ShilongLiu/GroundingDINO"
- ckpt_filename = "groundingdino_swint_ogc.pth"
-
-
- def load_model_hf(model_config_path, repo_id, filename, device='cpu'):
-     args = SLConfig.fromfile(model_config_path)
-     model = build_model(args)
-     args.device = device
-
-     cache_file = hf_hub_download(repo_id=repo_id, filename=filename)
-     checkpoint = torch.load(cache_file, map_location='cpu')
-     log = model.load_state_dict(clean_state_dict(checkpoint['model']), strict=False)
-     print("Model loaded from {} \n => {}".format(cache_file, log))
-     _ = model.eval()
-     return model
-
- def image_transform_grounding(init_image):
-     transform = T.Compose([
-         T.RandomResize([800], max_size=1333),
-         T.ToTensor(),
-         T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
-     ])
-     image, _ = transform(init_image, None)  # 3, h, w
-     return init_image, image
-
- def image_transform_grounding_for_vis(init_image):
-     transform = T.Compose([
-         T.RandomResize([800], max_size=1333),
-     ])
-     image, _ = transform(init_image, None)  # 3, h, w
-     return image
-
- model = load_model_hf(config_file, ckpt_repo_id, ckpt_filename)
-
- def run_grounding(input_image, grounding_caption, box_threshold, text_threshold):
-     init_image = input_image.convert("RGB")
-     original_size = init_image.size
-
-     _, image_tensor = image_transform_grounding(init_image)
-     image_pil: Image = image_transform_grounding_for_vis(init_image)
-
-     # run grounding
-     boxes, logits, phrases = predict(model, image_tensor, grounding_caption, box_threshold, text_threshold, device='cpu')
-     annotated_frame = annotate(image_source=np.asarray(image_pil), boxes=boxes, logits=logits, phrases=phrases)
-     image_with_box = Image.fromarray(cv2.cvtColor(annotated_frame, cv2.COLOR_BGR2RGB))
-
-     return image_with_box
-
- if __name__ == "__main__":
-
-     css = """
-     #mkd {
-         height: 500px;
-         overflow: auto;
-         border: 1px solid #ccc;
-     }
-     """
-     block = gr.Blocks(css=css).queue()
-     with block:
-         gr.Markdown("<h1><center>Grounding DINO<h1><center>")
-         gr.Markdown("<h3><center>Open-World Detection with <a href='https://github.com/Arulkumar03/SOTA-Grounding-DINO.ipynb'>Grounding DINO</a><h3><center>")
-         gr.Markdown("<h3><center>Note: the model runs on CPU, so it may take a while.<h3><center>")
-
-         with gr.Row():
-             with gr.Column():
-                 input_image = gr.Image(source='upload', type="pil")
-                 grounding_caption = gr.Textbox(label="Detection Prompt")
-                 run_button = gr.Button(label="Run")
-                 with gr.Accordion("Advanced options", open=False):
-                     box_threshold = gr.Slider(
-                         label="Box Threshold", minimum=0.0, maximum=1.0, value=0.25, step=0.001
-                     )
-                     text_threshold = gr.Slider(
-                         label="Text Threshold", minimum=0.0, maximum=1.0, value=0.25, step=0.001
-                     )
-
-             with gr.Column():
-                 gallery = gr.outputs.Image(
-                     type="pil",
-                     # label="grounding results"
-                 ).style(full_width=True, full_height=True)
-                 # gallery = gr.Gallery(label="Generated images", show_label=False).style(
-                 #     grid=[1], height="auto", container=True, full_width=True, full_height=True)
-
-         run_button.click(fn=run_grounding, inputs=[
-             input_image, grounding_caption, box_threshold, text_threshold], outputs=[gallery])
-         gr.Examples(
-             [["watermelon.jpg", "watermelon", 0.25, 0.25]],
-             inputs=[input_image, grounding_caption, box_threshold, text_threshold],
-             outputs=[gallery],
-             fn=run_grounding,
-             cache_examples=True,
-             label='Try this example input!'
-         )
-     block.launch(share=True, show_api=False, show_error=True)
spaces/Audio-AGI/WavJourney/README.md DELETED
@@ -1,111 +0,0 @@
- ---
- title: WavJourney
- emoji: 🔥
- colorFrom: blue
- colorTo: purple
- sdk: docker
- pinned: false
- license: cc-by-nc-4.0
- ---
- # <span style="color: blue;">🎵</span> WavJourney: Compositional Audio Creation with LLMs
- [![arXiv](https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg)](https://arxiv.org/abs/2307.14335) [![GitHub Stars](https://img.shields.io/github/stars/Audio-AGI/WavJourney?style=social)](https://github.com/Audio-AGI/WavJourney/) [![githubio](https://img.shields.io/badge/GitHub.io-Demo_Page-blue?logo=Github&style=flat-square)](https://audio-agi.github.io/WavJourney_demopage/) [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Audio-AGI/WavJourney)
-
-
- This repository contains the official implementation of ["WavJourney: Compositional Audio Creation with Large Language Models"](https://audio-agi.github.io/WavJourney_demopage/WavJourney_arXiv.pdf).
-
- Starting with a text prompt, WavJourney can create audio content with engaging storylines encompassing personalized speakers, lifelike speech in context, emotionally resonant music compositions, and impactful sound effects that enhance the auditory experience. Check the audio examples on the [Project Page](https://audio-agi.github.io/WavJourney_demopage/)!
-
- <!-- <p align="center">
-   <img align="middle" width="800" src="assets/WavJourney.png"/>
- </p> -->
-
- <hr>
-
-
- ## Preliminaries
- 1. Install the environment:
- ```bash
- bash ./scripts/EnvsSetup.sh
- ```
- 2. Activate the conda environment:
- ```bash
- conda activate WavJourney
- ```
-
- 3. (Optional) You can modify the default configuration in `config.yaml`; the details are described in the configuration file.
- 4. Pre-download the models (might take some time):
- ```bash
- python scripts/download_models.py
- ```
-
- 5. Set the WAVJOURNEY_OPENAI_KEY environment variable for accessing the [GPT-4 API](https://platform.openai.com/account/api-keys) [[Guidance](https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4)]:
- ```bash
- export WAVJOURNEY_OPENAI_KEY=your_openai_key_here
- ```
-
- 6. Set environment variables for using the API services:
- ```bash
- # Set the port for the WAVJOURNEY service to 8021
- export WAVJOURNEY_SERVICE_PORT=8021
-
- # Set the URL for the WAVJOURNEY service to 127.0.0.1
- export WAVJOURNEY_SERVICE_URL=127.0.0.1
-
- # Limit the maximum script lines for WAVJOURNEY to 999
- export WAVJOURNEY_MAX_SCRIPT_LINES=999
- ```
-
-
- 7. Start the Python API services (e.g., Text-to-Speech, Text-to-Audio):
- ```bash
- bash scripts/start_services.sh
- ```
-
- ## Web APP
- ```bash
- bash scripts/start_ui.sh
- ```
-
- ## Commandline Usage
- ```bash
- python wavjourney_cli.py -f --input-text "Generate a one-minute introduction to quantum mechanics"
- ```
-
-
- ## Kill the services
- You can kill the running services via this command:
- ```bash
- python scripts/kill_services.py
- ```
-
- ## (Advanced features) Speaker customization
- You can add voice presets to WavJourney to customize the voice actors. Simply provide the voice id, the description and a sample wav file, and WavJourney will pick the voice automatically based on the audio script. Predefined system voice presets are in `data/voice_presets`.
-
- You can manage voice presets via the UI. Specifically, if you want to add a voice to the presets, run the script below from the command line:
- ```bash
- python add_voice_preset.py --id "id" --desc "description" --wav-path path/to/wav --session-id ''
- ```
- What makes for a good voice prompt? See the detailed instructions <a href="https://github.com/gitmylo/bark-voice-cloning-HuBERT-quantizer">here</a>.
-
- ## Hardware requirement
- - The VRAM of the GPU in the default configuration should be greater than 16 GB.
- - Operating system: Linux.
-
- ## Citation
- If you find this work useful, you can cite the paper below:
-
-     @article{liu2023wavjourney,
-       title = {WavJourney: Compositional Audio Creation with Large Language Models},
-       author = {Liu, Xubo and Zhu, Zhongkai and Liu, Haohe and Yuan, Yi and Huang, Qiushi and Liang, Jinhua and Cao, Yin and Kong, Qiuqiang and Plumbley, Mark D and Wang, Wenwu},
-       journal = {arXiv preprint arXiv:2307.14335},
-       year = {2023}
-     }
-
- [!["Buy Me A Coffee"](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/liuxubo)
-
- ## Appreciation
- - [Bark](https://github.com/suno-ai/bark) for a zero-shot text-to-speech synthesis model.
- - [AudioCraft](https://github.com/facebookresearch/audiocraft) for state-of-the-art audio generation models.
-
- ## Disclaimer
- We are not responsible for audio generated using semantics created by this model. Just don't use it for illegal purposes.
spaces/AutoLLM/AutoAgents/autoagents/spaces/app.py DELETED
@@ -1,153 +0,0 @@
- import os
- import asyncio
- import random
- from datetime import date, datetime, timezone, timedelta
- from ast import literal_eval
-
- import streamlit as st
- import openai
-
- from autoagents.utils.constants import MAIN_HEADER, MAIN_CAPTION, SAMPLE_QUESTIONS
- from autoagents.agents.search import ActionRunner
-
- from langchain.chat_models import ChatOpenAI
-
-
- async def run():
-     output_acc = ""
-     st.session_state["random"] = random.randint(0, 99)
-     if "task" not in st.session_state:
-         st.session_state.task = None
-     if "model_name" not in st.session_state:
-         st.session_state.model_name = "gpt-3.5-turbo"
-
-     st.set_page_config(
-         page_title="Search Agent",
-         page_icon="🤖",
-         layout="wide",
-         initial_sidebar_state="expanded",
-     )
-
-     st.title(MAIN_HEADER)
-     st.caption(MAIN_CAPTION)
-
-     with st.form("my_form", clear_on_submit=False):
-         st.markdown("<style> .inter { white-space: pre-line; } </style>", unsafe_allow_html=True)
-         user_input = st.text_input(
-             "You: ",
-             key="input",
-             placeholder="Ask me anything ...",
-             label_visibility="hidden",
-         )
-
-         submitted = st.form_submit_button(
-             "Search", help="Hit to submit the search query."
-         )
-
-     # Ask the user to enter their OpenAI API key
-     if (api_key := st.sidebar.text_input("OpenAI api-key", type="password")):
-         api_org = None
-     else:
-         api_key, api_org = os.getenv("OPENAI_API_KEY"), os.getenv("OPENAI_API_ORG")
-     with st.sidebar:
-         model_dict = {
-             "gpt-3.5-turbo": "GPT-3.5-turbo",
-             "gpt-4": "GPT-4 (Better but slower)",
-         }
-         st.radio(
-             "OpenAI model",
-             model_dict.keys(),
-             key="model_name",
-             format_func=lambda x: model_dict[x],
-         )
-
-         time_zone = str(datetime.now(timezone(timedelta(0))).astimezone().tzinfo)
-         st.markdown(f"**The system time zone is {time_zone} and the date is {date.today()}**")
-
-         st.markdown("**Example Queries:**")
-         for q in SAMPLE_QUESTIONS:
-             st.markdown(f"*{q}*")
-
-     if not api_key:
-         st.warning(
-             "API key required to try this app. The API key is not stored in any form. [This](https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key) might help."
-         )
-     elif api_org and st.session_state.model_name == "gpt-4":
-         st.warning(
-             "The free API key does not support GPT-4. Please switch to GPT-3.5-turbo or input your own API key."
-         )
-     else:
-         outputq = asyncio.Queue()
-         runner = ActionRunner(outputq,
-                               ChatOpenAI(openai_api_key=api_key,
-                                          openai_organization=api_org,
-                                          temperature=0,
-                                          model_name=st.session_state.model_name),
-                               persist_logs=True)  # log to HF-dataset
-
-         async def cleanup(e):
-             st.error(e)
-             await st.session_state.task
-             st.session_state.task = None
-             st.stop()
-
-         placeholder = st.empty()
-
-         if user_input and submitted:
-             if st.session_state.task is not None:
-                 with placeholder.container():
-                     st.session_state.task.cancel()
-                     st.warning("Previous search aborted", icon="⚠️")
-
-             st.session_state.task = asyncio.create_task(
-                 runner.run(user_input, outputq)
-             )
-             iterations = 0
-             with st.expander("Search Results", expanded=True):
-                 while True:
-                     with st.spinner("Wait for it..."):
-                         output = await outputq.get()
-                     placeholder.empty()
-                     if isinstance(output, Exception):
-                         if isinstance(output, openai.error.AuthenticationError):
-                             await cleanup(f"AuthenticationError: Invalid OpenAI API key.")
-                         elif isinstance(output, openai.error.InvalidRequestError) \
-                                 and output._message == "The model: `gpt-4` does not exist":
-                             await cleanup(f"The free API key does not support GPT-4. Please switch to GPT-3.5-turbo or input your own API key.")
-                         elif isinstance(output, openai.error.OpenAIError):
-                             await cleanup(output)
-                         elif isinstance(output, RuntimeWarning):
-                             st.warning(output)
-                             continue
-                         else:
-                             await cleanup("Something went wrong. Please try searching again.")
-                         return
-                     try:
-                         output_fmt = literal_eval(output)
-                         st.json(output_fmt, expanded=False)
-                         st.write("---")
-                         iterations += 1
-                     except:
-                         output_acc += "\n" + output
-                         st.markdown(f"<div class=\"inter\"> {output} </div>",
-                                     unsafe_allow_html=True)
-                     if iterations >= runner.agent_executor.max_iterations:
-                         await cleanup(
-                             f"Maximum iterations ({iterations}) exceeded. You can try running the search again or try a variation of the query."
-                         )
-                         return
-                     if "Final Answer:" in output:
-                         break
-                 # Found the answer
-                 final_answer = await st.session_state.task
-                 final_answer = final_answer.replace("$", "\$")
-                 # st.success accepts md
-                 st.success(final_answer, icon="✅")
-                 st.balloons()
-                 st.session_state.task = None
-                 st.stop()
-
- if __name__ == "__main__":
-     loop = asyncio.new_event_loop()
-     loop.set_debug(enabled=False)
-     loop.run_until_complete(run())
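The deleted app above streams agent events through an `asyncio.Queue`: `runner.run()` is launched as a task that pushes outputs, and the UI loop awaits `outputq.get()` until it sees a `Final Answer:` marker. A stripped-down sketch of that producer/consumer pattern (the function names and the event strings here are illustrative, not taken from the AutoAgents code):

```python
import asyncio

async def producer(outputq: asyncio.Queue) -> str:
    # Stands in for ActionRunner.run(): emit intermediate events, then finish.
    for step in ("Thought: search", "Observation: result"):
        await outputq.put(step)
    await outputq.put("Final Answer: 42")
    return "42"

async def main() -> None:
    outputq: asyncio.Queue = asyncio.Queue()
    task = asyncio.create_task(producer(outputq))  # like st.session_state.task
    while True:
        output = await outputq.get()               # like the UI loop above
        print(output)
        if "Final Answer:" in output:
            break
    final_answer = await task                      # like `await st.session_state.task`
    print("final:", final_answer)

asyncio.run(main())
```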
spaces/BAAI/vid2vid-zero/Dockerfile DELETED
@@ -1,57 +0,0 @@
- FROM nvidia/cuda:11.7.1-cudnn8-devel-ubuntu22.04
- ENV DEBIAN_FRONTEND=noninteractive
- RUN apt-get update && \
-     apt-get upgrade -y && \
-     apt-get install -y --no-install-recommends \
-     git \
-     git-lfs \
-     wget \
-     curl \
-     # ffmpeg \
-     ffmpeg \
-     x264 \
-     # python build dependencies \
-     build-essential \
-     libssl-dev \
-     zlib1g-dev \
-     libbz2-dev \
-     libreadline-dev \
-     libsqlite3-dev \
-     libncursesw5-dev \
-     xz-utils \
-     tk-dev \
-     libxml2-dev \
-     libxmlsec1-dev \
-     libffi-dev \
-     liblzma-dev && \
-     apt-get clean && \
-     rm -rf /var/lib/apt/lists/*
-
- RUN useradd -m -u 1000 user
- USER user
- ENV HOME=/home/user \
-     PATH=/home/user/.local/bin:${PATH}
- WORKDIR ${HOME}/app
-
- RUN curl https://pyenv.run | bash
- ENV PATH=${HOME}/.pyenv/shims:${HOME}/.pyenv/bin:${PATH}
- ENV PYTHON_VERSION=3.10.9
- RUN pyenv install ${PYTHON_VERSION} && \
-     pyenv global ${PYTHON_VERSION} && \
-     pyenv rehash && \
-     pip install --no-cache-dir -U pip setuptools wheel
-
- RUN pip install --no-cache-dir -U torch==1.13.1 torchvision==0.14.1
- COPY --chown=1000 requirements.txt /tmp/requirements.txt
- RUN pip install --no-cache-dir -U -r /tmp/requirements.txt
-
- COPY --chown=1000 . ${HOME}/app
- # RUN cd Tune-A-Video && patch -p1 < ../patch
- ENV PYTHONPATH=${HOME}/app \
-     PYTHONUNBUFFERED=1 \
-     GRADIO_ALLOW_FLAGGING=never \
-     GRADIO_NUM_PORTS=1 \
-     GRADIO_SERVER_NAME=0.0.0.0 \
-     GRADIO_THEME=huggingface \
-     SYSTEM=spaces
- CMD ["python", "app.py"]
spaces/Bala2-03-2003/MygenvioceAI/app.py DELETED
@@ -1,164 +0,0 @@
- import os
- import re
- import requests
- import json
- import gradio as gr
- from langchain.chat_models import ChatOpenAI
- from langchain import LLMChain, PromptTemplate
- from langchain.memory import ConversationBufferMemory
-
- OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')
- PLAY_HT_API_KEY = os.getenv('PLAY_HT_API_KEY')
- PLAY_HT_USER_ID = os.getenv('PLAY_HT_USER_ID')
-
- PLAY_HT_VOICE_ID = os.getenv('PLAY_HT_VOICE_ID')
- play_ht_api_get_audio_url = "https://play.ht/api/v2/tts"
-
-
- template = """You are a helpful assistant to answer user queries.
- {chat_history}
- User: {user_message}
- Chatbot:"""
-
- prompt = PromptTemplate(
-     input_variables=["chat_history", "user_message"], template=template
- )
-
- memory = ConversationBufferMemory(memory_key="chat_history")
-
- llm_chain = LLMChain(
-     llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"),
-     prompt=prompt,
-     verbose=True,
-     memory=memory,
- )
-
- headers = {
-     "accept": "text/event-stream",
-     "content-type": "application/json",
-     "AUTHORIZATION": "Bearer " + PLAY_HT_API_KEY,
-     "X-USER-ID": PLAY_HT_USER_ID
- }
-
-
- def get_payload(text):
-     return {
-         "text": text,
-         "voice": PLAY_HT_VOICE_ID,
-         "quality": "medium",
-         "output_format": "mp3",
-         "speed": 1,
-         "sample_rate": 24000,
-         "seed": None,
-         "temperature": None
-     }
-
- def get_generated_audio(text):
-     payload = get_payload(text)
-     generated_response = {}
-     try:
-         response = requests.post(play_ht_api_get_audio_url, json=payload, headers=headers)
-         response.raise_for_status()
-         generated_response["type"] = 'SUCCESS'
-         generated_response["response"] = response.text
-     except requests.exceptions.RequestException as e:
-         generated_response["type"] = 'ERROR'
-         try:
-             response_text = json.loads(response.text)
-             if response_text['error_message']:
-                 generated_response["response"] = response_text['error_message']
-             else:
-                 generated_response["response"] = response.text
-         except Exception as e:
-             generated_response["response"] = response.text
-     except Exception as e:
-         generated_response["type"] = 'ERROR'
-         generated_response["response"] = response.text
-     return generated_response
-
- def extract_urls(text):
-     # Define the regex pattern for URLs
-     url_pattern = r'https?://(?:[-\w.]|(?:%[\da-fA-F]{2}))+[/\w\.-]*'
-
-     # Find all occurrences of URLs in the text
-     urls = re.findall(url_pattern, text)
-
-     return urls
-
- def get_audio_reply_for_question(text):
-     generated_audio_event = get_generated_audio(text)
-     # get_generated_audio returns events as one string; extract the URL from it
-     final_response = {
-         "audio_url": '',
-         "message": ''
-     }
-     if generated_audio_event["type"] == 'SUCCESS':
-         audio_urls = extract_urls(generated_audio_event["response"])
-         if len(audio_urls) == 0:
-             final_response['message'] = "No audio file link found in generated event"
-         else:
-             final_response['audio_url'] = audio_urls[-1]
-     else:
-         final_response['message'] = generated_audio_event['response']
-     return final_response
-
- def download_url(url):
-     try:
-         # Send a GET request to the URL to fetch the content
-         final_response = {
-             'content': '',
-             'error': ''
-         }
-         response = requests.get(url)
-         # Check if the request was successful (status code 200)
-         if response.status_code == 200:
-             final_response['content'] = response.content
-         else:
-             final_response['error'] = f"Failed to download the URL. Status code: {response.status_code}"
-     except Exception as e:
-         final_response['error'] = f"Failed to download the URL. Error: {e}"
-     return final_response
-
- def get_filename_from_url(url):
-     # Use os.path.basename() to extract the file name from the URL
-     file_name = os.path.basename(url)
-     return file_name
-
- def get_text_response(user_message):
-     response = llm_chain.predict(user_message=user_message)
-     return response
-
- def get_text_response_and_audio_response(user_message):
-     response = get_text_response(user_message)  # Getting the reply from OpenAI
-     audio_reply_for_question_response = get_audio_reply_for_question(response)
-     final_response = {
-         'output_file_path': '',
-         'message': ''
-     }
-     audio_url = audio_reply_for_question_response['audio_url']
-     if audio_url:
-         output_file_path = get_filename_from_url(audio_url)
-         download_url_response = download_url(audio_url)
-         audio_content = download_url_response['content']
-         if audio_content:
-             with open(output_file_path, "wb") as audio_file:
-                 audio_file.write(audio_content)
-             final_response['output_file_path'] = output_file_path
-         else:
-             final_response['message'] = download_url_response['error']
-     else:
-         final_response['message'] = audio_reply_for_question_response['message']
-     return final_response
-
- def chat_bot_response(message, history):
-     text_and_audio_response = get_text_response_and_audio_response(message)
-     output_file_path = text_and_audio_response['output_file_path']
-     if output_file_path:
-         return (text_and_audio_response['output_file_path'],)
-     else:
-         return text_and_audio_response['message']
-
- demo = gr.ChatInterface(chat_bot_response, examples=["How are you doing?", "What are your interests?", "Which places do you like to visit?"])
-
- if __name__ == "__main__":
-     demo.launch()  # To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
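For context, the request flow this app implements is: LLM reply, then a POST to Play.ht's `/api/v2/tts`, then scraping the returned event stream for the last URL (the finished MP3), then downloading it. A minimal sketch of just the text-to-speech half, assuming the same endpoint, headers, and environment variables used in the file above:

```python
import os
import re
import requests

# Same endpoint and auth scheme as the app above; the env vars are assumed to be set.
url = "https://play.ht/api/v2/tts"
headers = {
    "accept": "text/event-stream",
    "content-type": "application/json",
    "AUTHORIZATION": "Bearer " + os.environ["PLAY_HT_API_KEY"],
    "X-USER-ID": os.environ["PLAY_HT_USER_ID"],
}
payload = {"text": "Hello!", "voice": os.environ["PLAY_HT_VOICE_ID"],
           "output_format": "mp3", "quality": "medium", "speed": 1, "sample_rate": 24000}

resp = requests.post(url, json=payload, headers=headers)
resp.raise_for_status()

# The event stream is plain text; the last URL in it points at the finished MP3.
urls = re.findall(r'https?://(?:[-\w.]|(?:%[\da-fA-F]{2}))+[/\w\.-]*', resp.text)
if urls:
    audio = requests.get(urls[-1])
    open(os.path.basename(urls[-1]), "wb").write(audio.content)
```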
spaces/BartPoint/VoiceChange/app_multi.py DELETED
@@ -1,469 +0,0 @@
- from typing import Union
-
- from argparse import ArgumentParser
-
- import asyncio
- import json
- import hashlib
- from os import path, getenv
-
- import gradio as gr
-
- import torch
-
- import numpy as np
- import librosa
-
- import edge_tts
-
- import config
- import util
- from infer_pack.models import (
-     SynthesizerTrnMs768NSFsid,
-     SynthesizerTrnMs768NSFsid_nono
- )
- from vc_infer_pipeline import VC
-
- # Reference: https://huggingface.co/spaces/zomehwh/rvc-models/blob/main/app.py#L21  # noqa
- in_hf_space = getenv('SYSTEM') == 'spaces'
-
- # Argument parsing
- arg_parser = ArgumentParser()
- arg_parser.add_argument(
-     '--hubert',
-     default=getenv('RVC_HUBERT', 'hubert_base.pt'),
-     help='path to hubert base model (default: hubert_base.pt)'
- )
- arg_parser.add_argument(
-     '--config',
-     default=getenv('RVC_MULTI_CFG', 'multi_config.json'),
-     help='path to config file (default: multi_config.json)'
- )
- arg_parser.add_argument(
-     '--api',
-     action='store_true',
-     help='enable api endpoint'
- )
- arg_parser.add_argument(
-     '--cache-examples',
-     action='store_true',
-     help='enable example caching, please remember delete gradio_cached_examples folder when example config has been modified'  # noqa
- )
- args = arg_parser.parse_args()
-
- app_css = '''
- #model_info img {
-     max-width: 100px;
-     max-height: 100px;
-     float: right;
- }
-
- #model_info p {
-     margin: unset;
- }
- '''
-
- app = gr.Blocks(
-     theme=gr.themes.Soft(primary_hue="orange", secondary_hue="slate"),
-     css=app_css,
-     analytics_enabled=False
- )
-
- # Load hubert model
- hubert_model = util.load_hubert_model(config.device, args.hubert)
- hubert_model.eval()
-
- # Load models
- multi_cfg = json.load(open(args.config, 'r'))
- loaded_models = []
-
- for model_name in multi_cfg.get('models'):
-     print(f'Loading model: {model_name}')
-
-     # Load model info
-     model_info = json.load(
-         open(path.join('model', model_name, 'config.json'), 'r')
-     )
-
-     # Load RVC checkpoint
-     cpt = torch.load(
-         path.join('model', model_name, model_info['model']),
-         map_location='cpu'
-     )
-     tgt_sr = cpt['config'][-1]
-     cpt['config'][-3] = cpt['weight']['emb_g.weight'].shape[0]  # n_spk
-
-     if_f0 = cpt.get('f0', 1)
-     net_g: Union[SynthesizerTrnMs768NSFsid, SynthesizerTrnMs768NSFsid_nono]
-     if if_f0 == 1:
-         net_g = SynthesizerTrnMs768NSFsid(
-             *cpt['config'],
-             is_half=util.is_half(config.device)
-         )
-     else:
-         net_g = SynthesizerTrnMs768NSFsid_nono(*cpt['config'])
-
-     del net_g.enc_q
-
-     # According to original code, this thing seems necessary.
-     print(net_g.load_state_dict(cpt['weight'], strict=False))
-
-     net_g.eval().to(config.device)
-     net_g = net_g.half() if util.is_half(config.device) else net_g.float()
-
-     vc = VC(tgt_sr, config)
-
-     loaded_models.append(dict(
-         name=model_name,
-         metadata=model_info,
-         vc=vc,
-         net_g=net_g,
-         if_f0=if_f0,
-         target_sr=tgt_sr
-     ))
-
- print(f'Models loaded: {len(loaded_models)}')
-
- # Edge TTS speakers
- tts_speakers_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices())  # noqa
-
-
- # https://github.com/fumiama/Retrieval-based-Voice-Conversion-WebUI/blob/main/infer-web.py#L118  # noqa
- def vc_func(
-     input_audio, model_index, pitch_adjust, f0_method, feat_ratio,
-     filter_radius, rms_mix_rate, resample_option
- ):
-     if input_audio is None:
-         return (None, 'Please provide input audio.')
-
-     if model_index is None:
-         return (None, 'Please select a model.')
-
-     model = loaded_models[model_index]
-
-     # Reference: so-vits
-     (audio_samp, audio_npy) = input_audio
-
-     # https://huggingface.co/spaces/zomehwh/rvc-models/blob/main/app.py#L49
-     # Can be change well, we will see
-     if (audio_npy.shape[0] / audio_samp) > 320 and in_hf_space:
-         return (None, 'Input audio is longer than 60 secs.')
-
-     # Bloody hell: https://stackoverflow.com/questions/26921836/
-     if audio_npy.dtype != np.float32:  # :thonk:
-         audio_npy = (
-             audio_npy / np.iinfo(audio_npy.dtype).max
-         ).astype(np.float32)
-
-     if len(audio_npy.shape) > 1:
-         audio_npy = librosa.to_mono(audio_npy.transpose(1, 0))
-
-     if audio_samp != 16000:
-         audio_npy = librosa.resample(
-             audio_npy,
-             orig_sr=audio_samp,
-             target_sr=16000
-         )
-
-     pitch_int = int(pitch_adjust)
-
-     resample = (
-         0 if resample_option == 'Disable resampling'
-         else int(resample_option)
-     )
-
-     times = [0, 0, 0]
-
-     checksum = hashlib.sha512()
-     checksum.update(audio_npy.tobytes())
-
-     output_audio = model['vc'].pipeline(
-         hubert_model,
-         model['net_g'],
-         model['metadata'].get('speaker_id', 0),
-         audio_npy,
-         checksum.hexdigest(),
-         times,
-         pitch_int,
-         f0_method,
-         path.join('model', model['name'], model['metadata']['feat_index']),
-         feat_ratio,
-         model['if_f0'],
-         filter_radius,
-         model['target_sr'],
-         resample,
-         rms_mix_rate,
-         'v2'
-     )
-
-     out_sr = (
-         resample if resample >= 16000 and model['target_sr'] != resample
-         else model['target_sr']
-     )
-
-     print(f'npy: {times[0]}s, f0: {times[1]}s, infer: {times[2]}s')
-     return ((out_sr, output_audio), 'Success')
-
-
- async def edge_tts_vc_func(
-     input_text, model_index, tts_speaker, pitch_adjust, f0_method, feat_ratio,
-     filter_radius, rms_mix_rate, resample_option
- ):
-     if input_text is None:
-         return (None, 'Please provide TTS text.')
-
-     if tts_speaker is None:
-         return (None, 'Please select TTS speaker.')
-
-     if model_index is None:
-         return (None, 'Please select a model.')
-
-     speaker = tts_speakers_list[tts_speaker]['ShortName']
-     (tts_np, tts_sr) = await util.call_edge_tts(speaker, input_text)
-     return vc_func(
-         (tts_sr, tts_np),
-         model_index,
-         pitch_adjust,
-         f0_method,
-         feat_ratio,
-         filter_radius,
-         rms_mix_rate,
-         resample_option
-     )
-
-
- def update_model_info(model_index):
-     if model_index is None:
-         return str(
-             '### Model info\n'
-             'Please select a model from dropdown above.'
-         )
-
-     model = loaded_models[model_index]
-     model_icon = model['metadata'].get('icon', '')
-
-     return str(
-         '### Model info\n'
-         '![model icon]({icon})'
-         '**{name}**\n\n'
-         'Author: {author}\n\n'
-         'Source: {source}\n\n'
-         '{note}'
-     ).format(
-         name=model['metadata'].get('name'),
-         author=model['metadata'].get('author', 'Anonymous'),
-         source=model['metadata'].get('source', 'Unknown'),
-         note=model['metadata'].get('note', ''),
-         icon=(
-             model_icon
-             if model_icon.startswith(('http://', 'https://'))
-             else '/file/model/%s/%s' % (model['name'], model_icon)
-         )
-     )
-
-
- def _example_vc(
-     input_audio, model_index, pitch_adjust, f0_method, feat_ratio,
-     filter_radius, rms_mix_rate, resample_option
- ):
-     (audio, message) = vc_func(
-         input_audio, model_index, pitch_adjust, f0_method, feat_ratio,
-         filter_radius, rms_mix_rate, resample_option
-     )
-     return (
-         audio,
-         message,
-         update_model_info(model_index)
-     )
-
-
- async def _example_edge_tts(
-     input_text, model_index, tts_speaker, pitch_adjust, f0_method, feat_ratio,
-     filter_radius, rms_mix_rate, resample_option
- ):
-     (audio, message) = await edge_tts_vc_func(
-         input_text, model_index, tts_speaker, pitch_adjust, f0_method,
-         feat_ratio, filter_radius, rms_mix_rate, resample_option
-     )
-     return (
-         audio,
-         message,
-         update_model_info(model_index)
-     )
-
-
- with app:
-     gr.Markdown(
-         '## A simplistic Web interface\n'
-         'RVC interface, project based on [RVC-WebUI](https://github.com/fumiama/Retrieval-based-Voice-Conversion-WebUI)'  # thx noqa
-         'A lot of inspiration from what\'s already out there, including [zomehwh/rvc-models](https://huggingface.co/spaces/zomehwh/rvc-models) & [DJQmUKV/rvc-inference](https://huggingface.co/spaces/DJQmUKV/rvc-inference).\n '  # thx noqa
-     )
-
-     with gr.Row():
-         with gr.Column():
-             with gr.Tab('Audio conversion'):
-                 input_audio = gr.Audio(label='Input audio')
-
-                 vc_convert_btn = gr.Button('Convert', variant='primary')
-
-             with gr.Tab('TTS conversion'):
-                 tts_input = gr.TextArea(
-                     label='TTS input text'
-                 )
-                 tts_speaker = gr.Dropdown(
-                     [
-                         '%s (%s)' % (
-                             s['FriendlyName'],
-                             s['Gender']
-                         )
-                         for s in tts_speakers_list
-                     ],
-                     label='TTS speaker',
-                     type='index'
-                 )
-
-                 tts_convert_btn = gr.Button('Convert', variant='primary')
-
-             pitch_adjust = gr.Slider(
-                 label='Pitch',
-                 minimum=-24,
-                 maximum=24,
-                 step=1,
-                 value=0
-             )
-             f0_method = gr.Radio(
-                 label='f0 methods',
-                 choices=['pm', 'harvest', 'crepe'],
-                 value='pm',
-                 interactive=True
-             )
-
-             with gr.Accordion('Advanced options', open=False):
-                 feat_ratio = gr.Slider(
-                     label='Feature ratio',
-                     minimum=0,
-                     maximum=1,
-                     step=0.1,
-                     value=0.6
-                 )
-                 filter_radius = gr.Slider(
-                     label='Filter radius',
-                     minimum=0,
-                     maximum=7,
-                     step=1,
-                     value=3
-                 )
-                 rms_mix_rate = gr.Slider(
-                     label='Volume envelope mix rate',
-                     minimum=0,
-                     maximum=1,
-                     step=0.1,
-                     value=1
-                 )
-                 resample_rate = gr.Dropdown(
-                     [
-                         'Disable resampling',
-                         '16000',
-                         '22050',
-                         '44100',
-                         '48000'
-                     ],
-                     label='Resample rate',
-                     value='Disable resampling'
-                 )
-
-         with gr.Column():
-             # Model select
-             model_index = gr.Dropdown(
-                 [
-                     '%s - %s' % (
-                         m['metadata'].get('source', 'Unknown'),
-                         m['metadata'].get('name')
-                     )
-                     for m in loaded_models
-                 ],
-                 label='Model',
-                 type='index'
-             )
-
-             # Model info
-             with gr.Box():
-                 model_info = gr.Markdown(
-                     '### Model info\n'
-                     'Please select a model from dropdown above.',
-                     elem_id='model_info'
-                 )
-
-             output_audio = gr.Audio(label='Output audio')
-             output_msg = gr.Textbox(label='Output message')
-
-     multi_examples = multi_cfg.get('examples')
-     if (
-         multi_examples and
-         multi_examples.get('vc') and multi_examples.get('tts_vc')
-     ):
-         with gr.Accordion('Sweet sweet examples', open=False):
-             with gr.Row():
-                 # VC Example
-                 if multi_examples.get('vc'):
-                     gr.Examples(
-                         label='Audio conversion examples',
-                         examples=multi_examples.get('vc'),
-                         inputs=[
-                             input_audio, model_index, pitch_adjust, f0_method,
-                             feat_ratio
-                         ],
-                         outputs=[output_audio, output_msg, model_info],
-                         fn=_example_vc,
-                         cache_examples=args.cache_examples,
-                         run_on_click=args.cache_examples
-                     )
-
-                 # Edge TTS Example
-                 if multi_examples.get('tts_vc'):
-                     gr.Examples(
-                         label='TTS conversion examples',
-                         examples=multi_examples.get('tts_vc'),
-                         inputs=[
-                             tts_input, model_index, tts_speaker, pitch_adjust,
-                             f0_method, feat_ratio
-                         ],
-                         outputs=[output_audio, output_msg, model_info],
-                         fn=_example_edge_tts,
-                         cache_examples=args.cache_examples,
-                         run_on_click=args.cache_examples
-                     )
-
-     vc_convert_btn.click(
-         vc_func,
-         [
-             input_audio, model_index, pitch_adjust, f0_method, feat_ratio,
-             filter_radius, rms_mix_rate, resample_rate
-         ],
-         [output_audio, output_msg],
-         api_name='audio_conversion'
-     )
-
-     tts_convert_btn.click(
-         edge_tts_vc_func,
-         [
-             tts_input, model_index, tts_speaker, pitch_adjust, f0_method,
-             feat_ratio, filter_radius, rms_mix_rate, resample_rate
-         ],
-         [output_audio, output_msg],
-         api_name='tts_conversion'
-     )
-
-     model_index.change(
-         update_model_info,
-         inputs=[model_index],
-         outputs=[model_info],
-         show_progress=False,
-         queue=False
-     )
-
- app.queue(
-     concurrency_count=1,
-     max_size=20,
-     api_open=args.api
- ).launch()
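A reusable detail in `vc_func` above is how it normalizes whatever `gr.Audio` hands back before inference: integer PCM is scaled into float32, stereo is folded to mono, and the signal is resampled to the 16 kHz that the HuBERT front end expects. A standalone sketch of that preprocessing, using the same numpy and librosa calls as the app:

```python
import numpy as np
import librosa

def normalize_gradio_audio(audio_samp: int, audio_npy: np.ndarray) -> np.ndarray:
    """Convert a (sample_rate, samples) pair from gr.Audio into mono float32 at 16 kHz."""
    if audio_npy.dtype != np.float32:
        # Integer PCM -> [-1, 1] float32, scaled by the dtype's max value.
        audio_npy = (audio_npy / np.iinfo(audio_npy.dtype).max).astype(np.float32)
    if audio_npy.ndim > 1:
        # Gradio returns (samples, channels); librosa.to_mono wants channels first.
        audio_npy = librosa.to_mono(audio_npy.transpose(1, 0))
    if audio_samp != 16000:
        audio_npy = librosa.resample(audio_npy, orig_sr=audio_samp, target_sr=16000)
    return audio_npy
```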
spaces/Benson/text-generation/Examples/Aparcamiento De Coches Multijugador Sudfrica Edicin Apk Descargar.md DELETED
@@ -1,134 +0,0 @@
-
- <h1>Car Parking Multiplayer South Africa Edition APK Download: A Guide for Android Users</h1>
- <p>If you are looking for a realistic and fun car parking game that offers more than just parking, you may want to check out Car Parking Multiplayer. And if you are in South Africa or want to experience South African car culture, you may want to try the Car Parking Multiplayer South Africa Edition APK. In this article, we will tell you what Car Parking Multiplayer is, what makes the South Africa Edition different, how to download and install it on your Android device, and some tips and tricks for playing it. </p>
- <h2>car parking multiplayer south africa edition apk download</h2><br /><p><b><b>DOWNLOAD</b> === <a href="https://bltlly.com/2v6IE9">https://bltlly.com/2v6IE9</a></b></p><br /><br />
- <h2>What is Car Parking Multiplayer? </h2>
- <p>Car Parking Multiplayer is a game that can fool you with its rather misleading name. It is much more than just parking your car: it is an open-world experience where you can drive freely and, yes, still work on that parking if you want. You can even jump out of your car and walk around. There are different areas to explore in the game, each like its own open world. You can choose to play in single-player mode or in online mode if you want a more chaotic (in a fun way) scene. </p>
- <h3>Features of Car Parking Multiplayer</h3>
- <p>Car Parking Multiplayer has the following features:</p>
- <ul>
- <li>Open-world multiplayer mode <ul>
- <li>Free walking</li>
- <li>Free open world with real gas stations and car services</li>
- <li>Compete against real players in multiplayer races</li>
- <li>Exchange cars with real players</li>
- <li>Thousands of real players every day</li>
- <li>Friend list</li>
- <li>Voice chat</li>
- <li>Police mode</li>
- </ul>
- </li>
- <li>Car customization <ul>
- <li>Adjustable suspension, wheel angle and more</li>
- <li>Engine tuning: engine swap, turbo, gearbox and exhaust</li>
- <li>Visual auto tuning: dynamic vinyls, car body parts</li>
- </ul>
- </li>
- <li>High-quality open world <ul>
- <li>100 cars with real interiors</li>
- <li>16 player skins</li>
- <li>Buildings with interiors</li>
- </ul>
- </li>
- <li>Interesting gameplay <ul>
- <li>82 real-life parking and driving challenges</li>
- <li>Different vehicles: tow truck, pickup, trucks, sports and classic cars</li>
- </ul>
- </li>
- </ul>
- <h3>Reviews of Car Parking Multiplayer</h3>
- <p>Car Parking Multiplayer has received mostly positive reviews from users on the Google Play Store and the App Store. It has a rating of 4.4 out of 5 stars on the Google Play Store and 4.3 out of 5 stars on the App Store. Here are some of the user comments:</p>
- <blockquote><p>"Amazing game! There are no bugs or lags in this game (depending on the device you use). I love the graphics and the cars. The cars are so realistic and the sounds are amazing. I love how you can customize your car and make it look cool. The multiplayer mode is amazing. You can chat with other players and compete with them. You can also join a clan or create your own clan. This game is very fun and addictive. I recommend this game to everyone who loves cars and parking games." </p></blockquote>
- <blockquote><p>"This game is very good but it needs some improvements, like adding more cars, more maps, more customizations, more game modes, more challenges, etc. Also, the game crashes sometimes and the controls are not very smooth. The graphics are nice, but they could be better. The game is fun to play with friends, but it gets boring after a while. I hope the developers update the game soon and make it more enjoyable." </p></blockquote>
-
- <h2>What is Car Parking Multiplayer South Africa Edition? </h2>
- <p>Car Parking Multiplayer South Africa Edition is a modified version of Car Parking Multiplayer that is specially designed for South African users or fans of South African car culture. It is not an official version of the game, but a fan-made APK file that can be downloaded and installed on Android devices. </p>
- <h3>Differences between Car Parking Multiplayer and Car Parking Multiplayer South Africa Edition</h3>
- <p>Car Parking Multiplayer South Africa Edition has some differences from the original Car Parking Multiplayer game, such as:</p>
- <ul>
- <li>More cars that are popular in South Africa, such as the BMW E30, VW Golf Mk1, Toyota Corolla, Nissan 1400, etc.</li>
- <li>More customizations that reflect South African car culture, such as spinning rims, loud exhausts, stickers, flags, etc.</li>
- <li>More maps based on real locations in South Africa, such as Cape Town, Johannesburg, Durban, etc.</li>
- <li>More music inspired by the South African music scene, such as kwaito, gqom, amapiano, etc.</li>
- <li>More languages spoken in South Africa, such as Afrikaans, Zulu, Xhosa, etc.</li>
- </ul>
- <h3>Benefits of Car Parking Multiplayer South Africa Edition</h3>
- <p>Car Parking Multiplayer South Africa Edition has some benefits for users who want to enjoy the game with a South African twist, such as:</p>
- <ul>
- <li>More variety and diversity in terms of cars, customizations, maps, music and languages</li>
- <li>More fun and excitement in terms of gameplay, graphics, sound effects and interactions</li>
- <li>More connection and community with other players who share the same interest in and passion for South African car culture</li>
- <li>More support and updates from developers who are dedicated to improving the game and adding new features</li>
- </ul>
- <h2>How to download and install the Car Parking Multiplayer South Africa Edition APK on Android devices? </h2>
-
- <h3>Requirements for downloading and installing the Car Parking Multiplayer South Africa Edition APK</h3>
- <p>To download and install the Car Parking Multiplayer South Africa Edition APK on your Android device, you need to have:</p>
-
- <ul>
- <li>An Android device running Android 4.1 or higher</li>
- <li>A stable Internet connection</li>
- <li>A file manager app that can access the APK file</li>
- <li>A sufficient amount of storage space on your device</li>
- <li>Permission to install apps from unknown sources in your device settings</li>
- </ul>
- <h3>Steps to download and install the Car Parking Multiplayer South Africa Edition APK</h3>
- <p>To download and install the Car Parking Multiplayer South Africa Edition APK on your Android device, you need to follow these steps:</p>
- <ol>
- <li>Go to a trusted website that provides the link to download the Car Parking Multiplayer South Africa Edition APK. For example, you can visit [this website] to get the latest version of the APK file. </li>
- <li>Click the download button and wait for the APK file to download to your device. </li>
- <li>Once the download is complete, locate the APK file on your device using a file manager app. You can find it in the Downloads folder or in whichever folder you saved it to. </li>
- <li>Tap the APK file and select Install. You may see a warning message that says "For your security, your phone is not allowed to install unknown apps from this source". If you see this message, go to your device settings and enable the option to install apps from unknown sources. This option may be under Security, Privacy or Apps depending on your device model and Android version. </li>
-
- <li>Wait for the installation process to finish. It may take a few minutes depending on your device's performance and Internet speed. </li>
- <li>Once the installation is done, you can open the game and enjoy Car Parking Multiplayer South Africa Edition on your Android device. </li>
- </ol>
- <h2>Tips and tricks for playing Car Parking Multiplayer South Africa Edition</h2>
- <p>Now that you have downloaded and installed Car Parking Multiplayer South Africa Edition on your Android device, you may want to know some tips and tricks for playing it. Here are some of them:</p>
- <h3>How to select a car and a player</h3>
- <p>To select a car and a player in Car Parking Multiplayer South Africa Edition, you need to do the following:</p>
- <ul>
- <li>Tap the menu icon in the top-left corner of the screen. </li>
- <li>Tap Garage to see the list of cars you own or can buy. </li>
- <li>Tap the car you want to use and then tap Select.</li>
- <li>Tap Back to return to the menu.</li>
- <li>Tap Profile to see the list of players you can choose from. </li>
- <li>Tap the player you want to use and then tap Select.</li>
- <li>Tap Back to return to the menu.</li>
- <li>Tap Play to start playing the game with the selected car and player. </li>
- </ul>
- <h3>How to tune your car and adjust its gear ratio</h3>
- <p>To tune your car and adjust its gear ratio in Car Parking Multiplayer South Africa Edition, you need to do the following:</p>
- <ul>
- <li>Tap the menu icon in the top-left corner of the screen. </li>
- <li>Tap Garage to see the list of cars you own or can buy. </li>
- <li>Tap the car you want to tune and then tap Tune.</li>
- <li>You will see four tabs: Engine, Suspension, Wheels and Body. You can swipe left or right to switch between them. </li>
-
- <li>To adjust your gear ratio, tap Gearbox in the Engine tab. You will see a graph that shows how your speed changes with different gears. You can drag the points on the graph to change the gear ratio for each gear. You can also tap Auto or Manual to switch between automatic and manual transmission modes. </li>
- <li>Once you are done tuning your car, tap Save to apply the changes. </li>
- </ul>
- <h3>How to drift, donut and burnout</h3>
- <p>To drift, donut and burnout in Car Parking Multiplayer South Africa Edition, you need to do the following:</p>
- <ul>
- <li>To drift, you need to use a combination of steering, throttle, brake and handbrake. You can use tilt or touch controls for steering. To start drifting, accelerate and turn sharply into a corner. Then apply the handbrake briefly to make your rear wheels lose traction. After that, balance the throttle and steering to maintain the drift angle and direction. You can also use the handbrake again if you need to adjust your drift. To finish the drift, release the throttle and steering and straighten the car. </li>
- <li>To donut, you need to use a combination of steering, throttle and handbrake. You can use tilt or touch controls for steering. To start a donut, accelerate and turn sharply in one direction. Then apply the handbrake to make your car spin around its center. After that, keep the throttle and steering steady to maintain the donut circle. You can also change the direction of your donut by changing the steering. To finish the donut, release the throttle and handbrake and straighten your car. </li>
- </ul>
- <h2>Conclusion</h2>
- <p>Car Parking Multiplayer South Africa Edition is a game that offers a realistic and fun parking experience with a South African twist. It has more cars, customizations, maps, music and languages that reflect South African car culture. It also has an open-world mode, a car tuning mode, and a multiplayer mode that make the game more enjoyable and exciting. You can download and install the Car Parking Multiplayer South Africa Edition APK on your Android device by following the steps in this article. You can also use the tips and tricks from this article to improve your game and have more fun. </p>
- <h2>FAQs</h2>
- <p>Here are some of the most frequently asked questions about Car Parking Multiplayer South Africa Edition:</p>
- <h3>Is Car Parking Multiplayer South Africa Edition free? </h3>
- <p>Yes, Car Parking Multiplayer South Africa Edition is free to download and play. However, it may contain some in-app purchases that require real money. </p>
- <h3>Is Car Parking Multiplayer South Africa Edition safe? </h3>
- <p>Yes, Car Parking Multiplayer South Africa Edition is safe as long as you download it from a trusted website. However, you should always be careful when downloading and installing apps from unknown sources, as they may contain viruses or malware that can harm your device. </p>
- <h3>Is Car Parking Multiplayer South Africa Edition compatible with my device? </h3>
- <p>Car Parking Multiplayer South Africa Edition is compatible with most Android devices running Android 4.1 or higher. However, some devices may have different specifications or performance issues that can affect the quality or functionality of the game. </p>
- <h3>How can I update Car Parking Multiplayer South Africa Edition? </h3>
-
- <h3>How can I contact the developers of Car Parking Multiplayer South Africa Edition? </h3>
- <p>To contact the developers of Car Parking Multiplayer South Africa Edition, you can visit their Facebook page or their YouTube channel and leave a comment or a message. </p>
spaces/Benson/text-generation/Examples/Asfalto Nitro 9 Leyendas Mod Apk.md DELETED
@@ -1,76 +0,0 @@
1
-
2
- <h1>Asphalt Nitro 9 Legends Mod Apk: A Guide for Racing Fans</h1>
3
- <p>If you are a fan of racing games, you may have heard of Asphalt Nitro 9 Legends, one of the most popular and exciting games in the genre. It puts you behind the wheel of real high-end cars from renowned, legendary manufacturers such as Ferrari, Porsche, Lamborghini, and W Motors, among many other international brands. You can drive, boost, and perform stunts in dynamic real-life locations in single-player or multiplayer mode.</p>
4
- <p>But what if you want to enjoy the game without limits or restrictions? What if you want unlimited money and tokens, every car and track unlocked, full customization and upgrades, and races against other players online or offline? There is a way: download and install Asphalt Nitro 9 Legends Mod Apk, a modified version of the game that gives you access to all of these features and more.</p>
5
- <h2>asphalt nitro 9 legends mod apk</h2><br /><p><b><b>Download File</b> ===== <a href="https://bltlly.com/2v6LmQ">https://bltlly.com/2v6LmQ</a></b></p><br /><br />
6
- <p>This article walks you through the features, download process, installation steps, tips and tricks, and pros and cons of Asphalt Nitro 9 Legends Mod Apk. Read on to learn more.</p>
7
- <h2>Features of Asphalt Nitro 9 Legends Mod Apk</h2>
8
- <p>Asphalt Nitro 9 Legends Mod Apk is a modified version of the original game that gives you unlimited resources and features that enhance your gaming experience. Here are some of the features you can enjoy with this mod apk:</p>
9
- <ul>
10
- <li><b>Unlimited money and tokens:</b> With this mod apk, you will never run out of money or tokens to buy new cars, tracks, upgrades, or items. You can spend as much as you like without worrying about your budget.</li>
11
-
12
- <li><b>Customize and upgrade your vehicles:</b> With this mod apk, you can customize your vehicles to your liking. You can change their color, paint, decals, wheels, and more. You can also boost their performance by upgrading the engine, transmission, suspension, brakes, nitro boost, and so on.</li>
13
- <li><b>Enjoy realistic graphics and sound effects:</b> With this mod apk, you can enjoy the game's stunning graphics and sound. The game uses a realistic physics engine that simulates how the cars move and behave, and it features high-quality sound effects that make you feel as if you were in a real race.</li>
14
- <li><b>Race other players online or offline:</b> With this mod apk, you can compete with other players online or offline. Join online multiplayer races and challenge your friends or players from around the world, or play offline in career mode and complete various missions and challenges.</li>
15
- </ul>
16
- <h2>How to Download and Install Asphalt Nitro 9 Legends Mod Apk</h2>
17
- <p>To download and install Asphalt Nitro 9 Legends Mod Apk, follow these simple steps (a scripted install sketch appears after the list):</p>
18
- <ol>
19
- <li><b>Step 1: Download the mod apk file from a trusted source.</b> Many websites offer the mod apk file for Asphalt Nitro 9 Legends, but not all of them are safe and reliable. Make sure the file you download is free of viruses, malware, or any other harmful content. You can use this link to download the mod apk file safely.</li>
20
- <li><b>Step 2: Enable unknown sources in your device settings.</b> Before you can install the mod apk file, you need to enable unknown sources in your device settings, which allows you to install apps from sources other than the Google Play Store. To do this, go to your device settings, then Security, then Unknown Sources, and turn it on.</li>
21
-
22
- <li><b>Step 4: Enjoy the game with unlimited resources and features.</b> Now that you have installed Asphalt Nitro 9 Legends Mod Apk, you can enjoy the game with unlimited money, tokens, cars, tracks, customization options, and more. You can also race other players online or offline and have fun.</li>
23
- </ol>
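For readers who sideload over USB instead of tapping through the phone, the install itself can also be scripted. This is a minimal sketch assuming the Android SDK's adb tool is on your PATH and USB debugging is enabled; the apk filename below is a placeholder, not a real download:

```python
import subprocess

APK = "asphalt-nitro-9-legends-mod.apk"  # hypothetical filename

def sideload(apk_path: str) -> None:
    # `adb devices` confirms a handset is connected; `adb install -r`
    # installs the package, replacing any existing copy.
    subprocess.run(["adb", "devices"], check=True)
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    sideload(APK)
```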
24
- <h2>Tips and Tricks for Playing Asphalt Nitro 9 Legends Mod Apk</h2>
25
- <p>Asphalt Nitro 9 Legends Mod Apk is a fun, exciting game that will keep you entertained for hours. If you want to sharpen your skills and performance, follow these tips and tricks:</p>
26
- <ul>
27
- <li><b>Choose the right car for each track and mode.</b> The game offers a variety of cars from different brands and categories, but not all of them suit every track or mode. Weigh each car's speed, acceleration, handling, nitro, and other factors before choosing one. For example, on a track with tight corners you may want a car with good handling and nitro; on a straight track, one with high top speed and acceleration.</li>
28
- <li><b>Use nitro boost wisely and strategically.</b> Nitro boost is one of the game's most important features, as it can give you an edge over your opponents, but don't burn it all at once or waste it at pointless moments. Save it for crucial situations such as overtaking rivals, escaping obstacles or collisions, or reaching the finish line, and combine it with stunts and drifts to earn more points and rewards.</li>
29
-
30
- <li><b>Avoid obstacles and collisions with other cars.</b> The game features many hazards that can slow your progress or damage your car: traffic cars, police cars, helicopters, barricades, spikes, and so on. Avoid them as much as possible, and avoid collisions with other cars, especially your rivals, since they can knock you off the track or cost you your position. Try to dodge or escape them, or use your nitro boost to pull away.</li>
31
- <li><b>Use different camera angles and controls to suit your preferences.</b> The game offers first-person, third-person, and top-down views, and tilt, touch, or tap controls, depending on what feels most comfortable. You can change these settings in the game's options menu.</li>
32
- </ul>
33
- <h2>Pros and Cons of Asphalt Nitro 9 Legends Mod Apk</h2>
34
- <p>Asphalt Nitro 9 Legends Mod Apk offers many benefits and advantages for racing fans, but it also has drawbacks you should keep in mind. Here are some of its pros and cons:</p>
35
- <table>
36
- <tr>
37
- <th>Pros</th>
38
- <th>Cons</th>
39
- </tr>
40
- <tr>
41
- <td><ul>
42
- <li>Fun, addictive, and challenging gameplay</li>
43
- <li>Unlimited money and tokens</li>
44
- <li>All cars and tracks unlocked</li>
45
- <li>Customize and upgrade your vehicles</li>
46
- <li>Realistic graphics and sound effects</li>
47
- <li>Race other players online or offline</li>
48
- </ul></td>
49
- <td><ul>
50
- <li>May lag or crash on some devices</li>
51
- <li>May not be compatible with some devices or versions</li>
52
- <li>May not be updated regularly or frequently</li>
53
- <li>May not be supported by the official developers or publishers</li>
54
-
55
- <li>May expose your device to security risks or threats</li>
56
- </ul></td>
57
- </tr>
58
- </table>
59
- <h2>Conclusion</h2>
60
- <p>In conclusion, Asphalt Nitro 9 Legends Mod Apk is a fantastic game that will give you hours of fun and excitement. It is a modified version of the original game that provides unlimited resources and features to enhance your gaming experience. You can download and install it easily and safely by following the steps in this article, and use the tips and tricks we shared to improve your skills and performance. That said, weigh the pros and cons of Asphalt Nitro 9 Legends Mod Apk and decide for yourself whether it is worth playing.</p>
61
- <p>If you are a racing fan, we recommend trying Asphalt Nitro 9 Legends Mod Apk and seeing for yourself how good it is. You won't regret it.</p>
62
- <h2>Frequently Asked Questions (FAQs)</h2>
63
- <p>Here are some of the most common questions and answers about Asphalt Nitro 9 Legends Mod Apk:</p>
64
- <p></p>
65
- <h3>Q: Is Asphalt Nitro 9 Legends Mod Apk free?</h3>
66
- <p>A: Yes, Asphalt Nitro 9 Legends Mod Apk is free to download and play. You don't need to pay anything to enjoy the game.</p>
67
- <h3>Q: Is Asphalt Nitro 9 Legends Mod Apk safe?</h3>
68
- <p>A: Yes, Asphalt Nitro 9 Legends Mod Apk is safe to download and install. However, always download it from a trusted source and enable unknown sources in your device settings before installing it.</p>
69
- <h3>Q: Is Asphalt Nitro 9 Legends Mod Apk legal?</h3>
70
- <p>A: No, Asphalt Nitro 9 Legends Mod Apk is not legal. It is a modified version of the original game that violates the official developers' and publishers' terms and conditions, and it may infringe the original game's intellectual property rights.</p>
71
- <h3>Q: Is Asphalt Nitro 9 Legends Mod Apk online or offline?</h3>
72
-
73
- <h3>Q: How do I update Asphalt Nitro 9 Legends Mod Apk?</h3>
74
- <p>A: To update Asphalt Nitro 9 Legends Mod Apk, download and install the latest version of the mod apk file from a trusted source. You may need to uninstall the previous version of the mod apk file before installing the new one.</p> 64aa2da5cf<br />
75
- <br />
76
- <br />
spaces/Benson/text-generation/Examples/Camioneros De Europa 3 Apk 36.6.md DELETED
@@ -1,76 +0,0 @@
1
-
2
- <h1>Truckers of Europe 3 APK 36.6: A Realistic and Fun Truck Simulator Game</h1>
3
- <p>If you are a fan of truck simulator games, you may have heard of Truckers of Europe 3, a popular game that lets you experience the life of a truck driver in Europe. In this article, we will tell you everything you need to know about this game, including what it is, what its features are, how to download and install it, how to play it, and why you should play it. So buckle up and get ready for an exciting ride!</p>
4
- <h2>Introduction</h2>
5
- <h3>What is Truckers of Europe 3?</h3>
6
- <p>Truckers of Europe 3 is a truck simulator game developed by <a href="( 1 )">Jerryisgaming</a>, a YouTube channel that creates gaming videos. The game was released in 2020 and has been updated regularly with new features and improvements. The latest version of the game is 0.36.6, released on September 30, 2021.</p>
7
- <h2>truckers of europe 3 apk 36.6</h2><br /><p><b><b>Download Zip</b> &#10084;&#10084;&#10084; <a href="https://bltlly.com/2v6LwC">https://bltlly.com/2v6LwC</a></b></p><br /><br />
8
- <h3>What are the features of Truckers of Europe 3?</h3>
9
- <p>Truckers of Europe 3 has many features that make it one of the best truck simulator games on the market. Some of these features are:</p>
10
- <ul>
11
- <li>A large map of Europe with more than 50 cities and countries to explore.</li>
12
- <li>A variety of trucks and trailers to choose from, each with different specifications and performance.</li>
13
- <li>A realistic driving system with manual transmission, steering wheel, pedals, indicators, mirrors, lights, horn, wipers, and more.</li>
14
- <li>A realistic traffic system with cars, buses, trucks, motorcycles, police, ambulances, and so on.</li>
15
- <li>A realistic weather system with a day-and-night cycle, rain, snow, fog, and more.</li>
16
- <li>A realistic damage system with tire wear, fuel consumption, engine overheating, and so on.</li>
17
- <li>A career mode with missions, contracts, cargo delivery, income, expenses, and more.</li>
18
- <li>A skill system with levels, points, perks, and so on.</li>
19
- <li>A customization system with paint jobs, accessories, decals, and more.</li>
20
-
21
- </ul>
22
- <h2>How to Download and Install Truckers of Europe 3 APK 36.6</h2>
23
- <h3>Requirements</h3>
24
- <p>To download and install Truckers of Europe 3 APK 36.6 on your Android device, you need to meet the following requirements (a quick scripted pre-flight check follows the list):</p>
25
- <ul>
26
- <li>Your device must run Android version 4.4 or higher.</li>
27
- <li>Your device must have at least 1 GB of RAM and 500 MB of free storage space.</li>
28
- <li>Your device must have a stable Internet connection.</li>
29
- <li>Your device must allow installation from unknown sources. You can enable this option by going to Settings > Security > Unknown Sources.</li>
30
- </ul>
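If you have the Android SDK's adb tool on hand, the first two requirements can be checked from a computer. A minimal sketch, assuming a device is connected with USB debugging enabled (the property names are standard Android system properties):

```python
import subprocess

def getprop(name: str) -> str:
    # Query an Android system property over adb.
    out = subprocess.run(["adb", "shell", "getprop", name],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

release = getprop("ro.build.version.release")  # e.g. "13"
sdk = int(getprop("ro.build.version.sdk"))     # Android 4.4 corresponds to API 19
print(f"Android {release} (API {sdk}):",
      "meets the minimum" if sdk >= 19 else "below the Android 4.4 minimum")
```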
31
- <h3>Steps</h3>
32
- <p>To download and install Truckers of Europe 3 APK 36.6 on your Android device, follow these steps:</p>
33
- <ol>
34
- <li>Go to <a href="( 2 )">this link</a> and download the APK file.</li>
35
- <li>Locate the downloaded file in your device's file manager and tap it to start the installation process.</li>
36
- <li>Follow the on-screen instructions and grant the necessary permissions to the app.</li>
37
- <li>Wait for the installation to complete, then launch the app from the app drawer or home screen.</li>
38
- <li>Enjoy playing Truckers of Europe 3 on your Android device!</li>
39
- </ol>
40
- <h2>How to Play Truckers of Europe 3</h2>
41
- <h3>Choose your truck and trailer</h3>
42
- <p>The first thing to do when you start playing Truckers of Europe 3 is choose your truck and trailer. You can do this by going to the garage menu and browsing the available options. You can also buy new trucks and trailers with the money you earn from your missions. Each truck and trailer has different attributes, such as speed, power, fuel capacity, and cargo weight, so pick the one that suits your preferences and needs.</p>
43
- <h3>Drive across Europe and deliver cargo</h3>
44
-
45
- <p>While driving, you will have to follow traffic rules and regulations, such as speed limits, traffic lights, and road signs. You will also have to deal with realistic traffic (cars, buses, trucks, motorcycles, police, ambulances) and realistic weather (rain, snow, fog). Keep an eye on your truck's condition, including fuel level, engine temperature, and tire pressure, as well as your driver's condition, such as fatigue and hunger; you will need to stop at gas stations, rest areas, and restaurants to refuel, rest your driver, and grab some food.</p>
46
- <p>When you reach your destination, you will have to park your truck and trailer in the designated spot and unload the cargo. You will then receive your reward and feedback on your performance, along with experience points that help you level up and unlock new skills and perks.</p>
47
- <h3>Customize your truck and upgrade your skills</h3>
48
- <p>Another fun aspect of Truckers of Europe 3 is that you can customize your truck and improve your skills. You can do this by going to the workshop menu and spending some money on various items and services. You can change your truck's color, add accessories, apply decals, and so on. You can also repair your truck if it is damaged, or tune it if it is underperforming.</p>
49
- <p>You can also go to the skills menu and spend points on skills and perks that improve your abilities and earnings as a truck driver: fuel efficiency, cargo handling, driving safety, negotiation, and more. You can also unlock new cargo types, contracts, trucks, and trailers.</p>
50
- <p></p>
51
- <h2>Why You Should Play Truckers of Europe 3</h2>
52
- <h3>Realistic graphics and physics</h3>
53
-
54
- <h3>Diverse and dynamic traffic and weather</h3>
55
- <p>Another reason to play Truckers of Europe 3 is its diverse, dynamic traffic and weather, which make your driving experience more challenging and enjoyable. The game has a large map of Europe with more than 50 cities and countries to explore, each with its own landmarks, buildings, roads, and signs. The traffic system includes cars, buses, trucks, motorcycles, police, and ambulances, each with its own behavior, speed, and direction. The weather system includes a day-and-night cycle, rain, snow, and fog, each affecting your truck and trailer's visibility, traction, and handling.</p>
56
- <h3>Challenging and rewarding missions and achievements</h3>
57
- <p>Another reason to play Truckers of Europe 3 is its challenging, rewarding missions and achievements, which will keep you motivated and entertained. The career mode lets you start as a novice truck driver and work your way up to professional trucker by completing missions and contracts that involve delivering cargo across Europe. You can earn money from your deliveries and spend it on new trucks and trailers or on customizing your existing ones, earn experience points and spend them on skills and perks, and unlock new cargo types, contracts, trucks, and trailers as you progress.</p>
58
-
59
- <h2>Conclusion</h2>
60
- <p>Truckers of Europe 3 is a realistic, fun truck simulator game that lets you experience the life of a truck driver in Europe. You can choose your truck and trailer, drive across Europe delivering cargo, customize your truck, upgrade your skills, and earn money and achievements. The game has realistic graphics and physics, diverse and dynamic traffic and weather, and challenging, rewarding missions. If you are looking for a truck simulator that will keep you hooked for hours, you should definitely try Truckers of Europe 3 APK 36.6.</p>
61
- <h4>FAQs</h4>
62
- <p>Here are some frequently asked questions about Truckers of Europe 3:</p>
63
- <ul>
64
- <li>Q: Is Truckers of Europe 3 free to play?</li>
65
- <li>A: Yes, Truckers of Europe 3 is free to play. You can download and install it from the link provided in this article. However, the game may contain some in-app purchases that can enhance your gaming experience.</li>
66
- <li>Q: Is Truckers of Europe 3 safe to download and install?</li>
67
- <li>A: Yes, Truckers of Europe 3 is safe to download and install. The APK file provided in this article has been verified and tested by our team. Even so, you should only download and install APK files from trusted sources.</li>
68
- <li>Q: Is Truckers of Europe 3 compatible with my device?</li>
69
- <li>A: Truckers of Europe 3 is compatible with most Android devices running Android version 4.4 or higher. However, some devices may not support the game due to hardware or software limitations.</li>
70
- <li>Q: How can I contact the developer of Truckers of Europe 3?</li>
71
- <li>A: You can contact the developer of Truckers of Europe 3 by visiting their YouTube channel <a href="">Jerryisgaming</a> or their Facebook page <a href="">Jerryisgaming</a>. You can also leave your comments, suggestions, or questions in the comment section of their videos or posts.</li>
72
- <li>Q: How can I support the development of Truckers of Europe 3?</li>
73
-
74
- </ul></p> 64aa2da5cf<br />
75
- <br />
76
- <br />
spaces/Benson/text-generation/Examples/Descargar Estrellas Pelea Hack Ios.md DELETED
@@ -1,80 +0,0 @@
1
-
2
- <h1>How to Download Genshin Impact Cloud and Enjoy the Game Anywhere</h1>
3
- <p>Genshin Impact is one of the most popular and successful games of 2020, with millions of players worldwide. It is an open-world action RPG that lets you explore a vast, beautiful world called Teyvat, where you can meet various characters, fight enemies, gather resources, and complete quests. The game is available on multiple platforms, such as PC, PlayStation 4, PlayStation 5, iOS, and Android. However, if you want to play it on your mobile device, you may face some challenges, such as limited storage space, low performance, or incompatible hardware. That is where Genshin Impact Cloud comes in.</p>
4
- <h2>What Genshin Impact Cloud Is and Why You Should Try It</h2>
5
- <h3>Genshin Impact Cloud is a service that lets you play the game on your mobile device without downloading the whole game</h3>
6
- <p>Genshin Impact Cloud is a feature introduced by miHoYo, the developer of Genshin Impact, in April 2021. It is a cloud gaming service that streams the game from a server to your mobile device over the Internet. This means you don't need to download or install the game on your device, which can save a lot of storage space. Genshin Impact is around 5.2 GB on mobile devices, but with Genshin Impact Cloud you only need to download an app that is 56 MB in size. This also helps you avoid long loading times and updates.</p>
7
- <h2>download brawl stars hack ios</h2><br /><p><b><b>Download</b> ===> <a href="https://bltlly.com/2v6KEj">https://bltlly.com/2v6KEj</a></b></p><br /><br />
8
- <h3>Genshin Impact Cloud has many benefits, such as saving storage space, improving performance, and supporting cross-save</h3>
9
- <p>Besides saving storage space, Genshin Impact Cloud has other advantages that can improve your gaming experience. For example:</p>
10
- <ul>
11
-
12
- <li>Genshin Impact Cloud supports cross-save and cross-play. This means you can access your existing progress and data from other platforms by logging in with your miHoYo account, and you can play with friends who use different devices, such as PC or PlayStation.</li>
13
- <li>Genshin Impact Cloud is free to use. You don't need to pay any extra fees or subscriptions to use the service. You can also make in-app purchases as usual and enjoy all the game's updates and events.</li>
14
- </ul>
15
- <p>With Genshin Impact Cloud, you can enjoy the game anywhere, anytime, as long as you have a stable Internet connection and a compatible device.</p>
16
- <h2>How to Download Genshin Impact Cloud on Android Devices</h2>
17
- <h3>Genshin Impact Cloud is currently available only to Android users in Malaysia and Singapore as a beta test</h3>
18
- <p>Genshin Impact Cloud is still in its early stages of development and is not yet available for all regions and platforms. Currently, the service is open only to Android users in Malaysia and Singapore as a beta test. This means only a limited number of players can access it, and there may be some bugs or errors during gameplay. miHoYo has not announced when the service will expand to other regions and platforms, but it is likely to do so in the future.</p>
19
- <h3>To download Genshin Impact Cloud, you need a miHoYo account and to sign up for the beta test on the official website</h3>
20
- <p>If you are an Android user in Malaysia or Singapore and want to try Genshin Impact Cloud, follow these steps:</p>
21
- <ol>
22
- <li>Create a miHoYo account if you don't already have one. You can do this by visiting <a href="">https://account.mihoyo.com/#/register</a> and filling in the required information.</li>
23
- <li>Visit <a href="">https://genshin.mihoyo.com/en/cloudtest</a> and sign up for the beta test. You need to log in with your miHoYo account and accept the terms and conditions.</li>
24
-
25
- <li>Click the link in the email and download the app, which is only 56 MB in size. Install the app on your device and launch it.</li>
26
- </ol>
27
- <h3>After signing up, you will receive an email with a link to download the app, which is only 56 MB in size</h3>
28
- <p>Once you have downloaded and installed the app, you can start playing Genshin Impact Cloud on your mobile device. However, there are a few things you should know before playing.</p>
29
- <p></p>
30
- <h2>How to Play Genshin Impact Cloud on Your Mobile Device</h2>
31
- <h3>To play Genshin Impact Cloud, you need a stable Internet connection and a compatible device</h3>
32
- <p>Genshin Impact Cloud requires a stable Internet connection to stream the game from the server to your device. The recommended network speed is at least 10 Mbps for download and 5 Mbps for upload. You can check your network speed with a speed-test app or website, or with the scripted sketch below. If your network speed is too low, you may experience latency, lag, or disconnections during gameplay.</p>
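As a scripted alternative to a speed-test app, you can estimate download throughput by timing a fetch of a known file. A minimal sketch; the URL is a placeholder, not an official miHoYo endpoint:

```python
import time
import urllib.request

TEST_URL = "https://example.com/10MB.bin"  # hypothetical test file

def download_mbps(url: str) -> float:
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        size_bytes = len(resp.read())
    elapsed = time.monotonic() - start
    return size_bytes * 8 / 1_000_000 / elapsed  # bits / 1e6 / seconds = Mbps

if __name__ == "__main__":
    print(f"~{download_mbps(TEST_URL):.1f} Mbps down "
          f"(the service recommends at least 10 Mbps)")
```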
33
- <p>Genshin Impact Cloud also requires a compatible device to run smoothly. The minimum device requirements are as follows:</p>
34
- <table>
35
- <tr><th>Operating system</th><th>Android 8.0 or higher</th></tr>
36
- <tr><td>CPU</td><td>Snapdragon 845 or higher / Kirin 980 or higher / Exynos 9810 or higher / MediaTek Dimensity 1000+ or higher</td></tr>
37
- <tr><td>RAM</td><td>4 GB or more</td></tr>
38
- <tr><td>Storage space</td><td>At least 100 MB of free space (excluding the app size)</td></tr>
39
- <tr><td>Battery level</td><td>At least 20% battery remaining</td></tr>
40
- <tr><td>Screen resolution</td><td>At least 1280 x 720 pixels</td></tr>
41
- <tr><td>Screen refresh rate</td><td>At least 60 Hz (higher refresh rates may cause overheating)</td></tr>
42
- <tr><td>Network type</td><td>Wi-Fi or 4G/5G (Wi-Fi is recommended for better stability)</td></tr>
43
- </table>
44
-
45
- <h3>You can use touch controls or connect a keyboard/mouse or gamepad to your device</h3>
46
- <p>Genshin Impact Cloud supports several control schemes for your convenience. You can use your device's touchscreen to control the game, just like the regular mobile version, or connect a keyboard/mouse or gamepad via Bluetooth or USB. You can customize the key mapping and sensitivity settings in the game options.</p>
47
- <h3>You can access your existing progress and data from other platforms by logging in with your miHoYo account</h3>
48
- <p>Genshin Impact Cloud lets you continue your adventure from where you left off on other platforms. Just log in with your miHoYo account and select the server that matches your previous platform; for example, if you played on PC before, select the PC server. You can switch between servers and platforms at any time, as long as they are supported by Genshin Impact Cloud. Note, however, that some items and currencies, such as Primogems and Genesis Crystals, are not transferable between servers.</p>
49
- <h2>Features and Limitations of Genshin Impact Cloud</h2>
50
- <h3>Genshin Impact Cloud offers the same experience as playing on a standard PC, with high-quality graphics and smooth gameplay</h3>
51
- <p>Genshin Impact Cloud aims to give you the best possible gaming experience on your mobile device. You can enjoy the same features and content as playing on a standard PC, including high-quality graphics, smooth gameplay, rich sound effects, and immersive storylines. You can also take part in all of the game's updates and events, such as new characters, quests, regions, and modes, and make in-app purchases as usual and use them on any platform.</p>
52
- <h3>Genshin Impact Cloud also supports all the game's updates and events, as well as in-app purchases</h3>
53
-
54
- <ul>
55
- <li>Genshin Impact Cloud may have network issues, such as latency, lag, or disconnections, depending on your network's quality and your location. This can affect your gameplay and cause frustration or inconvenience. You may also incur additional data charges if you use a mobile network instead of Wi-Fi.</li>
56
- <li>Genshin Impact Cloud may not be compatible with some devices or operating systems, especially older or low-end ones. This may prevent you from playing the game or cause errors or glitches during gameplay. You may also experience overheating or battery drain during long sessions.</li>
57
- <li>Genshin Impact Cloud may not be available for all regions and platforms at the moment, as it is still in beta testing. This means only a limited number of players can access the service, and there may be bugs or errors during gameplay. miHoYo may also change or remove features or content from the service without notice.</li>
58
- </ul>
59
- <p>Therefore, you should always check miHoYo's official website and social media accounts for the latest news and updates about Genshin Impact Cloud.</p>
60
- <h2>Conclusion</h2>
61
- <h3>Genshin Impact Cloud is a great way to enjoy the game on your mobile device without sacrificing quality or progress</h3>
62
- <p>Genshin Impact Cloud is a cloud gaming service that streams the game from a server to your mobile device over the Internet. It has many benefits, such as saving storage space, improving performance, supporting cross-save and cross-play, and offering the same experience as playing on a standard PC. It is also free to use and supports all the game's updates and events.</p>
63
- <h3>Genshin Impact Cloud is currently in beta testing for Android users in Malaysia and Singapore, but may expand to other regions and platforms in the future</h3>
64
-
65
- <h3>If you want to try Genshin Impact Cloud, you can sign up for the beta test on the official website and download the app from the link in the email</h3>
66
- <p>Genshin Impact Cloud is a great way to enjoy the game on your mobile device without sacrificing quality or progress. It streams the game from a server to your device over the Internet, saving storage space, improving performance, and supporting cross-save and cross-play, all free of charge and with every update and event included. However, it is still in beta testing for Android users in Malaysia and Singapore and may not be available for all regions and platforms at the moment. To try it, you need a miHoYo account and to sign up for the beta test on the official website; you will then receive an email with a link to download the 56 MB app. You also need a stable Internet connection and a compatible device, and you should be aware of the service's limitations, such as network issues, compatibility problems, and beta-test restrictions. Always check miHoYo's official website and social media accounts for the latest news and updates about Genshin Impact Cloud.</p>
67
- <p>We hope this article has helped you learn more about Genshin Impact Cloud and how to download and play it on your mobile device. If you have any questions or comments, feel free to leave them below. Happy gaming!</p>
68
- <h2>FAQs</h2>
69
- <h4>What is Genshin Impact Cloud?</h4>
70
- <p>Genshin Impact Cloud is a cloud gaming service that streams the game from a server to your mobile device over the Internet.</p>
71
- <h4>How do I download Genshin Impact Cloud?</h4>
72
-
73
- <h4>What are the benefits of Genshin Impact Cloud?</h4>
74
- <p>Genshin Impact Cloud has many benefits, such as saving storage space, improving performance, supporting cross-save and cross-play, and offering the same experience as playing on a standard PC.</p>
75
- <h4>What are the limitations of Genshin Impact Cloud?</h4>
76
- <p>Genshin Impact Cloud may have some limitations, such as network issues, compatibility problems, or beta-test restrictions.</p>
77
- <h4>Which regions and platforms can access Genshin Impact Cloud?</h4>
78
- <p>Genshin Impact Cloud is currently available only to Android users in Malaysia and Singapore as a beta test. It may expand to other regions and platforms in the future.</p> 64aa2da5cf<br />
79
- <br />
80
- <br />
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/cli/cmdoptions.py DELETED
@@ -1,1074 +0,0 @@
1
- """
2
- shared options and groups
3
-
4
- The principle here is to define options once, but *not* instantiate them
5
- globally. One reason being that options with action='append' can carry state
6
- between parses. pip parses general options twice internally, and shouldn't
7
- pass on state. To be consistent, all options will follow this design.
8
- """
9
-
10
- # The following comment should be removed at some point in the future.
11
- # mypy: strict-optional=False
12
-
13
- import importlib.util
14
- import logging
15
- import os
16
- import textwrap
17
- from functools import partial
18
- from optparse import SUPPRESS_HELP, Option, OptionGroup, OptionParser, Values
19
- from textwrap import dedent
20
- from typing import Any, Callable, Dict, Optional, Tuple
21
-
22
- from pip._vendor.packaging.utils import canonicalize_name
23
-
24
- from pip._internal.cli.parser import ConfigOptionParser
25
- from pip._internal.exceptions import CommandError
26
- from pip._internal.locations import USER_CACHE_DIR, get_src_prefix
27
- from pip._internal.models.format_control import FormatControl
28
- from pip._internal.models.index import PyPI
29
- from pip._internal.models.target_python import TargetPython
30
- from pip._internal.utils.hashes import STRONG_HASHES
31
- from pip._internal.utils.misc import strtobool
32
-
33
- logger = logging.getLogger(__name__)
34
-
35
-
36
- def raise_option_error(parser: OptionParser, option: Option, msg: str) -> None:
37
- """
38
- Raise an option parsing error using parser.error().
39
-
40
- Args:
41
- parser: an OptionParser instance.
42
- option: an Option instance.
43
- msg: the error text.
44
- """
45
- msg = f"{option} error: {msg}"
46
- msg = textwrap.fill(" ".join(msg.split()))
47
- parser.error(msg)
48
-
49
-
50
- def make_option_group(group: Dict[str, Any], parser: ConfigOptionParser) -> OptionGroup:
51
- """
52
- Return an OptionGroup object
53
- group -- assumed to be dict with 'name' and 'options' keys
54
- parser -- an optparse Parser
55
- """
56
- option_group = OptionGroup(parser, group["name"])
57
- for option in group["options"]:
58
- option_group.add_option(option())
59
- return option_group
60
-
61
-
62
- def check_dist_restriction(options: Values, check_target: bool = False) -> None:
63
- """Function for determining if custom platform options are allowed.
64
-
65
- :param options: The OptionParser options.
66
- :param check_target: Whether or not to check if --target is being used.
67
- """
68
- dist_restriction_set = any(
69
- [
70
- options.python_version,
71
- options.platforms,
72
- options.abis,
73
- options.implementation,
74
- ]
75
- )
76
-
77
- binary_only = FormatControl(set(), {":all:"})
78
- sdist_dependencies_allowed = (
79
- options.format_control != binary_only and not options.ignore_dependencies
80
- )
81
-
82
- # Installations or downloads using dist restrictions must not combine
83
- # source distributions and dist-specific wheels, as they are not
84
- # guaranteed to be locally compatible.
85
- if dist_restriction_set and sdist_dependencies_allowed:
86
- raise CommandError(
87
- "When restricting platform and interpreter constraints using "
88
- "--python-version, --platform, --abi, or --implementation, "
89
- "either --no-deps must be set, or --only-binary=:all: must be "
90
- "set and --no-binary must not be set (or must be set to "
91
- ":none:)."
92
- )
93
-
94
- if check_target:
95
- if dist_restriction_set and not options.target_dir:
96
- raise CommandError(
97
- "Can not use any platform or abi specific options unless "
98
- "installing via '--target'"
99
- )
100
-
101
-
102
- def _path_option_check(option: Option, opt: str, value: str) -> str:
103
- return os.path.expanduser(value)
104
-
105
-
106
- def _package_name_option_check(option: Option, opt: str, value: str) -> str:
107
- return canonicalize_name(value)
108
-
109
-
110
- class PipOption(Option):
111
- TYPES = Option.TYPES + ("path", "package_name")
112
- TYPE_CHECKER = Option.TYPE_CHECKER.copy()
113
- TYPE_CHECKER["package_name"] = _package_name_option_check
114
- TYPE_CHECKER["path"] = _path_option_check
115
-
116
-
117
- ###########
118
- # options #
119
- ###########
120
-
121
- help_: Callable[..., Option] = partial(
122
- Option,
123
- "-h",
124
- "--help",
125
- dest="help",
126
- action="help",
127
- help="Show help.",
128
- )
129
-
130
- debug_mode: Callable[..., Option] = partial(
131
- Option,
132
- "--debug",
133
- dest="debug_mode",
134
- action="store_true",
135
- default=False,
136
- help=(
137
- "Let unhandled exceptions propagate outside the main subroutine, "
138
- "instead of logging them to stderr."
139
- ),
140
- )
141
-
142
- isolated_mode: Callable[..., Option] = partial(
143
- Option,
144
- "--isolated",
145
- dest="isolated_mode",
146
- action="store_true",
147
- default=False,
148
- help=(
149
- "Run pip in an isolated mode, ignoring environment variables and user "
150
- "configuration."
151
- ),
152
- )
153
-
154
- require_virtualenv: Callable[..., Option] = partial(
155
- Option,
156
- "--require-virtualenv",
157
- "--require-venv",
158
- dest="require_venv",
159
- action="store_true",
160
- default=False,
161
- help=(
162
- "Allow pip to only run in a virtual environment; "
163
- "exit with an error otherwise."
164
- ),
165
- )
166
-
167
- override_externally_managed: Callable[..., Option] = partial(
168
- Option,
169
- "--break-system-packages",
170
- dest="override_externally_managed",
171
- action="store_true",
172
- help="Allow pip to modify an EXTERNALLY-MANAGED Python installation",
173
- )
174
-
175
- python: Callable[..., Option] = partial(
176
- Option,
177
- "--python",
178
- dest="python",
179
- help="Run pip with the specified Python interpreter.",
180
- )
181
-
182
- verbose: Callable[..., Option] = partial(
183
- Option,
184
- "-v",
185
- "--verbose",
186
- dest="verbose",
187
- action="count",
188
- default=0,
189
- help="Give more output. Option is additive, and can be used up to 3 times.",
190
- )
191
-
192
- no_color: Callable[..., Option] = partial(
193
- Option,
194
- "--no-color",
195
- dest="no_color",
196
- action="store_true",
197
- default=False,
198
- help="Suppress colored output.",
199
- )
200
-
201
- version: Callable[..., Option] = partial(
202
- Option,
203
- "-V",
204
- "--version",
205
- dest="version",
206
- action="store_true",
207
- help="Show version and exit.",
208
- )
209
-
210
- quiet: Callable[..., Option] = partial(
211
- Option,
212
- "-q",
213
- "--quiet",
214
- dest="quiet",
215
- action="count",
216
- default=0,
217
- help=(
218
- "Give less output. Option is additive, and can be used up to 3"
219
- " times (corresponding to WARNING, ERROR, and CRITICAL logging"
220
- " levels)."
221
- ),
222
- )
223
-
224
- progress_bar: Callable[..., Option] = partial(
225
- Option,
226
- "--progress-bar",
227
- dest="progress_bar",
228
- type="choice",
229
- choices=["on", "off"],
230
- default="on",
231
- help="Specify whether the progress bar should be used [on, off] (default: on)",
232
- )
233
-
234
- log: Callable[..., Option] = partial(
235
- PipOption,
236
- "--log",
237
- "--log-file",
238
- "--local-log",
239
- dest="log",
240
- metavar="path",
241
- type="path",
242
- help="Path to a verbose appending log.",
243
- )
244
-
245
- no_input: Callable[..., Option] = partial(
246
- Option,
247
- # Don't ask for input
248
- "--no-input",
249
- dest="no_input",
250
- action="store_true",
251
- default=False,
252
- help="Disable prompting for input.",
253
- )
254
-
255
- keyring_provider: Callable[..., Option] = partial(
256
- Option,
257
- "--keyring-provider",
258
- dest="keyring_provider",
259
- choices=["auto", "disabled", "import", "subprocess"],
260
- default="auto",
261
- help=(
262
- "Enable the credential lookup via the keyring library if user input is allowed."
263
- " Specify which mechanism to use [disabled, import, subprocess]."
264
- " (default: disabled)"
265
- ),
266
- )
267
-
268
- proxy: Callable[..., Option] = partial(
269
- Option,
270
- "--proxy",
271
- dest="proxy",
272
- type="str",
273
- default="",
274
- help="Specify a proxy in the form scheme://[user:passwd@]proxy.server:port.",
275
- )
276
-
277
- retries: Callable[..., Option] = partial(
278
- Option,
279
- "--retries",
280
- dest="retries",
281
- type="int",
282
- default=5,
283
- help="Maximum number of retries each connection should attempt "
284
- "(default %default times).",
285
- )
286
-
287
- timeout: Callable[..., Option] = partial(
288
- Option,
289
- "--timeout",
290
- "--default-timeout",
291
- metavar="sec",
292
- dest="timeout",
293
- type="float",
294
- default=15,
295
- help="Set the socket timeout (default %default seconds).",
296
- )
297
-
298
-
299
- def exists_action() -> Option:
300
- return Option(
301
- # Option when path already exist
302
- "--exists-action",
303
- dest="exists_action",
304
- type="choice",
305
- choices=["s", "i", "w", "b", "a"],
306
- default=[],
307
- action="append",
308
- metavar="action",
309
- help="Default action when a path already exists: "
310
- "(s)witch, (i)gnore, (w)ipe, (b)ackup, (a)bort.",
311
- )
312
-
313
-
314
- cert: Callable[..., Option] = partial(
315
- PipOption,
316
- "--cert",
317
- dest="cert",
318
- type="path",
319
- metavar="path",
320
- help=(
321
- "Path to PEM-encoded CA certificate bundle. "
322
- "If provided, overrides the default. "
323
- "See 'SSL Certificate Verification' in pip documentation "
324
- "for more information."
325
- ),
326
- )
327
-
328
- client_cert: Callable[..., Option] = partial(
329
- PipOption,
330
- "--client-cert",
331
- dest="client_cert",
332
- type="path",
333
- default=None,
334
- metavar="path",
335
- help="Path to SSL client certificate, a single file containing the "
336
- "private key and the certificate in PEM format.",
337
- )
338
-
339
- index_url: Callable[..., Option] = partial(
340
- Option,
341
- "-i",
342
- "--index-url",
343
- "--pypi-url",
344
- dest="index_url",
345
- metavar="URL",
346
- default=PyPI.simple_url,
347
- help="Base URL of the Python Package Index (default %default). "
348
- "This should point to a repository compliant with PEP 503 "
349
- "(the simple repository API) or a local directory laid out "
350
- "in the same format.",
351
- )
352
-
353
-
354
- def extra_index_url() -> Option:
355
- return Option(
356
- "--extra-index-url",
357
- dest="extra_index_urls",
358
- metavar="URL",
359
- action="append",
360
- default=[],
361
- help="Extra URLs of package indexes to use in addition to "
362
- "--index-url. Should follow the same rules as "
363
- "--index-url.",
364
- )
365
-
366
-
367
- no_index: Callable[..., Option] = partial(
368
- Option,
369
- "--no-index",
370
- dest="no_index",
371
- action="store_true",
372
- default=False,
373
- help="Ignore package index (only looking at --find-links URLs instead).",
374
- )
375
-
376
-
377
- def find_links() -> Option:
378
- return Option(
379
- "-f",
380
- "--find-links",
381
- dest="find_links",
382
- action="append",
383
- default=[],
384
- metavar="url",
385
- help="If a URL or path to an html file, then parse for links to "
386
- "archives such as sdist (.tar.gz) or wheel (.whl) files. "
387
- "If a local path or file:// URL that's a directory, "
388
- "then look for archives in the directory listing. "
389
- "Links to VCS project URLs are not supported.",
390
- )
391
-
392
-
393
- def trusted_host() -> Option:
394
- return Option(
395
- "--trusted-host",
396
- dest="trusted_hosts",
397
- action="append",
398
- metavar="HOSTNAME",
399
- default=[],
400
- help="Mark this host or host:port pair as trusted, even though it "
401
- "does not have valid or any HTTPS.",
402
- )
403
-
404
-
405
- def constraints() -> Option:
406
- return Option(
407
- "-c",
408
- "--constraint",
409
- dest="constraints",
410
- action="append",
411
- default=[],
412
- metavar="file",
413
- help="Constrain versions using the given constraints file. "
414
- "This option can be used multiple times.",
415
- )
416
-
417
-
418
- def requirements() -> Option:
419
- return Option(
420
- "-r",
421
- "--requirement",
422
- dest="requirements",
423
- action="append",
424
- default=[],
425
- metavar="file",
426
- help="Install from the given requirements file. "
427
- "This option can be used multiple times.",
428
- )
429
-
430
-
431
- def editable() -> Option:
432
- return Option(
433
- "-e",
434
- "--editable",
435
- dest="editables",
436
- action="append",
437
- default=[],
438
- metavar="path/url",
439
- help=(
440
- "Install a project in editable mode (i.e. setuptools "
441
- '"develop mode") from a local project path or a VCS url.'
442
- ),
443
- )
444
-
445
-
446
- def _handle_src(option: Option, opt_str: str, value: str, parser: OptionParser) -> None:
447
- value = os.path.abspath(value)
448
- setattr(parser.values, option.dest, value)
449
-
450
-
451
- src: Callable[..., Option] = partial(
452
- PipOption,
453
- "--src",
454
- "--source",
455
- "--source-dir",
456
- "--source-directory",
457
- dest="src_dir",
458
- type="path",
459
- metavar="dir",
460
- default=get_src_prefix(),
461
- action="callback",
462
- callback=_handle_src,
463
- help="Directory to check out editable projects into. "
464
- 'The default in a virtualenv is "<venv path>/src". '
465
- 'The default for global installs is "<current dir>/src".',
466
- )
467
-
468
-
469
- def _get_format_control(values: Values, option: Option) -> Any:
470
- """Get a format_control object."""
471
- return getattr(values, option.dest)
472
-
473
-
474
- def _handle_no_binary(
475
- option: Option, opt_str: str, value: str, parser: OptionParser
476
- ) -> None:
477
- existing = _get_format_control(parser.values, option)
478
- FormatControl.handle_mutual_excludes(
479
- value,
480
- existing.no_binary,
481
- existing.only_binary,
482
- )
483
-
484
-
485
- def _handle_only_binary(
486
- option: Option, opt_str: str, value: str, parser: OptionParser
487
- ) -> None:
488
- existing = _get_format_control(parser.values, option)
489
- FormatControl.handle_mutual_excludes(
490
- value,
491
- existing.only_binary,
492
- existing.no_binary,
493
- )
494
-
495
-
496
- def no_binary() -> Option:
497
- format_control = FormatControl(set(), set())
498
- return Option(
499
- "--no-binary",
500
- dest="format_control",
501
- action="callback",
502
- callback=_handle_no_binary,
503
- type="str",
504
- default=format_control,
505
- help="Do not use binary packages. Can be supplied multiple times, and "
506
- 'each time adds to the existing value. Accepts either ":all:" to '
507
- 'disable all binary packages, ":none:" to empty the set (notice '
508
- "the colons), or one or more package names with commas between "
509
- "them (no colons). Note that some packages are tricky to compile "
510
- "and may fail to install when this option is used on them.",
511
- )
512
-
513
-
514
- def only_binary() -> Option:
515
- format_control = FormatControl(set(), set())
516
- return Option(
517
- "--only-binary",
518
- dest="format_control",
519
- action="callback",
520
- callback=_handle_only_binary,
521
- type="str",
522
- default=format_control,
523
- help="Do not use source packages. Can be supplied multiple times, and "
524
- 'each time adds to the existing value. Accepts either ":all:" to '
525
- 'disable all source packages, ":none:" to empty the set, or one '
526
- "or more package names with commas between them. Packages "
527
- "without binary distributions will fail to install when this "
528
- "option is used on them.",
529
- )
530
-
531
-
532
- platforms: Callable[..., Option] = partial(
533
- Option,
534
- "--platform",
535
- dest="platforms",
536
- metavar="platform",
537
- action="append",
538
- default=None,
539
- help=(
540
- "Only use wheels compatible with <platform>. Defaults to the "
541
- "platform of the running system. Use this option multiple times to "
542
- "specify multiple platforms supported by the target interpreter."
543
- ),
544
- )
545
-
546
-
547
- # This was made a separate function for unit-testing purposes.
548
- def _convert_python_version(value: str) -> Tuple[Tuple[int, ...], Optional[str]]:
549
- """
550
- Convert a version string like "3", "37", or "3.7.3" into a tuple of ints.
551
-
552
- :return: A 2-tuple (version_info, error_msg), where `error_msg` is
553
- non-None if and only if there was a parsing error.
554
- """
555
- if not value:
556
- # The empty string is the same as not providing a value.
557
- return (None, None)
558
-
559
- parts = value.split(".")
560
- if len(parts) > 3:
561
- return ((), "at most three version parts are allowed")
562
-
563
- if len(parts) == 1:
564
- # Then we are in the case of "3" or "37".
565
- value = parts[0]
566
- if len(value) > 1:
567
- parts = [value[0], value[1:]]
568
-
569
- try:
570
- version_info = tuple(int(part) for part in parts)
571
- except ValueError:
572
- return ((), "each version part must be an integer")
573
-
574
- return (version_info, None)
575
-
576
-
577
- def _handle_python_version(
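# Illustrative usage sketch of _convert_python_version above (these
# asserts are an editorial example, not part of pip's source):
#
#     assert _convert_python_version("") == (None, None)
#     assert _convert_python_version("37") == ((3, 7), None)
#     assert _convert_python_version("3.7.3") == ((3, 7, 3), None)
#     assert _convert_python_version("1.2.3.4") == (
#         (), "at most three version parts are allowed")
#     assert _convert_python_version("3.x") == (
#         (), "each version part must be an integer")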
578
- option: Option, opt_str: str, value: str, parser: OptionParser
579
- ) -> None:
580
- """
581
- Handle a provided --python-version value.
582
- """
583
- version_info, error_msg = _convert_python_version(value)
584
- if error_msg is not None:
585
- msg = "invalid --python-version value: {!r}: {}".format(
586
- value,
587
- error_msg,
588
- )
589
- raise_option_error(parser, option=option, msg=msg)
590
-
591
- parser.values.python_version = version_info
592
-
593
-
594
- python_version: Callable[..., Option] = partial(
595
- Option,
596
- "--python-version",
597
- dest="python_version",
598
- metavar="python_version",
599
- action="callback",
600
- callback=_handle_python_version,
601
- type="str",
602
- default=None,
603
- help=dedent(
604
- """\
605
- The Python interpreter version to use for wheel and "Requires-Python"
606
- compatibility checks. Defaults to a version derived from the running
607
- interpreter. The version can be specified using up to three dot-separated
608
- integers (e.g. "3" for 3.0.0, "3.7" for 3.7.0, or "3.7.3"). A major-minor
609
- version can also be given as a string without dots (e.g. "37" for 3.7.0).
610
- """
611
- ),
612
- )
613
-
614
-
615
- implementation: Callable[..., Option] = partial(
616
- Option,
617
- "--implementation",
618
- dest="implementation",
619
- metavar="implementation",
620
- default=None,
621
- help=(
622
- "Only use wheels compatible with Python "
623
- "implementation <implementation>, e.g. 'pp', 'jy', 'cp', "
624
- " or 'ip'. If not specified, then the current "
625
- "interpreter implementation is used. Use 'py' to force "
626
- "implementation-agnostic wheels."
627
- ),
628
- )
629
-
630
-
631
- abis: Callable[..., Option] = partial(
632
- Option,
633
- "--abi",
634
- dest="abis",
635
- metavar="abi",
636
- action="append",
637
- default=None,
638
- help=(
639
- "Only use wheels compatible with Python abi <abi>, e.g. 'pypy_41'. "
640
- "If not specified, then the current interpreter abi tag is used. "
641
- "Use this option multiple times to specify multiple abis supported "
642
- "by the target interpreter. Generally you will need to specify "
643
- "--implementation, --platform, and --python-version when using this "
644
- "option."
645
- ),
646
- )
647
-
648
-
649
- def add_target_python_options(cmd_opts: OptionGroup) -> None:
650
- cmd_opts.add_option(platforms())
651
- cmd_opts.add_option(python_version())
652
- cmd_opts.add_option(implementation())
653
- cmd_opts.add_option(abis())
654
-
655
-
656
- def make_target_python(options: Values) -> TargetPython:
657
- target_python = TargetPython(
658
- platforms=options.platforms,
659
- py_version_info=options.python_version,
660
- abis=options.abis,
661
- implementation=options.implementation,
662
- )
663
-
664
- return target_python
665
-
666
-
667
- def prefer_binary() -> Option:
668
- return Option(
669
- "--prefer-binary",
670
- dest="prefer_binary",
671
- action="store_true",
672
- default=False,
673
- help="Prefer older binary packages over newer source packages.",
674
- )
675
-
676
-
677
- cache_dir: Callable[..., Option] = partial(
678
- PipOption,
679
- "--cache-dir",
680
- dest="cache_dir",
681
- default=USER_CACHE_DIR,
682
- metavar="dir",
683
- type="path",
684
- help="Store the cache data in <dir>.",
685
- )
686
-
687
-
688
- def _handle_no_cache_dir(
689
- option: Option, opt: str, value: str, parser: OptionParser
690
- ) -> None:
691
- """
692
- Process a value provided for the --no-cache-dir option.
693
-
694
- This is an optparse.Option callback for the --no-cache-dir option.
695
- """
696
- # The value argument will be None if --no-cache-dir is passed via the
697
- # command-line, since the option doesn't accept arguments. However,
698
- # the value can be non-None if the option is triggered e.g. by an
699
- # environment variable, like PIP_NO_CACHE_DIR=true.
700
- if value is not None:
701
- # Then parse the string value to get argument error-checking.
702
- try:
703
- strtobool(value)
704
- except ValueError as exc:
705
- raise_option_error(parser, option=option, msg=str(exc))
706
-
707
- # Originally, setting PIP_NO_CACHE_DIR to a value that strtobool()
708
- # converted to 0 (like "false" or "no") caused cache_dir to be disabled
709
- # rather than enabled (logic would say the latter). Thus, we disable
710
- # the cache directory not just on values that parse to True, but (for
711
- # backwards compatibility reasons) also on values that parse to False.
712
- # In other words, always set it to False if the option is provided in
713
- # some (valid) form.
714
- parser.values.cache_dir = False
715
-
716
-
717
- no_cache: Callable[..., Option] = partial(
718
- Option,
719
- "--no-cache-dir",
720
- dest="cache_dir",
721
- action="callback",
722
- callback=_handle_no_cache_dir,
723
- help="Disable the cache.",
724
- )
725
-
726
- no_deps: Callable[..., Option] = partial(
727
- Option,
728
- "--no-deps",
729
- "--no-dependencies",
730
- dest="ignore_dependencies",
731
- action="store_true",
732
- default=False,
733
- help="Don't install package dependencies.",
734
- )
735
-
736
- ignore_requires_python: Callable[..., Option] = partial(
737
- Option,
738
- "--ignore-requires-python",
739
- dest="ignore_requires_python",
740
- action="store_true",
741
- help="Ignore the Requires-Python information.",
742
- )
743
-
744
- no_build_isolation: Callable[..., Option] = partial(
745
- Option,
746
- "--no-build-isolation",
747
- dest="build_isolation",
748
- action="store_false",
749
- default=True,
750
- help="Disable isolation when building a modern source distribution. "
751
- "Build dependencies specified by PEP 518 must be already installed "
752
- "if this option is used.",
753
- )
754
-
755
- check_build_deps: Callable[..., Option] = partial(
756
- Option,
757
- "--check-build-dependencies",
758
- dest="check_build_deps",
759
- action="store_true",
760
- default=False,
761
- help="Check the build dependencies when PEP517 is used.",
762
- )
763
-
764
-
765
- def _handle_no_use_pep517(
766
- option: Option, opt: str, value: str, parser: OptionParser
767
- ) -> None:
768
- """
769
- Process a value provided for the --no-use-pep517 option.
770
-
771
- This is an optparse.Option callback for the no_use_pep517 option.
772
- """
773
- # Since --no-use-pep517 doesn't accept arguments, the value argument
774
- # will be None if --no-use-pep517 is passed via the command-line.
775
- # However, the value can be non-None if the option is triggered e.g.
776
- # by an environment variable, for example "PIP_NO_USE_PEP517=true".
777
- if value is not None:
778
- msg = """A value was passed for --no-use-pep517,
779
- probably using either the PIP_NO_USE_PEP517 environment variable
780
- or the "no-use-pep517" config file option. Use an appropriate value
781
- of the PIP_USE_PEP517 environment variable or the "use-pep517"
782
- config file option instead.
783
- """
784
- raise_option_error(parser, option=option, msg=msg)
785
-
786
- # If user doesn't wish to use pep517, we check if setuptools and wheel are installed
787
- # and raise error if it is not.
788
- packages = ("setuptools", "wheel")
789
- if not all(importlib.util.find_spec(package) for package in packages):
790
- msg = (
791
- f"It is not possible to use --no-use-pep517 "
792
- f"without {' and '.join(packages)} installed."
793
- )
794
- raise_option_error(parser, option=option, msg=msg)
795
-
796
- # Otherwise, --no-use-pep517 was passed via the command-line.
797
- parser.values.use_pep517 = False
798
-
799
-
800
- use_pep517: Any = partial(
801
- Option,
802
- "--use-pep517",
803
- dest="use_pep517",
804
- action="store_true",
805
- default=None,
806
- help="Use PEP 517 for building source distributions "
807
- "(use --no-use-pep517 to force legacy behaviour).",
808
- )
809
-
810
- no_use_pep517: Any = partial(
811
- Option,
812
- "--no-use-pep517",
813
- dest="use_pep517",
814
- action="callback",
815
- callback=_handle_no_use_pep517,
816
- default=None,
817
- help=SUPPRESS_HELP,
818
- )
819
-
820
-
821
- def _handle_config_settings(
822
- option: Option, opt_str: str, value: str, parser: OptionParser
823
- ) -> None:
824
- key, sep, val = value.partition("=")
825
- if sep != "=":
826
- parser.error(f"Arguments to {opt_str} must be of the form KEY=VAL") # noqa
827
- dest = getattr(parser.values, option.dest)
828
- if dest is None:
829
- dest = {}
830
- setattr(parser.values, option.dest, dest)
831
- if key in dest:
832
- if isinstance(dest[key], list):
833
- dest[key].append(val)
834
- else:
835
- dest[key] = [dest[key], val]
836
- else:
837
- dest[key] = val
838
-
839
-
840
- config_settings: Callable[..., Option] = partial(
841
- Option,
842
- "-C",
843
- "--config-settings",
844
- dest="config_settings",
845
- type=str,
846
- action="callback",
847
- callback=_handle_config_settings,
848
- metavar="settings",
849
- help="Configuration settings to be passed to the PEP 517 build backend. "
850
- "Settings take the form KEY=VALUE. Use multiple --config-settings options "
851
- "to pass multiple keys to the backend.",
852
- )
853
-
854
- build_options: Callable[..., Option] = partial(
855
- Option,
856
- "--build-option",
857
- dest="build_options",
858
- metavar="options",
859
- action="append",
860
- help="Extra arguments to be supplied to 'setup.py bdist_wheel'.",
861
- )
862
-
863
- global_options: Callable[..., Option] = partial(
864
- Option,
865
- "--global-option",
866
- dest="global_options",
867
- action="append",
868
- metavar="options",
869
- help="Extra global options to be supplied to the setup.py "
870
- "call before the install or bdist_wheel command.",
871
- )
872
-
873
- no_clean: Callable[..., Option] = partial(
874
- Option,
875
- "--no-clean",
876
- action="store_true",
877
- default=False,
878
- help="Don't clean up build directories.",
879
- )
880
-
881
- pre: Callable[..., Option] = partial(
882
- Option,
883
- "--pre",
884
- action="store_true",
885
- default=False,
886
- help="Include pre-release and development versions. By default, "
887
- "pip only finds stable versions.",
888
- )
889
-
890
- disable_pip_version_check: Callable[..., Option] = partial(
891
- Option,
892
- "--disable-pip-version-check",
893
- dest="disable_pip_version_check",
894
- action="store_true",
895
- default=False,
896
- help="Don't periodically check PyPI to determine whether a new version "
897
- "of pip is available for download. Implied with --no-index.",
898
- )
899
-
900
- root_user_action: Callable[..., Option] = partial(
901
- Option,
902
- "--root-user-action",
903
- dest="root_user_action",
904
- default="warn",
905
- choices=["warn", "ignore"],
906
- help="Action if pip is run as a root user. By default, a warning message is shown.",
907
- )
908
-
909
-
910
- def _handle_merge_hash(
911
- option: Option, opt_str: str, value: str, parser: OptionParser
912
- ) -> None:
913
- """Given a value spelled "algo:digest", append the digest to a list
914
- pointed to in a dict by the algo name."""
915
- if not parser.values.hashes:
916
- parser.values.hashes = {}
917
- try:
918
- algo, digest = value.split(":", 1)
919
- except ValueError:
920
- parser.error(
921
- "Arguments to {} must be a hash name " # noqa
922
- "followed by a value, like --hash=sha256:"
923
- "abcde...".format(opt_str)
924
- )
925
- if algo not in STRONG_HASHES:
926
- parser.error(
927
- "Allowed hash algorithms for {} are {}.".format( # noqa
928
- opt_str, ", ".join(STRONG_HASHES)
929
- )
930
- )
931
- parser.values.hashes.setdefault(algo, []).append(digest)
932
-
933
-
934
- hash: Callable[..., Option] = partial(
935
- Option,
936
- "--hash",
937
- # Hash values eventually end up in InstallRequirement.hashes due to
938
- # __dict__ copying in process_line().
939
- dest="hashes",
940
- action="callback",
941
- callback=_handle_merge_hash,
942
- type="string",
943
- help="Verify that the package's archive matches this "
944
- "hash before installing. Example: --hash=sha256:abcdef...",
945
- )
946
-
947
-
948
- require_hashes: Callable[..., Option] = partial(
949
- Option,
950
- "--require-hashes",
951
- dest="require_hashes",
952
- action="store_true",
953
- default=False,
954
- help="Require a hash to check each requirement against, for "
955
- "repeatable installs. This option is implied when any package in a "
956
- "requirements file has a --hash option.",
957
- )
958
-
959
-
960
- list_path: Callable[..., Option] = partial(
961
- PipOption,
962
- "--path",
963
- dest="path",
964
- type="path",
965
- action="append",
966
- help="Restrict to the specified installation path for listing "
967
- "packages (can be used multiple times).",
968
- )
969
-
970
-
971
- def check_list_path_option(options: Values) -> None:
972
- if options.path and (options.user or options.local):
973
- raise CommandError("Cannot combine '--path' with '--user' or '--local'")
974
-
975
-
976
- list_exclude: Callable[..., Option] = partial(
977
- PipOption,
978
- "--exclude",
979
- dest="excludes",
980
- action="append",
981
- metavar="package",
982
- type="package_name",
983
- help="Exclude specified package from the output",
984
- )
985
-
986
-
987
- no_python_version_warning: Callable[..., Option] = partial(
988
- Option,
989
- "--no-python-version-warning",
990
- dest="no_python_version_warning",
991
- action="store_true",
992
- default=False,
993
- help="Silence deprecation warnings for upcoming unsupported Pythons.",
994
- )
995
-
996
-
997
- # Features that are now always on. A warning is printed if they are used.
998
- ALWAYS_ENABLED_FEATURES = [
999
- "no-binary-enable-wheel-cache", # always on since 23.1
1000
- ]
1001
-
1002
- use_new_feature: Callable[..., Option] = partial(
1003
- Option,
1004
- "--use-feature",
1005
- dest="features_enabled",
1006
- metavar="feature",
1007
- action="append",
1008
- default=[],
1009
- choices=[
1010
- "fast-deps",
1011
- "truststore",
1012
- ]
1013
- + ALWAYS_ENABLED_FEATURES,
1014
- help="Enable new functionality, that may be backward incompatible.",
1015
- )
1016
-
1017
- use_deprecated_feature: Callable[..., Option] = partial(
1018
- Option,
1019
- "--use-deprecated",
1020
- dest="deprecated_features_enabled",
1021
- metavar="feature",
1022
- action="append",
1023
- default=[],
1024
- choices=[
1025
- "legacy-resolver",
1026
- ],
1027
- help=("Enable deprecated functionality, that will be removed in the future."),
1028
- )
1029
-
1030
-
1031
- ##########
1032
- # groups #
1033
- ##########
1034
-
1035
- general_group: Dict[str, Any] = {
1036
- "name": "General Options",
1037
- "options": [
1038
- help_,
1039
- debug_mode,
1040
- isolated_mode,
1041
- require_virtualenv,
1042
- python,
1043
- verbose,
1044
- version,
1045
- quiet,
1046
- log,
1047
- no_input,
1048
- keyring_provider,
1049
- proxy,
1050
- retries,
1051
- timeout,
1052
- exists_action,
1053
- trusted_host,
1054
- cert,
1055
- client_cert,
1056
- cache_dir,
1057
- no_cache,
1058
- disable_pip_version_check,
1059
- no_color,
1060
- no_python_version_warning,
1061
- use_new_feature,
1062
- use_deprecated_feature,
1063
- ],
1064
- }
1065
-
1066
- index_group: Dict[str, Any] = {
1067
- "name": "Package Index Options",
1068
- "options": [
1069
- index_url,
1070
- extra_index_url,
1071
- no_index,
1072
- find_links,
1073
- ],
1074
- }
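The version-parsing contract of `_convert_python_version` above is easiest to see with concrete inputs. Below is a minimal standalone sketch; the `convert` helper is a hypothetical copy written for illustration, not part of pip's public API:

```python
from typing import Optional, Tuple

def convert(value: str) -> Tuple[Optional[Tuple[int, ...]], Optional[str]]:
    # Mirrors the deleted _convert_python_version: "" -> (None, None),
    # "37" -> ((3, 7), None), "3.7.3" -> ((3, 7, 3), None).
    if not value:
        return (None, None)
    parts = value.split(".")
    if len(parts) > 3:
        return ((), "at most three version parts are allowed")
    if len(parts) == 1 and len(parts[0]) > 1:
        parts = [parts[0][0], parts[0][1:]]  # "37" -> ["3", "7"]
    try:
        return (tuple(int(part) for part in parts), None)
    except ValueError:
        return ((), "each version part must be an integer")

assert convert("") == (None, None)
assert convert("3") == ((3,), None)
assert convert("37") == ((3, 7), None)
assert convert("3.7.3") == ((3, 7, 3), None)
assert convert("3.x")[1] == "each version part must be an integer"
```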
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/markup.py DELETED
@@ -1,246 +0,0 @@
- import re
- from ast import literal_eval
- from operator import attrgetter
- from typing import Callable, Iterable, List, Match, NamedTuple, Optional, Tuple, Union
-
- from ._emoji_replace import _emoji_replace
- from .emoji import EmojiVariant
- from .errors import MarkupError
- from .style import Style
- from .text import Span, Text
-
- RE_TAGS = re.compile(
-     r"""((\\*)\[([a-z#/@][^[]*?)])""",
-     re.VERBOSE,
- )
-
- RE_HANDLER = re.compile(r"^([\w.]*?)(\(.*?\))?$")
-
-
- class Tag(NamedTuple):
-     """A tag in console markup."""
-
-     name: str
-     """The tag name. e.g. 'bold'."""
-     parameters: Optional[str]
-     """Any additional parameters after the name."""
-
-     def __str__(self) -> str:
-         return (
-             self.name if self.parameters is None else f"{self.name} {self.parameters}"
-         )
-
-     @property
-     def markup(self) -> str:
-         """Get the string representation of this tag."""
-         return (
-             f"[{self.name}]"
-             if self.parameters is None
-             else f"[{self.name}={self.parameters}]"
-         )
-
-
- _ReStringMatch = Match[str]  # regex match object
- _ReSubCallable = Callable[[_ReStringMatch], str]  # Callable invoked by re.sub
- _EscapeSubMethod = Callable[[_ReSubCallable, str], str]  # Sub method of a compiled re
-
-
- def escape(
-     markup: str,
-     _escape: _EscapeSubMethod = re.compile(r"(\\*)(\[[a-z#/@][^[]*?])").sub,
- ) -> str:
-     """Escapes text so that it won't be interpreted as markup.
-
-     Args:
-         markup (str): Content to be inserted into markup.
-
-     Returns:
-         str: Markup with square brackets escaped.
-     """
-
-     def escape_backslashes(match: Match[str]) -> str:
-         """Called by re.sub replace matches."""
-         backslashes, text = match.groups()
-         return f"{backslashes}{backslashes}\\{text}"
-
-     markup = _escape(escape_backslashes, markup)
-     return markup
-
-
- def _parse(markup: str) -> Iterable[Tuple[int, Optional[str], Optional[Tag]]]:
-     """Parse markup into an iterable of tuples of (position, text, tag).
-
-     Args:
-         markup (str): A string containing console markup
-
-     """
-     position = 0
-     _divmod = divmod
-     _Tag = Tag
-     for match in RE_TAGS.finditer(markup):
-         full_text, escapes, tag_text = match.groups()
-         start, end = match.span()
-         if start > position:
-             yield start, markup[position:start], None
-         if escapes:
-             backslashes, escaped = _divmod(len(escapes), 2)
-             if backslashes:
-                 # Literal backslashes
-                 yield start, "\\" * backslashes, None
-                 start += backslashes * 2
-             if escaped:
-                 # Escape of tag
-                 yield start, full_text[len(escapes) :], None
-                 position = end
-                 continue
-         text, equals, parameters = tag_text.partition("=")
-         yield start, None, _Tag(text, parameters if equals else None)
-         position = end
-     if position < len(markup):
-         yield position, markup[position:], None
-
-
- def render(
-     markup: str,
-     style: Union[str, Style] = "",
-     emoji: bool = True,
-     emoji_variant: Optional[EmojiVariant] = None,
- ) -> Text:
-     """Render console markup into a Text instance.
-
-     Args:
-         markup (str): A string containing console markup.
-         emoji (bool, optional): Also render emoji code. Defaults to True.
-
-     Raises:
-         MarkupError: If there is a syntax error in the markup.
-
-     Returns:
-         Text: A Text instance.
-     """
-     emoji_replace = _emoji_replace
-     if "[" not in markup:
-         return Text(
-             emoji_replace(markup, default_variant=emoji_variant) if emoji else markup,
-             style=style,
-         )
-     text = Text(style=style)
-     append = text.append
-     normalize = Style.normalize
-
-     style_stack: List[Tuple[int, Tag]] = []
-     pop = style_stack.pop
-
-     spans: List[Span] = []
-     append_span = spans.append
-
-     _Span = Span
-     _Tag = Tag
-
-     def pop_style(style_name: str) -> Tuple[int, Tag]:
-         """Pop tag matching given style name."""
-         for index, (_, tag) in enumerate(reversed(style_stack), 1):
-             if tag.name == style_name:
-                 return pop(-index)
-         raise KeyError(style_name)
-
-     for position, plain_text, tag in _parse(markup):
-         if plain_text is not None:
-             # Handle open brace escapes, where the brace is not part of a tag.
-             plain_text = plain_text.replace("\\[", "[")
-             append(emoji_replace(plain_text) if emoji else plain_text)
-         elif tag is not None:
-             if tag.name.startswith("/"):  # Closing tag
-                 style_name = tag.name[1:].strip()
-
-                 if style_name:  # explicit close
-                     style_name = normalize(style_name)
-                     try:
-                         start, open_tag = pop_style(style_name)
-                     except KeyError:
-                         raise MarkupError(
-                             f"closing tag '{tag.markup}' at position {position} doesn't match any open tag"
-                         ) from None
-                 else:  # implicit close
-                     try:
-                         start, open_tag = pop()
-                     except IndexError:
-                         raise MarkupError(
-                             f"closing tag '[/]' at position {position} has nothing to close"
-                         ) from None
-
-                 if open_tag.name.startswith("@"):
-                     if open_tag.parameters:
-                         handler_name = ""
-                         parameters = open_tag.parameters.strip()
-                         handler_match = RE_HANDLER.match(parameters)
-                         if handler_match is not None:
-                             handler_name, match_parameters = handler_match.groups()
-                             parameters = (
-                                 "()" if match_parameters is None else match_parameters
-                             )
-
-                         try:
-                             meta_params = literal_eval(parameters)
-                         except SyntaxError as error:
-                             raise MarkupError(
-                                 f"error parsing {parameters!r} in {open_tag.parameters!r}; {error.msg}"
-                             )
-                         except Exception as error:
-                             raise MarkupError(
-                                 f"error parsing {open_tag.parameters!r}; {error}"
-                             ) from None
-
-                         if handler_name:
-                             meta_params = (
-                                 handler_name,
-                                 meta_params
-                                 if isinstance(meta_params, tuple)
-                                 else (meta_params,),
-                             )
-
-                     else:
-                         meta_params = ()
-
-                     append_span(
-                         _Span(
-                             start, len(text), Style(meta={open_tag.name: meta_params})
-                         )
-                     )
-                 else:
-                     append_span(_Span(start, len(text), str(open_tag)))
-
-             else:  # Opening tag
-                 normalized_tag = _Tag(normalize(tag.name), tag.parameters)
-                 style_stack.append((len(text), normalized_tag))
-
-     text_length = len(text)
-     while style_stack:
-         start, tag = style_stack.pop()
-         style = str(tag)
-         if style:
-             append_span(_Span(start, text_length, style))
-
-     text.spans = sorted(spans[::-1], key=attrgetter("start"))
-     return text
-
-
- if __name__ == "__main__":  # pragma: no cover
-
-     MARKUP = [
-         "[red]Hello World[/red]",
-         "[magenta]Hello [b]World[/b]",
-         "[bold]Bold[italic] bold and italic [/bold]italic[/italic]",
-         "Click [link=https://www.willmcgugan.com]here[/link] to visit my Blog",
-         ":warning-emoji: [bold red blink] DANGER![/]",
-     ]
-
-     from pip._vendor.rich import print
-     from pip._vendor.rich.table import Table
-
-     grid = Table("Markup", "Result", padding=(0, 1))
-
-     for markup in MARKUP:
-         grid.add_row(Text(markup), markup)
-
-     print(grid)
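A quick sanity check on the escaping and rendering logic above; a minimal usage sketch, assuming the module is importable as `pip._vendor.rich.markup`:

```python
from pip._vendor.rich.markup import escape, render

# escape() backslash-escapes anything that looks like a tag, and doubles
# existing backslashes so the escape survives round-tripping.
assert escape("[bold]") == r"\[bold]"
assert escape("plain text") == "plain text"

# render() turns markup into a styled Text instance; the escaped form
# comes back out as literal brackets in the plain text.
text = render(r"\[bold] is not [bold]bold[/bold]")
assert text.plain == "[bold] is not bold"
```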
spaces/Big-Web/MMSD/env/Scripts/deactivate.bat DELETED
@@ -1,22 +0,0 @@
- @echo off
-
- if defined _OLD_VIRTUAL_PROMPT (
-     set "PROMPT=%_OLD_VIRTUAL_PROMPT%"
- )
- set _OLD_VIRTUAL_PROMPT=
-
- if defined _OLD_VIRTUAL_PYTHONHOME (
-     set "PYTHONHOME=%_OLD_VIRTUAL_PYTHONHOME%"
-     set _OLD_VIRTUAL_PYTHONHOME=
- )
-
- if defined _OLD_VIRTUAL_PATH (
-     set "PATH=%_OLD_VIRTUAL_PATH%"
- )
-
- set _OLD_VIRTUAL_PATH=
-
- set VIRTUAL_ENV=
- set VIRTUAL_ENV_PROMPT=
-
- :END
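The script simply restores whatever `activate.bat` stashed in the `_OLD_VIRTUAL_*` variables. The same save/restore pattern, sketched in Python for clarity (the venv path below is hypothetical):

```python
import os

# activate: stash the old value before overriding
os.environ["_OLD_VIRTUAL_PATH"] = os.environ.get("PATH", "")
os.environ["PATH"] = r"C:\venv\Scripts;" + os.environ["PATH"]  # hypothetical venv path

# deactivate: put the stashed value back and clear the venv markers
if "_OLD_VIRTUAL_PATH" in os.environ:
    os.environ["PATH"] = os.environ.pop("_OLD_VIRTUAL_PATH")
os.environ.pop("VIRTUAL_ENV", None)
os.environ.pop("VIRTUAL_ENV_PROMPT", None)
```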
spaces/CAMP-ViL/Xplainer/inference.py DELETED
@@ -1,116 +0,0 @@
- import argparse
- import gc
- from pathlib import Path
-
- import torch
- from torch.utils.data import DataLoader
- from tqdm import tqdm
-
- from chestxray14 import ChestXray14Dataset
- from chexpert import CheXpertDataset
- from descriptors import disease_descriptors_chexpert, disease_descriptors_chestxray14
- from model import InferenceModel
- from utils import calculate_auroc
-
- torch.multiprocessing.set_sharing_strategy('file_system')
-
-
- def inference_chexpert():
-     split = 'test'
-     dataset = CheXpertDataset(f'data/chexpert/{split}_labels.csv')  # also do test
-     dataloader = DataLoader(dataset, batch_size=1, shuffle=False, collate_fn=lambda x: x, num_workers=0)
-     inference_model = InferenceModel()
-     all_descriptors = inference_model.get_all_descriptors(disease_descriptors_chexpert)
-
-     all_labels = []
-     all_probs_neg = []
-
-     for batch in tqdm(dataloader):
-         batch = batch[0]
-         image_paths, labels, keys = batch
-         image_paths = [Path(image_path) for image_path in image_paths]
-         agg_probs = []
-         agg_negative_probs = []
-         for image_path in image_paths:
-             probs, negative_probs = inference_model.get_descriptor_probs(image_path, descriptors=all_descriptors)
-             agg_probs.append(probs)
-             agg_negative_probs.append(negative_probs)
-         probs = {}  # Aggregated
-         negative_probs = {}  # Aggregated
-         for key in agg_probs[0].keys():
-             probs[key] = sum([p[key] for p in agg_probs]) / len(agg_probs)  # Mean Aggregation
-
-         for key in agg_negative_probs[0].keys():
-             negative_probs[key] = sum([p[key] for p in agg_negative_probs]) / len(agg_negative_probs)  # Mean Aggregation
-
-         disease_probs, negative_disease_probs = inference_model.get_diseases_probs(disease_descriptors_chexpert, pos_probs=probs,
-                                                                                    negative_probs=negative_probs)
-         predicted_diseases, prob_vector_neg_prompt = inference_model.get_predictions_bin_prompting(disease_descriptors_chexpert,
-                                                                                                    disease_probs=disease_probs,
-                                                                                                    negative_disease_probs=negative_disease_probs,
-                                                                                                    keys=keys)
-         all_labels.append(labels)
-         all_probs_neg.append(prob_vector_neg_prompt)
-
-     all_labels = torch.stack(all_labels)
-     all_probs_neg = torch.stack(all_probs_neg)
-
-     # evaluation
-     existing_mask = sum(all_labels, 0) > 0
-     all_labels_clean = all_labels[:, existing_mask]
-     all_probs_neg_clean = all_probs_neg[:, existing_mask]
-     all_keys_clean = [key for idx, key in enumerate(keys) if existing_mask[idx]]
-
-     overall_auroc, per_disease_auroc = calculate_auroc(all_probs_neg_clean, all_labels_clean)
-     print(f"AUROC: {overall_auroc:.5f}\n")
-     for idx, key in enumerate(all_keys_clean):
-         print(f'{key}: {per_disease_auroc[idx]:.5f}')
-
-
- def inference_chestxray14():
-     dataset = ChestXray14Dataset(f'data/chestxray14/Data_Entry_2017_v2020_modified.csv')
-     dataloader = DataLoader(dataset, batch_size=1, shuffle=False, collate_fn=lambda x: x, num_workers=1)
-     inference_model = InferenceModel()
-     all_descriptors = inference_model.get_all_descriptors(disease_descriptors_chestxray14)
-
-     all_labels = []
-     all_probs_neg = []
-     for batch in tqdm(dataloader):
-         batch = batch[0]
-         image_path, labels, keys = batch
-         image_path = Path(image_path)
-         probs, negative_probs = inference_model.get_descriptor_probs(image_path, descriptors=all_descriptors)
-         disease_probs, negative_disease_probs = inference_model.get_diseases_probs(disease_descriptors_chestxray14, pos_probs=probs,
-                                                                                    negative_probs=negative_probs)
-         predicted_diseases, prob_vector_neg_prompt = inference_model.get_predictions_bin_prompting(disease_descriptors_chestxray14,
-                                                                                                    disease_probs=disease_probs,
-                                                                                                    negative_disease_probs=negative_disease_probs,
-                                                                                                    keys=keys)
-         all_labels.append(labels)
-         all_probs_neg.append(prob_vector_neg_prompt)
-         gc.collect()
-
-     all_labels = torch.stack(all_labels)
-     all_probs_neg = torch.stack(all_probs_neg)
-
-     existing_mask = sum(all_labels, 0) > 0
-     all_labels_clean = all_labels[:, existing_mask]
-     all_probs_neg_clean = all_probs_neg[:, existing_mask]
-     all_keys_clean = [key for idx, key in enumerate(keys) if existing_mask[idx]]
-
-     overall_auroc, per_disease_auroc = calculate_auroc(all_probs_neg_clean[:, 1:], all_labels_clean[:, 1:])
-     print(f"AUROC: {overall_auroc:.5f}\n")
-     for idx, key in enumerate(all_keys_clean[1:]):
-         print(f'{key}: {per_disease_auroc[idx]:.5f}')
-
-
- if __name__ == '__main__':
-     # add argument parser
-     parser = argparse.ArgumentParser()
-     parser.add_argument('--dataset', type=str, default='chexpert', help='chexpert or chestxray14')
-     args = parser.parse_args()
-
-     if args.dataset == 'chexpert':
-         inference_chexpert()
-     elif args.dataset == 'chestxray14':
-         inference_chestxray14()
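The per-study mean aggregation in `inference_chexpert` (descriptor probabilities are averaged over all images of a study before disease scoring) can be isolated into a small helper; a minimal sketch, assuming each element maps descriptor name to probability:

```python
from typing import Dict, List

def mean_aggregate(per_image_probs: List[Dict[str, float]]) -> Dict[str, float]:
    """Average descriptor probabilities over all images of one study."""
    return {
        key: sum(p[key] for p in per_image_probs) / len(per_image_probs)
        for key in per_image_probs[0]
    }

# Two views of the same study (hypothetical descriptor names):
views = [{"opacity": 0.8, "effusion": 0.2}, {"opacity": 0.6, "effusion": 0.4}]
agg = mean_aggregate(views)
assert abs(agg["opacity"] - 0.7) < 1e-9
assert abs(agg["effusion"] - 0.3) < 1e-9
```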
spaces/CVPR/LIVE/thrust/thrust/iterator/detail/iterator_facade_category.h DELETED
@@ -1,253 +0,0 @@
- /*
-  *  Copyright 2008-2013 NVIDIA Corporation
-  *
-  *  Licensed under the Apache License, Version 2.0 (the "License");
-  *  you may not use this file except in compliance with the License.
-  *  You may obtain a copy of the License at
-  *
-  *      http://www.apache.org/licenses/LICENSE-2.0
-  *
-  *  Unless required by applicable law or agreed to in writing, software
-  *  distributed under the License is distributed on an "AS IS" BASIS,
-  *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  *  See the License for the specific language governing permissions and
-  *  limitations under the License.
-  */
-
- #pragma once
-
- #include <thrust/detail/config.h>
- #include <thrust/detail/type_traits.h>
- #include <thrust/iterator/detail/host_system_tag.h>
- #include <thrust/iterator/detail/device_system_tag.h>
- #include <thrust/iterator/detail/any_system_tag.h>
- #include <thrust/iterator/iterator_categories.h>
- #include <thrust/iterator/detail/iterator_traversal_tags.h>
- #include <thrust/iterator/detail/is_iterator_category.h>
- #include <thrust/iterator/detail/iterator_category_with_system_and_traversal.h>
- #include <thrust/iterator/detail/iterator_category_to_traversal.h>
-
- namespace thrust
- {
-
- namespace detail
- {
-
-
- // adapted from http://www.boost.org/doc/libs/1_37_0/libs/iterator/doc/iterator_facade.html#iterator-category
- //
- // in our implementation, R need not be a reference type to result in a category
- // derived from forward_XXX_iterator_tag
- //
- // iterator-category(T,V,R) :=
- //   if(T is convertible to input_host_iterator_tag
- //      || T is convertible to output_host_iterator_tag
- //      || T is convertible to input_device_iterator_tag
- //      || T is convertible to output_device_iterator_tag
- //   )
- //     return T
- //
- //   else if (T is not convertible to incrementable_traversal_tag)
- //     the program is ill-formed
- //
- //   else return a type X satisfying the following two constraints:
- //
- //     1. X is convertible to X1, and not to any more-derived
- //        type, where X1 is defined by:
- //
- //        if (T is convertible to forward_traversal_tag)
- //        {
- //          if (T is convertible to random_access_traversal_tag)
- //            X1 = random_access_host_iterator_tag
- //          else if (T is convertible to bidirectional_traversal_tag)
- //            X1 = bidirectional_host_iterator_tag
- //          else
- //            X1 = forward_host_iterator_tag
- //        }
- //        else
- //        {
- //          if (T is convertible to single_pass_traversal_tag
- //              && R is convertible to V)
- //            X1 = input_host_iterator_tag
- //          else
- //            X1 = T
- //        }
- //
- //     2. category-to-traversal(X) is convertible to the most
- //        derived traversal tag type to which X is also convertible,
- //        and not to any more-derived traversal tag type.
-
-
- template<typename System, typename Traversal, typename ValueParam, typename Reference>
-   struct iterator_facade_default_category;
-
-
- // Thrust's implementation of iterator_facade_default_category is slightly
- // different from Boost's equivalent.
- // Thrust does not check is_convertible<Reference, ValueParam> because Reference
- // may not be a complete type at this point, and implementations of is_convertible
- // typically require that both types be complete.
- // Instead, it simply assumes that if is_convertible<Traversal, single_pass_traversal_tag>,
- // then the category is input_iterator_tag
-
-
- // this is the function for standard system iterators
- template<typename Traversal, typename ValueParam, typename Reference>
-   struct iterator_facade_default_category_std :
-     thrust::detail::eval_if<
-       thrust::detail::is_convertible<Traversal, thrust::forward_traversal_tag>::value,
-       thrust::detail::eval_if<
-         thrust::detail::is_convertible<Traversal, thrust::random_access_traversal_tag>::value,
-         thrust::detail::identity_<std::random_access_iterator_tag>,
-         thrust::detail::eval_if<
-           thrust::detail::is_convertible<Traversal, thrust::bidirectional_traversal_tag>::value,
-           thrust::detail::identity_<std::bidirectional_iterator_tag>,
-           thrust::detail::identity_<std::forward_iterator_tag>
-         >
-       >,
-       thrust::detail::eval_if< // XXX note we differ from Boost here
-         thrust::detail::is_convertible<Traversal, thrust::single_pass_traversal_tag>::value,
-         thrust::detail::identity_<std::input_iterator_tag>,
-         thrust::detail::identity_<Traversal>
-       >
-     >
- {
- }; // end iterator_facade_default_category_std
-
-
- // this is the function for host system iterators
- template<typename Traversal, typename ValueParam, typename Reference>
-   struct iterator_facade_default_category_host :
-     thrust::detail::eval_if<
-       thrust::detail::is_convertible<Traversal, thrust::forward_traversal_tag>::value,
-       thrust::detail::eval_if<
-         thrust::detail::is_convertible<Traversal, thrust::random_access_traversal_tag>::value,
-         thrust::detail::identity_<thrust::random_access_host_iterator_tag>,
-         thrust::detail::eval_if<
-           thrust::detail::is_convertible<Traversal, thrust::bidirectional_traversal_tag>::value,
-           thrust::detail::identity_<thrust::bidirectional_host_iterator_tag>,
-           thrust::detail::identity_<thrust::forward_host_iterator_tag>
-         >
-       >,
-       thrust::detail::eval_if< // XXX note we differ from Boost here
-         thrust::detail::is_convertible<Traversal, thrust::single_pass_traversal_tag>::value,
-         thrust::detail::identity_<thrust::input_host_iterator_tag>,
-         thrust::detail::identity_<Traversal>
-       >
-     >
- {
- }; // end iterator_facade_default_category_host
-
-
- // this is the function for device system iterators
- template<typename Traversal, typename ValueParam, typename Reference>
-   struct iterator_facade_default_category_device :
-     thrust::detail::eval_if<
-       thrust::detail::is_convertible<Traversal, thrust::forward_traversal_tag>::value,
-       thrust::detail::eval_if<
-         thrust::detail::is_convertible<Traversal, thrust::random_access_traversal_tag>::value,
-         thrust::detail::identity_<thrust::random_access_device_iterator_tag>,
-         thrust::detail::eval_if<
-           thrust::detail::is_convertible<Traversal, thrust::bidirectional_traversal_tag>::value,
-           thrust::detail::identity_<thrust::bidirectional_device_iterator_tag>,
-           thrust::detail::identity_<thrust::forward_device_iterator_tag>
-         >
-       >,
-       thrust::detail::eval_if<
-         thrust::detail::is_convertible<Traversal, thrust::single_pass_traversal_tag>::value, // XXX note we differ from Boost here
-         thrust::detail::identity_<thrust::input_device_iterator_tag>,
-         thrust::detail::identity_<Traversal>
-       >
-     >
- {
- }; // end iterator_facade_default_category_device
-
-
- // this is the function for any system iterators
- template<typename Traversal, typename ValueParam, typename Reference>
-   struct iterator_facade_default_category_any
- {
-   typedef thrust::detail::iterator_category_with_system_and_traversal<
-     typename iterator_facade_default_category_std<Traversal, ValueParam, Reference>::type,
-     thrust::any_system_tag,
-     Traversal
-   > type;
- }; // end iterator_facade_default_category_any
-
-
- template<typename System, typename Traversal, typename ValueParam, typename Reference>
-   struct iterator_facade_default_category
-     // check for any system
-     : thrust::detail::eval_if<
-         thrust::detail::is_convertible<System, thrust::any_system_tag>::value,
-         iterator_facade_default_category_any<Traversal, ValueParam, Reference>,
-
-         // check for host system
-         thrust::detail::eval_if<
-           thrust::detail::is_convertible<System, thrust::host_system_tag>::value,
-           iterator_facade_default_category_host<Traversal, ValueParam, Reference>,
-
-           // check for device system
-           thrust::detail::eval_if<
-             thrust::detail::is_convertible<System, thrust::device_system_tag>::value,
-             iterator_facade_default_category_device<Traversal, ValueParam, Reference>,
-
-             // if we don't recognize the system, get a standard iterator category
-             // and combine it with System & Traversal
-             thrust::detail::identity_<
-               thrust::detail::iterator_category_with_system_and_traversal<
-                 typename iterator_facade_default_category_std<Traversal, ValueParam, Reference>::type,
-                 System,
-                 Traversal
-               >
-             >
-           >
-         >
-       >
- {};
-
-
- template<typename System, typename Traversal, typename ValueParam, typename Reference>
-   struct iterator_facade_category_impl
- {
-   typedef typename iterator_facade_default_category<
-     System,Traversal,ValueParam,Reference
-   >::type category;
-
-   // we must be able to deduce both Traversal & System from category
-   // otherwise, munge them all together
-   typedef typename thrust::detail::eval_if<
-     thrust::detail::and_<
-       thrust::detail::is_same<
-         Traversal,
-         typename thrust::detail::iterator_category_to_traversal<category>::type
-       >,
-       thrust::detail::is_same<
-         System,
-         typename thrust::detail::iterator_category_to_system<category>::type
-       >
-     >::value,
-     thrust::detail::identity_<category>,
-     thrust::detail::identity_<thrust::detail::iterator_category_with_system_and_traversal<category,System,Traversal> >
-   >::type type;
- }; // end iterator_facade_category_impl
-
-
- template<typename CategoryOrSystem,
-          typename CategoryOrTraversal,
-          typename ValueParam,
-          typename Reference>
-   struct iterator_facade_category
- {
-   typedef typename
-   thrust::detail::eval_if<
-     thrust::detail::is_iterator_category<CategoryOrTraversal>::value,
-     thrust::detail::identity_<CategoryOrTraversal>, // categories are fine as-is
-     iterator_facade_category_impl<CategoryOrSystem, CategoryOrTraversal, ValueParam, Reference>
-   >::type type;
- }; // end iterator_facade_category
-
-
- } // end detail
- } // end thrust
spaces/CVPR/WALT/mmdet/models/necks/channel_mapper.py DELETED
@@ -1,74 +0,0 @@
- import torch.nn as nn
- from mmcv.cnn import ConvModule, xavier_init
-
- from ..builder import NECKS
-
-
- @NECKS.register_module()
- class ChannelMapper(nn.Module):
-     r"""Channel Mapper to reduce/increase channels of backbone features.
-
-     This is used to reduce/increase channels of backbone features.
-
-     Args:
-         in_channels (List[int]): Number of input channels per scale.
-         out_channels (int): Number of output channels (used at each scale).
-         kernel_size (int, optional): kernel_size for reducing channels (used
-             at each scale). Default: 3.
-         conv_cfg (dict, optional): Config dict for convolution layer.
-             Default: None.
-         norm_cfg (dict, optional): Config dict for normalization layer.
-             Default: None.
-         act_cfg (dict, optional): Config dict for activation layer in
-             ConvModule. Default: dict(type='ReLU').
-
-     Example:
-         >>> import torch
-         >>> in_channels = [2, 3, 5, 7]
-         >>> scales = [340, 170, 84, 43]
-         >>> inputs = [torch.rand(1, c, s, s)
-         ...           for c, s in zip(in_channels, scales)]
-         >>> self = ChannelMapper(in_channels, 11, 3).eval()
-         >>> outputs = self.forward(inputs)
-         >>> for i in range(len(outputs)):
-         ...     print(f'outputs[{i}].shape = {outputs[i].shape}')
-         outputs[0].shape = torch.Size([1, 11, 340, 340])
-         outputs[1].shape = torch.Size([1, 11, 170, 170])
-         outputs[2].shape = torch.Size([1, 11, 84, 84])
-         outputs[3].shape = torch.Size([1, 11, 43, 43])
-     """
-
-     def __init__(self,
-                  in_channels,
-                  out_channels,
-                  kernel_size=3,
-                  conv_cfg=None,
-                  norm_cfg=None,
-                  act_cfg=dict(type='ReLU')):
-         super(ChannelMapper, self).__init__()
-         assert isinstance(in_channels, list)
-
-         self.convs = nn.ModuleList()
-         for in_channel in in_channels:
-             self.convs.append(
-                 ConvModule(
-                     in_channel,
-                     out_channels,
-                     kernel_size,
-                     padding=(kernel_size - 1) // 2,
-                     conv_cfg=conv_cfg,
-                     norm_cfg=norm_cfg,
-                     act_cfg=act_cfg))
-
-     # default init_weights for conv(msra) and norm in ConvModule
-     def init_weights(self):
-         """Initialize the weights of ChannelMapper module."""
-         for m in self.modules():
-             if isinstance(m, nn.Conv2d):
-                 xavier_init(m, distribution='uniform')
-
-     def forward(self, inputs):
-         """Forward function."""
-         assert len(inputs) == len(self.convs)
-         outs = [self.convs[i](inputs[i]) for i in range(len(inputs))]
-         return tuple(outs)
spaces/CVPR/regionclip-demo/datasets/README.md DELETED
@@ -1,140 +0,0 @@
- # Use Builtin Datasets
-
- A dataset can be used by accessing [DatasetCatalog](https://detectron2.readthedocs.io/modules/data.html#detectron2.data.DatasetCatalog)
- for its data, or [MetadataCatalog](https://detectron2.readthedocs.io/modules/data.html#detectron2.data.MetadataCatalog) for its metadata (class names, etc).
- This document explains how to set up the builtin datasets so they can be used by the above APIs.
- [Use Custom Datasets](https://detectron2.readthedocs.io/tutorials/datasets.html) gives a deeper dive on how to use `DatasetCatalog` and `MetadataCatalog`,
- and how to add new datasets to them.
-
- Detectron2 has builtin support for a few datasets.
- The datasets are assumed to exist in a directory specified by the environment variable
- `DETECTRON2_DATASETS`.
- Under this directory, detectron2 will look for datasets in the structure described below, if needed.
- ```
- $DETECTRON2_DATASETS/
-   coco/
-   lvis/
-   cityscapes/
-   VOC20{07,12}/
- ```
-
- You can set the location for builtin datasets by `export DETECTRON2_DATASETS=/path/to/datasets`.
- If left unset, the default is `./datasets` relative to your current working directory.
-
- The [model zoo](https://github.com/facebookresearch/detectron2/blob/master/MODEL_ZOO.md)
- contains configs and models that use these builtin datasets.
-
- ## Expected dataset structure for [COCO instance/keypoint detection](https://cocodataset.org/#download):
-
- ```
- coco/
-   annotations/
-     instances_{train,val}2017.json
-     person_keypoints_{train,val}2017.json
-   {train,val}2017/
-     # image files that are mentioned in the corresponding json
- ```
-
- You can use the 2014 version of the dataset as well.
-
- Some of the builtin tests (`dev/run_*_tests.sh`) use a tiny version of the COCO dataset,
- which you can download with `./datasets/prepare_for_tests.sh`.
-
- ## Expected dataset structure for PanopticFPN:
-
- Extract panoptic annotations from [COCO website](https://cocodataset.org/#download)
- into the following structure:
- ```
- coco/
-   annotations/
-     panoptic_{train,val}2017.json
-   panoptic_{train,val}2017/  # png annotations
-   panoptic_stuff_{train,val}2017/  # generated by the script mentioned below
- ```
-
- Install panopticapi by:
- ```
- pip install git+https://github.com/cocodataset/panopticapi.git
- ```
- Then, run `python datasets/prepare_panoptic_fpn.py`, to extract semantic annotations from panoptic annotations.
-
- ## Expected dataset structure for [LVIS instance segmentation](https://www.lvisdataset.org/dataset):
- ```
- coco/
-   {train,val,test}2017/
- lvis/
-   lvis_v0.5_{train,val}.json
-   lvis_v0.5_image_info_test.json
-   lvis_v1_{train,val}.json
-   lvis_v1_image_info_test{,_challenge}.json
- ```
-
- Install lvis-api by:
- ```
- pip install git+https://github.com/lvis-dataset/lvis-api.git
- ```
-
- To evaluate models trained on the COCO dataset using LVIS annotations,
- run `python datasets/prepare_cocofied_lvis.py` to prepare "cocofied" LVIS annotations.
-
- ## Expected dataset structure for [cityscapes](https://www.cityscapes-dataset.com/downloads/):
- ```
- cityscapes/
-   gtFine/
-     train/
-       aachen/
-         color.png, instanceIds.png, labelIds.png, polygons.json,
-         labelTrainIds.png
-       ...
-     val/
-     test/
-     # below are generated Cityscapes panoptic annotation
-     cityscapes_panoptic_train.json
-     cityscapes_panoptic_train/
-     cityscapes_panoptic_val.json
-     cityscapes_panoptic_val/
-     cityscapes_panoptic_test.json
-     cityscapes_panoptic_test/
-   leftImg8bit/
-     train/
-     val/
-     test/
- ```
- Install cityscapes scripts by:
- ```
- pip install git+https://github.com/mcordts/cityscapesScripts.git
- ```
-
- Note: to create labelTrainIds.png, first prepare the above structure, then run the cityscapes script with:
- ```
- CITYSCAPES_DATASET=/path/to/abovementioned/cityscapes python cityscapesscripts/preparation/createTrainIdLabelImgs.py
- ```
- These files are not needed for instance segmentation.
-
- Note: to generate Cityscapes panoptic dataset, run the cityscapes script with:
- ```
- CITYSCAPES_DATASET=/path/to/abovementioned/cityscapes python cityscapesscripts/preparation/createPanopticImgs.py
- ```
- These files are not needed for semantic and instance segmentation.
-
- ## Expected dataset structure for [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/index.html):
- ```
- VOC20{07,12}/
-   Annotations/
-   ImageSets/
-     Main/
-       trainval.txt
-       test.txt
-       # train.txt or val.txt, if you use these splits
-   JPEGImages/
- ```
-
- ## Expected dataset structure for [ADE20k Scene Parsing](http://sceneparsing.csail.mit.edu/):
- ```
- ADEChallengeData2016/
-   annotations/
-   annotations_detectron2/
-   images/
-   objectInfo150.txt
- ```
- The directory `annotations_detectron2` is generated by running `python datasets/prepare_ade20k_sem_seg.py`.
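A minimal sketch of the catalog APIs referenced at the top of this README, assuming the COCO files above are in place (`coco_2017_val` is one of the builtin dataset names):

```python
from detectron2.data import DatasetCatalog, MetadataCatalog

# A list of dicts (one per image), loaded lazily on first access:
dataset_dicts = DatasetCatalog.get("coco_2017_val")

# Class names and other metadata registered for the same dataset:
class_names = MetadataCatalog.get("coco_2017_val").thing_classes
print(len(dataset_dicts), class_names[:3])
```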
spaces/CarlDennis/HYTTS/text/ger_to_ipa.py DELETED
@@ -1,397 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
- import re
3
- from os.path import join, abspath, dirname
4
- from collections import defaultdict
5
- import epitran
6
-
7
- epi = epitran.Epitran("deu-Latn-nar")
8
-
9
-
10
- def mode_type(mode_in):
11
- """In the case of "sql", this will return an sqlite cursor."""
12
- if mode_in.lower() == "sql":
13
- import sqlite3
14
- conn = sqlite3.connect(join(abspath(dirname(__file__)), "./Resources/de.db"))
15
- return conn.cursor()
16
-
17
-
18
- #TESTS
19
- #NUMBERS ARE TOO HARD!
20
-
21
-
22
-
23
- def preprocess(words):
24
- """Returns a string of words stripped of punctuation"""
25
- punct_str = '!"#$%&\'()*+,-./:;<=>/?@[\\]^_`{|}~«» '
26
- return ' '.join([w.strip(punct_str).lower() for w in words.split()])
27
-
28
-
29
- def preserve_punc(words):
30
- """converts words to IPA and finds punctuation before and after the word."""
31
- words_preserved = []
32
- for w in words.split():
33
- punct_list = ["", preprocess(w), ""]
34
- before = re.search("^([^A-Za-z0-9]+)[A-Za-z]", w)
35
- after = re.search("[A-Za-z]([^A-Za-z0-9]+)$", w)
36
- if before:
37
- punct_list[0] = str(before.group(1))
38
- if after:
39
- punct_list[2] = str(after.group(1))
40
- words_preserved.append(punct_list)
41
- return words_preserved
42
-
43
-
44
-
45
- def apply_punct(triple, as_str=False):
46
- """places surrounding punctuation back on center on a list of preserve_punc triples"""
47
- if type(triple[0]) == list:
48
- for i, t in enumerate(triple):
49
- triple[i] = str(''.join(triple[i]))
50
- if as_str:
51
- return ' '.join(triple)
52
- return triple
53
- if as_str:
54
- return str(''.join(t for t in triple))
55
- return [''.join(t for t in triple)]
56
-
57
-
58
- def _punct_replace_word(original, transcription):
59
- """Get the IPA transcription of word with the original punctuation marks"""
60
- for i, trans_list in enumerate(transcription):
61
- for j, item in enumerate(trans_list):
62
- triple = [original[i][0]] + [item] + [original[i][2]]
63
- transcription[i][j] = apply_punct(triple, as_str=True)
64
- return transcription
65
-
66
-
67
- def fetch_words(words_in, db_type="sql"):
68
- """fetches a list of words from the database"""
69
- asset = mode_type(db_type)
70
- f_result = []
71
- if db_type.lower() == "sql":
72
- for word in words_in:
73
- asset.execute("SELECT Words, phonemes FROM De_words WHERE Words IN (?)", (word,))
74
- result = asset.fetchall()
75
- flag = True
76
- try:
77
- f_result.append(result.pop())
78
- flag = False
79
- except IndexError:
80
- pass
81
- if result == [] and flag is True:
82
- result = epi.transliterate(word)
83
- f_result.append((word, result))
84
- f_result = list(filter(None,f_result))
85
- f_set = set(f_result)
86
- d = defaultdict(list)
87
- for k, v in f_set:
88
- d[k].append(v)
89
- return list(d.items())
90
-
91
- def get_deu(tokens_in, db_type="sql"):
92
- """query the SQL database for the words and return the phonemes in the order of user_in"""
93
- result = fetch_words(tokens_in, db_type)
94
- ordered = []
95
- for word in tokens_in:
96
- this_word = [[i[1] for i in result if i[0] == word]][0]
97
- if this_word:
98
- ordered.append(this_word[0])
99
- else:
100
- ordered.append(["__IGNORE__" + word])
101
- return ordered
102
-
103
-
104
- def deu_to_ipa(deu_list, mark=True):
105
- """converts the deu word lists into IPA transcriptions"""
106
- symbols = {}
107
- ipa_list = [] # the final list of IPA tokens to be returned
108
- for word_list in deu_list:
109
- ipa_word_list = [] # the word list for each word
110
- for word in word_list:
111
- if re.sub("\d*", "", word.replace("__IGNORE__", "")) == "":
112
- pass # do not delete token if it's all numbers
113
- else:
114
- word = re.sub("[0-9]", "", word)
115
- ipa_form = ''
116
- if word.startswith("__IGNORE__"):
117
- ipa_form = word.replace("__IGNORE__", "")
118
- # mark words we couldn't transliterate with an asterisk:
119
-
120
- if mark:
121
- if not re.sub("\d*", "", ipa_form) == "":
122
- ipa_form += "*"
123
- else:
124
- for piece in word.split(" "):
125
- marked = False
126
- unmarked = piece
127
- if piece[0] in ["ˈ", "ˌ"] or piece[0] is None:
128
- marked = True
129
- mark = piece
130
- unmarked = piece[1:]
131
-
132
- if unmarked in symbols:
133
- if marked:
134
- ipa_form += mark + symbols[unmarked]
135
- else:
136
- ipa_form += symbols[unmarked]
137
-
138
- else:
139
- ipa_form += piece
140
- swap_list = [["ˈər", "əˈr"], ["ˈie", "iˈe"]]
141
- for sym in swap_list:
142
- if not ipa_form.startswith(sym[0]):
143
- ipa_form = ipa_form.replace(sym[0], sym[1])
144
- ipa_word_list.append(ipa_form)
145
- ipa_list.append(sorted(list(set(ipa_word_list))))
146
- return ipa_list
147
-
148
-
149
- def get_top(ipa_list):
150
- """Returns only the one result for a query. If multiple entries for words are found, only the first is used."""
151
- return ' '.join([word_list[-1] for word_list in ipa_list])
152
-
153
-
154
- def get_all(ipa_list):
155
- """utilizes an algorithm to discover and return all possible combinations of IPA transcriptions"""
156
- final_size = 1
157
- for word_list in ipa_list:
158
- final_size *= len(word_list)
159
- list_all = ["" for s in range(final_size)]
160
- for i in range(len(ipa_list)):
161
- if i == 0:
162
- swtich_rate = final_size / len(ipa_list[i])
163
- else:
164
- swtich_rate /= len(ipa_list[i])
165
- k = 0
166
- for j in range(final_size):
167
- if (j+1) % int(swtich_rate) == 0:
168
- k += 1
169
- if k == len(ipa_list[i]):
170
- k = 0
171
- list_all[j] = list_all[j] + ipa_list[i][k] + " "
172
- return sorted([sent[:-1] for sent in list_all])
173
-
174
-
175
- def ipa_list(words_in, keep_punct=True, db_type="sql"):
176
- """Returns a list of all the discovered IPA transcriptions for each word."""
177
- if type(words_in) == str:
178
- words = [preserve_punc(w.lower())[0] for w in words_in.split()]
179
- else:
180
- words = [preserve_punc(w.lower())[0] for w in words_in]
181
- deu = get_deu([w[1] for w in words], db_type=db_type)
182
- ipa = deu_to_ipa(deu)
183
- if keep_punct:
184
- ipa = _punct_replace_word(words, ipa)
185
- return ipa
186
-
187
-
188
- def isin_deu(word, db_type="sql"):
189
- """checks if a word is in the deu dictionary. Doesn't strip punctuation.
190
- If given more than one word, returns True only if all words are present."""
191
- if type(word) == str:
192
- word = [preprocess(w) for w in word.split()]
193
- results = fetch_words(word, db_type)
194
- as_set = list(set(t[0] for t in results))
195
- return len(as_set) == len(set(word))
196
-
197
- def replace_number(text):
198
- text = text.replace("1","eins ")
199
- text = text.replace("2","zwei ")
200
- text = text.replace("3","drei ")
201
- text = text.replace("4","vier ")
202
- text = text.replace("5","fünf ")
203
- text = text.replace("6","sechs ")
204
- text = text.replace("7","sieben ")
205
- text = text.replace("8","acht ")
206
- text = text.replace("9","neun ")
207
- text = text.replace("0","null ")
208
- return text
209
-
210
-
211
-
212
- def convert(text, retrieve_all=False, keep_punct=True, mode="sql"):
213
- """takes either a string or list of German words and converts them to IPA"""
214
- text = replace_number(text)
215
- ipa = ipa_list(
216
- words_in=text,
217
- keep_punct=keep_punct,
218
- db_type=mode)
219
- if retrieve_all:
220
- return get_all(ipa)
221
- return get_top(ipa)
222
-
223
-
224
-
225
- _decimal_number_re = re.compile(r'\d+\,\d+')
226
- _euros_pre = re.compile(r'€([0-9\,]*[0-9]+)')
227
- _euros_re = re.compile(r'([0-9\,]*[0-9]+)€')
228
- _ordinal_re = re.compile(r'(der |die |das )([0-9]+)\.')
229
- _clock_re=re.compile(r'\d{1,2}\:\d{2}')
230
- _number_re = re.compile(r'[0-9]+')
231
-
232
- def base(text):
233
- text = text.replace("1", "eins ")
234
- text = text.replace("2", "zwei ")
235
- text = text.replace("3", "drei ")
236
- text = text.replace("4", "vier ")
237
- text = text.replace("5", "fünf ")
238
- text = text.replace("6", "sechs ")
239
- text = text.replace("7", "sieben ")
240
- text = text.replace("8", "acht ")
241
- text = text.replace("9", "neun ")
242
- text = text.replace("0", "null ")
243
- return text
244
-
245
- def tens_to_word(num):
246
- tens = num[0]
247
- ones = num[1]
248
- ones_word = base(ones)
249
-
250
- if num =="10":
251
- return "zehn"
252
- elif num=="11":
253
- return "elf"
254
- elif num=="12":
255
- return "zwölf"
256
-
257
- if tens == "1":
258
- if ones == "6":
259
- ones_word = ones_word[:-1]
260
- elif ones == "7":
261
- ones_word = ones_word[:-2]
262
- return ones_word + "zehn"
263
- else:
264
- tens_word = base(tens)
265
- if ones == "1":
266
- ones_word = ones_word[:-1]
267
- if tens == "2":
268
- tens_word = "zwan"
269
- elif tens == "6":
270
- tens_word = tens_word[:-1]
271
- elif tens == "7":
272
- tens_word = tens_word[:-2]
273
- if tens == "3":
274
- tens_word += "ßig"
275
- else:
276
- tens_word += "zig"
277
- if ones == "0":
278
- return tens_word
279
- else:
280
- return ones_word + " und " + tens_word
281
-
- def huns_to_word(num):
-     huns = num[0]
-     tens = num[1]
-
-     if huns == "1":
-         huns_word = "hundert"
-     else:
-         huns_word = base(huns) + " hundert"
-
-     remain = num_to_word(num[1:])
-     if remain != "":
-         remain = " " + remain
-     return huns_word + remain
-
-
- def thos_to_word(num):
-     thos = num[0]
-     if thos == "1":
-         thos_word = "tausend"
-     else:
-         thos_word = base(thos) + " tausend"
-     remain = num_to_word(num[1:])
-     if remain != "":
-         remain = " " + remain
-     return thos_word + remain
-
-
- def num_to_word(num):
-     num = num.lstrip("0")
-     if num == "":
-         return ""
-     digit = len(num)
-     if digit == 1:
-         return base(num)
-     elif digit == 2:
-         return tens_to_word(num)
-     elif digit == 3:
-         return huns_to_word(num)
-     elif digit == 4:
-         return thos_to_word(num)
-     else:
-         return base(num)
-
-
- def number_to_words(m):
-     m = m.group(0).lstrip("0")
-     if m == "":
-         return "null"
-     return num_to_word(m)
-
- def _expand_ordinal(m):
-     pre = m.group(1)
-     m = m.group(2).lstrip("0")
-
-     if m == "":
-         return pre + "null"
-     num = int(m)
-     # the original `num<=19 & num>=1` mixed bitwise & into a comparison;
-     # a chained comparison expresses the intended range check
-     if 1 <= num <= 19:
-         # the regex consumes the article, so every branch must re-emit `pre`
-         if num == 1:
-             return pre + "erste"
-         elif num == 3:
-             return pre + "dritte"
-         elif num == 7:
-             return pre + "siebte"
-         elif num == 8:
-             return pre + "achte"
-         else:
-             return pre + num_to_word(m) + "te"
-     else:
-         return pre + num_to_word(m) + "ste"
-
-
351
- def _expand_decimal(m):
352
- match=m.group(0)
353
- parts = match.split(',')
354
- if int(parts[0])==0:
355
- return '%s komma %s' % ("null", base(parts[1]))
356
- return '%s komma %s' % (num_to_word(parts[0]),base(parts[1]))
357
-
358
- def _expand_euros(m):
359
- match = m.group(1)
360
- parts = match.split(',')
361
- if len(parts) > 2:
362
- return match + ' euro' # Unexpected format
363
- euros = int(parts[0]) if parts[0] else 0
364
- cents = int(parts[1])*10 if len(parts) > 1 and parts[1] else 0
365
- if euros and cents:
366
- return '%s euro %s' % (euros, cents)
367
- elif euros:
368
- return '%s euro' % (euros)
369
- elif cents:
370
- return '%s cent' % (cents)
371
- else:
372
- return 'null euro'
373
-
374
- def _expand_clock(m):
375
- match = m.group(0)
376
- parts = match.split(':')
377
- if int(parts[0]) == 0:
378
- return '%s Uhr %s' % ("null",num_to_word(parts[1]))
379
- elif int(parts[0]) == 1:
380
- return '%s Uhr %s' % ("ein", num_to_word(parts[1]))
381
- return '%s Uhr %s' % (num_to_word(parts[0]),num_to_word(parts[1]))
382
-
383
- def normalize_numbers(text):
384
- text = re.sub(_euros_pre, _expand_euros, text)
385
- text = re.sub(_euros_re, _expand_euros, text)
386
- text = re.sub(_clock_re, _expand_clock, text)
387
- text = re.sub(_decimal_number_re, _expand_decimal, text)
388
- text = re.sub(_ordinal_re, _expand_ordinal, text)
389
- text = re.sub(_number_re, number_to_words, text)
390
- text=text.replace(" "," ")
391
- return text
392
-
393
- def collapse_whitespace(text):
394
- return re.sub(r'\s+', ' ', text)
395
-
396
- def mark_dark_l(text):
397
- return re.sub(r'l([^aeiouæɑɔəɛɪʊ ]*(?: |$))', lambda x: 'ɫ'+x.group(1), text)
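
Since the number pass above depends only on `re` and the helpers in this file, it can be sanity-checked without the pronunciation database. A minimal sketch; the expected output is my reading of the rules, not output captured from this Space:

```python
# Ordinals only match lowercase articles ("der |die |das "); any digits left
# over after the special cases are spelled out by the final _number_re pass.
text = "der 3. Zug fährt um 14:05 und kostet 12,50€"
print(collapse_whitespace(normalize_numbers(text)))
# expected: "der dritte Zug fährt um vierzehn Uhr fünf und kostet zwölf euro fünfzig"
```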
 
spaces/ChrisCaviar/ControlNet-v1-1/app_normal.py DELETED
@@ -1,104 +0,0 @@
- #!/usr/bin/env python
-
- import gradio as gr
-
- from utils import randomize_seed_fn
-
-
- def create_demo(process, max_images=12, default_num_images=3):
-     with gr.Blocks() as demo:
-         with gr.Row():
-             with gr.Column():
-                 image = gr.Image()
-                 prompt = gr.Textbox(label='Prompt')
-                 run_button = gr.Button('Run')
-                 with gr.Accordion('Advanced options', open=False):
-                     preprocessor_name = gr.Radio(label='Preprocessor',
-                                                  choices=['NormalBae', 'None'],
-                                                  type='value',
-                                                  value='NormalBae')
-                     num_samples = gr.Slider(label='Images',
-                                             minimum=1,
-                                             maximum=max_images,
-                                             value=default_num_images,
-                                             step=1)
-                     image_resolution = gr.Slider(label='Image resolution',
-                                                  minimum=256,
-                                                  maximum=512,
-                                                  value=512,
-                                                  step=256)
-                     preprocess_resolution = gr.Slider(
-                         label='Preprocess resolution',
-                         minimum=128,
-                         maximum=512,
-                         value=384,
-                         step=1)
-                     num_steps = gr.Slider(label='Number of steps',
-                                           minimum=1,
-                                           maximum=100,
-                                           value=20,
-                                           step=1)
-                     guidance_scale = gr.Slider(label='Guidance scale',
-                                                minimum=0.1,
-                                                maximum=30.0,
-                                                value=9.0,
-                                                step=0.1)
-                     seed = gr.Slider(label='Seed',
-                                      minimum=0,
-                                      maximum=1000000,
-                                      step=1,
-                                      value=0,
-                                      randomize=True)
-                     randomize_seed = gr.Checkbox(label='Randomize seed',
-                                                  value=True)
-                     a_prompt = gr.Textbox(
-                         label='Additional prompt',
-                         value='best quality, extremely detailed')
-                     n_prompt = gr.Textbox(
-                         label='Negative prompt',
-                         value=
-                         'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
-                     )
-             with gr.Column():
-                 result = gr.Gallery(label='Output', show_label=False).style(
-                     columns=2, object_fit='scale-down')
-         inputs = [
-             image,
-             prompt,
-             a_prompt,
-             n_prompt,
-             num_samples,
-             image_resolution,
-             preprocess_resolution,
-             num_steps,
-             guidance_scale,
-             seed,
-             preprocessor_name,
-         ]
-         prompt.submit(
-             fn=randomize_seed_fn,
-             inputs=[seed, randomize_seed],
-             outputs=seed,
-         ).then(
-             fn=process,
-             inputs=inputs,
-             outputs=result,
-         )
-         run_button.click(
-             fn=randomize_seed_fn,
-             inputs=[seed, randomize_seed],
-             outputs=seed,
-         ).then(
-             fn=process,
-             inputs=inputs,
-             outputs=result,
-             api_name='normal',
-         )
-     return demo
-
-
- if __name__ == '__main__':
-     from model import Model
-     model = Model(task_name='NormalBae')
-     demo = create_demo(model.process_normal)
-     demo.queue().launch()
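
To smoke-test the UI wiring without the ControlNet weights, any callable with the same eleven-argument signature can stand in for `model.process_normal`; `fake_process` below is a placeholder of mine, not part of the original Space:

```python
# Echoes num_samples copies of the input image; the argument order matches
# the `inputs` list wired up inside create_demo.
def fake_process(image, prompt, a_prompt, n_prompt, num_samples,
                 image_resolution, preprocess_resolution, num_steps,
                 guidance_scale, seed, preprocessor_name):
    return [image] * num_samples

create_demo(fake_process).queue().launch()
```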
 
spaces/CikeyQI/meme-api/meme_generator/memes/knock/__init__.py DELETED
@@ -1,28 +0,0 @@
- from pathlib import Path
- from typing import List
-
- from PIL.Image import Image as IMG
- from pil_utils import BuildImage
-
- from meme_generator import add_meme
- from meme_generator.utils import save_gif
-
- img_dir = Path(__file__).parent / "images"
-
-
- def knock(images: List[BuildImage], texts, args):
-     img = images[0].convert("RGBA").square()
-     # fmt: off
-     locs = [(60, 308, 210, 195), (60, 308, 210, 198), (45, 330, 250, 172), (58, 320, 218, 180),
-             (60, 310, 215, 193), (40, 320, 250, 285), (48, 308, 226, 192), (51, 301, 223, 200)]
-     # fmt: on
-     frames: List[IMG] = []
-     for i in range(8):
-         frame = BuildImage.open(img_dir / f"{i}.png")
-         x, y, w, h = locs[i]
-         frame.paste(img.resize((w, h)), (x, y), below=True)
-         frames.append(frame.image)
-     return save_gif(frames, 0.04)
-
-
- add_meme("knock", knock, min_images=1, max_images=1, keywords=["敲"])
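
The generator can also be driven outside the meme-api service by calling `knock` directly; `avatar.png` is a placeholder path, and this assumes `save_gif` returns a `BytesIO`, as it does elsewhere in meme_generator:

```python
from pil_utils import BuildImage

# Composite a local avatar into the eight knock frames and write the GIF out.
gif = knock([BuildImage.open("avatar.png")], texts=[], args=None)
with open("knock.gif", "wb") as f:
    f.write(gif.getvalue())
```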
 
spaces/Clebersla/RVC_V2_Huggingface_Version/i18n.py DELETED
@@ -1,28 +0,0 @@
- import locale
- import json
- import os
-
-
- def load_language_list(language):
-     with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f:
-         language_list = json.load(f)
-     return language_list
-
-
- class I18nAuto:
-     def __init__(self, language=None):
-         if language in ["Auto", None]:
-             language = locale.getdefaultlocale()[0]  # getlocale can't identify the system's language ((None, None))
-         if not os.path.exists(f"./i18n/{language}.json"):
-             language = "en_US"
-         self.language = language
-         # print("Use Language:", language)
-         self.language_map = load_language_list(language)
-
-     def __call__(self, key):
-         return self.language_map.get(key, key)
-
-     def print(self):
-         print("Use Language:", self.language)
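
Usage is a single callable that echoes the key back when no translation exists; the key below is an arbitrary example, not one guaranteed to be in the bundled JSON files:

```python
i18n = I18nAuto()       # resolves the system locale, falls back to en_US
print(i18n.language)
print(i18n("Convert"))  # translated string if the JSON has it, otherwise "Convert"
```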
 
spaces/Codecooker/rvcapi/src/mdx.py DELETED
@@ -1,287 +0,0 @@
- import gc
- import hashlib
- import os
- import queue
- import threading
- import warnings
-
- import librosa
- import numpy as np
- import onnxruntime as ort
- import soundfile as sf
- import torch
- from tqdm import tqdm
-
- warnings.filterwarnings("ignore")
- stem_naming = {'Vocals': 'Instrumental', 'Other': 'Instruments', 'Instrumental': 'Vocals', 'Drums': 'Drumless', 'Bass': 'Bassless'}
-
-
- class MDXModel:
-     def __init__(self, device, dim_f, dim_t, n_fft, hop=1024, stem_name=None, compensation=1.000):
-         self.dim_f = dim_f
-         self.dim_t = dim_t
-         self.dim_c = 4
-         self.n_fft = n_fft
-         self.hop = hop
-         self.stem_name = stem_name
-         self.compensation = compensation
-
-         self.n_bins = self.n_fft // 2 + 1
-         self.chunk_size = hop * (self.dim_t - 1)
-         self.window = torch.hann_window(window_length=self.n_fft, periodic=True).to(device)
-
-         out_c = self.dim_c
-
-         self.freq_pad = torch.zeros([1, out_c, self.n_bins - self.dim_f, self.dim_t]).to(device)
-
-     def stft(self, x):
-         x = x.reshape([-1, self.chunk_size])
-         x = torch.stft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True, return_complex=True)
-         x = torch.view_as_real(x)
-         x = x.permute([0, 3, 1, 2])
-         x = x.reshape([-1, 2, 2, self.n_bins, self.dim_t]).reshape([-1, 4, self.n_bins, self.dim_t])
-         return x[:, :, :self.dim_f]
-
-     def istft(self, x, freq_pad=None):
-         freq_pad = self.freq_pad.repeat([x.shape[0], 1, 1, 1]) if freq_pad is None else freq_pad
-         x = torch.cat([x, freq_pad], -2)
-         # c = 4*2 if self.target_name=='*' else 2
-         x = x.reshape([-1, 2, 2, self.n_bins, self.dim_t]).reshape([-1, 2, self.n_bins, self.dim_t])
-         x = x.permute([0, 2, 3, 1])
-         x = x.contiguous()
-         x = torch.view_as_complex(x)
-         x = torch.istft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True)
-         return x.reshape([-1, 2, self.chunk_size])
-
-
- class MDX:
-     DEFAULT_SR = 44100
-     # Unit: seconds
-     DEFAULT_CHUNK_SIZE = 0 * DEFAULT_SR
-     DEFAULT_MARGIN_SIZE = 1 * DEFAULT_SR
-
-     DEFAULT_PROCESSOR = 0
-
-     def __init__(self, model_path: str, params: MDXModel, processor=DEFAULT_PROCESSOR):
-
-         # Set the device and the provider (CPU or CUDA)
-         self.device = torch.device(f'cuda:{processor}') if processor >= 0 else torch.device('cpu')
-         self.provider = ['CUDAExecutionProvider'] if processor >= 0 else ['CPUExecutionProvider']
-
-         self.model = params
-
-         # Load the ONNX model using ONNX Runtime
-         self.ort = ort.InferenceSession(model_path, providers=self.provider)
-         # Preload the model for faster performance
-         self.ort.run(None, {'input': torch.rand(1, 4, params.dim_f, params.dim_t).numpy()})
-         self.process = lambda spec: self.ort.run(None, {'input': spec.cpu().numpy()})[0]
-
-         self.prog = None
-
-     @staticmethod
-     def get_hash(model_path):
-         try:
-             with open(model_path, 'rb') as f:
-                 f.seek(-10000 * 1024, 2)
-                 model_hash = hashlib.md5(f.read()).hexdigest()
-         except OSError:
-             # files smaller than ~10 MB can't seek that far back; hash the whole file
-             model_hash = hashlib.md5(open(model_path, 'rb').read()).hexdigest()
-
-         return model_hash
-
-     @staticmethod
-     def segment(wave, combine=True, chunk_size=DEFAULT_CHUNK_SIZE, margin_size=DEFAULT_MARGIN_SIZE):
-         """
-         Segment or join segmented wave array
-
-         Args:
-             wave: (np.array) Wave array to be segmented or joined
-             combine: (bool) If True, combines segmented wave array. If False, segments wave array.
-             chunk_size: (int) Size of each segment (in samples)
-             margin_size: (int) Size of margin between segments (in samples)
-
-         Returns:
-             numpy array: Segmented or joined wave array
-         """
-
-         if combine:
-             processed_wave = None  # Initializing as None instead of [] for later numpy array concatenation
-             for segment_count, segment in enumerate(wave):
-                 start = 0 if segment_count == 0 else margin_size
-                 end = None if segment_count == len(wave) - 1 else -margin_size
-                 if margin_size == 0:
-                     end = None
-                 if processed_wave is None:  # Create array for first segment
-                     processed_wave = segment[:, start:end]
-                 else:  # Concatenate to existing array for subsequent segments
-                     processed_wave = np.concatenate((processed_wave, segment[:, start:end]), axis=-1)
-
-         else:
-             processed_wave = []
-             sample_count = wave.shape[-1]
-
-             if chunk_size <= 0 or chunk_size > sample_count:
-                 chunk_size = sample_count
-
-             if margin_size > chunk_size:
-                 margin_size = chunk_size
-
-             for segment_count, skip in enumerate(range(0, sample_count, chunk_size)):
-
-                 margin = 0 if segment_count == 0 else margin_size
-                 end = min(skip + chunk_size + margin_size, sample_count)
-                 start = skip - margin
-
-                 cut = wave[:, start:end].copy()
-                 processed_wave.append(cut)
-
-                 if end == sample_count:
-                     break
-
-         return processed_wave
-
-     def pad_wave(self, wave):
-         """
-         Pad the wave array to match the required chunk size
-
-         Args:
-             wave: (np.array) Wave array to be padded
-
-         Returns:
-             tuple: (padded_wave, pad, trim)
-                 - padded_wave: Padded wave array
-                 - pad: Number of samples that were padded
-                 - trim: Number of samples that were trimmed
-         """
-         n_sample = wave.shape[1]
-         trim = self.model.n_fft // 2
-         gen_size = self.model.chunk_size - 2 * trim
-         pad = gen_size - n_sample % gen_size
-
-         # Padded wave
-         wave_p = np.concatenate((np.zeros((2, trim)), wave, np.zeros((2, pad)), np.zeros((2, trim))), 1)
-
-         mix_waves = []
-         for i in range(0, n_sample + pad, gen_size):
-             waves = np.array(wave_p[:, i:i + self.model.chunk_size])
-             mix_waves.append(waves)
-
-         mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(self.device)
-
-         return mix_waves, pad, trim
-
-     def _process_wave(self, mix_waves, trim, pad, q: queue.Queue, _id: int):
-         """
-         Process each wave segment in a multi-threaded environment
-
-         Args:
-             mix_waves: (torch.Tensor) Wave segments to be processed
-             trim: (int) Number of samples trimmed during padding
-             pad: (int) Number of samples padded during padding
-             q: (queue.Queue) Queue to hold the processed wave segments
-             _id: (int) Identifier of the processed wave segment
-
-         Returns:
-             numpy array: Processed wave segment
-         """
-         mix_waves = mix_waves.split(1)
-         with torch.no_grad():
-             pw = []
-             for mix_wave in mix_waves:
-                 self.prog.update()
-                 spec = self.model.stft(mix_wave)
-                 processed_spec = torch.tensor(self.process(spec))
-                 processed_wav = self.model.istft(processed_spec.to(self.device))
-                 processed_wav = processed_wav[:, :, trim:-trim].transpose(0, 1).reshape(2, -1).cpu().numpy()
-                 pw.append(processed_wav)
-         processed_signal = np.concatenate(pw, axis=-1)[:, :-pad]
-         q.put({_id: processed_signal})
-         return processed_signal
-
-     def process_wave(self, wave: np.array, mt_threads=1):
-         """
-         Process the wave array in a multi-threaded environment
-
-         Args:
-             wave: (np.array) Wave array to be processed
-             mt_threads: (int) Number of threads to be used for processing
-
-         Returns:
-             numpy array: Processed wave array
-         """
-         self.prog = tqdm(total=0)
-         chunk = wave.shape[-1] // mt_threads
-         waves = self.segment(wave, False, chunk)
-
-         # Create a queue to hold the processed wave segments
-         q = queue.Queue()
-         threads = []
-         for c, batch in enumerate(waves):
-             mix_waves, pad, trim = self.pad_wave(batch)
-             self.prog.total = len(mix_waves) * mt_threads
-             thread = threading.Thread(target=self._process_wave, args=(mix_waves, trim, pad, q, c))
-             thread.start()
-             threads.append(thread)
-         for thread in threads:
-             thread.join()
-         self.prog.close()
-
-         processed_batches = []
-         while not q.empty():
-             processed_batches.append(q.get())
-         processed_batches = [list(wave.values())[0] for wave in
-                              sorted(processed_batches, key=lambda d: list(d.keys())[0])]
-         assert len(processed_batches) == len(waves), 'Incomplete processed batches, please reduce batch size!'
-         return self.segment(processed_batches, True, chunk)
-
-
- def run_mdx(model_params, output_dir, model_path, filename, exclude_main=False, exclude_inversion=False, suffix=None, invert_suffix=None, denoise=False, keep_orig=True, m_threads=2):
-     device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu')
-
-     # probing device properties on a CPU device would raise; only query VRAM on CUDA
-     if torch.cuda.is_available():
-         device_properties = torch.cuda.get_device_properties(device)
-         vram_gb = device_properties.total_memory / 1024**3
-         m_threads = 1 if vram_gb < 8 else 2
-
-     model_hash = MDX.get_hash(model_path)
-     mp = model_params.get(model_hash)
-     model = MDXModel(
-         device,
-         dim_f=mp["mdx_dim_f_set"],
-         dim_t=2 ** mp["mdx_dim_t_set"],
-         n_fft=mp["mdx_n_fft_scale_set"],
-         stem_name=mp["primary_stem"],
-         compensation=mp["compensate"]
-     )
-
-     mdx_sess = MDX(model_path, model)
-     wave, sr = librosa.load(filename, mono=False, sr=44100)
-     # normalizing input wave gives better output
-     peak = max(np.max(wave), abs(np.min(wave)))
-     wave /= peak
-     if denoise:
-         wave_processed = -(mdx_sess.process_wave(-wave, m_threads)) + (mdx_sess.process_wave(wave, m_threads))
-         wave_processed *= 0.5
-     else:
-         wave_processed = mdx_sess.process_wave(wave, m_threads)
-     # return to previous peak
-     wave_processed *= peak
-     stem_name = model.stem_name if suffix is None else suffix
-
-     main_filepath = None
-     if not exclude_main:
-         main_filepath = os.path.join(output_dir, f"{os.path.basename(os.path.splitext(filename)[0])}_{stem_name}.wav")
-         sf.write(main_filepath, wave_processed.T, sr)
-
-     invert_filepath = None
-     if not exclude_inversion:
-         diff_stem_name = stem_naming.get(stem_name) if invert_suffix is None else invert_suffix
-         stem_name = f"{stem_name}_diff" if diff_stem_name is None else diff_stem_name
-         invert_filepath = os.path.join(output_dir, f"{os.path.basename(os.path.splitext(filename)[0])}_{stem_name}.wav")
-         sf.write(invert_filepath, (-wave_processed.T * model.compensation) + wave.T, sr)
-
-     if not keep_orig:
-         os.remove(filename)
-
-     del mdx_sess, wave_processed, wave
-     gc.collect()
-     return main_filepath, invert_filepath
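
A sketch of calling `run_mdx`; the parameter dict is keyed by the MD5 hash that `MDX.get_hash` computes for the ONNX file, and every literal below (paths and field values) is a placeholder rather than data from this repo:

```python
model_path = "UVR-MDX-NET-vocals.onnx"  # placeholder model file
model_params = {
    MDX.get_hash(model_path): {
        "mdx_dim_f_set": 3072,
        "mdx_dim_t_set": 8,             # dim_t becomes 2**8 = 256
        "mdx_n_fft_scale_set": 7680,
        "primary_stem": "Vocals",
        "compensate": 1.035,
    }
}
vocals, instrumental = run_mdx(model_params, "output", model_path,
                               "song.wav", denoise=True)
```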
 
spaces/Cvandi/remake/scripts/generate_meta_info_pairdata.py DELETED
@@ -1,49 +0,0 @@
- import argparse
- import glob
- import os
-
-
- def main(args):
-     txt_file = open(args.meta_info, 'w')
-     # scan images
-     img_paths_gt = sorted(glob.glob(os.path.join(args.input[0], '*')))
-     img_paths_lq = sorted(glob.glob(os.path.join(args.input[1], '*')))
-
-     assert len(img_paths_gt) == len(img_paths_lq), ('GT folder and LQ folder should have the same length, but got '
-                                                     f'{len(img_paths_gt)} and {len(img_paths_lq)}.')
-
-     for img_path_gt, img_path_lq in zip(img_paths_gt, img_paths_lq):
-         # get the relative paths
-         img_name_gt = os.path.relpath(img_path_gt, args.root[0])
-         img_name_lq = os.path.relpath(img_path_lq, args.root[1])
-         print(f'{img_name_gt}, {img_name_lq}')
-         txt_file.write(f'{img_name_gt}, {img_name_lq}\n')
-
-
- if __name__ == '__main__':
-     """This script is used to generate meta info (txt file) for paired images.
-     """
-     parser = argparse.ArgumentParser()
-     parser.add_argument(
-         '--input',
-         nargs='+',
-         default=['datasets/DF2K/DIV2K_train_HR_sub', 'datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub'],
-         help='Input folder, should be [gt_folder, lq_folder]')
-     parser.add_argument('--root', nargs='+', default=[None, None], help='Folder root, will use the ')
-     parser.add_argument(
-         '--meta_info',
-         type=str,
-         default='datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt',
-         help='txt path for meta info')
-     args = parser.parse_args()
-
-     assert len(args.input) == 2, 'Input folder should have two elements: gt folder and lq folder'
-     assert len(args.root) == 2, 'Root path should have two elements: root for gt folder and lq folder'
-     os.makedirs(os.path.dirname(args.meta_info), exist_ok=True)
-     for i in range(2):
-         if args.input[i].endswith('/'):
-             args.input[i] = args.input[i][:-1]
-         if args.root[i] is None:
-             args.root[i] = os.path.dirname(args.input[i])
-
-     main(args)
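
The script is meant for the command line, but `main` can also be driven directly; the paths below mirror the argparse defaults and are examples only:

```python
import os
from argparse import Namespace

args = Namespace(
    input=['datasets/DF2K/DIV2K_train_HR_sub', 'datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub'],
    root=['datasets/DF2K', 'datasets/DF2K'],
    meta_info='datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt',
)
os.makedirs(os.path.dirname(args.meta_info), exist_ok=True)  # normally done by the __main__ guard
main(args)
```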
 
spaces/DEVINKofficial/Onodofthenorth-SD_PixelArt_SpriteSheet_Generator/app.py DELETED
@@ -1,3 +0,0 @@
- import gradio as gr
-
- gr.Interface.load("models/Onodofthenorth/SD_PixelArt_SpriteSheet_Generator").launch()
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/GdImageFile.py DELETED
@@ -1,97 +0,0 @@
- #
- # The Python Imaging Library.
- # $Id$
- #
- # GD file handling
- #
- # History:
- # 1996-04-12 fl   Created
- #
- # Copyright (c) 1997 by Secret Labs AB.
- # Copyright (c) 1996 by Fredrik Lundh.
- #
- # See the README file for information on usage and redistribution.
- #
-
-
- """
- .. note::
-     This format cannot be automatically recognized, so the
-     class is not registered for use with :py:func:`PIL.Image.open()`. To open a
-     gd file, use the :py:func:`PIL.GdImageFile.open()` function instead.
-
- .. warning::
-     THE GD FORMAT IS NOT DESIGNED FOR DATA INTERCHANGE. This
-     implementation is provided for convenience and demonstrational
-     purposes only.
- """
-
-
- from . import ImageFile, ImagePalette, UnidentifiedImageError
- from ._binary import i16be as i16
- from ._binary import i32be as i32
-
-
- class GdImageFile(ImageFile.ImageFile):
-     """
-     Image plugin for the GD uncompressed format. Note that this format
-     is not supported by the standard :py:func:`PIL.Image.open()` function. To use
-     this plugin, you have to import the :py:mod:`PIL.GdImageFile` module and
-     use the :py:func:`PIL.GdImageFile.open()` function.
-     """
-
-     format = "GD"
-     format_description = "GD uncompressed images"
-
-     def _open(self):
-         # Header
-         s = self.fp.read(1037)
-
-         if i16(s) not in [65534, 65535]:
-             msg = "Not a valid GD 2.x .gd file"
-             raise SyntaxError(msg)
-
-         self.mode = "L"  # FIXME: "P"
-         self._size = i16(s, 2), i16(s, 4)
-
-         true_color = s[6]
-         true_color_offset = 2 if true_color else 0
-
-         # transparency index
-         tindex = i32(s, 7 + true_color_offset)
-         if tindex < 256:
-             self.info["transparency"] = tindex
-
-         self.palette = ImagePalette.raw(
-             "XBGR", s[7 + true_color_offset + 4 : 7 + true_color_offset + 4 + 256 * 4]
-         )
-
-         self.tile = [
-             (
-                 "raw",
-                 (0, 0) + self.size,
-                 7 + true_color_offset + 4 + 256 * 4,
-                 ("L", 0, 1),
-             )
-         ]
-
-
- def open(fp, mode="r"):
-     """
-     Load texture from a GD image file.
-
-     :param fp: GD file name, or an opened file handle.
-     :param mode: Optional mode. In this version, if the mode argument
-         is given, it must be "r".
-     :returns: An image instance.
-     :raises OSError: If the image could not be read.
-     """
-     if mode != "r":
-         msg = "bad mode"
-         raise ValueError(msg)
-
-     try:
-         return GdImageFile(fp)
-     except SyntaxError as e:
-         msg = "cannot identify this image file"
-         raise UnidentifiedImageError(msg) from e
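
Because the format carries no registered magic number, files must be opened through this module's own `open`; `texture.gd` is a placeholder path:

```python
from PIL import GdImageFile

with GdImageFile.open("texture.gd") as im:
    print(im.format, im.size, im.mode)  # "GD", (width, height), "L"
    im.save("texture.png")              # re-encode through Pillow's normal savers
```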
 
spaces/Dana19/ImageRecognition_FaceCount/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: Headcount
- emoji: 💻
- colorFrom: green
- colorTo: indigo
- sdk: gradio
- sdk_version: 3.3
- app_file: app.py
- pinned: false
- license: apache-2.0
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/DemoLou/moe-tts/text/thai.py DELETED
@@ -1,44 +0,0 @@
- import re
- from num_thai.thainumbers import NumThai
-
-
- num = NumThai()
-
- # List of (Latin alphabet, Thai) pairs:
- _latin_to_thai = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
-     ('a', 'เอ'),
-     ('b', 'บี'),
-     ('c', 'ซี'),
-     ('d', 'ดี'),
-     ('e', 'อี'),
-     ('f', 'เอฟ'),
-     ('g', 'จี'),
-     ('h', 'เอช'),
-     ('i', 'ไอ'),
-     ('j', 'เจ'),
-     ('k', 'เค'),
-     ('l', 'แอล'),
-     ('m', 'เอ็ม'),
-     ('n', 'เอ็น'),
-     ('o', 'โอ'),
-     ('p', 'พี'),
-     ('q', 'คิว'),
-     ('r', 'แอร์'),
-     ('s', 'เอส'),
-     ('t', 'ที'),
-     ('u', 'ยู'),
-     ('v', 'วี'),
-     ('w', 'ดับเบิลยู'),
-     ('x', 'เอ็กซ์'),
-     ('y', 'วาย'),
-     ('z', 'ซี')
- ]]
-
-
- def num_to_thai(text):
-     return re.sub(r'(?:\d+(?:,?\d+)?)+(?:\.\d+(?:,?\d+)?)?', lambda x: ''.join(num.NumberToTextThai(float(x.group(0).replace(',', '')))), text)
-
-
- def latin_to_thai(text):
-     for regex, replacement in _latin_to_thai:
-         text = re.sub(regex, replacement, text)
-     return text
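
A quick check of both passes; the comments describe what the code above should do, not output captured from this Space:

```python
print(num_to_thai("25 บาท"))  # "25" is spelled out as Thai number words via num_thai
print(latin_to_thai("ok"))    # 'o' -> 'โอ', 'k' -> 'เค', giving "โอเค"
```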