Commit 83687e6
Parent(s): 5ce4af0
Update parquet files (step 14 of 121)

This view is limited to 50 files because it contains too many changes. See raw diff.
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fujio Girls Medical Game Become a Doctor and Save Lives.md +0 -116
- spaces/1gistliPinn/ChatGPT4/Examples/Advanced Serial Port Monitor 4 [EXCLUSIVE] Keygen.md +0 -6
- spaces/1gistliPinn/ChatGPT4/Examples/DENSO ETSI V4.92 1.8 GB How to Install and Use the Most Advanced Diagnostic Tool for Diesel Engines.md +0 -6
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Darkrpg - Rpg Quest Magic Amp Origins Download TOP.md +0 -42
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download FIFA 23 Mod 16 and Play Offline with Realistic Graphics and Gameplay on Android (APKOBBDATA).md +0 -84
- spaces/2-2/blockchain.ai/style.css +0 -28
- spaces/2ndelement/voicevox/engine_manifest_assets/terms_of_service.md +0 -1
- spaces/4RiZ4/stabilityai-stable-diffusion-2/README.md +0 -13
- spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/emotion/audio.py +0 -107
- spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/ps_flow.py +0 -135
- spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/models/autoencoder.py +0 -474
- spaces/AIWaves/Software_Company/src/agents/Component/__init__.py +0 -3
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/eventpromise-plugin.js +0 -22
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/dialog-quest/DialogQuest.d.ts +0 -74
- spaces/Andy1621/uniformer_image_detection/configs/dcn/cascade_mask_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco.py +0 -5
- spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco.py +0 -4
- spaces/Andy1621/uniformer_image_segmentation/configs/_base_/schedules/schedule_40k.py +0 -9
- spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/pipelines/llava/README.md +0 -9
- spaces/Apex-X/GODROOP/roop/processors/frame/core.py +0 -88
- spaces/Artrajz/vits-simple-api/vits/models.py +0 -419
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/stop.py +0 -103
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_resources/__init__.py +0 -36
- spaces/Atualli/yoloxTeste/configs/yolox_s.py +0 -15
- spaces/BaitMan/abroader-otters/README.md +0 -10
- spaces/Bart92/RVC_HF/tools/app.py +0 -148
- spaces/Benson/text-generation/Examples/Como Bajar La Llamada Del Deber Warzone Mvil En Iphone.md +0 -44
- spaces/Benson/text-generation/Examples/Descarga Gratuita Multijugador En Lnea De Picas.md +0 -62
- spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/utils.py +0 -100
- spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/cmd.py +0 -436
- spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/dev/README.md +0 -7
- spaces/CVPR/LIVE/thrust/dependencies/cub/cub/cmake/cub-config-version.cmake +0 -33
- spaces/CVPR/WALT/mmdet/models/roi_heads/mask_heads/global_context_head.py +0 -102
- spaces/CVPR/WALT/mmdet/models/roi_heads/roi_extractors/single_level_roi_extractor.py +0 -108
- spaces/Cloudfeng/anime-remove-background/app.py +0 -52
- spaces/CofAI/chat/g4f/__init__.py +0 -39
- spaces/CorvaeOboro/gen_ability_icon/torch_utils/ops/grid_sample_gradfix.py +0 -83
- spaces/Cpp4App/Cpp4App/CDM/run_online_demo.py +0 -52
- spaces/Crow34/Joi/app.py +0 -3
- spaces/DEEMOSTECH/ChatAvatar/static/css/main.46e5a5fa.css +0 -2
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/iup.c +0 -0
- spaces/DaFujaTyping/hf-Chat-ui/src/app.d.ts +0 -17
- spaces/Dennis0402/QSign/Dockerfile +0 -15
- spaces/DragGan/DragGan/torch_utils/ops/upfirdn2d.cpp +0 -107
- spaces/ECCV2022/bytetrack/yolox/exp/build.py +0 -53
- spaces/EPFL-VILAB/MultiMAE/dpt/transforms.py +0 -231
- spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/__init__.py +0 -6
- spaces/EcoCy/LoRA-DreamBooth-Training-UI/app_upload.py +0 -100
- spaces/Eddycrack864/Applio-Inference/infer/lib/infer_pack/models_onnx.py +0 -824
- spaces/Eduardovco/Potato/Dockerfile +0 -21
- spaces/Egrt/GCycleGAN/nets/resnest/__init__.py +0 -2
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fujio Girls Medical Game Become a Doctor and Save Lives.md
DELETED
@@ -1,116 +0,0 @@
-
-<h1>Fujio Girls Medical Game: A Fun and Educational Way to Learn About the Human Body</h1>
-<p>Do you love playing games that are both fun and educational? Do you want to learn more about the human body and how it works? Do you want to explore different medical scenarios and challenges that test your knowledge and skills? If you answered yes to any of these questions, then you might want to check out Fujio Girls Medical Game.</p>
-<h2>Fujio Girls Medical Game</h2><br /><p><b><b>Download File</b> ————— <a href="https://byltly.com/2uKxjT">https://byltly.com/2uKxjT</a></b></p><br /><br />
-<h2>What is Fujio Girls Medical Game?</h2>
-<h3>A brief introduction to the game and its features</h3>
-<p>Fujio Girls Medical Game is a game that lets you play as a girl who wants to become a doctor. You can choose from different characters, such as Shizuka, Nobita, Doraemon, or Suneo. Each character has their own personality and abilities that affect their performance in the game.</p>
-<p>The game consists of various mini-games that teach you about different aspects of the human body, such as anatomy, physiology, diseases, disorders, prevention, and treatment. You can also interact with other characters in the game, such as patients, nurses, doctors, or friends. You can earn points and rewards for completing each mini-game successfully.</p>
-<h3>The benefits of playing Fujio Girls Medical Game</h3>
-<p>Playing Fujio Girls Medical Game can help you improve your knowledge and understanding of the human body. You can learn about the structure and function of different organs and systems, such as the heart, lungs, brain, digestive system, immune system, etc. You can also learn about the common health problems that affect these organs and systems, such as asthma, diabetes, stroke, cancer, etc.</p>
-<p>Fujio Girls Medical Game review<br />
-Fujio Girls Medical Game download<br />
-Fujio Girls Medical Game apk<br />
-Fujio Girls Medical Game mod<br />
-Fujio Girls Medical Game cheats<br />
-Fujio Girls Medical Game hack<br />
-Fujio Girls Medical Game online<br />
-Fujio Girls Medical Game for pc<br />
-Fujio Girls Medical Game for android<br />
-Fujio Girls Medical Game for ios<br />
-Fujio Girls Medical Game walkthrough<br />
-Fujio Girls Medical Game tips<br />
-Fujio Girls Medical Game tricks<br />
-Fujio Girls Medical Game guide<br />
-Fujio Girls Medical Game gameplay<br />
-Fujio Girls Medical Game characters<br />
-Fujio Girls Medical Game story<br />
-Fujio Girls Medical Game plot<br />
-Fujio Girls Medical Game ending<br />
-Fujio Girls Medical Game sequel<br />
-Fujio Girls Medical Game update<br />
-Fujio Girls Medical Game new version<br />
-Fujio Girls Medical Game latest version<br />
-Fujio Girls Medical Game free version<br />
-Fujio Girls Medical Game premium version<br />
-Fujio Girls Medical Game pro version<br />
-Fujio Girls Medical Game full version<br />
-Fujio Girls Medical Game best version<br />
-Fujio Girls Medical Game features<br />
-Fujio Girls Medical Game benefits<br />
-Fujio Girls Medical Game advantages<br />
-Fujio Girls Medical Game disadvantages<br />
-Fujio Girls Medical Game pros and cons<br />
-Fujio Girls Medical Game comparison<br />
-Fujio Girls Medical Game alternatives<br />
-Fujio Girls Medical Game competitors<br />
-Fujio Girls Medical Game similar games<br />
-Fujio Girls Medical Game related games<br />
-Fujio Girls Medical Game genre<br />
-Fujio Girls Medical Game category<br />
-Fujio Girls Medical Game theme<br />
-Fujio Girls Medical Game style<br />
-Fujio Girls Medical Game graphics<br />
-Fujio Girls Medical Game sound<br />
-Fujio Girls Medical Game music<br />
-Fujio Girls Medical Game voice acting<br />
-Fujio Girls Medical Game rating<br />
-Fujio Girls Medical Game feedback<br />
-Fujio Girls Medical Game testimonials<br />
-Fujio Girls Medical Game comments</p>
-<p>Playing Fujio Girls Medical Game can also help you develop your critical thinking and problem-solving skills. You can apply what you have learned to diagnose and treat various medical conditions. You can also use your creativity and imagination to come up with solutions for different scenarios. You can also challenge yourself by choosing different levels of difficulty for each mini-game.</p>
-<p>Playing Fujio Girls Medical Game can also help you have fun and enjoy yourself. You can experience different situations and environments that are related to medicine, such as hospitals, clinics, laboratories, etc. You can also customize your character's appearance and outfit according to your preference. You can also share your progress and achievements with your friends online.</p>
-<h2>How to play Fujio Girls Medical Game?</h2>
-<h3>The basic gameplay and controls</h3>
-<p>To play Fujio Girls Medical Game, you need to have a device that supports the game, such as a computer or a smartphone. You can download the game from the official website or from other platforms that offer it. You can also play it online without downloading it.</p>
-<p>To start playing Fujio Girls Medical Game, you need to create an account or log in with your existing account. You can then choose your character and start playing. You can use your mouse or keyboard on a computer or your touch screen on a smartphone to control your character's actions.</p>
-<h3>The different modes and levels of difficulty</h3>
-<p>Fujio Girls Medical Game has two main modes: story mode and free mode. In story mode, you follow a storyline that involves different characters and events related to medicine. You need to complete each mini-game in order to progress through the story. In free mode, you can choose any mini-game that you want to play without following a storyline.</p>
-<p>Fujio Girls Medical Game also has three levels of difficulty: easy, medium, and hard. You can choose the level of difficulty that suits your skill level and preference. The level of difficulty affects the complexity and duration of each mini-game.</p>
-<h3>The various medical scenarios and challenges</h3>
-<p>Fujio Girls Medical Game has many mini-games that cover different medical topics and scenarios. Some examples are:</p>
-<ul>
-<li>Anatomy quiz: You need to identify different parts of the human body by clicking on them.</li>
-<li>Blood pressure test: You need to measure the blood pressure of a patient by using a sphygmomanometer.</li>
-<li>X-ray scan: You need to interpret an X-ray image of a patient's chest by finding abnormalities.</li>
-<li>Surgery simulation: You need to perform a surgery on a patient by following instructions.</li>
-<li>Vaccine injection: You need to administer a vaccine to a patient by choosing the correct syringe size and location.</li>
-<li>Dental care: You need to clean and fix a patient's teeth by using dental tools.</li>
-<li>Eye exam: You need to test a patient's vision by using an eye chart.</li>
-<li>First aid: You need to provide first aid to a patient who has an injury or illness by using bandages or medicines.</li>
-<li>Nutrition advice: You need to give nutrition advice to a patient who has a dietary problem by choosing healthy foods.</li>
-<li>Fitness challenge: You need to exercise with your character by following movements on screen.</li>
-</ul>
-<h2>What can you learn from Fujio Girls Medical Game?</h2>
-<h3>The anatomy and physiology of the human body</h3>
-<p>By playing Fujio Girls Medical Game, you can learn about the anatomy and physiology of the human body. Anatomy is the study of the structure of living organisms. Physiology is the study of how living organisms function. By learning about these topics, you can understand how your body works and what makes it healthy or unhealthy.</p>
-<h3>The common diseases and disorders that affect the human body</h3>
-<p>By playing Fujio Girls Medical Game, you can also learn about the common diseases and disorders that affect the human body. Diseases are abnormal conditions that impair normal functioning. Disorders are irregularities or abnormalities in structure or function. By learning about these topics, you can recognize the signs and symptoms of various health problems and how they affect your body.</p>
-<h3>The prevention and treatment of various health problems</h3>
-<p>By playing Fujio Girls Medical Game, you can also learn about the prevention and treatment of various health problems. Prevention is taking steps to avoid getting sick or injured. Treatment is taking steps to cure or improve a health problem. By learning about these topics, you can take care of yourself and others by following healthy habits and seeking medical help when needed.</p>
-<h2>Where can you find Fujio Girls Medical Game?</h2>
-<h3>The official website and social media accounts of the game developer</h3>
-<p>If you want to find out more about Fujio Girls Medical Game, you can visit the official website of the game at https://sway.office.com/gCXXlC8NzSJWoPJU. You can also follow their social media accounts on Facebook, Twitter, and Instagram to get the latest updates and news about the game.</p>
-<h3>The platforms and devices that support the game</h3>
-<p>Fujio Girls Medical Game is compatible with various platforms and devices, such as Windows, Mac, Linux, Android, iOS, etc. You can play the game on your computer or your smartphone. You can also play it online without downloading it. However, you need to have a stable internet connection and a browser that supports HTML5.</p>
-<h3>The price and availability of the game</h3>
-<p>Fujio Girls Medical Game is available for free for anyone who wants to play it. You can download it from the official website or from other platforms that offer it. You can also play it online without downloading it. However, you may encounter some ads or in-app purchases that may affect your gaming experience.</p>
-<h2>Conclusion</h2>
-<p>Fujio Girls Medical Game is a fun and educational game that teaches you about the human body and medicine. You can play as a girl who wants to become a doctor and learn about anatomy, physiology, diseases, disorders, prevention, and treatment. You can also enjoy different mini-games that test your knowledge and skills. You can find the game on the official website or on other platforms that support it. You can also follow the game developer on social media to get more information and updates.</p>
-<p>If you are looking for a game that is both entertaining and informative, you should try Fujio Girls Medical Game. It is a game that will make you smarter and happier.</p>
-<h2>FAQs</h2>
-<ol>
-<li>What is the goal of Fujio Girls Medical Game?</li>
-<p>The goal of Fujio Girls Medical Game is to help you learn about the human body and medicine in a fun and interactive way.</p>
-<li>Who are the characters in Fujio Girls Medical Game?</li>
-<p>The characters in Fujio Girls Medical Game are based on the popular Japanese manga and anime series Doraemon. You can choose from Shizuka, Nobita, Doraemon, or Suneo as your main character.</p>
-<li>How many mini-games are there in Fujio Girls Medical Game?</li>
-<p>There are over 50 mini-games in Fujio Girls Medical Game that cover different medical topics and scenarios.</p>
-<li>How can I play Fujio Girls Medical Game online?</li>
-<p>You can play Fujio Girls Medical Game online by visiting https://sway.office.com/gCXXlC8NzSJWoPJU and clicking on the play button. You need to have a browser that supports HTML5 and a stable internet connection.</p>
-<li>Is Fujio Girls Medical Game suitable for children?</li>
-<p>Fujio Girls Medical Game is suitable for children who are interested in learning about the human body and medicine. However, some mini-games may contain graphic or sensitive content that may not be appropriate for younger audiences. Parental guidance is advised.</p>
-</ol>
-</p> 0a6ba089eb<br />
-<br />
-<br />

spaces/1gistliPinn/ChatGPT4/Examples/Advanced Serial Port Monitor 4 [EXCLUSIVE] Keygen.md
DELETED
@@ -1,6 +0,0 @@
-<h2>advanced serial port monitor 4 keygen</h2><br /><p><b><b>Download Zip</b> 🆗 <a href="https://imgfil.com/2uy0qo">https://imgfil.com/2uy0qo</a></b></p><br /><br />
-<br />
-Software, goal Comfree-serial-port-monitor-downloads. Software, Eltima Ports. ... 0 pro Key Serial 4 numbers serial software Recently 81-Crack 2 1. Serial No Free Serial ... Serial v4. Advanced-usb-port-monitor Pro. Presented ... 4d29de3e1b<br />
-<br />
-<br />
-<p></p>

spaces/1gistliPinn/ChatGPT4/Examples/DENSO ETSI V4.92 1.8 GB How to Install and Use the Most Advanced Diagnostic Tool for Diesel Engines.md
DELETED
@@ -1,6 +0,0 @@
-<h2>DENSO ETSI V4.92 | 1.8 GB</h2><br /><p><b><b>Download Zip</b> ►►► <a href="https://imgfil.com/2uxZ2i">https://imgfil.com/2uxZ2i</a></b></p><br /><br />
-<br />
-aaccfb2cb3<br />
-<br />
-<br />
-<p></p>

spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Darkrpg - Rpg Quest Magic Amp Origins Download TOP.md
DELETED
@@ -1,42 +0,0 @@
-<br />
-<h1>DarkRPG - A Minecraft Modpack with RPG, Quest, Magic, and Origins Elements</h1>
-<p>If you are looking for a new way to enjoy Minecraft, you might want to check out DarkRPG. DarkRPG is a modpack that adds RPG, quest, magic, and origins elements to the game. You can choose from different races, complete quests for rewards, use magic spells and enchantments, trade with other players, and much more. DarkRPG also comes with a built-in MMO official server where you can play with others online. In this article, we will show you how to download and install DarkRPG, how to play it, and what features and highlights it offers.</p>
-<h2>How to Download and Install DarkRPG</h2>
-<p>Downloading and installing DarkRPG is very easy. All you need is the CurseForge app and a Minecraft account. Here are the steps:</p>
-<h2>darkrpg - rpg quest magic amp; origins download</h2><br /><p><b><b>DOWNLOAD</b> ->->->-> <a href="https://urlin.us/2uT1oX">https://urlin.us/2uT1oX</a></b></p><br /><br />
-<ol>
-<li>Download the CurseForge app from <a href="(^i^)">[1](https://curseforge.overwolf.com/)</a> and create an account.</li>
-<li>Open the app and go to the modpacks section. Search for DarkRPG in the search bar. You can choose between the Fabric edition or the Forge edition of DarkRPG. The Fabric edition is more updated and has some mods that Forge doesn't. The Forge edition has some cool mods that Fabric doesn't. Choose the one that suits your preference.</li>
-<li>Click on the install button and wait for the modpack to download. Once it is done, click on the play button to launch the game.</li>
-<li>You can join the official MMO server by clicking on the MMO button in the main menu. You can also create your own server by using BisectHosting. BisectHosting is a leading Minecraft server solution that offers one-click installation of DarkRPG. You can get a server for as low as $2.99 per month. Use the code 'gamerpotion' to get 25% off your order. Click <a href="(^i^)">[2](https://bisecthosting.com/gamerpotion)</a> to get started.</li>
-</ol>
-<h2>How to Play DarkRPG</h2>
-<p>Playing DarkRPG is very fun and immersive. You can do many things that you can't do in vanilla Minecraft. Here are some tips to get you started:</p>
-<ol>
-<li>Choose your origin and customize your character. When you join the game for the first time, you will be asked to choose your origin. Origins are like races that have different abilities and traits. For example, you can be a vampire, a werewolf, a fairy, a mermaid, or a dragon. Each origin has its own advantages and disadvantages, so choose wisely. You can also customize your character's appearance, name, gender, and skin color.</li>
-<li>Complete quests and earn rewards by pressing L. Quests are tasks that you can do to earn money, items, experience, and reputation. You can access the quest menu by pressing L. There are different types of quests, such as main quests, side quests, daily quests, and bounty quests. Main quests are the ones that progress the story and unlock new features. Side quests are optional but can give you extra rewards and lore. Daily quests are repeatable and can give you a steady income of money and experience. Bounty quests are hunting missions that require you to kill a specific mob or player.</li>
-<li>Explore the world and discover new biomes, creatures, and dungeons. The world of DarkRPG is vast and diverse. You can find many new biomes, such as the enchanted forest, the volcanic wasteland, the frozen tundra, and the mushroom island. Each biome has its own unique creatures, plants, and resources. You can also encounter dungeons, which are challenging areas with traps, puzzles, enemies, and bosses. Dungeons can give you rare loot and artifacts if you manage to clear them.</li>
-<li>Use magic wands, spells, enchantments, and tables to enhance your abilities. Magic is a big part of DarkRPG. You can use magic wands to cast spells that can damage enemies, heal allies, or manipulate the environment. You can learn new spells by finding spell books or buying them from magic shops. You can also use enchantments to improve your weapons, armor, and tools. Enchantments can give you special effects, such as fire resistance, speed boost, or looting. You can apply enchantments by using an enchantment table or an anvil.</li>
-<li>Trade with other players using the currency system and the auction house. DarkRPG has a currency system that uses coins as the main medium of exchange. You can earn coins by completing quests, selling items, or winning bets. You can use coins to buy items from shops or other players. You can also use the auction house to sell or bid on items online. The auction house is accessible by typing /ah in the chat.</li>
-</ol>
-<h2>Features and Highlights of DarkRPG</h2>
-<p>DarkRPG is not just a modpack that adds RPG elements to Minecraft. It is also a modpack that enhances the overall gameplay experience with many features and highlights. Here are some of them:</p>
-<ul>
-<li>Performance optimization and low-end machine compatibility. DarkRPG is designed to run smoothly on any computer, even if it has low specs. It uses Fabric or Forge as the mod loader, which are both lightweight and fast. It also uses OptiFine as the graphics optimizer, which improves the FPS and reduces lag.</li>
-<li>Custom main menu, backgrounds, fonts, and cosmetics. DarkRPG has a custom main menu that shows the logo of the modpack and some buttons for accessing different features. It also has custom backgrounds that change depending on the time of day and the season. It also has custom fonts that make the text more readable and stylish. It also has cosmetics that allow you to change your appearance with hats, wings, tails, horns, and more.</li>
-<li>Extra weapons, armor, blocks, and items. DarkRPG adds many new weapons, armor, blocks, and items to the game. You can find new weapons, such as daggers, hammers, spears, crossbows, and guns. You can also find new armor, such as leather, chainmail, iron, gold, diamond, and netherite. You can also find new blocks, such as marble, basalt, limestone, slate, and quartz. You can also find new items, such as backpacks, rings, amulets, potions, and food.</li>
-<li>Casino, XP wheel, RTP, world claim, waystones, and more. DarkRPG has many extra features that make the game more fun and convenient. You can visit the casino to play slot machines, roulette, blackjack, and poker. You can spin the XP wheel to get a random reward or penalty. You can use RTP to teleport to a random location in the world. You can use world claim to protect your land from griefers. You can use waystones to fast travel between locations. And there are many more features that you can discover by playing DarkRPG.</li>
-</ul>
-<h2>Conclusion and FAQs</h2>
-<p>DarkRPG is a Minecraft modpack that combines RPG, quest, magic, and origins elements with many other features and highlights. It is a modpack that offers a new and exciting way to play Minecraft. You can download and install DarkRPG easily with the CurseForge app and join the official MMO server or create your own. You can play DarkRPG by choosing your origin, completing quests, exploring the world, using magic, and trading with other players. You can also enjoy the performance optimization, the custom main menu, the extra weapons, armor, blocks, and items, and the casino, XP wheel, RTP, world claim, waystones, and more.</p>
-<p>If you are interested in trying DarkRPG, you can download it from <a href="(^i^)">[3](https://www.curseforge.com/minecraft/modpacks/darkrpg)</a> or <a href="(^i^)">[4](https://www.curseforge.com/minecraft/modpacks/darkrpg-fabric)</a>. You can also watch some gameplay videos on YouTube or Twitch to see how it looks like. DarkRPG is a modpack that will surely give you hours of fun and adventure.</p>
-<p>Here are some FAQs that you might have about DarkRPG:</p>
-<ol>
-<li><b>What is the difference between the Fabric edition and the Forge edition of DarkRPG?</b><br>The Fabric edition is more updated and has some mods that Forge doesn't. The Forge edition has some cool mods that Fabric doesn't. The main difference is in the mod loader that they use. Fabric is faster and lighter than Forge but has fewer mods available. Forge is slower and heavier than Fabric but has more mods available.</li>
-<li><b>How can I get a server to play DarkRPG with my friends?</b><br>You can get a server from BisectHosting. BisectHosting is a leading Minecraft server solution that offers one-click installation of DarkRPG. You can get a server for as low as $2.99 per month. Use the code 'gamerpotion' to get 25% off your order. Click <a href="(^i^)">[5](https://bisecthosting.com/gamerpotion)</a> to get started.</li>
-<li><b>How can I report a bug or suggest a feature for DarkRPG?</b><br>You can report a bug or suggest a feature by using the issue tracker on GitHub. Click <a href="(^i^)">[6](https://github.com/GamerPotion/DarkRPG/issues)</a> to go there.</li>
-<li><b>How can I support the development of DarkRPG?</b><br>You can support the development of DarkRPG by donating to the developer on Patreon or PayPal. Click <a href="(^i^)">[7](https://www.patreon.com/gamerpotion)</a> or <a href="(^i^)">[8](https://www.paypal.me/gamerpotion)</a> to do so.</li>
-<li><b>Where can I find more information about DarkRPG?</b><br>You can find more information about DarkRPG by visiting the official website or joining the Discord server. Click <a href="(^i^)">[9](https://darkrpg.net/)</a> or <a href="(^i^)">[10](https://discord.gg/gamerpotion)</a> to go there.</li>
-</ol></p> 197e85843d<br />
-<br />
-<br />

spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download FIFA 23 Mod 16 and Play Offline with Realistic Graphics and Gameplay on Android (APKOBBDATA).md
DELETED
@@ -1,84 +0,0 @@
-
-<h1>FIFA 23 Mod FIFA 16 Ultimate Team Download Android APK+OBB</h1>
-<p>If you are a fan of soccer games, you must have heard of the popular FIFA series by EA Sports. The latest version of this game, FIFA 23, is expected to be released in late 2023. However, if you can't wait that long, you can try out the FIFA 23 Mod FIFA 16, which is a modified version of the older FIFA 16 game with updated features and graphics. In this article, we will tell you everything you need to know about this amazing mod, including its features, how to download and install it on your Android device, and some frequently asked questions.</p>
-<h2>Introduction</h2>
-<p>FIFA is one of the most popular and successful soccer games in the world. It has millions of fans and players who enjoy its realistic and immersive gameplay, stunning graphics, and diverse modes. However, not everyone can afford to buy the latest version of the game or have a compatible device to run it smoothly. That's why some modders have created a modified version of the older FIFA 16 game, which is called FIFA 23 Mod FIFA 16. This mod brings some of the features and improvements of the upcoming FIFA 23 game to the older version, making it more enjoyable and exciting.</p>
-<h2>fifa 23 mod fifa 16 ultimate team download android apk+obb</h2><br /><p><b><b>DOWNLOAD</b> ⚙⚙⚙ <a href="https://urlin.us/2uT0cB">https://urlin.us/2uT0cB</a></b></p><br /><br />
-<h3>What is FIFA 23 Mod FIFA 16?</h3>
-<p>FIFA 23 Mod FIFA 16 is a modified version of the original FIFA 16 game by EA Sports. It is not an official product of EA Sports, but a fan-made project that aims to enhance the gameplay and graphics of the older game. The mod includes some of the latest transfers and kits of the players and teams, as well as some new faces and hairstyles. It also improves the graphics quality and performance of the game, making it look more realistic and smooth. The mod also adds some new features and modes to the game, such as offline mode, multiplayer option, customizable controls, and settings.</p>
-<h3>Why should you download FIFA 23 Mod FIFA 16?</h3>
-<p>There are many reasons why you should download FIFA 23 Mod FIFA 16 on your Android device. Here are some of them:</p>
-<ul>
-<li>You can enjoy some of the features and improvements of the upcoming FIFA 23 game before its official release.</li>
-<li>You can play with your favorite players and teams with their updated transfers and kits.</li>
-<li>You can experience a realistic and immersive gameplay with enhanced graphics and physics.</li>
-<li>You can play offline without an internet connection or online with your friends or other players.</li>
-<li>You can customize your controls and settings according to your preference.</li>
-<li>You can save your storage space and battery life as the mod is smaller and lighter than the original game.</li>
-</ul>
-<h2>Features of FIFA 23 Mod FIFA 16</h2>
-<p>FIFA 23 Mod FIFA 16 has many features that make it one of the best soccer games for Android devices. Here are some of them:</p>
-<h3>Latest transfers and kits</h3>
-<p>The mod includes some of the latest transfers and kits of the players and teams in the world of soccer. For example, you can play with Cristiano Ronaldo in his new club Manchester United, Casemiro in Manchester City, Lionel Messi in Paris Saint-Germain, Kylian Mbappe in Real Madrid, and many more. You can also see their new faces and hairstyles that match their real-life appearance. The mod also updates the logos and names of the teams and leagues to make them more accurate and authentic. You can also choose from various kits and jerseys for your players, such as home, away, third, and special editions.</p>
-<h3>Realistic graphics and gameplay</h3>
-<p>The mod improves the graphics quality and performance of the game, making it look more realistic and smooth. You can see the details and textures of the players, stadiums, fields, balls, and crowds. You can also notice the shadows, lighting, and weather effects that add to the atmosphere of the game. The mod also enhances the gameplay and physics of the game, making it more responsive and dynamic. You can feel the impact and movement of the players, the ball, and the environment. You can also enjoy the realistic animations and celebrations of the players after scoring a goal or winning a match.</p>
-<h3>Offline mode and multiplayer option</h3>
-<p>The mod allows you to play offline without an internet connection or online with your friends or other players. You can choose from various modes and challenges to test your skills and have fun. For example, you can play in the Ultimate Team mode, where you can create your own dream team with your favorite players and compete in various tournaments and leagues. You can also play in the Career mode, where you can start as a young player and work your way up to become a soccer legend. You can also play in the Quick Match mode, where you can select any team and play a friendly match. If you want to play online, you can join the Online Seasons mode, where you can play against other players in different divisions and try to rank up. You can also play in the Online Friendlies mode, where you can invite your friends and play a match with them.</p>
-<p>fifa 23 mod 16 android offline apk+obb+data latest version<br />
-fifa 23 mod fifa 16 apk obb data offline with latest transfers and kits<br />
-download fifa 23 mod 16 apk+obb+data offline for android device<br />
-how to install fifa 23 mod fifa 16 apk obb data offline on android<br />
-fifa 23 mod fifa 16 apk obb data offline features and gameplay<br />
-fifa 23 mod fifa 16 apk obb data offline compatible with android and ios<br />
-fifa 23 mod fifa 16 apk obb data offline update face and hair of players<br />
-fifa 23 mod fifa 16 apk obb data offline new real face and transfer of ronaldo<br />
-fifa 23 mod fifa 16 apk obb data offline new kits and minikits of teams<br />
-fifa 23 mod fifa 16 apk obb data offline new background and display of game<br />
-download fifa 23 mod 16 android offline from media fire link<br />
-download fifa 23 mod 16 android offline from gaming scientific website<br />
-download fifa 23 mod 16 android offline from mgamemaxpluss blogspot<br />
-download fifa 23 mod 16 android offline from thesecondgameerpro website<br />
-download zarchiver to extract fifa 23 mod 16 android offline files<br />
-move the obb folder of fifa 23 mod 16 android offline to ../android/obb/<br />
-move the data folder of fifa 23 mod 16 android offline to ../android/data/<br />
-make sure you have enough storage to install fifa 23 mod 16 android offline<br />
-make sure you have at least 1 gb ram to play fifa 23 mod 16 android offline<br />
-enjoy the exciting game of football with fifa 23 mod 16 android offline</p>
-<h3>Customizable controls and settings</h3>
-<p>The mod gives you the option to customize your controls and settings according to your preference. You can choose from different types of controls, such as classic buttons, touch gestures, or virtual joysticks. You can also adjust the sensitivity and layout of the controls to suit your comfort and style. You can also change the settings of the game, such as the difficulty level, camera angle, sound effects, music, language, and more.</p>
-<h2>How to download and install FIFA 23 Mod FIFA 16</h2>
-<p>If you want to download and install FIFA 23 Mod FIFA 16 on your Android device, you need to follow some simple steps. Here are they:</p>
-<h3>Requirements for FIFA 23 Mod FIFA 16</h3>
-<p>Before you download and install FIFA 23 Mod FIFA 16, you need to make sure that your device meets some minimum requirements. Here are they:</p>
-<ul>
-<li>Your device must have Android 4.4 or higher version.</li>
-<li>Your device must have at least 2 GB of RAM and 4 GB of free storage space.</li>
-<li>Your device must have a good internet connection to download the files.</li>
-<li>Your device must allow installation from unknown sources. To enable this option, go to Settings > Security > Unknown Sources and turn it on.</li>
-</ul>
-<h3>Steps to download and install FIFA 23 Mod FIFA 16</h3>
-<p>After you have checked the requirements for FIFA 23 Mod FIFA 16, you can proceed to download and install it on your device. Here are the steps:</p>
-<h4>Download the APK, OBB, and DATA files</h4>
-<p>The first step is to download the APK, OBB, and DATA files of FIFA 23 Mod FIFA 16 from a reliable source. You can use this link to download them. The files are compressed in ZIP format, so you need to extract them using a file manager app or a ZIP extractor app.</p>
-<h4>Install the APK file</h4>
-<p>The second step is to install the APK file of FIFA 23 Mod FIFA 16 on your device. To do this, locate the APK file in your file manager app or ZIP extractor app and tap on it. You will see a pop-up window asking for your permission to install the app. Tap on Install and wait for the installation process to complete.</p>
-<h4>Move the OBB and DATA folders to the right locations</h4>
-<p>The third step is to move the OBB and DATA folders of FIFA 23 Mod FIFA 16 to the right locations on your device. To do this, locate the OBB folder in your file manager app or ZIP extractor app and move it to Android > OBB folder on your device's internal storage. Similarly, locate the DATA folder in your file manager app or ZIP extractor app and move it to Android > DATA folder on your device's internal storage.</p>
-<h4>Launch the game and enjoy</h4>
-<p>The final step is to launch the game and enjoy playing FIFA 23 Mod FIFA 16 on your Android device. To do this, locate the game icon on your device's home screen or app drawer and tap on it. You will see a loading screen and then a welcome screen. Tap on Start and choose your preferred mode and team. You can also access the settings and options from the main menu. You are now ready to enjoy playing FIFA 23 Mod FIFA 16 on your Android device.</p>
-<h2>Conclusion</h2>
-<p>FIFA 23 Mod FIFA 16 is a modified version of the original FIFA 16 game by EA Sports that brings some of the features and improvements of the upcoming FIFA 23 game to the older version. It is a fan-made project that is not affiliated with EA Sports, but it is a great way to enjoy some of the latest soccer action on your Android device. The mod includes some of the latest transfers and kits of the players and teams, realistic graphics and gameplay, offline mode and multiplayer option, customizable controls and settings, and more. To download and install FIFA 23 Mod FIFA 16 on your Android device, you need to follow some simple steps that we have explained in this article. We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave them in the comments section below.</p>
-<h2>FAQs</h2>
-<p>Here are some of the frequently asked questions about FIFA 23 Mod FIFA 16:</p>
-<h3>Is FIFA 23 Mod FIFA 16 safe to download and install?</h3>
-<p>Yes, FIFA 23 Mod FIFA 16 is safe to download and install on your Android device. However, you need to make sure that you download it from a reliable source, such as the link we have provided in this article. You also need to enable installation from unknown sources on your device before installing the APK file.</p>
-<h3>Is FIFA 23 Mod FIFA 16 compatible with my device?</h3>
-<p>FIFA 23 Mod FIFA 16 is compatible with most Android devices that have Android 4.4 or higher version, at least 2 GB of RAM and 4 GB of free storage space, and a good internet connection. However, some devices may experience some lag or glitches due to their hardware or software limitations. If you encounter any problems while playing the game, you can try lowering the graphics quality or closing other apps running in the background.</p>
-<h3>How can I update FIFA 23 Mod FIFA 16?</h3>
-<p>FIFA 23 Mod FIFA 16 is updated regularly by the modders to fix any bugs or errors and add new features and content. To update the game, you need to download the latest version of the APK, OBB, and DATA files from the same source you downloaded them before. Then, you need to uninstall the previous version of the game from your device and install the new version following the same steps we have explained in this article.</p>
-<h3>How can I contact the modders of FIFA 23 Mod FIFA 16?</h3>
-<p>If you want to contact the modders of FIFA 23 Mod FIFA 16, you can visit their official website or their social media pages. You can also leave a comment on their YouTube videos or blog posts. The modders are very friendly and responsive, and they will try to answer your questions or feedback as soon as possible.</p>
-<h3>Can I play FIFA 23 Mod FIFA 16 with a controller?</h3>
-<p>Yes, you can play FIFA 23 Mod FIFA 16 with a controller if your device supports it. You can connect your controller via Bluetooth or USB cable and configure it in the settings menu of the game. You can also use an app like Octopus or Panda Gamepad Pro to map your controller buttons to the game controls.</p> 197e85843d<br />
-<br />
-<br />

spaces/2-2/blockchain.ai/style.css
DELETED
@@ -1,28 +0,0 @@
-body {
-  padding: 2rem;
-  font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
-  font-size: 16px;
-  margin-top: 0;
-}
-
-p {
-  color: rgb(107, 114, 128);
-  font-size: 15px;
-  margin-bottom: 10px;
-  margin-top: 5px;
-}
-
-.card {
-  max-width: 620px;
-  margin: 0 auto;
-  padding: 16px;
-  border: 1px solid lightgray;
-  border-radius: 16px;
-}
-
-.card p:last-child {
-  margin-bottom: 0;
-}

spaces/2ndelement/voicevox/engine_manifest_assets/terms_of_service.md
DELETED
@@ -1 +0,0 @@
-dummy teams of service

spaces/4RiZ4/stabilityai-stable-diffusion-2/README.md
DELETED
@@ -1,13 +0,0 @@
----
-title: Stabilityai Stable Diffusion 2
-emoji: 🌍
-colorFrom: purple
-colorTo: blue
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
-license: unknown
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/emotion/audio.py
DELETED
@@ -1,107 +0,0 @@
-from scipy.ndimage.morphology import binary_dilation
-from data_gen.tts.emotion.params_data import *
-from pathlib import Path
-from typing import Optional, Union
-import numpy as np
-import webrtcvad
-import librosa
-import struct
-
-int16_max = (2 ** 15) - 1
-
-
-def preprocess_wav(fpath_or_wav: Union[str, Path, np.ndarray],
-                   source_sr: Optional[int] = None):
-    """
-    Applies the preprocessing operations used in training the Speaker Encoder to a waveform
-    either on disk or in memory. The waveform will be resampled to match the data hyperparameters.
-
-    :param fpath_or_wav: either a filepath to an audio file (many extensions are supported, not
-    just .wav), either the waveform as a numpy array of floats.
-    :param source_sr: if passing an audio waveform, the sampling rate of the waveform before
-    preprocessing. After preprocessing, the waveform's sampling rate will match the data
-    hyperparameters. If passing a filepath, the sampling rate will be automatically detected and
-    this argument will be ignored.
-    """
-    # Load the wav from disk if needed
-    if isinstance(fpath_or_wav, str) or isinstance(fpath_or_wav, Path):
-        wav, source_sr = librosa.load(str(fpath_or_wav), sr=None)
-    else:
-        wav = fpath_or_wav
-
-    # Resample the wav if needed
-    if source_sr is not None and source_sr != sampling_rate:
-        wav = librosa.resample(wav, source_sr, sampling_rate)
-
-    # Apply the preprocessing: normalize volume and shorten long silences
-    wav = normalize_volume(wav, audio_norm_target_dBFS, increase_only=True)
-    wav = trim_long_silences(wav)
-
-    return wav
-
-
-def wav_to_mel_spectrogram(wav):
-    """
-    Derives a mel spectrogram ready to be used by the encoder from a preprocessed audio waveform.
-    Note: this not a log-mel spectrogram.
-    """
-    frames = librosa.feature.melspectrogram(
-        wav,
-        sampling_rate,
-        n_fft=int(sampling_rate * mel_window_length / 1000),
-        hop_length=int(sampling_rate * mel_window_step / 1000),
-        n_mels=mel_n_channels
-    )
-    return frames.astype(np.float32).T
-
-
-def trim_long_silences(wav):
-    """
-    Ensures that segments without voice in the waveform remain no longer than a
-    threshold determined by the VAD parameters in params.py.
-
-    :param wav: the raw waveform as a numpy array of floats
-    :return: the same waveform with silences trimmed away (length <= original wav length)
-    """
-    # Compute the voice detection window size
-    samples_per_window = (vad_window_length * sampling_rate) // 1000
-
-    # Trim the end of the audio to have a multiple of the window size
-    wav = wav[:len(wav) - (len(wav) % samples_per_window)]
-
-    # Convert the float waveform to 16-bit mono PCM
-    pcm_wave = struct.pack("%dh" % len(wav), *(np.round(wav * int16_max)).astype(np.int16))
-
-    # Perform voice activation detection
-    voice_flags = []
-    vad = webrtcvad.Vad(mode=3)
-    for window_start in range(0, len(wav), samples_per_window):
-        window_end = window_start + samples_per_window
-        voice_flags.append(vad.is_speech(pcm_wave[window_start * 2:window_end * 2],
-                                         sample_rate=sampling_rate))
-    voice_flags = np.array(voice_flags)
-
-    # Smooth the voice detection with a moving average
-    def moving_average(array, width):
-        array_padded = np.concatenate((np.zeros((width - 1) // 2), array, np.zeros(width // 2)))
-        ret = np.cumsum(array_padded, dtype=float)
-        ret[width:] = ret[width:] - ret[:-width]
-        return ret[width - 1:] / width
-
-    audio_mask = moving_average(voice_flags, vad_moving_average_width)
-    audio_mask = np.round(audio_mask).astype(np.bool)
-
-    # Dilate the voiced regions
-    audio_mask = binary_dilation(audio_mask, np.ones(vad_max_silence_length + 1))
-    audio_mask = np.repeat(audio_mask, samples_per_window)
-
-    return wav[audio_mask == True]
-
-
-def normalize_volume(wav, target_dBFS, increase_only=False, decrease_only=False):
-    if increase_only and decrease_only:
-        raise ValueError("Both increase only and decrease only are set")
-    dBFS_change = target_dBFS - 10 * np.log10(np.mean(wav ** 2))
-    if (dBFS_change < 0 and increase_only) or (dBFS_change > 0 and decrease_only):
-        return wav
-    return wav * (10 ** (dBFS_change / 20))

spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/ps_flow.py
DELETED
@@ -1,135 +0,0 @@
-import torch
-from modules.portaspeech.portaspeech_flow import PortaSpeechFlow
-from tasks.tts.fs2 import FastSpeech2Task
-from tasks.tts.ps import PortaSpeechTask
-from utils.pitch_utils import denorm_f0
-from utils.hparams import hparams
-
-
-class PortaSpeechFlowTask(PortaSpeechTask):
-    def __init__(self):
-        super().__init__()
-        self.training_post_glow = False
-
-    def build_tts_model(self):
-        ph_dict_size = len(self.token_encoder)
-        word_dict_size = len(self.word_encoder)
-        self.model = PortaSpeechFlow(ph_dict_size, word_dict_size, hparams)
-
-    def _training_step(self, sample, batch_idx, opt_idx):
-        self.training_post_glow = self.global_step >= hparams['post_glow_training_start'] \
-                                  and hparams['use_post_flow']
-        if hparams['two_stage'] and \
-                ((opt_idx == 0 and self.training_post_glow) or (opt_idx == 1 and not self.training_post_glow)):
-            return None
-        loss_output, _ = self.run_model(sample)
-        total_loss = sum([v for v in loss_output.values() if isinstance(v, torch.Tensor) and v.requires_grad])
-        loss_output['batch_size'] = sample['txt_tokens'].size()[0]
-        if 'postflow' in loss_output and loss_output['postflow'] is None:
-            return None
-        return total_loss, loss_output
-
-    def run_model(self, sample, infer=False, *args, **kwargs):
-        if not infer:
-            training_post_glow = self.training_post_glow
-            spk_embed = sample.get('spk_embed')
-            spk_id = sample.get('spk_ids')
-            output = self.model(sample['txt_tokens'],
-                                sample['word_tokens'],
-                                ph2word=sample['ph2word'],
-                                mel2word=sample['mel2word'],
-                                mel2ph=sample['mel2ph'],
-                                word_len=sample['word_lengths'].max(),
-                                tgt_mels=sample['mels'],
-                                pitch=sample.get('pitch'),
-                                spk_embed=spk_embed,
-                                spk_id=spk_id,
-                                infer=False,
-                                forward_post_glow=training_post_glow,
-                                two_stage=hparams['two_stage'],
-                                global_step=self.global_step,
-                                bert_feats=sample.get('bert_feats'))
-            losses = {}
-            self.add_mel_loss(output['mel_out'], sample['mels'], losses)
-            if (training_post_glow or not hparams['two_stage']) and hparams['use_post_flow']:
-                losses['postflow'] = output['postflow']
-                losses['l1'] = losses['l1'].detach()
-                losses['ssim'] = losses['ssim'].detach()
-            if not training_post_glow or not hparams['two_stage'] or not self.training:
-                losses['kl'] = output['kl']
-                if self.global_step < hparams['kl_start_steps']:
-                    losses['kl'] = losses['kl'].detach()
-                else:
-                    losses['kl'] = torch.clamp(losses['kl'], min=hparams['kl_min'])
-                losses['kl'] = losses['kl'] * hparams['lambda_kl']
-            if hparams['dur_level'] == 'word':
-                self.add_dur_loss(
-                    output['dur'], sample['mel2word'], sample['word_lengths'], sample['txt_tokens'], losses)
-                self.get_attn_stats(output['attn'], sample, losses)
-            else:
-                super().add_dur_loss(output['dur'], sample['mel2ph'], sample['txt_tokens'], losses)
-            return losses, output
-        else:
-            use_gt_dur = kwargs.get('infer_use_gt_dur', hparams['use_gt_dur'])
-            forward_post_glow = self.global_step >= hparams['post_glow_training_start'] + 1000 \
-                                and hparams['use_post_flow']
-            spk_embed = sample.get('spk_embed')
-            spk_id = sample.get('spk_ids')
-            output = self.model(
-                sample['txt_tokens'],
-                sample['word_tokens'],
-                ph2word=sample['ph2word'],
-                word_len=sample['word_lengths'].max(),
-                pitch=sample.get('pitch'),
-                mel2ph=sample['mel2ph'] if use_gt_dur else None,
-                mel2word=sample['mel2word'] if hparams['profile_infer'] or hparams['use_gt_dur'] else None,
-                infer=True,
-                forward_post_glow=forward_post_glow,
-                spk_embed=spk_embed,
-                spk_id=spk_id,
-                two_stage=hparams['two_stage'],
-                bert_feats=sample.get('bert_feats'))
-            return output
-
-    def validation_step(self, sample, batch_idx):
-        self.training_post_glow = self.global_step >= hparams['post_glow_training_start'] \
-                                  and hparams['use_post_flow']
-        return super().validation_step(sample, batch_idx)
-
-    def save_valid_result(self, sample, batch_idx, model_out):
-        super(PortaSpeechFlowTask, self).save_valid_result(sample, batch_idx, model_out)
-        sr = hparams['audio_sample_rate']
-        f0_gt = None
-        if sample.get('f0') is not None:
-            f0_gt = denorm_f0(sample['f0'][0].cpu(), sample['uv'][0].cpu())
-        if self.global_step > 0:
-            # save FVAE result
-            if hparams['use_post_flow']:
-                wav_pred = self.vocoder.spec2wav(model_out['mel_out_fvae'][0].cpu(), f0=f0_gt)
-                self.logger.add_audio(f'wav_fvae_{batch_idx}', wav_pred, self.global_step, sr)
-                self.plot_mel(batch_idx, sample['mels'], model_out['mel_out_fvae'][0],
-                              f'mel_fvae_{batch_idx}', f0s=f0_gt)
-
-    def build_optimizer(self, model):
-        if hparams['two_stage'] and hparams['use_post_flow']:
-            self.optimizer = torch.optim.AdamW(
-                [p for name, p in self.model.named_parameters() if 'post_flow' not in name],
-                lr=hparams['lr'],
-                betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
-                weight_decay=hparams['weight_decay'])
-            self.post_flow_optimizer = torch.optim.AdamW(
-                self.model.post_flow.parameters(),
-                lr=hparams['post_flow_lr'],
-                betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
-                weight_decay=hparams['weight_decay'])
-            return [self.optimizer, self.post_flow_optimizer]
-        else:
-            self.optimizer = torch.optim.AdamW(
-                self.model.parameters(),
-                lr=hparams['lr'],
-                betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
-                weight_decay=hparams['weight_decay'])
-            return [self.optimizer]
-
-    def build_scheduler(self, optimizer):
-        return FastSpeech2Task.build_scheduler(self, optimizer[0])

spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/models/autoencoder.py
DELETED
@@ -1,474 +0,0 @@
import os
import torch
import pytorch_lightning as pl
import torch.nn.functional as F
from contextlib import contextmanager
from packaging import version
import numpy as np
from ldm.modules.diffusionmodules.model import Encoder, Decoder
from ldm.modules.distributions.distributions import DiagonalGaussianDistribution
from torch.optim.lr_scheduler import LambdaLR
from ldm.util import instantiate_from_config
# from icecream import ic
# The two imports below are missing from the original file but are required by
# VQModel; assumed from the upstream latent-diffusion / taming-transformers code.
from ldm.modules.ema import LitEma
from taming.modules.vqvae.quantize import VectorQuantizer2 as VectorQuantizer


class VQModel(pl.LightningModule):
    def __init__(self,
                 ddconfig,
                 lossconfig,
                 n_embed,
                 embed_dim,
                 ckpt_path=None,
                 ignore_keys=[],
                 image_key="image",
                 colorize_nlabels=None,
                 monitor=None,
                 batch_resize_range=None,
                 scheduler_config=None,
                 lr_g_factor=1.0,
                 remap=None,
                 sane_index_shape=False,  # tell vector quantizer to return indices as bhw
                 use_ema=False
                 ):
        super().__init__()
        self.embed_dim = embed_dim
        self.n_embed = n_embed
        self.image_key = image_key
        self.encoder = Encoder(**ddconfig)
        self.decoder = Decoder(**ddconfig)
        self.loss = instantiate_from_config(lossconfig)
        self.quantize = VectorQuantizer(n_embed, embed_dim, beta=0.25,
                                        remap=remap,
                                        sane_index_shape=sane_index_shape)
        self.quant_conv = torch.nn.Conv2d(ddconfig["z_channels"], embed_dim, 1)
        self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1)
        if colorize_nlabels is not None:
            assert type(colorize_nlabels) == int
            self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1))
        if monitor is not None:
            self.monitor = monitor
        self.batch_resize_range = batch_resize_range
        if self.batch_resize_range is not None:
            print(f"{self.__class__.__name__}: Using per-batch resizing in range {batch_resize_range}.")

        self.use_ema = use_ema
        if self.use_ema:
            self.model_ema = LitEma(self)
            print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.")

        if ckpt_path is not None:
            self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys)
        self.scheduler_config = scheduler_config
        self.lr_g_factor = lr_g_factor

    @contextmanager
    def ema_scope(self, context=None):
        if self.use_ema:
            self.model_ema.store(self.parameters())
            self.model_ema.copy_to(self)
            if context is not None:
                print(f"{context}: Switched to EMA weights")
        try:
            yield None
        finally:
            if self.use_ema:
                self.model_ema.restore(self.parameters())
                if context is not None:
                    print(f"{context}: Restored training weights")

    def init_from_ckpt(self, path, ignore_keys=list()):
        sd = torch.load(path, map_location="cpu")["state_dict"]
        keys = list(sd.keys())
        for k in keys:
            for ik in ignore_keys:
                if k.startswith(ik):
                    print("Deleting key {} from state_dict.".format(k))
                    del sd[k]
        missing, unexpected = self.load_state_dict(sd, strict=False)
        print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
        if len(missing) > 0:
            print(f"Missing Keys: {missing}")
            print(f"Unexpected Keys: {unexpected}")

    def on_train_batch_end(self, *args, **kwargs):
        if self.use_ema:
            self.model_ema(self)

    def encode(self, x):
        h = self.encoder(x)
        h = self.quant_conv(h)
        quant, emb_loss, info = self.quantize(h)
        return quant, emb_loss, info

    def encode_to_prequant(self, x):
        h = self.encoder(x)
        h = self.quant_conv(h)
        return h

    def decode(self, quant):
        quant = self.post_quant_conv(quant)
        dec = self.decoder(quant)
        return dec

    def decode_code(self, code_b):
        quant_b = self.quantize.embed_code(code_b)
        dec = self.decode(quant_b)
        return dec

    def forward(self, input, return_pred_indices=False):
        quant, diff, (_, _, ind) = self.encode(input)
        dec = self.decode(quant)
        if return_pred_indices:
            return dec, diff, ind
        return dec, diff

    def get_input(self, batch, k):
        x = batch[k]
        if len(x.shape) == 3:
            x = x[..., None]
        x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float()
        if self.batch_resize_range is not None:
            lower_size = self.batch_resize_range[0]
            upper_size = self.batch_resize_range[1]
            if self.global_step <= 4:
                # do the first few batches with max size to avoid later oom
                new_resize = upper_size
            else:
                new_resize = np.random.choice(np.arange(lower_size, upper_size + 16, 16))
            if new_resize != x.shape[2]:
                x = F.interpolate(x, size=new_resize, mode="bicubic")
        x = x.detach()
        return x

    def training_step(self, batch, batch_idx, optimizer_idx):
        # https://github.com/pytorch/pytorch/issues/37142
        # try not to fool the heuristics
        x = self.get_input(batch, self.image_key)
        xrec, qloss, ind = self(x, return_pred_indices=True)

        if optimizer_idx == 0:
            # autoencode
            aeloss, log_dict_ae = self.loss(qloss, x, xrec, optimizer_idx, self.global_step,
                                            last_layer=self.get_last_layer(), split="train",
                                            predicted_indices=ind)

            self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=True)
            return aeloss

        if optimizer_idx == 1:
            # discriminator
            discloss, log_dict_disc = self.loss(qloss, x, xrec, optimizer_idx, self.global_step,
                                                last_layer=self.get_last_layer(), split="train")
            self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=True)
            return discloss

    def validation_step(self, batch, batch_idx):
        log_dict = self._validation_step(batch, batch_idx)
        with self.ema_scope():
            log_dict_ema = self._validation_step(batch, batch_idx, suffix="_ema")
        return log_dict

    def _validation_step(self, batch, batch_idx, suffix=""):
        x = self.get_input(batch, self.image_key)
        xrec, qloss, ind = self(x, return_pred_indices=True)
        aeloss, log_dict_ae = self.loss(qloss, x, xrec, 0,
                                        self.global_step,
                                        last_layer=self.get_last_layer(),
                                        split="val" + suffix,
                                        predicted_indices=ind
                                        )

        discloss, log_dict_disc = self.loss(qloss, x, xrec, 1,
                                            self.global_step,
                                            last_layer=self.get_last_layer(),
                                            split="val" + suffix,
                                            predicted_indices=ind
                                            )
        rec_loss = log_dict_ae[f"val{suffix}/rec_loss"]
        self.log(f"val{suffix}/rec_loss", rec_loss,
                 prog_bar=True, logger=True, on_step=False, on_epoch=True, sync_dist=True)
        self.log(f"val{suffix}/aeloss", aeloss,
                 prog_bar=True, logger=True, on_step=False, on_epoch=True, sync_dist=True)
        if version.parse(pl.__version__) >= version.parse('1.4.0'):
            del log_dict_ae[f"val{suffix}/rec_loss"]
        self.log_dict(log_dict_ae)
        self.log_dict(log_dict_disc)
        return self.log_dict

    def test_step(self, batch, batch_idx):
        x = self.get_input(batch, self.image_key)
        xrec, qloss, ind = self(x, return_pred_indices=True)
        reconstructions = (xrec + 1) / 2  # to mel scale
        test_ckpt_path = os.path.basename(self.trainer.tested_ckpt_path)
        savedir = os.path.join(self.trainer.log_dir, f'output_imgs_{test_ckpt_path}', 'fake_class')
        if not os.path.exists(savedir):
            os.makedirs(savedir)

        file_names = batch['f_name']
        # print(f"reconstructions.shape:{reconstructions.shape}",file_names)
        reconstructions = reconstructions.cpu().numpy().squeeze(1)  # squeeze channel dim
        for b in range(reconstructions.shape[0]):
            vname_num_split_index = file_names[b].rfind('_')  # file_names[b]: video_name+'_'+num
            v_n, num = file_names[b][:vname_num_split_index], file_names[b][vname_num_split_index + 1:]
            save_img_path = os.path.join(savedir, f'{v_n}_sample_{num}.npy')
            np.save(save_img_path, reconstructions[b])

        return None

    def configure_optimizers(self):
        lr_d = self.learning_rate
        lr_g = self.lr_g_factor * self.learning_rate
        print("lr_d", lr_d)
        print("lr_g", lr_g)
        opt_ae = torch.optim.Adam(list(self.encoder.parameters()) +
                                  list(self.decoder.parameters()) +
                                  list(self.quantize.parameters()) +
                                  list(self.quant_conv.parameters()) +
                                  list(self.post_quant_conv.parameters()),
                                  lr=lr_g, betas=(0.5, 0.9))
        opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(),
                                    lr=lr_d, betas=(0.5, 0.9))

        if self.scheduler_config is not None:
            scheduler = instantiate_from_config(self.scheduler_config)

            print("Setting up LambdaLR scheduler...")
            scheduler = [
                {
                    'scheduler': LambdaLR(opt_ae, lr_lambda=scheduler.schedule),
                    'interval': 'step',
                    'frequency': 1
                },
                {
                    'scheduler': LambdaLR(opt_disc, lr_lambda=scheduler.schedule),
                    'interval': 'step',
                    'frequency': 1
                },
            ]
            return [opt_ae, opt_disc], scheduler
        return [opt_ae, opt_disc], []

    def get_last_layer(self):
        return self.decoder.conv_out.weight

    def log_images(self, batch, only_inputs=False, plot_ema=False, **kwargs):
        log = dict()
        x = self.get_input(batch, self.image_key)
        x = x.to(self.device)
        if only_inputs:
            log["inputs"] = x
            return log
        xrec, _ = self(x)
        if x.shape[1] > 3:
            # colorize with random projection
            assert xrec.shape[1] > 3
            x = self.to_rgb(x)
            xrec = self.to_rgb(xrec)
        log["inputs"] = x
        log["reconstructions"] = xrec
        if plot_ema:
            with self.ema_scope():
                xrec_ema, _ = self(x)
                if x.shape[1] > 3: xrec_ema = self.to_rgb(xrec_ema)
                log["reconstructions_ema"] = xrec_ema
        return log

    def to_rgb(self, x):
        assert self.image_key == "segmentation"
        if not hasattr(self, "colorize"):
            self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x))
        x = F.conv2d(x, weight=self.colorize)
        x = 2. * (x - x.min()) / (x.max() - x.min()) - 1.
        return x


class VQModelInterface(VQModel):
    def __init__(self, embed_dim, *args, **kwargs):
        super().__init__(embed_dim=embed_dim, *args, **kwargs)
        self.embed_dim = embed_dim

    def encode(self, x):  # VQModel quantizes inside encode(); VQModelInterface moves quantization into decode()
        h = self.encoder(x)
        h = self.quant_conv(h)
        return h

    def decode(self, h, force_not_quantize=False):
        # also go through quantization layer
        if not force_not_quantize:
            quant, emb_loss, info = self.quantize(h)
        else:
            quant = h
        quant = self.post_quant_conv(quant)
        dec = self.decoder(quant)
        return dec


class AutoencoderKL(pl.LightningModule):
    def __init__(self,
                 ddconfig,
                 lossconfig,
                 embed_dim,
                 ckpt_path=None,
                 ignore_keys=[],
                 image_key="image",
                 colorize_nlabels=None,
                 monitor=None,
                 ):
        super().__init__()
        self.image_key = image_key
        self.encoder = Encoder(**ddconfig)
        self.decoder = Decoder(**ddconfig)
        self.loss = instantiate_from_config(lossconfig)
        assert ddconfig["double_z"]
        self.quant_conv = torch.nn.Conv2d(2 * ddconfig["z_channels"], 2 * embed_dim, 1)
        self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1)
        self.embed_dim = embed_dim
        if colorize_nlabels is not None:
            assert type(colorize_nlabels) == int
            self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1))
        if monitor is not None:
            self.monitor = monitor
        if ckpt_path is not None:
            self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys)
        # self.automatic_optimization = False # hjw for debug

    def init_from_ckpt(self, path, ignore_keys=list()):
        sd = torch.load(path, map_location="cpu")["state_dict"]
        keys = list(sd.keys())
        for k in keys:
            for ik in ignore_keys:
                if k.startswith(ik):
                    print("Deleting key {} from state_dict.".format(k))
                    del sd[k]
        self.load_state_dict(sd, strict=False)
        print(f"Restored from {path}")

    def encode(self, x):
        h = self.encoder(x)
        moments = self.quant_conv(h)
        posterior = DiagonalGaussianDistribution(moments)
        return posterior

    def decode(self, z):
        z = self.post_quant_conv(z)
        dec = self.decoder(z)
        return dec

    def forward(self, input, sample_posterior=True):
        posterior = self.encode(input)
        if sample_posterior:
            z = posterior.sample()
        else:
            z = posterior.mode()
        dec = self.decode(z)
        return dec, posterior

    def get_input(self, batch, k):
        x = batch[k]
        if len(x.shape) == 3:
            x = x[..., None]
        x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float()
        return x

    def training_step(self, batch, batch_idx, optimizer_idx):
        inputs = self.get_input(batch, self.image_key)
        reconstructions, posterior = self(inputs)

        if optimizer_idx == 0:
            # train encoder+decoder+logvar
            aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step,
                                            last_layer=self.get_last_layer(), split="train")
            self.log("aeloss", aeloss, prog_bar=True, logger=True, on_step=True, on_epoch=True)
            self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=False)
            return aeloss

        if optimizer_idx == 1:
            # train the discriminator
            discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step,
                                                last_layer=self.get_last_layer(), split="train")

            self.log("discloss", discloss, prog_bar=True, logger=True, on_step=True, on_epoch=True)
            self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=False)
            return discloss

    def validation_step(self, batch, batch_idx):
        # self.log_images(batch,only_inputs=False,save_dir='mel_result_ae13_26/fake_class')
        return self.log_dict

    def test_step(self, batch, batch_idx):
        test_ckpt_path = os.path.basename(self.trainer.tested_ckpt_path)
        savedir = os.path.join(self.trainer.log_dir, f'output_imgs_{test_ckpt_path}', 'fake_class')
        os.makedirs(savedir, exist_ok=True)
        inputs = self.get_input(batch, self.image_key)  # inputs shape: (b,c,mel_len,T) or (b,c,h,w)
        # ic(inputs.shape)
        # inputs = inputs[...,:624]
        # ic(inputs.shape)
        xrec, posterior = self(inputs)  # reconstructions: (b,c,mel_len,T) or (b,c,h,w)
        file_names = batch['f_name']
        # print(f"reconstructions.shape:{reconstructions.shape}",file_names)
        for b in range(len(file_names)):
            rcon = (xrec[b].squeeze().detach().cpu().numpy() + 1) / 2  # to mel scale, squeeze channel dim
            vname_num_split_index = file_names[b].rfind('_')  # file_names[b]: video_name+'_'+num
            v_n, num = file_names[b][:vname_num_split_index], file_names[b][vname_num_split_index + 1:]
            save_img_path = os.path.join(savedir, f'{v_n}_sample_{num}.npy')
            np.save(save_img_path, rcon)

        return None

    def configure_optimizers(self):
        lr = self.learning_rate
        opt_ae = torch.optim.Adam(list(self.encoder.parameters()) +
                                  list(self.decoder.parameters()) +
                                  list(self.quant_conv.parameters()) +
                                  list(self.post_quant_conv.parameters()),
                                  lr=lr, betas=(0.5, 0.9))
        opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(),
                                    lr=lr, betas=(0.5, 0.9))
        return [opt_ae, opt_disc], []

    def get_last_layer(self):
        return self.decoder.conv_out.weight

    @torch.no_grad()
    def log_images(self, batch, only_inputs=False, save_dir='mel_result_ae13_26_debug/fake_class', **kwargs):  # called from on_validation_batch_end in main.py
        log = dict()
        x = self.get_input(batch, self.image_key)
        x = x.to(self.device)
        if not only_inputs:
            xrec, posterior = self(x)
            if x.shape[1] > 3:
                # colorize with random projection
                assert xrec.shape[1] > 3
                x = self.to_rgb(x)
                xrec = self.to_rgb(xrec)
            log["samples"] = self.decode(torch.randn_like(posterior.sample()))
            log["reconstructions"] = xrec
        log["inputs"] = x
        return log

    def to_rgb(self, x):
        assert self.image_key == "segmentation"
        if not hasattr(self, "colorize"):
            self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x))
        x = F.conv2d(x, weight=self.colorize)
        x = 2. * (x - x.min()) / (x.max() - x.min()) - 1.
        return x


class IdentityFirstStage(torch.nn.Module):
    def __init__(self, *args, vq_interface=False, **kwargs):
        self.vq_interface = vq_interface  # TODO: Should be true by default but check to not break older stuff
        super().__init__()

    def encode(self, x, *args, **kwargs):
        return x

    def decode(self, x, *args, **kwargs):
        return x

    def quantize(self, x, *args, **kwargs):
        if self.vq_interface:
            return x, None, [None, None, None]
        return x

    def forward(self, x, *args, **kwargs):
        return x

spaces/AIWaves/Software_Company/src/agents/Component/__init__.py
DELETED
@@ -1,3 +0,0 @@
from .ExtraComponent import *
from .PromptComponent import *
from .ToolComponent import *

spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/eventpromise-plugin.js
DELETED
@@ -1,22 +0,0 @@
import { WaitEvent, WaitComplete, Delay } from './eventpromise.js'

class EventPromisePlugin extends Phaser.Plugins.BasePlugin {

    constructor(pluginManager) {
        super(pluginManager);
    }
}

var methods = {
    waitEvent: WaitEvent,
    waitComplete: WaitComplete,
    delay: Delay,
}

// mixin
Object.assign(
    EventPromisePlugin.prototype,
    methods
);

export default EventPromisePlugin;

spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/dialog-quest/DialogQuest.d.ts
DELETED
@@ -1,74 +0,0 @@
import { Dialog } from '../ui/ui-components';
import QuestManager from '../../plugins/quest'

export default DialogQuest;

declare namespace DialogQuest {
    interface IConfig extends QuestManager.IConfig {
        dialog: Dialog,
    }

    namespace Events {
        type UpdateChoiceCallbackType = (
            choice: Phaser.GameObjects.GameObject,
            option: QuestManager.QuestionType,
            quest: QuestManager.Quest
        ) => void;

        type UpdateDialogCallbackType = (
            dialog: Dialog,
            question: QuestManager.QuestionType,
            quest: QuestManager.Quest
        ) => void;

        type ClickChoiceCallbackType = (
            choice: Phaser.GameObjects.GameObject,
            dialog: Dialog,
            quest: QuestManager.Quest
        ) => void;

        type ClickActionCallbackType = (
            action: Phaser.GameObjects.GameObject,
            dialog: Dialog,
            quest: QuestManager.Quest
        ) => void;
    }
}

declare class DialogQuest extends Phaser.Events.EventEmitter {
    constructor(
        config?: DialogQuest.IConfig
    );

    start(): this;

    next(key?: string): this;

    isLast(): boolean;

    getData(
        key: string,
        defaultValue?: any
    ): any;

    getData(): any[];

    setData(
        key: string,
        value: any
    ): this;

    incData(
        key: string,
        inc: number,
        defaultValue?: number
    ): this;

    mulData(
        key: string,
        mul: number,
        defaultValue?: number
    ): this;

    clearData(): this;
}

spaces/Andy1621/uniformer_image_detection/configs/dcn/cascade_mask_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco.py
DELETED
@@ -1,5 +0,0 @@
_base_ = '../cascade_rcnn/cascade_mask_rcnn_x101_32x4d_fpn_1x_coco.py'
model = dict(
    backbone=dict(
        dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False),
        stage_with_dcn=(False, True, True, True)))

spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco.py
DELETED
@@ -1,4 +0,0 @@
_base_ = './faster_rcnn_r50_caffe_fpn_mstrain_1x_coco.py'
# learning policy
lr_config = dict(step=[28, 34])
runner = dict(type='EpochBasedRunner', max_epochs=36)

spaces/Andy1621/uniformer_image_segmentation/configs/_base_/schedules/schedule_40k.py
DELETED
@@ -1,9 +0,0 @@
# optimizer
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005)
optimizer_config = dict()
# learning policy
lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False)
# runtime settings
runner = dict(type='IterBasedRunner', max_iters=40000)
checkpoint_config = dict(by_epoch=False, interval=4000)
evaluation = dict(interval=4000, metric='mIoU')

spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/pipelines/llava/README.md
DELETED
@@ -1,9 +0,0 @@
## LLaVA pipeline

This module provides 2 pipelines:
- `llava-7b` - for use with LLaVA v0 7B model (finetuned LLaMA 7B)
- `llava-13b` - for use with LLaVA v0 13B model (finetuned LLaMA 13B)

[LLaVA](https://github.com/haotian-liu/LLaVA) uses CLIP `openai/clip-vit-large-patch14` as the vision model, and then a single linear layer. For 13B the projector weights are in `liuhaotian/LLaVA-13b-delta-v0`, and for 7B they are in `liuhaotian/LLaVA-7b-delta-v0`.

The supported parameter combinations for both the vision model and the projector are: CUDA/32bit, CUDA/16bit, CPU/32bit.
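
For orientation, a minimal sketch (not this repo's code) of the wiring the README describes: CLIP vision features passed through a single linear projector. The 4096 output width (LLaMA-7B's hidden size), the placeholder image, and the random projector weights are assumptions for illustration; the real weights come from the delta checkpoints named above.

```python
# Sketch only: CLIP vision tower followed by one linear projection layer.
# 4096 (assumed LLaMA-7B hidden size) and the untrained projector are
# illustrative; LLaVA loads trained projector weights from its checkpoints.
import torch
from PIL import Image
from transformers import CLIPVisionModel, CLIPImageProcessor

vision = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")
projector = torch.nn.Linear(vision.config.hidden_size, 4096)

image = Image.new("RGB", (224, 224))  # placeholder input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
with torch.no_grad():
    feats = vision(pixel_values).last_hidden_state  # (1, n_patches + 1, 1024)
    image_tokens = projector(feats)                 # (1, n_patches + 1, 4096)
```
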
spaces/Apex-X/GODROOP/roop/processors/frame/core.py
DELETED
@@ -1,88 +0,0 @@
import os
import importlib
import psutil
from concurrent.futures import ThreadPoolExecutor, as_completed
from queue import Queue
from types import ModuleType
from typing import Any, List, Callable
from tqdm import tqdm

import roop

FRAME_PROCESSORS_MODULES: List[ModuleType] = []
FRAME_PROCESSORS_INTERFACE = [
    'pre_check',
    'pre_start',
    'process_frame',
    'process_frames',
    'process_image',
    'process_video',
    'post_process'
]


def load_frame_processor_module(frame_processor: str) -> Any:
    try:
        frame_processor_module = importlib.import_module(f'roop.processors.frame.{frame_processor}')
        for method_name in FRAME_PROCESSORS_INTERFACE:
            if not hasattr(frame_processor_module, method_name):
                raise NotImplementedError
    except (ImportError, NotImplementedError):
        quit(f'Frame processor {frame_processor} crashed.')
    return frame_processor_module


def get_frame_processors_modules(frame_processors: List[str]) -> List[ModuleType]:
    global FRAME_PROCESSORS_MODULES

    if not FRAME_PROCESSORS_MODULES:
        for frame_processor in frame_processors:
            frame_processor_module = load_frame_processor_module(frame_processor)
            FRAME_PROCESSORS_MODULES.append(frame_processor_module)
    return FRAME_PROCESSORS_MODULES


def multi_process_frame(source_path: str, temp_frame_paths: List[str], process_frames: Callable[[str, List[str], Any], None], update: Callable[[], None]) -> None:
    with ThreadPoolExecutor(max_workers=roop.globals.execution_threads) as executor:
        futures = []
        queue = create_queue(temp_frame_paths)
        queue_per_future = len(temp_frame_paths) // roop.globals.execution_threads
        while not queue.empty():
            future = executor.submit(process_frames, source_path, pick_queue(queue, queue_per_future), update)
            futures.append(future)
        for future in as_completed(futures):
            future.result()


def create_queue(temp_frame_paths: List[str]) -> Queue[str]:
    queue: Queue[str] = Queue()
    for frame_path in temp_frame_paths:
        queue.put(frame_path)
    return queue


def pick_queue(queue: Queue[str], queue_per_future: int) -> List[str]:
    queues = []
    for _ in range(queue_per_future):
        if not queue.empty():
            queues.append(queue.get())
    return queues


def process_video(source_path: str, frame_paths: list[str], process_frames: Callable[[str, List[str], Any], None]) -> None:
    progress_bar_format = '{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]'
    total = len(frame_paths)
    with tqdm(total=total, desc='Processing', unit='frame', dynamic_ncols=True, bar_format=progress_bar_format) as progress:
        multi_process_frame(source_path, frame_paths, process_frames, lambda: update_progress(progress))


def update_progress(progress: Any = None) -> None:
    process = psutil.Process(os.getpid())
    memory_usage = process.memory_info().rss / 1024 / 1024 / 1024
    progress.set_postfix({
        'memory_usage': '{:.2f}'.format(memory_usage).zfill(5) + 'GB',
        'execution_providers': roop.globals.execution_providers,
        'execution_threads': roop.globals.execution_threads
    })
    progress.refresh()
    progress.update(1)

spaces/Artrajz/vits-simple-api/vits/models.py
DELETED
@@ -1,419 +0,0 @@
import math
import torch
from torch import nn
from torch.nn import functional as F

from vits import commons
from vits import modules
from vits import attentions

from torch.nn import Conv1d, ConvTranspose1d
from torch.nn.utils import weight_norm
from vits.commons import init_weights


class StochasticDurationPredictor(nn.Module):
    def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
        super().__init__()
        filter_channels = in_channels  # it needs to be removed from future version.
        self.in_channels = in_channels
        self.filter_channels = filter_channels
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.n_flows = n_flows
        self.gin_channels = gin_channels

        self.log_flow = modules.Log()
        self.flows = nn.ModuleList()
        self.flows.append(modules.ElementwiseAffine(2))
        for i in range(n_flows):
            self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
            self.flows.append(modules.Flip())

        self.post_pre = nn.Conv1d(1, filter_channels, 1)
        self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
        self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
        self.post_flows = nn.ModuleList()
        self.post_flows.append(modules.ElementwiseAffine(2))
        for i in range(4):
            self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
            self.post_flows.append(modules.Flip())

        self.pre = nn.Conv1d(in_channels, filter_channels, 1)
        self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
        self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
        if gin_channels != 0:
            self.cond = nn.Conv1d(gin_channels, filter_channels, 1)

    def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
        x = torch.detach(x)
        x = self.pre(x)
        if g is not None:
            g = torch.detach(g)
            x = x + self.cond(g)
        x = self.convs(x, x_mask)
        x = self.proj(x) * x_mask

        if not reverse:
            flows = self.flows
            assert w is not None

            logdet_tot_q = 0
            h_w = self.post_pre(w)
            h_w = self.post_convs(h_w, x_mask)
            h_w = self.post_proj(h_w) * x_mask
            e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
            z_q = e_q
            for flow in self.post_flows:
                z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
                logdet_tot_q += logdet_q
            z_u, z1 = torch.split(z_q, [1, 1], 1)
            u = torch.sigmoid(z_u) * x_mask
            z0 = (w - u) * x_mask
            logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2])
            logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q

            logdet_tot = 0
            z0, logdet = self.log_flow(z0, x_mask)
            logdet_tot += logdet
            z = torch.cat([z0, z1], 1)
            for flow in flows:
                z, logdet = flow(z, x_mask, g=x, reverse=reverse)
                logdet_tot = logdet_tot + logdet
            nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot
            return nll + logq  # [b]
        else:
            flows = list(reversed(self.flows))
            flows = flows[:-2] + [flows[-1]]  # remove a useless vflow
            z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
            for flow in flows:
                z = flow(z, x_mask, g=x, reverse=reverse)
            z0, z1 = torch.split(z, [1, 1], 1)
            logw = z0
            return logw


class DurationPredictor(nn.Module):
    def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
        super().__init__()

        self.in_channels = in_channels
        self.filter_channels = filter_channels
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.gin_channels = gin_channels

        self.drop = nn.Dropout(p_dropout)
        self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2)
        self.norm_1 = modules.LayerNorm(filter_channels)
        self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2)
        self.norm_2 = modules.LayerNorm(filter_channels)
        self.proj = nn.Conv1d(filter_channels, 1, 1)

        if gin_channels != 0:
            self.cond = nn.Conv1d(gin_channels, in_channels, 1)

    def forward(self, x, x_mask, g=None):
        x = torch.detach(x)
        if g is not None:
            g = torch.detach(g)
            x = x + self.cond(g)
        x = self.conv_1(x * x_mask)
        x = torch.relu(x)
        x = self.norm_1(x)
        x = self.drop(x)
        x = self.conv_2(x * x_mask)
        x = torch.relu(x)
        x = self.norm_2(x)
        x = self.drop(x)
        x = self.proj(x * x_mask)
        return x * x_mask


class TextEncoder(nn.Module):
    def __init__(self,
                 n_vocab,
                 out_channels,
                 hidden_channels,
                 filter_channels,
                 n_heads,
                 n_layers,
                 kernel_size,
                 p_dropout,
                 emotion_embedding,
                 bert_embedding):
        super().__init__()
        self.n_vocab = n_vocab
        self.out_channels = out_channels
        self.hidden_channels = hidden_channels
        self.filter_channels = filter_channels
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.emotion_embedding = emotion_embedding

        if self.n_vocab != 0:
            self.emb = nn.Embedding(n_vocab, hidden_channels)
            if emotion_embedding:
                self.emo_proj = nn.Linear(1024, hidden_channels)
            if bert_embedding:
                self.emb_bert = nn.Linear(256, hidden_channels)
            nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5)

        self.encoder = attentions.Encoder(
            hidden_channels,
            filter_channels,
            n_heads,
            n_layers,
            kernel_size,
            p_dropout)
        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)

    def forward(self, x, x_lengths, emotion_embedding=None, bert=None):
        if self.n_vocab != 0:
            x = self.emb(x) * math.sqrt(self.hidden_channels)  # [b, t, h]
        if emotion_embedding is not None:
            x = x + self.emo_proj(emotion_embedding.unsqueeze(1))
        if bert is not None:
            x = x + self.emb_bert(bert)
        x = torch.transpose(x, 1, -1)  # [b, h, t]
        x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)

        x = self.encoder(x * x_mask, x_mask)
        stats = self.proj(x) * x_mask

        m, logs = torch.split(stats, self.out_channels, dim=1)
        return x, m, logs, x_mask


class ResidualCouplingBlock(nn.Module):
    def __init__(self,
                 channels,
                 hidden_channels,
                 kernel_size,
                 dilation_rate,
                 n_layers,
                 n_flows=4,
                 gin_channels=0):
        super().__init__()
        self.channels = channels
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.n_flows = n_flows
        self.gin_channels = gin_channels

        self.flows = nn.ModuleList()
        for i in range(n_flows):
            self.flows.append(
                modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers,
                                              gin_channels=gin_channels, mean_only=True))
            self.flows.append(modules.Flip())

    def forward(self, x, x_mask, g=None, reverse=False):
        if not reverse:
            for flow in self.flows:
                x, _ = flow(x, x_mask, g=g, reverse=reverse)
        else:
            for flow in reversed(self.flows):
                x = flow(x, x_mask, g=g, reverse=reverse)
        return x


class PosteriorEncoder(nn.Module):
    def __init__(self,
                 in_channels,
                 out_channels,
                 hidden_channels,
                 kernel_size,
                 dilation_rate,
                 n_layers,
                 gin_channels=0):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.gin_channels = gin_channels

        self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
        self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)

    def forward(self, x, x_lengths, g=None):
        x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
        x = self.pre(x) * x_mask
        x = self.enc(x, x_mask, g=g)
        stats = self.proj(x) * x_mask
        m, logs = torch.split(stats, self.out_channels, dim=1)
        z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
        return z, m, logs, x_mask


class Generator(torch.nn.Module):
    def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates,
                 upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
        super(Generator, self).__init__()
        self.num_kernels = len(resblock_kernel_sizes)
        self.num_upsamples = len(upsample_rates)
        self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
        resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2

        self.ups = nn.ModuleList()
        for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
            self.ups.append(weight_norm(
                ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)),
                                k, u, padding=(k - u) // 2)))

        self.resblocks = nn.ModuleList()
        for i in range(len(self.ups)):
            ch = upsample_initial_channel // (2 ** (i + 1))
            for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
                self.resblocks.append(resblock(ch, k, d))

        self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
        self.ups.apply(init_weights)

        if gin_channels != 0:
            self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)

    def forward(self, x, g=None):
        x = self.conv_pre(x)
        if g is not None:
            x = x + self.cond(g)

        for i in range(self.num_upsamples):
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            x = self.ups[i](x)
            xs = None
            for j in range(self.num_kernels):
                if xs is None:
                    xs = self.resblocks[i * self.num_kernels + j](x)
                else:
                    xs += self.resblocks[i * self.num_kernels + j](x)
            x = xs / self.num_kernels
        x = F.leaky_relu(x)
        x = self.conv_post(x)
        x = torch.tanh(x)

        return x


class SynthesizerTrn(nn.Module):
    """
    Synthesizer for Training
    """

    def __init__(self,
                 n_vocab,
                 spec_channels,
                 segment_size,
                 inter_channels,
                 hidden_channels,
                 filter_channels,
                 n_heads,
                 n_layers,
                 kernel_size,
                 p_dropout,
                 resblock,
                 resblock_kernel_sizes,
                 resblock_dilation_sizes,
                 upsample_rates,
                 upsample_initial_channel,
                 upsample_kernel_sizes,
                 n_speakers=0,
                 gin_channels=0,
                 use_sdp=True,
                 emotion_embedding=False,
                 bert_embedding=False,
                 **kwargs):

        super().__init__()
        self.n_vocab = n_vocab
        self.spec_channels = spec_channels
        self.inter_channels = inter_channels
        self.hidden_channels = hidden_channels
        self.filter_channels = filter_channels
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.resblock = resblock
        self.resblock_kernel_sizes = resblock_kernel_sizes
        self.resblock_dilation_sizes = resblock_dilation_sizes
        self.upsample_rates = upsample_rates
        self.upsample_initial_channel = upsample_initial_channel
        self.upsample_kernel_sizes = upsample_kernel_sizes
        self.segment_size = segment_size
        self.n_speakers = n_speakers
        self.gin_channels = gin_channels
        self.use_sdp = use_sdp
        self.emotion_embedding = emotion_embedding
        self.bert_embedding = bert_embedding

        self.enc_p = TextEncoder(n_vocab,
                                 inter_channels,
                                 hidden_channels,
                                 filter_channels,
                                 n_heads,
                                 n_layers,
                                 kernel_size,
                                 p_dropout,
                                 emotion_embedding,
                                 bert_embedding)
        self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates,
                             upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
        self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16,
                                      gin_channels=gin_channels)
        self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)

        if self.use_sdp:
            self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
        else:
            self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)

        if n_speakers >= 1:
            self.emb_g = nn.Embedding(n_speakers, gin_channels)

    def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None,
              emotion_embedding=None, bert=None):
        x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, emotion_embedding, bert)
        if self.n_speakers > 0:
            g = self.emb_g(sid).unsqueeze(-1)  # [b, h, 1]
        else:
            g = None

        if self.use_sdp:
            logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
        else:
            logw = self.dp(x, x_mask, g=g)
        w = torch.exp(logw) * x_mask * length_scale
        w_ceil = torch.ceil(w)
        y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
        y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
        attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
        attn = commons.generate_path(w_ceil, attn_mask)

        m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)  # [b, t', t], [b, t, d] -> [b, d, t']
        logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)  # [b, t', t], [b, t, d] -> [b, d, t']

        z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
        z = self.flow(z_p, y_mask, g=g, reverse=True)
        o = self.dec((z * y_mask)[:, :, :max_len], g=g)
        return o, attn, y_mask, (z, z_p, m_p, logs_p)

    def voice_conversion(self, y, y_lengths, sid_src, sid_tgt):
        assert self.n_speakers > 0, "n_speakers have to be larger than 0."
        g_src = self.emb_g(sid_src).unsqueeze(-1)
        g_tgt = self.emb_g(sid_tgt).unsqueeze(-1)
        z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src)
        z_p = self.flow(z, y_mask, g=g_src)
        z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
        o_hat = self.dec(z_hat * y_mask, g=g_tgt)
        return o_hat, y_mask, (z, z_p, z_hat)

spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/stop.py
DELETED
@@ -1,103 +0,0 @@
# Copyright 2016–2021 Julien Danjou
# Copyright 2016 Joshua Harlow
# Copyright 2013-2014 Ray Holder
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import typing

from pip._vendor.tenacity import _utils

if typing.TYPE_CHECKING:
    import threading

    from pip._vendor.tenacity import RetryCallState


class stop_base(abc.ABC):
    """Abstract base class for stop strategies."""

    @abc.abstractmethod
    def __call__(self, retry_state: "RetryCallState") -> bool:
        pass

    def __and__(self, other: "stop_base") -> "stop_all":
        return stop_all(self, other)

    def __or__(self, other: "stop_base") -> "stop_any":
        return stop_any(self, other)


StopBaseT = typing.Union[stop_base, typing.Callable[["RetryCallState"], bool]]


class stop_any(stop_base):
    """Stop if any of the stop condition is valid."""

    def __init__(self, *stops: stop_base) -> None:
        self.stops = stops

    def __call__(self, retry_state: "RetryCallState") -> bool:
        return any(x(retry_state) for x in self.stops)


class stop_all(stop_base):
    """Stop if all the stop conditions are valid."""

    def __init__(self, *stops: stop_base) -> None:
        self.stops = stops

    def __call__(self, retry_state: "RetryCallState") -> bool:
        return all(x(retry_state) for x in self.stops)


class _stop_never(stop_base):
    """Never stop."""

    def __call__(self, retry_state: "RetryCallState") -> bool:
        return False


stop_never = _stop_never()


class stop_when_event_set(stop_base):
    """Stop when the given event is set."""

    def __init__(self, event: "threading.Event") -> None:
        self.event = event

    def __call__(self, retry_state: "RetryCallState") -> bool:
        return self.event.is_set()


class stop_after_attempt(stop_base):
    """Stop when the previous attempt >= max_attempt."""

    def __init__(self, max_attempt_number: int) -> None:
        self.max_attempt_number = max_attempt_number

    def __call__(self, retry_state: "RetryCallState") -> bool:
        return retry_state.attempt_number >= self.max_attempt_number


class stop_after_delay(stop_base):
    """Stop when the time from the first attempt >= limit."""

    def __init__(self, max_delay: _utils.time_unit_type) -> None:
        self.max_delay = _utils.to_seconds(max_delay)

    def __call__(self, retry_state: "RetryCallState") -> bool:
        if retry_state.seconds_since_start is None:
            raise RuntimeError("__call__() called but seconds_since_start is not set")
        return retry_state.seconds_since_start >= self.max_delay
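
A quick illustration of why the __and__/__or__ overloads above matter: stop strategies compose with | and &. A minimal usage sketch against the public tenacity package (the file above is pip's vendored copy); flaky_call is a hypothetical function used only for illustration:

    from tenacity import retry, stop_after_attempt, stop_after_delay

    # stop as soon as either condition holds (stop_any built via __or__)
    @retry(stop=(stop_after_delay(10) | stop_after_attempt(5)))
    def flaky_call() -> str:
        raise RuntimeError("transient failure")
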
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_resources/__init__.py
DELETED
@@ -1,36 +0,0 @@
"""Read resources contained within a package."""

from ._common import (
    as_file,
    files,
    Package,
)

from ._legacy import (
    contents,
    open_binary,
    read_binary,
    open_text,
    read_text,
    is_resource,
    path,
    Resource,
)

from .abc import ResourceReader


__all__ = [
    'Package',
    'Resource',
    'ResourceReader',
    'as_file',
    'contents',
    'files',
    'is_resource',
    'open_binary',
    'open_text',
    'path',
    'read_binary',
    'read_text',
]

spaces/Atualli/yoloxTeste/configs/yolox_s.py
DELETED
@@ -1,15 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.

import os

from yolox.exp import Exp as MyExp


class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.depth = 0.33
        self.width = 0.50
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]

spaces/BaitMan/abroader-otters/README.md
DELETED
@@ -1,10 +0,0 @@
---
title: Abroader Otters
emoji: 💻
colorFrom: blue
colorTo: indigo
sdk: docker
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

spaces/Bart92/RVC_HF/tools/app.py
DELETED
@@ -1,148 +0,0 @@
-import logging
-import os
-
-# os.system("wget -P cvec/ https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt")
-import gradio as gr
-from dotenv import load_dotenv
-
-from configs.config import Config
-from i18n import I18nAuto
-from infer.modules.vc.pipeline import Pipeline
-VC = Pipeline
-
-logging.getLogger("numba").setLevel(logging.WARNING)
-logging.getLogger("markdown_it").setLevel(logging.WARNING)
-logging.getLogger("urllib3").setLevel(logging.WARNING)
-logging.getLogger("matplotlib").setLevel(logging.WARNING)
-logger = logging.getLogger(__name__)
-
-i18n = I18nAuto()
-#(i18n)
-
-load_dotenv()
-config = Config()
-vc = VC(config)
-
-weight_root = os.getenv("weight_root")
-weight_uvr5_root = os.getenv("weight_uvr5_root")
-index_root = os.getenv("index_root")
-names = []
-hubert_model = None
-for name in os.listdir(weight_root):
-    if name.endswith(".pth"):
-        names.append(name)
-index_paths = []
-for root, dirs, files in os.walk(index_root, topdown=False):
-    for name in files:
-        if name.endswith(".index") and "trained" not in name:
-            index_paths.append("%s/%s" % (root, name))
-
-
-app = gr.Blocks()
-with app:
-    with gr.Tabs():
-        with gr.TabItem("在线demo"):
-            gr.Markdown(
-                value="""
-                RVC 在线demo
-                """
-            )
-            sid = gr.Dropdown(label=i18n("推理音色"), choices=sorted(names))
-            with gr.Column():
-                spk_item = gr.Slider(
-                    minimum=0,
-                    maximum=2333,
-                    step=1,
-                    label=i18n("请选择说话人id"),
-                    value=0,
-                    visible=False,
-                    interactive=True,
-                )
-            sid.change(fn=vc.get_vc, inputs=[sid], outputs=[spk_item])
-            gr.Markdown(
-                value=i18n("男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ")
-            )
-            vc_input3 = gr.Audio(label="上传音频(长度小于90秒)")
-            vc_transform0 = gr.Number(label=i18n("变调(整数, 半音数量, 升八度12降八度-12)"), value=0)
-            f0method0 = gr.Radio(
-                label=i18n("选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU"),
-                choices=["pm", "harvest", "crepe", "rmvpe"],
-                value="pm",
-                interactive=True,
-            )
-            filter_radius0 = gr.Slider(
-                minimum=0,
-                maximum=7,
-                label=i18n(">=3则使用对harvest音高识别的结果使用中值滤波,数值为滤波半径,使用可以削弱哑音"),
-                value=3,
-                step=1,
-                interactive=True,
-            )
-            with gr.Column():
-                file_index1 = gr.Textbox(
-                    label=i18n("特征检索库文件路径,为空则使用下拉的选择结果"),
-                    value="",
-                    interactive=False,
-                    visible=False,
-                )
-                file_index2 = gr.Dropdown(
-                    label=i18n("自动检测index路径,下拉式选择(dropdown)"),
-                    choices=sorted(index_paths),
-                    interactive=True,
-                )
-            index_rate1 = gr.Slider(
-                minimum=0,
-                maximum=1,
-                label=i18n("检索特征占比"),
-                value=0.88,
-                interactive=True,
-            )
-            resample_sr0 = gr.Slider(
-                minimum=0,
-                maximum=48000,
-                label=i18n("后处理重采样至最终采样率,0为不进行重采样"),
-                value=0,
-                step=1,
-                interactive=True,
-            )
-            rms_mix_rate0 = gr.Slider(
-                minimum=0,
-                maximum=1,
-                label=i18n("输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络"),
-                value=1,
-                interactive=True,
-            )
-            protect0 = gr.Slider(
-                minimum=0,
-                maximum=0.5,
-                label=i18n("保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果"),
-                value=0.33,
-                step=0.01,
-                interactive=True,
-            )
-            f0_file = gr.File(label=i18n("F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调"))
-            but0 = gr.Button(i18n("转换"), variant="primary")
-            vc_output1 = gr.Textbox(label=i18n("输出信息"))
-            vc_output2 = gr.Audio(label=i18n("输出音频(右下角三个点,点了可以下载)"))
-            but0.click(
-                vc.vc_single,
-                [
-                    spk_item,
-                    vc_input3,
-                    vc_transform0,
-                    f0_file,
-                    f0method0,
-                    file_index1,
-                    file_index2,
-                    # file_big_npy1,
-                    index_rate1,
-                    filter_radius0,
-                    resample_sr0,
-                    rms_mix_rate0,
-                    protect0,
-                ],
-                [vc_output1, vc_output2],
-            )
-
-
-app.launch()
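For reference, the app above follows the standard Gradio Blocks event-wiring pattern: a dropdown change loads a model and a button click runs inference. A stripped-down sketch of that pattern (handler names and components are illustrative, not from the RVC codebase):

    import gradio as gr

    def load_model(name):
        return 0  # reset the speaker-id slider when the model changes

    def convert(speaker_id, audio):
        return "ok", audio  # status text plus converted audio

    with gr.Blocks() as demo:
        sid = gr.Dropdown(choices=["a.pth", "b.pth"], label="model")
        spk = gr.Slider(minimum=0, maximum=10, step=1, label="speaker id")
        audio_in = gr.Audio(label="input")
        btn = gr.Button("convert")
        info = gr.Textbox(label="log")
        audio_out = gr.Audio(label="output")

        sid.change(fn=load_model, inputs=[sid], outputs=[spk])
        btn.click(convert, [spk, audio_in], [info, audio_out])

    demo.launch()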
spaces/Benson/text-generation/Examples/Como Bajar La Llamada Del Deber Warzone Mvil En Iphone.md
DELETED
@@ -1,44 +0,0 @@
-
-<h1>Call of Duty: Warzone Mobile - Everything You Need to Know</h1>
-<p>If you are a fan of Call of Duty games and battle royale, you are in for a treat. Call of Duty: Warzone Mobile is the latest addition to the Call of Duty franchise, bringing the best of both worlds to your mobile device. In this article, we will tell you everything you need to know about this exciting new game, including what it is, how to download and play it, and what its features and benefits are. Let's dive in!</p>
-<h2>What is Call of Duty: Warzone Mobile?</h2>
-<p>Call of Duty: Warzone Mobile is a mobile version of the popular game Call of Duty: Warzone, a free-to-play battle royale that is part of the Call of Duty: Modern Warfare II series. It is developed by Activision Publishing, Inc. and is expected to launch worldwide in 2023. Here are some of the game's main aspects:</p>
-<h2>How to download Call of Duty Warzone Mobile on iPhone</h2><br /><p><b><b>DOWNLOAD</b> ····· <a href="https://bltlly.com/2v6LH7">https://bltlly.com/2v6LH7</a></b></p><br /><br />
-<h3>A new era of mobile battle royale</h3>
-<p>Call of Duty: Warzone Mobile is not just a port of the existing game but a new game built from the ground up for mobile devices. It features authentic Call of Duty gameplay, weapons, movement, and vehicles, as well as unique, fun modes that allow up to 120 live players in a match. Thanks to unified Call of Duty technology, players can use social features such as friends, chat channels, and the Battle Pass across platforms to enjoy a truly connected multiplayer FPS experience.</p>
-<h3>The return of Verdansk</h3>
-<p>The fan-favorite battle royale map is back! Verdansk is a massive map with dozens of points of interest to drop into, loot, and fight over. The map also features dynamic events, such as supply drops, airstrikes, and jailbreaks, which add more excitement and unpredictability to matches. And if you are eliminated, you can get a second chance at survival by winning a duel in the Gulag!</p>
-<h3>More competition, more fun</h3>
-
-<h2>How to download and play Call of Duty: Warzone Mobile?</h2>
-<p>If you are eager to play Call of Duty: Warzone Mobile, here are some steps to follow:</p>
-<h3>Pre-register and pre-order on Google Play and the App Store</h3>
-<p>The first thing you need to do is pre-register on Google Play or pre-order on the App Store. That way, you can be notified when the game is available to download and install. You can also scan the QR code on the official website to pre-register or pre-order directly.</p>
-<h3>Earn rewards by reaching global milestones</h3>
-<p>By pre-registering or pre-ordering, you can also earn rewards to use when the game launches worldwide. The rewards include exclusive skins, weapons, emblems, and even a new map! The rewards are based on global milestones reached by the number of pre-registrations or pre-orders. You can check the progress and details on the official website.</p>
-<p></p>
-<p>... choose your landing spot, loot, fight, or hide. With so many options and possibilities, you will never get tired of playing this game.</p>
-<h3>Stunning graphics and intuitive controls</h3>
-<p>Call of Duty: Warzone Mobile is not only fun to play but also beautiful to look at. The game features stunning graphics that show off the detail and realism of the map, weapons, and characters. It also runs smoothly on most mobile devices thanks to optimized performance and compatibility. In addition, the game has intuitive controls that are easy to learn and use. You can also customize your layout, sensitivity, and settings to suit your preferences.</p>
-<h2>Conclusion</h2>
-
-<h2>Frequently asked questions</h2>
-<p>Here are some frequently asked questions about Call of Duty: Warzone Mobile:</p>
-<h4>Q: Is Call of Duty: Warzone Mobile free to play?</h4>
-<p>A: Yes, Call of Duty: Warzone Mobile is free to play. You do not need to pay anything to download and play the game. However, you can buy in-game items and currency with real money if you want.</p>
-<h4>Q: What are the minimum requirements to play Call of Duty: Warzone Mobile?</h4>
-<p>A: The minimum requirements to play Call of Duty: Warzone Mobile are as follows:</p>
-<table>
-<tr><td>OS</td><td>Android 5.0 or iOS 10.0 or higher</td></tr>
-<tr><td>RAM</td><td>2 GB or more</td></tr>
-<tr><td>Storage</td><td>4 GB or more</td></tr>
-<tr><td>Internet</td><td>Wi-Fi or cellular data (4G or higher)</td></tr>
-</table>
-<h4>Q: Can I play Call of Duty: Warzone Mobile offline?</h4>
-<p>A: No, you cannot play Call of Duty: Warzone Mobile offline. You need an internet connection to play, since it is an online multiplayer game.</p>
-<h4>Q: Can I play Call of Duty: Warzone Mobile with a controller?</h4>
-<p>A: Yes, you can play Call of Duty: Warzone Mobile with a controller. The game supports most Bluetooth controllers compatible with your device. You can also adjust the controller settings in the game menu.</p>
-<h4>Q: How can I contact customer support for Call of Duty: Warzone Mobile?</h4>
-<p>A: You can contact customer support for Call of Duty: Warzone Mobile by visiting the official website and clicking the "Support" button. You can also reach the support page from within the game by tapping the "Settings" icon and then the "Help" button.</p> 64aa2da5cf<br />
-<br />
-<br />
spaces/Benson/text-generation/Examples/Descarga Gratuita Multijugador En Lnea De Picas.md
DELETED
@@ -1,62 +0,0 @@
-<br />
-<h1>Spades Online Multiplayer Free Download: How to Play the Classic Card Game with Friends</h1>
-<p>Spades is a popular card game that can be played online or offline with friends or strangers. It is a game of strategy, skill, and luck that requires teamwork and communication. If you are looking for a fun and challenging way to spend your time, spades online multiplayer free download is a great option. In this article, we will explain what spades is and how to play it, where to download spades online for free, and some tips and tricks to improve your spades skills.</p>
-<h2>What is spades and how to play it</h2>
-<p>Spades is a card game that originated in the United States in the 1930s. It is played with a standard 52-card deck divided into four suits: spades, hearts, diamonds, and clubs. Spades are always the trump suit, meaning they can beat any other suit in a trick. The game can be played by two or four players, either individually or in pairs.</p>
-<h2>spades online multiplayer free download</h2><br /><p><b><b>DOWNLOAD</b> ↔ <a href="https://bltlly.com/2v6Mme">https://bltlly.com/2v6Mme</a></b></p><br /><br />
-<h3>The basics of spades</h3>
-<p>The game consists of several rounds, called hands, each with 13 tricks. A trick is a round of play where each player plays one card from their hand, following the suit of the first card played if possible. The player who plays the highest card of the led suit, or the highest spade, wins the trick and leads the next one.</p>
-<p>The game begins with each player being dealt 13 cards. Then each player makes a bid, which is an estimate of how many tricks they can win in that hand. The bids are added together to form the team's contract, the minimum number of tricks they must win to avoid a penalty. The team with the highest bid has the privilege of naming the first card to be played.</p>
-<h3>The bidding and scoring system</h3>
-
-<p>If you make your contract, you score 10 points for each trick in your bid. For example, if you bid 5 and win 5 tricks, you score 50 points. If you take more tricks than your bid, you score one point for each extra trick, called a bag. For example, if you bid 5 and win 7 tricks, you score 52 points (50 for your bid and 2 for your bags). However, if you accumulate 10 bags over the course of the game, you lose 100 points as a penalty.</p>
-<p>If you fail to make your contract, you lose 10 points for each trick in your bid. For example, if you bid 5 and win only 4 tricks, you lose 50 points. The number of extra or missing tricks you take does not matter in this case.</p>
-<p>There are also some special bids that can raise or lower your score significantly. A nil bid is when you bid zero tricks, meaning you will try not to win any tricks in that hand. If you succeed, you score 100 points. If you fail, you lose 100 points. A blind nil bid is when you bid nil before looking at your cards. If you succeed, you score 200 points. If you fail, you lose 200 points.</p>
-<h3>The different game modes and variations</h3>
-
-<p>If you want to play spades online with your friends or other players from around the world, you have many options to choose from. Here are some of the best spades online multiplayer free download apps and websites you can try: <h3>Spades: Classic Card Games by MobilityWare</h3>
-<p>This is one of the most popular and highly rated spades apps on the App Store and Google Play. It offers a smooth, user-friendly interface with customizable settings and themes. You can play online with real players or offline with bots, in different game modes and difficulty levels. You can also chat with your opponents, track your stats, and earn achievements and rewards.</p>
-<p>You can download Spades: Classic Card Games by MobilityWare for free from [here] for iOS devices and [here] for Android devices.</p>
-<p></p>
-<h3>Spades Online Free by VIPSpades.com</h3>
-<p>This is one of the best spades websites, accessible from any browser. It offers a sleek, modern design with high-quality graphics and sounds. You can play online with thousands of players from different countries, in different game modes and variations. You can also chat with your friends, join clubs, take part in tournaments, and earn coins and gems.</p>
-<p>You can play Spades Online Free by VIPSpades.com for free from [here].</p>
-<h3>Spades by Karmangames</h3>
-<p>This is another great spades app you can download on your mobile device or tablet. It offers a simple, intuitive interface with realistic animations and effects. You can play online with other players or offline against artificial intelligence, in different game modes and difficulty levels. You can also chat with your partner, view your history, and adjust your settings.</p>
-<p>You can download Spades by Karmangames for free from [here] for iOS devices and [here] for Android devices.</p>
-<h2>Tips and tricks to improve your spades skills</h2>
-
-<p>One of the most important skills in spades is estimating your bid accurately. That means you have to consider the strength of your hand, the number of spades you hold, the likelihood of winning tricks, and your opponents' bids. Here are some guidelines that can help: - If you have many high cards (Aces, Kings, Queens), especially in spades, you can bid higher than usual. - If you have many low cards (2s, 3s, 4s), especially in non-spade suits, you can bid lower than usual. - If you have a balanced hand (a mix of high and low cards across suits), you can bid around 4 or 5 tricks. - If you have a long suit (four or more cards in one suit), especially in spades, you can count on winning at least one trick per card in that suit. - If you have a short suit (one or two cards in a suit), especially in non-spade suits, you can try to shed those cards early and hope to win some tricks with spades later. - If you have jokers (if you play with them), you can count them as spades and bid accordingly. - If you have a nil or blind nil bid, you can try to avoid playing high cards or spades and hope your partner or opponents take the tricks for you. - If you have a partner, you can try to coordinate your bids with them based on the signals they give you or the cards they play. <h3>How to communicate with your partner effectively</h3>
-
-<p>A common challenge is avoiding taking too many or too few bags. Bags are extra tricks taken over your bid, which can add up to a 100-point penalty if you collect 10 of them. On the other hand, if you take too few tricks, you risk failing your contract and losing points. Here are some strategies that can help you avoid taking too many or too few bags: - If you are close to collecting 10 bags, you can try bidding higher than usual or playing more aggressively to avoid taking extra tricks. - If you are far from collecting 10 bags, you can try bidding lower than usual or playing more conservatively to avoid losing tricks. - If you are not sure how many bags you have, you can check the scoreboard or ask your partner before making your bid or playing your card. - If you have many high cards or spades, you can try playing or leading them early to avoid getting stuck with them at the end and taking unwanted tricks. - If you have many low cards or non-spade suits, you can try saving them for later or following with them to avoid losing tricks you could have won with spades. <h2>Conclusion</h2>
-<p>Spades is a fun and exciting card game you can play online with friends or strangers. It is a game of strategy, skill, and luck that requires teamwork and communication. To play spades online multiplayer free download, you need to know what spades is and how to play it, where to download spades online for free, and some tips and tricks to improve your spades skills. We hope this article has helped you learn more about spades and how to enjoy it online.</p>
-<h2>Frequently asked questions</h2>
-<p>Here are some frequently asked questions about spades online multiplayer free download:</p>
-<table>
-<tr>
-<th>Question</th>
-<th>Answer</th>
-</tr>
-<tr>
-<td>How many players can play spades online?</td>
-<td>You can play spades online with two or four players, either individually or in pairs.</td>
-</tr>
-<tr>
-<td>How can I find other players online?</td>
-
-</tr>
-<tr>
-<td>Can I chat with other players online?</td>
-<td>Yes, most online apps and websites let you chat with other players online. You can use the chat feature to communicate with your partner, banter with your opponents, or make new friends.</td>
-</tr>
-<tr>
-<td>Can I play spades online for free?</td>
-<td>Yes, most online apps and websites are free to download and play. However, some may offer in-app purchases or ads to support their development and maintenance.</td>
-</tr>
-<tr>
-<td>Can I play spades online offline?</td>
-<td>Yes, some spades apps and websites let you play spades offline with bots or local players. You can use this feature to practice your skills, play without an internet connection, or have fun with family and friends.</td>
-</tr>
-</table></p> 64aa2da5cf<br />
-<br />
-<br />
spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/utils.py
DELETED
@@ -1,100 +0,0 @@
-# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# https://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-import sys
-from collections import namedtuple
-
-_ServiceContext = namedtuple(
-    'ServiceContext',
-    [
-        'service_name',
-        'service_model',
-        'service_waiter_model',
-        'resource_json_definitions',
-    ],
-)
-
-
-class ServiceContext(_ServiceContext):
-    """Provides important service-wide, read-only information about a service
-
-    :type service_name: str
-    :param service_name: The name of the service
-
-    :type service_model: :py:class:`botocore.model.ServiceModel`
-    :param service_model: The model of the service.
-
-    :type service_waiter_model: :py:class:`botocore.waiter.WaiterModel` or
-        a waiter model-like object such as
-        :py:class:`boto3.utils.LazyLoadedWaiterModel`
-    :param service_waiter_model: The waiter model of the service.
-
-    :type resource_json_definitions: dict
-    :param resource_json_definitions: The loaded json models of all resource
-        shapes for a service. It is equivalient of loading a
-        ``resource-1.json`` and retrieving the value at the key "resources".
-    """
-
-    pass
-
-
-def import_module(name):
-    """Import module given a name.
-
-    Does not support relative imports.
-
-    """
-    __import__(name)
-    return sys.modules[name]
-
-
-def lazy_call(full_name, **kwargs):
-    parent_kwargs = kwargs
-
-    def _handler(**kwargs):
-        module, function_name = full_name.rsplit('.', 1)
-        module = import_module(module)
-        kwargs.update(parent_kwargs)
-        return getattr(module, function_name)(**kwargs)
-
-    return _handler
-
-
-def inject_attribute(class_attributes, name, value):
-    if name in class_attributes:
-        raise RuntimeError(
-            f'Cannot inject class attribute "{name}", attribute '
-            f'already exists in class dict.'
-        )
-    else:
-        class_attributes[name] = value
-
-
-class LazyLoadedWaiterModel:
-    """A lazily loaded waiter model
-
-    This does not load the service waiter model until an attempt is made
-    to retrieve the waiter model for a specific waiter. This is helpful
-    in docstring generation where we do not need to actually need to grab
-    the waiter-2.json until it is accessed through a ``get_waiter`` call
-    when the docstring is generated/accessed.
-    """
-
-    def __init__(self, bc_session, service_name, api_version):
-        self._session = bc_session
-        self._service_name = service_name
-        self._api_version = api_version
-
-    def get_waiter(self, waiter_name):
-        return self._session.get_waiter_model(
-            self._service_name, self._api_version
-        ).get_waiter(waiter_name)
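For reference, the lazy_call helper above defers importing its target until the returned handler is actually invoked. A minimal sketch of the pattern using a standard-library target (the target choice is illustrative):

    from boto3.utils import lazy_call

    # Nothing is imported at this point; 'json' loads on the first call.
    handler = lazy_call('json.dumps', indent=2)
    print(handler(obj={'a': 1}))  # imports json, then calls json.dumps(obj=..., indent=2)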
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/cmd.py
DELETED
@@ -1,436 +0,0 @@
-"""distutils.cmd
-
-Provides the Command class, the base class for the command classes
-in the distutils.command package.
-"""
-
-import sys
-import os
-import re
-from distutils.errors import DistutilsOptionError
-from distutils import util, dir_util, file_util, archive_util, dep_util
-from distutils import log
-
-
-class Command:
-    """Abstract base class for defining command classes, the "worker bees"
-    of the Distutils. A useful analogy for command classes is to think of
-    them as subroutines with local variables called "options". The options
-    are "declared" in 'initialize_options()' and "defined" (given their
-    final values, aka "finalized") in 'finalize_options()', both of which
-    must be defined by every command class. The distinction between the
-    two is necessary because option values might come from the outside
-    world (command line, config file, ...), and any options dependent on
-    other options must be computed *after* these outside influences have
-    been processed -- hence 'finalize_options()'. The "body" of the
-    subroutine, where it does all its work based on the values of its
-    options, is the 'run()' method, which must also be implemented by every
-    command class.
-    """
-
-    # 'sub_commands' formalizes the notion of a "family" of commands,
-    # eg. "install" as the parent with sub-commands "install_lib",
-    # "install_headers", etc. The parent of a family of commands
-    # defines 'sub_commands' as a class attribute; it's a list of
-    #   (command_name : string, predicate : unbound_method | string | None)
-    # tuples, where 'predicate' is a method of the parent command that
-    # determines whether the corresponding command is applicable in the
-    # current situation. (Eg. we "install_headers" is only applicable if
-    # we have any C header files to install.) If 'predicate' is None,
-    # that command is always applicable.
-    #
-    # 'sub_commands' is usually defined at the *end* of a class, because
-    # predicates can be unbound methods, so they must already have been
-    # defined. The canonical example is the "install" command.
-    sub_commands = []
-
-    # -- Creation/initialization methods -------------------------------
-
-    def __init__(self, dist):
-        """Create and initialize a new Command object. Most importantly,
-        invokes the 'initialize_options()' method, which is the real
-        initializer and depends on the actual command being
-        instantiated.
-        """
-        # late import because of mutual dependence between these classes
-        from distutils.dist import Distribution
-
-        if not isinstance(dist, Distribution):
-            raise TypeError("dist must be a Distribution instance")
-        if self.__class__ is Command:
-            raise RuntimeError("Command is an abstract class")
-
-        self.distribution = dist
-        self.initialize_options()
-
-        # Per-command versions of the global flags, so that the user can
-        # customize Distutils' behaviour command-by-command and let some
-        # commands fall back on the Distribution's behaviour. None means
-        # "not defined, check self.distribution's copy", while 0 or 1 mean
-        # false and true (duh). Note that this means figuring out the real
-        # value of each flag is a touch complicated -- hence "self._dry_run"
-        # will be handled by __getattr__, below.
-        # XXX This needs to be fixed.
-        self._dry_run = None
-
-        # verbose is largely ignored, but needs to be set for
-        # backwards compatibility (I think)?
-        self.verbose = dist.verbose
-
-        # Some commands define a 'self.force' option to ignore file
-        # timestamps, but methods defined *here* assume that
-        # 'self.force' exists for all commands. So define it here
-        # just to be safe.
-        self.force = None
-
-        # The 'help' flag is just used for command-line parsing, so
-        # none of that complicated bureaucracy is needed.
-        self.help = 0
-
-        # 'finalized' records whether or not 'finalize_options()' has been
-        # called. 'finalize_options()' itself should not pay attention to
-        # this flag: it is the business of 'ensure_finalized()', which
-        # always calls 'finalize_options()', to respect/update it.
-        self.finalized = 0
-
-    # XXX A more explicit way to customize dry_run would be better.
-    def __getattr__(self, attr):
-        if attr == 'dry_run':
-            myval = getattr(self, "_" + attr)
-            if myval is None:
-                return getattr(self.distribution, attr)
-            else:
-                return myval
-        else:
-            raise AttributeError(attr)
-
-    def ensure_finalized(self):
-        if not self.finalized:
-            self.finalize_options()
-        self.finalized = 1
-
-    # Subclasses must define:
-    #   initialize_options()
-    #     provide default values for all options; may be customized by
-    #     setup script, by options from config file(s), or by command-line
-    #     options
-    #   finalize_options()
-    #     decide on the final values for all options; this is called
-    #     after all possible intervention from the outside world
-    #     (command-line, option file, etc.) has been processed
-    #   run()
-    #     run the command: do whatever it is we're here to do,
-    #     controlled by the command's various option values
-
-    def initialize_options(self):
-        """Set default values for all the options that this command
-        supports. Note that these defaults may be overridden by other
-        commands, by the setup script, by config files, or by the
-        command-line. Thus, this is not the place to code dependencies
-        between options; generally, 'initialize_options()' implementations
-        are just a bunch of "self.foo = None" assignments.
-
-        This method must be implemented by all command classes.
-        """
-        raise RuntimeError(
-            "abstract method -- subclass %s must override" % self.__class__
-        )
-
-    def finalize_options(self):
-        """Set final values for all the options that this command supports.
-        This is always called as late as possible, ie. after any option
-        assignments from the command-line or from other commands have been
-        done. Thus, this is the place to code option dependencies: if
-        'foo' depends on 'bar', then it is safe to set 'foo' from 'bar' as
-        long as 'foo' still has the same value it was assigned in
-        'initialize_options()'.
-
-        This method must be implemented by all command classes.
-        """
-        raise RuntimeError(
-            "abstract method -- subclass %s must override" % self.__class__
-        )
-
-    def dump_options(self, header=None, indent=""):
-        from distutils.fancy_getopt import longopt_xlate
-
-        if header is None:
-            header = "command options for '%s':" % self.get_command_name()
-        self.announce(indent + header, level=log.INFO)
-        indent = indent + "  "
-        for (option, _, _) in self.user_options:
-            option = option.translate(longopt_xlate)
-            if option[-1] == "=":
-                option = option[:-1]
-            value = getattr(self, option)
-            self.announce(indent + "{} = {}".format(option, value), level=log.INFO)
-
-    def run(self):
-        """A command's raison d'etre: carry out the action it exists to
-        perform, controlled by the options initialized in
-        'initialize_options()', customized by other commands, the setup
-        script, the command-line, and config files, and finalized in
-        'finalize_options()'. All terminal output and filesystem
-        interaction should be done by 'run()'.
-
-        This method must be implemented by all command classes.
-        """
-        raise RuntimeError(
-            "abstract method -- subclass %s must override" % self.__class__
-        )
-
-    def announce(self, msg, level=1):
-        """If the current verbosity level is of greater than or equal to
-        'level' print 'msg' to stdout.
-        """
-        log.log(level, msg)
-
-    def debug_print(self, msg):
-        """Print 'msg' to stdout if the global DEBUG (taken from the
-        DISTUTILS_DEBUG environment variable) flag is true.
-        """
-        from distutils.debug import DEBUG
-
-        if DEBUG:
-            print(msg)
-            sys.stdout.flush()
-
-    # -- Option validation methods -------------------------------------
-    # (these are very handy in writing the 'finalize_options()' method)
-    #
-    # NB. the general philosophy here is to ensure that a particular option
-    # value meets certain type and value constraints. If not, we try to
-    # force it into conformance (eg. if we expect a list but have a string,
-    # split the string on comma and/or whitespace). If we can't force the
-    # option into conformance, raise DistutilsOptionError. Thus, command
-    # classes need do nothing more than (eg.)
-    #   self.ensure_string_list('foo')
-    # and they can be guaranteed that thereafter, self.foo will be
-    # a list of strings.
-
-    def _ensure_stringlike(self, option, what, default=None):
-        val = getattr(self, option)
-        if val is None:
-            setattr(self, option, default)
-            return default
-        elif not isinstance(val, str):
-            raise DistutilsOptionError(
-                "'{}' must be a {} (got `{}`)".format(option, what, val)
-            )
-        return val
-
-    def ensure_string(self, option, default=None):
-        """Ensure that 'option' is a string; if not defined, set it to
-        'default'.
-        """
-        self._ensure_stringlike(option, "string", default)
-
-    def ensure_string_list(self, option):
-        r"""Ensure that 'option' is a list of strings. If 'option' is
-        currently a string, we split it either on /,\s*/ or /\s+/, so
-        "foo bar baz", "foo,bar,baz", and "foo, bar baz" all become
-        ["foo", "bar", "baz"].
-        """
-        val = getattr(self, option)
-        if val is None:
-            return
-        elif isinstance(val, str):
-            setattr(self, option, re.split(r',\s*|\s+', val))
-        else:
-            if isinstance(val, list):
-                ok = all(isinstance(v, str) for v in val)
-            else:
-                ok = False
-            if not ok:
-                raise DistutilsOptionError(
-                    "'{}' must be a list of strings (got {!r})".format(option, val)
-                )
-
-    def _ensure_tested_string(self, option, tester, what, error_fmt, default=None):
-        val = self._ensure_stringlike(option, what, default)
-        if val is not None and not tester(val):
-            raise DistutilsOptionError(
-                ("error in '%s' option: " + error_fmt) % (option, val)
-            )
-
-    def ensure_filename(self, option):
-        """Ensure that 'option' is the name of an existing file."""
-        self._ensure_tested_string(
-            option, os.path.isfile, "filename", "'%s' does not exist or is not a file"
-        )
-
-    def ensure_dirname(self, option):
-        self._ensure_tested_string(
-            option,
-            os.path.isdir,
-            "directory name",
-            "'%s' does not exist or is not a directory",
-        )
-
-    # -- Convenience methods for commands ------------------------------
-
-    def get_command_name(self):
-        if hasattr(self, 'command_name'):
-            return self.command_name
-        else:
-            return self.__class__.__name__
-
-    def set_undefined_options(self, src_cmd, *option_pairs):
-        """Set the values of any "undefined" options from corresponding
-        option values in some other command object. "Undefined" here means
-        "is None", which is the convention used to indicate that an option
-        has not been changed between 'initialize_options()' and
-        'finalize_options()'. Usually called from 'finalize_options()' for
-        options that depend on some other command rather than another
-        option of the same command. 'src_cmd' is the other command from
-        which option values will be taken (a command object will be created
-        for it if necessary); the remaining arguments are
-        '(src_option,dst_option)' tuples which mean "take the value of
-        'src_option' in the 'src_cmd' command object, and copy it to
-        'dst_option' in the current command object".
-        """
-        # Option_pairs: list of (src_option, dst_option) tuples
-        src_cmd_obj = self.distribution.get_command_obj(src_cmd)
-        src_cmd_obj.ensure_finalized()
-        for (src_option, dst_option) in option_pairs:
-            if getattr(self, dst_option) is None:
-                setattr(self, dst_option, getattr(src_cmd_obj, src_option))
-
-    def get_finalized_command(self, command, create=1):
-        """Wrapper around Distribution's 'get_command_obj()' method: find
-        (create if necessary and 'create' is true) the command object for
-        'command', call its 'ensure_finalized()' method, and return the
-        finalized command object.
-        """
-        cmd_obj = self.distribution.get_command_obj(command, create)
-        cmd_obj.ensure_finalized()
-        return cmd_obj
-
-    # XXX rename to 'get_reinitialized_command()'? (should do the
-    # same in dist.py, if so)
-    def reinitialize_command(self, command, reinit_subcommands=0):
-        return self.distribution.reinitialize_command(command, reinit_subcommands)
-
-    def run_command(self, command):
-        """Run some other command: uses the 'run_command()' method of
-        Distribution, which creates and finalizes the command object if
-        necessary and then invokes its 'run()' method.
-        """
-        self.distribution.run_command(command)
-
-    def get_sub_commands(self):
-        """Determine the sub-commands that are relevant in the current
-        distribution (ie., that need to be run). This is based on the
-        'sub_commands' class attribute: each tuple in that list may include
-        a method that we call to determine if the subcommand needs to be
-        run for the current distribution. Return a list of command names.
-        """
-        commands = []
-        for (cmd_name, method) in self.sub_commands:
-            if method is None or method(self):
-                commands.append(cmd_name)
-        return commands
-
-    # -- External world manipulation -----------------------------------
-
-    def warn(self, msg):
-        log.warn("warning: %s: %s\n", self.get_command_name(), msg)
-
-    def execute(self, func, args, msg=None, level=1):
-        util.execute(func, args, msg, dry_run=self.dry_run)
-
-    def mkpath(self, name, mode=0o777):
-        dir_util.mkpath(name, mode, dry_run=self.dry_run)
-
-    def copy_file(
-        self, infile, outfile, preserve_mode=1, preserve_times=1, link=None, level=1
-    ):
-        """Copy a file respecting verbose, dry-run and force flags. (The
-        former two default to whatever is in the Distribution object, and
-        the latter defaults to false for commands that don't define it.)"""
-        return file_util.copy_file(
-            infile,
-            outfile,
-            preserve_mode,
-            preserve_times,
-            not self.force,
-            link,
-            dry_run=self.dry_run,
-        )
-
-    def copy_tree(
-        self,
-        infile,
-        outfile,
-        preserve_mode=1,
-        preserve_times=1,
-        preserve_symlinks=0,
-        level=1,
-    ):
-        """Copy an entire directory tree respecting verbose, dry-run,
-        and force flags.
-        """
-        return dir_util.copy_tree(
-            infile,
-            outfile,
-            preserve_mode,
-            preserve_times,
-            preserve_symlinks,
-            not self.force,
-            dry_run=self.dry_run,
-        )
-
-    def move_file(self, src, dst, level=1):
-        """Move a file respecting dry-run flag."""
-        return file_util.move_file(src, dst, dry_run=self.dry_run)
-
-    def spawn(self, cmd, search_path=1, level=1):
-        """Spawn an external command respecting dry-run flag."""
-        from distutils.spawn import spawn
-
-        spawn(cmd, search_path, dry_run=self.dry_run)
-
-    def make_archive(
-        self, base_name, format, root_dir=None, base_dir=None, owner=None, group=None
-    ):
-        return archive_util.make_archive(
-            base_name,
-            format,
-            root_dir,
-            base_dir,
-            dry_run=self.dry_run,
-            owner=owner,
-            group=group,
-        )
-
-    def make_file(
-        self, infiles, outfile, func, args, exec_msg=None, skip_msg=None, level=1
-    ):
-        """Special case of 'execute()' for operations that process one or
-        more input files and generate one output file. Works just like
-        'execute()', except the operation is skipped and a different
-        message printed if 'outfile' already exists and is newer than all
-        files listed in 'infiles'. If the command defined 'self.force',
-        and it is true, then the command is unconditionally run -- does no
-        timestamp checks.
-        """
-        if skip_msg is None:
-            skip_msg = "skipping %s (inputs unchanged)" % outfile
-
-        # Allow 'infiles' to be a single string
-        if isinstance(infiles, str):
-            infiles = (infiles,)
-        elif not isinstance(infiles, (list, tuple)):
-            raise TypeError("'infiles' must be a string, or a list or tuple of strings")
-
-        if exec_msg is None:
-            exec_msg = "generating {} from {}".format(outfile, ', '.join(infiles))
-
-        # If 'outfile' must be regenerated (either because it doesn't
-        # exist, is out-of-date, or the 'force' flag is true) then
-        # perform the action that presumably regenerates it
-        if self.force or dep_util.newer_group(infiles, outfile):
-            self.execute(func, args, exec_msg, level)
-        # Otherwise, print the "skip" message
-        else:
-            log.debug(skip_msg)
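For reference, the lifecycle the Command docstring describes (declare options in initialize_options, finalize them in finalize_options, act in run) looks like this in a minimal toy subclass (the command itself is illustrative, not part of distutils):

    from distutils.cmd import Command

    class hello(Command):
        description = "print a greeting"
        user_options = [("name=", None, "who to greet")]

        def initialize_options(self):
            self.name = None          # option "declared" with a placeholder

        def finalize_options(self):
            if self.name is None:     # option "finalized" after outside input
                self.name = "world"

        def run(self):
            print("hello %s" % self.name)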
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/dev/README.md
DELETED
@@ -1,7 +0,0 @@
-
-## Some scripts for developers to use, include:
-
-- `run_instant_tests.sh`: run training for a few iterations.
-- `run_inference_tests.sh`: run inference on a small dataset.
-- `../../dev/linter.sh`: lint the codebase before commit
-- `../../dev/parse_results.sh`: parse results from log file.
spaces/CVPR/LIVE/thrust/dependencies/cub/cub/cmake/cub-config-version.cmake
DELETED
@@ -1,33 +0,0 @@
-# Parse version information from version.cuh:
-file(READ "${CMAKE_CURRENT_LIST_DIR}/../version.cuh" CUB_VERSION_HEADER)
-string(REGEX MATCH "#define[ \t]+CUB_VERSION[ \t]+([0-9]+)" DUMMY "${CUB_VERSION_HEADER}")
-set(CUB_VERSION_FLAT ${CMAKE_MATCH_1})
-# Note that CUB calls this the PATCH number, CMake calls it the TWEAK number:
-string(REGEX MATCH "#define[ \t]+CUB_PATCH_NUMBER[ \t]+([0-9]+)" DUMMY "${CUB_VERSION_HEADER}")
-set(CUB_VERSION_TWEAK ${CMAKE_MATCH_1})
-
-math(EXPR CUB_VERSION_MAJOR "${CUB_VERSION_FLAT} / 100000")
-math(EXPR CUB_VERSION_MINOR "(${CUB_VERSION_FLAT} / 100) % 1000")
-math(EXPR CUB_VERSION_PATCH "${CUB_VERSION_FLAT} % 100") # CUB: "subminor" CMake: "patch"
-
-# Build comparison versions:
-set(CUB_COMPAT "${CUB_VERSION_MAJOR}.${CUB_VERSION_MINOR}.${CUB_VERSION_PATCH}")
-set(CUB_EXACT "${CUB_COMPAT}.${CUB_VERSION_TWEAK}")
-set(FIND_COMPAT "${PACKAGE_FIND_VERSION_MAJOR}.${PACKAGE_FIND_VERSION_MINOR}.${PACKAGE_FIND_VERSION_PATCH}")
-set(FIND_EXACT "${FIND_COMPAT}.${PACKAGE_FIND_VERSION_TWEAK}")
-
-# Set default results
-set(PACKAGE_VERSION ${CUB_EXACT})
-set(PACKAGE_VERSION_UNSUITABLE FALSE)
-set(PACKAGE_VERSION_COMPATIBLE FALSE)
-set(PACKAGE_VERSION_EXACT FALSE)
-
-# Test for compatibility (ignores tweak)
-if (FIND_COMPAT VERSION_EQUAL CUB_COMPAT)
-  set(PACKAGE_VERSION_COMPATIBLE TRUE)
-endif()
-
-# Test for exact (does not ignore tweak)
-if (FIND_EXACT VERSION_EQUAL CUB_EXACT)
-  set(PACKAGE_VERSION_EXACT TRUE)
-endif()
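The math() calls above decode CUB's flat version integer (major * 100000 + minor * 100 + patch). The same decoding as a worked example in Python (the flat value is illustrative):

    flat = 101702                  # e.g. CUB 1.17.2
    major = flat // 100000         # 1
    minor = (flat // 100) % 1000   # 17
    patch = flat % 100             # 2
    print("%d.%d.%d" % (major, minor, patch))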
spaces/CVPR/WALT/mmdet/models/roi_heads/mask_heads/global_context_head.py
DELETED
@@ -1,102 +0,0 @@
-import torch.nn as nn
-from mmcv.cnn import ConvModule
-from mmcv.runner import auto_fp16, force_fp32
-
-from mmdet.models.builder import HEADS
-from mmdet.models.utils import ResLayer, SimplifiedBasicBlock
-
-
-@HEADS.register_module()
-class GlobalContextHead(nn.Module):
-    """Global context head used in `SCNet <https://arxiv.org/abs/2012.10150>`_.
-
-    Args:
-        num_convs (int, optional): number of convolutional layer in GlbCtxHead.
-            Default: 4.
-        in_channels (int, optional): number of input channels. Default: 256.
-        conv_out_channels (int, optional): number of output channels before
-            classification layer. Default: 256.
-        num_classes (int, optional): number of classes. Default: 80.
-        loss_weight (float, optional): global context loss weight. Default: 1.
-        conv_cfg (dict, optional): config to init conv layer. Default: None.
-        norm_cfg (dict, optional): config to init norm layer. Default: None.
-        conv_to_res (bool, optional): if True, 2 convs will be grouped into
-            1 `SimplifiedBasicBlock` using a skip connection. Default: False.
-    """
-
-    def __init__(self,
-                 num_convs=4,
-                 in_channels=256,
-                 conv_out_channels=256,
-                 num_classes=80,
-                 loss_weight=1.0,
-                 conv_cfg=None,
-                 norm_cfg=None,
-                 conv_to_res=False):
-        super(GlobalContextHead, self).__init__()
-        self.num_convs = num_convs
-        self.in_channels = in_channels
-        self.conv_out_channels = conv_out_channels
-        self.num_classes = num_classes
-        self.loss_weight = loss_weight
-        self.conv_cfg = conv_cfg
-        self.norm_cfg = norm_cfg
-        self.conv_to_res = conv_to_res
-        self.fp16_enabled = False
-
-        if self.conv_to_res:
-            num_res_blocks = num_convs // 2
-            self.convs = ResLayer(
-                SimplifiedBasicBlock,
-                in_channels,
-                self.conv_out_channels,
-                num_res_blocks,
-                conv_cfg=self.conv_cfg,
-                norm_cfg=self.norm_cfg)
-            self.num_convs = num_res_blocks
-        else:
-            self.convs = nn.ModuleList()
-            for i in range(self.num_convs):
-                in_channels = self.in_channels if i == 0 else conv_out_channels
-                self.convs.append(
-                    ConvModule(
-                        in_channels,
-                        conv_out_channels,
-                        3,
-                        padding=1,
-                        conv_cfg=self.conv_cfg,
-                        norm_cfg=self.norm_cfg))
-
-        self.pool = nn.AdaptiveAvgPool2d(1)
-        self.fc = nn.Linear(conv_out_channels, num_classes)
-
-        self.criterion = nn.BCEWithLogitsLoss()
-
-    def init_weights(self):
-        """Init weights for the head."""
-        nn.init.normal_(self.fc.weight, 0, 0.01)
-        nn.init.constant_(self.fc.bias, 0)
-
-    @auto_fp16()
-    def forward(self, feats):
-        """Forward function."""
-        x = feats[-1]
-        for i in range(self.num_convs):
-            x = self.convs[i](x)
-        x = self.pool(x)
-
-        # multi-class prediction
-        mc_pred = x.reshape(x.size(0), -1)
-        mc_pred = self.fc(mc_pred)
-
-        return mc_pred, x
-
-    @force_fp32(apply_to=('pred', ))
-    def loss(self, pred, labels):
-        """Loss function."""
-        labels = [lbl.unique() for lbl in labels]
-        targets = pred.new_zeros(pred.size())
-        for i, label in enumerate(labels):
-            targets[i, label] = 1.0
-        loss = self.loss_weight * self.criterion(pred, targets)
-        return loss
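For reference, the loss() above turns per-image instance labels into multi-hot class targets before the binary cross-entropy loss; a small worked sketch of that target construction (shapes and labels are illustrative):

    import torch

    num_classes = 5
    labels = [torch.tensor([0, 2, 2]), torch.tensor([4])]  # per-image instance labels
    targets = torch.zeros(len(labels), num_classes)
    for i, lbl in enumerate(labels):
        targets[i, lbl.unique()] = 1.0
    print(targets)
    # tensor([[1., 0., 1., 0., 0.],
    #         [0., 0., 0., 0., 1.]])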
spaces/CVPR/WALT/mmdet/models/roi_heads/roi_extractors/single_level_roi_extractor.py
DELETED
@@ -1,108 +0,0 @@
-import torch
-from mmcv.runner import force_fp32
-
-from mmdet.models.builder import ROI_EXTRACTORS
-from .base_roi_extractor import BaseRoIExtractor
-
-
-@ROI_EXTRACTORS.register_module()
-class SingleRoIExtractor(BaseRoIExtractor):
-    """Extract RoI features from a single level feature map.
-
-    If there are multiple input feature levels, each RoI is mapped to a level
-    according to its scale. The mapping rule is proposed in
-    `FPN <https://arxiv.org/abs/1612.03144>`_.
-
-    Args:
-        roi_layer (dict): Specify RoI layer type and arguments.
-        out_channels (int): Output channels of RoI layers.
-        featmap_strides (List[int]): Strides of input feature maps.
-        finest_scale (int): Scale threshold of mapping to level 0. Default: 56.
-    """
-
-    def __init__(self,
-                 roi_layer,
-                 out_channels,
-                 featmap_strides,
-                 finest_scale=56):
-        super(SingleRoIExtractor, self).__init__(roi_layer, out_channels,
-                                                 featmap_strides)
-        self.finest_scale = finest_scale
-
-    def map_roi_levels(self, rois, num_levels):
-        """Map rois to corresponding feature levels by scales.
-
-        - scale < finest_scale * 2: level 0
-        - finest_scale * 2 <= scale < finest_scale * 4: level 1
-        - finest_scale * 4 <= scale < finest_scale * 8: level 2
-        - scale >= finest_scale * 8: level 3
-
-        Args:
-            rois (Tensor): Input RoIs, shape (k, 5).
-            num_levels (int): Total level number.
-
-        Returns:
-            Tensor: Level index (0-based) of each RoI, shape (k, )
-        """
-        scale = torch.sqrt(
-            (rois[:, 3] - rois[:, 1]) * (rois[:, 4] - rois[:, 2]))
-        target_lvls = torch.floor(torch.log2(scale / self.finest_scale + 1e-6))
-        target_lvls = target_lvls.clamp(min=0, max=num_levels - 1).long()
-        return target_lvls
-
-    @force_fp32(apply_to=('feats', ), out_fp16=True)
-    def forward(self, feats, rois, roi_scale_factor=None):
-        """Forward function."""
-        out_size = self.roi_layers[0].output_size
-        num_levels = len(feats)
-        expand_dims = (-1, self.out_channels * out_size[0] * out_size[1])
-        if torch.onnx.is_in_onnx_export():
-            # Work around to export mask-rcnn to onnx
-            roi_feats = rois[:, :1].clone().detach()
-            roi_feats = roi_feats.expand(*expand_dims)
-            roi_feats = roi_feats.reshape(-1, self.out_channels, *out_size)
-            roi_feats = roi_feats * 0
-        else:
-            roi_feats = feats[0].new_zeros(
-                rois.size(0), self.out_channels, *out_size)
-        # TODO: remove this when parrots supports
-        if torch.__version__ == 'parrots':
-            roi_feats.requires_grad = True
-
-        if num_levels == 1:
-            if len(rois) == 0:
-                return roi_feats
-            return self.roi_layers[0](feats[0], rois)
-
-        target_lvls = self.map_roi_levels(rois, num_levels)
-
-        if roi_scale_factor is not None:
-            rois = self.roi_rescale(rois, roi_scale_factor)
-
-        for i in range(num_levels):
-            mask = target_lvls == i
-            if torch.onnx.is_in_onnx_export():
-                # To keep all roi_align nodes exported to onnx
-                # and skip nonzero op
-                mask = mask.float().unsqueeze(-1).expand(*expand_dims).reshape(
-                    roi_feats.shape)
-                roi_feats_t = self.roi_layers[i](feats[i], rois)
-                roi_feats_t *= mask
-                roi_feats += roi_feats_t
-                continue
-            inds = mask.nonzero(as_tuple=False).squeeze(1)
-            if inds.numel() > 0:
-                rois_ = rois[inds]
-                roi_feats_t = self.roi_layers[i](feats[i], rois_)
-                roi_feats[inds] = roi_feats_t
-            else:
-                # Sometimes some pyramid levels will not be used for RoI
-                # feature extraction and this will cause an incomplete
-                # computation graph in one GPU, which is different from those
-                # in other GPUs and will cause a hanging error.
-                # Therefore, we add it to ensure each feature pyramid is
-                # included in the computation graph to avoid runtime bugs.
-                roi_feats += sum(
-                    x.view(-1)[0]
-                    for x in self.parameters()) * 0. + feats[i].sum() * 0.
-        return roi_feats
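For reference, the mapping rule documented in `map_roi_levels` can be checked in isolation. The sketch below is not part of the original file; the RoI values are made up and `finest_scale` is the default 56:

import torch

# rois are (batch_idx, x1, y1, x2, y2); scale = sqrt(w * h)
rois = torch.tensor([[0., 0., 0., 56., 56.],     # scale 56  -> level 0
                     [0., 0., 0., 112., 112.],   # scale 112 -> level 1
                     [0., 0., 0., 224., 224.]])  # scale 224 -> level 2
finest_scale = 56
scale = torch.sqrt((rois[:, 3] - rois[:, 1]) * (rois[:, 4] - rois[:, 2]))
lvls = torch.floor(torch.log2(scale / finest_scale + 1e-6)).clamp(0, 3).long()
print(lvls)  # tensor([0, 1, 2])
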
spaces/Cloudfeng/anime-remove-background/app.py
DELETED
@@ -1,52 +0,0 @@
import gradio as gr
import huggingface_hub
import onnxruntime as rt
import numpy as np
import cv2


def get_mask(img, s=1024):
    img = (img / 255).astype(np.float32)
    h, w = h0, w0 = img.shape[:-1]
    h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s)
    ph, pw = s - h, s - w
    img_input = np.zeros([s, s, 3], dtype=np.float32)
    img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h))
    img_input = np.transpose(img_input, (2, 0, 1))
    img_input = img_input[np.newaxis, :]
    mask = rmbg_model.run(None, {'img': img_input})[0][0]
    mask = np.transpose(mask, (1, 2, 0))
    mask = mask[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w]
    mask = cv2.resize(mask, (w0, h0))[:, :, np.newaxis]
    return mask


def rmbg_fn(img):
    mask = get_mask(img)
    img = (mask * img + 255 * (1 - mask)).astype(np.uint8)
    mask = (mask * 255).astype(np.uint8)
    img = np.concatenate([img, mask], axis=2, dtype=np.uint8)
    mask = mask.repeat(3, axis=2)
    return mask, img


if __name__ == "__main__":
    providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
    model_path = huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.onnx")
    rmbg_model = rt.InferenceSession(model_path, providers=providers)
    app = gr.Blocks()
    with app:
        gr.Markdown("# Anime Remove Background\n\n"
                    "\n\n"
                    "demo for [https://github.com/SkyTNT/anime-segmentation/](https://github.com/SkyTNT/anime-segmentation/)")
        with gr.Row():
            with gr.Column():
                input_img = gr.Image(label="input image")
                examples_data = [[f"examples/{x:02d}.jpg"] for x in range(1, 4)]
                examples = gr.Dataset(components=[input_img], samples=examples_data)
                run_btn = gr.Button(variant="primary")
            output_mask = gr.Image(label="mask")
            output_img = gr.Image(label="result", image_mode="RGBA")
        examples.click(lambda x: x[0], [examples], [input_img])
        run_btn.click(rmbg_fn, [input_img], [output_mask, output_img])
    app.launch()

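The letterbox preprocessing in `get_mask` is easiest to follow with concrete numbers. A hypothetical walk-through (not from the repo) for an 800x600 input at the default s = 1024:

h0, w0, s = 800, 600, 1024  # h x w of the input, model-side square size
h, w = (s, int(s * w0 / h0)) if h0 > w0 else (int(s * h0 / w0), s)
ph, pw = s - h, s - w
print((h, w))    # (1024, 768): the long side is scaled to s, aspect kept
print((ph, pw))  # (0, 256): 128 px of zero padding on the left and right
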
spaces/CofAI/chat/g4f/__init__.py
DELETED
@@ -1,39 +0,0 @@
import sys
from . import Provider
from g4f.models import Model, ModelUtils


class ChatCompletion:
    @staticmethod
    def create(model: Model.model or str, messages: list, provider: Provider.Provider = None, stream: bool = False, auth: str = False, **kwargs):
        kwargs['auth'] = auth

        if provider and provider.needs_auth and not auth:
            print(
                f'ValueError: {provider.__name__} requires authentication (use auth="cookie or token or jwt ..." param)', file=sys.stderr)
            sys.exit(1)

        try:
            if isinstance(model, str):
                try:
                    model = ModelUtils.convert[model]
                except KeyError:
                    raise Exception(f'The model: {model} does not exist')

            engine = model.best_provider if not provider else provider

            if not engine.supports_stream and stream == True:
                print(
                    f"ValueError: {engine.__name__} does not support 'stream' argument", file=sys.stderr)
                sys.exit(1)

            print(f'Using {engine.__name__} provider')

            return (engine._create_completion(model.name, messages, stream, **kwargs)
                    if stream else ''.join(engine._create_completion(model.name, messages, stream, **kwargs)))
        except TypeError as e:
            print(e)
            arg: str = str(e).split("'")[1]
            print(
                f"ValueError: {engine.__name__} does not support '{arg}' argument", file=sys.stderr)
            sys.exit(1)

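A minimal usage sketch for the dispatcher above. The model name and message are illustrative, and which providers actually respond depends on the installed g4f version:

import g4f

# non-stream mode joins the generator chunks into one string (see above)
response = g4f.ChatCompletion.create(
    model='gpt-3.5-turbo',  # resolved through ModelUtils.convert
    messages=[{'role': 'user', 'content': 'Hello'}],
    stream=False,
)
print(response)
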
spaces/CorvaeOboro/gen_ability_icon/torch_utils/ops/grid_sample_gradfix.py
DELETED
@@ -1,83 +0,0 @@
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

"""Custom replacement for `torch.nn.functional.grid_sample` that
supports arbitrarily high order gradients between the input and output.
Only works on 2D images and assumes
`mode='bilinear'`, `padding_mode='zeros'`, `align_corners=False`."""

import warnings
import torch

# pylint: disable=redefined-builtin
# pylint: disable=arguments-differ
# pylint: disable=protected-access

#----------------------------------------------------------------------------

enabled = False  # Enable the custom op by setting this to true.

#----------------------------------------------------------------------------

def grid_sample(input, grid):
    if _should_use_custom_op():
        return _GridSample2dForward.apply(input, grid)
    return torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False)

#----------------------------------------------------------------------------

def _should_use_custom_op():
    if not enabled:
        return False
    if any(torch.__version__.startswith(x) for x in ['1.7.', '1.8.', '1.9']):
        return True
    warnings.warn(f'grid_sample_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.grid_sample().')
    return False

#----------------------------------------------------------------------------

class _GridSample2dForward(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, grid):
        assert input.ndim == 4
        assert grid.ndim == 4
        output = torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False)
        ctx.save_for_backward(input, grid)
        return output

    @staticmethod
    def backward(ctx, grad_output):
        input, grid = ctx.saved_tensors
        grad_input, grad_grid = _GridSample2dBackward.apply(grad_output, input, grid)
        return grad_input, grad_grid

#----------------------------------------------------------------------------

class _GridSample2dBackward(torch.autograd.Function):
    @staticmethod
    def forward(ctx, grad_output, input, grid):
        op = torch._C._jit_get_operation('aten::grid_sampler_2d_backward')
        grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False)
        ctx.save_for_backward(grid)
        return grad_input, grad_grid

    @staticmethod
    def backward(ctx, grad2_grad_input, grad2_grad_grid):
        _ = grad2_grad_grid # unused
        grid, = ctx.saved_tensors
        grad2_grad_output = None
        grad2_input = None
        grad2_grid = None

        if ctx.needs_input_grad[0]:
            grad2_grad_output = _GridSample2dForward.apply(grad2_grad_input, grid)

        assert not ctx.needs_input_grad[2]
        return grad2_grad_output, grad2_input, grad2_grid

#----------------------------------------------------------------------------

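The paired autograd Functions exist so that a second-order backward pass works, e.g. for gradient-penalty regularizers. A minimal sketch, assuming the module is importable as `torch_utils.ops.grid_sample_gradfix` and one of the whitelisted PyTorch versions is installed (otherwise the call falls back to the built-in op):

import torch
from torch_utils.ops import grid_sample_gradfix

grid_sample_gradfix.enabled = True  # opt in; only honored on torch 1.7-1.9
img = torch.randn(1, 3, 8, 8, requires_grad=True)
grid = torch.rand(1, 4, 4, 2) * 2 - 1  # normalized sampling coords in [-1, 1]
out = grid_sample_gradfix.grid_sample(img, grid)
(g,) = torch.autograd.grad((out ** 2).sum(), img, create_graph=True)
g.pow(2).sum().backward()  # second-order gradient reaches img.grad
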
spaces/Cpp4App/Cpp4App/CDM/run_online_demo.py
DELETED
@@ -1,52 +0,0 @@
import sys
import os

sys.path.append(os.path.join(os.path.dirname(__file__), '..', 'scrutinizing_alexa'))
# from run_single_sem import run_single_pp

sys.path.append(os.path.join(os.path.dirname(__file__)))
from run_single import run_single_img

import cv2

def run_demo(img_root, output_root, segment_root, file):
    # run_single_pp(file)

    output_board, output_data = run_single_img(img_root, output_root, segment_root)

    # cv2.imshow("result", output_board)
    # cv2.waitKey(0)

    return output_board, output_data

if __name__ == '__main__':

    input_img_root = "./input_examples/1-1.jpg"
    output_root = "./result_classification"
    segment_root = '../scrutinizing_alexa/txt'
    img_root = "./input_examples/1-1-write.jpg"
    pp_root = "../scrutinizing_alexa/pp_example/1.html"

    # run_single_pp(file)
    #
    # img = cv2.imread(input_img_root)
    #
    # cv2.imwrite(input_img, img)
    #
    # output_board, output_data = run_single_img("./input_examples/1-1-write.jpg", output_root, segment_root)
    #
    # # cv2.imshow("result", output_board)
    # # cv2.waitKey(0)

    img = cv2.imread(input_img_root)
    cv2.imwrite(img_root, img)

    file = open('../scrutinizing_alexa/pp_example/1.html', encoding='utf-8')

    output_board, output_data = run_demo(img_root, output_root, segment_root, file)

    # cv2.imshow("result", output_board)
    # cv2.waitKey(0)

    print(output_data)

spaces/Crow34/Joi/app.py
DELETED
@@ -1,3 +0,0 @@
import gradio as gr

gr.Interface.load("models/EleutherAI/gpt-neo-125m").launch()

spaces/DEEMOSTECH/ChatAvatar/static/css/main.46e5a5fa.css
DELETED
@@ -1,2 +0,0 @@
html{overflow-x:hidden;overflow-y:overlay}body{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;box-sizing:border-box;color:#cfcfcf;font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Fira Sans,Droid Sans,Helvetica Neue,sans-serif;margin:0}code{font-family:source-code-pro,Menlo,Monaco,Consolas,Courier New,monospace}.root{display:flex;justify-content:center;width:100%}.container{height:100vh;width:100%}.\!container{width:100%!important}@media (min-width:640px){.container{max-width:640px}.\!container{max-width:640px!important}}@media (min-width:768px){.container{max-width:768px}.\!container{max-width:768px!important}}@media (min-width:1024px){.container{max-width:1024px}.\!container{max-width:1024px!important}}@media (min-width:1280px){.container{max-width:1280px}.\!container{max-width:1280px!important}}@media (min-width:1536px){.container{max-width:1536px}.\!container{max-width:1536px!important}}.App{--theme-color:#4a00e0;--font-dark-color:#434343;--font-gray-color:#aaa;--font-light-color:#cfcfcf;--bg-light-color:#fff;--bg-gray0-color:#f8f8f8;--bg-gray1-color:#ececec;--bg-gray2-color:#7c7c7c;--bg-gray3-color:#373737;--bg-theme-color:#e7e3f1;--bg-dark-color:#121317;--side-gap:5rem;--radius:0.5rem;--shadow:-10px 0px 12px 1px hsla(0,0%,53%,.16);display:flex;justify-content:space-between;padding:16px;text-align:center}.App *{box-sizing:border-box;transition:all .3s}.App ::-webkit-scrollbar-thumb{background-color:rgba(0,0,0,.2)}textarea{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;border:1px solid transparent;color:var(--font-dark-color);font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Fira Sans,Droid Sans,Helvetica Neue,sans-serif;font-size:1rem;line-height:1.5rem;outline:none;padding:0;resize:none}textarea:focus{border-color:var(--theme-color)}img{-webkit-user-drag:none;-webkit-user-select:none;user-select:none}.gallery_con__Y2mej{align-items:flex-start;display:flex;justify-content:center;margin-top:3rem;padding:0 1.25rem;width:100%}.gallery_menuCon__fVdFJ{margin-right:2rem;width:-webkit-max-content;width:max-content}.gallery_menu__U2btD{align-items:center;background-color:initial;border:2px solid transparent;border-radius:1.5rem;cursor:pointer;display:flex;height:3rem;justify-content:center;line-height:1rem;margin-bottom:1rem;text-align:center;width:6rem}.gallery_menu__U2btD.gallery_selected__T2qcs,.gallery_menu__U2btD:hover{background-color:var(--bg-gray3-color);color:#fff}.gallery_menu__U2btD.gallery_selected__T2qcs{border-color:#fff}.gallery_cardsCon__wAfcp{align-items:flex-start;display:flex;flex-grow:1;flex-shrink:1;flex-wrap:wrap;justify-content:space-between;max-height:100vh;max-width:calc(1600px + 9rem)}.gallery_cardsCon__wAfcp::-webkit-scrollbar-thumb{background-color:hsla(0,0%,100%,.2);border:5px solid #121317;border-radius:8px}.gallery_card__noUoL{background-color:var(--bg-gray3-color);border-radius:var(--radius);cursor:pointer;font-size:.75rem;height:260px;margin-bottom:1rem;overflow:hidden;position:relative;width:200px}.gallery_coverImg__BYj-o,.gallery_coverImg__BYj-o img{height:100%;width:100%}.gallery_prompt__9PEmb{background-color:#f8f8f880;border-radius:var(--radius);bottom:1rem;color:var(--font-dark-color);height:0;left:1rem;overflow:hidden;padding:0 
.5rem;position:absolute;right:1rem;text-align:left;white-space:pre-wrap;word-break:break-all}.gallery_prompt__9PEmb.gallery_show__c2k50{height:-webkit-fit-content;height:-moz-fit-content;height:fit-content;padding:.5rem}.gallery_infoCon__E8oLy{align-items:center;bottom:1rem;color:var(--font-dark-color);display:flex;justify-content:flex-start;left:1rem;position:absolute;right:1rem}.gallery_avatar__KWBmI,.gallery_avatar__KWBmI img{border-radius:12px;height:24px;overflow:hidden;width:24px}.gallery_avatar__KWBmI{margin-right:1rem}.gallery_spaceholder__xJwYU{flex-grow:1;flex-shrink:1}.header_con__M\+u1W{align-items:center;display:flex;justify-content:center;padding:0 var(--side-gap);width:100vw}.header_header__Y7CqP{align-items:center;border-bottom:1px solid hsla(0,0%,100%,.1);display:flex;justify-content:space-between;padding:1rem 0;width:100%}.header_logoCon__MIdGL{align-items:flex-start;display:flex;height:3rem;justify-content:center}.header_logo__90zuC{height:3rem;margin-right:1rem}.header_logoCon__MIdGL>div{font-size:2rem;font-weight:700;line-height:2rem;margin-top:5px}.header_avatar__B3zXB{background:var(--bg-gray2-color);border-radius:50%;overflow:hidden}.header_avatar__B3zXB,.header_avatar__B3zXB img{height:3rem;width:3rem}.result_con__gHOU1{align-items:center;color:var(--font-dark-color);justify-content:center;width:50%;z-index:999}.result_con__gHOU1 *{flex-shrink:0}.result_board__PCvVJ{background-color:var(--bg-light-color);border-radius:var(--radius);display:flex;flex-flow:column;height:100%;width:100%}.result_colHead__k0Mk-{background:#f9fafb;border:0 solid #e5e7eb;border-radius:8px;flex:0 1 auto;padding:8px}.result_colInner__9FccK{background:#fff;border:1px solid #e5e7eb;border-radius:8px;box-shadow:0 1px 2px 0 rgba(0,0,0,.05);flex-wrap:wrap;gap:1px;margin-bottom:1rem;overflow:hidden;padding:10px 12px}.result_colDetail__jggqg,.result_colInner__9FccK{align-items:center;flex-direction:column;justify-content:flex-start}.result_colDetail__jggqg{background:#f9fafb;border:0 solid #e5e7eb;border-radius:8px;display:flex;flex:1 1 auto;margin-top:1rem;padding:8px 8px 24px}.result_colContent__FYZno{background:#fff;border:1px solid #e5e7eb;border-radius:8px;height:100%;width:100%}.result_colTitle__R8k\+A{align-items:flex-end;color:#6b7280;display:flex;font-size:.875rem;justify-content:space-between;line-height:1.2rem;margin-bottom:8px;width:100%}.result_passwordCon__OjFSI{border-top:1px solid #e5e7eb;padding:8px 12px 2px}.result_emailCon__eEqXk{padding-bottom:10px;padding-left:12px;padding-right:12px}.result_colTitle__R8k\+A>div{margin-bottom:.5rem}.result_colTitle__R8k\+A>div.result_restart__fLq8E{border-radius:5px;cursor:pointer;font-size:1rem;font-weight:400;margin-bottom:0;margin-left:1rem;padding:.5rem;-webkit-user-select:none;user-select:none}.result_restart__fLq8E:hover{background-color:var(--bg-gray0-color);color:var(--font-dark-color)}.result_spaceholder__GAxGZ{flex-grow:1;flex-shrink:1}.result_lang__85-De{cursor:pointer;font-weight:400;margin-right:1rem;-webkit-user-select:none;user-select:none}.result_lang__85-De.result_en__n-Jo7{margin-left:1rem;margin-right:0;width:4rem}.result_lang__85-De:hover{font-weight:700}.result_lang__85-De.result_selected__kDzD1{color:var(--font-dark-color);font-weight:700}.result_regene__yKazF{color:var(--theme-color);cursor:pointer;font-weight:400;-webkit-user-select:none;user-select:none}.result_chatCon__Hm\+zJ{background-color:var(--bg-gray0-color);border-radius:var(--radius);height:calc(100% - 
4rem);padding:1rem}.result_chatCon__Hm\+zJ,.result_chatMsgCon__x8UTP{align-items:center;display:flex;flex-direction:column;flex-grow:1;flex-shrink:1;justify-content:flex-start;width:100%}.result_chatMsgCon__x8UTP{overflow-y:overlay;text-align:left}.result_chatMsgCon__x8UTP::-webkit-scrollbar-thumb{border:none;border-radius:3px}.result_chatMsgCon__x8UTP::-webkit-scrollbar{width:6px}.result_chatMsgRow__dr9Qg{align-items:flex-start;display:flex;flex-direction:row;justify-content:flex-start;margin-bottom:1rem;width:100%}.result_chatMsgRow__dr9Qg.result_user__bUuRg{flex-direction:row-reverse}.result_avatar__B2zOp{background:var(--bg-gray2-color);border-radius:1.5rem;margin-left:0;margin-right:1rem;overflow:hidden}.result_avatar__B2zOp,.result_avatar__B2zOp img{height:3rem;width:3rem}.result_user__bUuRg .result_avatar__B2zOp{margin-left:1rem;margin-right:0}.result_bubble__GexXm{background:var(--bg-theme-color);border-radius:var(--radius);flex-shrink:1;line-height:1.5rem;padding:.75rem 1rem;white-space:pre-wrap;word-break:break-all}.result_bubble__GexXm.result_unactive__zyVF2{background:var(--bg-gray1-color)}.result_user__bUuRg .result_bubble__GexXm{background:var(--bg-light-color)}.result_chatIptCon__LXDF-{align-items:center;display:flex;flex-direction:column;justify-content:flex-start;width:100%}.result_chatTipsCon__w4uUf{align-items:flex-end;display:flex;flex-direction:row;justify-content:flex-start;margin-top:1rem;max-width:100%;overflow-x:auto;overflow-y:hidden;width:100%}.result_chatTipsCon__w4uUf::-webkit-scrollbar-thumb{border-color:var(--bg-gray0-color)}.result_chatTips__6b9zJ{background:var(--bg-light-color);border-radius:var(--radius);cursor:pointer;margin-right:1rem;padding:1rem;text-align:left;white-space:pre-wrap;width:15.5rem;word-break:break-all}.result_chatTips__6b9zJ:last-child{margin-right:0}.result_chatRowCon__jLGk3{align-items:flex-start;display:flex;flex-direction:row;justify-content:space-between;margin-top:1rem;width:100%}.result_iptLineCon__nLuWa{flex-grow:1;flex-shrink:1;line-height:1.5rem;margin-right:1rem;position:relative;text-align:left}.result_iptSpaceholder__hAkD5{border:1px solid transparent;max-height:calc(9rem + 2px);visibility:hidden}.result_iptSpaceholder__hAkD5,.result_ipt__tA\+g4{padding:.75rem 1rem;white-space:pre-wrap;word-break:break-all}.result_ipt__tA\+g4{background:var(--bg-light-color);border-radius:var(--radius);bottom:0;left:0;overflow-y:auto;position:absolute;right:0;top:0}.result_ipt__tA\+g4::-webkit-scrollbar-thumb{border-color:var(--bg-light-color)}.result_btn__h5tQr{align-items:center;background-color:var(--theme-color);border:1px solid var(--theme-color);border-radius:1.5rem;color:#fff;cursor:pointer;display:flex;font-weight:700;height:calc(3rem - 2px);justify-content:center;line-height:1rem;padding:0 1.5rem;-webkit-user-select:none;user-select:none}.result_con__gHOU1 .result_btn__h5tQr.result_disabled__lB61-{background:var(--bg-gray2-color);border-color:var(--bg-gray2-color);color:var(--font-light-color);cursor:not-allowed}.result_iptArea__23TZc{background:#fff;border:1px solid #e5e7eb;border-radius:8px;box-shadow:0 0 0 3px transparent,inset 0 2px 4px 0 rgba(0,0,0,.05);color:#1f2937;display:block;font-size:14px;height:42px;line-height:1.4;outline:none!important;padding:10px;position:relative;width:100%}.result_iptArea__23TZc:focus{border-color:#93c5fd;box-shadow:0 0 0 3px #dfedfe,inset 0 2px 4px 0 
transparent}.result_iptArea__23TZc::-webkit-scrollbar-thumb{border-color:var(--bg-gray0-color)}.result_clearBtn__r6e0y{background:linear-gradient(to bottom right,#f3f4f6,#e5e7eb);border:1px solid #e5e7eb;border-radius:8px;color:#374151;cursor:pointer;font-size:16px;font-weight:600;height:42px;min-width:max(160px,48%);padding:8px 16px}.result_clearBtn__r6e0y:hover{background:linear-gradient(to bottom right,#f3f4f6,#f3f4f6);border:1px solid #e5e7eb}.result_clearBtnLogin__LOsgV{background:linear-gradient(to bottom right,#f3f4f6,#e5e7eb);border:1px solid #e5e7eb;border-radius:8px;color:#374151;cursor:pointer;font-size:16px;font-weight:700;height:42px;min-width:max(160px,48%);padding:8px 16px}.result_inputError__qtPTq{border-color:#f56565;box-shadow:0 0 0 3px #fed7d7,inset 0 2px 4px 0 transparent}.result_clearBtnLogin__LOsgV:hover{background:linear-gradient(to bottom right,#f3f4f6,#f3f4f6);border:1px solid #e5e7eb}.result_btnCon__LEoi5{display:flex;justify-content:space-between}.result_generateBtn__UGmBG{background:linear-gradient(to bottom right,#ffedd5,#fdba74);border:1px solid #fed7aa;border-radius:8px;color:#ea580c;cursor:pointer;font-size:16px;font-weight:600;height:42px;min-width:max(160px,48%);padding:8px 16px}.result_generateBtn__UGmBG:hover{background:linear-gradient(to bottom right,#ffecd3,#fed7ab);border:1px solid #ffd8b4}.result_generateBtnLogin__nkLOj{background:linear-gradient(to bottom right,#ffedd5,#fdba74);border:1px solid #fed7aa;border-radius:8px;color:#ea580c;cursor:pointer;font-size:16px;font-weight:700;height:42px;min-width:max(160px,48%);padding:8px 16px}.result_generateBtnLogin__nkLOj:hover{background:linear-gradient(to bottom right,#ffecd3,#fed7ab);border:1px solid #ffd8b4}.result_candidateCon__x9kyB{align-items:flex-start;background-color:var(--bg-gray0-color);border-radius:var(--radius);display:flex;flex-direction:row;flex-grow:1;flex-shrink:1;height:100%;justify-content:space-between;max-height:45rem;overflow-y:auto;padding:1rem;position:relative;width:100%}.result_candidateCon__x9kyB::-webkit-scrollbar-thumb{border-color:var(--bg-gray0-color)}.result_candidateCol__eoHna{margin-right:1rem;position:relative;width:calc(33.33333% - .66667rem)}.result_candidateCol__eoHna:last-child{margin-right:0}.result_candidateCol__eoHna 
img{border-radius:var(--radius);cursor:pointer;margin-bottom:.5rem}.result_creatorCon__tIm3e{align-items:flex-end;color:var(--font-gray-color);display:flex;font-size:1.2rem;font-weight:700;justify-content:flex-start;line-height:1.2rem;margin-bottom:1rem;width:100%}.result_creatorInfoCon__pET8h{text-align:left}.result_creatorName__VLTXL{color:var(--font-dark-color);font-size:1.2rem;font-weight:700;line-height:1.8rem}.result_creatorInfo__CkbWU{color:var(--font-gray-color);font-size:1rem;line-height:1.2rem}.result_modelView__Y25w5{background:var(--bg-gray0-color);border-radius:var(--radius);flex-grow:1;flex-shrink:1;height:100%;overflow:hidden;width:100%}.result_modelInfoCon__bXw5O{align-items:center;display:flex;flex-direction:column;justify-content:flex-end;text-align:left}.result_progressInfo__g9iwR{margin-bottom:.5rem;width:100%}.result_progressTrack__I6zDn{background:var(--bg-light-color);border-radius:2px;height:4px;position:relative;width:100%}.result_progressThumb__mbBQj{background-color:var(--theme-color);border-radius:2px;height:4px;left:0;position:absolute;top:0}.result_modelPrompt__DzUbD{background:var(--bg-light-color);border-radius:var(--radius);margin-top:1rem;min-height:3rem;padding:1rem;width:100%}.result_progressCon__O57XA{font-size:14px;position:absolute;top:calc(50% - 10px)}.result_loadingCon__XVvXD{font-size:14px;position:absolute;top:55%;z-index:-111}.result_icon__dFKnM{height:20px;position:absolute;top:calc(50% - 10px)}.result_hideModel__3phD0{display:none}.result_descriptionLogin__xi7Yx{text-align:start}.login_con__\+RJgQ{background:#000;box-shadow:-5px 0 20px 0 hsla(0,0%,100%,.2);height:100vh;padding:var(--side-gap);position:fixed;right:0;top:0;z-index:9}.login_close__JulM-{cursor:pointer;-webkit-user-select:none;user-select:none}.welcome_con__o1kmf{align-items:center;background:#121317;border-radius:.5rem;display:flex;flex-direction:column;justify-content:flex-start;padding-bottom:1rem;padding-top:2rem;position:relative;width:45%}.welcome_con__o1kmf>img{position:absolute;top:0;width:100%}.welcome_mainCon__H1gv\+{margin-top:.5rem;z-index:999}.welcome_title__Gd8m4{color:#fff;font-family:Courier New;font-size:5rem;font-weight:700;line-height:5rem}.welcome_ioCon__PQZXU{background-color:#fff;border-radius:1rem;border-style:solid;margin-left:8rem;margin-right:8rem;margin-top:24rem;padding:2rem;width:calc(100% - 16rem)}.welcome_iptCon__KpWEL{align-items:center;background:#ededf2;border-radius:1rem;display:flex;height:4rem;justify-content:space-between;margin-bottom:2rem;width:100%}.welcome_iptCon__KpWEL>img{height:2rem;margin-right:1rem;position:static;width:2rem}.welcome_ipt__ayi9Z{background:#ededf2;border:none;border-radius:1rem;color:var(--font-dark-color);flex-grow:1;font-size:1rem;height:100%;outline:none;padding:0 2rem}.welcome_ipt__ayi9Z::-webkit-input-placeholder{font-size:1rem}.welcome_ipt__ayi9Z::placeholder{font-size:1rem}.welcome_btnCon__Mx-ta,.welcome_btn__jCuoG{align-items:center;display:flex;justify-content:center}.welcome_btn__jCuoG{border:1px solid #8f8f8f;border-radius:1rem;cursor:pointer;height:3rem;line-height:1rem;-webkit-user-select:none;user-select:none;width:100%}.welcome_btn__jCuoG:last-child{background:#4a00e0;border:none;font-weight:700}.welcome_btn__jCuoG.welcome_disabled__pcSzv{cursor:not-allowed}.welcome_btn__jCuoG:hover{color:#fff}
/*# sourceMappingURL=main.46e5a5fa.css.map*/
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/iup.c
DELETED
The diff for this file is too large to render.
See raw diff
spaces/DaFujaTyping/hf-Chat-ui/src/app.d.ts
DELETED
@@ -1,17 +0,0 @@
/// <reference types="@sveltejs/kit" />
/// <reference types="unplugin-icons/types/svelte" />

// See https://kit.svelte.dev/docs/types#app
// for information about these interfaces
declare global {
	namespace App {
		// interface Error {}
		interface Locals {
			sessionId: string;
		}
		// interface PageData {}
		// interface Platform {}
	}
}

export {};

spaces/Dennis0402/QSign/Dockerfile
DELETED
@@ -1,15 +0,0 @@
# Upstream source: https://github.com/fuqiuluo/unidbg-fetch-qsign

FROM openjdk:11.0-jdk

ENV TZ Asia/Shanghai

WORKDIR /app

COPY unidbg-fetch-qsign /app

CMD bash bin/unidbg-fetch-qsign --host=0.0.0.0 --port=7860 --count=5 --library=txlib --android_id=

EXPOSE 7860

# Recommended Hugging Face companion project: https://github.com/CikeyQi/QQsign_docs

spaces/DragGan/DragGan/torch_utils/ops/upfirdn2d.cpp
DELETED
@@ -1,107 +0,0 @@
// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.

#include <torch/extension.h>
#include <ATen/cuda/CUDAContext.h>
#include <c10/cuda/CUDAGuard.h>
#include "upfirdn2d.h"

//------------------------------------------------------------------------

static torch::Tensor upfirdn2d(torch::Tensor x, torch::Tensor f, int upx, int upy, int downx, int downy, int padx0, int padx1, int pady0, int pady1, bool flip, float gain)
{
    // Validate arguments.
    TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device");
    TORCH_CHECK(f.device() == x.device(), "f must reside on the same device as x");
    TORCH_CHECK(f.dtype() == torch::kFloat, "f must be float32");
    TORCH_CHECK(x.numel() <= INT_MAX, "x is too large");
    TORCH_CHECK(f.numel() <= INT_MAX, "f is too large");
    TORCH_CHECK(x.numel() > 0, "x has zero size");
    TORCH_CHECK(f.numel() > 0, "f has zero size");
    TORCH_CHECK(x.dim() == 4, "x must be rank 4");
    TORCH_CHECK(f.dim() == 2, "f must be rank 2");
    TORCH_CHECK((x.size(0)-1)*x.stride(0) + (x.size(1)-1)*x.stride(1) + (x.size(2)-1)*x.stride(2) + (x.size(3)-1)*x.stride(3) <= INT_MAX, "x memory footprint is too large");
    TORCH_CHECK(f.size(0) >= 1 && f.size(1) >= 1, "f must be at least 1x1");
    TORCH_CHECK(upx >= 1 && upy >= 1, "upsampling factor must be at least 1");
    TORCH_CHECK(downx >= 1 && downy >= 1, "downsampling factor must be at least 1");

    // Create output tensor.
    const at::cuda::OptionalCUDAGuard device_guard(device_of(x));
    int outW = ((int)x.size(3) * upx + padx0 + padx1 - (int)f.size(1) + downx) / downx;
    int outH = ((int)x.size(2) * upy + pady0 + pady1 - (int)f.size(0) + downy) / downy;
    TORCH_CHECK(outW >= 1 && outH >= 1, "output must be at least 1x1");
    torch::Tensor y = torch::empty({x.size(0), x.size(1), outH, outW}, x.options(), x.suggest_memory_format());
    TORCH_CHECK(y.numel() <= INT_MAX, "output is too large");
    TORCH_CHECK((y.size(0)-1)*y.stride(0) + (y.size(1)-1)*y.stride(1) + (y.size(2)-1)*y.stride(2) + (y.size(3)-1)*y.stride(3) <= INT_MAX, "output memory footprint is too large");

    // Initialize CUDA kernel parameters.
    upfirdn2d_kernel_params p;
    p.x = x.data_ptr();
    p.f = f.data_ptr<float>();
    p.y = y.data_ptr();
    p.up = make_int2(upx, upy);
    p.down = make_int2(downx, downy);
    p.pad0 = make_int2(padx0, pady0);
    p.flip = (flip) ? 1 : 0;
    p.gain = gain;
    p.inSize = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0));
    p.inStride = make_int4((int)x.stride(3), (int)x.stride(2), (int)x.stride(1), (int)x.stride(0));
    p.filterSize = make_int2((int)f.size(1), (int)f.size(0));
    p.filterStride = make_int2((int)f.stride(1), (int)f.stride(0));
    p.outSize = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0));
    p.outStride = make_int4((int)y.stride(3), (int)y.stride(2), (int)y.stride(1), (int)y.stride(0));
    p.sizeMajor = (p.inStride.z == 1) ? p.inSize.w : p.inSize.w * p.inSize.z;
    p.sizeMinor = (p.inStride.z == 1) ? p.inSize.z : 1;

    // Choose CUDA kernel.
    upfirdn2d_kernel_spec spec;
    AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&]
    {
        spec = choose_upfirdn2d_kernel<scalar_t>(p);
    });

    // Set looping options.
    p.loopMajor = (p.sizeMajor - 1) / 16384 + 1;
    p.loopMinor = spec.loopMinor;
    p.loopX = spec.loopX;
    p.launchMinor = (p.sizeMinor - 1) / p.loopMinor + 1;
    p.launchMajor = (p.sizeMajor - 1) / p.loopMajor + 1;

    // Compute grid size.
    dim3 blockSize, gridSize;
    if (spec.tileOutW < 0) // large
    {
        blockSize = dim3(4, 32, 1);
        gridSize = dim3(
            ((p.outSize.y - 1) / blockSize.x + 1) * p.launchMinor,
            (p.outSize.x - 1) / (blockSize.y * p.loopX) + 1,
            p.launchMajor);
    }
    else // small
    {
        blockSize = dim3(256, 1, 1);
        gridSize = dim3(
            ((p.outSize.y - 1) / spec.tileOutH + 1) * p.launchMinor,
            (p.outSize.x - 1) / (spec.tileOutW * p.loopX) + 1,
            p.launchMajor);
    }

    // Launch CUDA kernel.
    void* args[] = {&p};
    AT_CUDA_CHECK(cudaLaunchKernel(spec.kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream()));
    return y;
}

//------------------------------------------------------------------------

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m)
{
    m.def("upfirdn2d", &upfirdn2d);
}

//------------------------------------------------------------------------

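The output-size computation above is the standard upfirdn formula, outW = (inW * upx + padx0 + padx1 - filterW + downx) / downx with integer division. A quick arithmetic check with hypothetical values for a 2x upsample with a 4-tap filter:

inW, upx, padx0, padx1, filterW, downx = 64, 2, 2, 1, 4, 1
outW = (inW * upx + padx0 + padx1 - filterW + downx) // downx
print(outW)  # 128, i.e. exactly 2 * inW for this padding choice
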
spaces/ECCV2022/bytetrack/yolox/exp/build.py
DELETED
@@ -1,53 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.

import importlib
import os
import sys


def get_exp_by_file(exp_file):
    try:
        sys.path.append(os.path.dirname(exp_file))
        current_exp = importlib.import_module(os.path.basename(exp_file).split(".")[0])
        exp = current_exp.Exp()
    except Exception:
        raise ImportError("{} doesn't contains class named 'Exp'".format(exp_file))
    return exp


def get_exp_by_name(exp_name):
    import yolox

    yolox_path = os.path.dirname(os.path.dirname(yolox.__file__))
    filedict = {
        "yolox-s": "yolox_s.py",
        "yolox-m": "yolox_m.py",
        "yolox-l": "yolox_l.py",
        "yolox-x": "yolox_x.py",
        "yolox-tiny": "yolox_tiny.py",
        "yolox-nano": "nano.py",
        "yolov3": "yolov3.py",
    }
    filename = filedict[exp_name]
    exp_path = os.path.join(yolox_path, "exps", "default", filename)
    return get_exp_by_file(exp_path)


def get_exp(exp_file, exp_name):
    """
    get Exp object by file or name. If exp_file and exp_name
    are both provided, get Exp by exp_file.

    Args:
        exp_file (str): file path of experiment.
        exp_name (str): name of experiment. "yolo-s",
    """
    assert (
        exp_file is not None or exp_name is not None
    ), "plz provide exp file or exp name."
    if exp_file is not None:
        return get_exp_by_file(exp_file)
    else:
        return get_exp_by_name(exp_name)

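A minimal usage sketch for the loader above. The name must be a key of `filedict`, and the sketch assumes the usual `yolox.exp` re-export of `get_exp`:

from yolox.exp import get_exp

exp = get_exp(exp_file=None, exp_name="yolox-s")  # loads exps/default/yolox_s.py
model = exp.get_model()  # Exp objects know how to build their model
print(type(exp).__name__)
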
spaces/EPFL-VILAB/MultiMAE/dpt/transforms.py
DELETED
@@ -1,231 +0,0 @@
import numpy as np
import cv2
import math


def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA):
    """Rezise the sample to ensure the given size. Keeps aspect ratio.

    Args:
        sample (dict): sample
        size (tuple): image size

    Returns:
        tuple: new size
    """
    shape = list(sample["disparity"].shape)

    if shape[0] >= size[0] and shape[1] >= size[1]:
        return sample

    scale = [0, 0]
    scale[0] = size[0] / shape[0]
    scale[1] = size[1] / shape[1]

    scale = max(scale)

    shape[0] = math.ceil(scale * shape[0])
    shape[1] = math.ceil(scale * shape[1])

    # resize
    sample["image"] = cv2.resize(
        sample["image"], tuple(shape[::-1]), interpolation=image_interpolation_method
    )

    sample["disparity"] = cv2.resize(
        sample["disparity"], tuple(shape[::-1]), interpolation=cv2.INTER_NEAREST
    )
    sample["mask"] = cv2.resize(
        sample["mask"].astype(np.float32),
        tuple(shape[::-1]),
        interpolation=cv2.INTER_NEAREST,
    )
    sample["mask"] = sample["mask"].astype(bool)

    return tuple(shape)


class Resize(object):
    """Resize sample to given size (width, height)."""

    def __init__(
        self,
        width,
        height,
        resize_target=True,
        keep_aspect_ratio=False,
        ensure_multiple_of=1,
        resize_method="lower_bound",
        image_interpolation_method=cv2.INTER_AREA,
    ):
        """Init.

        Args:
            width (int): desired output width
            height (int): desired output height
            resize_target (bool, optional):
                True: Resize the full sample (image, mask, target).
                False: Resize image only.
                Defaults to True.
            keep_aspect_ratio (bool, optional):
                True: Keep the aspect ratio of the input sample.
                Output sample might not have the given width and height, and
                resize behaviour depends on the parameter 'resize_method'.
                Defaults to False.
            ensure_multiple_of (int, optional):
                Output width and height is constrained to be multiple of this parameter.
                Defaults to 1.
            resize_method (str, optional):
                "lower_bound": Output will be at least as large as the given size.
                "upper_bound": Output will be at max as large as the given size. (Output size might be smaller than given size.)
                "minimal": Scale as least as possible. (Output size might be smaller than given size.)
                Defaults to "lower_bound".
        """
        self.__width = width
        self.__height = height

        self.__resize_target = resize_target
        self.__keep_aspect_ratio = keep_aspect_ratio
        self.__multiple_of = ensure_multiple_of
        self.__resize_method = resize_method
        self.__image_interpolation_method = image_interpolation_method

    def constrain_to_multiple_of(self, x, min_val=0, max_val=None):
        y = (np.round(x / self.__multiple_of) * self.__multiple_of).astype(int)

        if max_val is not None and y > max_val:
            y = (np.floor(x / self.__multiple_of) * self.__multiple_of).astype(int)

        if y < min_val:
            y = (np.ceil(x / self.__multiple_of) * self.__multiple_of).astype(int)

        return y

    def get_size(self, width, height):
        # determine new height and width
        scale_height = self.__height / height
        scale_width = self.__width / width

        if self.__keep_aspect_ratio:
            if self.__resize_method == "lower_bound":
                # scale such that output size is lower bound
                if scale_width > scale_height:
                    # fit width
                    scale_height = scale_width
                else:
                    # fit height
                    scale_width = scale_height
            elif self.__resize_method == "upper_bound":
                # scale such that output size is upper bound
                if scale_width < scale_height:
                    # fit width
                    scale_height = scale_width
                else:
                    # fit height
                    scale_width = scale_height
            elif self.__resize_method == "minimal":
                # scale as least as possbile
                if abs(1 - scale_width) < abs(1 - scale_height):
                    # fit width
                    scale_height = scale_width
                else:
                    # fit height
                    scale_width = scale_height
            else:
                raise ValueError(
                    f"resize_method {self.__resize_method} not implemented"
                )

        if self.__resize_method == "lower_bound":
            new_height = self.constrain_to_multiple_of(
                scale_height * height, min_val=self.__height
            )
            new_width = self.constrain_to_multiple_of(
                scale_width * width, min_val=self.__width
            )
        elif self.__resize_method == "upper_bound":
            new_height = self.constrain_to_multiple_of(
                scale_height * height, max_val=self.__height
            )
            new_width = self.constrain_to_multiple_of(
                scale_width * width, max_val=self.__width
            )
        elif self.__resize_method == "minimal":
            new_height = self.constrain_to_multiple_of(scale_height * height)
            new_width = self.constrain_to_multiple_of(scale_width * width)
        else:
            raise ValueError(f"resize_method {self.__resize_method} not implemented")

        return (new_width, new_height)

    def __call__(self, sample):
        width, height = self.get_size(
            sample["image"].shape[1], sample["image"].shape[0]
        )

        # resize sample
        sample["image"] = cv2.resize(
            sample["image"],
            (width, height),
            interpolation=self.__image_interpolation_method,
        )

        if self.__resize_target:
            if "disparity" in sample:
                sample["disparity"] = cv2.resize(
                    sample["disparity"],
                    (width, height),
                    interpolation=cv2.INTER_NEAREST,
                )

            if "depth" in sample:
                sample["depth"] = cv2.resize(
                    sample["depth"], (width, height), interpolation=cv2.INTER_NEAREST
                )

            sample["mask"] = cv2.resize(
                sample["mask"].astype(np.float32),
                (width, height),
                interpolation=cv2.INTER_NEAREST,
            )
            sample["mask"] = sample["mask"].astype(bool)

        return sample


class NormalizeImage(object):
    """Normlize image by given mean and std."""

    def __init__(self, mean, std):
        self.__mean = mean
        self.__std = std

    def __call__(self, sample):
        sample["image"] = (sample["image"] - self.__mean) / self.__std

        return sample


class PrepareForNet(object):
    """Prepare sample for usage as network input."""

    def __init__(self):
        pass

    def __call__(self, sample):
        image = np.transpose(sample["image"], (2, 0, 1))
        sample["image"] = np.ascontiguousarray(image).astype(np.float32)

        if "mask" in sample:
            sample["mask"] = sample["mask"].astype(np.float32)
            sample["mask"] = np.ascontiguousarray(sample["mask"])

        if "disparity" in sample:
            disparity = sample["disparity"].astype(np.float32)
            sample["disparity"] = np.ascontiguousarray(disparity)

        if "depth" in sample:
            depth = sample["depth"].astype(np.float32)
            sample["depth"] = np.ascontiguousarray(depth)

        return sample

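To see how `keep_aspect_ratio` and `ensure_multiple_of` interact, here is an illustrative call; the 384 target and multiple-of-32 constraint are typical DPT settings, not something this file fixes:

t = Resize(384, 384, resize_target=False, keep_aspect_ratio=True,
           ensure_multiple_of=32, resize_method="lower_bound")
# 640x480 (w x h): scaled so the smaller side reaches 384, both sides
# rounded to multiples of 32, aspect ratio preserved.
print(t.get_size(640, 480))  # (512, 384)
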
spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/__init__.py
DELETED
@@ -1,6 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates.
from .backbone.swin import D2SwinTransformer
from .pixel_decoder.fpn import BasePixelDecoder
from .pixel_decoder.msdeformattn import MSDeformAttnPixelDecoder
from .meta_arch.mask_former_head import MaskFormerHead
from .meta_arch.per_pixel_baseline import PerPixelBaselineHead, PerPixelBaselinePlusHead

spaces/EcoCy/LoRA-DreamBooth-Training-UI/app_upload.py
DELETED
@@ -1,100 +0,0 @@
#!/usr/bin/env python

from __future__ import annotations

import pathlib

import gradio as gr
import slugify

from constants import UploadTarget
from uploader import Uploader
from utils import find_exp_dirs


class LoRAModelUploader(Uploader):
    def upload_lora_model(
        self,
        folder_path: str,
        repo_name: str,
        upload_to: str,
        private: bool,
        delete_existing_repo: bool,
    ) -> str:
        if not folder_path:
            raise ValueError
        if not repo_name:
            repo_name = pathlib.Path(folder_path).name
        repo_name = slugify.slugify(repo_name)

        if upload_to == UploadTarget.PERSONAL_PROFILE.value:
            organization = ''
        elif upload_to == UploadTarget.LORA_LIBRARY.value:
            organization = 'lora-library'
        else:
            raise ValueError

        return self.upload(folder_path,
                           repo_name,
                           organization=organization,
                           private=private,
                           delete_existing_repo=delete_existing_repo)


def load_local_lora_model_list() -> dict:
    choices = find_exp_dirs(ignore_repo=True)
    return gr.update(choices=choices, value=choices[0] if choices else None)


def create_upload_demo(hf_token: str | None) -> gr.Blocks:
    uploader = LoRAModelUploader(hf_token)
    model_dirs = find_exp_dirs(ignore_repo=True)

    with gr.Blocks() as demo:
        with gr.Box():
            gr.Markdown('Local Models')
            reload_button = gr.Button('Reload Model List')
            model_dir = gr.Dropdown(
                label='Model names',
                choices=model_dirs,
                value=model_dirs[0] if model_dirs else None)
        with gr.Box():
            gr.Markdown('Upload Settings')
            with gr.Row():
                use_private_repo = gr.Checkbox(label='Private', value=True)
                delete_existing_repo = gr.Checkbox(
                    label='Delete existing repo of the same name', value=False)
            upload_to = gr.Radio(label='Upload to',
                                 choices=[_.value for _ in UploadTarget],
                                 value=UploadTarget.LORA_LIBRARY.value)
            model_name = gr.Textbox(label='Model Name')
            upload_button = gr.Button('Upload')
            gr.Markdown('''
            - You can upload your trained model to your personal profile (i.e. https://huggingface.co/{your_username}/{model_name}) or to the public [LoRA Concepts Library](https://huggingface.co/lora-library) (i.e. https://huggingface.co/lora-library/{model_name}).
            ''')
        with gr.Box():
            gr.Markdown('Output message')
            output_message = gr.Markdown()

        reload_button.click(fn=load_local_lora_model_list,
                            inputs=None,
                            outputs=model_dir)
        upload_button.click(fn=uploader.upload_lora_model,
                            inputs=[
                                model_dir,
                                model_name,
                                upload_to,
                                use_private_repo,
                                delete_existing_repo,
                            ],
                            outputs=output_message)

    return demo


if __name__ == '__main__':
    import os

    hf_token = os.getenv('HF_TOKEN')
    demo = create_upload_demo(hf_token)
    demo.queue(max_size=1).launch(share=False)

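The `slugify.slugify` call above is what turns an arbitrary local folder name into a valid repo id. A quick illustration with a hypothetical name:

import slugify

print(slugify.slugify('My LoRA Model v2'))  # 'my-lora-model-v2'
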
spaces/Eddycrack864/Applio-Inference/infer/lib/infer_pack/models_onnx.py
DELETED
@@ -1,824 +0,0 @@
|
|
1 |
-
import math
|
2 |
-
import logging
|
3 |
-
|
4 |
-
logger = logging.getLogger(__name__)
|
5 |
-
|
6 |
-
import numpy as np
|
7 |
-
import torch
|
8 |
-
from torch import nn
|
9 |
-
from torch.nn import AvgPool1d, Conv1d, Conv2d, ConvTranspose1d
|
10 |
-
from torch.nn import functional as F
|
11 |
-
from torch.nn.utils import remove_weight_norm, spectral_norm, weight_norm
|
12 |
-
|
13 |
-
from infer.lib.infer_pack import attentions, commons, modules
|
14 |
-
from infer.lib.infer_pack.commons import get_padding, init_weights
|
15 |
-
|
16 |
-
|
17 |
-
class TextEncoder256(nn.Module):
|
18 |
-
def __init__(
|
19 |
-
self,
|
20 |
-
out_channels,
|
21 |
-
hidden_channels,
|
22 |
-
filter_channels,
|
23 |
-
n_heads,
|
24 |
-
n_layers,
|
25 |
-
kernel_size,
|
26 |
-
p_dropout,
|
27 |
-
f0=True,
|
28 |
-
):
|
29 |
-
super().__init__()
|
30 |
-
self.out_channels = out_channels
|
31 |
-
self.hidden_channels = hidden_channels
|
32 |
-
self.filter_channels = filter_channels
|
33 |
-
self.n_heads = n_heads
|
34 |
-
self.n_layers = n_layers
|
35 |
-
self.kernel_size = kernel_size
|
36 |
-
self.p_dropout = p_dropout
|
37 |
-
self.emb_phone = nn.Linear(256, hidden_channels)
|
38 |
-
self.lrelu = nn.LeakyReLU(0.1, inplace=True)
|
39 |
-
if f0 == True:
|
40 |
-
self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
|
41 |
-
self.encoder = attentions.Encoder(
|
42 |
-
hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
|
43 |
-
)
|
44 |
-
self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
|
45 |
-
|
46 |
-
def forward(self, phone, pitch, lengths):
|
47 |
-
if pitch == None:
|
48 |
-
x = self.emb_phone(phone)
|
49 |
-
else:
|
50 |
-
x = self.emb_phone(phone) + self.emb_pitch(pitch)
|
51 |
-
x = x * math.sqrt(self.hidden_channels) # [b, t, h]
|
52 |
-
x = self.lrelu(x)
|
53 |
-
x = torch.transpose(x, 1, -1) # [b, h, t]
|
54 |
-
x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
|
55 |
-
x.dtype
|
56 |
-
)
|
57 |
-
x = self.encoder(x * x_mask, x_mask)
|
58 |
-
stats = self.proj(x) * x_mask
|
59 |
-
|
60 |
-
m, logs = torch.split(stats, self.out_channels, dim=1)
|
61 |
-
return m, logs, x_mask
|
62 |
-
|
63 |
-
|
64 |
-
class TextEncoder768(nn.Module):
|
65 |
-
def __init__(
|
66 |
-
self,
|
67 |
-
out_channels,
|
68 |
-
hidden_channels,
|
69 |
-
filter_channels,
|
70 |
-
n_heads,
|
71 |
-
n_layers,
|
72 |
-
kernel_size,
|
73 |
-
p_dropout,
|
74 |
-
f0=True,
|
75 |
-
):
|
76 |
-
super().__init__()
|
77 |
-
self.out_channels = out_channels
|
78 |
-
self.hidden_channels = hidden_channels
|
79 |
-
self.filter_channels = filter_channels
|
80 |
-
self.n_heads = n_heads
|
81 |
-
self.n_layers = n_layers
|
82 |
-
self.kernel_size = kernel_size
|
83 |
-
self.p_dropout = p_dropout
|
84 |
-
self.emb_phone = nn.Linear(768, hidden_channels)
|
85 |
-
self.lrelu = nn.LeakyReLU(0.1, inplace=True)
|
86 |
-
if f0 == True:
|
87 |
-
self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
|
88 |
-
self.encoder = attentions.Encoder(
|
89 |
-
hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
|
90 |
-
)
|
91 |
-
self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
|
92 |
-
|
93 |
-
def forward(self, phone, pitch, lengths):
|
94 |
-
if pitch == None:
|
95 |
-
x = self.emb_phone(phone)
|
96 |
-
else:
|
97 |
-
x = self.emb_phone(phone) + self.emb_pitch(pitch)
|
98 |
-
x = x * math.sqrt(self.hidden_channels) # [b, t, h]
|
99 |
-
x = self.lrelu(x)
|
100 |
-
x = torch.transpose(x, 1, -1) # [b, h, t]
|
101 |
-
x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
|
102 |
-
x.dtype
|
103 |
-
)
|
104 |
-
x = self.encoder(x * x_mask, x_mask)
|
105 |
-
stats = self.proj(x) * x_mask
|
106 |
-
|
107 |
-
m, logs = torch.split(stats, self.out_channels, dim=1)
|
108 |
-
return m, logs, x_mask
|
109 |
-
|
110 |
-
|
111 |
-
class ResidualCouplingBlock(nn.Module):
|
112 |
-
def __init__(
|
113 |
-
self,
|
114 |
-
channels,
|
115 |
-
hidden_channels,
|
116 |
-
kernel_size,
|
117 |
-
dilation_rate,
|
118 |
-
n_layers,
|
119 |
-
n_flows=4,
|
120 |
-
gin_channels=0,
|
121 |
-
):
|
122 |
-
super().__init__()
|
123 |
-
self.channels = channels
|
124 |
-
self.hidden_channels = hidden_channels
|
125 |
-
self.kernel_size = kernel_size
|
126 |
-
self.dilation_rate = dilation_rate
|
127 |
-
self.n_layers = n_layers
|
128 |
-
self.n_flows = n_flows
|
129 |
-
self.gin_channels = gin_channels
|
130 |
-
|
131 |
-
self.flows = nn.ModuleList()
|
132 |
-
for i in range(n_flows):
|
133 |
-
self.flows.append(
|
134 |
-
modules.ResidualCouplingLayer(
|
135 |
-
channels,
|
136 |
-
hidden_channels,
|
137 |
-
kernel_size,
|
138 |
-
dilation_rate,
|
139 |
-
n_layers,
|
140 |
-
gin_channels=gin_channels,
|
141 |
-
mean_only=True,
|
142 |
-
)
|
143 |
-
)
|
144 |
-
self.flows.append(modules.Flip())
|
145 |
-
|
146 |
-
def forward(self, x, x_mask, g=None, reverse=False):
|
147 |
-
if not reverse:
|
148 |
-
for flow in self.flows:
|
149 |
-
x, _ = flow(x, x_mask, g=g, reverse=reverse)
|
150 |
-
else:
|
151 |
-
for flow in reversed(self.flows):
|
152 |
-
x = flow(x, x_mask, g=g, reverse=reverse)
|
153 |
-
return x
|
154 |
-
|
155 |
-
def remove_weight_norm(self):
|
156 |
-
for i in range(self.n_flows):
|
157 |
-
self.flows[i * 2].remove_weight_norm()
|
158 |
-
|
159 |
-
|
160 |
-
class PosteriorEncoder(nn.Module):
    def __init__(
        self,
        in_channels,
        out_channels,
        hidden_channels,
        kernel_size,
        dilation_rate,
        n_layers,
        gin_channels=0,
    ):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.gin_channels = gin_channels

        self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
        self.enc = modules.WN(
            hidden_channels,
            kernel_size,
            dilation_rate,
            n_layers,
            gin_channels=gin_channels,
        )
        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)

    def forward(self, x, x_lengths, g=None):
        x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
            x.dtype
        )
        x = self.pre(x) * x_mask
        x = self.enc(x, x_mask, g=g)
        stats = self.proj(x) * x_mask
        m, logs = torch.split(stats, self.out_channels, dim=1)
        z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
        return z, m, logs, x_mask

    def remove_weight_norm(self):
        self.enc.remove_weight_norm()

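# Illustrative sketch: the last lines of PosteriorEncoder.forward are the usual
# Gaussian reparameterization; proj predicts mean and log-std per frame and z is
# sampled as m + exp(logs) * eps. Standalone, with made-up sizes:
import torch

out_channels = 4
stats = torch.randn(1, out_channels * 2, 10)   # stands in for self.proj(x)
m, logs = torch.split(stats, out_channels, dim=1)
z = m + torch.randn_like(m) * torch.exp(logs)  # one Gaussian sample per frame
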
class Generator(torch.nn.Module):
    def __init__(
        self,
        initial_channel,
        resblock,
        resblock_kernel_sizes,
        resblock_dilation_sizes,
        upsample_rates,
        upsample_initial_channel,
        upsample_kernel_sizes,
        gin_channels=0,
    ):
        super(Generator, self).__init__()
        self.num_kernels = len(resblock_kernel_sizes)
        self.num_upsamples = len(upsample_rates)
        self.conv_pre = Conv1d(
            initial_channel, upsample_initial_channel, 7, 1, padding=3
        )
        resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2

        self.ups = nn.ModuleList()
        for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
            self.ups.append(
                weight_norm(
                    ConvTranspose1d(
                        upsample_initial_channel // (2**i),
                        upsample_initial_channel // (2 ** (i + 1)),
                        k,
                        u,
                        padding=(k - u) // 2,
                    )
                )
            )

        self.resblocks = nn.ModuleList()
        for i in range(len(self.ups)):
            ch = upsample_initial_channel // (2 ** (i + 1))
            for j, (k, d) in enumerate(
                zip(resblock_kernel_sizes, resblock_dilation_sizes)
            ):
                self.resblocks.append(resblock(ch, k, d))

        self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
        self.ups.apply(init_weights)

        if gin_channels != 0:
            self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)

    def forward(self, x, g=None):
        x = self.conv_pre(x)
        if g is not None:
            x = x + self.cond(g)

        for i in range(self.num_upsamples):
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            x = self.ups[i](x)
            xs = None
            for j in range(self.num_kernels):
                if xs is None:
                    xs = self.resblocks[i * self.num_kernels + j](x)
                else:
                    xs += self.resblocks[i * self.num_kernels + j](x)
            x = xs / self.num_kernels
        x = F.leaky_relu(x)
        x = self.conv_post(x)
        x = torch.tanh(x)

        return x

    def remove_weight_norm(self):
        for l in self.ups:
            remove_weight_norm(l)
        for l in self.resblocks:
            l.remove_weight_norm()

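# Illustrative arithmetic: the stacked ConvTranspose1d layers multiply the time
# axis by prod(upsample_rates), so one input frame becomes one hop of audio.
# The concrete rates below are an assumed example config, not read from this file:
import numpy as np

upsample_rates = [10, 10, 2, 2]        # assumed 40k-style config
hop = int(np.prod(upsample_rates))     # 400 output samples per input frame
print(250 * hop)                       # 100000 samples = 2.5 s at 40 kHz
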
class SineGen(torch.nn.Module):
    """Definition of sine generator
    SineGen(samp_rate, harmonic_num = 0,
            sine_amp = 0.1, noise_std = 0.003,
            voiced_threshold = 0,
            flag_for_pulse=False)
    samp_rate: sampling rate in Hz
    harmonic_num: number of harmonic overtones (default 0)
    sine_amp: amplitude of sine waveform (default 0.1)
    noise_std: std of Gaussian noise (default 0.003)
    voiced_threshold: F0 threshold for U/V classification (default 0)
    flag_for_pulse: this SineGen is used inside PulseGen (default False)
    Note: when flag_for_pulse is True, the first time step of a voiced
    segment is always sin(np.pi) or cos(0)
    """

    def __init__(
        self,
        samp_rate,
        harmonic_num=0,
        sine_amp=0.1,
        noise_std=0.003,
        voiced_threshold=0,
        flag_for_pulse=False,
    ):
        super(SineGen, self).__init__()
        self.sine_amp = sine_amp
        self.noise_std = noise_std
        self.harmonic_num = harmonic_num
        self.dim = self.harmonic_num + 1
        self.sampling_rate = samp_rate
        self.voiced_threshold = voiced_threshold

    def _f02uv(self, f0):
        # generate uv signal
        uv = torch.ones_like(f0)
        uv = uv * (f0 > self.voiced_threshold)
        return uv

    def forward(self, f0, upp):
        """sine_tensor, uv = forward(f0)
        input F0: tensor(batchsize=1, length, dim=1)
        f0 for unvoiced steps should be 0
        output sine_tensor: tensor(batchsize=1, length, dim)
        output uv: tensor(batchsize=1, length, 1)
        """
        with torch.no_grad():
            f0 = f0[:, None].transpose(1, 2)
            f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
            # fundamental component
            f0_buf[:, :, 0] = f0[:, :, 0]
            for idx in np.arange(self.harmonic_num):
                f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
                    idx + 2
                )  # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
            rad_values = (
                f0_buf / self.sampling_rate
            ) % 1  # the % 1 means the n_har products cannot be optimized away downstream
            rand_ini = torch.rand(
                f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
            )
            rand_ini[:, 0] = 0
            rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
            tmp_over_one = torch.cumsum(
                rad_values, 1
            )  # a % 1 here would keep the later cumsum from being optimized
            tmp_over_one *= upp
            tmp_over_one = F.interpolate(
                tmp_over_one.transpose(2, 1),
                scale_factor=upp,
                mode="linear",
                align_corners=True,
            ).transpose(2, 1)
            rad_values = F.interpolate(
                rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
            ).transpose(2, 1)
            tmp_over_one %= 1
            tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
            cumsum_shift = torch.zeros_like(rad_values)
            cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
            sine_waves = torch.sin(
                torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
            )
            sine_waves = sine_waves * self.sine_amp
            uv = self._f02uv(f0)
            uv = F.interpolate(
                uv.transpose(2, 1), scale_factor=upp, mode="nearest"
            ).transpose(2, 1)
            noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
            noise = noise_amp * torch.randn_like(sine_waves)
            sine_waves = sine_waves * uv + noise
        return sine_waves, uv, noise

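# Illustrative sketch of SineGen's core idea: accumulate the per-sample phase
# increment f0/sr and take sin(2*pi*cumsum); the cumsum_shift bookkeeping above
# only subtracts whole cycles at wrap points. This stripped-down version omits
# harmonics, upsampling, and noise:
import math
import torch

sr = 16000
f0 = torch.full((1, sr, 1), 220.0)           # one second of a 220 Hz tone
rad_values = (f0 / sr) % 1                   # phase increment per sample, in cycles
phase = torch.cumsum(rad_values, dim=1)      # accumulated phase
sine = 0.1 * torch.sin(2 * math.pi * phase)  # sine_amp = 0.1, as in SineGen
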
class SourceModuleHnNSF(torch.nn.Module):
    """SourceModule for hn-nsf
    SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
                 add_noise_std=0.003, voiced_threshod=0)
    sampling_rate: sampling rate in Hz
    harmonic_num: number of harmonics above F0 (default: 0)
    sine_amp: amplitude of sine source signal (default: 0.1)
    add_noise_std: std of additive Gaussian noise (default: 0.003)
        note that the amplitude of noise in unvoiced segments is decided
        by sine_amp
    voiced_threshold: threshold to set U/V given F0 (default: 0)
    Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
    F0_sampled (batchsize, length, 1)
    Sine_source (batchsize, length, 1)
    noise_source (batchsize, length, 1)
    uv (batchsize, length, 1)
    """

    def __init__(
        self,
        sampling_rate,
        harmonic_num=0,
        sine_amp=0.1,
        add_noise_std=0.003,
        voiced_threshod=0,
        is_half=True,
    ):
        super(SourceModuleHnNSF, self).__init__()

        self.sine_amp = sine_amp
        self.noise_std = add_noise_std
        self.is_half = is_half
        # to produce sine waveforms
        self.l_sin_gen = SineGen(
            sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
        )

        # to merge source harmonics into a single excitation
        self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
        self.l_tanh = torch.nn.Tanh()

    def forward(self, x, upp=None):
        sine_wavs, uv, _ = self.l_sin_gen(x, upp)
        if self.is_half:
            sine_wavs = sine_wavs.half()
        sine_merge = self.l_tanh(self.l_linear(sine_wavs))
        return sine_merge, None, None  # noise, uv

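# Illustrative sketch: l_linear collapses the fundamental plus harmonic_num
# overtones into a single excitation channel, and l_tanh bounds it to (-1, 1).
# Sizes below are made up:
import torch

harmonic_num = 8
sine_wavs = torch.randn(1, 16000, harmonic_num + 1)  # [b, t, dim] as from SineGen
merge = torch.nn.Sequential(
    torch.nn.Linear(harmonic_num + 1, 1),  # learned per-harmonic weights
    torch.nn.Tanh(),
)
sine_merge = merge(sine_wavs)                        # [b, t, 1]
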
class GeneratorNSF(torch.nn.Module):
    def __init__(
        self,
        initial_channel,
        resblock,
        resblock_kernel_sizes,
        resblock_dilation_sizes,
        upsample_rates,
        upsample_initial_channel,
        upsample_kernel_sizes,
        gin_channels,
        sr,
        is_half=False,
    ):
        super(GeneratorNSF, self).__init__()
        self.num_kernels = len(resblock_kernel_sizes)
        self.num_upsamples = len(upsample_rates)

        self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
        self.m_source = SourceModuleHnNSF(
            sampling_rate=sr, harmonic_num=0, is_half=is_half
        )
        self.noise_convs = nn.ModuleList()
        self.conv_pre = Conv1d(
            initial_channel, upsample_initial_channel, 7, 1, padding=3
        )
        resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2

        self.ups = nn.ModuleList()
        for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
            c_cur = upsample_initial_channel // (2 ** (i + 1))
            self.ups.append(
                weight_norm(
                    ConvTranspose1d(
                        upsample_initial_channel // (2**i),
                        upsample_initial_channel // (2 ** (i + 1)),
                        k,
                        u,
                        padding=(k - u) // 2,
                    )
                )
            )
            if i + 1 < len(upsample_rates):
                stride_f0 = np.prod(upsample_rates[i + 1 :])
                self.noise_convs.append(
                    Conv1d(
                        1,
                        c_cur,
                        kernel_size=stride_f0 * 2,
                        stride=stride_f0,
                        padding=stride_f0 // 2,
                    )
                )
            else:
                self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))

        self.resblocks = nn.ModuleList()
        for i in range(len(self.ups)):
            ch = upsample_initial_channel // (2 ** (i + 1))
            for j, (k, d) in enumerate(
                zip(resblock_kernel_sizes, resblock_dilation_sizes)
            ):
                self.resblocks.append(resblock(ch, k, d))

        self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
        self.ups.apply(init_weights)

        if gin_channels != 0:
            self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)

        self.upp = np.prod(upsample_rates)

    def forward(self, x, f0, g=None):
        har_source, noi_source, uv = self.m_source(f0, self.upp)
        har_source = har_source.transpose(1, 2)
        x = self.conv_pre(x)
        if g is not None:
            x = x + self.cond(g)

        for i in range(self.num_upsamples):
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            x = self.ups[i](x)
            x_source = self.noise_convs[i](har_source)
            x = x + x_source
            xs = None
            for j in range(self.num_kernels):
                if xs is None:
                    xs = self.resblocks[i * self.num_kernels + j](x)
                else:
                    xs += self.resblocks[i * self.num_kernels + j](x)
            x = xs / self.num_kernels
        x = F.leaky_relu(x)
        x = self.conv_post(x)
        x = torch.tanh(x)
        return x

    def remove_weight_norm(self):
        for l in self.ups:
            remove_weight_norm(l)
        for l in self.resblocks:
            l.remove_weight_norm()

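# Illustrative arithmetic for the noise_convs strides: the harmonic source is
# generated at the full audio rate (upp = prod(upsample_rates)), so after i+1
# upsampling stages it must be downsampled by prod(upsample_rates[i+1:]) to
# line up with x. With an assumed example config:
import numpy as np

upsample_rates = [10, 10, 2, 2]                    # assumed config
upp = int(np.prod(upsample_rates))                 # 400
for i in range(len(upsample_rates)):
    done = int(np.prod(upsample_rates[: i + 1]))   # upsampling applied so far
    stride_f0 = (
        int(np.prod(upsample_rates[i + 1 :])) if i + 1 < len(upsample_rates) else 1
    )
    assert upp // stride_f0 == done                # source rate / stride == x's rate
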
sr2sr = {
    "32k": 32000,
    "40k": 40000,
    "48k": 48000,
}

class SynthesizerTrnMsNSFsidM(nn.Module):
    def __init__(
        self,
        spec_channels,
        segment_size,
        inter_channels,
        hidden_channels,
        filter_channels,
        n_heads,
        n_layers,
        kernel_size,
        p_dropout,
        resblock,
        resblock_kernel_sizes,
        resblock_dilation_sizes,
        upsample_rates,
        upsample_initial_channel,
        upsample_kernel_sizes,
        spk_embed_dim,
        gin_channels,
        sr,
        version,
        **kwargs
    ):
        super().__init__()
        if isinstance(sr, str):
            sr = sr2sr[sr]
        self.spec_channels = spec_channels
        self.inter_channels = inter_channels
        self.hidden_channels = hidden_channels
        self.filter_channels = filter_channels
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.resblock = resblock
        self.resblock_kernel_sizes = resblock_kernel_sizes
        self.resblock_dilation_sizes = resblock_dilation_sizes
        self.upsample_rates = upsample_rates
        self.upsample_initial_channel = upsample_initial_channel
        self.upsample_kernel_sizes = upsample_kernel_sizes
        self.segment_size = segment_size
        self.gin_channels = gin_channels
        # self.hop_length = hop_length#
        self.spk_embed_dim = spk_embed_dim
        if version == "v1":
            self.enc_p = TextEncoder256(
                inter_channels,
                hidden_channels,
                filter_channels,
                n_heads,
                n_layers,
                kernel_size,
                p_dropout,
            )
        else:
            self.enc_p = TextEncoder768(
                inter_channels,
                hidden_channels,
                filter_channels,
                n_heads,
                n_layers,
                kernel_size,
                p_dropout,
            )
        self.dec = GeneratorNSF(
            inter_channels,
            resblock,
            resblock_kernel_sizes,
            resblock_dilation_sizes,
            upsample_rates,
            upsample_initial_channel,
            upsample_kernel_sizes,
            gin_channels=gin_channels,
            sr=sr,
            is_half=kwargs["is_half"],
        )
        self.enc_q = PosteriorEncoder(
            spec_channels,
            inter_channels,
            hidden_channels,
            5,
            1,
            16,
            gin_channels=gin_channels,
        )
        self.flow = ResidualCouplingBlock(
            inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
        )
        self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
        self.speaker_map = None
        logger.debug(
            "gin_channels: %s, self.spk_embed_dim: %s",
            gin_channels,
            self.spk_embed_dim,
        )

    def remove_weight_norm(self):
        self.dec.remove_weight_norm()
        self.flow.remove_weight_norm()
        self.enc_q.remove_weight_norm()

    def construct_spkmixmap(self, n_speaker):
        self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels))
        for i in range(n_speaker):
            self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]]))
        self.speaker_map = self.speaker_map.unsqueeze(0)

    def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None):
        if self.speaker_map is not None:  # [N, S] * [S, B, 1, H]
            g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1))  # [N, S, B, 1, 1]
            g = g * self.speaker_map  # [N, S, B, 1, H]
            g = torch.sum(g, dim=1)  # [N, B, 1, H]
            g = g.transpose(0, -1).transpose(0, -2).squeeze(0)  # [B, H, N]
        else:
            g = g.unsqueeze(0)
            g = self.emb_g(g).transpose(1, 2)

        m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
        z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
        z = self.flow(z_p, x_mask, g=g, reverse=True)
        o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
        return o

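# Illustrative shape walk for the speaker-map branch of forward: per-frame
# mixing weights [N, S] are broadcast against per-speaker embeddings and
# reduced to the [B, H, N] conditioning tensor. All sizes are made up:
import torch

N, S, B, H = 100, 2, 1, 256                # frames, speakers, batch, gin_channels
speaker_map = torch.randn(1, S, B, 1, H)   # as built by construct_spkmixmap
g = torch.rand(N, S).reshape(N, S, 1, 1, 1)
g = (g * speaker_map).sum(dim=1)           # [N, B, 1, H]
g = g.transpose(0, -1).transpose(0, -2).squeeze(0)
print(g.shape)                             # torch.Size([1, 256, 100]) == [B, H, N]
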
class MultiPeriodDiscriminator(torch.nn.Module):
    def __init__(self, use_spectral_norm=False):
        super(MultiPeriodDiscriminator, self).__init__()
        periods = [2, 3, 5, 7, 11, 17]
        # periods = [3, 5, 7, 11, 17, 23, 37]

        discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
        discs = discs + [
            DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
        ]
        self.discriminators = nn.ModuleList(discs)

    def forward(self, y, y_hat):
        y_d_rs = []
        y_d_gs = []
        fmap_rs = []
        fmap_gs = []
        for i, d in enumerate(self.discriminators):
            y_d_r, fmap_r = d(y)
            y_d_g, fmap_g = d(y_hat)
            y_d_rs.append(y_d_r)
            y_d_gs.append(y_d_g)
            fmap_rs.append(fmap_r)
            fmap_gs.append(fmap_g)

        return y_d_rs, y_d_gs, fmap_rs, fmap_gs

class MultiPeriodDiscriminatorV2(torch.nn.Module):
    def __init__(self, use_spectral_norm=False):
        super(MultiPeriodDiscriminatorV2, self).__init__()
        # periods = [2, 3, 5, 7, 11, 17]
        periods = [2, 3, 5, 7, 11, 17, 23, 37]

        discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
        discs = discs + [
            DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
        ]
        self.discriminators = nn.ModuleList(discs)

    def forward(self, y, y_hat):
        y_d_rs = []
        y_d_gs = []
        fmap_rs = []
        fmap_gs = []
        for i, d in enumerate(self.discriminators):
            y_d_r, fmap_r = d(y)
            y_d_g, fmap_g = d(y_hat)
            y_d_rs.append(y_d_r)
            y_d_gs.append(y_d_g)
            fmap_rs.append(fmap_r)
            fmap_gs.append(fmap_g)

        return y_d_rs, y_d_gs, fmap_rs, fmap_gs

class DiscriminatorS(torch.nn.Module):
    def __init__(self, use_spectral_norm=False):
        super(DiscriminatorS, self).__init__()
        norm_f = spectral_norm if use_spectral_norm else weight_norm
        self.convs = nn.ModuleList(
            [
                norm_f(Conv1d(1, 16, 15, 1, padding=7)),
                norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
                norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
                norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
                norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
                norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
            ]
        )
        self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))

    def forward(self, x):
        fmap = []

        for l in self.convs:
            x = l(x)
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            fmap.append(x)
        x = self.conv_post(x)
        fmap.append(x)
        x = torch.flatten(x, 1, -1)

        return x, fmap

class DiscriminatorP(torch.nn.Module):
    def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
        super(DiscriminatorP, self).__init__()
        self.period = period
        self.use_spectral_norm = use_spectral_norm
        norm_f = spectral_norm if use_spectral_norm else weight_norm
        pad = (get_padding(kernel_size, 1), 0)
        self.convs = nn.ModuleList(
            [
                norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=pad)),
                norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=pad)),
                norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=pad)),
                norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=pad)),
                norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=pad)),
            ]
        )
        self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))

    def forward(self, x):
        fmap = []

        # 1d to 2d
        b, c, t = x.shape
        if t % self.period != 0:  # pad first
            n_pad = self.period - (t % self.period)
            x = F.pad(x, (0, n_pad), "reflect")
            t = t + n_pad
        x = x.view(b, c, t // self.period, self.period)

        for l in self.convs:
            x = l(x)
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            fmap.append(x)
        x = self.conv_post(x)
        fmap.append(x)
        x = torch.flatten(x, 1, -1)

        return x, fmap
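
# Illustrative sketch of DiscriminatorP's 1d-to-2d step: the waveform is padded
# to a multiple of `period` and folded so that each column holds samples that
# are `period` steps apart, which the (k, 1) 2-D convolutions then scan:
import torch
import torch.nn.functional as F

period = 3
x = torch.arange(10.0).view(1, 1, 10)      # b=1, c=1, t=10
n_pad = period - (x.shape[-1] % period)    # 10 % 3 = 1, so pad 2
x = F.pad(x, (0, n_pad), "reflect")
x = x.view(1, 1, -1, period)               # [b, c, t // period, period]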
spaces/Eduardovco/Potato/Dockerfile
DELETED
@@ -1,21 +0,0 @@
FROM node:18-bullseye-slim

RUN apt-get update && \
    apt-get install -y git

RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app

WORKDIR /app

RUN npm install

COPY Dockerfile greeting.md* .env* ./

RUN npm run build

EXPOSE 7860

ENV NODE_ENV=production

CMD [ "npm", "start" ]
spaces/Egrt/GCycleGAN/nets/resnest/__init__.py
DELETED
@@ -1,2 +0,0 @@
from .resnest import *
from .ablation import *