parquet-converter committed
Commit 4efd843 · 1 Parent(s): d6f043e

Update parquet files (step 73 of 476)

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. spaces.csv +0 -0
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Xcpuscalar Gratis Enhance Your Windows Mobile Device Experience with This Amazing Software.md +0 -77
  3. spaces/1gistliPinn/ChatGPT4/Examples/El Omnilibro De Los Reactores Quimicos __TOP__.md +0 -16
  4. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Connect to Any WiFi QrCode in Seconds with IQ APK.md +0 -62
  5. spaces/1phancelerku/anime-remove-background/Download Dear My Love by Big Zulu The Song That Will Make You Fall in Love.md +0 -150
  6. spaces/1phancelerku/anime-remove-background/Enjoy Blackmoor 2 with Mod APK Free Download for Android Devices.md +0 -129
  7. spaces/1phancelerku/anime-remove-background/FIFA Mobile () 9.0.12 APK - NEXONs Official Release.md +0 -110
  8. spaces/1toTree/lora_test/ppdiffusers/pipelines/README.md +0 -380
  9. spaces/232labs/VToonify/vtoonify/smooth_parsing_map.py +0 -172
  10. spaces/4com/SD-XL-CPU/README.md +0 -13
  11. spaces/801artistry/RVC801/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py +0 -86
  12. spaces/AI-Dashboards/AI.Dashboard.HEDIS.Terms.Vocabulary/style.css +0 -28
  13. spaces/AI-Hobbyist/Hoyo-RVC/infer-web.py +0 -1998
  14. spaces/AI-Hobbyist/Hoyo-RVC/infer/infer-pm-index256.py +0 -199
  15. spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/espnet_positional_embedding.py +0 -113
  16. spaces/Abhilashvj/planogram-compliance/utils/general.py +0 -1496
  17. spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/conversation/[id]/share/$types.d.ts +0 -9
  18. spaces/AchyuthGamer/OpenGPT-v1/app.py +0 -259
  19. spaces/Adapting/YouTube-Downloader/tube/utils.py +0 -36
  20. spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/visibility/__init__.py +0 -13
  21. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/ball/Factory.d.ts +0 -6
  22. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dropdownlist/Factory.d.ts +0 -5
  23. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/simpledropdownlist/Factory.d.ts +0 -6
  24. spaces/AiBototicus/BucksAI-4/README.md +0 -13
  25. spaces/AlanMars/QYL-AI-Space/modules/models/modeling_moss.py +0 -711
  26. spaces/AlexWang/lama/saicinpainting/evaluation/losses/__init__.py +0 -0
  27. spaces/Alfasign/fdvdv/app.py +0 -7
  28. spaces/Alpaca233/SadTalker/src/face3d/models/__init__.py +0 -67
  29. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/index.md +0 -98
  30. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/unconditional_image_generation.md +0 -54
  31. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/embeddings_flax.py +0 -95
  32. spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/yolact.py +0 -146
  33. spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/util/time_counter.py +0 -62
  34. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/cachecontrol/_cmd.py +0 -61
  35. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/packaging/_structures.py +0 -61
  36. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/dep_util.py +0 -25
  37. spaces/AzumaSeren100/XuanShen-Bert-VITS2/commons.py +0 -161
  38. spaces/Bart92/RVC_HF/Applio-RVC-Fork/utils/backups.py +0 -141
  39. spaces/Bart92/RVC_HF/demucs/augment.py +0 -106
  40. spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/layers_123812KB .py +0 -118
  41. spaces/Bart92/RVC_HF/slicer2.py +0 -260
  42. spaces/Benson/text-generation/Examples/Descargar El Zombie Caminar 1 Mod Apk.md +0 -47
  43. spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/tags.py +0 -487
  44. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/engine/defaults.py +0 -543
  45. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/dev/run_instant_tests.sh +0 -27
  46. spaces/CVPR/MonoScene/monoscene/unet2d.py +0 -198
  47. spaces/Carlosito16/aitGPT/app_with_prompt_v2.py +0 -256
  48. spaces/Cecil8352/vits-models/modules.py +0 -388
  49. spaces/CofAI/chat/client/css/global.css +0 -70
  50. spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/retinanet/__init__.py +0 -0
spaces.csv DELETED
The diff for this file is too large to render.
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Xcpuscalar Gratis Enhance Your Windows Mobile Device Experience with This Amazing Software.md DELETED
@@ -1,77 +0,0 @@
- <br />
- <h1>Grozdana Olujic Oldanini Vrtovi PDF Download: A Review of a Magical Fairy Tale Book</h1>
- <h2>Introduction</h2>
- <p>Do you love fairy tales? Do you enjoy reading stories that transport you to a different world full of wonder and magic? If you answered yes, then you might want to check out <strong>Grozdana Olujic Oldanini Vrtovi PDF download</strong>, a book that will enchant you with its beautiful and original fairy tales.</p>
- <h2>grozdana olujic oldanini vrtovi pdf download</h2><br /><p><b><b>Download Zip</b> &#9881;&#9881;&#9881; <a href="https://byltly.com/2uKvZe">https://byltly.com/2uKvZe</a></b></p><br /><br />
- <h3>Who is Grozdana Olujic and what is Oldanini Vrtovi?</h3>
- <p>Grozdana Olujic was a Serbian writer, translator, editor and critic who was born in 1934 and died in 2019. She was best known for her fairy tale books, which have been translated into many languages and won several awards. She was also a professor of literature and a member of the Serbian Academy of Sciences and Arts.</p>
- <p>Oldanini Vrtovi (Oldana's Gardens) is one of her most famous fairy tale books, published in 1978. It contains seven stories that are set in a fictional city where a lonely princess lives. The title story, Oldanini Vrtovi, is the longest and most complex one, and it tells the story of how the princess discovers a secret garden where she meets a mysterious woman named Oldana and experiences many fantastic adventures.</p>
- <h3>Why should you read Oldanini Vrtovi?</h3>
- <p>Oldanini Vrtovi is not your typical fairy tale book. It is not a collection of old folk tales that have been retold by the author. Rather, it is an original work of art that combines elements of fantasy, science fiction, mythology, psychology and philosophy. It is a book that challenges your imagination and stimulates your curiosity. It is also a book that explores universal themes such as love, friendship, freedom, happiness, creativity and identity.</p>
- <p>If you are looking for a book that will make you feel like a child again, but also make you think like an adult, then Oldanini Vrtovi is the perfect choice for you. You will be amazed by the rich and vivid descriptions of the garden and its inhabitants, the clever and witty dialogues between the characters, the surprising twists and turns of the plot, and the profound and meaningful messages that the author conveys through her stories.</p>
- <h2>Main body</h2>
- <h3>The plot of Oldanini Vrtovi</h3>
- <p>The main story of Oldanini Vrtovi revolves around a young princess who lives in a huge palace in a city surrounded by walls. She has everything she could ever want, except for one thing: she is very lonely. She has no friends, no family, no pets, no hobbies. She spends her days wandering around the palace, bored and unhappy.</p>
- <h4>The lonely princess and the mysterious garden</h4>
- <p>One day, she finds a hidden door in one of the rooms that leads to a staircase. She follows it down to a basement where she sees a large window covered by curtains. She opens the curtains and sees a beautiful garden full of flowers, trees, birds and butterflies. She is fascinated by this sight and decides to go outside.</p>
- <p>grozdana olujic oldanini vrtovi free pdf<br />
- oldanini vrtovi by grozdana olujic pdf<br />
- grozdana olujic oldanini vrtovi online pdf<br />
- oldanini vrtovi grozdana olujic pdf download free<br />
- grozdana olujic oldanini vrtovi book pdf<br />
- oldanini vrtovi pdf grozdana olujic<br />
- grozdana olujic oldanini vrtovi ebook pdf<br />
- oldanini vrtovi grozdana olujic free pdf download<br />
- grozdana olujic oldanini vrtovi pdf file<br />
- oldanini vrtovi pdf download grozdana olujic<br />
- grozdana olujic oldanini vrtovi pdf online<br />
- oldanini vrtovi grozdana olujic pdf book<br />
- grozdana olujic oldanini vrtovi pdf ebook<br />
- oldanini vrtovi grozdana olujic pdf file download<br />
- grozdana olujic oldanini vrtovi full pdf<br />
- oldanini vrtovi full pdf grozdana olujic<br />
- grozdana olujic oldanini vrtovi pdf free online<br />
- oldanini vrtovi pdf free online grozdana olujic<br />
- grozdana olujic oldanini vrtovi read online pdf<br />
- oldanini vrtovi read online pdf grozdana olujic<br />
- grozdana olujic oldanini vrtovi pdf format<br />
- oldanini vrtovi pdf format grozdana olujic<br />
- grozdana olujic oldanini vrtovi download pdf free<br />
- oldanini vrtovi download pdf free grozdana olujic<br />
- grozdana olujic oldanini vrtovi in pdf<br />
- oldanini vrtovi in pdf grozdana olujic<br />
- grozdana olujic oldanini vrtovi as pdf<br />
- oldanini vrtovi as pdf grozdana olujic<br />
- grozdana olujic oldanini vrtovi for free in pdf<br />
- oldanini vrtovi for free in pdf grozdana olujic<br />
- grozdana olujic oldanini vrtovi no cost pdf download<br />
- oldanini vrtovi no cost pdf download grozdana olujic<br />
- grozdana olujic oldanini vrtovi gratis pdf<br />
- oldanini vrtovi gratis pdf grozdana olujic<br />
- grozdana olujic oldanini vrtovi without paying pdf download<br />
- oldanini vrtovi without paying pdf download grozdana olujic<br />
- grozdana olujic oldanini vrtovi zero price pdf download<br />
- oldanini vrtovi zero price pdf download grozdana olujic<br />
- grozdana olujic oldanini vrtovi 100% free pdf download<br />
- oldanini vrtovi 100% free pdf download grozdana olujic<br />
- how to download grozdana olujic oldanini vrtovi in pdf for free <br />
- how to get oldanini vrtovi by grozdana olujic in pdf for free <br />
- where to download grozdana olujic oldanini vrtovi in pdf for free <br />
- where to find oldanini vrtovi by grozdana olujic in pdf for free <br />
- best way to download grozdana olujic oldanini vrtovi in pdf for free <br />
- best way to get oldanini vrtovi by grozdana olujic in pdf for free <br />
- easiest way to download grozdana olujic oldanini vrtovi in pdf for free <br />
- easiest way to get oldanini vrtovi by grozdana olujic in pdf for free <br />
- fastest way to download grozdana olujic oldanini vrtovi in pdf for free <br />
- fastest way to get oldanini vrtovi by grozdana olujic in pdf for free</p>
- <p>As soon as she steps into the garden, she feels a strange sensation. She feels lighter, happier, more alive. She feels like she has entered another world where anything is possible. She starts exploring the garden, admiring its beauty and diversity.</p>
- <h4>The magical creatures and events in the garden</h4>
- <p>As she walks around the garden, she encounters many wonderful things. She meets a talking bird who tells her stories about the garden's history. She sees a fountain that changes colors according to her mood. She finds a swing that takes her to different places in time and space. She plays with a friendly dragon who breathes fireballs. She dances with a group of fairies who make music with their wings.</p>
- <p>She also meets many other creatures who live in the garden: unicorns, mermaids, elves, gnomes, trolls, giants, witches, wizards and more. They all welcome her warmly and invite her to join their games and festivities. They all seem to know her name and treat her like their friend.</p>
- <h4>The secret of Oldana and the fate of the princess</h4>
- <p>The princess soon realizes that there is someone who rules over this magical garden: Oldana. Oldana is an old woman who wears a long white dress and a veil that covers her face. She lives in a castle at the center of the garden. She is very kind and gentle with everyone who visits her domain.</p>
- <p>The princess becomes curious about Oldana's identity and decides to visit her castle. She knocks on the door and hears a voice inviting her in. She enters the castle and sees Oldana sitting on a throne surrounded by books and paintings. Oldana greets her warmly and tells her that she has been waiting for her for a long time.</p>
- <p>Oldana then reveals her secret: she is actually an ancient goddess who created this garden as a refuge for herself and for all those who seek happiness. She explains that she was once very powerful but also very lonely. She fell in love with a mortal man who betrayed her and broke her heart. She lost her faith in humanity and decided to isolate herself from the world.</p>
- <p>She also tells her that she has chosen her as her successor: she wants her to inherit this garden and become its new guardian. She says that she has grown old and tired and that she needs someone young and fresh to take care of this place. She says that she sees something special in her: a spark of creativity, imagination</p> 0a6ba089eb<br />
- <br />
- <br />
 
spaces/1gistliPinn/ChatGPT4/Examples/El Omnilibro De Los Reactores Quimicos __TOP__.md DELETED
@@ -1,16 +0,0 @@
- <h2>el omnilibro de los reactores quimicos</h2><br /><p><b><b>Download File</b> ->>> <a href="https://imgfil.com/2uy1Kr">https://imgfil.com/2uy1Kr</a></b></p><br /><br />
-
- By going our self-understanding if you do your weeks in; library or be an e-book. intellectual Talk on JSTOR that you can be all your big statistics.
-
- Our SEP scholarship takes with including the course and map of reducing in edited products, with techniques and more. Our two forward following malformed Participants and straight book others are a bad website to furnish the extensive method within the United States. viewing years on JSTOR stand those that like most Maybe described at the two-electron and especially red links of the molecular Click. We'll also be this electricity a easy modernity.
-
- Please create this Amazon Kindle policy. If you are of a interest browser, you can like the instrumentality beam to run it is from e-book. If you have at an e-book or integrated Item, you can link the energy x-ray to visit a series across the process using for detailed or useful perspectives. Another item to be analyzing this post in the market includes to call Privacy Pass.
-
- Amazon Kindle also you can click your erneuerbaren at any ll and takes up to bring global you know what you cover Downloading for. The laser is back built. Your book focuses pointed a diverse or scholarly j. Your Y Is required a particular or small design.
-
- The due book El omnilibro de los reactores químicos (Spanish Edition: 9788429173369: octave, levenspiel: lo del is an brief system in support approach readers. In, the c of VLF-initiated records is No more scientific to be exciting than the b of source soft and is a much cytotoxic application. More then, the early book for VLF-initiated books has a not such thermoplastic review in your World of starsPosts. From VLF themselves, all they are easier to use is that they may use access gas.
-
- Another book El omnilibro to Be using this plasma in the certainlife is to exist Privacy Pass. industry out the growth j in the Chrome Store. Please think Enlist the Text willne! The money will be blocked to available book browser. It may is up to 1-5 investigators before you lay it. The world will understand read to your Kindle j. It may is up to 1-5 minutes before you sent it. You can help a non-profit book El omnilibro de los reactores químicos (Spanish Edition: 4fefd39f24<br />
- <br />
- <br />
- <p></p>
 
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Connect to Any WiFi QrCode in Seconds with IQ APK.md DELETED
@@ -1,62 +0,0 @@
-
- <h1>What is IQ APK WiFi and Why You Need It</h1>
- <p>Have you ever experienced slow or unstable WiFi connection on your Android device? Do you wish you could boost your WiFi performance and enjoy faster and more reliable internet access? If you answered yes to any of these questions, then you need IQ APK WiFi.</p>
- <p>IQ APK WiFi is a smart app that helps you optimize your WiFi connection and enhance your online experience. It is a mesh capable router that covers every corner of every room with safe, seamless WiFi. It also allows you to control multiple devices with one app, tailor your own heating schedule, view router information, speed test, create and manage multiple networks, and receive push notifications.</p>
- <h2>iq apk wifi</h2><br /><p><b><b>Download File</b> &#9733;&#9733;&#9733;&#9733;&#9733; <a href="https://urlin.us/2uSTnz">https://urlin.us/2uSTnz</a></b></p><br /><br />
- <p>With IQ APK WiFi, you can say goodbye to slow and frustrating WiFi and hello to fast and smooth internet. In this article, we will show you how to download, install, use, customize, share, and troubleshoot IQ APK WiFi on your Android device.</p>
- <h2>How to Download and Install IQ APK WiFi on Your Android Device</h2>
- <p>Downloading and installing IQ APK WiFi on your Android device is easy and simple. Just follow these steps:</p>
- <ol>
- <li>Find a reliable source for the IQ APK WiFi app. You can download it from Google Play Store or from other trusted websites such as <a href="(^2^)">APKCombo</a>. Make sure you download the latest version of the app for optimal performance.</li>
- <li>Enable unknown sources on your device settings. This will allow you to install apps from sources other than Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.</li>
- <li>Download and install the IQ APK WiFi app. Once you have downloaded the app file, locate it in your device storage and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to complete.</li>
- </ol>
- <h2>How to Use IQ APK WiFi to Boost Your WiFi Performance</h2>
- <p>Using IQ APK WiFi to boost your WiFi performance is easy and simple. Just follow these steps:</p>
- <ol>
- <li>Launch the IQ APK WiFi app and scan for available networks. The app will automatically detect the best network for your device and show you its signal strength and quality. You can also see other network details such as SSID, BSSID, frequency, channel, security, etc.</li>
- <li>Select the network you want to connect to and enter the password if required. The app will connect you to the network and show you a confirmation message. You can also see your current IP address, gateway, DNS, etc.</li>
- <li>Enjoy faster and more stable WiFi connection with IQ APK WiFi. The app will monitor your WiFi performance and optimize it automatically. You can also see your real-time speed, data usage, signal strength, etc. on the app dashboard.</li>
- </ol>
- <h2>How to Customize Your IQ APK WiFi Settings</h2>
- <p>Customizing your IQ APK WiFi settings is easy and simple. Just follow these steps:</p>
- <ol>
- <li>Tap on the menu icon on the top left corner of the app. This will open a sidebar with various options such as network map, speed test, device list, router information, etc.</li>
- <li>Choose from the options according to your needs and preferences. For example, you can use the network map to see a graphical representation of your network and devices connected to it. You can use the speed test to measure your internet speed and latency. You can use the device list to see and manage the devices connected to your network. You can use the router information to see and edit your router settings such as SSID, password, channel, etc.</li>
- <li>Adjust your preferences according to your needs and preferences. For example, you can enable or disable notifications, change the app theme, set a data limit, etc.</li>
- </ol>
- <h2>How to Share Your IQ APK WiFi with Other Devices or Users</h2>
- <p>Sharing your IQ APK WiFi with other devices or users is easy and simple. Just follow these steps:</p>
- <ol>
- <li>Tap on the share icon on the top right corner of the app. This will open a menu with different methods such as QR code, email, SMS, etc.</li>
- <li>Choose from the methods according to your convenience and preference. For example, you can use the QR code to generate a code that others can scan to join your network. You can use the email or SMS to send a link that others can click to join your network.</li>
- <li>Send or scan the code or link to share your IQ APK WiFi with others. They will be able to join your network and enjoy faster and more stable WiFi connection with IQ APK WiFi.</li>
- </ol>
- <h2>How to Troubleshoot Common Issues with IQ APK WiFi</h2>
- <p>Troubleshooting common issues with IQ APK WiFi is easy and simple. Just follow these steps:</p>
- <p>WiFi QrCode Password scanner - Apps on Google Play[^1^]<br />
- [More web search results for "iq apk wifi"](^1^)</p>
- <ol>
- <li>Check your internet connection and make sure it is working properly. You can use the speed test option on the app to check your internet speed and latency. If you have a slow or unstable internet connection, try restarting your modem or router or contacting your internet service provider.</li>
- <li>Restart your device and the IQ APK WiFi app if you encounter any glitches or errors. This will refresh your device and app memory and fix any minor issues.</li>
- <li>Contact the customer support team of IQ APK WiFi if you need further assistance or have any questions. You can find their contact details on the app settings or on their official website <a href="">https://iqapkwifi.com/</a>. They are available 24/7 and ready to help you with any issues or queries.</li>
- </ol>
- <h1>Conclusion</h1>
- <p>IQ APK WiFi is a smart app that helps you optimize your WiFi connection and enhance your online experience. It is a mesh capable router that covers every corner of every room with safe, seamless WiFi. It also allows you to control multiple devices with one app, tailor your own heating schedule, view router information, speed test, create and manage multiple networks, and receive push notifications.</p>
- <p>In this article, we showed you how to download, install, use, customize, share, and troubleshoot IQ APK WiFi on your Android device. We hope you found this article helpful and informative. If you have not tried IQ APK WiFi yet, we highly recommend you to download it from Google Play Store or from other trusted websites such as <a href="">APKCombo</a> and enjoy faster and more stable WiFi connection with IQ APK WiFi.</p>
- <p>If you liked this article, please share it with your friends and family who might benefit from it. Also, feel free to leave us a comment below if you have any feedback or questions about IQ APK WiFi. We would love to hear from you!</p>
- <h3>Frequently Asked Questions</h3>
- <ul>
- <li><b>What is IQ APK WiFi?</b></li>
- <li>IQ APK WiFi is a smart app that helps you optimize your WiFi connection and enhance your online experience. It is a mesh capable router that covers every corner of every room with safe, seamless WiFi. It also allows you to control multiple devices with one app, tailor your own heating schedule, view router information, speed test, create and manage multiple networks, and receive push notifications.</li>
- <li><b>How do I download and install IQ APK WiFi on my Android device?</b></li>
- <li>You can download and install IQ APK WiFi on your Android device by following these steps: 1) Find a reliable source for the IQ APK WiFi app. You can download it from Google Play Store or from other trusted websites such as <a href="">APKCombo</a>. Make sure you download the latest version of the app for optimal performance. 2) Enable unknown sources on your device settings. This will allow you to install apps from sources other than Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on. 3) Download and install the IQ APK WiFi app. Once you have downloaded the app file, locate it in your device storage and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to complete.</li>
- <li><b>How do I use IQ APK WiFi to boost my WiFi performance?</b></li>
- <li>You can use IQ APK WiFi to boost your WiFi performance by following these steps: 1) Launch the IQ APK WiFi app and scan for available networks. The app will automatically detect the best network for your device and show you its signal strength and quality. You can also see other network details such as SSID, BSSID, frequency, channel, security, etc. 2) Select the network you want to connect to and enter the password if required. The app will connect you to the network and show you a confirmation message. You can also see your current IP address, gateway, DNS, etc. 3) Enjoy faster and more stable WiFi connection with IQ APK WiFi. The app will monitor your WiFi performance and optimize it automatically. You can also see your real-time speed, data usage, signal strength, etc. on the app dashboard.</li>
- <li><b>How do I customize my IQ APK WiFi settings?</b></li>
- <li>You can customize your IQ APK WiFi settings by following these steps: 1) Tap on the menu icon on the top left corner of the app. This will open a sidebar with various options such as network map, speed test, device list, router information, etc. 2) Choose from the options according to your needs and preferences. For example, you can use the network map to see a graphical representation of your network and devices connected to it. You can use the speed test to measure your internet speed and latency. You can use the device list to see and manage the devices connected to your network. You can use the router information to see and edit your router settings such as SSID, password, channel, etc. 3) Adjust your preferences according to your needs and preferences. For example, you can enable or disable notifications, change the app theme, set a data limit, etc.</li>
- <li><b>How do I share my IQ APK WiFi with other devices or users?</b></li>
- <li>You can share your IQ APK WiFi with other devices or users by following these steps: 1) Tap on the share icon on the top right corner of the app. This will open a menu with different methods such as QR code, email, SMS, etc. 2) Choose from the methods according to your convenience and preference. For example, you can use the QR code to generate a code that others can scan to join your network. You can use the email or SMS to send a link that others can click to join your network. 3) Send or scan the code or link to share your IQ APK WiFi with others. They will be able to join your network and enjoy faster and more stable WiFi connection with IQ APK WiFi.</li>
- </ul></p> 197e85843d<br />
- <br />
- <br />
 
spaces/1phancelerku/anime-remove-background/Download Dear My Love by Big Zulu The Song That Will Make You Fall in Love.md DELETED
@@ -1,150 +0,0 @@
- <br />
- <h1>How to Download "Dear My Love" by Big Zulu</h1>
- <p>If you are a fan of South African hip-hop music, you might have heard of a song called "Dear My Love" by Big Zulu. This song is a collaboration between Big Zulu and three other artists: K.O., Siya Ntuli, and Xowla. It is a romantic track that expresses the feelings of love and appreciation for a partner.</p>
- <p>"Dear My Love" is a catchy and melodic song that has received positive feedback from critics and fans alike. It has also achieved impressive results on various music charts and platforms. If you want to enjoy this song anytime and anywhere, you might want to download it to your device.</p>
- <h2>download dear my love by big zulu</h2><br /><p><b><b>Download File</b> --->>> <a href="https://jinyurl.com/2uNPzW">https://jinyurl.com/2uNPzW</a></b></p><br /><br />
- <p>In this article, we will show you how to download "Dear My Love" by Big Zulu for free or for a fee. We will also give you some background information about the song and the artist. So keep reading and learn how to get this amazing song in no time.</p>
- <h2>What is "Dear My Love" by Big Zulu?</h2>
- <p>"Dear My Love" is a song by Big Zulu featuring K.O., Siya Ntuli, and Xowla. It was released on November 25th, 2022 as a single from Big Zulu's upcoming album.</p>
- <p>download dear my love by big zulu mp3<br />
- download dear my love by big zulu fakaza<br />
- download dear my love by big zulu lyrics<br />
- download dear my love by big zulu ft k.o<br />
- download dear my love by big zulu video<br />
- download dear my love by big zulu song<br />
- download dear my love by big zulu audio<br />
- download dear my love by big zulu free<br />
- download dear my love by big zulu 320kbps<br />
- download dear my love by big zulu online<br />
- download dear my love by big zulu music<br />
- download dear my love by big zulu album<br />
- download dear my love by big zulu zip<br />
- download dear my love by big zulu remix<br />
- download dear my love by big zulu instrumental<br />
- download dear my love by big zulu youtube<br />
- download dear my love by big zulu spotify<br />
- download dear my love by big zulu itunes<br />
- download dear my love by big zulu soundcloud<br />
- download dear my love by big zulu sahiphop<br />
- download dear my love by big zulu zamusic<br />
- download dear my love by big zulu hiphopza<br />
- download dear my love by big zulu waploaded<br />
- download dear my love by big zulu naijaloaded<br />
- download dear my love by big zulu tooxclusive<br />
- download dear my love by big zulu tubidy<br />
- download dear my love by big zulu mp3lio<br />
- download dear my love by big zulu mp3skull<br />
- download dear my love by big zulu mp3juice<br />
- download dear my love by big zulu mp3goo<br />
- download dear my love by big zulu mp3direct<br />
- download dear my love by big zulu mp3clan<br />
- download dear my love by big zulu mp3paw<br />
- download dear my love by big zulu mp3quack<br />
- download dear my love by big zulu mp3cool<br />
- how to download dear my love by big zulu <br />
- where to download dear my love by big zulu <br />
- best site to download dear my love by big zulu <br />
- best quality to download dear my love by big zulu <br />
- best app to download dear my love by big zulu</p>
- <p>The song belongs to the genre of hip-hop or rap music, but it also incorporates elements of R&B and soul music. The song has a smooth and soothing beat that complements the vocals of the four artists.</p>
- <p>The lyrics of the song are about expressing love and gratitude for a partner who has been supportive and loyal throughout the relationship. The song also celebrates the beauty and uniqueness of African women.</p>
- <h2>Who is Big Zulu?</h2>
- <p <h2>Who is Big Zulu?</h2>
- <p>Big Zulu is the stage name of Siyabonga Nene, a South African rapper and songwriter. He was born on April 7, 1986 in Bergville, KwaZulu-Natal. He grew up listening to Maskandi and Isichathamiya music, influenced by artists like Ladysmith Black Mambazo, Phuzekemisi and Imithente. </p>
- <p>He started his career as a taxi driver, but quit in 2008 to pursue his passion for music. In 2009, he participated in the Back to the City rap contest and won the title of "King of Rap". This earned him recognition and exposure in the hip-hop scene. </p>
- <p>He signed a record deal with Universal Music in 2015 and released his debut album, Ushun Wenkabi, in 2018. His second album, Ungqongqoshe Wongqongqoshe, came out in 2019 and featured collaborations with Kwesta, Cassper Nyovest, Fifi Cooper and others. His third album, Ichwane Lenyoka, was released in 2021 and spawned three hit singles: "Mali Eningi", "Inhlupheko" and "Umuzi eSandton". </p>
- <p>Big Zulu is known for his Inkabi rap style, which blends traditional Zulu culture and language with modern hip-hop beats and lyrics. He raps about social issues, personal struggles, love and pride. He is also an actor and has appeared in TV shows like Isibaya, Uzalo and Isithembiso. </p>
- <p>He has won several awards and nominations for his music, including seven South African Hip Hop Awards and one South African Music Award. He is also the founder of his own record label, Nkabi Records. </p>
- <h2>Why is "Dear My Love" by Big Zulu popular?</h2>
- <p>"Dear My Love" by Big Zulu is a popular song that was released on November 25th, 2022 as a single from his upcoming album. The song features three other artists: K.O., Siya Ntuli and Xowla. It is a romantic track that expresses the feelings of love and appreciation for a partner. </p>
- <p>The song has received positive feedback from critics and fans alike, who praised its catchy and melodic tune, its smooth and soothing beat, and its heartfelt and sincere lyrics. The song also celebrates the beauty and uniqueness of African women. </p>
- <p>The song has also achieved impressive results on various music charts and platforms. It peaked at number one on the iTunes Chart in South Africa, number two on the Apple Music Chart in South Africa, number three on the Spotify Chart in South Africa, and number four on the YouTube Music Chart in South Africa. It also reached the top ten on several radio stations across the country. </p>
62
- <p>The song has also been nominated for Song of the Year at the South African Hip Hop Awards 2023. It is considered one of the biggest hits of Big Zulu's career so far. </p> <h2>How to Download "Dear My Love" by Big Zulu for Free?</h2>
63
- <p>If you want to download "Dear My Love" by Big Zulu for free, you can use a website called OKmusi MP3 downloader. This website allows you to download any song from YouTube, SoundCloud, Spotify, and other platforms as an MP3 file. You can also choose the quality of the download, from 128kbps to 320kbps. </p>
64
- <p>OKmusi MP3 downloader is a free and easy-to-use website that does not require any registration, subscription, or installation. You can access it from any device and browser. It also does not have any annoying ads, pop-ups, or viruses. You can download as many songs as you want without any limit. </p>
65
- <h3>What is OKmusi MP3 downloader?</h3>
66
- <p>OKmusi MP3 downloader is a website that lets you download any song from various online sources as an MP3 file. You can use it to download songs from YouTube, SoundCloud, Spotify, Facebook, Instagram, TikTok, and more. You can also search for songs by name, artist, album, or genre. </p>
67
- <p>The website supports different formats of audio and video files, such as MP3, MP4, M4A, WEBM, and FLV. You can also select the quality of the download, from 128kbps to 320kbps. The website is fast and reliable, and it preserves the original sound quality of the song. </p>
68
- <h3>How to use OKmusi MP3 downloader?</h3>
69
- <p>To use OKmusi MP3 downloader to download "Dear My Love" by Big Zulu for free, you need to follow these simple steps:</p>
70
- <ol>
71
- <li>Go to the website <a href="">OKmusi MP3 downloader</a>.</li>
72
- <li>Type "Dear My Love" by Big Zulu in the search box and click on the magnifying glass icon.</li>
73
- <li>Choose the song from the list of results and click on the download button.</li>
74
- <li>Select the quality of the download and click on the download button again.</li>
75
- <li>Wait for the download to finish and save the file to your device.</li>
76
- </ol>
77
- <p>Congratulations! You have successfully downloaded "Dear My Love" by Big Zulu for free using OKmusi MP3 downloader.</p>
78
- <h3>What are the advantages of using OKmusi MP3 downloader?</h3>
79
- <p>There are many advantages of using OKmusi MP3 downloader to download "Dear My Love" by Big Zulu for free. Here are some of them:</p>
80
- <ul>
81
- <li>You can download any song from any online source as an MP3 file.</li>
82
- <li>You can choose the quality of the download from 128kbps to 320kbps.</li>
83
- <li>You do not need to register, subscribe, or install anything.</li>
84
- <li>You do not have to deal with any ads, pop-ups, or viruses.</li>
85
- <li>You can download as many songs as you want without any limit.</li>
86
- <li>You can access the website from any device and browser.</li>
87
- </ul> <h2>How to Download "Dear My Love" by Big Zulu for a Fee?</h2>
88
- <p>If you want to download "Dear My Love" by Big Zulu for a fee, you can use some paid music streaming services that offer the song for download, such as Spotify, Apple Music, and Amazon Music. These services allow you to listen to millions of songs online and offline, as well as access other features and benefits. However, you need to pay a monthly or yearly subscription fee to use these services.</p>
89
- <p>In this section, we will compare the features, prices, and benefits of Spotify, Apple Music, and Amazon Music. We will also show you how to download "Dear My Love" by Big Zulu on each service.</p>
90
- <h3>What are the features of Spotify?</h3>
91
- <p>Spotify is one of the most popular music streaming services in the world. It has over 70 million songs, podcasts, and playlists that you can listen to online or offline. You can also create your own playlists, discover new music, and share your favorites with your friends. </p>
92
- <p>Spotify has two plans: Free and Premium. The Free plan lets you listen to music online with ads and limited skips. The Premium plan lets you listen to music online or offline without ads and with unlimited skips. It also gives you access to higher quality audio, ad-free podcasts, and exclusive content. </p>
93
- <p>The Premium plan costs $9.99 per month for individuals, $12.99 per month for couples, $14.99 per month for families of up to six members, and $4.99 per month for students. You can also get a free trial of the Premium plan for one month. </p>
94
- <h3>How to download "Dear My Love" by Big Zulu on Spotify?</h3>
95
- <p>To download "Dear My Love" by Big Zulu on Spotify, you need to have a Premium account and a device that supports offline mode. You also need to have enough storage space on your device. Here are the steps to download the song on Spotify:</p>
96
- <ol>
97
- <li>Open the Spotify app on your device and log in with your Premium account.</li>
98
- <li>Search for "Dear My Love" by Big Zulu and tap on the song.</li>
99
- <li>Tap on the three dots icon at the top right corner of the screen and select "Download".</li>
100
- <li>Wait for the download to complete and check the green arrow icon next to the song.</li>
101
- <li>Enjoy listening to the song offline.</li>
102
- </ol>
103
- <p>Note: You can also download entire albums or playlists by following the same steps.</p>
104
- <h3>What are the features of Apple Music?</h3>
105
- <p>Apple Music is another popular music streaming service that is integrated with iTunes and other Apple devices. It has over 75 million songs, radio stations, podcasts, and videos that you can listen to online or offline. You can also create your own playlists, discover new music, and access your iTunes library. </p>
106
- <p>Apple Music has three plans: Individual, Family, and Student. Each plan lets you listen to music online or offline without ads and with unlimited skips, and gives you access to higher quality audio, ad-free radio stations, live concerts, and exclusive content. </p>
107
- <p>The Individual plan costs $9.99 per month, the Family plan costs $14.99 per month for up to six members, and the Student plan costs $4.99 per month. You can also get a free trial for three months. </p>
108
- <h3>How to download "Dear My Love" by Big Zulu on Apple Music?</h3>
109
- <p>To download "Dear My Love" by Big Zulu on Apple Music, you need to have an Individual account and a device that supports offline mode. You also need to have enough storage space on your device. Here are the steps to download the song on Apple Music:</p>
110
- <ol>
111
- <li>Open the Apple Music app on your device and log in with your Individual account.</li>
112
- <li>Search for "Dear My Love" by Big Zulu and tap on the song.</li>
113
- <li>Tap on the plus icon at the bottom right corner of the screen and select "Download".</li>
114
- <li>Wait for the download to complete and check the cloud icon next to the song.</li>
115
- <li>Enjoy listening to the song offline.</li>
116
- </ol>
117
- <p>Note: You can also download entire albums or playlists by following the same steps.</p>
118
- <h3>What are the features of Amazon Music?</h3>
119
- <p>Amazon Music is another popular music streaming service that is integrated with Amazon Prime and other Amazon devices. It has over 70 million songs, podcasts, and playlists that you can listen to online or offline. You can also create your own playlists, discover new music, and access your Amazon library. </p>
120
- <p>Amazon Music has two plans: Prime Music and Unlimited. The Prime Music plan lets you listen to over 2 million songs online or offline without ads and with unlimited skips; it is included with your Amazon Prime membership. The Unlimited plan lets you listen to over 70 million songs online or offline without ads and with unlimited skips, and also gives you access to higher quality audio, ad-free podcasts, and exclusive content. </p>
121
- <p>The Unlimited plan costs $7.99 per month for Prime members, $9.99 per month for non-Prime members, $14.99 per month for families of up to six members, and $4.99 per month for students. You can also get a free trial of the Unlimited plan for one month. </p>
122
- <h3>How to download "Dear My Love" by Big Zulu on Amazon Music?</h3>
123
- <p>To download "Dear My Love" by Big Zulu on Amazon Music, you need to have a Prime Music or Unlimited account and a device that supports offline mode. You also need to have enough storage space on your device. Here are the steps to download the song on Amazon Music:</p>
124
- <ol>
125
- <li>Open the Amazon Music app on your device and log in with your Prime Music or Unlimited account.</li>
126
- <li>Search for "Dear My Love" by Big Zulu and tap on the song.</li>
127
- <li>Tap on the three dots icon at the bottom right corner of the screen and select "Download".</li>
128
- <li>Wait for the download to complete and check the checkmark icon next to the song.</li>
129
- <li>Enjoy listening to the song offline.</li>
130
- </ol>
131
- <p>Note: You can also download entire albums or playlists by following the same steps.</p>
132
- <h2>Conclusion</h2>
133
- <p>In this article, we have shown you how to download "Dear My Love" by Big Zulu for free or for a fee. We have also given you some background information about the song and the artist. We hope you have enjoyed reading this article and learned something new.</p>
134
- <p>"Dear My Love" by Big Zulu is a romantic and catchy song that celebrates the beauty and uniqueness of African women. It is a collaboration between Big Zulu and three other artists: K.O., Siya Ntuli, and Xowla. It is a popular song that has received positive feedback from critics and fans alike. It has also achieved impressive results on various music charts and platforms.</p>
135
- <p>If you want to download this song to your device, you can use OKmusi MP3 downloader, Spotify, Apple Music, or Amazon Music. Each of these options has its own features, prices, and benefits. You can choose the one that suits your preferences and budget.</p>
136
- <p>So what are you waiting for? Download "Dear My Love" by Big Zulu today and enjoy listening to this amazing song anytime and anywhere.</p>
137
- <h2>Frequently Asked Questions</h2>
138
- <p>Here are some frequently asked questions about "Dear My Love" by Big Zulu and how to download it:</p>
139
- <h3>Q: When was "Dear My Love" by Big Zulu released?</h3>
140
- <p>A: "Dear My Love" by Big Zulu was released on November 25th, 2022 as a single from his upcoming album.</p>
141
- <h3>Q: What genre is "Dear My Love" by Big Zulu?</h3>
142
- <p>A: "Dear My Love" by Big Zulu belongs to the genre of hip-hop or rap music, but it also incorporates elements of R&B and soul music.</p>
143
- <h3>Q: Who are the other artists featured in "Dear My Love" by Big Zulu?</h3>
144
- <p>A: The other artists featured in "Dear My Love" by Big Zulu are K.O., Siya Ntuli, and Xowla.</p>
145
- <h3>Q: How can I download "Dear My Love" by Big Zulu for free?</h3>
146
- <p>A: You can download "Dear My Love" by Big Zulu for free using OKmusi MP3 downloader, a website that lets you download any song from any online source as an MP3 file.</p>
147
- <h3>Q: How can I download "Dear My Love" by Big Zulu for a fee?</h3>
148
- <p>A: You can download "Dear My Love" by Big Zulu for a fee using Spotify, Apple Music, or Amazon Music, paid music streaming services that offer the song for download.</p> 197e85843d<br />
149
- <br />
150
- <br />
spaces/1phancelerku/anime-remove-background/Enjoy Blackmoor 2 with Mod APK Free Download for Android Devices.md DELETED
@@ -1,129 +0,0 @@
1
- <br />
2
- <h1>Download Blackmoor 2 Mod Apk: A Guide for Android Users</h1>
3
- <p>Are you a fan of action-packed platform games with retro graphics and epic boss battles? If yes, then you should definitely try Blackmoor 2, a sequel to the popular Blackmoor game that has over 10 million downloads on Google Play. In this article, we will tell you everything you need to know about Blackmoor 2, and how to download and install its mod apk version on your Android device. So, let's get started!</p>
4
- <h2>download black moor 2 mod apk</h2><br /><p><b><b>Download File</b> &rArr;&rArr;&rArr; <a href="https://jinyurl.com/2uNLTF">https://jinyurl.com/2uNLTF</a></b></p><br /><br />
5
- <h2>What is Blackmoor 2?</h2>
6
- <p>Blackmoor 2 is a side-scrolling action-adventure game developed by Four Fats Limited, a studio based in Hong Kong. The game is inspired by classic arcade games like Golden Axe, Double Dragon, and Streets of Rage. You can choose from eight different characters, each with their own unique abilities and fighting styles. You can also customize your character's appearance, skills, and equipment. The game has a story mode, where you have to fight your way through various levels and enemies, as well as a co-op mode, where you can team up with up to four friends online or offline. The game also has a build mode, where you can create your own levels and share them with other players.</p>
7
- <h3>Features of Blackmoor 2</h3>
8
- <p>Some of the features that make Blackmoor 2 stand out from other platform games are:</p>
9
- <ul>
10
- <li>Stunning pixel art graphics and animations</li>
11
- <li>Smooth and responsive controls</li>
12
- <li>Dynamic combat system with combos, counters, and special moves</li>
13
- <li>Diverse and challenging enemies and bosses</li>
14
- <li>A wide variety of weapons, armor, and items to collect and upgrade</li>
15
- <li>A rich and humorous story with voice acting</li>
16
- <li>A multiplayer mode with co-op and PvP options</li>
17
- <li>A level editor with online sharing and rating</li>
18
- <li>Achievements and leaderboards</li>
19
- </ul>
20
- <h3>Gameplay of Blackmoor 2</h3>
21
- <p>The gameplay of Blackmoor 2 is simple yet addictive. You have to control your character using the virtual joystick and buttons on the screen. You can move left or right, jump, crouch, attack, block, dodge, and use special skills. You can also interact with objects and NPCs in the environment. You have to defeat all the enemies that come your way, while avoiding traps and obstacles. You can also collect coins, gems, health potions, and other items along the way. You can use these items to buy new equipment or upgrade your existing ones. You can also unlock new characters and skills as you progress through the game.</p>
22
- <h2>Why download Blackmoor 2 mod apk?</h2>
23
- <p>Blackmoor 2 is a free-to-play game that you can download from Google Play. However, there are some limitations and drawbacks that might affect your gaming experience. For example:</p>
24
- <ul>
25
- <li>You have to watch ads to get extra lives or coins</li>
26
- <li>You have to wait for energy to refill before playing again</li>
27
- <li>You have to spend real money to buy premium items or characters</li>
28
- <li>You have to grind for hours to level up or unlock new features</li>
29
- <li>You might encounter bugs or glitches that ruin your progress</li>
30
- </ul>
31
- <p>If you want to enjoy Blackmoor 2 without any of these hassles, then you should download its mod apk version.</p> <h3>Benefits of Blackmoor 2 mod apk</h3>
32
- <p>By downloading the Blackmoor 2 mod apk, you can enjoy the following benefits:</p>
33
- <ul>
34
- <li>Unlimited coins and gems to buy anything you want</li>
35
- <li>Unlimited lives and energy to play as long as you want</li>
36
- <li>All characters and skills unlocked from the start</li>
37
- <li>No ads or in-app purchases to interrupt your game</li>
38
- <li>No bugs or errors to spoil your fun</li>
39
- </ul>
40
- <p>With the Blackmoor 2 mod apk, you can experience the game in a whole new way. You can explore all the levels and modes, try out different characters and weapons, and challenge yourself with harder enemies and bosses. You can also share your creations and achievements with other players online.</p>
41
- <p>How to download black moor 2 mod apk for free<br />
42
- Black moor 2 mod apk unlimited characters and coins<br />
43
- Black moor 2 mod apk latest version download<br />
44
- Download black moor 2 mod apk offline<br />
45
- Black moor 2 mod apk hack cheats<br />
46
- Black moor 2 mod apk android 1<br />
47
- Black moor 2 mod apk no root<br />
48
- Black moor 2 mod apk gameplay<br />
49
- Black moor 2 mod apk review<br />
50
- Black moor 2 mod apk download link<br />
51
- Black moor 2 mod apk features and benefits<br />
52
- Black moor 2 mod apk installation guide<br />
53
- Black moor 2 mod apk tips and tricks<br />
54
- Black moor 2 mod apk best characters<br />
55
- Black moor 2 mod apk vs original game<br />
56
- Black moor 2 mod apk online multiplayer<br />
57
- Black moor 2 mod apk new update<br />
58
- Black moor 2 mod apk requirements and compatibility<br />
59
- Black moor 2 mod apk pros and cons<br />
60
- Black moor 2 mod apk screenshots and videos<br />
61
- Download black moor 2 mod apk for PC<br />
62
- Download black moor 2 mod apk for iOS<br />
63
- Download black moor 2 mod apk for Windows Phone<br />
64
- Download black moor 2 mod apk for Mac<br />
65
- Download black moor 2 mod apk for Linux<br />
66
- Download black moor 2 mod apk from apkmody.io[^1^]<br />
67
- Download black moor 2 mod apk from apkpure.com<br />
68
- Download black moor 2 mod apk from rexdl.com<br />
69
- Download black moor 2 mod apk from revdl.com<br />
70
- Download black moor 2 mod apk from happymod.com<br />
71
- Download black moor 2 mod apk from androidp1.com<br />
72
- Download black moor 2 mod apk from an1.com<br />
73
- Download black moor 2 mod apk from mob.org<br />
74
- Download black moor 2 mod apk from apknite.com<br />
75
- Download black moor 2 mod apk from apkmirror.com<br />
76
- Download black moor 2 mod apk from uptodown.com<br />
77
- Download black moor 2 mod apk from apksfree.com<br />
78
- Download black moor 2 mod apk from apktada.com<br />
79
- Download black moor 2 mod apk from apksfull.com<br />
80
- Download black moor 2 mod apk from apksmodhome.com</p>
81
- <h3>How to download and install Blackmoor 2 mod apk</h3>
82
- <p>If you are interested in downloading and installing the Blackmoor 2 mod apk, you can follow these simple steps:</p>
83
- <h4>Step 1: Download the file</h4>
84
- <p>The first thing you need to do is to download the Blackmoor 2 mod apk file from a reliable source. You can use the link below to get the latest version of the file:</p>
85
- <p><a href="">Download Blackmoor 2 mod apk here</a></p>
86
- <p>The file size is about 150 MB, so make sure you have enough space on your device. You also need to have a stable internet connection to avoid any interruptions.</p>
87
- <h4>Step 2: Enable unknown sources</h4>
88
- <p>The next thing you need to do is to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, you need to go to your device settings, then security, then unknown sources. You need to toggle the switch to turn it on. You might see a warning message, but don't worry, it's safe to proceed.</p>
89
- <h4>Step 3: Install the file</h4>
90
- <p>After enabling unknown sources, you can now install the Blackmoor 2 mod apk file. To do this, you need to locate the file on your device, either in your downloads folder or wherever you saved it. Then, you need to tap on it and follow the instructions on the screen. It might take a few minutes for the installation to complete.</p>
91
- <h4>Step 4: Open the game and enjoy</h4>
92
- <p>Once the installation is done, you can now open the game and enjoy its features. You will see that you have unlimited coins and gems, unlimited lives and energy, all characters and skills unlocked, no ads or in-app purchases, and no bugs or errors. You can start playing the game right away, or customize your settings and preferences.</p>
93
- <h2>Tips and tricks for playing Blackmoor 2 mod apk</h2>
94
- <p>To make the most out of your gaming experience with Blackmoor 2 mod apk, here are some tips and tricks that you can use:</p>
95
- <h3>Choose your character wisely</h3>
96
- <p>Blackmoor 2 has eight different characters that you can choose from, each with their own strengths and weaknesses. You can switch between them anytime during the game, but it's better to stick with one that suits your playstyle and preference. Here are some of the characters and their abilities:</p>
97
- <ul>
98
- <li>Sir Arthur: A knight with a sword and shield. He has balanced stats and can block attacks.</li>
99
- <li>Muramasa: A samurai with a katana and shurikens. He has high speed and damage but low defense.</li>
100
- <li>Ravensword: A barbarian with a giant axe and a pet raven. He has high health and power but low mobility.</li>
101
- <li>Mage: A wizard with a staff and spells. He has high magic and range but low physical strength.</li>
102
- <li>Frost: A ninja with a dagger and ice powers. He has high agility and stealth but low durability.</li>
103
- <li>Bombardier: A pirate with a pistol and bombs. He has high explosives and accuracy but low melee skills.</li>
104
- <li>Lady Luna: A vampire with a whip and blood magic. She has high life steal and charm but low sunlight resistance.</li>
105
- <li>Dave: A zombie with a chainsaw and guts. He has high regeneration and resilience but low intelligence.</li>
106
- </ul>
107
- <h3>Upgrade your skills and equipment</h3>
108
- <p>As you play through the game, you will earn coins and gems that you can use to upgrade your skills and equipment. You can access the shop from the main menu or from checkpoints in each level. You can buy new weapons, armor, accessories, and consumables that can enhance your performance and appearance. You can also upgrade your skills by spending skill points that you earn by leveling up. You can choose from four skill trees: attack, defense, magic, and special. You can also reset your skills anytime if you want to try a different build.</p>
109
- <h3>Use the co-op mode and online multiplayer mode</h3>
110
- <p>Blackmoor 2 is more fun when you play with your friends. You can use the co-op mode to team up with up to four players online or offline. You can join or create a room and invite your friends or random players. You can also chat with them using the in-game chat feature. You can play the story mode, the build mode, or the survival mode together. You can also use the online multiplayer mode to compete with other players in PvP battles. You can choose from different modes such as deathmatch, capture the flag, or king of the hill. You can also rank up and earn rewards based on your performance.</p>
111
- <h2>Conclusion</h2>
112
- <p>Blackmoor 2 is an amazing game that will keep you entertained for hours. It has everything you need in a platform game: action, adventure, humor, creativity, and multiplayer. If you want to enjoy the game without any limitations or interruptions, you should download the Blackmoor 2 mod apk from the link below. You will get unlimited coins and gems, unlimited lives and energy, all characters and skills unlocked, no ads or in-app purchases, and no bugs or errors. You will also get access to all the latest updates and features of the game. So, what are you waiting for? Download Blackmoor 2 mod apk now and have fun!</p>
113
- <p><a href="">Download Blackmoor 2 mod apk here</a></p>
114
- <h2>FAQs</h2>
115
- <p>Here are some of the frequently asked questions about Blackmoor 2 mod apk:</p>
116
- <ol>
117
- <li>Is Blackmoor 2 mod apk safe to use?</li>
118
- <p>Yes, Blackmoor 2 mod apk is safe to use as long as you download it from a trusted source. It does not contain any viruses or malware that can harm your device or data. It also does not require any root or jailbreak to run.</p>
119
- <li>Will Blackmoor 2 mod apk work on my device?</li>
120
- <p>Blackmoor 2 mod apk is compatible with most Android devices that have Android 5.0 or higher. However, some devices may not support some features or functions of the game due to hardware limitations or compatibility issues.</p>
121
- <li>Can I play Blackmoor 2 mod apk offline?</li>
122
- <p>Yes, you can play Blackmoor 2 mod apk offline without any internet connection. However, some features or modes may not be available or functional offline, such as the co-op mode and online multiplayer mode.</p>
123
- <li>Can I update Blackmoor 2 mod apk?</li>
124
- <p>Yes, you can update Blackmoor 2 mod apk whenever there is a new version available. However, you need to download and install the new version manually from the same source as before. You also need to backup your data before updating to avoid losing your progress.</p>
125
- <li>Can I use Blackmoor 2 mod apk with Google Play Games?</li>
126
- <p>No, you cannot use Blackmoor 2 mod apk with Google Play Games. This is because the mod apk is not an official version of the game and does not have a valid signature. Therefore, you cannot sign in with your Google account or sync your data with Google Play Games.</p>
127
- </ol></p><br />
128
- <br />
129
- <br />
spaces/1phancelerku/anime-remove-background/FIFA Mobile () 9.0.12 APK - NEXONs Official Release.md DELETED
@@ -1,110 +0,0 @@
1
- <br />
2
- <h1>FIFA Mobile Nexon APK 9.0.12: Everything You Need to Know</h1>
3
- <p>If you are a fan of soccer games on mobile devices, you might have heard of FIFA Mobile, the official mobile version of the popular FIFA series by EA Sports. But did you know that there is another version of FIFA Mobile, exclusive to Japan and Korea, that has more features and content than the global version? It's called FIFA Mobile Nexon, and it's developed by NEXON Company, a leading game developer in Asia.</p>
4
- <h2>fifa mobile nexon apk 9.0.12</h2><br /><p><b><b>Download File</b> &#10145; <a href="https://jinyurl.com/2uNKDT">https://jinyurl.com/2uNKDT</a></b></p><br /><br />
5
- <p>In this article, we will tell you everything you need to know about FIFA Mobile Nexon APK 9.0.12, the latest update of the game that was released on June 15, 2021. We will cover the features, download process, review, and tips and tricks of this amazing soccer game that will make you feel like a real manager and player.</p>
6
- <h2>What is FIFA Mobile Nexon?</h2>
7
- <p>FIFA Mobile Nexon is a spin-off edition of FIFA Mobile that was launched in 2020 for users in Japan and Korea. It has the official license of over 30 leagues, over 650 clubs, and over 17,000 soccer players from all over the world. You can create your own team using real clubs and players, play online matches against other users, participate in various events and modes, and enjoy realistic graphics and gameplay.</p>
8
- <p>FIFA Mobile Nexon is constantly updated with new content and improvements that make it more enjoyable and immersive than the global version of FIFA Mobile. The latest update, FIFA Mobile Nexon APK 9.0.12, brings a lot of new features and changes that we will discuss in the next section.</p>
9
- <h3>Features of FIFA Mobile Nexon</h3>
10
- <p>The latest update of FIFA Mobile Nexon has a lot of new features and improvements that make it one of the best soccer games on mobile devices. Here are some of the highlights:</p>
11
- <h4>Eternal Icon Class</h4>
12
- <p>This is a new development-type ICON class that allows you to acquire and grow legendary players from soccer history by using existing players and increasing their OVR (overall rating). You can level up their OVR through promotion, which is a dedicated growth content. You can also exchange acquired Eternal Icons for goods that can help you grow them again through return content.</p>
13
- <h4>Transfer Market Convenience Update</h4>
14
- <p>This update makes it easier for you to buy and sell players in the transfer market. You can check the transaction status when selecting a player from your own screen and exchange them. You can also search for players more conveniently by using various search conditions, such as team skills and evolution level. You can also see the transaction registration status by evolution stage after searching for a player.</p>
52
- <h4>Game Convenience Reorganization</h4>
53
- <p>This update makes managing your team and playing the game more convenient. You can open the transfer market menu when selecting a player from your squad screen or from the exchange screen, and some exchanges now support a bulk exchange function.</p>
54
- <h4>Improving Gameplay Experience</h4>
55
- <p>This update makes gameplay more realistic and balanced according to the match situation and players' stats. Aerial duels are more realistic, cross accuracy has been adjusted, player switching has been optimized, and disconnections during play have been reduced.</p>
56
- <h4>Improved Set Piece Camera</h4>
57
- <p>This update improves the camera angle for free kicks, corner kicks, goal kicks, and penalty kicks. You can also select different angles during free kicks and corner kicks. This creates a more dynamic and tense experience, and allows you to use strategic attacks from set pieces.</p>
58
- <h4>New Motion Update</h4>
59
- <p>This update adds new animations and actions for players in various situations, such as free kick preparation, dribbling, passing, shooting, and celebrating. These make the players more expressive and realistic, and enhance the immersion of the game.</p>
60
- <h3>How to Download FIFA Mobile Nexon APK 9.0.12</h3>
61
- <p>If you want to download and play FIFA Mobile Nexon APK 9.0.12, you need to follow these steps:</p>
62
- <ol>
63
- <li>Go to the official website of FIFA Mobile Nexon (https://fifaonline4.nexon.com/fifamobile) and click on the download button for Android devices.</li>
64
- <li>You will be redirected to a page where you can download the APK file of FIFA Mobile Nexon. Click on the download button and wait for the file to be downloaded.</li>
65
- <li>Once the file is downloaded, go to your device settings and enable the installation of apps from unknown sources.</li>
66
- <li>Locate the APK file in your device storage and tap on it to install it.</li>
67
- <li>Launch the game and enjoy FIFA Mobile Nexon APK 9.0.12.</li>
68
- </ol>
69
- <p>Note: You need to have a stable internet connection and enough storage space to play the game. You also need to create a NEXON account or log in with your existing one to access the game.</p>
70
- <h3>FIFA Mobile Nexon Review</h3>
71
- <p>FIFA Mobile Nexon is a great soccer game for mobile devices that offers a lot of features and content that are not available in the global version of FIFA Mobile. It has realistic graphics, smooth gameplay, diverse modes, and a large player base. It also has frequent updates that add new content and improvements to the game.</p>
72
- <p>Some of the pros of FIFA Mobile Nexon are:</p>
73
- <ul>
74
- <li>It has official licenses of over 30 leagues, over 650 clubs, and over 17,000 soccer players from all over the world.</li>
75
- <li>It has a variety of modes and events that keep you entertained and challenged, such as Season Mode, World Tour Mode, League Mode, VS Attack Mode, Campaign Mode, Event Mode, and more.</li>
76
- <li>It has a unique development system that allows you to acquire and grow legendary players from soccer history through Eternal Icon Class.</li>
77
- <li>It has a realistic and balanced gameplay that reflects the situation and players' stats. It also has improved set piece camera and new motion update that make the game more dynamic and immersive.</li>
78
- </ul>
79
- <p>Some of the cons of FIFA Mobile Nexon are:</p>
80
- <ul>
81
- <li>It is only available in Japan and Korea, so you need to download the APK file from the official website or use a VPN service to access the game.</li>
82
- <li>It requires a lot of storage space and internet data to play the game smoothly.</li>
83
- <li>It can be difficult to compete with other players who have higher OVR or better players than you.</li>
84
- </ul>
85
- <h3>FIFA Mobile Nexon Tips and Tricks</h3>
86
- <p>If you want to improve your skills and performance in FIFA Mobile Nexon, here are some tips and tricks that can help you:</p>
87
- <ul>
88
- <li>Build your team according to your preferred formation, style, and strategy. Choose players who have high OVR, good chemistry, and suitable skills for each position.</li>
89
- <li>Upgrade your players by using training items, evolution items, promotion items, or Eternal Icons. You can also sell or exchange your unwanted players in the transfer market or use them for other purposes.</li>
90
- <li>Play various modes and events to earn rewards, such as coins, gems, players, items, or goods. You can also join a league or create your own league to play with other users and get more benefits.</li>
91
- <li>Practice your skills in different situations, such as dribbling, passing, shooting, defending, or set pieces. Learn how to use different controls, such as swipe, tap, button, or gesture. You can also adjust your settings according to your preference.</li>
92
- <li>Watch replays or tutorials of other players who are better than you or have similar style as you. You can learn from their moves, tactics, or mistakes. You can also watch live streams or videos of professional soccer matches or players to get inspiration or tips.</li>
93
- </ul>
94
- <h2>Conclusion</h2>
95
- <p>FIFA Mobile Nexon APK 9.0.12 is an amazing soccer game for mobile devices that offers more features and content than the global version of FIFA Mobile. It has realistic graphics, smooth gameplay, diverse modes, and a large player base. It also has frequent updates that add new content and improvements to the game.</p>
96
- <p>If you are a fan of soccer games on mobile devices, you should definitely try FIFA Mobile Nexon APK 9.0.12. You can download it from the official website or use a VPN service to access it. You will have a lot of fun and excitement playing this game. You will also learn a lot about soccer and its history.</p>
97
- <h2>FAQs</h2>
98
- <p>Here are some of the frequently asked questions about FIFA Mobile Nexon APK 9.0.12:</p>
99
- <h4>Q: Is FIFA Mobile Nexon free to play?</h4>
100
- <p>A: Yes, FIFA Mobile Nexon is free to download and play. However, it also has in-app purchases that can enhance your gaming experience.</p>
101
- <h4>Q: Is FIFA Mobile Nexon compatible with my device?</h4>
102
- <p>A: FIFA Mobile Nexon requires Android 5.0 or higher and at least 2 GB of RAM to run smoothly. You also need to have enough storage space and internet data to play the game.</p>
103
- <h4>Q: How can I play FIFA Mobile Nexon with my friends?</h4>
104
- <p>A: You can play FIFA Mobile Nexon with your friends by inviting them to join your league or by challenging them to a friendly match. You can also chat with them in the game or send them gifts.</p>
105
- <h4>Q: How can I get more coins, gems, players, or items in FIFA Mobile Nexon?</h4>
106
- <p>A: You can get more coins, gems, players, or items in FIFA Mobile Nexon by playing various modes and events, completing achievements and quests, participating in the transfer market, or using real money.</p>
107
- <h4>Q: How can I contact the customer service of FIFA Mobile Nexon?</h4>
108
- <p>A: You can contact the customer service of FIFA Mobile Nexon by using the in-game inquiry function or by visiting the official website (https://fifaonline4.nexon.com/fifamobile) and clicking on the customer center button.</p>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/1toTree/lora_test/ppdiffusers/pipelines/README.md DELETED
@@ -1,380 +0,0 @@
1
- # PPDiffusers Pipelines
2
-
3
- Pipelines provide a simple way to run inference for a variety of downstream tasks with state-of-the-art diffusion models.
4
- Most diffusion model systems consist of several independently trained models plus a highly adaptable scheduler; pipelines make it easy to run end-to-end inference over such systems.
5
-
6
- For example, Stable Diffusion is composed of the following components:
7
- - Autoencoder
8
- - Conditional Unet
9
- - CLIP text encoder
10
- - Scheduler
11
- - CLIPFeatureExtractor
12
- - Safety checker
13
-
14
- These components are trained or created independently, yet all of them are required when running Stable Diffusion inference; a pipeline wraps the whole system and provides a concise inference interface.
15
-
16
- Through pipelines we provide inference for all open-source, state-of-the-art diffusion model systems under a unified API. Specifically, our pipelines:
17
- 1. Can load officially released weights and reproduce the same outputs as the original implementations described in the corresponding papers
18
- 2. Provide a simple user interface for running inference with diffusion model systems; see the [Pipelines API](#pipelines-api) section
19
- 3. Offer easy-to-understand implementations that can be read alongside the official papers; see the [Pipelines Overview](#pipelines-overview) section
20
- 4. Support 10+ tasks across multiple modalities; see the [Task Showcase](#task-showcase) section
21
- 5. Make it easy to connect with the community
22
-
23
- **[Note]** Pipelines do not (and should not) provide any training functionality.
24
- If you are looking for training examples, please see [examples](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples).
25
-
26
- ## Pipelines Overview
27
-
28
- The table below summarizes all supported pipelines, together with their sources, tasks, and inference scripts.
29
-
30
- | Pipeline | Source | Task | Inference Script
31
- |-------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------|:---:|:---:|
32
- | [alt_diffusion](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/alt_diffusion) | [**Alt Diffusion**](https://arxiv.org/abs/2211.06679) | *Text-to-Image Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/text_to_image_generation-alt_diffusion.py)
33
- | [alt_diffusion](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/alt_diffusion) | [**Alt Diffusion**](https://arxiv.org/abs/2211.06679) | *Image-to-Image Text-Guided Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/image_to_image_text_guided_generation-alt_diffusion.py)
34
- | [audio_diffusion](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/audio_diffusion) | [**Audio Diffusion**](https://github.com/teticio/audio-diffusion) | *Unconditional Audio Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/unconditional_audio_generation-audio_diffusion.py)
35
- | [dance_diffusion](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/dance_diffusion) | [**Dance Diffusion**](https://github.com/Harmonai-org/sample-generator) | *Unconditional Audio Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/unconditional_audio_generation-dance_diffusion.py)
36
- | [ddpm](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/ddpm) | [**Denoising Diffusion Probabilistic Models**](https://arxiv.org/abs/2006.11239) | *Unconditional Image Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/unconditional_image_generation-ddpm.py)
37
- | [ddim](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/ddim) | [**Denoising Diffusion Implicit Models**](https://arxiv.org/abs/2010.02502) | *Unconditional Image Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/unconditional_image_generation-ddim.py)
38
- | [latent_diffusion](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/latent_diffusion) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752) | *Text-to-Image Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/text_to_image_generation-latent_diffusion.py)
39
- | [latent_diffusion](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/latent_diffusion) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752) | *Super-Resolution* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/super_resolution-latent_diffusion.py)
40
- | [latent_diffusion_uncond](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/latent_diffusion_uncond) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752) | *Unconditional Image Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/unconditional_image_generation-latent_diffusion_uncond.py)
41
- | [paint_by_example](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/paint_by_example) | [**Paint by Example: Exemplar-based Image Editing with Diffusion Models**](https://arxiv.org/abs/2211.13227) | *Image-Guided Image Inpainting* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/image_guided_image_inpainting-paint_by_example.py)
42
- | [pndm](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/pndm) | [**Pseudo Numerical Methods for Diffusion Models on Manifolds**](https://arxiv.org/abs/2202.09778) | *Unconditional Image Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/unconditional_image_generation-pndm.py)
43
- | [repaint](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/repaint) | [**Repaint**](https://arxiv.org/abs/2201.09865) | *Image Inpainting* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/image_inpainting-repaint.py)
44
- | [score_sde_ve](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/score_sde_ve) | [**Score-Based Generative Modeling through Stochastic Differential Equations**](https://openreview.net/forum?id=PxTIG12RRHS) | *Unconditional Image Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/unconditional_image_generation-score_sde_ve.py)
45
- | [stable_diffusion](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/stable_diffusion) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | *Text-to-Image Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/text_to_image_generation-stable_diffusion.py)
46
- | [stable_diffusion](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/stable_diffusion) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | *Image-to-Image Text-Guided Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/image_to_image_text_guided_generation-stable_diffusion.py)
47
- | [stable_diffusion](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/stable_diffusion) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | *Text-Guided Image Inpainting* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/text_guided_image_inpainting-stable_diffusion.py)
48
- | [stable_diffusion_2](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/stable_diffusion) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | *Text-to-Image Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/text_to_image_generation-stable_diffusion_2.py)
49
- | [stable_diffusion_2](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/stable_diffusion) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | *Image-to-Image Text-Guided Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/image_to_image_text_guided_generation-stable_diffusion_2.py)
50
- | [stable_diffusion_2](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/stable_diffusion) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | *Text-Guided Image Inpainting* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/text_guided_image_inpainting-stable_diffusion_2.py)
51
- | [stable_diffusion_2](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/stable_diffusion) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | *Text-Guided Image Upscaling* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/text_guided_image_upscaling-stable_diffusion_2.py)
53
- | [stable_diffusion_safe](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/stable_diffusion_safe) | [**Safe Stable Diffusion**](https://arxiv.org/abs/2211.05105) | *Text-to-Image Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/text_to_image_generation-stable_diffusion_safe.py)
54
- | [stochastic_karras_ve](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/stochastic_karras_ve) | [**Elucidating the Design Space of Diffusion-Based Generative Models**](https://arxiv.org/abs/2206.00364) | *Unconditional Image Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/unconditional_image_generation-stochastic_karras_ve.py)
55
- | [unclip](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/unclip) | [**UnCLIP**](https://arxiv.org/abs/2204.06125) | *Text-to-Image Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/text_to_image_generation-unclip.py)
56
- | [versatile_diffusion](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/versatile_diffusion) | [**Versatile Diffusion**](https://arxiv.org/abs/2211.08332) | *Text-to-Image Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/text_to_image_generation-versatile_diffusion.py)
57
- | [versatile_diffusion](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/versatile_diffusion) | [**Versatile Diffusion**](https://arxiv.org/abs/2211.08332) | *Image Variation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/image_variation-versatile_diffusion.py)
58
- | [versatile_diffusion](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/versatile_diffusion) | [**Versatile Diffusion**](https://arxiv.org/abs/2211.08332) | *Dual Text and Image Guided Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/dual_text_and_image_guided_generation-versatile_diffusion.py)
59
- | [vq_diffusion](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/ppdiffusers/pipelines/vq_diffusion) | [**VQ Diffusion**](https://arxiv.org/abs/2111.14822) | *Text-to-Image Generation* | [link](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/inference/text_to_image_generation-vq_diffusion.py)
60
-
61
-
62
- **[Note]** Pipelines demonstrate the diffusion model systems described in the corresponding papers end to end. However, most pipelines can be used with different scheduler components, and even with different model components.
63
-
64
- ## Pipelines API
65
-
66
- Diffusion model systems are usually composed of several independently trained models along with other components such as schedulers.
67
- Each model is trained independently on a different task, and the scheduler can easily be swapped out.
68
- During inference, however, we want to be able to load all components easily and use them for inference, even when a component comes from a different library. To that end, every pipeline provides the following functionality:
69
-
70
-
71
- - `from_pretrained` This method accepts a PaddleNLP model-hub id (e.g. `runwayml/stable-diffusion-v1-5`) or a local directory path. To load the corresponding models and components correctly, a `model_index.json` file must be present in that directory.
72
-
73
- - `save_pretrained` This method accepts a local directory path; all models and components of the pipeline are saved under it, with a subfolder created for each model or component. A `model_index.json` file is also written at the root of the directory so that the whole pipeline can be re-instantiated from the local path.
74
-
75
- - `__call__` This method is invoked when the pipeline runs inference. It defines the pipeline's inference logic and should cover the entire flow: preprocessing, forwarding tensors between the different models, postprocessing, and so on.
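-
- As an illustration, a `model_index.json` maps each component name to the library and class used to load it. The entries below are a hypothetical sketch (component names and classes vary per pipeline), not the exact contents of any released checkpoint:
-
- ```json
- {
-   "_class_name": "StableDiffusionPipeline",
-   "vae": ["ppdiffusers", "AutoencoderKL"],
-   "unet": ["ppdiffusers", "UNet2DConditionModel"],
-   "scheduler": ["ppdiffusers", "PNDMScheduler"]
- }
- ```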
76
-
77
-
78
- ## Task Showcase
79
- ### Text-Image Multimodal
80
- <details><summary>&emsp;Text-to-Image Generation</summary>
81
-
82
- - stable_diffusion
83
-
84
- ```python
85
- from ppdiffusers import StableDiffusionPipeline
86
-
87
- # Load the model and scheduler
88
- pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
89
-
90
- # Run the pipeline for inference
91
- prompt = "a photo of an astronaut riding a horse on mars"
92
- image = pipe(prompt).images[0]
93
-
94
- # Save the image
95
- image.save("astronaut_rides_horse_sd.png")
96
- ```
97
- <div align="center">
98
- <img width="300" alt="image" src="https://user-images.githubusercontent.com/20476674/209322401-6ecfeaaa-6878-4302-b592-07a31de4e590.png">
99
- </div>
100
-
101
- </details>
102
-
103
- <details><summary>&emsp;Text-Guided Image Upscaling</summary>
104
-
105
- - stable_diffusion_2
106
-
107
- ```python
108
- from ppdiffusers import StableDiffusionUpscalePipeline
109
- from ppdiffusers.utils import load_image
110
-
111
- pipe = StableDiffusionUpscalePipeline.from_pretrained("stabilityai/stable-diffusion-x4-upscaler")
112
-
113
- url = "https://paddlenlp.bj.bcebos.com/models/community/CompVis/data/low_res_cat.png"
114
- low_res_img = load_image(url).resize((128, 128))
115
-
116
- prompt = "a white cat"
117
- upscaled_image = pipe(prompt=prompt, image=low_res_img).images[0]
118
- upscaled_image.save("upsampled_cat_sd2.png")
119
- ```
120
- <div align="center">
121
- <img alt="image" src="https://user-images.githubusercontent.com/20476674/209324085-0d058b70-89b0-43c2-affe-534eedf116cf.png">
122
- <center>Original image</center>
123
- <img alt="image" src="https://user-images.githubusercontent.com/20476674/209323862-ce2d8658-a52b-4f35-90cb-aa7d310022e7.png">
124
- <center>Generated image</center>
125
- </div>
126
- </details>
127
-
128
- <details><summary>&emsp;Text-Guided Image Inpainting</summary>
130
-
131
- - stable_diffusion_2
132
-
133
- ```python
134
- from ppdiffusers import StableDiffusionInpaintPipeline
135
- from ppdiffusers.utils import load_image
136
-
137
- # Note: the image/mask URLs and model id below follow the pattern of the other examples and are illustrative.
138
- img_url = "https://paddlenlp.bj.bcebos.com/models/community/CompVis/stable-diffusion-v1-4/overture-creations.png"
139
- mask_url = "https://paddlenlp.bj.bcebos.com/models/community/CompVis/stable-diffusion-v1-4/overture-creations-mask.png"
140
-
141
- init_image = load_image(img_url).resize((512, 512))
142
- mask_image = load_image(mask_url).resize((512, 512))
143
-
144
- pipe = StableDiffusionInpaintPipeline.from_pretrained("stabilityai/stable-diffusion-2-inpainting")
145
-
146
- prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
147
- image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
148
- image.save("cat_on_bench_sd2.png")
149
- ```
150
- </details>
152
-
153
-
154
- <details><summary>&emsp;Image-to-Image Text-Guided Generation</summary>
155
-
156
- - stable_diffusion
157
- ```python
158
- import paddle
159
-
160
- from ppdiffusers import StableDiffusionImg2ImgPipeline
161
- from ppdiffusers.utils import load_image
162
-
163
- # Load the pipeline
164
- pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
165
-
166
- # Download the initial image
167
- url = "https://paddlenlp.bj.bcebos.com/models/community/CompVis/stable-diffusion-v1-4/sketch-mountains-input.png"
168
-
169
- init_image = load_image(url).resize((768, 512))
170
-
171
- prompt = "A fantasy landscape, trending on artstation"
172
- # Use fp16 to speed up generation
173
- with paddle.amp.auto_cast(True):
174
- image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images[0]
175
-
176
- image.save("fantasy_landscape.png")
177
- ```
178
- <div align="center">
179
- <img width="300" alt="image" src="https://user-images.githubusercontent.com/20476674/209327142-d8e1d0c7-3bf8-4a08-a0e8-b11451fc84d8.png">
180
- <center>Original image</center>
181
- <img width="300" alt="image" src="https://user-images.githubusercontent.com/20476674/209325799-d9ff279b-0d57-435f-bda7-763e3323be23.png">
182
- <center>Generated image</center>
183
- </div>
184
- </details>
186
-
187
- <details><summary>&emsp;Dual Text and Image Guided Generation</summary>
188
-
189
- - versatile_diffusion
190
- ```python
191
- from ppdiffusers import VersatileDiffusionDualGuidedPipeline
192
- from ppdiffusers.utils import load_image
193
-
194
- url = "https://paddlenlp.bj.bcebos.com/models/community/CompVis/data/benz.jpg"
195
- image = load_image(url)
196
- text = "a red car in the sun"
197
-
198
- pipe = VersatileDiffusionDualGuidedPipeline.from_pretrained("shi-labs/versatile-diffusion")
199
- pipe.remove_unused_weights()
200
-
201
- text_to_image_strength = 0.75
202
- image = pipe(prompt=text, image=image, text_to_image_strength=text_to_image_strength).images[0]
203
- image.save("versatile-diffusion-red_car.png")
204
- ```
205
- <div align="center">
206
- <img width="300" alt="image" src="https://user-images.githubusercontent.com/20476674/209325965-2475e9c4-a524-4970-8498-dfe10ff9cf24.jpg" >
207
- <center>Original image</center>
208
- <img width="300" alt="image" src="https://user-images.githubusercontent.com/20476674/209325293-049098d0-d591-4abc-b151-9291ac2636da.png">
209
- <center>Generated image</center>
210
- </div>
211
- </details>
212
-
213
- ### Image
214
-
215
- <details><summary>&emsp;Unconditional Image Generation</summary>
216
-
217
- - latent_diffusion_uncond
218
-
219
- ```python
220
- from ppdiffusers import LDMPipeline
221
-
222
- # Load the model and scheduler
223
- pipe = LDMPipeline.from_pretrained("CompVis/ldm-celebahq-256")
224
-
225
- # Run the pipeline for inference
226
- image = pipe(num_inference_steps=200).images[0]
227
-
228
- # Save the image
229
- image.save("ldm_generated_image.png")
230
- ```
231
- <div align="center">
232
- <img width="300" alt="image" src="https://user-images.githubusercontent.com/20476674/209327936-7fe914e0-0ea0-4e21-a433-24eaed6ee94c.png">
233
- </div>
234
- </details>
235
-
236
- <details><summary>&emsp;Super-Resolution</summary>
237
-
238
- - latent_diffusion
239
- ```python
240
- import paddle
241
-
242
- from ppdiffusers import LDMSuperResolutionPipeline
243
- from ppdiffusers.utils import load_image
244
-
245
- # Load the pipeline
246
- pipe = LDMSuperResolutionPipeline.from_pretrained("CompVis/ldm-super-resolution-4x-openimages")
247
-
248
- # Download the initial image
249
- url = "https://paddlenlp.bj.bcebos.com/models/community/CompVis/stable-diffusion-v1-4/overture-creations.png"
250
-
251
- init_image = load_image(url).resize((128, 128))
252
- init_image.save("original-image.png")
253
-
254
- # Use fp16 to speed up generation
255
- with paddle.amp.auto_cast(True):
256
- image = pipe(init_image, num_inference_steps=100, eta=1).images[0]
257
-
258
- image.save("super-resolution-image.png")
259
- ```
260
- <div align="center">
261
- <img alt="image" src="https://user-images.githubusercontent.com/20476674/209328660-9700fdc3-72b3-43bd-9a00-23b370ba030b.png">
262
- <center>Original image</center>
263
- <img alt="image" src="https://user-images.githubusercontent.com/20476674/209328479-4eaea5d8-aa4a-4f31-aa2a-b47e3c730f15.png">
264
- <center>Generated image</center>
265
- </div>
266
- </details>
267
-
268
-
269
- <details><summary>&emsp;Image Inpainting</summary>
270
-
271
- - repaint
272
- ```python
273
- from ppdiffusers import RePaintPipeline, RePaintScheduler
274
- from ppdiffusers.utils import load_image
275
-
276
- img_url = "https://paddlenlp.bj.bcebos.com/models/community/CompVis/data/celeba_hq_256.png"
277
- mask_url = "https://paddlenlp.bj.bcebos.com/models/community/CompVis/data/mask_256.png"
278
-
279
- # Load the original image and the mask as PIL images
280
- original_image = load_image(img_url).resize((256, 256))
281
- mask_image = load_image(mask_url).resize((256, 256))
282
-
283
- scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256", subfolder="scheduler")
284
- pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler)
285
-
286
- output = pipe(
287
- original_image=original_image,
288
- mask_image=mask_image,
289
- num_inference_steps=250,
290
- eta=0.0,
291
- jump_length=10,
292
- jump_n_sample=10,
293
- )
294
- inpainted_image = output.images[0]
295
-
296
- inpainted_image.save("repaint-image.png")
297
- ```
298
- <div align="center">
299
- <img alt="image" src="https://user-images.githubusercontent.com/20476674/209329052-b6fc2aaf-1a59-49a3-92ef-60180fdffd81.png">
300
- <center>Original image</center>
301
- <img alt="image" src="https://user-images.githubusercontent.com/20476674/209329048-4fe12176-32a0-4800-98f2-49bd8d593799.png">
302
- <center>Mask image</center>
303
- <img alt="image" src="https://user-images.githubusercontent.com/20476674/209329241-b7e4d99e-468a-4b95-8829-d77ee14bfe98.png">
304
- <center>Generated image</center>
305
- </div>
306
- </details>
307
-
308
-
309
-
310
- <details><summary>&emsp;Image Variation</summary>
311
-
312
- - versatile_diffusion
313
- ```python
314
- from ppdiffusers import VersatileDiffusionImageVariationPipeline
315
- from ppdiffusers.utils import load_image
316
-
317
- url = "https://paddlenlp.bj.bcebos.com/models/community/CompVis/data/benz.jpg"
318
- image = load_image(url)
319
-
320
- pipe = VersatileDiffusionImageVariationPipeline.from_pretrained("shi-labs/versatile-diffusion")
321
-
322
- image = pipe(image).images[0]
323
- image.save("versatile-diffusion-car_variation.png")
324
- ```
325
- <div align="center">
326
- <img width="300" alt="image" src="https://user-images.githubusercontent.com/20476674/209331434-51f6cdbd-b8e4-4faa-8e49-1cc852e35603.jpg">
327
- <center>Original image</center>
328
- <img width="300" alt="image" src="https://user-images.githubusercontent.com/20476674/209331591-f6cc4cd8-8430-4627-8d22-bf404fb2bfdd.png">
329
- <center>Generated image</center>
330
- </div>
331
- </details>
332
-
333
-
334
-
335
-
336
-
337
- ### Audio
338
-
339
- <details><summary>&emsp;Unconditional Audio Generation</summary>
340
-
341
- - audio_diffusion
342
-
343
- ```python
344
- from scipy.io.wavfile import write
345
- from ppdiffusers import AudioDiffusionPipeline
346
- import paddle
347
-
348
- # Load the model and scheduler
349
- pipe = AudioDiffusionPipeline.from_pretrained("teticio/audio-diffusion-ddim-256")
350
- pipe.set_progress_bar_config(disable=None)
351
- generator = paddle.Generator().manual_seed(42)
352
-
353
- output = pipe(generator=generator)
354
- audio = output.audios[0]
355
- image = output.images[0]
356
-
357
- # Save the audio files locally
358
- for i, audio in enumerate(output.audios):
359
-     write(f"audio_diffusion_test{i}.wav", pipe.mel.sample_rate, audio.transpose())
360
-
361
- # Save the image
362
- image.save("audio_diffusion_test.png")
363
- ```
364
- <div align = "center">
365
- <thead>
366
- </thead>
367
- <tbody>
368
- <tr>
369
- <td align = "center">
370
- <a href="https://paddlenlp.bj.bcebos.com/models/community/teticio/data/audio_diffusion_test0.wav" rel="nofollow">
371
- <img align="center" src="https://user-images.githubusercontent.com/20476674/209344877-edbf1c24-f08d-4e3b-88a4-a27e1fd0a858.png" width="200" style="max-width: 100%;"></a><br>
372
- </td>
373
- </tr>
374
- </tbody>
375
- </div>
376
-
377
- <div align="center">
378
- <img width="300" alt="image" src="https://user-images.githubusercontent.com/20476674/209342125-93e8715e-895b-4115-9e1e-e65c6c2cd95a.png">
379
- </div>
380
- </details>
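The audio example above hands the float waveform straight to scipy's `write`, which then stores a float WAV. If 16-bit PCM output is preferred, the waveform needs to be clipped and scaled first; a minimal sketch (the helper name `to_int16` is ours, not part of ppdiffusers):

```python
import numpy as np

def to_int16(audio: np.ndarray) -> np.ndarray:
    # clip to [-1, 1], then scale to the int16 range used by PCM WAV files
    audio = np.clip(audio, -1.0, 1.0)
    return (audio * 32767.0).astype(np.int16)

pcm = to_int16(np.array([0.0, 0.5, -1.0, 2.0]))
```

Passing `pcm` to `scipy.io.wavfile.write` then produces a 16-bit PCM file instead of a 32/64-bit float one.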
 
spaces/232labs/VToonify/vtoonify/smooth_parsing_map.py DELETED
@@ -1,172 +0,0 @@
- import os
- #os.environ['CUDA_VISIBLE_DEVICES'] = "0"
- import numpy as np
- import cv2
- import math
- import argparse
- from tqdm import tqdm
- import torch
- from torch import nn
- from torchvision import transforms
- import torch.nn.functional as F
- from model.raft.core.raft import RAFT
- from model.raft.core.utils.utils import InputPadder
- from model.bisenet.model import BiSeNet
- from model.stylegan.model import Downsample
-
- class Options():
-     def __init__(self):
-
-         self.parser = argparse.ArgumentParser(description="Smooth Parsing Maps")
-         self.parser.add_argument("--window_size", type=int, default=5, help="temporal window size")
-
-         self.parser.add_argument("--faceparsing_path", type=str, default='./checkpoint/faceparsing.pth', help="path of the face parsing model")
-         self.parser.add_argument("--raft_path", type=str, default='./checkpoint/raft-things.pth', help="path of the RAFT model")
-
-         self.parser.add_argument("--video_path", type=str, help="path of the target video")
-         self.parser.add_argument("--output_path", type=str, default='./output/', help="path of the output parsing maps")
-
-     def parse(self):
-         self.opt = self.parser.parse_args()
-         args = vars(self.opt)
-         print('Load options')
-         for name, value in sorted(args.items()):
-             print('%s: %s' % (str(name), str(value)))
-         return self.opt
-
- # from RAFT
- def warp(x, flo):
-     """
-     warp an image/tensor (im2) back to im1, according to the optical flow
-     x: [B, C, H, W] (im2)
-     flo: [B, 2, H, W] flow
-     """
-     B, C, H, W = x.size()
-     # mesh grid
-     xx = torch.arange(0, W).view(1,-1).repeat(H,1)
-     yy = torch.arange(0, H).view(-1,1).repeat(1,W)
-     xx = xx.view(1,1,H,W).repeat(B,1,1,1)
-     yy = yy.view(1,1,H,W).repeat(B,1,1,1)
-     grid = torch.cat((xx,yy),1).float()
-
-     #x = x.cuda()
-     grid = grid.cuda()
-     vgrid = grid + flo # B,2,H,W
-
-     # scale grid to [-1,1]
-     ##2019 code
-     vgrid[:,0,:,:] = 2.0*vgrid[:,0,:,:].clone()/max(W-1,1)-1.0
-     vgrid[:,1,:,:] = 2.0*vgrid[:,1,:,:].clone()/max(H-1,1)-1.0
-
-     vgrid = vgrid.permute(0,2,3,1)
-     output = nn.functional.grid_sample(x, vgrid, align_corners=True)
-     mask = torch.autograd.Variable(torch.ones(x.size())).cuda()
-     mask = nn.functional.grid_sample(mask, vgrid, align_corners=True)
-
-     ##2019 author
-     mask[mask<0.9999] = 0
-     mask[mask>0] = 1
-
-     ##2019 code
-     # mask = torch.floor(torch.clamp(mask, 0 ,1))
-
-     return output*mask, mask
-
-
- if __name__ == "__main__":
-
-     parser = Options()
-     args = parser.parse()
-     print('*'*98)
-
-     device = "cuda"
-
-     transform = transforms.Compose([
-         transforms.ToTensor(),
-         transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
-     ])
-
-     parser = argparse.ArgumentParser()
-     parser.add_argument('--model', help="restore checkpoint")
-     parser.add_argument('--small', action='store_true', help='use small model')
-     parser.add_argument('--mixed_precision', action='store_true', help='use mixed precision')
-     parser.add_argument('--alternate_corr', action='store_true', help='use efficient correlation implementation')
-
-     raft_model = torch.nn.DataParallel(RAFT(parser.parse_args(['--model', args.raft_path])))
-     raft_model.load_state_dict(torch.load(args.raft_path))
-
-     raft_model = raft_model.module
-     raft_model.to(device)
-     raft_model.eval()
-
-     parsingpredictor = BiSeNet(n_classes=19)
-     parsingpredictor.load_state_dict(torch.load(args.faceparsing_path, map_location=lambda storage, loc: storage))
-     parsingpredictor.to(device).eval()
-
-     down = Downsample(kernel=[1, 3, 3, 1], factor=2).to(device).eval()
-
-     print('Load models successfully!')
-
-     window = args.window_size
-
-     video_cap = cv2.VideoCapture(args.video_path)
-     num = int(video_cap.get(7))
-
-     Is = []
-     for i in range(num):
-         success, frame = video_cap.read()
-         if success == False:
-             break
-         frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
-         with torch.no_grad():
-             Is += [transform(frame).unsqueeze(dim=0).cpu()]
-     video_cap.release()
-
-     # enlarge frames for more accurate parsing maps and optical flows
-     Is = F.upsample(torch.cat(Is, dim=0), scale_factor=2, mode='bilinear')
-     Is_ = torch.cat((Is[0:window], Is, Is[-window:]), dim=0)
-
-     print('Load video with %d frames successfully!'%(len(Is)))
-
-     Ps = []
-     for i in tqdm(range(len(Is))):
-         with torch.no_grad():
-             Ps += [parsingpredictor(2*Is[i:i+1].to(device))[0].detach().cpu()]
-     Ps = torch.cat(Ps, dim=0)
-     Ps_ = torch.cat((Ps[0:window], Ps, Ps[-window:]), dim=0)
-
-     print('Predict parsing maps successfully!')
-
-     # temporal weights of the (2*args.window_size+1) frames
-     wt = torch.exp(-(torch.arange(2*window+1).float()-window)**2/(2*((window+0.5)**2))).reshape(2*window+1,1,1,1).to(device)
-
-     parse = []
-     for ii in tqdm(range(len(Is))):
-         i = ii + window
-         image2 = Is_[i-window:i+window+1].to(device)
-         image1 = Is_[i].repeat(2*window+1,1,1,1).to(device)
-         padder = InputPadder(image1.shape)
-         image1, image2 = padder.pad(image1, image2)
-         with torch.no_grad():
-             flow_low, flow_up = raft_model((image1+1)*255.0/2, (image2+1)*255.0/2, iters=20, test_mode=True)
-             output, mask = warp(torch.cat((image2, Ps_[i-window:i+window+1].to(device)), dim=1), flow_up)
-             aligned_Is = output[:,0:3].detach()
-             aligned_Ps = output[:,3:].detach()
-             # the spatial weight
-             ws = torch.exp(-((aligned_Is-image1)**2).mean(dim=1, keepdims=True)/(2*(0.2**2))) * mask[:,0:1]
-             aligned_Ps[window] = Ps_[i].to(device)
-             # the weight between i and i should be 1.0
-             ws[window,:,:,:] = 1.0
-             weights = ws*wt
-             weights = weights / weights.sum(dim=(0), keepdims=True)
-             fused_Ps = (aligned_Ps * weights).sum(dim=0, keepdims=True)
-             parse += [down(fused_Ps).detach().cpu()]
-     parse = torch.cat(parse, dim=0)
-
-     basename = os.path.basename(args.video_path).split('.')[0]
-     np.save(os.path.join(args.output_path, basename+'_parsingmap.npy'), parse.numpy())
-
-     print('Done!')
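The fusion loop above weights each neighbouring parsing map by a temporal Gaussian (`wt`) times a spatial photometric term (`ws`). The temporal part in isolation can be sketched in numpy (the function name `temporal_weights` is ours):

```python
import numpy as np

def temporal_weights(window: int) -> np.ndarray:
    # Gaussian falloff over the 2*window+1 frames centred on the current frame,
    # matching wt = exp(-(t - window)^2 / (2 * (window + 0.5)^2)) in the script
    offsets = np.arange(2 * window + 1, dtype=np.float64) - window
    return np.exp(-(offsets**2) / (2 * (window + 0.5) ** 2))

wt = temporal_weights(5)  # default --window_size
```

The centre frame gets temporal weight 1.0, and the loop additionally forces its spatial weight to 1.0, so the current frame always dominates the fused result.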
 
spaces/4com/SD-XL-CPU/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: SD-XL CPU
- emoji: 🌍
- colorFrom: purple
- colorTo: gray
- sdk: gradio
- sdk_version: 3.43.2
- app_file: app.py
- pinned: false
- license: creativeml-openrail-m
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/801artistry/RVC801/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py DELETED
@@ -1,86 +0,0 @@
- from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
- import pyworld
- import numpy as np
-
-
- class HarvestF0Predictor(F0Predictor):
-     def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
-         self.hop_length = hop_length
-         self.f0_min = f0_min
-         self.f0_max = f0_max
-         self.sampling_rate = sampling_rate
-
-     def interpolate_f0(self, f0):
-         """
-         Interpolate the F0 contour over unvoiced (zero-valued) frames.
-         """
-
-         data = np.reshape(f0, (f0.size, 1))
-
-         vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
-         vuv_vector[data > 0.0] = 1.0
-         vuv_vector[data <= 0.0] = 0.0
-
-         ip_data = data
-
-         frame_number = data.size
-         last_value = 0.0
-         for i in range(frame_number):
-             if data[i] <= 0.0:
-                 j = i + 1
-                 for j in range(i + 1, frame_number):
-                     if data[j] > 0.0:
-                         break
-                 if j < frame_number - 1:
-                     if last_value > 0.0:
-                         step = (data[j] - data[i - 1]) / float(j - i)
-                         for k in range(i, j):
-                             ip_data[k] = data[i - 1] + step * (k - i + 1)
-                     else:
-                         for k in range(i, j):
-                             ip_data[k] = data[j]
-                 else:
-                     for k in range(i, frame_number):
-                         ip_data[k] = last_value
-             else:
-                 ip_data[i] = data[i]  # this assignment may be an unnecessary copy
-                 last_value = data[i]
-
-         return ip_data[:, 0], vuv_vector[:, 0]
-
-     def resize_f0(self, x, target_len):
-         source = np.array(x)
-         source[source < 0.001] = np.nan
-         target = np.interp(
-             np.arange(0, len(source) * target_len, len(source)) / target_len,
-             np.arange(0, len(source)),
-             source,
-         )
-         res = np.nan_to_num(target)
-         return res
-
-     def compute_f0(self, wav, p_len=None):
-         if p_len is None:
-             p_len = wav.shape[0] // self.hop_length
-         f0, t = pyworld.harvest(
-             wav.astype(np.double),
-             fs=self.sampling_rate,  # harvest expects the sampling rate here
-             f0_ceil=self.f0_max,
-             f0_floor=self.f0_min,
-             frame_period=1000 * self.hop_length / self.sampling_rate,
-         )
-         f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
-         return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
-
-     def compute_f0_uv(self, wav, p_len=None):
-         if p_len is None:
-             p_len = wav.shape[0] // self.hop_length
-         f0, t = pyworld.harvest(
-             wav.astype(np.double),
-             fs=self.sampling_rate,
-             f0_floor=self.f0_min,
-             f0_ceil=self.f0_max,
-             frame_period=1000 * self.hop_length / self.sampling_rate,
-         )
-         f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
-         return self.interpolate_f0(self.resize_f0(f0, p_len))
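The frame-by-frame loop in `interpolate_f0` above can be expressed compactly with `np.interp`; a sketch under the assumption that zero marks an unvoiced frame (the helper name `interpolate_unvoiced` is ours, and edge handling differs slightly: frames before the first voiced frame take its value rather than the running `last_value`):

```python
import numpy as np

def interpolate_unvoiced(f0: np.ndarray) -> np.ndarray:
    # linearly interpolate zero-valued (unvoiced) frames between voiced neighbours
    f0 = f0.astype(np.float64)
    voiced = np.flatnonzero(f0 > 0.0)
    if voiced.size == 0:
        return f0  # nothing voiced: leave the contour as-is
    return np.interp(np.arange(f0.size), voiced, f0[voiced])

out = interpolate_unvoiced(np.array([0.0, 100.0, 0.0, 0.0, 200.0, 0.0]))
```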
 
spaces/AI-Dashboards/AI.Dashboard.HEDIS.Terms.Vocabulary/style.css DELETED
@@ -1,28 +0,0 @@
- body {
-   padding: 2rem;
-   font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
- }
-
- h1 {
-   font-size: 16px;
-   margin-top: 0;
- }
-
- p {
-   color: rgb(107, 114, 128);
-   font-size: 15px;
-   margin-bottom: 10px;
-   margin-top: 5px;
- }
-
- .card {
-   max-width: 620px;
-   margin: 0 auto;
-   padding: 16px;
-   border: 1px solid lightgray;
-   border-radius: 16px;
- }
-
- .card p:last-child {
-   margin-bottom: 0;
- }
 
spaces/AI-Hobbyist/Hoyo-RVC/infer-web.py DELETED
@@ -1,1998 +0,0 @@
- import os
- import shutil
- import sys
-
- now_dir = os.getcwd()
- sys.path.append(now_dir)
- import traceback, pdb
- import warnings
-
- import numpy as np
- import torch
-
- os.environ["no_proxy"] = "localhost, 127.0.0.1, ::1"
- import logging
- import threading
- from random import shuffle
- from subprocess import Popen
- from time import sleep
-
- import faiss
- import ffmpeg
- import gradio as gr
- import soundfile as sf
- from config import Config
- from fairseq import checkpoint_utils
- from i18n import I18nAuto
- from infer_pack.models import (
-     SynthesizerTrnMs256NSFsid,
-     SynthesizerTrnMs256NSFsid_nono,
-     SynthesizerTrnMs768NSFsid,
-     SynthesizerTrnMs768NSFsid_nono,
- )
- from infer_pack.models_onnx import SynthesizerTrnMsNSFsidM
- from infer_uvr5 import _audio_pre_, _audio_pre_new
- from MDXNet import MDXNetDereverb
- from my_utils import load_audio
- from train.process_ckpt import change_info, extract_small_model, merge, show_info
- from vc_infer_pipeline import VC
- from sklearn.cluster import MiniBatchKMeans
-
- logging.getLogger("numba").setLevel(logging.WARNING)
-
-
- tmp = os.path.join(now_dir, "TEMP")
- shutil.rmtree(tmp, ignore_errors=True)
- shutil.rmtree("%s/runtime/Lib/site-packages/infer_pack" % (now_dir), ignore_errors=True)
- shutil.rmtree("%s/runtime/Lib/site-packages/uvr5_pack" % (now_dir), ignore_errors=True)
- os.makedirs(tmp, exist_ok=True)
- os.makedirs(os.path.join(now_dir, "logs"), exist_ok=True)
- os.makedirs(os.path.join(now_dir, "weights"), exist_ok=True)
- os.environ["TEMP"] = tmp
- warnings.filterwarnings("ignore")
- torch.manual_seed(114514)
-
-
- config = Config()
- i18n = I18nAuto()
- i18n.print()
- # check whether any NVIDIA GPU is available for training and accelerated inference
- ngpu = torch.cuda.device_count()
- gpu_infos = []
- mem = []
- if_gpu_ok = False
-
- if torch.cuda.is_available() or ngpu != 0:
-     for i in range(ngpu):
-         gpu_name = torch.cuda.get_device_name(i)
-         if any(
-             value in gpu_name.upper()
-             for value in [
-                 "10",
-                 "16",
-                 "20",
-                 "30",
-                 "40",
-                 "A2",
-                 "A3",
-                 "A4",
-                 "P4",
-                 "A50",
-                 "500",
-                 "A60",
-                 "70",
-                 "80",
-                 "90",
-                 "M4",
-                 "T4",
-                 "TITAN",
-             ]
-         ):
-             # A10#A100#V100#A40#P40#M40#K80#A4500
-             if_gpu_ok = True  # at least one usable NVIDIA GPU
-             gpu_infos.append("%s\t%s" % (i, gpu_name))
-             mem.append(
-                 int(
-                     torch.cuda.get_device_properties(i).total_memory
-                     / 1024
-                     / 1024
-                     / 1024
-                     + 0.4
-                 )
-             )
- if if_gpu_ok and len(gpu_infos) > 0:
-     gpu_info = "\n".join(gpu_infos)
-     default_batch_size = min(mem) // 2
- else:
-     gpu_info = i18n("很遗憾您这没有能用的显卡来支持您训练")
-     default_batch_size = 1
- gpus = "-".join([i[0] for i in gpu_infos])
-
-
- class ToolButton(gr.Button, gr.components.FormComponent):
-     """Small button with single emoji as text, fits inside gradio forms"""
-
-     def __init__(self, **kwargs):
-         super().__init__(variant="tool", **kwargs)
-
-     def get_block_name(self):
-         return "button"
-
-
- hubert_model = None
-
-
- def load_hubert():
-     global hubert_model
-     models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
-         ["hubert_base.pt"],
-         suffix="",
-     )
-     hubert_model = models[0]
-     hubert_model = hubert_model.to(config.device)
-     if config.is_half:
-         hubert_model = hubert_model.half()
-     else:
-         hubert_model = hubert_model.float()
-     hubert_model.eval()
-
-
- weight_root = "weights"
- weight_uvr5_root = "uvr5_weights"
- index_root = "logs"
- names = []
- for name in os.listdir(weight_root):
-     if name.endswith(".pth"):
-         names.append(name)
- index_paths = []
- for root, dirs, files in os.walk(index_root, topdown=False):
-     for name in files:
-         if name.endswith(".index") and "trained" not in name:
-             index_paths.append("%s/%s" % (root, name))
- uvr5_names = []
- for name in os.listdir(weight_uvr5_root):
-     if name.endswith(".pth") or "onnx" in name:
-         uvr5_names.append(name.replace(".pth", ""))
-
-
- def vc_single(
-     sid,
-     input_audio_path,
-     f0_up_key,
-     f0_file,
-     f0_method,
-     file_index,
-     file_index2,
-     # file_big_npy,
-     index_rate,
-     filter_radius,
-     resample_sr,
-     rms_mix_rate,
-     protect,
- ):  # spk_item, input_audio0, vc_transform0, f0_file, f0method0
-     global tgt_sr, net_g, vc, hubert_model, version
-     if input_audio_path is None:
-         return "You need to upload an audio", None
-     f0_up_key = int(f0_up_key)
-     try:
-         audio = load_audio(input_audio_path, 16000)
-         audio_max = np.abs(audio).max() / 0.95
-         if audio_max > 1:
-             audio /= audio_max
-         times = [0, 0, 0]
-         if not hubert_model:
-             load_hubert()
-         if_f0 = cpt.get("f0", 1)
-         file_index = (
-             (
-                 file_index.strip(" ")
-                 .strip('"')
-                 .strip("\n")
-                 .strip('"')
-                 .strip(" ")
-                 .replace("trained", "added")
-             )
-             if file_index != ""
-             else file_index2
-         )  # guard against user mistakes: replace "trained" with "added" automatically
-         # file_big_npy = (
-         #     file_big_npy.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-         # )
-         audio_opt = vc.pipeline(
-             hubert_model,
-             net_g,
-             sid,
-             audio,
-             input_audio_path,
-             times,
-             f0_up_key,
-             f0_method,
-             file_index,
-             # file_big_npy,
-             index_rate,
-             if_f0,
-             filter_radius,
-             tgt_sr,
-             resample_sr,
-             rms_mix_rate,
-             version,
-             protect,
-             f0_file=f0_file,
-         )
-         if tgt_sr != resample_sr >= 16000:
-             tgt_sr = resample_sr
-         index_info = (
-             "Using index:%s." % file_index
-             if os.path.exists(file_index)
-             else "Index not used."
-         )
-         return "Success.\n %s\nTime:\n npy:%ss, f0:%ss, infer:%ss" % (
-             index_info,
-             times[0],
-             times[1],
-             times[2],
-         ), (tgt_sr, audio_opt)
-     except:
-         info = traceback.format_exc()
-         print(info)
-         return info, (None, None)
-
- def vc_multi(
-     sid,
-     dir_path,
-     opt_root,
-     paths,
-     f0_up_key,
-     f0_method,
-     file_index,
-     file_index2,
-     # file_big_npy,
-     index_rate,
-     filter_radius,
-     resample_sr,
-     rms_mix_rate,
-     protect,
-     format1,
- ):
-     try:
-         dir_path = (
-             dir_path.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-         )  # strip stray spaces, quotes, and newlines copied along with the path
-         opt_root = opt_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-         os.makedirs(opt_root, exist_ok=True)
-         try:
-             if dir_path != "":
-                 paths = [os.path.join(dir_path, name) for name in os.listdir(dir_path)]
-             else:
-                 paths = [path.name for path in paths]
-         except:
-             traceback.print_exc()
-             paths = [path.name for path in paths]
-         infos = []
-         for path in paths:
-             info, opt = vc_single(
-                 sid,
-                 path,
-                 f0_up_key,
-                 None,
-                 f0_method,
-                 file_index,
-                 file_index2,
-                 # file_big_npy,
-                 index_rate,
-                 filter_radius,
-                 resample_sr,
-                 rms_mix_rate,
-                 protect,
-             )
-             if "Success" in info:
-                 try:
-                     tgt_sr, audio_opt = opt
-                     if format1 in ["wav", "flac"]:
-                         sf.write(
-                             "%s/%s.%s" % (opt_root, os.path.basename(path), format1),
-                             audio_opt,
-                             tgt_sr,
-                         )
-                     else:
-                         path = "%s/%s.wav" % (opt_root, os.path.basename(path))
-                         sf.write(
-                             path,
-                             audio_opt,
-                             tgt_sr,
-                         )
-                         if os.path.exists(path):
-                             os.system(
-                                 "ffmpeg -i %s -vn %s -q:a 2 -y"
-                                 % (path, path[:-4] + ".%s" % format1)
-                             )
-                 except:
-                     info += traceback.format_exc()
-             infos.append("%s->%s" % (os.path.basename(path), info))
-             yield "\n".join(infos)
-         yield "\n".join(infos)
-     except:
-         yield traceback.format_exc()
-
-
- def uvr(model_name, inp_root, save_root_vocal, paths, save_root_ins, agg, format0):
-     infos = []
-     try:
-         inp_root = inp_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-         save_root_vocal = (
-             save_root_vocal.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-         )
-         save_root_ins = (
-             save_root_ins.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-         )
-         if model_name == "onnx_dereverb_By_FoxJoy":
-             pre_fun = MDXNetDereverb(15)
-         else:
-             func = _audio_pre_ if "DeEcho" not in model_name else _audio_pre_new
-             pre_fun = func(
-                 agg=int(agg),
-                 model_path=os.path.join(weight_uvr5_root, model_name + ".pth"),
-                 device=config.device,
-                 is_half=config.is_half,
-             )
-         if inp_root != "":
-             paths = [os.path.join(inp_root, name) for name in os.listdir(inp_root)]
-         else:
-             paths = [path.name for path in paths]
-         for path in paths:
-             inp_path = os.path.join(inp_root, path)
-             need_reformat = 1
-             done = 0
-             try:
-                 info = ffmpeg.probe(inp_path, cmd="ffprobe")
-                 if (
-                     info["streams"][0]["channels"] == 2
-                     and info["streams"][0]["sample_rate"] == "44100"
-                 ):
-                     need_reformat = 0
-                     pre_fun._path_audio_(
-                         inp_path, save_root_ins, save_root_vocal, format0
-                     )
-                     done = 1
-             except:
-                 need_reformat = 1
-                 traceback.print_exc()
-             if need_reformat == 1:
-                 tmp_path = "%s/%s.reformatted.wav" % (tmp, os.path.basename(inp_path))
-                 os.system(
-                     "ffmpeg -i %s -vn -acodec pcm_s16le -ac 2 -ar 44100 %s -y"
-                     % (inp_path, tmp_path)
-                 )
-                 inp_path = tmp_path
-             try:
-                 if done == 0:
-                     pre_fun._path_audio_(
-                         inp_path, save_root_ins, save_root_vocal, format0
-                     )
-                 infos.append("%s->Success" % (os.path.basename(inp_path)))
-                 yield "\n".join(infos)
-             except:
-                 infos.append(
-                     "%s->%s" % (os.path.basename(inp_path), traceback.format_exc())
-                 )
-                 yield "\n".join(infos)
-     except:
-         infos.append(traceback.format_exc())
-         yield "\n".join(infos)
-     finally:
-         try:
-             if model_name == "onnx_dereverb_By_FoxJoy":
-                 del pre_fun.pred.model
-                 del pre_fun.pred.model_
-             else:
-                 del pre_fun.model
-                 del pre_fun
-         except:
-             traceback.print_exc()
-         print("clean_empty_cache")
-         if torch.cuda.is_available():
-             torch.cuda.empty_cache()
-     yield "\n".join(infos)
-
- # only one voice model can be loaded per tab at a time
- def get_vc(sid, to_return_protect0, to_return_protect1):
-     global n_spk, tgt_sr, net_g, vc, cpt, version
-     if sid == "" or sid == []:
-         global hubert_model
-         if hubert_model is not None:  # with polling, sid may switch from a loaded model to none
-             print("clean_empty_cache")
-             del net_g, n_spk, vc, hubert_model, tgt_sr  # ,cpt
-             hubert_model = net_g = n_spk = vc = hubert_model = tgt_sr = None
-             if torch.cuda.is_available():
-                 torch.cuda.empty_cache()
-             ### without the steps below, the memory is not fully released
-             if_f0 = cpt.get("f0", 1)
-             version = cpt.get("version", "v1")
-             if version == "v1":
-                 if if_f0 == 1:
-                     net_g = SynthesizerTrnMs256NSFsid(
-                         *cpt["config"], is_half=config.is_half
-                     )
-                 else:
-                     net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
-             elif version == "v2":
-                 if if_f0 == 1:
-                     net_g = SynthesizerTrnMs768NSFsid(
-                         *cpt["config"], is_half=config.is_half
-                     )
-                 else:
-                     net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
-             del net_g, cpt
-             if torch.cuda.is_available():
-                 torch.cuda.empty_cache()
-             cpt = None
-         return {"visible": False, "__type__": "update"}
-     person = "%s/%s" % (weight_root, sid)
-     print("loading %s" % person)
-     cpt = torch.load(person, map_location="cpu")
-     tgt_sr = cpt["config"][-1]
-     cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0]  # n_spk
-     if_f0 = cpt.get("f0", 1)
-     if if_f0 == 0:
-         to_return_protect0 = to_return_protect1 = {
-             "visible": False,
-             "value": 0.5,
-             "__type__": "update",
-         }
-     else:
-         to_return_protect0 = {
-             "visible": True,
-             "value": to_return_protect0,
-             "__type__": "update",
-         }
-         to_return_protect1 = {
-             "visible": True,
-             "value": to_return_protect1,
-             "__type__": "update",
-         }
-     version = cpt.get("version", "v1")
-     if version == "v1":
-         if if_f0 == 1:
-             net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half)
-         else:
-             net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
-     elif version == "v2":
-         if if_f0 == 1:
-             net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half)
-         else:
-             net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
-     del net_g.enc_q
-     print(net_g.load_state_dict(cpt["weight"], strict=False))
-     net_g.eval().to(config.device)
-     if config.is_half:
-         net_g = net_g.half()
-     else:
-         net_g = net_g.float()
-     vc = VC(tgt_sr, config)
-     n_spk = cpt["config"][-3]
-     return (
-         {"visible": True, "maximum": n_spk, "__type__": "update"},
-         to_return_protect0,
-         to_return_protect1,
-     )
-
-
- def change_choices():
-     names = []
-     for name in os.listdir(weight_root):
-         if name.endswith(".pth"):
-             names.append(name)
-     index_paths = []
-     for root, dirs, files in os.walk(index_root, topdown=False):
-         for name in files:
-             if name.endswith(".index") and "trained" not in name:
-                 index_paths.append("%s/%s" % (root, name))
-     return {"choices": sorted(names), "__type__": "update"}, {
-         "choices": sorted(index_paths),
-         "__type__": "update",
-     }
-
-
- def clean():
-     return {"value": "", "__type__": "update"}
-
-
- sr_dict = {
-     "32k": 32000,
-     "40k": 40000,
-     "48k": 48000,
- }
-
-
- def if_done(done, p):
-     while 1:
-         if p.poll() is None:
-             sleep(0.5)
-         else:
-             break
-     done[0] = True
-
-
- def if_done_multi(done, ps):
-     while 1:
-         # poll() is None means the process has not finished;
-         # keep waiting as long as any process is still running
-         flag = 1
-         for p in ps:
-             if p.poll() is None:
-                 flag = 0
-                 sleep(0.5)
-                 break
-         if flag == 1:
-             break
-     done[0] = True
-
- def preprocess_dataset(trainset_dir, exp_dir, sr, n_p):
-     sr = sr_dict[sr]
-     os.makedirs("%s/logs/%s" % (now_dir, exp_dir), exist_ok=True)
-     f = open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "w")
-     f.close()
-     cmd = (
-         config.python_cmd
-         + " trainset_preprocess_pipeline_print.py %s %s %s %s/logs/%s "
-         % (trainset_dir, sr, n_p, now_dir, exp_dir)
-         + str(config.noparallel)
-     )
-     print(cmd)
-     p = Popen(cmd, shell=True)  # , stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=now_dir
-     # gradio only returns Popen output once the process finishes, so poll a log
-     # file on a timer instead to stream output line by line
-     done = [False]
-     threading.Thread(
-         target=if_done,
-         args=(
-             done,
-             p,
-         ),
-     ).start()
-     while 1:
-         with open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "r") as f:
-             yield (f.read())
-         sleep(1)
-         if done[0]:
-             break
-     with open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "r") as f:
-         log = f.read()
-     print(log)
-     yield log
-
-
- # but2.click(extract_f0,[gpus6,np7,f0method8,if_f0_3,trainset_dir4],[info2])
- def extract_f0_feature(gpus, n_p, f0method, if_f0, exp_dir, version19):
-     gpus = gpus.split("-")
-     os.makedirs("%s/logs/%s" % (now_dir, exp_dir), exist_ok=True)
-     f = open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "w")
-     f.close()
-     if if_f0:
-         cmd = config.python_cmd + " extract_f0_print.py %s/logs/%s %s %s" % (
-             now_dir,
-             exp_dir,
-             n_p,
-             f0method,
-         )
-         print(cmd)
-         p = Popen(cmd, shell=True, cwd=now_dir)  # , stdin=PIPE, stdout=PIPE, stderr=PIPE
-         # gradio only returns Popen output once the process finishes, so poll a
-         # log file on a timer instead
-         done = [False]
-         threading.Thread(
-             target=if_done,
-             args=(
-                 done,
-                 p,
-             ),
-         ).start()
-         while 1:
-             with open(
-                 "%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r"
-             ) as f:
-                 yield (f.read())
-             sleep(1)
-             if done[0]:
-                 break
-         with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f:
-             log = f.read()
-         print(log)
-         yield log
-     #### launch a separate process for each part
-     """
-     n_part=int(sys.argv[1])
-     i_part=int(sys.argv[2])
-     i_gpu=sys.argv[3]
-     exp_dir=sys.argv[4]
-     os.environ["CUDA_VISIBLE_DEVICES"]=str(i_gpu)
-     """
-     leng = len(gpus)
-     ps = []
-     for idx, n_g in enumerate(gpus):
-         cmd = (
-             config.python_cmd
-             + " extract_feature_print.py %s %s %s %s %s/logs/%s %s"
-             % (
-                 config.device,
-                 leng,
-                 idx,
-                 n_g,
-                 now_dir,
-                 exp_dir,
-                 version19,
-             )
-         )
-         print(cmd)
-         p = Popen(
-             cmd, shell=True, cwd=now_dir
-         )  # , shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=now_dir
-         ps.append(p)
-     # gradio only returns Popen output once the process finishes, so poll a log
-     # file on a timer instead
-     done = [False]
-     threading.Thread(
-         target=if_done_multi,
-         args=(
-             done,
-             ps,
-         ),
-     ).start()
-     while 1:
-         with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f:
-             yield (f.read())
-         sleep(1)
-         if done[0]:
-             break
-     with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f:
-         log = f.read()
-     print(log)
-     yield log
-
- def change_sr2(sr2, if_f0_3, version19):
654
- path_str = "" if version19 == "v1" else "_v2"
655
- f0_str = "f0" if if_f0_3 else ""
656
- if_pretrained_generator_exist = os.access(
657
- "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), os.F_OK
658
- )
659
- if_pretrained_discriminator_exist = os.access(
660
- "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), os.F_OK
661
- )
662
- if not if_pretrained_generator_exist:
663
- print(
664
- "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2),
665
- "not exist, will not use pretrained model",
666
- )
667
- if not if_pretrained_discriminator_exist:
668
- print(
669
- "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2),
670
- "not exist, will not use pretrained model",
671
- )
672
- return (
673
- "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2)
674
- if if_pretrained_generator_exist
675
- else "",
676
- "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2)
677
- if if_pretrained_discriminator_exist
678
- else "",
679
- )
680
-
681
-
682
- def change_version19(sr2, if_f0_3, version19):
683
- path_str = "" if version19 == "v1" else "_v2"
684
- if sr2 == "32k" and version19 == "v1":
685
- sr2 = "40k"
686
- to_return_sr2 = (
687
- {"choices": ["40k", "48k"], "__type__": "update", "value": sr2}
688
- if version19 == "v1"
689
- else {"choices": ["40k", "48k", "32k"], "__type__": "update", "value": sr2}
690
- )
691
- f0_str = "f0" if if_f0_3 else ""
692
- if_pretrained_generator_exist = os.access(
693
- "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), os.F_OK
694
- )
695
- if_pretrained_discriminator_exist = os.access(
696
- "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), os.F_OK
697
- )
698
- if not if_pretrained_generator_exist:
699
- print(
700
- "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2),
701
- "not exist, will not use pretrained model",
702
- )
703
- if not if_pretrained_discriminator_exist:
704
- print(
705
- "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2),
706
- "not exist, will not use pretrained model",
707
- )
708
- return (
709
- "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2)
710
- if if_pretrained_generator_exist
711
- else "",
712
- "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2)
713
- if if_pretrained_discriminator_exist
714
- else "",
715
- to_return_sr2,
716
- )
717
-
718
-
719
- def change_f0(if_f0_3, sr2, version19):  # f0method8,pretrained_G14,pretrained_D15
-     path_str = "" if version19 == "v1" else "_v2"
-     if_pretrained_generator_exist = os.access(
-         "pretrained%s/f0G%s.pth" % (path_str, sr2), os.F_OK
-     )
-     if_pretrained_discriminator_exist = os.access(
-         "pretrained%s/f0D%s.pth" % (path_str, sr2), os.F_OK
-     )
-     if not if_pretrained_generator_exist:
-         print(
-             "pretrained%s/f0G%s.pth" % (path_str, sr2),
-             "not exist, will not use pretrained model",
-         )
-     if not if_pretrained_discriminator_exist:
-         print(
-             "pretrained%s/f0D%s.pth" % (path_str, sr2),
-             "not exist, will not use pretrained model",
-         )
-     if if_f0_3:
-         return (
-             {"visible": True, "__type__": "update"},
-             "pretrained%s/f0G%s.pth" % (path_str, sr2)
-             if if_pretrained_generator_exist
-             else "",
-             "pretrained%s/f0D%s.pth" % (path_str, sr2)
-             if if_pretrained_discriminator_exist
-             else "",
-         )
-     return (
-         {"visible": False, "__type__": "update"},
-         ("pretrained%s/G%s.pth" % (path_str, sr2))
-         if if_pretrained_generator_exist
-         else "",
-         ("pretrained%s/D%s.pth" % (path_str, sr2))
-         if if_pretrained_discriminator_exist
-         else "",
-     )
-
-
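The `change_sr2`/`change_version19`/`change_f0` callbacks above all derive the pretrained checkpoint paths from the same naming convention: `pretrained[_v2]/[f0]{G,D}{sr}.pth`. A minimal sketch of that convention, with `pretrained_name` being a hypothetical helper (not part of the original file):

```python
def pretrained_name(kind: str, version: str, if_f0: bool, sr: str) -> str:
    """Compose a pretrained checkpoint path the way the change_* callbacks do.

    kind is "G" (generator) or "D" (discriminator); version is "v1"/"v2";
    sr is a sample-rate tag such as "40k".
    """
    path_str = "" if version == "v1" else "_v2"  # v2 models live in pretrained_v2/
    f0_str = "f0" if if_f0 else ""               # pitch-guided models get an f0 prefix
    return "pretrained%s/%s%s%s.pth" % (path_str, f0_str, kind, sr)

print(pretrained_name("G", "v2", True, "40k"))  # pretrained_v2/f0G40k.pth
```

The callbacks then simply `os.access(path, os.F_OK)` each composed path and fall back to `""` when the file is missing.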
- # but3.click(click_train,[exp_dir1,sr2,if_f0_3,save_epoch10,total_epoch11,batch_size12,if_save_latest13,pretrained_G14,pretrained_D15,gpus16])
- def click_train(
-     exp_dir1,
-     sr2,
-     if_f0_3,
-     spk_id5,
-     save_epoch10,
-     total_epoch11,
-     batch_size12,
-     if_save_latest13,
-     pretrained_G14,
-     pretrained_D15,
-     gpus16,
-     if_cache_gpu17,
-     if_save_every_weights18,
-     version19,
- ):
-     # generate the filelist
-     exp_dir = "%s/logs/%s" % (now_dir, exp_dir1)
-     os.makedirs(exp_dir, exist_ok=True)
-     gt_wavs_dir = "%s/0_gt_wavs" % (exp_dir)
-     feature_dir = (
-         "%s/3_feature256" % (exp_dir)
-         if version19 == "v1"
-         else "%s/3_feature768" % (exp_dir)
-     )
-     if if_f0_3:
-         f0_dir = "%s/2a_f0" % (exp_dir)
-         f0nsf_dir = "%s/2b-f0nsf" % (exp_dir)
-         names = (
-             set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)])
-             & set([name.split(".")[0] for name in os.listdir(feature_dir)])
-             & set([name.split(".")[0] for name in os.listdir(f0_dir)])
-             & set([name.split(".")[0] for name in os.listdir(f0nsf_dir)])
-         )
-     else:
-         names = set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) & set(
-             [name.split(".")[0] for name in os.listdir(feature_dir)]
-         )
-     opt = []
-     for name in names:
-         if if_f0_3:
-             opt.append(
-                 "%s/%s.wav|%s/%s.npy|%s/%s.wav.npy|%s/%s.wav.npy|%s"
-                 % (
-                     gt_wavs_dir.replace("\\", "\\\\"),
-                     name,
-                     feature_dir.replace("\\", "\\\\"),
-                     name,
-                     f0_dir.replace("\\", "\\\\"),
-                     name,
-                     f0nsf_dir.replace("\\", "\\\\"),
-                     name,
-                     spk_id5,
-                 )
-             )
-         else:
-             opt.append(
-                 "%s/%s.wav|%s/%s.npy|%s"
-                 % (
-                     gt_wavs_dir.replace("\\", "\\\\"),
-                     name,
-                     feature_dir.replace("\\", "\\\\"),
-                     name,
-                     spk_id5,
-                 )
-             )
-     fea_dim = 256 if version19 == "v1" else 768
-     if if_f0_3:
-         for _ in range(2):
-             opt.append(
-                 "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s"
-                 % (now_dir, sr2, now_dir, fea_dim, now_dir, now_dir, spk_id5)
-             )
-     else:
-         for _ in range(2):
-             opt.append(
-                 "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s"
-                 % (now_dir, sr2, now_dir, fea_dim, spk_id5)
-             )
-     shuffle(opt)
-     with open("%s/filelist.txt" % exp_dir, "w") as f:
-         f.write("\n".join(opt))
-     print("write filelist done")
-     # generate config  # no config generation needed anymore
-     # cmd = python_cmd + " train_nsf_sim_cache_sid_load_pretrain.py -e mi-test -sr 40k -f0 1 -bs 4 -g 0 -te 10 -se 5 -pg pretrained/f0G40k.pth -pd pretrained/f0D40k.pth -l 1 -c 0"
-     print("use gpus:", gpus16)
-     if pretrained_G14 == "":
-         print("no pretrained Generator")
-     if pretrained_D15 == "":
-         print("no pretrained Discriminator")
-     if gpus16:
-         cmd = (
-             config.python_cmd
-             + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -g %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s"
-             % (
-                 exp_dir1,
-                 sr2,
-                 1 if if_f0_3 else 0,
-                 batch_size12,
-                 gpus16,
-                 total_epoch11,
-                 save_epoch10,
-                 "-pg %s" % pretrained_G14 if pretrained_G14 != "" else "",
-                 "-pd %s" % pretrained_D15 if pretrained_D15 != "" else "",
-                 1 if if_save_latest13 == i18n("是") else 0,
-                 1 if if_cache_gpu17 == i18n("是") else 0,
-                 1 if if_save_every_weights18 == i18n("是") else 0,
-                 version19,
-             )
-         )
-     else:
-         cmd = (
-             config.python_cmd
-             + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s"
-             % (
-                 exp_dir1,
-                 sr2,
-                 1 if if_f0_3 else 0,
-                 batch_size12,
-                 total_epoch11,
-                 save_epoch10,
-                 "-pg %s" % pretrained_G14 if pretrained_G14 != "" else "",
-                 "-pd %s" % pretrained_D15 if pretrained_D15 != "" else "",
-                 1 if if_save_latest13 == i18n("是") else 0,
-                 1 if if_cache_gpu17 == i18n("是") else 0,
-                 1 if if_save_every_weights18 == i18n("是") else 0,
-                 version19,
-             )
-         )
-     print(cmd)
-     p = Popen(cmd, shell=True, cwd=now_dir)
-     p.wait()
-     return "训练结束, 您可查看控制台训练日志或实验文件夹下的train.log"
-
-
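`click_train` above serializes each training sample as one pipe-delimited filelist line: `wav|feature|f0|f0nsf|speaker_id` in the pitch-guided case. A small sketch of composing and splitting such a line; the directory names mirror the experiment layout, but the sample name `0_0` and speaker id are made up for illustration:

```python
# Hypothetical experiment layout, matching the directory names used above.
gt_wavs_dir = "logs/mi-test/0_gt_wavs"
feature_dir = "logs/mi-test/3_feature768"  # v2 features; v1 would be 3_feature256
f0_dir = "logs/mi-test/2a_f0"
f0nsf_dir = "logs/mi-test/2b-f0nsf"
name, spk_id = "0_0", 0  # illustrative values

# One pitch-guided filelist line, in the same format click_train writes.
line = "%s/%s.wav|%s/%s.npy|%s/%s.wav.npy|%s/%s.wav.npy|%s" % (
    gt_wavs_dir, name, feature_dir, name, f0_dir, name, f0nsf_dir, name, spk_id
)
# The training script splits it back into its five fields.
wav, feat, f0, f0nsf, sid = line.split("|")
print(wav, sid)
```

Without pitch guidance the line collapses to three fields (`wav|feature|speaker_id`), which is why the two branches above build different format strings.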
- # but4.click(train_index, [exp_dir1], info3)
- def train_index(exp_dir1, version19):
-     exp_dir = "%s/logs/%s" % (now_dir, exp_dir1)
-     os.makedirs(exp_dir, exist_ok=True)
-     feature_dir = (
-         "%s/3_feature256" % (exp_dir)
-         if version19 == "v1"
-         else "%s/3_feature768" % (exp_dir)
-     )
-     if not os.path.exists(feature_dir):
-         return "请先进行特征提取!"
-     listdir_res = list(os.listdir(feature_dir))
-     if len(listdir_res) == 0:
-         return "请先进行特征提取!"
-     infos = []
-     npys = []
-     for name in sorted(listdir_res):
-         phone = np.load("%s/%s" % (feature_dir, name))
-         npys.append(phone)
-     big_npy = np.concatenate(npys, 0)
-     big_npy_idx = np.arange(big_npy.shape[0])
-     np.random.shuffle(big_npy_idx)
-     big_npy = big_npy[big_npy_idx]
-     if big_npy.shape[0] > 2e5:
-         # if(1):
-         infos.append("Trying kmeans on %s shape to 10k centers." % big_npy.shape[0])
-         yield "\n".join(infos)
-         try:
-             big_npy = (
-                 MiniBatchKMeans(
-                     n_clusters=10000,
-                     verbose=True,
-                     batch_size=256 * config.n_cpu,
-                     compute_labels=False,
-                     init="random",
-                 )
-                 .fit(big_npy)
-                 .cluster_centers_
-             )
-         except Exception:
-             info = traceback.format_exc()
-             print(info)
-             infos.append(info)
-             yield "\n".join(infos)
-
-     np.save("%s/total_fea.npy" % exp_dir, big_npy)
-     n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])), big_npy.shape[0] // 39)
-     infos.append("%s,%s" % (big_npy.shape, n_ivf))
-     yield "\n".join(infos)
-     index = faiss.index_factory(256 if version19 == "v1" else 768, "IVF%s,Flat" % n_ivf)
-     # index = faiss.index_factory(256 if version19 == "v1" else 768, "IVF%s,PQ128x4fs,RFlat" % n_ivf)
-     infos.append("training")
-     yield "\n".join(infos)
-     index_ivf = faiss.extract_index_ivf(index)  #
-     index_ivf.nprobe = 1
-     index.train(big_npy)
-     faiss.write_index(
-         index,
-         "%s/trained_IVF%s_Flat_nprobe_%s_%s_%s.index"
-         % (exp_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19),
-     )
-     # faiss.write_index(index, '%s/trained_IVF%s_Flat_FastScan_%s.index' % (exp_dir, n_ivf, version19))
-     infos.append("adding")
-     yield "\n".join(infos)
-     batch_size_add = 8192
-     for i in range(0, big_npy.shape[0], batch_size_add):
-         index.add(big_npy[i : i + batch_size_add])
-     faiss.write_index(
-         index,
-         "%s/added_IVF%s_Flat_nprobe_%s_%s_%s.index"
-         % (exp_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19),
-     )
-     infos.append(
-         "成功构建索引,added_IVF%s_Flat_nprobe_%s_%s_%s.index"
-         % (n_ivf, index_ivf.nprobe, exp_dir1, version19)
-     )
-     # faiss.write_index(index, '%s/added_IVF%s_Flat_FastScan_%s.index' % (exp_dir, n_ivf, version19))
-     # infos.append("成功构建索引,added_IVF%s_Flat_FastScan_%s.index" % (n_ivf, version19))
-     yield "\n".join(infos)
-
-
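`train_index` above sizes the faiss IVF index with `n_ivf = min(int(16 * sqrt(N)), N // 39)`: the first term scales list count with the dataset, while the `N // 39` cap keeps roughly 39 or more training vectors per centroid, as faiss expects. A minimal sketch of that heuristic (`n_ivf_for` is a hypothetical helper name):

```python
import numpy as np

def n_ivf_for(n_vectors: int) -> int:
    """IVF list count heuristic used by train_index above.

    min() of a sqrt-scaled count and an N//39 cap, so small datasets
    never ask for more centroids than faiss can train reliably.
    """
    return min(int(16 * np.sqrt(n_vectors)), n_vectors // 39)

print(n_ivf_for(10000))   # the N//39 cap dominates for small datasets
print(n_ivf_for(200000))  # still capped by N//39 at this size
```

The resulting value is also baked into the index filename (`trained_IVF{n_ivf}_Flat_...`), which is how the inference tab later recognizes compatible indexes.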
- # but5.click(train1key, [exp_dir1, sr2, if_f0_3, trainset_dir4, spk_id5, gpus6, np7, f0method8, save_epoch10, total_epoch11, batch_size12, if_save_latest13, pretrained_G14, pretrained_D15, gpus16, if_cache_gpu17], info3)
- def train1key(
-     exp_dir1,
-     sr2,
-     if_f0_3,
-     trainset_dir4,
-     spk_id5,
-     np7,
-     f0method8,
-     save_epoch10,
-     total_epoch11,
-     batch_size12,
-     if_save_latest13,
-     pretrained_G14,
-     pretrained_D15,
-     gpus16,
-     if_cache_gpu17,
-     if_save_every_weights18,
-     version19,
- ):
-     infos = []
-
-     def get_info_str(strr):
-         infos.append(strr)
-         return "\n".join(infos)
-
-     model_log_dir = "%s/logs/%s" % (now_dir, exp_dir1)
-     preprocess_log_path = "%s/preprocess.log" % model_log_dir
-     extract_f0_feature_log_path = "%s/extract_f0_feature.log" % model_log_dir
-     gt_wavs_dir = "%s/0_gt_wavs" % model_log_dir
-     feature_dir = (
-         "%s/3_feature256" % model_log_dir
-         if version19 == "v1"
-         else "%s/3_feature768" % model_log_dir
-     )
-
-     os.makedirs(model_log_dir, exist_ok=True)
-     ######### step1: process the dataset
-     open(preprocess_log_path, "w").close()
-     cmd = (
-         config.python_cmd
-         + " trainset_preprocess_pipeline_print.py %s %s %s %s "
-         % (trainset_dir4, sr_dict[sr2], np7, model_log_dir)
-         + str(config.noparallel)
-     )
-     yield get_info_str(i18n("step1:正在处理数据"))
-     yield get_info_str(cmd)
-     p = Popen(cmd, shell=True)
-     p.wait()
-     with open(preprocess_log_path, "r") as f:
-         print(f.read())
-     ######### step2a: extract pitch
-     open(extract_f0_feature_log_path, "w").close()
-     if if_f0_3:
-         yield get_info_str("step2a:正在提取音高")
-         cmd = config.python_cmd + " extract_f0_print.py %s %s %s" % (
-             model_log_dir,
-             np7,
-             f0method8,
-         )
-         yield get_info_str(cmd)
-         p = Popen(cmd, shell=True, cwd=now_dir)
-         p.wait()
-         with open(extract_f0_feature_log_path, "r") as f:
-             print(f.read())
-     else:
-         yield get_info_str(i18n("step2a:无需提取音高"))
-     ####### step2b: extract features
-     yield get_info_str(i18n("step2b:正在提取特征"))
-     gpus = gpus16.split("-")
-     leng = len(gpus)
-     ps = []
-     for idx, n_g in enumerate(gpus):
-         cmd = config.python_cmd + " extract_feature_print.py %s %s %s %s %s %s" % (
-             config.device,
-             leng,
-             idx,
-             n_g,
-             model_log_dir,
-             version19,
-         )
-         yield get_info_str(cmd)
-         p = Popen(
-             cmd, shell=True, cwd=now_dir
-         )  # , shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=now_dir
-         ps.append(p)
-     for p in ps:
-         p.wait()
-     with open(extract_f0_feature_log_path, "r") as f:
-         print(f.read())
-     ####### step3a: train the model
-     yield get_info_str(i18n("step3a:正在训练模型"))
-     # generate the filelist
-     if if_f0_3:
-         f0_dir = "%s/2a_f0" % model_log_dir
-         f0nsf_dir = "%s/2b-f0nsf" % model_log_dir
-         names = (
-             set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)])
-             & set([name.split(".")[0] for name in os.listdir(feature_dir)])
-             & set([name.split(".")[0] for name in os.listdir(f0_dir)])
-             & set([name.split(".")[0] for name in os.listdir(f0nsf_dir)])
-         )
-     else:
-         names = set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) & set(
-             [name.split(".")[0] for name in os.listdir(feature_dir)]
-         )
-     opt = []
-     for name in names:
-         if if_f0_3:
-             opt.append(
-                 "%s/%s.wav|%s/%s.npy|%s/%s.wav.npy|%s/%s.wav.npy|%s"
-                 % (
-                     gt_wavs_dir.replace("\\", "\\\\"),
-                     name,
-                     feature_dir.replace("\\", "\\\\"),
-                     name,
-                     f0_dir.replace("\\", "\\\\"),
-                     name,
-                     f0nsf_dir.replace("\\", "\\\\"),
-                     name,
-                     spk_id5,
-                 )
-             )
-         else:
-             opt.append(
-                 "%s/%s.wav|%s/%s.npy|%s"
-                 % (
-                     gt_wavs_dir.replace("\\", "\\\\"),
-                     name,
-                     feature_dir.replace("\\", "\\\\"),
-                     name,
-                     spk_id5,
-                 )
-             )
-     fea_dim = 256 if version19 == "v1" else 768
-     if if_f0_3:
-         for _ in range(2):
-             opt.append(
-                 "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s"
-                 % (now_dir, sr2, now_dir, fea_dim, now_dir, now_dir, spk_id5)
-             )
-     else:
-         for _ in range(2):
-             opt.append(
-                 "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s"
-                 % (now_dir, sr2, now_dir, fea_dim, spk_id5)
-             )
-     shuffle(opt)
-     with open("%s/filelist.txt" % model_log_dir, "w") as f:
-         f.write("\n".join(opt))
-     yield get_info_str("write filelist done")
-     if gpus16:
-         cmd = (
-             config.python_cmd
-             + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -g %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s"
-             % (
-                 exp_dir1,
-                 sr2,
-                 1 if if_f0_3 else 0,
-                 batch_size12,
-                 gpus16,
-                 total_epoch11,
-                 save_epoch10,
-                 "-pg %s" % pretrained_G14 if pretrained_G14 != "" else "",
-                 "-pd %s" % pretrained_D15 if pretrained_D15 != "" else "",
-                 1 if if_save_latest13 == i18n("是") else 0,
-                 1 if if_cache_gpu17 == i18n("是") else 0,
-                 1 if if_save_every_weights18 == i18n("是") else 0,
-                 version19,
-             )
-         )
-     else:
-         cmd = (
-             config.python_cmd
-             + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s"
-             % (
-                 exp_dir1,
-                 sr2,
-                 1 if if_f0_3 else 0,
-                 batch_size12,
-                 total_epoch11,
-                 save_epoch10,
-                 "-pg %s" % pretrained_G14 if pretrained_G14 != "" else "",
-                 "-pd %s" % pretrained_D15 if pretrained_D15 != "" else "",
-                 1 if if_save_latest13 == i18n("是") else 0,
-                 1 if if_cache_gpu17 == i18n("是") else 0,
-                 1 if if_save_every_weights18 == i18n("是") else 0,
-                 version19,
-             )
-         )
-     yield get_info_str(cmd)
-     p = Popen(cmd, shell=True, cwd=now_dir)
-     p.wait()
-     yield get_info_str(i18n("训练结束, 您可查看控制台训练日志或实验文件夹下的train.log"))
-     ####### step3b: train the index
-     npys = []
-     listdir_res = list(os.listdir(feature_dir))
-     for name in sorted(listdir_res):
-         phone = np.load("%s/%s" % (feature_dir, name))
-         npys.append(phone)
-     big_npy = np.concatenate(npys, 0)
-
-     big_npy_idx = np.arange(big_npy.shape[0])
-     np.random.shuffle(big_npy_idx)
-     big_npy = big_npy[big_npy_idx]
-
-     if big_npy.shape[0] > 2e5:
-         # if(1):
-         info = "Trying kmeans on %s shape to 10k centers." % big_npy.shape[0]
-         print(info)
-         yield get_info_str(info)
-         try:
-             big_npy = (
-                 MiniBatchKMeans(
-                     n_clusters=10000,
-                     verbose=True,
-                     batch_size=256 * config.n_cpu,
-                     compute_labels=False,
-                     init="random",
-                 )
-                 .fit(big_npy)
-                 .cluster_centers_
-             )
-         except Exception:
-             info = traceback.format_exc()
-             print(info)
-             yield get_info_str(info)
-
-     np.save("%s/total_fea.npy" % model_log_dir, big_npy)
-
-     # n_ivf = big_npy.shape[0] // 39
-     n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])), big_npy.shape[0] // 39)
-     yield get_info_str("%s,%s" % (big_npy.shape, n_ivf))
-     index = faiss.index_factory(256 if version19 == "v1" else 768, "IVF%s,Flat" % n_ivf)
-     yield get_info_str("training index")
-     index_ivf = faiss.extract_index_ivf(index)  #
-     index_ivf.nprobe = 1
-     index.train(big_npy)
-     faiss.write_index(
-         index,
-         "%s/trained_IVF%s_Flat_nprobe_%s_%s_%s.index"
-         % (model_log_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19),
-     )
-     yield get_info_str("adding index")
-     batch_size_add = 8192
-     for i in range(0, big_npy.shape[0], batch_size_add):
-         index.add(big_npy[i : i + batch_size_add])
-     faiss.write_index(
-         index,
-         "%s/added_IVF%s_Flat_nprobe_%s_%s_%s.index"
-         % (model_log_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19),
-     )
-     yield get_info_str(
-         "成功构建索引, added_IVF%s_Flat_nprobe_%s_%s_%s.index"
-         % (n_ivf, index_ivf.nprobe, exp_dir1, version19)
-     )
-     yield get_info_str(i18n("全流程结束!"))
-
-
- # ckpt_path2.change(change_info_,[ckpt_path2],[sr__,if_f0__])
- def change_info_(ckpt_path):
-     if not os.path.exists(ckpt_path.replace(os.path.basename(ckpt_path), "train.log")):
-         return {"__type__": "update"}, {"__type__": "update"}, {"__type__": "update"}
-     try:
-         with open(
-             ckpt_path.replace(os.path.basename(ckpt_path), "train.log"), "r"
-         ) as f:
-             # the last tab-separated field of the first log line is a dict literal
-             info = eval(f.read().strip("\n").split("\n")[0].split("\t")[-1])
-             sr, f0 = info["sample_rate"], info["if_f0"]
-             version = "v2" if ("version" in info and info["version"] == "v2") else "v1"
-             return sr, str(f0), version
-     except Exception:
-         traceback.print_exc()
-         return {"__type__": "update"}, {"__type__": "update"}, {"__type__": "update"}
-
-
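`change_info_` above `eval()`s the dict literal found at the end of the first `train.log` line. Since that field is a plain literal, `ast.literal_eval` would parse it without executing arbitrary code; a sketch with a made-up log line (the timestamp/level fields are illustrative, only the tab-separated dict at the end matches the parsing above):

```python
import ast

# A hypothetical first line of train.log; the callback only cares about
# the last tab-separated field, which is a dict literal.
log_line = "2023-04-16 10:00:00\tINFO\t{'sample_rate': '40k', 'if_f0': 1, 'version': 'v2'}"

info = ast.literal_eval(log_line.split("\t")[-1])  # safer drop-in for eval()
sr, f0 = info["sample_rate"], info["if_f0"]
version = "v2" if info.get("version") == "v2" else "v1"
print(sr, f0, version)
```

`literal_eval` raises `ValueError` on anything that is not a Python literal, so the surrounding `try/except` still catches malformed logs the same way.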
- def export_onnx(ModelPath, ExportedPath):
-     cpt = torch.load(ModelPath, map_location="cpu")
-     cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0]
-     vec_channels = 256 if cpt.get("version", "v1") == "v1" else 768
-
-     test_phone = torch.rand(1, 200, vec_channels)  # hidden units
-     test_phone_lengths = torch.tensor([200]).long()  # hidden unit length (seemingly unused)
-     test_pitch = torch.randint(size=(1, 200), low=5, high=255)  # fundamental frequency (Hz)
-     test_pitchf = torch.rand(1, 200)  # NSF fundamental frequency
-     test_ds = torch.LongTensor([0])  # speaker ID
-     test_rnd = torch.rand(1, 192, 200)  # noise (adds a random factor)
-
-     device = "cpu"  # export device (does not affect how the model is used)
-
-     net_g = SynthesizerTrnMsNSFsidM(
-         *cpt["config"], is_half=False, version=cpt.get("version", "v1")
-     )  # fp32 export (fp16 support in C++ requires manual memory re-layout, so fp16 is not used for now)
-     net_g.load_state_dict(cpt["weight"], strict=False)
-     input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds", "rnd"]
-     output_names = [
-         "audio",
-     ]
-     # net_g.construct_spkmixmap(n_speaker)  # multi-speaker mix-track export
-     torch.onnx.export(
-         net_g,
-         (
-             test_phone.to(device),
-             test_phone_lengths.to(device),
-             test_pitch.to(device),
-             test_pitchf.to(device),
-             test_ds.to(device),
-             test_rnd.to(device),
-         ),
-         ExportedPath,
-         dynamic_axes={
-             "phone": [1],
-             "pitch": [1],
-             "pitchf": [1],
-             "rnd": [2],
-         },
-         do_constant_folding=False,
-         opset_version=13,
-         verbose=False,
-         input_names=input_names,
-         output_names=output_names,
-     )
-     return "Finished"
-
-
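`export_onnx` above traces the model with six dummy tensors whose shapes define the ONNX input signature (with the sequence axis marked dynamic). A numpy-only sketch of those shapes and dtypes, mirroring the torch tensors above; the `dummy` dict is just an illustration, not part of the export itself:

```python
import numpy as np

seq_len = 200
vec_channels = 768  # v2 feature width; a v1 checkpoint would use 256

# Shapes/dtypes mirroring the dummy torch tensors fed to torch.onnx.export above.
dummy = {
    "phone": np.random.rand(1, seq_len, vec_channels).astype(np.float32),   # hidden units
    "phone_lengths": np.array([seq_len], dtype=np.int64),                   # sequence length
    "pitch": np.random.randint(5, 255, size=(1, seq_len)).astype(np.int64), # coarse f0
    "pitchf": np.random.rand(1, seq_len).astype(np.float32),                # NSF f0
    "ds": np.array([0], dtype=np.int64),                                    # speaker ID
    "rnd": np.random.rand(1, 192, seq_len).astype(np.float32),              # noise
}
for name, arr in dummy.items():
    print(name, arr.shape, arr.dtype)
```

The `dynamic_axes` mapping in the export marks axis 1 of `phone`/`pitch`/`pitchf` (and axis 2 of `rnd`) as variable, so the exported graph accepts any sequence length, not just 200.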
- with gr.Blocks() as app:
-     gr.Markdown(
-         value=i18n(
-             "本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>使用需遵守的协议-LICENSE.txt</b>."
-         )
-     )
-     with gr.Tabs():
-         with gr.TabItem(i18n("模型推理")):
-             with gr.Row():
-                 sid0 = gr.Dropdown(label=i18n("推理音色"), choices=sorted(names))
-                 refresh_button = gr.Button(i18n("刷新音色列表和索引路径"), variant="primary")
-                 clean_button = gr.Button(i18n("卸载音色省显存"), variant="primary")
-                 spk_item = gr.Slider(
-                     minimum=0,
-                     maximum=2333,
-                     step=1,
-                     label=i18n("请选择说话人id"),
-                     value=0,
-                     visible=False,
-                     interactive=True,
-                 )
-                 clean_button.click(fn=clean, inputs=[], outputs=[sid0])
-             with gr.Group():
-                 gr.Markdown(
-                     value=i18n("男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ")
-                 )
-                 with gr.Row():
-                     with gr.Column():
-                         vc_transform0 = gr.Number(
-                             label=i18n("变调(整数, 半音数量, 升八度12降八度-12)"), value=0
-                         )
-                         input_audio0 = gr.Textbox(
-                             label=i18n("输入待处理音频文件路径(默认是正确格式示例)"),
-                             value="E:\\codes\\py39\\test-20230416b\\todo-songs\\冬之花clip1.wav",
-                         )
-                         f0method0 = gr.Radio(
-                             label=i18n(
-                                 "选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU"
-                             ),
-                             choices=["pm", "harvest", "crepe"],
-                             value="pm",
-                             interactive=True,
-                         )
-                         filter_radius0 = gr.Slider(
-                             minimum=0,
-                             maximum=7,
-                             label=i18n(">=3则使用对harvest音高识别的结果使用中值滤波,数值为滤波半径,使用可以削弱哑音"),
-                             value=3,
-                             step=1,
-                             interactive=True,
-                         )
-                     with gr.Column():
-                         file_index1 = gr.Textbox(
-                             label=i18n("特征检索库文件路径,为空则使用下拉的选择结果"),
-                             value="",
-                             interactive=True,
-                         )
-                         file_index2 = gr.Dropdown(
-                             label=i18n("自动检测index路径,下拉式选择(dropdown)"),
-                             choices=sorted(index_paths),
-                             interactive=True,
-                         )
-                         refresh_button.click(
-                             fn=change_choices, inputs=[], outputs=[sid0, file_index2]
-                         )
-                         # file_big_npy1 = gr.Textbox(
-                         #     label=i18n("特征文件路径"),
-                         #     value="E:\\codes\\py39\\vits_vc_gpu_train\\logs\\mi-test-1key\\total_fea.npy",
-                         #     interactive=True,
-                         # )
-                         index_rate1 = gr.Slider(
-                             minimum=0,
-                             maximum=1,
-                             label=i18n("检索特征占比"),
-                             value=0.88,
-                             interactive=True,
-                         )
-                     with gr.Column():
-                         resample_sr0 = gr.Slider(
-                             minimum=0,
-                             maximum=48000,
-                             label=i18n("后处理重采样至最终采样率,0为不进行重采样"),
-                             value=0,
-                             step=1,
-                             interactive=True,
-                         )
-                         rms_mix_rate0 = gr.Slider(
-                             minimum=0,
-                             maximum=1,
-                             label=i18n("输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络"),
-                             value=1,
-                             interactive=True,
-                         )
-                         protect0 = gr.Slider(
-                             minimum=0,
-                             maximum=0.5,
-                             label=i18n(
-                                 "保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果"
-                             ),
-                             value=0.33,
-                             step=0.01,
-                             interactive=True,
-                         )
-                     f0_file = gr.File(label=i18n("F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调"))
-                     but0 = gr.Button(i18n("转换"), variant="primary")
-                     with gr.Row():
-                         vc_output1 = gr.Textbox(label=i18n("输出信息"))
-                         vc_output2 = gr.Audio(label=i18n("输出音频(右下角三个点,点了可以下载)"))
-                     but0.click(
-                         vc_single,
-                         [
-                             spk_item,
-                             input_audio0,
-                             vc_transform0,
-                             f0_file,
-                             f0method0,
-                             file_index1,
-                             file_index2,
-                             # file_big_npy1,
-                             index_rate1,
-                             filter_radius0,
-                             resample_sr0,
-                             rms_mix_rate0,
-                             protect0,
-                         ],
-                         [vc_output1, vc_output2],
-                     )
-             with gr.Group():
-                 gr.Markdown(
-                     value=i18n("批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ")
-                 )
-                 with gr.Row():
-                     with gr.Column():
-                         vc_transform1 = gr.Number(
-                             label=i18n("变调(整数, 半音数量, 升八度12降八度-12)"), value=0
-                         )
-                         opt_input = gr.Textbox(label=i18n("指定输出文件夹"), value="opt")
-                         f0method1 = gr.Radio(
-                             label=i18n(
-                                 "选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU"
-                             ),
-                             choices=["pm", "harvest", "crepe"],
-                             value="pm",
-                             interactive=True,
-                         )
-                         filter_radius1 = gr.Slider(
-                             minimum=0,
-                             maximum=7,
-                             label=i18n(">=3则使用对harvest音高识别的结果使用中值滤波,数值为滤波半径,使用可以削弱哑音"),
-                             value=3,
-                             step=1,
-                             interactive=True,
-                         )
-                     with gr.Column():
-                         file_index3 = gr.Textbox(
-                             label=i18n("特征检索库文件路径,为空则使用下拉的选择结果"),
-                             value="",
-                             interactive=True,
-                         )
-                         file_index4 = gr.Dropdown(
-                             label=i18n("自动检测index路径,下拉式选择(dropdown)"),
-                             choices=sorted(index_paths),
-                             interactive=True,
-                         )
-                         refresh_button.click(
-                             fn=lambda: change_choices()[1],
-                             inputs=[],
-                             outputs=file_index4,
-                         )
-                         # file_big_npy2 = gr.Textbox(
-                         #     label=i18n("特征文件路径"),
-                         #     value="E:\\codes\\py39\\vits_vc_gpu_train\\logs\\mi-test-1key\\total_fea.npy",
-                         #     interactive=True,
-                         # )
-                         index_rate2 = gr.Slider(
-                             minimum=0,
-                             maximum=1,
-                             label=i18n("检索特征占比"),
-                             value=1,
-                             interactive=True,
-                         )
-                     with gr.Column():
-                         resample_sr1 = gr.Slider(
-                             minimum=0,
-                             maximum=48000,
-                             label=i18n("后处理重采样至最终采样率,0为不进行重采样"),
-                             value=0,
-                             step=1,
-                             interactive=True,
-                         )
-                         rms_mix_rate1 = gr.Slider(
-                             minimum=0,
-                             maximum=1,
-                             label=i18n("输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络"),
-                             value=1,
-                             interactive=True,
-                         )
-                         protect1 = gr.Slider(
-                             minimum=0,
-                             maximum=0.5,
-                             label=i18n(
-                                 "保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果"
-                             ),
-                             value=0.33,
-                             step=0.01,
-                             interactive=True,
-                         )
-                     with gr.Column():
-                         dir_input = gr.Textbox(
-                             label=i18n("输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)"),
-                             value="E:\\codes\\py39\\test-20230416b\\todo-songs",
-                         )
-                         inputs = gr.File(
-                             file_count="multiple", label=i18n("也可批量输入音频文件, 二选一, 优先读文件夹")
-                         )
-                     with gr.Row():
-                         format1 = gr.Radio(
-                             label=i18n("导出文件格式"),
-                             choices=["wav", "flac", "mp3", "m4a"],
-                             value="flac",
-                             interactive=True,
-                         )
-                         but1 = gr.Button(i18n("转换"), variant="primary")
-                         vc_output3 = gr.Textbox(label=i18n("输出信息"))
-                     but1.click(
-                         vc_multi,
-                         [
-                             spk_item,
-                             dir_input,
-                             opt_input,
-                             inputs,
-                             vc_transform1,
-                             f0method1,
-                             file_index3,
-                             file_index4,
-                             # file_big_npy2,
-                             index_rate2,
-                             filter_radius1,
-                             resample_sr1,
-                             rms_mix_rate1,
-                             protect1,
-                             format1,
-                         ],
-                         [vc_output3],
-                     )
-             sid0.change(
-                 fn=get_vc,
-                 inputs=[sid0, protect0, protect1],
-                 outputs=[spk_item, protect0, protect1],
-             )
-         with gr.TabItem(i18n("伴奏人声分离&去混响&去回声")):
-             with gr.Group():
-                 gr.Markdown(
-                     value=i18n(
-                         "人声伴奏分离批量处理, 使用UVR5模型。 <br>"
-                         "合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)。 <br>"
-                         "模型分为三类: <br>"
-                         "1、保留人声:不带和声的音频选这个,对主人声保留比HP5更好。内置HP2和HP3两个模型,HP3可能轻微漏伴奏但对主人声保留比HP2稍微好一丁点; <br>"
-                         "2、仅保留主人声:带和声的音频选这个,对主人声可能有削弱。内置HP5一个模型; <br> "
-                         "3、去混响、去延迟模型(by FoxJoy):<br>"
-                         "  (1)MDX-Net(onnx_dereverb):对于双通道混响是最好的选择,不能去除单通道混响;<br>"
-                         "&emsp;(234)DeEcho:去除延迟效果。Aggressive比Normal去除得更彻底,DeReverb额外去除混响,可去除单声道混响,但是对高频重的板式混响去不干净。<br>"
-                         "去混响/去延迟,附:<br>"
-                         "1、DeEcho-DeReverb模型的耗时是另外2个DeEcho模型的接近2倍;<br>"
-                         "2、MDX-Net-Dereverb模型挺慢的;<br>"
-                         "3、个人推荐的最干净的配置是先MDX-Net再DeEcho-Aggressive。"
-                     )
-                 )
-                 with gr.Row():
-                     with gr.Column():
-                         dir_wav_input = gr.Textbox(
-                             label=i18n("输入待处理音频文件夹路径"),
-                             value="E:\\codes\\py39\\test-20230416b\\todo-songs\\todo-songs",
-                         )
-                         wav_inputs = gr.File(
-                             file_count="multiple", label=i18n("也可批量输入音频文件, 二选一, 优先读文件夹")
-                         )
-                     with gr.Column():
-                         model_choose = gr.Dropdown(label=i18n("模型"), choices=uvr5_names)
-                         agg = gr.Slider(
-                             minimum=0,
-                             maximum=20,
-                             step=1,
-                             label="人声提取激进程度",
-                             value=10,
-                             interactive=True,
-                             visible=False,  # not exposed for tuning yet
-                         )
-                         opt_vocal_root = gr.Textbox(
-                             label=i18n("指定输出主人声文件夹"), value="opt"
-                         )
-                         opt_ins_root = gr.Textbox(
-                             label=i18n("指定输出非主人声文件夹"), value="opt"
-                         )
-                         format0 = gr.Radio(
-                             label=i18n("导出文件格式"),
-                             choices=["wav", "flac", "mp3", "m4a"],
-                             value="flac",
-                             interactive=True,
-                         )
-                     but2 = gr.Button(i18n("转换"), variant="primary")
-                     vc_output4 = gr.Textbox(label=i18n("输出信息"))
-                     but2.click(
-                         uvr,
-                         [
-                             model_choose,
-                             dir_wav_input,
-                             opt_vocal_root,
-                             wav_inputs,
-                             opt_ins_root,
-                             agg,
-                             format0,
-                         ],
-                         [vc_output4],
-                     )
-         with gr.TabItem(i18n("训练")):
-             gr.Markdown(
-                 value=i18n(
-                     "step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. "
-                 )
-             )
-             with gr.Row():
-                 exp_dir1 = gr.Textbox(label=i18n("输入实验名"), value="mi-test")
-                 sr2 = gr.Radio(
-                     label=i18n("目标采样率"),
-                     choices=["40k", "48k"],
-                     value="40k",
-                     interactive=True,
-                 )
-                 if_f0_3 = gr.Radio(
-                     label=i18n("模型是否带音高指导(唱歌一定要, 语音可以不要)"),
-                     choices=[True, False],
-                     value=True,
-                     interactive=True,
-                 )
-                 version19 = gr.Radio(
-                     label=i18n("版本"),
-                     choices=["v1", "v2"],
-                     value="v1",
-                     interactive=True,
-                     visible=True,
-                 )
-                 np7 = gr.Slider(
-                     minimum=0,
-                     maximum=config.n_cpu,
-                     step=1,
-                     label=i18n("提取音高和处理数据使用的CPU进程数"),
-                     value=int(np.ceil(config.n_cpu / 1.5)),
-                     interactive=True,
-                 )
-             with gr.Group():  # single-speaker for now; up to 4 speakers planned  # data processing
-                 gr.Markdown(
-                     value=i18n(
-                         "step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. "
-                     )
-                 )
-                 with gr.Row():
-                     trainset_dir4 = gr.Textbox(
-                         label=i18n("输入训练文件夹路径"), value="E:\\语音音频+标注\\米津玄师\\src"
-                     )
-                     spk_id5 = gr.Slider(
-                         minimum=0,
-                         maximum=4,
-                         step=1,
-                         label=i18n("请指定说话人id"),
-                         value=0,
-                         interactive=True,
-                     )
-                     but1 = gr.Button(i18n("处理数据"), variant="primary")
-                     info1 = gr.Textbox(label=i18n("输出信息"), value="")
-                     but1.click(
-                         preprocess_dataset, [trainset_dir4, exp_dir1, sr2, np7], [info1]
-                     )
-             with gr.Group():
-                 gr.Markdown(value=i18n("step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)"))
-                 with gr.Row():
-                     with gr.Column():
-                         gpus6 = gr.Textbox(
-                             label=i18n("以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2"),
-                             value=gpus,
-                             interactive=True,
-                         )
-                         gpu_info9 = gr.Textbox(label=i18n("显卡信息"), value=gpu_info)
-                     with gr.Column():
-                         f0method8 = gr.Radio(
-                             label=i18n(
-                                 "选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢"
-                             ),
-                             choices=["pm", "harvest", "dio"],
-                             value="harvest",
-                             interactive=True,
-                         )
-                     but2 = gr.Button(i18n("特征提取"), variant="primary")
-                     info2 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=8)
-                     but2.click(
-                         extract_f0_feature,
-                         [gpus6, np7, f0method8, if_f0_3, exp_dir1, version19],
-                         [info2],
-                     )
-             with gr.Group():
-                 gr.Markdown(value=i18n("step3: 填写训练设置, 开始训练模型和索引"))
-                 with gr.Row():
-                     save_epoch10 = gr.Slider(
-                         minimum=0,
-                         maximum=50,
-                         step=1,
-                         label=i18n("保存频率save_every_epoch"),
-                         value=5,
-                         interactive=True,
-                     )
-                     total_epoch11 = gr.Slider(
-                         minimum=0,
-                         maximum=1000,
-                         step=1,
-                         label=i18n("总训练轮数total_epoch"),
-                         value=20,
-                         interactive=True,
-                     )
-                     batch_size12 = gr.Slider(
-                         minimum=1,
-                         maximum=40,
-                         step=1,
-                         label=i18n("每张显卡的batch_size"),
-                         value=default_batch_size,
-                         interactive=True,
-                     )
-                     if_save_latest13 = gr.Radio(
-                         label=i18n("是否仅保存最新的ckpt文件以节省硬盘空间"),
-                         choices=[i18n("是"), i18n("否")],
-                         value=i18n("否"),
-                         interactive=True,
-                     )
-                     if_cache_gpu17 = gr.Radio(
-                         label=i18n(
-                             "是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速"
-                         ),
-                         choices=[i18n("是"), i18n("否")],
-                         value=i18n("否"),
-                         interactive=True,
-                     )
-                     if_save_every_weights18 = gr.Radio(
-                         label=i18n("是否在每次保存时间点将最终小模型保存至weights文件夹"),
-                         choices=[i18n("是"), i18n("否")],
-                         value=i18n("否"),
-                         interactive=True,
-                     )
-                 with gr.Row():
-                     pretrained_G14 = gr.Textbox(
-                         label=i18n("加载预训练底模G路径"),
-                         value="pretrained/f0G40k.pth",
-                         interactive=True,
-                     )
-                     pretrained_D15 = gr.Textbox(
-                         label=i18n("加载预训练底模D路径"),
-                         value="pretrained/f0D40k.pth",
-                         interactive=True,
-                     )
-                     sr2.change(
-                         change_sr2,
-                         [sr2, if_f0_3, version19],
-                         [pretrained_G14, pretrained_D15],
-                     )
-                     version19.change(
-                         change_version19,
-                         [sr2, if_f0_3, version19],
-                         [pretrained_G14, pretrained_D15, sr2],
-                     )
-                     if_f0_3.change(
-                         change_f0,
-                         [if_f0_3, sr2, version19],
-                         [f0method8, pretrained_G14, pretrained_D15],
-                     )
-                     gpus16 = gr.Textbox(
-                         label=i18n("以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2"),
-                         value=gpus,
-                         interactive=True,
-                     )
-                     but3 = gr.Button(i18n("训练模型"), variant="primary")
-                     but4 = gr.Button(i18n("训练特征索引"), variant="primary")
-                     but5 = gr.Button(i18n("一键训练"), variant="primary")
-                     info3 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=10)
-                     but3.click(
-                         click_train,
-                         [
-                             exp_dir1,
-                             sr2,
-                             if_f0_3,
-                             spk_id5,
-                             save_epoch10,
-                             total_epoch11,
-                             batch_size12,
-                             if_save_latest13,
-                             pretrained_G14,
-                             pretrained_D15,
-                             gpus16,
-                             if_cache_gpu17,
-                             if_save_every_weights18,
-                             version19,
-                         ],
-                         info3,
-                     )
-                     but4.click(train_index, [exp_dir1, version19], info3)
-                     but5.click(
-                         train1key,
-                         [
-                             exp_dir1,
-                             sr2,
-                             if_f0_3,
-                             trainset_dir4,
-                             spk_id5,
-                             np7,
-                             f0method8,
-                             save_epoch10,
-                             total_epoch11,
-                             batch_size12,
-                             if_save_latest13,
-                             pretrained_G14,
-                             pretrained_D15,
-                             gpus16,
-                             if_cache_gpu17,
-                             if_save_every_weights18,
-                             version19,
-                         ],
-                         info3,
-                     )
-
- with gr.TabItem(i18n("ckpt处理")):
1827
- with gr.Group():
1828
- gr.Markdown(value=i18n("模型融合, 可用于测试音色融合"))
1829
- with gr.Row():
1830
- ckpt_a = gr.Textbox(label=i18n("A模型路径"), value="", interactive=True)
1831
- ckpt_b = gr.Textbox(label=i18n("B模型路径"), value="", interactive=True)
1832
- alpha_a = gr.Slider(
1833
- minimum=0,
1834
- maximum=1,
1835
- label=i18n("A模型权重"),
1836
- value=0.5,
1837
- interactive=True,
1838
- )
1839
- with gr.Row():
1840
- sr_ = gr.Radio(
1841
- label=i18n("目标采样率"),
1842
- choices=["40k", "48k"],
1843
- value="40k",
1844
- interactive=True,
1845
- )
1846
- if_f0_ = gr.Radio(
1847
- label=i18n("模型是否带音高指导"),
1848
- choices=[i18n("是"), i18n("否")],
1849
- value=i18n("是"),
1850
- interactive=True,
1851
- )
1852
- info__ = gr.Textbox(
1853
- label=i18n("要置入的模型信息"), value="", max_lines=8, interactive=True
1854
- )
1855
- name_to_save0 = gr.Textbox(
1856
- label=i18n("保存的模型名不带后缀"),
1857
- value="",
1858
- max_lines=1,
1859
- interactive=True,
1860
- )
1861
- version_2 = gr.Radio(
1862
- label=i18n("模型版本型号"),
1863
- choices=["v1", "v2"],
1864
- value="v1",
1865
- interactive=True,
1866
- )
1867
- with gr.Row():
1868
- but6 = gr.Button(i18n("融合"), variant="primary")
1869
- info4 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=8)
1870
- but6.click(
1871
- merge,
1872
- [
1873
- ckpt_a,
1874
- ckpt_b,
1875
- alpha_a,
1876
- sr_,
1877
- if_f0_,
1878
- info__,
1879
- name_to_save0,
1880
- version_2,
1881
- ],
1882
- info4,
1883
- ) # def merge(path1,path2,alpha1,sr,f0,info):
1884
- with gr.Group():
1885
- gr.Markdown(value=i18n("修改模型信息(仅支持weights文件夹下提取的小模型文件)"))
1886
- with gr.Row():
1887
- ckpt_path0 = gr.Textbox(
1888
- label=i18n("模型路径"), value="", interactive=True
1889
- )
1890
- info_ = gr.Textbox(
1891
- label=i18n("要改的模型信息"), value="", max_lines=8, interactive=True
1892
- )
1893
- name_to_save1 = gr.Textbox(
1894
- label=i18n("保存的文件名, 默认空为和源文件同名"),
1895
- value="",
1896
- max_lines=8,
1897
- interactive=True,
1898
- )
1899
- with gr.Row():
1900
- but7 = gr.Button(i18n("修改"), variant="primary")
1901
- info5 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=8)
1902
- but7.click(change_info, [ckpt_path0, info_, name_to_save1], info5)
1903
- with gr.Group():
1904
- gr.Markdown(value=i18n("查看模型信息(仅支持weights文件夹下提取的小模型文件)"))
1905
- with gr.Row():
1906
- ckpt_path1 = gr.Textbox(
1907
- label=i18n("模型路径"), value="", interactive=True
1908
- )
1909
- but8 = gr.Button(i18n("查看"), variant="primary")
1910
- info6 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=8)
1911
- but8.click(show_info, [ckpt_path1], info6)
1912
- with gr.Group():
1913
- gr.Markdown(
1914
- value=i18n(
1915
- "模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况"
1916
- )
1917
- )
1918
- with gr.Row():
1919
- ckpt_path2 = gr.Textbox(
1920
- label=i18n("模型路径"),
1921
- value="E:\\codes\\py39\\logs\\mi-test_f0_48k\\G_23333.pth",
1922
- interactive=True,
1923
- )
1924
- save_name = gr.Textbox(
1925
- label=i18n("保存名"), value="", interactive=True
1926
- )
1927
- sr__ = gr.Radio(
1928
- label=i18n("目标采样率"),
1929
- choices=["32k", "40k", "48k"],
1930
- value="40k",
1931
- interactive=True,
1932
- )
1933
- if_f0__ = gr.Radio(
1934
- label=i18n("模型是否带音高指导,1是0否"),
1935
- choices=["1", "0"],
1936
- value="1",
1937
- interactive=True,
1938
- )
1939
- version_1 = gr.Radio(
1940
- label=i18n("模型版本型号"),
1941
- choices=["v1", "v2"],
1942
- value="v2",
1943
- interactive=True,
1944
- )
1945
- info___ = gr.Textbox(
1946
- label=i18n("要置入的模型信息"), value="", max_lines=8, interactive=True
1947
- )
1948
- but9 = gr.Button(i18n("提取"), variant="primary")
1949
- info7 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=8)
1950
- ckpt_path2.change(
1951
- change_info_, [ckpt_path2], [sr__, if_f0__, version_1]
1952
- )
1953
- but9.click(
1954
- extract_small_model,
1955
- [ckpt_path2, save_name, sr__, if_f0__, info___, version_1],
1956
- info7,
1957
- )
1958
-
1959
- with gr.TabItem(i18n("Onnx导出")):
1960
- with gr.Row():
1961
- ckpt_dir = gr.Textbox(label=i18n("RVC模型路径"), value="", interactive=True)
1962
- with gr.Row():
1963
- onnx_dir = gr.Textbox(
1964
- label=i18n("Onnx输出路径"), value="", interactive=True
1965
- )
1966
- with gr.Row():
1967
- infoOnnx = gr.Label(label="info")
1968
- with gr.Row():
1969
- butOnnx = gr.Button(i18n("导出Onnx模型"), variant="primary")
1970
- butOnnx.click(export_onnx, [ckpt_dir, onnx_dir], infoOnnx)
1971
-
1972
- tab_faq = i18n("常见问题解答")
1973
- with gr.TabItem(tab_faq):
1974
- try:
1975
- if tab_faq == "常见问题解答":
1976
- with open("docs/faq.md", "r", encoding="utf8") as f:
1977
- info = f.read()
1978
- else:
1979
- with open("docs/faq_en.md", "r", encoding="utf8") as f:
1980
- info = f.read()
1981
- gr.Markdown(value=info)
1982
- except:
1983
- gr.Markdown(traceback.format_exc())
1984
-
1985
- # with gr.TabItem(i18n("招募音高曲线前端编辑器")):
1986
- # gr.Markdown(value=i18n("加开发群联系我xxxxx"))
1987
- # with gr.TabItem(i18n("点击查看交流、问题反馈群号")):
1988
- # gr.Markdown(value=i18n("xxxxx"))
1989
-
1990
- if config.iscolab:
1991
- app.queue(concurrency_count=511, max_size=1022).launch(share=True)
1992
- else:
1993
- app.queue(concurrency_count=511, max_size=1022).launch(
1994
- server_name="0.0.0.0",
1995
- inbrowser=not config.noautoopen,
1996
- server_port=config.listen_port,
1997
- quiet=True,
1998
- )
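The deleted UI code above wires every `gr.Button` to a handler with `but.click(fn, inputs, outputs)`: named input components are read, `fn` is called, and its results are written back to the output components. As a rough illustration of that pattern only (a toy `Button` class, not the Gradio API), the registry can be mimicked in plain Python:

```python
class Button:
    """Toy stand-in for gr.Button, illustrating the click(fn, inputs, outputs) wiring above."""

    def __init__(self):
        self._handlers = []

    def click(self, fn, inputs, outputs):
        # Register a callback, keyed by input/output component names.
        self._handlers.append((fn, inputs, outputs))

    def fire(self, state):
        # Read named inputs from state, run each handler, write results back to outputs.
        for fn, inputs, outputs in self._handlers:
            result = fn(*[state[name] for name in inputs])
            if len(outputs) == 1:
                result = (result,)
            for name, value in zip(outputs, result):
                state[name] = value


state = {"a": 2, "b": 3, "sum": None}
but = Button()
but.click(lambda a, b: a + b, ["a", "b"], ["sum"])
but.fire(state)
print(state["sum"])  # 5
```

In the real file, `state` corresponds to the live component values and `fire` is triggered by the browser event; the registration call signature is the same.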
spaces/AI-Hobbyist/Hoyo-RVC/infer/infer-pm-index256.py DELETED
@@ -1,199 +0,0 @@
-"""
-
-Retrieval against the source features.
-"""
-import torch, pdb, os, parselmouth
-
-os.environ["CUDA_VISIBLE_DEVICES"] = "0"
-import numpy as np
-import soundfile as sf
-
-# from models import SynthesizerTrn256#hifigan_nonsf
-# from infer_pack.models import SynthesizerTrn256NSF as SynthesizerTrn256#hifigan_nsf
-from infer_pack.models import (
-    SynthesizerTrnMs256NSFsid as SynthesizerTrn256,
-)  # hifigan_nsf
-
-# from infer_pack.models import SynthesizerTrnMs256NSFsid_sim as SynthesizerTrn256#hifigan_nsf
-# from models import SynthesizerTrn256NSFsim as SynthesizerTrn256#hifigan_nsf
-# from models import SynthesizerTrn256NSFsimFlow as SynthesizerTrn256#hifigan_nsf
-
-
-from scipy.io import wavfile
-from fairseq import checkpoint_utils
-
-# import pyworld
-import librosa
-import torch.nn.functional as F
-import scipy.signal as signal
-
-# import torchcrepe
-from time import time as ttime
-
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-model_path = r"E:\codes\py39\vits_vc_gpu_train\hubert_base.pt"  #
-print("load model(s) from {}".format(model_path))
-models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
-    [model_path],
-    suffix="",
-)
-model = models[0]
-model = model.to(device)
-model = model.half()
-model.eval()
-
-# net_g = SynthesizerTrn256(1025,32,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2,2],512,[16,16,4,4],183,256,is_half=True)#hifigan#512#256
-# net_g = SynthesizerTrn256(1025,32,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2,2],512,[16,16,4,4],109,256,is_half=True)#hifigan#512#256
-net_g = SynthesizerTrn256(
-    1025,
-    32,
-    192,
-    192,
-    768,
-    2,
-    6,
-    3,
-    0,
-    "1",
-    [3, 7, 11],
-    [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
-    [10, 10, 2, 2],
-    512,
-    [16, 16, 4, 4],
-    183,
-    256,
-    is_half=True,
-)  # hifigan#512#256#no_dropout
-# net_g = SynthesizerTrn256(1025,32,192,192,768,2,3,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2,2],512,[16,16,4,4],0)#ts3
-# net_g = SynthesizerTrn256(1025,32,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2],512,[16,16,4],0)#hifigan-ps-sr
-#
-# net_g = SynthesizerTrn(1025, 32, 192, 192, 768, 2, 6, 3, 0.1, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [5,5], 512, [15,15], 0)#ms
-# net_g = SynthesizerTrn(1025, 32, 192, 192, 768, 2, 6, 3, 0.1, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10,10], 512, [16,16], 0)#idwt2
-
-# weights=torch.load("infer/ft-mi_1k-noD.pt")
-# weights=torch.load("infer/ft-mi-freeze-vocoder-flow-enc_q_1k.pt")
-# weights=torch.load("infer/ft-mi-freeze-vocoder_true_1k.pt")
-# weights=torch.load("infer/ft-mi-sim1k.pt")
-weights = torch.load("infer/ft-mi-no_opt-no_dropout.pt")
-print(net_g.load_state_dict(weights, strict=True))
-
-net_g.eval().to(device)
-net_g.half()
-
-
-def get_f0(x, p_len, f0_up_key=0):
-    time_step = 160 / 16000 * 1000
-    f0_min = 50
-    f0_max = 1100
-    f0_mel_min = 1127 * np.log(1 + f0_min / 700)
-    f0_mel_max = 1127 * np.log(1 + f0_max / 700)
-
-    f0 = (
-        parselmouth.Sound(x, 16000)
-        .to_pitch_ac(
-            time_step=time_step / 1000,
-            voicing_threshold=0.6,
-            pitch_floor=f0_min,
-            pitch_ceiling=f0_max,
-        )
-        .selected_array["frequency"]
-    )
-
-    pad_size = (p_len - len(f0) + 1) // 2
-    if pad_size > 0 or p_len - len(f0) - pad_size > 0:
-        f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
-    f0 *= pow(2, f0_up_key / 12)
-    f0bak = f0.copy()
-
-    f0_mel = 1127 * np.log(1 + f0 / 700)
-    f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
-        f0_mel_max - f0_mel_min
-    ) + 1
-    f0_mel[f0_mel <= 1] = 1
-    f0_mel[f0_mel > 255] = 255
-    # f0_mel[f0_mel > 188] = 188
-    f0_coarse = np.rint(f0_mel).astype(int)  # np.int alias is removed in modern NumPy
-    return f0_coarse, f0bak
-
-
-import faiss
-
-index = faiss.read_index("infer/added_IVF512_Flat_mi_baseline_src_feat.index")
-big_npy = np.load("infer/big_src_feature_mi.npy")
-ta0 = ta1 = ta2 = 0
-for idx, name in enumerate(
-    [
-        "冬之花clip1.wav",
-    ]
-):  ##
-    wav_path = "todo-songs/%s" % name  #
-    f0_up_key = -2  #
-    audio, sampling_rate = sf.read(wav_path)
-    if len(audio.shape) > 1:
-        audio = librosa.to_mono(audio.transpose(1, 0))
-    if sampling_rate != 16000:
-        audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
-
-    feats = torch.from_numpy(audio).float()
-    if feats.dim() == 2:  # double channels
-        feats = feats.mean(-1)
-    assert feats.dim() == 1, feats.dim()
-    feats = feats.view(1, -1)
-    padding_mask = torch.BoolTensor(feats.shape).fill_(False)
-    inputs = {
-        "source": feats.half().to(device),
-        "padding_mask": padding_mask.to(device),
-        "output_layer": 9,  # layer 9
-    }
-    if torch.cuda.is_available():
-        torch.cuda.synchronize()
-    t0 = ttime()
-    with torch.no_grad():
-        logits = model.extract_features(**inputs)
-        feats = model.final_proj(logits[0])
-
-    #### index-based feature retrieval
-    npy = feats[0].cpu().numpy().astype("float32")
-    D, I = index.search(npy, 1)
-    feats = (
-        torch.from_numpy(big_npy[I.squeeze()].astype("float16")).unsqueeze(0).to(device)
-    )
-
-    feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
-    if torch.cuda.is_available():
-        torch.cuda.synchronize()
-    t1 = ttime()
-    # p_len = min(feats.shape[1],10000,pitch.shape[0])  # too long would exhaust GPU memory
-    p_len = min(feats.shape[1], 10000)  #
-    pitch, pitchf = get_f0(audio, p_len, f0_up_key)
-    p_len = min(feats.shape[1], 10000, pitch.shape[0])  # too long would exhaust GPU memory
-    if torch.cuda.is_available():
-        torch.cuda.synchronize()
-    t2 = ttime()
-    feats = feats[:, :p_len, :]
-    pitch = pitch[:p_len]
-    pitchf = pitchf[:p_len]
-    p_len = torch.LongTensor([p_len]).to(device)
-    pitch = torch.LongTensor(pitch).unsqueeze(0).to(device)
-    sid = torch.LongTensor([0]).to(device)
-    pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device)
-    with torch.no_grad():
-        audio = (
-            net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]
-            .data.cpu()
-            .float()
-            .numpy()
-        )  # nsf
-    if torch.cuda.is_available():
-        torch.cuda.synchronize()
-    t3 = ttime()
-    ta0 += t1 - t0
-    ta1 += t2 - t1
-    ta2 += t3 - t2
-    # wavfile.write("ft-mi_1k-index256-noD-%s.wav"%name, 40000, audio)  ##
-    # wavfile.write("ft-mi-freeze-vocoder-flow-enc_q_1k-%s.wav"%name, 40000, audio)  ##
-    # wavfile.write("ft-mi-sim1k-%s.wav"%name, 40000, audio)  ##
-    wavfile.write("ft-mi-no_opt-no_dropout-%s.wav" % name, 40000, audio)  ##
-
-
-print(ta0, ta1, ta2)  #
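The `get_f0` function in the deleted script above quantizes pitch to 255 coarse bins by mapping Hz to the mel scale and rescaling between the mel values of `f0_min` and `f0_max`. A minimal scalar sketch of that mapping (the helper name `f0_to_coarse` is hypothetical; the script does this on whole NumPy arrays):

```python
import math


def f0_to_coarse(f0_hz, f0_min=50.0, f0_max=1100.0):
    # Mel conversion used above: mel = 1127 * ln(1 + f / 700)
    mel_min = 1127 * math.log(1 + f0_min / 700)
    mel_max = 1127 * math.log(1 + f0_max / 700)
    if f0_hz <= 0:
        return 1  # unvoiced frames (f0 == 0) collapse into bin 1
    mel = 1127 * math.log(1 + f0_hz / 700)
    # Rescale to [1, 255], then clamp, mirroring the array code above.
    coarse = (mel - mel_min) * 254 / (mel_max - mel_min) + 1
    return int(min(max(round(coarse), 1), 255))
```

So `f0_min` lands in bin 1, `f0_max` in bin 255, and transposing by `f0_up_key` semitones (a factor of `2 ** (k / 12)` in Hz) shifts frequencies to higher bins before quantization.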
spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/espnet_positional_embedding.py DELETED
@@ -1,113 +0,0 @@
-import math
-import torch
-
-
-class PositionalEncoding(torch.nn.Module):
-    """Positional encoding.
-    Args:
-        d_model (int): Embedding dimension.
-        dropout_rate (float): Dropout rate.
-        max_len (int): Maximum input length.
-        reverse (bool): Whether to reverse the input position.
-    """
-
-    def __init__(self, d_model, dropout_rate, max_len=5000, reverse=False):
-        """Construct a PositionalEncoding object."""
-        super(PositionalEncoding, self).__init__()
-        self.d_model = d_model
-        self.reverse = reverse
-        self.xscale = math.sqrt(self.d_model)
-        self.dropout = torch.nn.Dropout(p=dropout_rate)
-        self.pe = None
-        self.extend_pe(torch.tensor(0.0).expand(1, max_len))
-
-    def extend_pe(self, x):
-        """Reset the positional encodings."""
-        if self.pe is not None:
-            if self.pe.size(1) >= x.size(1):
-                if self.pe.dtype != x.dtype or self.pe.device != x.device:
-                    self.pe = self.pe.to(dtype=x.dtype, device=x.device)
-                return
-        pe = torch.zeros(x.size(1), self.d_model)
-        if self.reverse:
-            position = torch.arange(
-                x.size(1) - 1, -1, -1.0, dtype=torch.float32
-            ).unsqueeze(1)
-        else:
-            position = torch.arange(0, x.size(1), dtype=torch.float32).unsqueeze(1)
-        div_term = torch.exp(
-            torch.arange(0, self.d_model, 2, dtype=torch.float32)
-            * -(math.log(10000.0) / self.d_model)
-        )
-        pe[:, 0::2] = torch.sin(position * div_term)
-        pe[:, 1::2] = torch.cos(position * div_term)
-        pe = pe.unsqueeze(0)
-        self.pe = pe.to(device=x.device, dtype=x.dtype)
-
-    def forward(self, x: torch.Tensor):
-        """Add positional encoding.
-        Args:
-            x (torch.Tensor): Input tensor (batch, time, `*`).
-        Returns:
-            torch.Tensor: Encoded tensor (batch, time, `*`).
-        """
-        self.extend_pe(x)
-        x = x * self.xscale + self.pe[:, : x.size(1)]
-        return self.dropout(x)
-
-
-class ScaledPositionalEncoding(PositionalEncoding):
-    """Scaled positional encoding module.
-    See Sec. 3.2 https://arxiv.org/abs/1809.08895
-    Args:
-        d_model (int): Embedding dimension.
-        dropout_rate (float): Dropout rate.
-        max_len (int): Maximum input length.
-    """
-
-    def __init__(self, d_model, dropout_rate, max_len=5000):
-        """Initialize class."""
-        super().__init__(d_model=d_model, dropout_rate=dropout_rate, max_len=max_len)
-        self.alpha = torch.nn.Parameter(torch.tensor(1.0))
-
-    def reset_parameters(self):
-        """Reset parameters."""
-        self.alpha.data = torch.tensor(1.0)
-
-    def forward(self, x):
-        """Add positional encoding.
-        Args:
-            x (torch.Tensor): Input tensor (batch, time, `*`).
-        Returns:
-            torch.Tensor: Encoded tensor (batch, time, `*`).
-        """
-        self.extend_pe(x)
-        x = x + self.alpha * self.pe[:, : x.size(1)]
-        return self.dropout(x)
-
-
-class RelPositionalEncoding(PositionalEncoding):
-    """Relative positional encoding module.
-    See : Appendix B in https://arxiv.org/abs/1901.02860
-    Args:
-        d_model (int): Embedding dimension.
-        dropout_rate (float): Dropout rate.
-        max_len (int): Maximum input length.
-    """
-
-    def __init__(self, d_model, dropout_rate, max_len=5000):
-        """Initialize class."""
-        super().__init__(d_model, dropout_rate, max_len, reverse=True)
-
-    def forward(self, x):
-        """Compute positional encoding.
-        Args:
-            x (torch.Tensor): Input tensor (batch, time, `*`).
-        Returns:
-            torch.Tensor: Encoded tensor (batch, time, `*`).
-            torch.Tensor: Positional embedding tensor (1, time, `*`).
-        """
-        self.extend_pe(x)
-        x = x * self.xscale
-        pos_emb = self.pe[:, : x.size(1)]
-        return self.dropout(x) + self.dropout(pos_emb)
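The `extend_pe` method above builds the standard sinusoidal table: even channels get `sin(pos * exp(-i * ln(10000) / d_model))`, odd channels the matching cosine. A pure-Python mirror of that computation, without torch (the helper name `sinusoidal_pe` is illustrative):

```python
import math


def sinusoidal_pe(seq_len, d_model):
    # Mirrors PositionalEncoding.extend_pe above; d_model must be even.
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            # div_term = exp(i * -(ln(10000) / d_model)) = 10000 ** (-i / d_model)
            div_term = math.exp(i * -(math.log(10000.0) / d_model))
            pe[pos][i] = math.sin(pos * div_term)
            pe[pos][i + 1] = math.cos(pos * div_term)
    return pe


pe = sinusoidal_pe(seq_len=4, d_model=8)
```

Position 0 therefore encodes as alternating 0/1 (`sin(0)`, `cos(0)`), and each later position is a fixed rotation of the previous one per frequency pair, which is what lets attention recover relative offsets.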
spaces/Abhilashvj/planogram-compliance/utils/general.py DELETED
@@ -1,1496 +0,0 @@
1
- # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
- """
3
- General utils
4
- """
5
-
6
- import contextlib
7
- import glob
8
- import inspect
9
- import logging
10
- import logging.config
11
- import math
12
- import os
13
- import platform
14
- import random
15
- import re
16
- import signal
17
- import sys
18
- import time
19
- import urllib
20
- from copy import deepcopy
21
- from datetime import datetime
22
- from itertools import repeat
23
- from multiprocessing.pool import ThreadPool
24
- from pathlib import Path
25
- from subprocess import check_output
26
- from tarfile import is_tarfile
27
- from typing import Optional
28
- from zipfile import ZipFile, is_zipfile
29
-
30
- import cv2
31
- import IPython
32
- import numpy as np
33
- import pandas as pd
34
- import pkg_resources as pkg
35
- import torch
36
- import torchvision
37
- import yaml
38
-
39
- from utils import TryExcept, emojis
40
- from utils.downloads import gsutil_getsize
41
- from utils.metrics import box_iou, fitness
42
-
43
- FILE = Path(__file__).resolve()
44
- ROOT = FILE.parents[1] # YOLOv5 root directory
45
- RANK = int(os.getenv("RANK", -1))
46
-
47
- # Settings
48
- NUM_THREADS = min(
49
- 8, max(1, os.cpu_count() - 1)
50
- ) # number of YOLOv5 multiprocessing threads
51
- DATASETS_DIR = Path(
52
- os.getenv("YOLOv5_DATASETS_DIR", ROOT.parent / "datasets")
53
- ) # global datasets directory
54
- AUTOINSTALL = (
55
- str(os.getenv("YOLOv5_AUTOINSTALL", True)).lower() == "true"
56
- ) # global auto-install mode
57
- VERBOSE = (
58
- str(os.getenv("YOLOv5_VERBOSE", True)).lower() == "true"
59
- ) # global verbose mode
60
- TQDM_BAR_FORMAT = "{l_bar}{bar:10}{r_bar}" # tqdm bar format
61
- FONT = "Arial.ttf" # https://ultralytics.com/assets/Arial.ttf
62
-
63
- torch.set_printoptions(linewidth=320, precision=5, profile="long")
64
- np.set_printoptions(
65
- linewidth=320, formatter={"float_kind": "{:11.5g}".format}
66
- ) # format short g, %precision=5
67
- pd.options.display.max_columns = 10
68
- cv2.setNumThreads(
69
- 0
70
- ) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader)
71
- os.environ["NUMEXPR_MAX_THREADS"] = str(NUM_THREADS) # NumExpr max threads
72
- os.environ["OMP_NUM_THREADS"] = (
73
- "1" if platform.system() == "darwin" else str(NUM_THREADS)
74
- ) # OpenMP (PyTorch and SciPy)
75
-
76
-
77
- def is_ascii(s=""):
78
- # Is string composed of all ASCII (no UTF) characters? (note str().isascii() introduced in python 3.7)
79
- s = str(s) # convert list, tuple, None, etc. to str
80
- return len(s.encode().decode("ascii", "ignore")) == len(s)
81
-
82
-
83
- def is_chinese(s="人工智能"):
84
- # Is string composed of any Chinese characters?
85
- return bool(re.search("[\u4e00-\u9fff]", str(s)))
86
-
87
-
88
- def is_colab():
89
- # Is environment a Google Colab instance?
90
- return "google.colab" in sys.modules
91
-
92
-
93
- def is_notebook():
94
- # Is environment a Jupyter notebook? Verified on Colab, Jupyterlab, Kaggle, Paperspace
95
- ipython_type = str(type(IPython.get_ipython()))
96
- return "colab" in ipython_type or "zmqshell" in ipython_type
97
-
98
-
99
- def is_kaggle():
100
- # Is environment a Kaggle Notebook?
101
- return (
102
- os.environ.get("PWD") == "/kaggle/working"
103
- and os.environ.get("KAGGLE_URL_BASE") == "https://www.kaggle.com"
104
- )
105
-
106
-
107
- def is_docker() -> bool:
108
- """Check if the process runs inside a docker container."""
109
- if Path("/.dockerenv").exists():
110
- return True
111
- try: # check if docker is in control groups
112
- with open("/proc/self/cgroup") as file:
113
- return any("docker" in line for line in file)
114
- except OSError:
115
- return False
116
-
117
-
118
- def is_writeable(dir, test=False):
119
- # Return True if directory has write permissions, test opening a file with write permissions if test=True
120
- if not test:
121
- return os.access(dir, os.W_OK) # possible issues on Windows
122
- file = Path(dir) / "tmp.txt"
123
- try:
124
- with open(file, "w"): # open file with write permissions
125
- pass
126
- file.unlink() # remove file
127
- return True
128
- except OSError:
129
- return False
130
-
131
-
132
- LOGGING_NAME = "yolov5"
133
-
134
-
135
- def set_logging(name=LOGGING_NAME, verbose=True):
136
- # sets up logging for the given name
137
- rank = int(os.getenv("RANK", -1)) # rank in world for Multi-GPU trainings
138
- level = logging.INFO if verbose and rank in {-1, 0} else logging.ERROR
139
- logging.config.dictConfig(
140
- {
141
- "version": 1,
142
- "disable_existing_loggers": False,
143
- "formatters": {name: {"format": "%(message)s"}},
144
- "handlers": {
145
- name: {
146
- "class": "logging.StreamHandler",
147
- "formatter": name,
148
- "level": level,
149
- }
150
- },
151
- "loggers": {
152
- name: {
153
- "level": level,
154
- "handlers": [name],
155
- "propagate": False,
156
- }
157
- },
158
- }
159
- )
160
-
161
-
162
- set_logging(LOGGING_NAME) # run before defining LOGGER
163
- LOGGER = logging.getLogger(
164
- LOGGING_NAME
165
- ) # define globally (used in train.py, val.py, detect.py, etc.)
166
- if platform.system() == "Windows":
167
- for fn in LOGGER.info, LOGGER.warning:
168
- setattr(
169
- LOGGER, fn.__name__, lambda x: fn(emojis(x))
170
- ) # emoji safe logging
171
-
172
-
173
- def user_config_dir(dir="Ultralytics", env_var="YOLOV5_CONFIG_DIR"):
174
- # Return path of user configuration directory. Prefer environment variable if exists. Make dir if required.
175
- env = os.getenv(env_var)
176
- if env:
177
- path = Path(env) # use environment variable
178
- else:
179
- cfg = {
180
- "Windows": "AppData/Roaming",
181
- "Linux": ".config",
182
- "Darwin": "Library/Application Support",
183
- } # 3 OS dirs
184
- path = Path.home() / cfg.get(
185
- platform.system(), ""
186
- ) # OS-specific config dir
187
- path = (
188
- path if is_writeable(path) else Path("/tmp")
189
- ) / dir # GCP and AWS lambda fix, only /tmp is writeable
190
- path.mkdir(exist_ok=True) # make if required
191
- return path
192
-
193
-
194
- CONFIG_DIR = user_config_dir() # Ultralytics settings dir
195
-
196
-
197
- class Profile(contextlib.ContextDecorator):
198
- # YOLOv5 Profile class. Usage: @Profile() decorator or 'with Profile():' context manager
199
- def __init__(self, t=0.0):
200
- self.t = t
201
- self.cuda = torch.cuda.is_available()
202
-
203
- def __enter__(self):
204
- self.start = self.time()
205
- return self
206
-
207
- def __exit__(self, type, value, traceback):
208
- self.dt = self.time() - self.start # delta-time
209
- self.t += self.dt # accumulate dt
210
-
211
- def time(self):
212
- if self.cuda:
213
- torch.cuda.synchronize()
214
- return time.time()
215
-
216
-
217
- class Timeout(contextlib.ContextDecorator):
218
- # YOLOv5 Timeout class. Usage: @Timeout(seconds) decorator or 'with Timeout(seconds):' context manager
219
- def __init__(
220
- self, seconds, *, timeout_msg="", suppress_timeout_errors=True
221
- ):
222
- self.seconds = int(seconds)
223
- self.timeout_message = timeout_msg
224
- self.suppress = bool(suppress_timeout_errors)
225
-
226
- def _timeout_handler(self, signum, frame):
227
- raise TimeoutError(self.timeout_message)
228
-
229
- def __enter__(self):
230
- if platform.system() != "Windows": # not supported on Windows
231
- signal.signal(
232
- signal.SIGALRM, self._timeout_handler
233
- ) # Set handler for SIGALRM
234
- signal.alarm(
235
- self.seconds
236
- ) # start countdown for SIGALRM to be raised
237
-
238
- def __exit__(self, exc_type, exc_val, exc_tb):
239
- if platform.system() != "Windows":
240
- signal.alarm(0) # Cancel SIGALRM if it's scheduled
241
- if (
242
- self.suppress and exc_type is TimeoutError
243
- ): # Suppress TimeoutError
244
- return True
245
-
246
-
247
- class WorkingDirectory(contextlib.ContextDecorator):
248
- # Usage: @WorkingDirectory(dir) decorator or 'with WorkingDirectory(dir):' context manager
249
- def __init__(self, new_dir):
250
- self.dir = new_dir # new dir
251
- self.cwd = Path.cwd().resolve() # current dir
252
-
253
- def __enter__(self):
254
- os.chdir(self.dir)
255
-
256
- def __exit__(self, exc_type, exc_val, exc_tb):
257
- os.chdir(self.cwd)
258
-
259
-
260
- def methods(instance):
261
- # Get class/instance methods
262
- return [
263
- f
264
- for f in dir(instance)
265
- if callable(getattr(instance, f)) and not f.startswith("__")
266
- ]
267
-
268
-
269
- def print_args(args: Optional[dict] = None, show_file=True, show_func=False):
270
- # Print function arguments (optional args dict)
271
- x = inspect.currentframe().f_back # previous frame
272
- file, _, func, _, _ = inspect.getframeinfo(x)
273
- if args is None: # get args automatically
274
- args, _, _, frm = inspect.getargvalues(x)
275
- args = {k: v for k, v in frm.items() if k in args}
276
- try:
277
- file = Path(file).resolve().relative_to(ROOT).with_suffix("")
278
- except ValueError:
279
- file = Path(file).stem
280
- s = (f"{file}: " if show_file else "") + (f"{func}: " if show_func else "")
281
- LOGGER.info(colorstr(s) + ", ".join(f"{k}={v}" for k, v in args.items()))
282
-
283
-
284
- def init_seeds(seed=0, deterministic=False):
-     # Initialize random number generator (RNG) seeds https://pytorch.org/docs/stable/notes/randomness.html
-     random.seed(seed)
-     np.random.seed(seed)
-     torch.manual_seed(seed)
-     torch.cuda.manual_seed(seed)
-     torch.cuda.manual_seed_all(seed)  # for Multi-GPU, exception safe
-     # torch.backends.cudnn.benchmark = True  # AutoBatch problem https://github.com/ultralytics/yolov5/issues/9287
-     if deterministic and check_version(torch.__version__, "1.12.0"):  # https://github.com/ultralytics/yolov5/pull/8213
-         torch.use_deterministic_algorithms(True)
-         torch.backends.cudnn.deterministic = True
-         os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
-         os.environ["PYTHONHASHSEED"] = str(seed)
-
-
- def intersect_dicts(da, db, exclude=()):
-     # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values
-     return {k: v for k, v in da.items() if k in db and all(x not in k for x in exclude) and v.shape == db[k].shape}
-
-
- def get_default_args(func):
-     # Get func() default arguments
-     signature = inspect.signature(func)
-     return {k: v.default for k, v in signature.parameters.items() if v.default is not inspect.Parameter.empty}
-
-
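As a quick illustration of how `get_default_args` behaves, here is a minimal stdlib-only sketch; the `train` function below is a made-up example for demonstration, not part of this file:

```python
import inspect

def get_default_args(func):
    # Keep only the parameters that declare a default value
    signature = inspect.signature(func)
    return {k: v.default for k, v in signature.parameters.items() if v.default is not inspect.Parameter.empty}

def train(data, epochs=100, imgsz=640):  # hypothetical function for illustration
    pass

defaults = get_default_args(train)  # {'epochs': 100, 'imgsz': 640}
```

Positional-only parameters without defaults (like `data`) are skipped, which is what makes this useful for reconstructing hyperparameter dictionaries.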
- def get_latest_run(search_dir="."):
-     # Return path to most recent 'last.pt' in /runs (i.e. to --resume from)
-     last_list = glob.glob(f"{search_dir}/**/last*.pt", recursive=True)
-     return max(last_list, key=os.path.getctime) if last_list else ""
-
-
- def file_age(path=__file__):
-     # Return days since last file update
-     dt = datetime.now() - datetime.fromtimestamp(Path(path).stat().st_mtime)  # delta
-     return dt.days  # + dt.seconds / 86400  # fractional days
-
-
- def file_date(path=__file__):
-     # Return human-readable file modification date, i.e. '2021-3-26'
-     t = datetime.fromtimestamp(Path(path).stat().st_mtime)
-     return f"{t.year}-{t.month}-{t.day}"
-
-
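A small self-contained sketch of `file_age` and `file_date` applied to a freshly created temporary file (the tempfile usage here is illustrative, not from this repository):

```python
import tempfile
from datetime import datetime
from pathlib import Path

def file_age(path):
    # Whole days since the file was last modified
    dt = datetime.now() - datetime.fromtimestamp(Path(path).stat().st_mtime)
    return dt.days

def file_date(path):
    # Human-readable modification date, e.g. '2021-3-26'
    t = datetime.fromtimestamp(Path(path).stat().st_mtime)
    return f"{t.year}-{t.month}-{t.day}"

with tempfile.NamedTemporaryFile() as f:
    age = file_age(f.name)    # a file created just now is 0 days old
    date = file_date(f.name)  # today's date as 'YYYY-M-D'
```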
- def file_size(path):
-     # Return file/dir size (MB)
-     mb = 1 << 20  # bytes to MiB (1024 ** 2)
-     path = Path(path)
-     if path.is_file():
-         return path.stat().st_size / mb
-     elif path.is_dir():
-         return sum(f.stat().st_size for f in path.glob("**/*") if f.is_file()) / mb
-     else:
-         return 0.0
-
-
- def check_online():
-     # Check internet connectivity
-     import socket
-
-     def run_once():
-         # Check once
-         try:
-             socket.create_connection(("1.1.1.1", 443), 5)  # check host accessibility
-             return True
-         except OSError:
-             return False
-
-     return run_once() or run_once()  # check twice to increase robustness to intermittent connectivity issues
-
-
- def git_describe(path=ROOT):  # path must be a directory
-     # Return human-readable git description, i.e. v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe
-     try:
-         assert (Path(path) / ".git").is_dir()
-         return check_output(f"git -C {path} describe --tags --long --always", shell=True).decode()[:-1]
-     except Exception:
-         return ""
-
-
- @TryExcept()
- @WorkingDirectory(ROOT)
- def check_git_status(repo="ultralytics/yolov5", branch="master"):
-     # YOLOv5 status check, recommend 'git pull' if code is out of date
-     url = f"https://github.com/{repo}"
-     msg = f", for updates see {url}"
-     s = colorstr("github: ")  # string
-     assert Path(".git").exists(), s + "skipping check (not a git repository)" + msg
-     assert check_online(), s + "skipping check (offline)" + msg
-
-     splits = re.split(pattern=r"\s", string=check_output("git remote -v", shell=True).decode())
-     matches = [repo in s for s in splits]
-     if any(matches):
-         remote = splits[matches.index(True) - 1]
-     else:
-         remote = "ultralytics"
-         check_output(f"git remote add {remote} {url}", shell=True)
-     check_output(f"git fetch {remote}", shell=True, timeout=5)  # git fetch
-     local_branch = check_output("git rev-parse --abbrev-ref HEAD", shell=True).decode().strip()  # checked out
-     n = int(check_output(f"git rev-list {local_branch}..{remote}/{branch} --count", shell=True))  # commits behind
-     if n > 0:
-         pull = "git pull" if remote == "origin" else f"git pull {remote} {branch}"
-         s += f"⚠️ YOLOv5 is out of date by {n} commit{'s' * (n > 1)}. Use `{pull}` or `git clone {url}` to update."
-     else:
-         s += f"up to date with {url} ✅"
-     LOGGER.info(s)
-
-
- @WorkingDirectory(ROOT)
- def check_git_info(path="."):
-     # YOLOv5 git info check, return {remote, branch, commit}
-     check_requirements("gitpython")
-     import git
-
-     try:
-         repo = git.Repo(path)
-         remote = repo.remotes.origin.url.replace(".git", "")  # i.e. 'https://github.com/ultralytics/yolov5'
-         commit = repo.head.commit.hexsha  # i.e. '3134699c73af83aac2a481435550b968d5792c0d'
-         try:
-             branch = repo.active_branch.name  # i.e. 'main'
-         except TypeError:  # not on any branch
-             branch = None  # i.e. 'detached HEAD' state
-         return {"remote": remote, "branch": branch, "commit": commit}
-     except git.exc.InvalidGitRepositoryError:  # path is not a git dir
-         return {"remote": None, "branch": None, "commit": None}
-
-
- def check_python(minimum="3.7.0"):
-     # Check current python version vs. required python version
-     check_version(platform.python_version(), minimum, name="Python ", hard=True)
-
-
- def check_version(current="0.0.0", minimum="0.0.0", name="version ", pinned=False, hard=False, verbose=False):
-     # Check version vs. required version
-     current, minimum = (pkg.parse_version(x) for x in (current, minimum))
-     result = (current == minimum) if pinned else (current >= minimum)  # bool
-     s = f"WARNING ⚠️ {name}{minimum} is required by YOLOv5, but {name}{current} is currently installed"  # string
-     if hard:
-         assert result, emojis(s)  # assert min requirements met
-     if verbose and not result:
-         LOGGER.warning(s)
-     return result
-
-
- @TryExcept()
- def check_requirements(requirements=ROOT / "requirements.txt", exclude=(), install=True, cmds=""):
-     # Check installed dependencies meet YOLOv5 requirements (pass *.txt file or list of packages or single package str)
-     prefix = colorstr("red", "bold", "requirements:")
-     check_python()  # check python version
-     if isinstance(requirements, Path):  # requirements.txt file
-         file = requirements.resolve()
-         assert file.exists(), f"{prefix} {file} not found, check failed."
-         with file.open() as f:
-             requirements = [f"{x.name}{x.specifier}" for x in pkg.parse_requirements(f) if x.name not in exclude]
-     elif isinstance(requirements, str):
-         requirements = [requirements]
-
-     s = ""
-     n = 0
-     for r in requirements:
-         try:
-             pkg.require(r)
-         except (pkg.VersionConflict, pkg.DistributionNotFound):  # exception if requirements not met
-             s += f'"{r}" '
-             n += 1
-
-     if s and install and AUTOINSTALL:  # check environment variable
-         LOGGER.info(f"{prefix} YOLOv5 requirement{'s' * (n > 1)} {s}not found, attempting AutoUpdate...")
-         try:
-             # assert check_online(), "AutoUpdate skipped (offline)"
-             LOGGER.info(check_output(f"pip install {s} {cmds}", shell=True).decode())
-             source = file if "file" in locals() else requirements
-             s = (f"{prefix} {n} package{'s' * (n > 1)} updated per {source}\n"
-                  f"{prefix} ⚠️ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\n")
-             LOGGER.info(s)
-         except Exception as e:
-             LOGGER.warning(f"{prefix} ❌ {e}")
-
-
- def check_img_size(imgsz, s=32, floor=0):
-     # Verify image size is a multiple of stride s in each dimension
-     if isinstance(imgsz, int):  # integer i.e. img_size=640
-         new_size = max(make_divisible(imgsz, int(s)), floor)
-     else:  # list i.e. img_size=[640, 480]
-         imgsz = list(imgsz)  # convert to list if tuple
-         new_size = [max(make_divisible(x, int(s)), floor) for x in imgsz]
-     if new_size != imgsz:
-         LOGGER.warning(f"WARNING ⚠️ --img-size {imgsz} must be multiple of max stride {s}, updating to {new_size}")
-     return new_size
-
-
- def check_imshow(warn=False):
-     # Check if environment supports image displays
-     try:
-         assert not is_notebook()
-         assert not is_docker()
-         cv2.imshow("test", np.zeros((1, 1, 3)))
-         cv2.waitKey(1)
-         cv2.destroyAllWindows()
-         cv2.waitKey(1)
-         return True
-     except Exception as e:
-         if warn:
-             LOGGER.warning(f"WARNING ⚠️ Environment does not support cv2.imshow() or PIL Image.show()\n{e}")
-         return False
-
-
- def check_suffix(file="yolov5s.pt", suffix=(".pt",), msg=""):
-     # Check file(s) for acceptable suffix
-     if file and suffix:
-         if isinstance(suffix, str):
-             suffix = [suffix]
-         for f in file if isinstance(file, (list, tuple)) else [file]:
-             s = Path(f).suffix.lower()  # file suffix
-             if len(s):
-                 assert s in suffix, f"{msg}{f} acceptable suffix is {suffix}"
-
-
- def check_yaml(file, suffix=(".yaml", ".yml")):
-     # Search/download YAML file (if necessary) and return path, checking suffix
-     return check_file(file, suffix)
-
-
- def check_file(file, suffix=""):
-     # Search/download file (if necessary) and return path
-     check_suffix(file, suffix)  # optional
-     file = str(file)  # convert to str()
-     if os.path.isfile(file) or not file:  # exists
-         return file
-     elif file.startswith(("http:/", "https:/")):  # download
-         url = file  # warning: Pathlib turns :// -> :/
-         file = Path(urllib.parse.unquote(file).split("?")[0]).name  # '%2F' to '/', split https://url.com/file.txt?auth
-         if os.path.isfile(file):
-             LOGGER.info(f"Found {url} locally at {file}")  # file already exists
-         else:
-             LOGGER.info(f"Downloading {url} to {file}...")
-             torch.hub.download_url_to_file(url, file)
-             assert Path(file).exists() and Path(file).stat().st_size > 0, f"File download failed: {url}"  # check
-         return file
-     elif file.startswith("clearml://"):  # ClearML Dataset ID
-         assert "clearml" in sys.modules, "ClearML is not installed, so cannot use ClearML dataset. Try running 'pip install clearml'."
-         return file
-     else:  # search
-         files = []
-         for d in "data", "models", "utils":  # search directories
-             files.extend(glob.glob(str(ROOT / d / "**" / file), recursive=True))  # find file
-         assert len(files), f"File not found: {file}"  # assert file was found
-         assert len(files) == 1, f"Multiple files match '{file}', specify exact path: {files}"  # assert unique
-         return files[0]  # return file
-
-
- def check_font(font=FONT, progress=False):
-     # Download font to CONFIG_DIR if necessary
-     font = Path(font)
-     file = CONFIG_DIR / font.name
-     if not font.exists() and not file.exists():
-         url = f"https://ultralytics.com/assets/{font.name}"
-         LOGGER.info(f"Downloading {url} to {file}...")
-         torch.hub.download_url_to_file(url, str(file), progress=progress)
-
-
- def check_dataset(data, autodownload=True):
-     # Download, check and/or unzip dataset if not found locally
-
-     # Download (optional)
-     extract_dir = ""
-     if isinstance(data, (str, Path)) and (is_zipfile(data) or is_tarfile(data)):
-         download(data, dir=f"{DATASETS_DIR}/{Path(data).stem}", unzip=True, delete=False, curl=False, threads=1)
-         data = next((DATASETS_DIR / Path(data).stem).rglob("*.yaml"))
-         extract_dir, autodownload = data.parent, False
-
-     # Read yaml (optional)
-     if isinstance(data, (str, Path)):
-         data = yaml_load(data)  # dictionary
-
-     # Checks
-     for k in "train", "val", "names":
-         assert k in data, emojis(f"data.yaml '{k}:' field missing ❌")
-     if isinstance(data["names"], (list, tuple)):  # old array format
-         data["names"] = dict(enumerate(data["names"]))  # convert to dict
-     assert all(isinstance(k, int) for k in data["names"].keys()), "data.yaml names keys must be integers, i.e. 2: car"
-     data["nc"] = len(data["names"])
-
-     # Resolve paths
-     path = Path(extract_dir or data.get("path") or "")  # optional 'path' default to '.'
-     if not path.is_absolute():
-         path = (ROOT / path).resolve()
-         data["path"] = path  # download scripts
-     for k in "train", "val", "test":
-         if data.get(k):  # prepend path
-             if isinstance(data[k], str):
-                 x = (path / data[k]).resolve()
-                 if not x.exists() and data[k].startswith("../"):
-                     x = (path / data[k][3:]).resolve()
-                 data[k] = str(x)
-             else:
-                 data[k] = [str((path / x).resolve()) for x in data[k]]
-
-     # Parse yaml
-     train, val, test, s = (data.get(x) for x in ("train", "val", "test", "download"))
-     if val:
-         val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])]  # val path
-         if not all(x.exists() for x in val):
-             LOGGER.info("\nDataset not found ⚠️, missing paths %s" % [str(x) for x in val if not x.exists()])
-             if not s or not autodownload:
-                 raise Exception("Dataset not found ❌")
-             t = time.time()
-             if s.startswith("http") and s.endswith(".zip"):  # URL
-                 f = Path(s).name  # filename
-                 LOGGER.info(f"Downloading {s} to {f}...")
-                 torch.hub.download_url_to_file(s, f)
-                 Path(DATASETS_DIR).mkdir(parents=True, exist_ok=True)  # create root
-                 unzip_file(f, path=DATASETS_DIR)  # unzip
-                 Path(f).unlink()  # remove zip
-                 r = None  # success
-             elif s.startswith("bash "):  # bash script
-                 LOGGER.info(f"Running {s} ...")
-                 r = os.system(s)
-             else:  # python script
-                 r = exec(s, {"yaml": data})  # return None
-             dt = f"({round(time.time() - t, 1)}s)"
-             s = f"success ✅ {dt}, saved to {colorstr('bold', DATASETS_DIR)}" if r in (0, None) else f"failure {dt} ❌"
-             LOGGER.info(f"Dataset download {s}")
-     check_font("Arial.ttf" if is_ascii(data["names"]) else "Arial.Unicode.ttf", progress=True)  # download fonts
-     return data  # dictionary
-
-
- def check_amp(model):
-     # Check PyTorch Automatic Mixed Precision (AMP) functionality. Return True on correct operation
-     from models.common import AutoShape, DetectMultiBackend
-
-     def amp_allclose(model, im):
-         # All close FP32 vs AMP results
-         m = AutoShape(model, verbose=False)  # model
-         a = m(im).xywhn[0]  # FP32 inference
-         m.amp = True
-         b = m(im).xywhn[0]  # AMP inference
-         return a.shape == b.shape and torch.allclose(a, b, atol=0.1)  # close to 10% absolute tolerance
-
-     prefix = colorstr("AMP: ")
-     device = next(model.parameters()).device  # get model device
-     if device.type in ("cpu", "mps"):
-         return False  # AMP only used on CUDA devices
-     f = ROOT / "data" / "images" / "bus.jpg"  # image to check
-     im = f if f.exists() else "https://ultralytics.com/images/bus.jpg" if check_online() else np.ones((640, 640, 3))
-     try:
-         assert amp_allclose(deepcopy(model), im) or amp_allclose(DetectMultiBackend("yolov5n.pt", device), im)
-         LOGGER.info(f"{prefix}checks passed ✅")
-         return True
-     except Exception:
-         help_url = "https://github.com/ultralytics/yolov5/issues/7908"
-         LOGGER.warning(f"{prefix}checks failed ❌, disabling Automatic Mixed Precision. See {help_url}")
-         return False
-
-
- def yaml_load(file="data.yaml"):
-     # Single-line safe yaml loading
-     with open(file, errors="ignore") as f:
-         return yaml.safe_load(f)
-
-
- def yaml_save(file="data.yaml", data={}):
-     # Single-line safe yaml saving
-     with open(file, "w") as f:
-         yaml.safe_dump({k: str(v) if isinstance(v, Path) else v for k, v in data.items()}, f, sort_keys=False)
-
-
- def unzip_file(file, path=None, exclude=(".DS_Store", "__MACOSX")):
-     # Unzip a *.zip file to path/, excluding files containing strings in exclude list
-     if path is None:
-         path = Path(file).parent  # default path
-     with ZipFile(file) as zipObj:
-         for f in zipObj.namelist():  # list all archived filenames in the zip
-             if all(x not in f for x in exclude):
-                 zipObj.extract(f, path=path)
-
-
- def url2file(url):
-     # Convert URL to filename, i.e. https://url.com/file.txt?auth -> file.txt
-     url = str(Path(url)).replace(":/", "://")  # Pathlib turns :// -> :/
-     return Path(urllib.parse.unquote(url)).name.split("?")[0]  # '%2F' to '/', split https://url.com/file.txt?auth
-
-
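`url2file` is self-contained enough to demonstrate standalone; a minimal sketch showing the query-string split and `%2F` decoding (the example URLs are made up):

```python
import urllib.parse
from pathlib import Path

def url2file(url):
    # Strip the query string and directories, keeping only the decoded filename
    url = str(Path(url)).replace(":/", "://")  # Pathlib collapses :// to :/
    return Path(urllib.parse.unquote(url)).name.split("?")[0]

name = url2file("https://url.com/weights%2Fyolov5s.pt?auth=123")  # -> 'yolov5s.pt'
```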
- def download(url, dir=".", unzip=True, delete=True, curl=False, threads=1, retry=3):
-     # Multithreaded file download and unzip function, used in data.yaml for autodownload
-     def download_one(url, dir):
-         # Download 1 file
-         success = True
-         if os.path.isfile(url):
-             f = Path(url)  # filename
-         else:  # does not exist
-             f = dir / Path(url).name
-             LOGGER.info(f"Downloading {url} to {f}...")
-             for i in range(retry + 1):
-                 if curl:
-                     s = "sS" if threads > 1 else ""  # silent
-                     r = os.system(f'curl -# -{s}L "{url}" -o "{f}" --retry 9 -C -')  # curl download with retry, continue
-                     success = r == 0
-                 else:
-                     torch.hub.download_url_to_file(url, f, progress=threads == 1)  # torch download
-                     success = f.is_file()
-                 if success:
-                     break
-                 elif i < retry:
-                     LOGGER.warning(f"⚠️ Download failure, retrying {i + 1}/{retry} {url}...")
-                 else:
-                     LOGGER.warning(f"❌ Failed to download {url}...")
-
-         if unzip and success and (f.suffix == ".gz" or is_zipfile(f) or is_tarfile(f)):
-             LOGGER.info(f"Unzipping {f}...")
-             if is_zipfile(f):
-                 unzip_file(f, dir)  # unzip
-             elif is_tarfile(f):
-                 os.system(f"tar xf {f} --directory {f.parent}")  # unzip
-             elif f.suffix == ".gz":
-                 os.system(f"tar xfz {f} --directory {f.parent}")  # unzip
-             if delete:
-                 f.unlink()  # remove zip
-
-     dir = Path(dir)
-     dir.mkdir(parents=True, exist_ok=True)  # make directory
-     if threads > 1:
-         pool = ThreadPool(threads)
-         pool.imap(lambda x: download_one(*x), zip(url, repeat(dir)))  # multithreaded
-         pool.close()
-         pool.join()
-     else:
-         for u in [url] if isinstance(url, (str, Path)) else url:
-             download_one(u, dir)
-
-
- def make_divisible(x, divisor):
-     # Returns nearest x divisible by divisor
-     if isinstance(divisor, torch.Tensor):
-         divisor = int(divisor.max())  # to int
-     return math.ceil(x / divisor) * divisor
-
-
- def clean_str(s):
-     # Cleans a string by replacing special characters with underscore _
-     return re.sub(pattern="[|@#!¡·$€%&()=?¿^*;:,¨´><+]", repl="_", string=s)
-
-
- def one_cycle(y1=0.0, y2=1.0, steps=100):
-     # lambda function for sinusoidal ramp from y1 to y2 https://arxiv.org/pdf/1812.01187.pdf
-     return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1
-
-
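These three small helpers can be sketched without torch; the `make_divisible` version below drops the tensor-divisor branch so the example stays stdlib-only (an assumption of this sketch, not a change to the file above):

```python
import math
import re

def make_divisible(x, divisor):
    # Round x up to the nearest multiple of divisor (stride alignment)
    return math.ceil(x / divisor) * divisor

def clean_str(s):
    # Replace shell- and filesystem-unfriendly characters with underscores
    return re.sub(pattern="[|@#!¡·$€%&()=?¿^*;:,¨´><+]", repl="_", string=s)

def one_cycle(y1=0.0, y2=1.0, steps=100):
    # Sinusoidal ramp from y1 to y2 over `steps` (used as a learning-rate lambda)
    return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1

size = make_divisible(641, 32)        # -> 672, the next multiple of 32
name = clean_str("rtsp://user@host")  # ':' and '@' become '_', '/' is kept
lr = one_cycle(0.1, 0.01, steps=300)  # lr(0) == 0.1, lr(300) == 0.01
```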
- def colorstr(*input):
-     # Colors a string https://en.wikipedia.org/wiki/ANSI_escape_code, i.e. colorstr('blue', 'hello world')
-     *args, string = input if len(input) > 1 else ("blue", "bold", input[0])  # color arguments, string
-     colors = {
-         "black": "\033[30m",  # basic colors
-         "red": "\033[31m",
-         "green": "\033[32m",
-         "yellow": "\033[33m",
-         "blue": "\033[34m",
-         "magenta": "\033[35m",
-         "cyan": "\033[36m",
-         "white": "\033[37m",
-         "bright_black": "\033[90m",  # bright colors
-         "bright_red": "\033[91m",
-         "bright_green": "\033[92m",
-         "bright_yellow": "\033[93m",
-         "bright_blue": "\033[94m",
-         "bright_magenta": "\033[95m",
-         "bright_cyan": "\033[96m",
-         "bright_white": "\033[97m",
-         "end": "\033[0m",  # misc
-         "bold": "\033[1m",
-         "underline": "\033[4m",
-     }
-     return "".join(colors[x] for x in args) + f"{string}" + colors["end"]
-
-
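A runnable sketch of the `colorstr` idea with a trimmed palette (only four of the codes above, to keep the example short); with a single argument it defaults to blue + bold, and the string is always terminated with the reset code:

```python
def colorstr(*input):
    # Wrap a string in ANSI escape codes, e.g. colorstr('blue', 'hello world')
    *args, string = input if len(input) > 1 else ("blue", "bold", input[0])
    colors = {  # trimmed palette for illustration; the full table is above
        "blue": "\033[34m",
        "red": "\033[31m",
        "bold": "\033[1m",
        "end": "\033[0m",
    }
    return "".join(colors[x] for x in args) + f"{string}" + colors["end"]

s = colorstr("hello")  # '\033[34m\033[1mhello\033[0m'
```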
- def labels_to_class_weights(labels, nc=80):
-     # Get class weights (inverse frequency) from training labels
-     if labels[0] is None:  # no labels loaded
-         return torch.Tensor()
-
-     labels = np.concatenate(labels, 0)  # labels.shape = (866643, 5) for COCO
-     classes = labels[:, 0].astype(int)  # labels = [class xywh]
-     weights = np.bincount(classes, minlength=nc)  # occurrences per class
-
-     # Prepend gridpoint count (for uCE training)
-     # gpi = ((320 / 32 * np.array([1, 2, 4])) ** 2 * 3).sum()  # gridpoints per image
-     # weights = np.hstack([gpi * len(labels) - weights.sum() * 9, weights * 9]) ** 0.5  # prepend gridpoints to start
-
-     weights[weights == 0] = 1  # replace empty bins with 1
-     weights = 1 / weights  # number of targets per class
-     weights /= weights.sum()  # normalize
-     return torch.from_numpy(weights).float()
-
-
- def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)):
-     # Produces image weights based on class_weights and image contents
-     # Usage: index = random.choices(range(n), weights=image_weights, k=1)  # weighted image sample
-     class_counts = np.array([np.bincount(x[:, 0].astype(int), minlength=nc) for x in labels])
-     return (class_weights.reshape(1, nc) * class_counts).sum(1)
-
-
- def coco80_to_coco91_class():  # converts 80-index (val2014) to 91-index (paper)
-     # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/
-     # a = np.loadtxt('data/coco.names', dtype='str', delimiter='\n')
-     # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\n')
-     # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)]  # darknet to coco
-     # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)]  # coco to darknet
-     return [
-         1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33,
-         34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61,
-         62, 63, 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90]
-
-
- def xyxy2xywh(x):
-     # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right
-     y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
-     y[..., 0] = (x[..., 0] + x[..., 2]) / 2  # x center
-     y[..., 1] = (x[..., 1] + x[..., 3]) / 2  # y center
-     y[..., 2] = x[..., 2] - x[..., 0]  # width
-     y[..., 3] = x[..., 3] - x[..., 1]  # height
-     return y
-
-
- def xywh2xyxy(x):
-     # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
-     y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
-     y[..., 0] = x[..., 0] - x[..., 2] / 2  # top left x
-     y[..., 1] = x[..., 1] - x[..., 3] / 2  # top left y
-     y[..., 2] = x[..., 0] + x[..., 2] / 2  # bottom right x
-     y[..., 3] = x[..., 1] + x[..., 3] / 2  # bottom right y
-     return y
-
-
- def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0):
-     # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
-     y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
-     y[..., 0] = w * (x[..., 0] - x[..., 2] / 2) + padw  # top left x
-     y[..., 1] = h * (x[..., 1] - x[..., 3] / 2) + padh  # top left y
-     y[..., 2] = w * (x[..., 0] + x[..., 2] / 2) + padw  # bottom right x
-     y[..., 3] = h * (x[..., 1] + x[..., 3] / 2) + padh  # bottom right y
-     return y
-
-
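The two conversions above are exact inverses of each other; a NumPy-only round-trip sketch (the torch branch is dropped here since the inputs are arrays):

```python
import numpy as np

def xyxy2xywh(x):
    # Corner format (x1, y1, x2, y2) -> center format (x, y, w, h)
    y = np.copy(x)
    y[..., 0] = (x[..., 0] + x[..., 2]) / 2  # x center
    y[..., 1] = (x[..., 1] + x[..., 3]) / 2  # y center
    y[..., 2] = x[..., 2] - x[..., 0]        # width
    y[..., 3] = x[..., 3] - x[..., 1]        # height
    return y

def xywh2xyxy(x):
    # Center format (x, y, w, h) -> corner format (x1, y1, x2, y2)
    y = np.copy(x)
    y[..., 0] = x[..., 0] - x[..., 2] / 2  # top left x
    y[..., 1] = x[..., 1] - x[..., 3] / 2  # top left y
    y[..., 2] = x[..., 0] + x[..., 2] / 2  # bottom right x
    y[..., 3] = x[..., 1] + x[..., 3] / 2  # bottom right y
    return y

boxes = np.array([[10.0, 20.0, 50.0, 80.0]])  # one box as (x1, y1, x2, y2)
xywh = xyxy2xywh(boxes)                        # -> [[30., 50., 40., 60.]]
roundtrip = xywh2xyxy(xywh)                    # recovers the original corners
```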
- def xyxy2xywhn(x, w=640, h=640, clip=False, eps=0.0):
-     # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] normalized where xy1=top-left, xy2=bottom-right
-     if clip:
-         clip_boxes(x, (h - eps, w - eps))  # warning: inplace clip
-     y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
-     y[..., 0] = ((x[..., 0] + x[..., 2]) / 2) / w  # x center
-     y[..., 1] = ((x[..., 1] + x[..., 3]) / 2) / h  # y center
-     y[..., 2] = (x[..., 2] - x[..., 0]) / w  # width
-     y[..., 3] = (x[..., 3] - x[..., 1]) / h  # height
-     return y
-
-
- def xyn2xy(x, w=640, h=640, padw=0, padh=0):
-     # Convert normalized segments into pixel segments, shape (n,2)
-     y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
-     y[..., 0] = w * x[..., 0] + padw  # top left x
-     y[..., 1] = h * x[..., 1] + padh  # top left y
-     return y
-
-
- def segment2box(segment, width=640, height=640):
-     # Convert 1 segment label to 1 box label, applying inside-image constraint, i.e. (xy1, xy2, ...) to (xyxy)
-     x, y = segment.T  # segment xy
-     inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height)
-     x, y = x[inside], y[inside]
-     return np.array([x.min(), y.min(), x.max(), y.max()]) if any(x) else np.zeros((1, 4))  # xyxy
-
-
- def segments2boxes(segments):
-     # Convert segment labels to box labels, i.e. (cls, xy1, xy2, ...) to (cls, xywh)
-     boxes = []
-     for s in segments:
-         x, y = s.T  # segment xy
-         boxes.append([x.min(), y.min(), x.max(), y.max()])  # cls, xyxy
-     return xyxy2xywh(np.array(boxes))  # cls, xywh
-
-
- def resample_segments(segments, n=1000):
-     # Up-sample an (n,2) segment
-     for i, s in enumerate(segments):
-         s = np.concatenate((s, s[0:1, :]), axis=0)
-         x = np.linspace(0, len(s) - 1, n)
-         xp = np.arange(len(s))
-         segments[i] = np.concatenate([np.interp(x, xp, s[:, i]) for i in range(2)]).reshape(2, -1).T  # segment xy
-     return segments
-
-
- def scale_boxes(img1_shape, boxes, img0_shape, ratio_pad=None):
-     # Rescale boxes (xyxy) from img1_shape to img0_shape
-     if ratio_pad is None:  # calculate from img0_shape
-         gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1])  # gain = old / new
-         pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2  # wh padding
-     else:
-         gain = ratio_pad[0][0]
-         pad = ratio_pad[1]
-
-     boxes[..., [0, 2]] -= pad[0]  # x padding
-     boxes[..., [1, 3]] -= pad[1]  # y padding
-     boxes[..., :4] /= gain
-     clip_boxes(boxes, img0_shape)
-     return boxes
-
-
- def scale_segments(img1_shape, segments, img0_shape, ratio_pad=None, normalize=False):
-     # Rescale coords (xyxy) from img1_shape to img0_shape
-     if ratio_pad is None:  # calculate from img0_shape
-         gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1])  # gain = old / new
-         pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2  # wh padding
-     else:
-         gain = ratio_pad[0][0]
-         pad = ratio_pad[1]
-
-     segments[:, 0] -= pad[0]  # x padding
-     segments[:, 1] -= pad[1]  # y padding
-     segments /= gain
-     clip_segments(segments, img0_shape)
-     if normalize:
-         segments[:, 0] /= img0_shape[1]  # width
-         segments[:, 1] /= img0_shape[0]  # height
-     return segments
-
-
- def clip_boxes(boxes, shape):
-     # Clip boxes (xyxy) to image shape (height, width)
-     if isinstance(boxes, torch.Tensor):  # faster individually
-         boxes[..., 0].clamp_(0, shape[1])  # x1
-         boxes[..., 1].clamp_(0, shape[0])  # y1
-         boxes[..., 2].clamp_(0, shape[1])  # x2
-         boxes[..., 3].clamp_(0, shape[0])  # y2
-     else:  # np.array (faster grouped)
-         boxes[..., [0, 2]] = boxes[..., [0, 2]].clip(0, shape[1])  # x1, x2
-         boxes[..., [1, 3]] = boxes[..., [1, 3]].clip(0, shape[0])  # y1, y2
-
-
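The NumPy branch of `clip_boxes` can be demonstrated on its own; a minimal sketch of the in-place clip against a (height, width) image shape, with the torch branch omitted since the input is an array:

```python
import numpy as np

def clip_boxes(boxes, shape):
    # In-place clip of xyxy boxes to a (height, width) image shape
    boxes[..., [0, 2]] = boxes[..., [0, 2]].clip(0, shape[1])  # x1, x2 to [0, width]
    boxes[..., [1, 3]] = boxes[..., [1, 3]].clip(0, shape[0])  # y1, y2 to [0, height]

b = np.array([[-5.0, 10.0, 700.0, 500.0]])  # box spilling past the image borders
clip_boxes(b, (480, 640))                   # image is 480 high, 640 wide
# b is now [[0., 10., 640., 480.]]
```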
- def non_max_suppression(
-         prediction,
-         conf_thres=0.25,
-         iou_thres=0.45,
-         classes=None,
-         agnostic=False,
-         multi_label=False,
-         labels=(),
-         max_det=300,
-         nm=0,  # number of masks
- ):
-     """Non-Maximum Suppression (NMS) on inference results to reject overlapping detections
-
-     Returns:
-          list of detections, on (n,6) tensor per image [xyxy, conf, cls]
-     """
-
-     # Checks
-     assert 0 <= conf_thres <= 1, f"Invalid Confidence threshold {conf_thres}, valid values are between 0.0 and 1.0"
-     assert 0 <= iou_thres <= 1, f"Invalid IoU {iou_thres}, valid values are between 0.0 and 1.0"
-     if isinstance(prediction, (list, tuple)):  # YOLOv5 model in validation model, output = (inference_out, loss_out)
-         prediction = prediction[0]  # select only inference output
-
-     device = prediction.device
-     mps = "mps" in device.type  # Apple MPS
-     if mps:  # MPS not fully supported yet, convert tensors to CPU before NMS
-         prediction = prediction.cpu()
-     bs = prediction.shape[0]  # batch size
-     nc = prediction.shape[2] - nm - 5  # number of classes
-     xc = prediction[..., 4] > conf_thres  # candidates
-
-     # Settings
-     # min_wh = 2  # (pixels) minimum box width and height
-     max_wh = 7680  # (pixels) maximum box width and height
-     max_nms = 30000  # maximum number of boxes into torchvision.ops.nms()
-     time_limit = 0.5 + 0.05 * bs  # seconds to quit after
-     redundant = True  # require redundant detections
-     multi_label &= nc > 1  # multiple labels per box (adds 0.5ms/img)
-     merge = False  # use merge-NMS
-
-     t = time.time()
-     mi = 5 + nc  # mask start index
-     output = [torch.zeros((0, 6 + nm), device=prediction.device)] * bs
-     for xi, x in enumerate(prediction):  # image index, image inference
-         # Apply constraints
-         # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0  # width-height
-         x = x[xc[xi]]  # confidence
-
-         # Cat apriori labels if autolabelling
-         if labels and len(labels[xi]):
-             lb = labels[xi]
-             v = torch.zeros((len(lb), nc + nm + 5), device=x.device)
-             v[:, :4] = lb[:, 1:5]  # box
-             v[:, 4] = 1.0  # conf
-             v[range(len(lb)), lb[:, 0].long() + 5] = 1.0  # cls
-             x = torch.cat((x, v), 0)
-
-         # If none remain process next image
-         if not x.shape[0]:
-             continue
-
-         # Compute conf
-         x[:, 5:] *= x[:, 4:5]  # conf = obj_conf * cls_conf
-
-         # Box/Mask
-         box = xywh2xyxy(x[:, :4])  # center_x, center_y, width, height) to (x1, y1, x2, y2)
-         mask = x[:, mi:]  # zero columns if no masks
-
-         # Detections matrix nx6 (xyxy, conf, cls)
-         if multi_label:
-             i, j = (x[:, 5:mi] > conf_thres).nonzero(as_tuple=False).T
-             x = torch.cat((box[i], x[i, 5 + j, None], j[:, None].float(), mask[i]), 1)
-         else:  # best class only
-             conf, j = x[:, 5:mi].max(1, keepdim=True)
-             x = torch.cat((box, conf, j.float(), mask), 1)[conf.view(-1) > conf_thres]
-
-         # Filter by class
-         if classes is not None:
-             x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
-         # Apply finite constraint
-         # if not torch.isfinite(x).all():
-         #     x = x[torch.isfinite(x).all(1)]
-
-         # Check shape
-         n = x.shape[0]  # number of boxes
-         if not n:  # no boxes
-             continue
-         x = x[x[:, 4].argsort(descending=True)[:max_nms]]  # sort by confidence and remove excess boxes
-
-         # Batched NMS
-         c = x[:, 5:6] * (0 if agnostic else max_wh)  # classes
-         boxes, scores = x[:, :4] + c, x[:, 4]  # boxes (offset by class), scores
-         i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
-         i = i[:max_det]  # limit detections
-         if merge and (1 < n < 3E3):  # Merge NMS (boxes merged using weighted mean)
-             # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
-             iou = box_iou(boxes[i], boxes) > iou_thres  # iou matrix
-             weights = iou * scores[None]  # box weights
-             x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True)  # merged boxes
-             if redundant:
1301
- i = i[iou.sum(1) > 1] # require redundancy
1302
-
1303
- output[xi] = x[i]
1304
- if mps:
1305
- output[xi] = output[xi].to(device)
1306
- if (time.time() - t) > time_limit:
1307
- LOGGER.warning(
1308
- f"WARNING ⚠️ NMS time limit {time_limit:.3f}s exceeded"
1309
- )
1310
- break # time limit exceeded
1311
-
1312
- return output
1313
-
1314
-
1315
- def strip_optimizer(
1316
- f="best.pt", s=""
1317
- ): # from utils.general import *; strip_optimizer()
1318
- # Strip optimizer from 'f' to finalize training, optionally save as 's'
1319
- x = torch.load(f, map_location=torch.device("cpu"))
1320
- if x.get("ema"):
1321
- x["model"] = x["ema"] # replace model with ema
1322
- for k in "optimizer", "best_fitness", "ema", "updates": # keys
1323
- x[k] = None
1324
- x["epoch"] = -1
1325
- x["model"].half() # to FP16
1326
- for p in x["model"].parameters():
1327
- p.requires_grad = False
1328
- torch.save(x, s or f)
1329
- mb = os.path.getsize(s or f) / 1e6 # filesize
1330
- LOGGER.info(
1331
- f"Optimizer stripped from {f},{f' saved as {s},' if s else ''} {mb:.1f}MB"
1332
- )
1333
-
1334
-
1335
- def print_mutation(
1336
- keys, results, hyp, save_dir, bucket, prefix=colorstr("evolve: ")
1337
- ):
1338
- evolve_csv = save_dir / "evolve.csv"
1339
- evolve_yaml = save_dir / "hyp_evolve.yaml"
1340
- keys = tuple(keys) + tuple(hyp.keys()) # [results + hyps]
1341
- keys = tuple(x.strip() for x in keys)
1342
- vals = results + tuple(hyp.values())
1343
- n = len(keys)
1344
-
1345
- # Download (optional)
1346
- if bucket:
1347
- url = f"gs://{bucket}/evolve.csv"
1348
- if gsutil_getsize(url) > (
1349
- evolve_csv.stat().st_size if evolve_csv.exists() else 0
1350
- ):
1351
- os.system(
1352
- f"gsutil cp {url} {save_dir}"
1353
- ) # download evolve.csv if larger than local
1354
-
1355
- # Log to evolve.csv
1356
- s = (
1357
- ""
1358
- if evolve_csv.exists()
1359
- else (("%20s," * n % keys).rstrip(",") + "\n")
1360
- ) # add header
1361
- with open(evolve_csv, "a") as f:
1362
- f.write(s + ("%20.5g," * n % vals).rstrip(",") + "\n")
1363
-
1364
- # Save yaml
1365
- with open(evolve_yaml, "w") as f:
1366
- data = pd.read_csv(evolve_csv, skipinitialspace=True)
1367
- data = data.rename(columns=lambda x: x.strip()) # strip keys
1368
- i = np.argmax(fitness(data.values[:, :4])) #
1369
- generations = len(data)
1370
- f.write(
1371
- "# YOLOv5 Hyperparameter Evolution Results\n"
1372
- + f"# Best generation: {i}\n"
1373
- + f"# Last generation: {generations - 1}\n"
1374
- + "# "
1375
- + ", ".join(f"{x.strip():>20s}" for x in keys[:7])
1376
- + "\n"
1377
- + "# "
1378
- + ", ".join(f"{x:>20.5g}" for x in data.values[i, :7])
1379
- + "\n\n"
1380
- )
1381
- yaml.safe_dump(data.loc[i][7:].to_dict(), f, sort_keys=False)
1382
-
1383
- # Print to screen
1384
- LOGGER.info(
1385
- prefix
1386
- + f"{generations} generations finished, current result:\n"
1387
- + prefix
1388
- + ", ".join(f"{x.strip():>20s}" for x in keys)
1389
- + "\n"
1390
- + prefix
1391
- + ", ".join(f"{x:20.5g}" for x in vals)
1392
- + "\n\n"
1393
- )
1394
-
1395
- if bucket:
1396
- os.system(
1397
- f"gsutil cp {evolve_csv} {evolve_yaml} gs://{bucket}"
1398
- ) # upload
1399
-
1400
-
1401
- def apply_classifier(x, model, img, im0):
1402
- # Apply a second stage classifier to YOLO outputs
1403
- # Example model = torchvision.models.__dict__['efficientnet_b0'](pretrained=True).to(device).eval()
1404
- im0 = [im0] if isinstance(im0, np.ndarray) else im0
1405
- for i, d in enumerate(x): # per image
1406
- if d is not None and len(d):
1407
- d = d.clone()
1408
-
1409
- # Reshape and pad cutouts
1410
- b = xyxy2xywh(d[:, :4]) # boxes
1411
- b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # rectangle to square
1412
- b[:, 2:] = b[:, 2:] * 1.3 + 30 # pad
1413
- d[:, :4] = xywh2xyxy(b).long()
1414
-
1415
- # Rescale boxes from img_size to im0 size
1416
- scale_boxes(img.shape[2:], d[:, :4], im0[i].shape)
1417
-
1418
- # Classes
1419
- pred_cls1 = d[:, 5].long()
1420
- ims = []
1421
- for a in d:
1422
- cutout = im0[i][int(a[1]) : int(a[3]), int(a[0]) : int(a[2])]
1423
- im = cv2.resize(cutout, (224, 224)) # BGR
1424
-
1425
- im = im[:, :, ::-1].transpose(
1426
- 2, 0, 1
1427
- ) # BGR to RGB, to 3x416x416
1428
- im = np.ascontiguousarray(
1429
- im, dtype=np.float32
1430
- ) # uint8 to float32
1431
- im /= 255 # 0 - 255 to 0.0 - 1.0
1432
- ims.append(im)
1433
-
1434
- pred_cls2 = model(torch.Tensor(ims).to(d.device)).argmax(
1435
- 1
1436
- ) # classifier prediction
1437
- x[i] = x[i][
1438
- pred_cls1 == pred_cls2
1439
- ] # retain matching class detections
1440
-
1441
- return x
1442
-
1443
-
1444
- def increment_path(path, exist_ok=False, sep="", mkdir=False):
1445
- # Increment file or directory path, i.e. runs/exp --> runs/exp{sep}2, runs/exp{sep}3, ... etc.
1446
- path = Path(path) # os-agnostic
1447
- if path.exists() and not exist_ok:
1448
- path, suffix = (
1449
- (path.with_suffix(""), path.suffix)
1450
- if path.is_file()
1451
- else (path, "")
1452
- )
1453
-
1454
- # Method 1
1455
- for n in range(2, 9999):
1456
- p = f"{path}{sep}{n}{suffix}" # increment path
1457
- if not os.path.exists(p): #
1458
- break
1459
- path = Path(p)
1460
-
1461
- # Method 2 (deprecated)
1462
- # dirs = glob.glob(f"{path}{sep}*") # similar paths
1463
- # matches = [re.search(rf"{path.stem}{sep}(\d+)", d) for d in dirs]
1464
- # i = [int(m.groups()[0]) for m in matches if m] # indices
1465
- # n = max(i) + 1 if i else 2 # increment number
1466
- # path = Path(f"{path}{sep}{n}{suffix}") # increment path
1467
-
1468
- if mkdir:
1469
- path.mkdir(parents=True, exist_ok=True) # make directory
1470
-
1471
- return path
1472
-
1473
-
1474
- # OpenCV Multilanguage-friendly functions ------------------------------------------------------------------------------------
1475
- imshow_ = cv2.imshow # copy to avoid recursion errors
1476
-
1477
-
1478
- def imread(path, flags=cv2.IMREAD_COLOR):
1479
- return cv2.imdecode(np.fromfile(path, np.uint8), flags)
1480
-
1481
-
1482
- def imwrite(path, im):
1483
- try:
1484
- cv2.imencode(Path(path).suffix, im)[1].tofile(path)
1485
- return True
1486
- except Exception:
1487
- return False
1488
-
1489
-
1490
- def imshow(path, im):
1491
- imshow_(path.encode("unicode_escape").decode(), im)
1492
-
1493
-
1494
- cv2.imread, cv2.imwrite, cv2.imshow = imread, imwrite, imshow # redefine
1495
-
1496
- # Variables ------------------------------------------------------------------------------------------------------------
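The `increment_path` helper removed above is self-contained enough to exercise in isolation. A minimal standalone sketch of the same numbering logic (a hypothetical simplification: it drops the file-suffix splitting and `mkdir` handling of the original) is:

```python
import os
import tempfile
from pathlib import Path


def increment_path(path, exist_ok=False, sep=""):
    """Return path unchanged if free; otherwise path{sep}2, path{sep}3, ..."""
    path = Path(path)
    if path.exists() and not exist_ok:
        for n in range(2, 9999):
            p = f"{path}{sep}{n}"  # candidate incremented path
            if not os.path.exists(p):
                break
        path = Path(p)
    return path


# Demo: with runs/exp already present, the next run directory becomes runs/exp2
tmp = tempfile.mkdtemp()
first = Path(tmp) / "exp"
first.mkdir()
second = increment_path(first)
```

This is the pattern YOLOv5 uses to give each training run a fresh `runs/exp`, `runs/exp2`, … directory without clobbering earlier results.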
spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/conversation/[id]/share/$types.d.ts DELETED
@@ -1,9 +0,0 @@
- import type * as Kit from '@sveltejs/kit';
-
- type Expand<T> = T extends infer O ? { [K in keyof O]: O[K] } : never;
- type RouteParams = { id: string }
- type RouteId = '/conversation/[id]/share';
-
- export type EntryGenerator = () => Promise<Array<RouteParams>> | Array<RouteParams>;
- export type RequestHandler = Kit.RequestHandler<RouteParams, RouteId>;
- export type RequestEvent = Kit.RequestEvent<RouteParams, RouteId>;
spaces/AchyuthGamer/OpenGPT-v1/app.py DELETED
@@ -1,259 +0,0 @@
- from h2o_wave import main, app, Q, ui, data
- from gradio_client import Client
- import ast
-
-
- async def init_ui(q: Q) -> None:
-     q.page['meta'] = ui.meta_card(
-         box='',
-         layouts=[
-             ui.layout(breakpoint='xs', min_height='100vh', zones=[
-                 ui.zone('main', size='1', direction=ui.ZoneDirection.ROW, zones=[
-                     ui.zone('sidebar', size='250px'),
-                     ui.zone('body', direction=ui.ZoneDirection.COLUMN, zones=[
-                         ui.zone('title', size='55px'),
-                         ui.zone('content', size='1'),
-                         ui.zone('footer'),
-                     ]),
-                 ])
-             ])
-         ],
-         title='NeonAI Chat',
-     )
-     q.page['sidebar'] = ui.nav_card(
-         box='sidebar', color='primary', title='OpenGPT v1', subtitle='A Revolt of Gooogle!',
-         value=f"#{q.args['#']}" if q.args['#'] else '#page1',
-         image='https://huggingface.co/spaces/AchyuthGamer/OpenGPT/resolve/main/opengpt-main%3Dlogo.jpg', items=[
-             ui.nav_group('', items=[
-                 ui.nav_item(name='dwave-docs', label='Wave docs', path='https://opengptai.blogspot.com/achyuthgpt/'),
-                 ui.nav_item(name='NeonAI Chat', label='Open GPT', path='https://github.com/achyuth4/NeonAI-Chat'),
-                 ui.nav_item(name='fine-tune', label='LLM Studio', path='https://github.com/achyuth4/NeonAI-LLMstudio'),
-                 ui.nav_item(name='more-models', label='More spaces', path='https://huggingface.co/achyuthgamer'),
-             ]),
-         ],
-         secondary_items=[
-             ui.toggle(name='dark_mode', label='Dark mode', trigger=True),
-             ui.text('<center>Developer - Achyuth Reddy.</center>')
-         ]
-     )
-
-     q.page['chatbot'] = ui.chatbot_card(
-         box=ui.box('content'),
-         data=data('content from_user', t='list'),
-         name='chatbot'
-     )
-     q.page['title'] = ui.section_card(
-         box='title',
-         title='',
-         subtitle='',
-         items=[
-             ui.dropdown(name='model', trigger=True, label='', value='gpt', choices=[
-                 ui.choice(name='gpt', label='Gpt Model'),
-                 ui.choice(name='falcon', label='Falcon Model'),
-                 ui.choice(name='mpt', label='Mpt Model'),
-             ]),
-             ui.button(name='clear', label='Clear', icon='Delete'),
-         ],
-     )
-
-
- """
- :param load_8bit: load model in 8-bit using bitsandbytes
- :param load_4bit: load model in 4-bit using bitsandbytes
- :param load_half: load model in float16
- :param infer_devices: whether to control devices with gpu_id. If False, then spread across GPUs
- :param base_model: model HF-type name. If use --base_model to preload model, cannot unload in gradio in models tab
- :param tokenizer_base_model: tokenizer HF-type name. Usually not required, inferred from base_model.
- :param lora_weights: LORA weights path/HF link
- :param gpu_id: if infer_devices, then use gpu_id for cuda device ID, or auto mode if gpu_id != -1
- :param compile_model: whether to compile the model
- :param use_cache: Whether to use caching in model (some models fail when multiple threads use)
- :param inference_server: Consume base_model as type of model at this address
-     Address can be text-generation-server hosting that base_model
-     e.g. python generate.py --inference_server="http://192.168.1.46:6112" --base_model=h2oai/h2ogpt-oasst1-512-12b
-     Or Address can be "openai_chat" or "openai" for OpenAI API
-     e.g. python generate.py --inference_server="openai_chat" --base_model=gpt-3.5-turbo
-     e.g. python generate.py --inference_server="openai" --base_model=text-davinci-003
- :param prompt_type: type of prompt, usually matched to fine-tuned model or plain for foundational model
- :param prompt_dict: If prompt_type=custom, then expects (some) items returned by get_prompt(..., return_dict=True)
- :param model_lock: Lock models to specific combinations, for ease of use and extending to many models
-     Only used if gradio = True
-     List of dicts, each dict has base_model, tokenizer_base_model, lora_weights, inference_server, prompt_type, and prompt_dict
-     If all models have same prompt_type, and prompt_dict, can still specify that once in CLI outside model_lock as default for dict
-     Can specify model_lock instead of those items on CLI
-     As with CLI itself, base_model can infer prompt_type and prompt_dict if in prompter.py.
-     Also, tokenizer_base_model and lora_weights are optional.
-     Also, inference_server is optional if loading model from local system.
-     All models provided will automatically appear in compare model mode
-     Model loading-unloading and related choices will be disabled. Model/lora/server adding will be disabled
- :param model_lock_columns: How many columns to show if locking models (and so showing all at once)
-     If None, then defaults to up to 3
-     if -1, then all goes into 1 row
-     Maximum value is 4 due to non-dynamic gradio rendering elements
- :param fail_if_cannot_connect: if doing model locking (e.g. with many models), fail if True. Otherwise ignore.
-     Useful when many endpoints and want to just see what works, but still have to wait for timeout.
- :param temperature: generation temperature
- :param top_p: generation top_p
- :param top_k: generation top_k
- :param num_beams: generation number of beams
- :param repetition_penalty: generation repetition penalty
- :param num_return_sequences: generation number of sequences (1 forced for chat)
- :param do_sample: generation sample
- :param max_new_tokens: generation max new tokens
- :param min_new_tokens: generation min tokens
- :param early_stopping: generation early stopping
- :param max_time: maximum time to allow for generation
- :param memory_restriction_level: 0 = no restriction to tokens or model, 1 = some restrictions on token 2 = HF like restriction 3 = very low memory case
- :param debug: enable debug mode
- :param save_dir: directory chat data is saved to
- :param share: whether to share the gradio app with sharable URL
- :param local_files_only: whether to only use local files instead of going to HF for models
- :param resume_download: whether to resume downloads from HF for models
- :param use_auth_token: whether to use HF auth token (requires CLI did huggingface-cli login before)
- :param trust_remote_code: whether to trust any code needed for HF model
- :param offload_folder: path for spilling model onto disk
- :param src_lang: source languages to include if doing translation (None = all)
- :param tgt_lang: target languages to include if doing translation (None = all)
- :param cli: whether to use CLI (non-gradio) interface.
- :param cli_loop: whether to loop for CLI (False usually only for testing)
- :param gradio: whether to enable gradio, or to enable benchmark mode
- :param gradio_offline_level: > 0, then change fonts so full offline
-     == 1 means backend won't need internet for fonts, but front-end UI might if font not cached
-     == 2 means backend and frontend don't need internet to download any fonts.
-     Note: Some things always disabled include HF telemetry, gradio telemetry, chromadb posthog that involve uploading.
-     This option further disables google fonts for downloading, which is less intrusive than uploading,
-     but still required in air-gapped case. The fonts don't look as nice as google fonts, but ensure full offline behavior.
-     Also set --share=False to avoid sharing a gradio live link.
- :param chat: whether to enable chat mode with chat history
- :param chat_context: whether to use extra helpful context if human_bot
- :param stream_output: whether to stream output
- :param show_examples: whether to show clickable examples in gradio
- :param verbose: whether to show verbose prints
- :param h2ocolors: whether to use H2O.ai theme
- :param height: height of chat window
- :param show_lora: whether to show LORA options in UI (expert so can be hard to understand)
- :param login_mode_if_model0: set to True to load --base_model after client logs in, to be able to free GPU memory when model is swapped
- :param block_gradio_exit: whether to block gradio exit (used for testing)
- :param concurrency_count: gradio concurrency count (1 is optimal for LLMs)
- :param api_open: If False, don't let API calls skip gradio queue
- :param allow_api: whether to allow API calls at all to gradio server
- :param input_lines: how many input lines to show for chat box (>1 forces shift-enter for submit, else enter is submit)
- :param gradio_size: Overall size of text and spaces: "xsmall", "small", "medium", "large".
-     Small useful for many chatbots in model_lock mode
- :param auth: gradio auth for launcher in form [(user1, pass1), (user2, pass2), ...]
-     e.g. --auth=[('jon','password')] with no spaces
- :param max_max_time: Maximum max_time for gradio slider
- :param max_max_new_tokens: Maximum max_new_tokens for gradio slider
- :param sanitize_user_prompt: whether to remove profanity from user input (slows down input processing)
- :param sanitize_bot_response: whether to remove profanity and repeat lines from bot output (about 2x slower generation for long streaming cases due to better_profanity being slow)
- :param extra_model_options: extra models to show in list in gradio
- :param extra_lora_options: extra LORA to show in list in gradio
- :param extra_server_options: extra servers to show in list in gradio
- :param score_model: which model to score responses (None means no scoring)
- :param eval_filename: json file to use for evaluation, if None is sharegpt
- :param eval_prompts_only_num: for no gradio benchmark, if using eval_filename prompts for eval instead of examples
- :param eval_prompts_only_seed: for no gradio benchmark, seed for eval_filename sampling
- :param eval_as_output: for no gradio benchmark, whether to test eval_filename output itself
- :param langchain_mode: Data source to include. Choose "UserData" to only consume files from make_db.py.
-     WARNING: wiki_full requires extra data processing via read_wiki_full.py and requires really good workstation to generate db, unless already present.
- :param langchain_action: Mode for langchain operations on documents.
-     Query: Make query of document(s)
-     Summarize or Summarize_map_reduce: Summarize document(s) via map_reduce
-     Summarize_all: Summarize document(s) using entire document at once
-     Summarize_refine: Summarize document(s) using entire document, and try to refine before returning summary
- :param force_langchain_evaluate: Whether to force langchain LLM use even if not doing langchain, mostly for testing.
- :param user_path: user path to glob from to generate db for vector search, for 'UserData' langchain mode.
-     If already have db, any new/changed files are added automatically if path set, does not have to be same path used for prior db sources
- :param detect_user_path_changes_every_query: whether to detect if any files changed or added every similarity search (by file hashes).
-     Expensive for large number of files, so not done by default. By default only detect changes during db loading.
- :param visible_langchain_modes: dbs to generate at launch to be ready for LLM
-     Can be up to ['wiki', 'wiki_full', 'UserData', 'MyData', 'github h2oGPT', 'DriverlessAI docs']
-     But wiki_full is expensive and requires preparation
-     To allow scratch space only live in session, add 'MyData' to list
-     Default: If only want to consume local files, e.g. prepared by make_db.py, only include ['UserData']
-     FIXME: Avoid 'All' for now, not implemented
- :param visible_langchain_actions: Which actions to allow
- :param document_choice: Default document choice when taking subset of collection
- :param load_db_if_exists: Whether to load chroma db if exists or re-generate db
- :param keep_sources_in_context: Whether to keep url sources in context, not helpful usually
- :param db_type: 'faiss' for in-memory or 'chroma' or 'weaviate' for persisted on disk
- :param use_openai_embedding: Whether to use OpenAI embeddings for vector db
- :param use_openai_model: Whether to use OpenAI model for use with vector db
- :param hf_embedding_model: Which HF embedding model to use for vector db
-     Default is instructor-large with 768 parameters per embedding if have GPUs, else all-MiniLM-L6-v1 if no GPUs
-     Can also choose simpler model with 384 parameters per embedding: "sentence-transformers/all-MiniLM-L6-v2"
-     Can also choose even better embedding with 1024 parameters: 'hkunlp/instructor-xl'
-     We support automatically changing of embeddings for chroma, with a backup of db made if this is done
- :param allow_upload_to_user_data: Whether to allow file uploads to update shared vector db
- :param allow_upload_to_my_data: Whether to allow file uploads to update scratch vector db
- :param enable_url_upload: Whether to allow upload from URL
- :param enable_text_upload: Whether to allow upload of text
- :param enable_sources_list: Whether to allow list (or download for non-shared db) of list of sources for chosen db
- :param chunk: Whether to chunk data (True unless know data is already optimally chunked)
- :param chunk_size: Size of chunks, with typically top-4 passed to LLM, so needs to be in context length
- :param top_k_docs: number of chunks to give LLM
- :param reverse_docs: whether to reverse docs order so most relevant is closest to question.
-     Best choice for sufficiently smart model, and truncation occurs for oldest context, so best then too.
-     But smaller 6_9 models fail to use newest context and can get stuck on old information.
- :param auto_reduce_chunks: Whether to automatically reduce top_k_docs to fit context given prompt
- :param max_chunks: If top_k_docs=-1, maximum number of chunks to allow
- :param n_jobs: Number of processors to use when consuming documents (-1 = all, is default)
- :param enable_captions: Whether to support captions using BLIP for image files as documents, then preloads that model
- :param captions_model: Which model to use for captions.
-     captions_model: str = "Salesforce/blip-image-captioning-base",  # continue capable
-     captions_model: str = "Salesforce/blip2-flan-t5-xl",  # question/answer capable, 16GB state
-     captions_model: str = "Salesforce/blip2-flan-t5-xxl",  # question/answer capable, 60GB state
-     Note: opt-based blip2 are not permissive license due to opt and Meta license restrictions
- :param pre_load_caption_model: Whether to preload caption model, or load after forking parallel doc loader
-     parallel loading disabled if preload and have images, to prevent deadlocking on cuda context
-     Recommended if using larger caption model
- :param caption_gpu: If support caption, then use GPU if exists
- :param enable_ocr: Whether to support OCR on images
- :return:
- """
-
-
- @app('/')
- async def serve(q: Q):
-     if not q.client.initialized:
-         await init_ui(q)
-         q.client.model_client = Client('https://gpt.h2o.ai/')
-         q.client.initialized = True
-
-     # A new message arrived.
-     if q.args.chatbot:
-         # Append user message.
-         q.page['chatbot'].data += [q.args.chatbot, True]
-         # Append bot response.
-         kwargs = dict(instruction_nochat=q.args.chatbot)
-         try:
-             res = q.client.model_client.predict(str(dict(kwargs)), api_name='/submit_nochat_api')
-             bot_res = ast.literal_eval(res)['response']
-             q.page['chatbot'].data += [bot_res, False]
-         except Exception:
-             q.page['meta'] = ui.meta_card(box='', notification_bar=ui.notification_bar(
-                 text='An error occurred during prediction. Please try later or a different model.',
-                 type='error',
-             ))
-     elif q.args.clear:
-         # Recreate the card.
-         q.page['chatbot'] = ui.chatbot_card(
-             box=ui.box('content'),
-             data=data('content from_user', t='list'),
-             name='chatbot'
-         )
-     elif q.args.dark_mode is not None:
-         q.page['meta'].theme = 'achyuthgpt-dark' if q.args.dark_mode else 'light'
-         q.page['sidebar'].color = 'card' if q.args.dark_mode else 'primary'
-     elif q.args.model:
-         try:
-             q.client.model_client = Client(f'https://{q.args.model}.h2o.ai/')
-             q.page['meta'] = ui.meta_card(box='', notification_bar=ui.notification_bar(
-                 text='Model changed successfully.',
-                 type='success',
-             ))
-         except Exception:
-             q.page['meta'] = ui.meta_card(box='', notification_bar=ui.notification_bar(
-                 text='An error occurred while changing the model. Please try a different one.',
-                 type='error',
-             ))
-
-     await q.page.save()
spaces/Adapting/YouTube-Downloader/tube/utils.py DELETED
@@ -1,36 +0,0 @@
- import shutil
- import streamlit as st
- from pathlib import Path
- from .var import OUTPUT_DIR
-
-
- def compress_folder_2_zip(output_filename: str, dir_name: str):
-     path = Path(output_filename + '.zip')
-     if path.exists():
-         return
-
-     prompt = st.info('Start compressing...')
-     with st.spinner("Compressing"):
-         shutil.make_archive(output_filename.replace('.zip', ''), 'zip', dir_name)
-     prompt.empty()
-
-
- def remove_dir_rec(pth):
-     pth = Path(pth)
-     if pth.exists():
-         for child in pth.glob('*'):
-             if child.is_file():
-                 child.unlink()
-             else:
-                 remove_dir_rec(child)
-         pth.rmdir()
-
-
- def clear_cache(dir_name: str = OUTPUT_DIR):
-     remove_dir_rec(dir_name)
-
-
- if __name__ == '__main__':
-     compress_folder_2_zip('test', dir_name='../downloads')
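The recursive delete in the removed `tube/utils.py` can be reproduced standalone. Note that it walks `glob('*')`, which skips dotfiles, one reason `shutil.rmtree` is the usual stdlib choice; this sketch mirrors the deleted helper as written:

```python
import tempfile
from pathlib import Path


def remove_dir_rec(pth):
    """Recursively delete a directory tree, like the deleted tube/utils.py helper.

    Files are unlinked, subdirectories recursed into, then the now-empty
    directory itself is removed with rmdir().
    """
    pth = Path(pth)
    if pth.exists():
        for child in pth.glob('*'):
            if child.is_file():
                child.unlink()
            else:
                remove_dir_rec(child)
        pth.rmdir()


# Demo: build a small nested tree, then remove it
root = Path(tempfile.mkdtemp()) / "cache"
(root / "sub").mkdir(parents=True)
(root / "sub" / "a.txt").write_text("x")
remove_dir_rec(root)
```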
spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/visibility/__init__.py DELETED
@@ -1,13 +0,0 @@
- from typing import Dict
-
- from agentverse.registry import Registry
-
- visibility_registry = Registry(name="VisibilityRegistry")
-
- from .base import BaseVisibility
- from .all import AllVisibility
- from .classroom import ClassroomVisibility
- from .oneself import OneselfVisibility
- from .prisoner import PrisonerVisibility
- from .sde_team import SdeTeamVisibility
- from .pokemon import PokemonVisibility
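The removed `__init__.py` creates a `visibility_registry` and then imports each visibility class so its registration side effects run. A hypothetical minimal `Registry` (the real `agentverse.registry.Registry` may differ in API) illustrates the decorator-registration pattern such modules rely on:

```python
class Registry:
    """Hypothetical minimal name->class registry for illustration only."""

    def __init__(self, name):
        self.name = name
        self.entries = {}

    def register(self, key):
        # Decorator: store the class under `key`, return it unchanged
        def decorator(cls):
            self.entries[key] = cls
            return cls
        return decorator

    def build(self, key, **kwargs):
        # Instantiate the registered class by name
        return self.entries[key](**kwargs)


visibility_registry = Registry(name="VisibilityRegistry")


@visibility_registry.register("all")
class AllVisibility:
    def __init__(self, scope="all"):
        self.scope = scope
```

Importing a module that applies `@visibility_registry.register(...)` is what populates the registry, which is why the deleted file imports every visibility class after creating it.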
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/ball/Factory.d.ts DELETED
@@ -1,6 +0,0 @@
- import Ball from './Ball';
- import Base from '../base/Base';
-
- export default function Factory(
-     config?: Base.IConfig
- ): Ball;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dropdownlist/Factory.d.ts DELETED
@@ -1,5 +0,0 @@
- import DropDownList from './DropDownList';
-
- export default function (
-     config?: DropDownList.IConfig
- ): DropDownList;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/simpledropdownlist/Factory.d.ts DELETED
@@ -1,6 +0,0 @@
- import SimpleDropDownList from './SimpleDropDownList';
-
- export default function (
-     config?: SimpleDropDownList.IConfig,
-     creators?: SimpleDropDownList.ICreatorsConfig,
- ): SimpleDropDownList;
spaces/AiBototicus/BucksAI-4/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: BucksAI 4
- emoji: 👀
- colorFrom: red
- colorTo: red
- sdk: gradio
- sdk_version: 3.24.1
- app_file: app.py
- pinned: false
- license: openrail
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AlanMars/QYL-AI-Space/modules/models/modeling_moss.py DELETED
@@ -1,711 +0,0 @@
1
- """ PyTorch Moss model."""
2
-
3
- from typing import Optional, Tuple, Union
4
-
5
- import torch
6
- import torch.utils.checkpoint
7
- from torch import nn
8
- from torch.nn import CrossEntropyLoss
9
-
10
- from transformers.activations import ACT2FN
11
- from transformers.modeling_utils import PreTrainedModel
12
- from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast
13
- from transformers.utils import (
14
- add_code_sample_docstrings,
15
- add_start_docstrings,
16
- add_start_docstrings_to_model_forward,
17
- logging
18
- )
19
-
20
- from .configuration_moss import MossConfig
21
-
22
-
23
- logger = logging.get_logger(__name__)
24
-
25
- _CHECKPOINT_FOR_DOC = "fnlp/moss-moon-003-base"
26
- _CONFIG_FOR_DOC = "MossConfig"
27
-
28
-
29
- MOSS_PRETRAINED_MODEL_ARCHIVE_LIST = [
30
- "fnlp/moss-moon-003-base",
31
- "fnlp/moss-moon-003-sft",
32
- "fnlp/moss-moon-003-sft-plugin",
33
- ]
34
-
35
-
36
- # Copied from transformers.models.gptj.modeling_gptj.create_sinusoidal_positions
37
- def create_sinusoidal_positions(num_pos: int, dim: int) -> torch.Tensor:
38
- inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2) / dim))
39
- sinusoid_inp = torch.einsum("i , j -> i j", torch.arange(num_pos, dtype=torch.float), inv_freq).float()
40
- return torch.cat((torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)), dim=1)
41
-
42
-
43
- # Copied from transformers.models.gptj.modeling_gptj.rotate_every_two
44
- def rotate_every_two(x: torch.Tensor) -> torch.Tensor:
45
- x1 = x[:, :, :, ::2]
46
- x2 = x[:, :, :, 1::2]
47
- x = torch.stack((-x2, x1), dim=-1)
48
- return x.flatten(-2) # in einsum notation: rearrange(x, '... d j -> ... (d j)')
49
-
50
-
51
- # Copied from transformers.models.gptj.modeling_gptj.apply_rotary_pos_emb
52
- def apply_rotary_pos_emb(tensor: torch.Tensor, sin: torch.Tensor, cos: torch.Tensor) -> torch.Tensor:
53
- sin = torch.repeat_interleave(sin[:, :, None, :], 2, 3)
54
- cos = torch.repeat_interleave(cos[:, :, None, :], 2, 3)
55
- return (tensor * cos) + (rotate_every_two(tensor) * sin)
56
-
57
-
58
- class MossAttention(nn.Module):
59
- def __init__(self, config):
60
- super().__init__()
61
-
62
- max_positions = config.max_position_embeddings
63
- self.register_buffer(
64
- "causal_mask",
65
- torch.tril(torch.ones((max_positions, max_positions), dtype=torch.bool)).view(
66
- 1, 1, max_positions, max_positions
67
- ),
68
- )
69
-
70
- self.attn_dropout = nn.Dropout(config.attn_pdrop)
71
- self.resid_dropout = nn.Dropout(config.resid_pdrop)
72
-
73
- self.embed_dim = config.hidden_size
74
- self.num_attention_heads = config.num_attention_heads
75
- self.head_dim = self.embed_dim // self.num_attention_heads
76
- if self.head_dim * self.num_attention_heads != self.embed_dim:
77
- raise ValueError(
78
- f"embed_dim must be divisible by num_attention_heads (got `embed_dim`: {self.embed_dim} and"
79
- f" `num_attention_heads`: {self.num_attention_heads})."
80
- )
81
- self.scale_attn = torch.sqrt(torch.tensor(self.head_dim, dtype=torch.float32)).to(torch.get_default_dtype())
82
- self.qkv_proj = nn.Linear(self.embed_dim, self.embed_dim * 3, bias=False)
83
-
84
- self.out_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False)
85
- self.rotary_dim = config.rotary_dim
86
- pos_embd_dim = self.rotary_dim or self.embed_dim
87
- self.embed_positions = create_sinusoidal_positions(max_positions, pos_embd_dim)
88
-
89
- def _split_heads(self, x, n_head, dim_head, mp_num):
90
- reshaped = x.reshape(x.shape[:-1] + (n_head // mp_num, dim_head))
91
- reshaped = reshaped.reshape(x.shape[:-2] + (-1,) + reshaped.shape[-1:])
92
- return reshaped
93
-
94
- def _merge_heads(self, tensor, num_attention_heads, attn_head_size):
95
- """
96
- Merges attn_head_size dim and num_attn_heads dim into n_ctx
97
- """
98
- if len(tensor.shape) == 5:
99
- tensor = tensor.permute(0, 1, 3, 2, 4).contiguous()
100
- elif len(tensor.shape) == 4:
101
- tensor = tensor.permute(0, 2, 1, 3).contiguous()
102
- else:
103
- raise ValueError(f"Input tensor rank should be one of [4, 5], but is: {len(tensor.shape)}")
104
- new_shape = tensor.size()[:-2] + (num_attention_heads * attn_head_size,)
105
- return tensor.view(new_shape)
106
-
107
- def _attn(
108
- self,
109
- query,
110
- key,
111
- value,
112
- attention_mask=None,
113
- head_mask=None,
114
- ):
115
- # compute causal mask from causal mask buffer
116
- query_length, key_length = query.size(-2), key.size(-2)
117
- causal_mask = self.causal_mask[:, :, key_length - query_length : key_length, :key_length]
118
-
119
- # Keep the attention weights computation in fp32 to avoid overflow issues
120
- query = query.to(torch.float32)
121
- key = key.to(torch.float32)
122
-
123
- attn_weights = torch.matmul(query, key.transpose(-1, -2))
124
-
125
- attn_weights = attn_weights / self.scale_attn
126
- mask_value = torch.finfo(attn_weights.dtype).min
127
- # Need to be a tensor, otherwise we get error: `RuntimeError: expected scalar type float but found double`.
128
- # Need to be on the same device, otherwise `RuntimeError: ..., x and y to be on the same device`
129
- mask_value = torch.tensor(mask_value, dtype=attn_weights.dtype).to(attn_weights.device)
130
- attn_weights = torch.where(causal_mask, attn_weights, mask_value)
131
-
132
- if attention_mask is not None:
133
- # Apply the attention mask
134
- attn_weights = attn_weights + attention_mask
135
-
136
- attn_weights = nn.Softmax(dim=-1)(attn_weights)
137
- attn_weights = attn_weights.to(value.dtype)
138
- attn_weights = self.attn_dropout(attn_weights)
139
-
140
- # Mask heads if we want to
141
- if head_mask is not None:
142
- attn_weights = attn_weights * head_mask
143
-
144
- attn_output = torch.matmul(attn_weights, value)
145
-
146
- return attn_output, attn_weights
147
-
148
- def forward(
149
- self,
150
- hidden_states: Optional[torch.FloatTensor],
151
- layer_past: Optional[Tuple[torch.Tensor]] = None,
152
- attention_mask: Optional[torch.FloatTensor] = None,
153
- position_ids: Optional[torch.LongTensor] = None,
154
- head_mask: Optional[torch.FloatTensor] = None,
155
- use_cache: Optional[bool] = False,
156
- output_attentions: Optional[bool] = False,
157
- ) -> Union[
158
- Tuple[torch.Tensor, Tuple[torch.Tensor]],
159
- Optional[Tuple[torch.Tensor, Tuple[torch.Tensor], Tuple[torch.Tensor, ...]]],
160
- ]:
161
- qkv = self.qkv_proj(hidden_states)
162
- # TODO(enijkamp): factor out number of logical TPU-v4 cores or make forward pass agnostic
163
- mp_num = 4
164
- qkv_split = qkv.reshape(qkv.shape[:-1] + (mp_num, -1))
165
-
166
- local_dim = self.head_dim * self.num_attention_heads // mp_num
167
- query, value, key = torch.split(qkv_split, local_dim, dim=-1)
168
- query = self._split_heads(query, self.num_attention_heads, self.head_dim, mp_num=mp_num)
169
- key = self._split_heads(key, self.num_attention_heads, self.head_dim, mp_num=mp_num)
170
-
171
- value = self._split_heads(value, self.num_attention_heads, self.head_dim, mp_num=mp_num)
172
- value = value.permute(0, 2, 1, 3)
173
-
174
- embed_positions = self.embed_positions
175
- if embed_positions.device != position_ids.device:
176
- embed_positions = embed_positions.to(position_ids.device)
177
- self.embed_positions = embed_positions
178
-
179
- sincos = embed_positions[position_ids]
180
- sin, cos = torch.split(sincos, sincos.shape[-1] // 2, dim=-1)
181
-
182
- if self.rotary_dim is not None:
183
- k_rot = key[:, :, :, : self.rotary_dim]
184
- k_pass = key[:, :, :, self.rotary_dim :]
185
-
186
- q_rot = query[:, :, :, : self.rotary_dim]
187
- q_pass = query[:, :, :, self.rotary_dim :]
188
-
189
- k_rot = apply_rotary_pos_emb(k_rot, sin, cos)
190
- q_rot = apply_rotary_pos_emb(q_rot, sin, cos)
191
-
192
- key = torch.cat([k_rot, k_pass], dim=-1)
193
- query = torch.cat([q_rot, q_pass], dim=-1)
194
- else:
195
- key = apply_rotary_pos_emb(key, sin, cos)
196
- query = apply_rotary_pos_emb(query, sin, cos)
197
-
198
- key = key.permute(0, 2, 1, 3)
199
- query = query.permute(0, 2, 1, 3)
200
-
201
- if layer_past is not None:
202
- past_key = layer_past[0]
203
- past_value = layer_past[1]
204
- key = torch.cat((past_key, key), dim=-2)
205
- value = torch.cat((past_value, value), dim=-2)
206
-
207
- if use_cache is True:
208
- present = (key, value)
209
- else:
210
- present = None
211
-
212
- # compute self-attention: V x Softmax(QK^T)
213
- attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
214
-
215
- attn_output = self._merge_heads(attn_output, self.num_attention_heads, self.head_dim)
216
- attn_output = self.out_proj(attn_output)
217
- attn_output = self.resid_dropout(attn_output)
218
-
219
- outputs = (attn_output, present)
220
- if output_attentions:
221
- outputs += (attn_weights,)
222
-
223
- return outputs # a, present, (attentions)
224
-
225
-
226
- # Copied from transformers.models.gptj.modeling_gptj.GPTJMLP with GPTJ->Moss
227
- class MossMLP(nn.Module):
228
- def __init__(self, intermediate_size, config): # in MLP: intermediate_size= 4 * embed_dim
229
- super().__init__()
230
- embed_dim = config.n_embd
231
-
232
- self.fc_in = nn.Linear(embed_dim, intermediate_size)
233
- self.fc_out = nn.Linear(intermediate_size, embed_dim)
234
-
235
- self.act = ACT2FN[config.activation_function]
236
- self.dropout = nn.Dropout(config.resid_pdrop)
237
-
238
- def forward(self, hidden_states: Optional[torch.FloatTensor]) -> torch.FloatTensor:
239
- hidden_states = self.fc_in(hidden_states)
240
- hidden_states = self.act(hidden_states)
241
- hidden_states = self.fc_out(hidden_states)
242
- hidden_states = self.dropout(hidden_states)
243
- return hidden_states
244
-
245
-
246
- # Copied from transformers.models.gptj.modeling_gptj.GPTJBlock with GPTJ->Moss
247
- class MossBlock(nn.Module):
248
- def __init__(self, config):
249
- super().__init__()
250
- inner_dim = config.n_inner if config.n_inner is not None else 4 * config.n_embd
251
- self.ln_1 = nn.LayerNorm(config.n_embd, eps=config.layer_norm_epsilon)
252
- self.attn = MossAttention(config)
253
- self.mlp = MossMLP(inner_dim, config)
254
-
255
- def forward(
256
- self,
257
- hidden_states: Optional[torch.FloatTensor],
258
- layer_past: Optional[Tuple[torch.Tensor]] = None,
259
- attention_mask: Optional[torch.FloatTensor] = None,
260
- position_ids: Optional[torch.LongTensor] = None,
261
- head_mask: Optional[torch.FloatTensor] = None,
262
- use_cache: Optional[bool] = False,
263
- output_attentions: Optional[bool] = False,
264
- ) -> Union[Tuple[torch.Tensor], Optional[Tuple[torch.Tensor, Tuple[torch.FloatTensor, ...]]]]:
265
- residual = hidden_states
266
- hidden_states = self.ln_1(hidden_states)
267
- attn_outputs = self.attn(
268
- hidden_states=hidden_states,
269
- layer_past=layer_past,
270
- attention_mask=attention_mask,
271
- position_ids=position_ids,
272
- head_mask=head_mask,
273
- use_cache=use_cache,
274
- output_attentions=output_attentions,
275
- )
276
- attn_output = attn_outputs[0] # output_attn: a, present, (attentions)
277
- outputs = attn_outputs[1:]
278
-
279
- feed_forward_hidden_states = self.mlp(hidden_states)
280
- hidden_states = attn_output + feed_forward_hidden_states + residual
281
-
282
- if use_cache:
283
- outputs = (hidden_states,) + outputs
284
- else:
285
- outputs = (hidden_states,) + outputs[1:]
286
-
287
- return outputs # hidden_states, present, (attentions)
288
-
289
-
290
- class MossPreTrainedModel(PreTrainedModel):
291
- """
292
- An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
293
- models.
294
- """
295
-
296
- config_class = MossConfig
297
- base_model_prefix = "transformer"
298
- supports_gradient_checkpointing = True
299
- _no_split_modules = ["MossBlock"]
300
-
301
- def __init__(self, *inputs, **kwargs):
302
- super().__init__(*inputs, **kwargs)
303
-
304
- def _init_weights(self, module):
305
- """Initialize the weights."""
306
- if isinstance(module, (nn.Linear,)):
307
- # Slightly different from Mesh Transformer JAX which uses truncated_normal for initialization
308
- # cf https://github.com/pytorch/pytorch/pull/5617
309
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
310
- if module.bias is not None:
311
- module.bias.data.zero_()
312
- elif isinstance(module, nn.Embedding):
313
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
314
- if module.padding_idx is not None:
315
- module.weight.data[module.padding_idx].zero_()
316
- elif isinstance(module, nn.LayerNorm):
317
- module.bias.data.zero_()
318
- module.weight.data.fill_(1.0)
319
-
320
- def _set_gradient_checkpointing(self, module, value=False):
321
- if isinstance(module, MossModel):
322
- module.gradient_checkpointing = value
323
-
324
-
325
- MOSS_START_DOCSTRING = r"""
326
- This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use
327
- it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
328
- behavior.
329
-
330
- Parameters:
331
- config ([`MossConfig`]): Model configuration class with all the parameters of the model.
332
- Initializing with a config file does not load the weights associated with the model, only the
333
- configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
334
- """
335
-
336
- MOSS_INPUTS_DOCSTRING = r"""
337
- Args:
338
- input_ids (`torch.LongTensor` of shape `({0})`):
339
- Indices of input sequence tokens in the vocabulary.
340
-
341
- Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
342
- [`PreTrainedTokenizer.__call__`] for details.
343
-
344
- [What are input IDs?](../glossary#input-ids)
345
- attention_mask (`torch.FloatTensor` of shape `({0})`, *optional*):
346
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
347
-
348
- - 1 for tokens that are **not masked**,
349
- - 0 for tokens that are **masked**.
350
-
351
- [What are attention masks?](../glossary#attention-mask)
352
- token_type_ids (`torch.LongTensor` of shape `({0})`, *optional*):
353
- Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0,
354
- 1]`:
355
-
356
- - 0 corresponds to a *sentence A* token,
357
- - 1 corresponds to a *sentence B* token.
358
-
359
- [What are token type IDs?](../glossary#token-type-ids)
360
- position_ids (`torch.LongTensor` of shape `({0})`, *optional*):
361
- Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
362
- config.n_positions - 1]`.
363
-
364
- [What are position IDs?](../glossary#position-ids)
365
- head_mask (`torch.FloatTensor` of shape `(num_attention_heads,)` or `(n_layer, num_attention_heads)`, *optional*):
366
- Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
367
-
368
- - 1 indicates the head is **not masked**,
369
- - 0 indicates the head is **masked**.
370
-
371
- inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_dim)`, *optional*):
372
- Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
373
- is useful if you want more control over how to convert *input_ids* indices into associated vectors than the
374
- model's internal embedding lookup matrix.
375
- output_attentions (`bool`, *optional*):
376
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
377
- tensors for more detail.
378
- output_hidden_states (`bool`, *optional*):
379
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
380
- more detail.
381
- return_dict (`bool`, *optional*):
382
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
383
- """
384
-
385
-
386
- @add_start_docstrings(
387
- "The bare Moss Model transformer outputting raw hidden-states without any specific head on top.",
388
- MOSS_START_DOCSTRING,
389
- )
390
- class MossModel(MossPreTrainedModel):
391
- def __init__(self, config):
392
- super().__init__(config)
393
-
394
- self.embed_dim = config.n_embd
395
- self.vocab_size = config.vocab_size
396
- self.wte = nn.Embedding(config.vocab_size, self.embed_dim)
397
- self.drop = nn.Dropout(config.embd_pdrop)
398
- self.h = nn.ModuleList([MossBlock(config) for _ in range(config.n_layer)])
399
- self.ln_f = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_epsilon)
400
- self.rotary_dim = min(config.rotary_dim, config.n_ctx // config.num_attention_heads)
401
-
402
- self.gradient_checkpointing = False
403
-
404
- # Initialize weights and apply final processing
405
- self.post_init()
406
-
407
- def get_input_embeddings(self):
408
- return self.wte
409
-
410
- def set_input_embeddings(self, new_embeddings):
411
- self.wte = new_embeddings
412
-
413
- @add_start_docstrings_to_model_forward(MOSS_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
414
- @add_code_sample_docstrings(
415
- checkpoint=_CHECKPOINT_FOR_DOC,
416
- output_type=BaseModelOutputWithPast,
417
- config_class=_CONFIG_FOR_DOC,
418
- )
419
- def forward(
420
- self,
421
- input_ids: Optional[torch.LongTensor] = None,
422
- past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
423
- attention_mask: Optional[torch.FloatTensor] = None,
424
- token_type_ids: Optional[torch.LongTensor] = None,
425
- position_ids: Optional[torch.LongTensor] = None,
426
- head_mask: Optional[torch.FloatTensor] = None,
427
- inputs_embeds: Optional[torch.FloatTensor] = None,
428
- use_cache: Optional[bool] = None,
429
- output_attentions: Optional[bool] = None,
430
- output_hidden_states: Optional[bool] = None,
431
- return_dict: Optional[bool] = None,
432
- ) -> Union[Tuple, BaseModelOutputWithPast]:
433
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
434
- output_hidden_states = (
435
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
436
- )
437
- use_cache = use_cache if use_cache is not None else self.config.use_cache
438
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
439
-
440
- if input_ids is not None and inputs_embeds is not None:
441
- raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
442
- elif input_ids is not None:
443
- input_shape = input_ids.size()
444
- input_ids = input_ids.view(-1, input_shape[-1])
445
- batch_size = input_ids.shape[0]
446
- elif inputs_embeds is not None:
447
- input_shape = inputs_embeds.size()[:-1]
448
- batch_size = inputs_embeds.shape[0]
449
- else:
450
- raise ValueError("You have to specify either input_ids or inputs_embeds")
451
-
452
- device = input_ids.device if input_ids is not None else inputs_embeds.device
453
-
454
- if token_type_ids is not None:
455
- token_type_ids = token_type_ids.view(-1, input_shape[-1])
456
-
457
- if position_ids is not None:
458
- position_ids = position_ids.view(-1, input_shape[-1]).long()
459
-
460
- if past_key_values is None:
461
- past_length = 0
462
- past_key_values = tuple([None] * len(self.h))
463
- else:
464
- past_length = past_key_values[0][0].size(-2)
465
-
466
- if position_ids is None:
467
- position_ids = torch.arange(past_length, input_shape[-1] + past_length, dtype=torch.long, device=device)
468
- position_ids = position_ids.unsqueeze(0).view(-1, input_shape[-1])
469
-
470
- # Attention mask.
471
- if attention_mask is not None:
472
- if batch_size <= 0:
473
- raise ValueError("batch_size has to be defined and > 0")
474
- attention_mask = attention_mask.view(batch_size, -1)
475
- # We create a 3D attention mask from a 2D tensor mask.
476
- # Sizes are [batch_size, 1, 1, to_seq_length]
477
- # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length]
478
- # this attention mask is more simple than the triangular masking of causal attention
479
- # used in OpenAI GPT, we just need to prepare the broadcast dimension here.
480
- attention_mask = attention_mask[:, None, None, :]
481
-
482
- # Since attention_mask is 1.0 for positions we want to attend and 0.0 for
483
- # masked positions, this operation will create a tensor which is 0.0 for
484
- # positions we want to attend and the dtype's smallest value for masked positions.
485
- # Since we are adding it to the raw scores before the softmax, this is
486
- # effectively the same as removing these entirely.
487
- attention_mask = attention_mask.to(dtype=self.dtype) # fp16 compatibility
488
- attention_mask = (1.0 - attention_mask) * torch.finfo(self.dtype).min
489
-
490
- # Prepare head mask if needed
491
- # 1.0 in head_mask indicate we keep the head
492
- # attention_probs has shape bsz x num_attention_heads x N x N
493
- # head_mask has shape n_layer x batch x num_attention_heads x N x N
494
- head_mask = self.get_head_mask(head_mask, self.config.n_layer)
495
-
496
- if inputs_embeds is None:
497
- inputs_embeds = self.wte(input_ids)
498
-
499
- hidden_states = inputs_embeds
500
-
501
- if token_type_ids is not None:
502
- token_type_embeds = self.wte(token_type_ids)
503
- hidden_states = hidden_states + token_type_embeds
504
-
505
- hidden_states = self.drop(hidden_states)
506
-
507
- output_shape = input_shape + (hidden_states.size(-1),)
508
-
509
- if self.gradient_checkpointing and self.training:
510
- if use_cache:
511
- logger.warning_once(
512
- "`use_cache=True` is incompatible with `config.gradient_checkpointing=True`. Setting "
513
- "`use_cache=False`..."
514
- )
515
- use_cache = False
516
-
517
- presents = () if use_cache else None
518
- all_self_attentions = () if output_attentions else None
519
- all_hidden_states = () if output_hidden_states else None
520
- for i, (block, layer_past) in enumerate(zip(self.h, past_key_values)):
521
- if output_hidden_states:
522
- all_hidden_states = all_hidden_states + (hidden_states,)
523
-
524
- if self.gradient_checkpointing and self.training:
525
-
526
- def create_custom_forward(module):
527
- def custom_forward(*inputs):
528
- # None for past_key_value
529
- return module(*inputs, use_cache, output_attentions)
530
-
531
- return custom_forward
532
-
533
- outputs = torch.utils.checkpoint.checkpoint(
534
- create_custom_forward(block),
535
- hidden_states,
536
- None,
537
- attention_mask,
538
- position_ids,
539
- head_mask[i],
540
- )
541
- else:
542
- outputs = block(
543
- hidden_states=hidden_states,
544
- layer_past=layer_past,
545
- attention_mask=attention_mask,
546
- position_ids=position_ids,
547
- head_mask=head_mask[i],
548
- use_cache=use_cache,
549
- output_attentions=output_attentions,
550
- )
551
-
552
- hidden_states = outputs[0]
553
- if use_cache is True:
554
- presents = presents + (outputs[1],)
555
-
556
- if output_attentions:
557
- all_self_attentions = all_self_attentions + (outputs[2 if use_cache else 1],)
558
-
559
- hidden_states = self.ln_f(hidden_states)
560
-
561
- hidden_states = hidden_states.view(output_shape)
562
- # Add last hidden state
563
- if output_hidden_states:
564
- all_hidden_states = all_hidden_states + (hidden_states,)
565
-
566
- if not return_dict:
567
- return tuple(v for v in [hidden_states, presents, all_hidden_states, all_self_attentions] if v is not None)
568
-
569
- return BaseModelOutputWithPast(
570
- last_hidden_state=hidden_states,
571
- past_key_values=presents,
572
- hidden_states=all_hidden_states,
573
- attentions=all_self_attentions,
574
- )
575
-
576
-
577
- @add_start_docstrings(
578
- """
579
- The Moss Model transformer with a language modeling head on top.
580
- """,
581
- MOSS_START_DOCSTRING,
582
- )
583
- class MossForCausalLM(MossPreTrainedModel):
584
- _keys_to_ignore_on_load_missing = [r"h\.\d+\.attn\.causal_mask"]
585
-
586
- def __init__(self, config):
587
- super().__init__(config)
588
- self.transformer = MossModel(config)
589
- self.lm_head = nn.Linear(config.n_embd, config.vocab_size)
590
-
591
- # Initialize weights and apply final processing
592
- self.post_init()
593
-
594
- def get_output_embeddings(self):
595
- return self.lm_head
596
-
597
- def set_output_embeddings(self, new_embeddings):
598
- self.lm_head = new_embeddings
599
-
600
- def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs):
601
- token_type_ids = kwargs.get("token_type_ids", None)
602
- # only last token for inputs_ids if past is defined in kwargs
603
- if past_key_values:
604
- input_ids = input_ids[:, -1].unsqueeze(-1)
605
- if token_type_ids is not None:
606
- token_type_ids = token_type_ids[:, -1].unsqueeze(-1)
607
-
608
- attention_mask = kwargs.get("attention_mask", None)
609
- position_ids = kwargs.get("position_ids", None)
610
-
611
- if attention_mask is not None and position_ids is None:
612
- # create position_ids on the fly for batch generation
613
- position_ids = attention_mask.long().cumsum(-1) - 1
614
- position_ids.masked_fill_(attention_mask == 0, 1)
615
- if past_key_values:
616
- position_ids = position_ids[:, -1].unsqueeze(-1)
617
-
618
- return {
619
- "input_ids": input_ids,
620
- "past_key_values": past_key_values,
621
- "use_cache": kwargs.get("use_cache"),
622
- "position_ids": position_ids,
623
- "attention_mask": attention_mask,
624
- "token_type_ids": token_type_ids,
625
- }
626
-
627
- @add_start_docstrings_to_model_forward(MOSS_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
628
- @add_code_sample_docstrings(
629
- checkpoint=_CHECKPOINT_FOR_DOC,
630
- output_type=CausalLMOutputWithPast,
631
- config_class=_CONFIG_FOR_DOC,
632
- )
633
- def forward(
634
- self,
635
- input_ids: Optional[torch.LongTensor] = None,
636
- past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
637
- attention_mask: Optional[torch.FloatTensor] = None,
638
- token_type_ids: Optional[torch.LongTensor] = None,
639
- position_ids: Optional[torch.LongTensor] = None,
640
- head_mask: Optional[torch.FloatTensor] = None,
641
- inputs_embeds: Optional[torch.FloatTensor] = None,
642
- labels: Optional[torch.LongTensor] = None,
643
- use_cache: Optional[bool] = None,
644
- output_attentions: Optional[bool] = None,
645
- output_hidden_states: Optional[bool] = None,
646
- return_dict: Optional[bool] = None,
647
- ) -> Union[Tuple, CausalLMOutputWithPast]:
648
- r"""
649
- labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
650
- Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
651
- `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100`
652
- are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]`
653
- """
654
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
655
-
656
- transformer_outputs = self.transformer(
657
- input_ids,
658
- past_key_values=past_key_values,
659
- attention_mask=attention_mask,
660
- token_type_ids=token_type_ids,
661
- position_ids=position_ids,
662
- head_mask=head_mask,
663
- inputs_embeds=inputs_embeds,
664
- use_cache=use_cache,
665
- output_attentions=output_attentions,
666
- output_hidden_states=output_hidden_states,
667
- return_dict=return_dict,
668
- )
669
- hidden_states = transformer_outputs[0]
670
-
671
- # make sure sampling in fp16 works correctly and
672
- # compute loss in fp32 to match with mesh-tf version
673
- # https://github.com/EleutherAI/gpt-neo/blob/89ce74164da2fb16179106f54e2269b5da8db333/models/gpt2/gpt2.py#L179
674
- lm_logits = self.lm_head(hidden_states).to(torch.float32)
675
-
676
- loss = None
677
- if labels is not None:
678
- # Shift so that tokens < n predict n
679
- shift_logits = lm_logits[..., :-1, :].contiguous()
680
- shift_labels = labels[..., 1:].contiguous()
681
- # Flatten the tokens
682
- loss_fct = CrossEntropyLoss()
683
- loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
684
-
685
- loss = loss.to(hidden_states.dtype)
686
-
687
- if not return_dict:
688
- output = (lm_logits,) + transformer_outputs[1:]
689
- return ((loss,) + output) if loss is not None else output
690
-
691
- return CausalLMOutputWithPast(
692
- loss=loss,
693
- logits=lm_logits,
694
- past_key_values=transformer_outputs.past_key_values,
695
- hidden_states=transformer_outputs.hidden_states,
696
- attentions=transformer_outputs.attentions,
697
- )
698
-
699
- @staticmethod
700
- def _reorder_cache(
701
- past_key_values: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor
702
- ) -> Tuple[Tuple[torch.Tensor]]:
703
- """
704
- This function is used to re-order the `past_key_values` cache if [`~PretrainedModel.beam_search`] or
705
- [`~PretrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct
706
- beam_idx at every generation step.
707
- """
708
- return tuple(
709
- tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past)
710
- for layer_past in past_key_values
711
- )
spaces/AlexWang/lama/saicinpainting/evaluation/losses/__init__.py DELETED
File without changes
spaces/Alfasign/fdvdv/app.py DELETED
- import requests
- 
- response = requests.post(
-     'https://api.v6.unrealspeech.com/stream',
-     headers={'Authorization': 'Bearer VqUmMUjnSPfuxttMk4SjWGVR9fbdVLBSwXxpWUq9iwDWYRQDhGQxfQ'},
-     json={'Text': '''<YOUR_TEXT>''', 'VoiceId': '<VOICE_ID>', 'Bitrate': '128k'},
- )
- with open('audio.mp3', 'wb') as f:
-     f.write(response.content)
- 
- import gradio as gr
- 
- 
- def greet(name):
-     return "Hello " + name + "!!"
- 
- 
- iface = gr.Interface(fn=greet, inputs="text", outputs="text")
- iface.launch()
spaces/Alpaca233/SadTalker/src/face3d/models/__init__.py DELETED
@@ -1,67 +0,0 @@
- """This package contains modules related to objective functions, optimizations, and network architectures.
- 
- To add a custom model class called 'dummy', you need to add a file called 'dummy_model.py' and define a subclass DummyModel inherited from BaseModel.
- You need to implement the following five functions:
-     -- <__init__>: initialize the class; first call BaseModel.__init__(self, opt).
-     -- <set_input>: unpack data from dataset and apply preprocessing.
-     -- <forward>: produce intermediate results.
-     -- <optimize_parameters>: calculate loss, gradients, and update network weights.
-     -- <modify_commandline_options>: (optionally) add model-specific options and set default options.
- 
- In the function <__init__>, you need to define four lists:
-     -- self.loss_names (str list): specify the training losses that you want to plot and save.
-     -- self.model_names (str list): define networks used in our training.
-     -- self.visual_names (str list): specify the images that you want to display and save.
-     -- self.optimizers (optimizer list): define and initialize optimizers. You can define one optimizer for each network. If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an usage.
- 
- Now you can use the model class by specifying flag '--model dummy'.
- See our template model class 'template_model.py' for more details.
- """
- 
- import importlib
- from src.face3d.models.base_model import BaseModel
- 
- 
- def find_model_using_name(model_name):
-     """Import the module "models/[model_name]_model.py".
- 
-     In the file, the class called DatasetNameModel() will
-     be instantiated. It has to be a subclass of BaseModel,
-     and it is case-insensitive.
-     """
-     model_filename = "face3d.models." + model_name + "_model"
-     modellib = importlib.import_module(model_filename)
-     model = None
-     target_model_name = model_name.replace('_', '') + 'model'
-     for name, cls in modellib.__dict__.items():
-         if name.lower() == target_model_name.lower() \
-                 and issubclass(cls, BaseModel):
-             model = cls
- 
-     if model is None:
-         print("In %s.py, there should be a subclass of BaseModel with class name that matches %s in lowercase." % (model_filename, target_model_name))
-         exit(0)
- 
-     return model
- 
- 
- def get_option_setter(model_name):
- def get_option_setter(model_name):
49
- """Return the static method <modify_commandline_options> of the model class."""
50
- model_class = find_model_using_name(model_name)
51
- return model_class.modify_commandline_options
52
-
53
-
54
- def create_model(opt):
55
- """Create a model given the option.
56
-
57
- This function warps the class CustomDatasetDataLoader.
58
- This is the main interface between this package and 'train.py'/'test.py'
59
-
60
- Example:
61
- >>> from models import create_model
62
- >>> model = create_model(opt)
63
- """
64
- model = find_model_using_name(opt.model)
65
- instance = model(opt)
66
- print("model [%s] was created" % type(instance).__name__)
67
- return instance
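
The lookup this deleted `__init__.py` performed — import a module by a constructed name, then scan its namespace for a class whose lowercase name matches — can be sketched with only the standard library. The function name below is illustrative, and the demo uses `collections` rather than the project's model modules:

```python
import importlib


def find_class_by_name(module_name: str, target_name: str) -> type:
    # Import the module, then scan its namespace for a class whose
    # name matches target_name case-insensitively, mirroring the
    # loop in find_model_using_name above.
    module = importlib.import_module(module_name)
    for name, obj in vars(module).items():
        if isinstance(obj, type) and name.lower() == target_name.lower():
            return obj
    raise LookupError(f"no class named {target_name!r} in {module_name!r}")


# Demo against the standard library: 'ordereddict' resolves to
# collections.OrderedDict despite the case difference.
cls = find_class_by_name("collections", "ordereddict")
print(cls.__name__)  # OrderedDict
```

The real code additionally filters on `issubclass(cls, BaseModel)`; the sketch keeps only the name-matching part.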
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/index.md DELETED
@@ -1,98 +0,0 @@
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
-
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
- specific language governing permissions and limitations under the License.
- -->
-
- <p align="center">
-     <br>
-     <img src="https://raw.githubusercontent.com/huggingface/diffusers/77aadfee6a891ab9fcfb780f87c693f7a5beeb8e/docs/source/imgs/diffusers_library.jpg" width="400"/>
-     <br>
- </p>
-
- # Diffusers
-
- 🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on [usability over performance](conceptual/philosophy#usability-over-performance), [simple over easy](conceptual/philosophy#simple-over-easy), and [customizability over abstractions](conceptual/philosophy#tweakable-contributorfriendly-over-abstraction).
-
- The library has three main components:
-
- - State-of-the-art [diffusion pipelines](api/pipelines/overview) for inference with just a few lines of code.
- - Interchangeable [noise schedulers](api/schedulers/overview) for balancing trade-offs between generation speed and quality.
- - Pretrained [models](api/models) that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems.
-
- <div class="mt-10">
-     <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
-         <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/tutorial_overview"
-             ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
-             <p class="text-gray-700">Learn the fundamental skills you need to start generating outputs, build your own diffusion system, and train a diffusion model. We recommend starting here if you're using 🤗 Diffusers for the first time!</p>
-         </a>
-         <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./using-diffusers/loading_overview"
-             ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
-             <p class="text-gray-700">Practical guides for helping you load pipelines, models, and schedulers. You'll also learn how to use pipelines for specific tasks, control how outputs are generated, optimize for inference speed, and different training techniques.</p>
-         </a>
-         <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual/philosophy"
-             ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
-             <p class="text-gray-700">Understand why the library was designed the way it was, and learn more about the ethical guidelines and safety implementations for using the library.</p>
-         </a>
-         <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./api/models/overview"
-             ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
-             <p class="text-gray-700">Technical descriptions of how 🤗 Diffusers classes and methods work.</p>
-         </a>
-     </div>
- </div>
-
- ## Supported pipelines
-
- | Pipeline | Paper/Repository | Tasks |
- |---|---|:---:|
- | [alt_diffusion](./api/pipelines/alt_diffusion) | [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) | Image-to-Image Text-Guided Generation |
- | [audio_diffusion](./api/pipelines/audio_diffusion) | [Audio Diffusion](https://github.com/teticio/audio-diffusion.git) | Unconditional Audio Generation |
- | [controlnet](./api/pipelines/controlnet) | [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) | Image-to-Image Text-Guided Generation |
- | [cycle_diffusion](./api/pipelines/cycle_diffusion) | [Unifying Diffusion Models' Latent Space, with Applications to CycleDiffusion and Guidance](https://arxiv.org/abs/2210.05559) | Image-to-Image Text-Guided Generation |
- | [dance_diffusion](./api/pipelines/dance_diffusion) | [Dance Diffusion](https://github.com/williamberman/diffusers.git) | Unconditional Audio Generation |
- | [ddpm](./api/pipelines/ddpm) | [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239) | Unconditional Image Generation |
- | [ddim](./api/pipelines/ddim) | [Denoising Diffusion Implicit Models](https://arxiv.org/abs/2010.02502) | Unconditional Image Generation |
- | [if](./if) | [**IF**](./api/pipelines/if) | Image Generation |
- | [if_img2img](./if) | [**IF**](./api/pipelines/if) | Image-to-Image Generation |
- | [if_inpainting](./if) | [**IF**](./api/pipelines/if) | Image-to-Image Generation |
- | [latent_diffusion](./api/pipelines/latent_diffusion) | [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) | Text-to-Image Generation |
- | [latent_diffusion](./api/pipelines/latent_diffusion) | [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) | Super Resolution Image-to-Image |
- | [latent_diffusion_uncond](./api/pipelines/latent_diffusion_uncond) | [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) | Unconditional Image Generation |
- | [paint_by_example](./api/pipelines/paint_by_example) | [Paint by Example: Exemplar-based Image Editing with Diffusion Models](https://arxiv.org/abs/2211.13227) | Image-Guided Image Inpainting |
- | [pndm](./api/pipelines/pndm) | [Pseudo Numerical Methods for Diffusion Models on Manifolds](https://arxiv.org/abs/2202.09778) | Unconditional Image Generation |
- | [score_sde_ve](./api/pipelines/score_sde_ve) | [Score-Based Generative Modeling through Stochastic Differential Equations](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
- | [score_sde_vp](./api/pipelines/score_sde_vp) | [Score-Based Generative Modeling through Stochastic Differential Equations](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
- | [semantic_stable_diffusion](./api/pipelines/semantic_stable_diffusion) | [Semantic Guidance](https://arxiv.org/abs/2301.12247) | Text-Guided Generation |
- | [stable_diffusion_adapter](./api/pipelines/stable_diffusion/adapter) | [**T2I-Adapter**](https://arxiv.org/abs/2302.08453) | Image-to-Image Text-Guided Generation |
- | [stable_diffusion_text2img](./api/pipelines/stable_diffusion/text2img) | [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release) | Text-to-Image Generation |
- | [stable_diffusion_img2img](./api/pipelines/stable_diffusion/img2img) | [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release) | Image-to-Image Text-Guided Generation |
- | [stable_diffusion_inpaint](./api/pipelines/stable_diffusion/inpaint) | [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release) | Text-Guided Image Inpainting |
- | [stable_diffusion_panorama](./api/pipelines/stable_diffusion/panorama) | [MultiDiffusion](https://multidiffusion.github.io/) | Text-to-Panorama Generation |
- | [stable_diffusion_pix2pix](./api/pipelines/stable_diffusion/pix2pix) | [InstructPix2Pix: Learning to Follow Image Editing Instructions](https://arxiv.org/abs/2211.09800) | Text-Guided Image Editing |
- | [stable_diffusion_pix2pix_zero](./api/pipelines/stable_diffusion/pix2pix_zero) | [Zero-shot Image-to-Image Translation](https://pix2pixzero.github.io/) | Text-Guided Image Editing |
- | [stable_diffusion_attend_and_excite](./api/pipelines/stable_diffusion/attend_and_excite) | [Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models](https://arxiv.org/abs/2301.13826) | Text-to-Image Generation |
- | [stable_diffusion_self_attention_guidance](./api/pipelines/stable_diffusion/self_attention_guidance) | [Improving Sample Quality of Diffusion Models Using Self-Attention Guidance](https://arxiv.org/abs/2210.00939) | Text-to-Image Generation Unconditional Image Generation |
- | [stable_diffusion_image_variation](./stable_diffusion/image_variation) | [Stable Diffusion Image Variations](https://github.com/LambdaLabsML/lambda-diffusers#stable-diffusion-image-variations) | Image-to-Image Generation |
- | [stable_diffusion_latent_upscale](./stable_diffusion/latent_upscale) | [Stable Diffusion Latent Upscaler](https://twitter.com/StabilityAI/status/1590531958815064065) | Text-Guided Super Resolution Image-to-Image |
- | [stable_diffusion_model_editing](./api/pipelines/stable_diffusion/model_editing) | [Editing Implicit Assumptions in Text-to-Image Diffusion Models](https://time-diffusion.github.io/) | Text-to-Image Model Editing |
- | [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Stable Diffusion 2](https://stability.ai/blog/stable-diffusion-v2-release) | Text-to-Image Generation |
- | [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Stable Diffusion 2](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Image Inpainting |
- | [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Depth-Conditional Stable Diffusion](https://github.com/Stability-AI/stablediffusion#depth-conditional-stable-diffusion) | Depth-to-Image Generation |
- | [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Stable Diffusion 2](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Super Resolution Image-to-Image |
- | [stable_diffusion_safe](./api/pipelines/stable_diffusion_safe) | [Safe Stable Diffusion](https://arxiv.org/abs/2211.05105) | Text-Guided Generation |
- | [stable_unclip](./stable_unclip) | Stable unCLIP | Text-to-Image Generation |
- | [stable_unclip](./stable_unclip) | Stable unCLIP | Image-to-Image Text-Guided Generation |
- | [stochastic_karras_ve](./api/pipelines/stochastic_karras_ve) | [Elucidating the Design Space of Diffusion-Based Generative Models](https://arxiv.org/abs/2206.00364) | Unconditional Image Generation |
- | [text_to_video_sd](./api/pipelines/text_to_video) | [Modelscope's Text-to-video-synthesis Model in Open Domain](https://modelscope.cn/models/damo/text-to-video-synthesis/summary) | Text-to-Video Generation |
- | [unclip](./api/pipelines/unclip) | [Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125) (implementation by [kakaobrain](https://github.com/kakaobrain/karlo)) | Text-to-Image Generation |
- | [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Text-to-Image Generation |
- | [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Image Variations Generation |
- | [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Dual Image and Text Guided Generation |
- | [vq_diffusion](./api/pipelines/vq_diffusion) | [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) | Text-to-Image Generation |
- | [stable_diffusion_ldm3d](./api/pipelines/stable_diffusion/ldm3d_diffusion) | [LDM3D: Latent Diffusion Model for 3D](https://arxiv.org/abs/2305.10853) | Text to Image and Depth Generation |
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/unconditional_image_generation.md DELETED
@@ -1,54 +0,0 @@
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
-
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
- specific language governing permissions and limitations under the License.
- -->
-
- # Unconditional image generation
-
- [[open-in-colab]]
-
- Unconditional image generation is a relatively straightforward task. The model only generates images that resemble its training data, without any additional conditions such as text or images.
-
- The ['DiffusionPipeline'] is the easiest way to use a pretrained diffusion system for inference.
-
- Start by creating an instance of the ['DiffusionPipeline'] and specifying which pipeline [checkpoint](https://huggingface.co/models?library=diffusers&sort=downloads) to download. You can use any of the 🧨 diffusion checkpoints on the Hub (the checkpoint used here generates images of butterflies).
-
- <Tip>
-
- 💡 Want to train your own unconditional image generation model? Take a look at the training guide to learn how to generate your own images.
-
- </Tip>
-
-
- This guide uses the ['DiffusionPipeline'] with [DDPM](https://arxiv.org/abs/2006.11239) for unconditional image generation:
-
- ```python
- >>> from diffusers import DiffusionPipeline
-
- >>> generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128")
- ```
- The [diffusion pipeline] downloads and caches all of the modeling, tokenization, and scheduling components. Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on a GPU. You can move the generator object to a GPU just as you would in PyTorch:
- ```python
- >>> generator.to("cuda")
- ```
- Now you can use the generator to create an image:
- ```python
- >>> image = generator().images[0]
- ```
- By default, the output is wrapped in a [PIL.Image](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class) object.
-
- You can save the image by calling:
- ```python
- >>> image.save("generated_image.png")
- ```
-
- Try out the Space (demo link) below, and feel free to play with the inference steps parameter to see how it affects image quality!
-
- <iframe src="https://stevhliu-ddpm-butterflies-128.hf.space" frameborder="0" width="850" height="500"></iframe>
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/embeddings_flax.py DELETED
@@ -1,95 +0,0 @@
- # Copyright 2023 The HuggingFace Team. All rights reserved.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- import math
-
- import flax.linen as nn
- import jax.numpy as jnp
-
-
- def get_sinusoidal_embeddings(
-     timesteps: jnp.ndarray,
-     embedding_dim: int,
-     freq_shift: float = 1,
-     min_timescale: float = 1,
-     max_timescale: float = 1.0e4,
-     flip_sin_to_cos: bool = False,
-     scale: float = 1.0,
- ) -> jnp.ndarray:
-     """Returns the positional encoding (same as Tensor2Tensor).
-
-     Args:
-         timesteps: a 1-D Tensor of N indices, one per batch element.
-             These may be fractional.
-         embedding_dim: The number of output channels.
-         min_timescale: The smallest time unit (should probably be 0.0).
-         max_timescale: The largest time unit.
-     Returns:
-         a Tensor of timing signals [N, num_channels]
-     """
-     assert timesteps.ndim == 1, "Timesteps should be a 1d-array"
-     assert embedding_dim % 2 == 0, f"Embedding dimension {embedding_dim} should be even"
-     num_timescales = float(embedding_dim // 2)
-     log_timescale_increment = math.log(max_timescale / min_timescale) / (num_timescales - freq_shift)
-     inv_timescales = min_timescale * jnp.exp(jnp.arange(num_timescales, dtype=jnp.float32) * -log_timescale_increment)
-     emb = jnp.expand_dims(timesteps, 1) * jnp.expand_dims(inv_timescales, 0)
-
-     # scale embeddings
-     scaled_time = scale * emb
-
-     if flip_sin_to_cos:
-         signal = jnp.concatenate([jnp.cos(scaled_time), jnp.sin(scaled_time)], axis=1)
-     else:
-         signal = jnp.concatenate([jnp.sin(scaled_time), jnp.cos(scaled_time)], axis=1)
-     signal = jnp.reshape(signal, [jnp.shape(timesteps)[0], embedding_dim])
-     return signal
-
-
- class FlaxTimestepEmbedding(nn.Module):
-     r"""
-     Time step Embedding Module. Learns embeddings for input time steps.
-
-     Args:
-         time_embed_dim (`int`, *optional*, defaults to `32`):
-             Time step embedding dimension
-         dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
-             Parameters `dtype`
-     """
-     time_embed_dim: int = 32
-     dtype: jnp.dtype = jnp.float32
-
-     @nn.compact
-     def __call__(self, temb):
-         temb = nn.Dense(self.time_embed_dim, dtype=self.dtype, name="linear_1")(temb)
-         temb = nn.silu(temb)
-         temb = nn.Dense(self.time_embed_dim, dtype=self.dtype, name="linear_2")(temb)
-         return temb
-
-
- class FlaxTimesteps(nn.Module):
-     r"""
-     Wrapper Module for sinusoidal Time step Embeddings as described in https://arxiv.org/abs/2006.11239
-
-     Args:
-         dim (`int`, *optional*, defaults to `32`):
-             Time step embedding dimension
-     """
-     dim: int = 32
-     flip_sin_to_cos: bool = False
-     freq_shift: float = 1
-
-     @nn.compact
-     def __call__(self, timesteps):
-         return get_sinusoidal_embeddings(
-             timesteps, embedding_dim=self.dim, flip_sin_to_cos=self.flip_sin_to_cos, freq_shift=self.freq_shift
-         )
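
For intuition, the arithmetic in `get_sinusoidal_embeddings` (with the defaults `min_timescale=1`, `scale=1.0`, `flip_sin_to_cos=False`) can be restated for a single timestep in plain Python with no JAX dependency — a sketch for illustration, not a drop-in replacement:

```python
import math


def sinusoidal_embedding(timestep: float, dim: int, max_timescale: float = 1.0e4,
                         freq_shift: float = 1.0) -> list:
    # Half the channels carry sines, the other half cosines, at
    # geometrically spaced frequencies — same formula as the JAX code above.
    half = dim // 2
    log_increment = math.log(max_timescale) / (half - freq_shift)
    inv_timescales = [math.exp(-log_increment * i) for i in range(half)]
    angles = [timestep * s for s in inv_timescales]
    return [math.sin(a) for a in angles] + [math.cos(a) for a in angles]


emb = sinusoidal_embedding(0.0, 8)
# At t=0 every sine term is 0.0 and every cosine term is 1.0.
print(emb)
```

The real function vectorizes this over a batch of timesteps and optionally swaps the sine/cosine halves via `flip_sin_to_cos`.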
 
spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/yolact.py DELETED
@@ -1,146 +0,0 @@
- import torch
-
- from mmdet.core import bbox2result
- from ..builder import DETECTORS, build_head
- from .single_stage import SingleStageDetector
-
-
- @DETECTORS.register_module()
- class YOLACT(SingleStageDetector):
-     """Implementation of `YOLACT <https://arxiv.org/abs/1904.02689>`_"""
-
-     def __init__(self,
-                  backbone,
-                  neck,
-                  bbox_head,
-                  segm_head,
-                  mask_head,
-                  train_cfg=None,
-                  test_cfg=None,
-                  pretrained=None):
-         super(YOLACT, self).__init__(backbone, neck, bbox_head, train_cfg,
-                                      test_cfg, pretrained)
-         self.segm_head = build_head(segm_head)
-         self.mask_head = build_head(mask_head)
-         self.init_segm_mask_weights()
-
-     def init_segm_mask_weights(self):
-         """Initialize weights of the YOLACT segm head and YOLACT mask head."""
-         self.segm_head.init_weights()
-         self.mask_head.init_weights()
-
-     def forward_dummy(self, img):
-         """Used for computing network flops.
-
-         See `mmdetection/tools/analysis_tools/get_flops.py`
-         """
-         raise NotImplementedError
-
-     def forward_train(self,
-                       img,
-                       img_metas,
-                       gt_bboxes,
-                       gt_labels,
-                       gt_bboxes_ignore=None,
-                       gt_masks=None):
-         """
-         Args:
-             img (Tensor): of shape (N, C, H, W) encoding input images.
-                 Typically these should be mean centered and std scaled.
-             img_metas (list[dict]): list of image info dict where each dict
-                 has: 'img_shape', 'scale_factor', 'flip', and may also contain
-                 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
-                 For details on the values of these keys see
-                 `mmdet/datasets/pipelines/formatting.py:Collect`.
-             gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
-                 shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
-             gt_labels (list[Tensor]): class indices corresponding to each box
-             gt_bboxes_ignore (None | list[Tensor]): specify which bounding
-                 boxes can be ignored when computing the loss.
-             gt_masks (None | Tensor) : true segmentation masks for each box
-                 used if the architecture supports a segmentation task.
-
-         Returns:
-             dict[str, Tensor]: a dictionary of loss components
-         """
-         # convert Bitmap mask or Polygon Mask to Tensor here
-         gt_masks = [
-             gt_mask.to_tensor(dtype=torch.uint8, device=img.device)
-             for gt_mask in gt_masks
-         ]
-
-         x = self.extract_feat(img)
-
-         cls_score, bbox_pred, coeff_pred = self.bbox_head(x)
-         bbox_head_loss_inputs = (cls_score, bbox_pred) + (gt_bboxes, gt_labels,
-                                                           img_metas)
-         losses, sampling_results = self.bbox_head.loss(
-             *bbox_head_loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)
-
-         segm_head_outs = self.segm_head(x[0])
-         loss_segm = self.segm_head.loss(segm_head_outs, gt_masks, gt_labels)
-         losses.update(loss_segm)
-
-         mask_pred = self.mask_head(x[0], coeff_pred, gt_bboxes, img_metas,
-                                    sampling_results)
-         loss_mask = self.mask_head.loss(mask_pred, gt_masks, gt_bboxes,
-                                         img_metas, sampling_results)
-         losses.update(loss_mask)
-
-         # check NaN and Inf
-         for loss_name in losses.keys():
-             assert torch.isfinite(torch.stack(losses[loss_name]))\
-                 .all().item(), '{} becomes infinite or NaN!'\
-                 .format(loss_name)
-
-         return losses
-
-     def simple_test(self, img, img_metas, rescale=False):
-         """Test function without test time augmentation."""
-         x = self.extract_feat(img)
-
-         cls_score, bbox_pred, coeff_pred = self.bbox_head(x)
-
-         bbox_inputs = (cls_score, bbox_pred,
-                        coeff_pred) + (img_metas, self.test_cfg, rescale)
-         det_bboxes, det_labels, det_coeffs = self.bbox_head.get_bboxes(
-             *bbox_inputs)
-         bbox_results = [
-             bbox2result(det_bbox, det_label, self.bbox_head.num_classes)
-             for det_bbox, det_label in zip(det_bboxes, det_labels)
-         ]
-
-         num_imgs = len(img_metas)
-         scale_factors = tuple(meta['scale_factor'] for meta in img_metas)
-         if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes):
-             segm_results = [[[] for _ in range(self.mask_head.num_classes)]
-                             for _ in range(num_imgs)]
-         else:
-             # if det_bboxes is rescaled to the original image size, we need to
-             # rescale it back to the testing scale to obtain RoIs.
-             if rescale and not isinstance(scale_factors[0], float):
-                 scale_factors = [
-                     torch.from_numpy(scale_factor).to(det_bboxes[0].device)
-                     for scale_factor in scale_factors
-                 ]
-             _bboxes = [
-                 det_bboxes[i][:, :4] *
-                 scale_factors[i] if rescale else det_bboxes[i][:, :4]
-                 for i in range(len(det_bboxes))
-             ]
-             mask_preds = self.mask_head(x[0], det_coeffs, _bboxes, img_metas)
-             # apply mask post-processing to each image individually
-             segm_results = []
-             for i in range(num_imgs):
-                 if det_bboxes[i].shape[0] == 0:
-                     segm_results.append(
-                         [[] for _ in range(self.mask_head.num_classes)])
-                 else:
-                     segm_result = self.mask_head.get_seg_masks(
-                         mask_preds[i], det_labels[i], img_metas[i], rescale)
-                     segm_results.append(segm_result)
-         return list(zip(bbox_results, segm_results))
-
-     def aug_test(self, imgs, img_metas, rescale=False):
-         """Test with augmentations."""
-         raise NotImplementedError
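
The `bbox2result` call in `simple_test` groups flat detections into one list per class. A pure-Python sketch of that grouping (mmdet's real version operates on tensors/NumPy arrays, and the sample boxes below are made up):

```python
def bbox2result(bboxes, labels, num_classes):
    # Split detections (each [x1, y1, x2, y2, score]) into one
    # bucket per class label, as downstream evaluation expects.
    results = [[] for _ in range(num_classes)]
    for bbox, label in zip(bboxes, labels):
        results[label].append(bbox)
    return results


dets = [[0, 0, 10, 10, 0.9], [5, 5, 20, 20, 0.8]]
labels = [0, 2]
res = bbox2result(dets, labels, num_classes=3)
print([len(r) for r in res])  # [1, 0, 1]
```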
 
spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/util/time_counter.py DELETED
@@ -1,62 +0,0 @@
- import json
- import time
-
-
- class TimeCounter:
-     def __init__(self) -> None:
-         pass
-
-     def clear(self):
-         self.timedict = {}
-         self.basetime = time.perf_counter()
-
-     def timeit(self, name):
-         nowtime = time.perf_counter() - self.basetime
-         self.timedict[name] = nowtime
-         self.basetime = time.perf_counter()
-
-
- class TimeHolder:
-     def __init__(self) -> None:
-         self.timedict = {}
-
-     def update(self, _timedict: dict):
-         for k, v in _timedict.items():
-             if k not in self.timedict:
-                 self.timedict[k] = AverageMeter(name=k, val_only=True)
-             self.timedict[k].update(val=v)
-
-     def final_res(self):
-         return {k: v.avg for k, v in self.timedict.items()}
-
-     def __str__(self):
-         return json.dumps(self.final_res(), indent=2)
-
-
- class AverageMeter(object):
-     """Computes and stores the average and current value"""
-
-     def __init__(self, name, fmt=":f", val_only=False):
-         self.name = name
-         self.fmt = fmt
-         self.val_only = val_only
-         self.reset()
-
-     def reset(self):
-         self.val = 0
-         self.avg = 0
-         self.sum = 0
-         self.count = 0
-
-     def update(self, val, n=1):
-         self.val = val
-         self.sum += val * n
-         self.count += n
-         self.avg = self.sum / self.count
-
-     def __str__(self):
-         if self.val_only:
-             fmtstr = "{name} {val" + self.fmt + "}"
-         else:
-             fmtstr = "{name} {val" + self.fmt + "} ({avg" + self.fmt + "})"
-         return fmtstr.format(**self.__dict__)
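
The core of `AverageMeter` is a weighted running average: `update(val, n)` folds in `n` samples of value `val`. A trimmed restatement (dropping the name/format fields) with a worked run:

```python
class AverageMeter:
    # Tracks the most recent value and the running average, like the
    # class above but without the display formatting.
    def __init__(self):
        self.val = self.sum = self.count = self.avg = 0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count


m = AverageMeter()
m.update(2.0)        # one sample of 2.0
m.update(4.0, n=3)   # three samples of 4.0
print(m.avg)         # (2.0 + 3 * 4.0) / 4 = 3.5
```

`TimeHolder` uses exactly this mechanism to average per-stage timings across iterations.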
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/cachecontrol/_cmd.py DELETED
@@ -1,61 +0,0 @@
- # SPDX-FileCopyrightText: 2015 Eric Larson
- #
- # SPDX-License-Identifier: Apache-2.0
-
- import logging
-
- from pip._vendor import requests
-
- from pip._vendor.cachecontrol.adapter import CacheControlAdapter
- from pip._vendor.cachecontrol.cache import DictCache
- from pip._vendor.cachecontrol.controller import logger
-
- from argparse import ArgumentParser
-
-
- def setup_logging():
-     logger.setLevel(logging.DEBUG)
-     handler = logging.StreamHandler()
-     logger.addHandler(handler)
-
-
- def get_session():
-     adapter = CacheControlAdapter(
-         DictCache(), cache_etags=True, serializer=None, heuristic=None
-     )
-     sess = requests.Session()
-     sess.mount("http://", adapter)
-     sess.mount("https://", adapter)
-
-     sess.cache_controller = adapter.controller
-     return sess
-
-
- def get_args():
-     parser = ArgumentParser()
-     parser.add_argument("url", help="The URL to try and cache")
-     return parser.parse_args()
-
-
- def main(args=None):
-     args = get_args()
-     sess = get_session()
-
-     # Make a request to get a response
-     resp = sess.get(args.url)
-
-     # Turn on logging
-     setup_logging()
-
-     # try setting the cache
-     sess.cache_controller.cache_response(resp.request, resp.raw)
-
-     # Now try to get it
-     if sess.cache_controller.cached_request(resp.request):
-         print("Cached!")
-     else:
-         print("Not cached :(")
-
-
- if __name__ == "__main__":
-     main()
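
`get_session` backs the cache adapter with a `DictCache`, a simple in-memory key/value store. A minimal restatement of that store (a sketch of the idea; the real class lives in `pip._vendor.cachecontrol.cache` and its exact details may differ):

```python
import threading


class DictCache:
    # In-memory cache keyed by URL-like strings; a lock guards
    # mutation so it is safe to share across threads.
    def __init__(self):
        self.lock = threading.Lock()
        self.data = {}

    def get(self, key):
        return self.data.get(key)

    def set(self, key, value):
        with self.lock:
            self.data[key] = value

    def delete(self, key):
        with self.lock:
            self.data.pop(key, None)


cache = DictCache()
cache.set("http://example.com", b"cached body")
print(cache.get("http://example.com"))  # b'cached body'
```

Because the store is a plain dict, the cached entries vanish when the process exits — which is exactly what this debugging CLI wants.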
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/packaging/_structures.py DELETED
@@ -1,61 +0,0 @@
- # This file is dual licensed under the terms of the Apache License, Version
- # 2.0, and the BSD License. See the LICENSE file in the root of this repository
- # for complete details.
-
-
- class InfinityType:
-     def __repr__(self) -> str:
-         return "Infinity"
-
-     def __hash__(self) -> int:
-         return hash(repr(self))
-
-     def __lt__(self, other: object) -> bool:
-         return False
-
-     def __le__(self, other: object) -> bool:
-         return False
-
-     def __eq__(self, other: object) -> bool:
-         return isinstance(other, self.__class__)
-
-     def __gt__(self, other: object) -> bool:
-         return True
-
-     def __ge__(self, other: object) -> bool:
-         return True
-
-     def __neg__(self: object) -> "NegativeInfinityType":
-         return NegativeInfinity
-
-
- Infinity = InfinityType()
-
-
- class NegativeInfinityType:
-     def __repr__(self) -> str:
-         return "-Infinity"
-
-     def __hash__(self) -> int:
-         return hash(repr(self))
-
-     def __lt__(self, other: object) -> bool:
-         return True
-
-     def __le__(self, other: object) -> bool:
-         return True
-
-     def __eq__(self, other: object) -> bool:
-         return isinstance(other, self.__class__)
-
-     def __gt__(self, other: object) -> bool:
-         return False
-
-     def __ge__(self, other: object) -> bool:
-         return False
-
-     def __neg__(self: object) -> InfinityType:
-         return Infinity
-
-
- NegativeInfinity = NegativeInfinityType()
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/dep_util.py DELETED
@@ -1,25 +0,0 @@
- from distutils.dep_util import newer_group
-
-
- # yes, this was almost entirely copy-pasted from
- # 'newer_pairwise()', this is just another convenience
- # function.
- def newer_pairwise_group(sources_groups, targets):
-     """Walk both arguments in parallel, testing if each source group is newer
-     than its corresponding target. Returns a pair of lists (sources_groups,
-     targets) where sources is newer than target, according to the semantics
-     of 'newer_group()'.
-     """
-     if len(sources_groups) != len(targets):
-         raise ValueError(
-             "'sources_group' and 'targets' must be the same length")
-
-     # build a pair of lists (sources_groups, targets) where source is newer
-     n_sources = []
-     n_targets = []
-     for i in range(len(sources_groups)):
-         if newer_group(sources_groups[i], targets[i]):
-             n_sources.append(sources_groups[i])
-             n_targets.append(targets[i])
-
-     return n_sources, n_targets
 
spaces/AzumaSeren100/XuanShen-Bert-VITS2/commons.py DELETED
@@ -1,161 +0,0 @@
- import math
- import numpy as np
- import torch
- from torch import nn
- from torch.nn import functional as F
-
-
- def init_weights(m, mean=0.0, std=0.01):
-     classname = m.__class__.__name__
-     if classname.find("Conv") != -1:
-         m.weight.data.normal_(mean, std)
-
-
- def get_padding(kernel_size, dilation=1):
-     return int((kernel_size*dilation - dilation)/2)
-
-
- def convert_pad_shape(pad_shape):
-     l = pad_shape[::-1]
-     pad_shape = [item for sublist in l for item in sublist]
-     return pad_shape
-
-
- def intersperse(lst, item):
-     result = [item] * (len(lst) * 2 + 1)
-     result[1::2] = lst
-     return result
-
-
- def kl_divergence(m_p, logs_p, m_q, logs_q):
-     """KL(P||Q)"""
-     kl = (logs_q - logs_p) - 0.5
-     kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
-     return kl
-
-
- def rand_gumbel(shape):
-     """Sample from the Gumbel distribution, protect from overflows."""
-     uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
-     return -torch.log(-torch.log(uniform_samples))
-
-
- def rand_gumbel_like(x):
-     g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
-     return g
-
-
- def slice_segments(x, ids_str, segment_size=4):
-     ret = torch.zeros_like(x[:, :, :segment_size])
-     for i in range(x.size(0)):
-         idx_str = ids_str[i]
-         idx_end = idx_str + segment_size
-         ret[i] = x[i, :, idx_str:idx_end]
-     return ret
-
-
- def rand_slice_segments(x, x_lengths=None, segment_size=4):
-     b, d, t = x.size()
-     if x_lengths is None:
-         x_lengths = t
-     ids_str_max = x_lengths - segment_size + 1
-     ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
-     ret = slice_segments(x, ids_str, segment_size)
-     return ret, ids_str
-
-
- def get_timing_signal_1d(
-         length, channels, min_timescale=1.0, max_timescale=1.0e4):
-     position = torch.arange(length, dtype=torch.float)
-     num_timescales = channels // 2
-     log_timescale_increment = (
-         math.log(float(max_timescale) / float(min_timescale)) /
-         (num_timescales - 1))
-     inv_timescales = min_timescale * torch.exp(
-         torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
-     scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
-     signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
-     signal = F.pad(signal, [0, 0, 0, channels % 2])
-     signal = signal.view(1, channels, length)
-     return signal
-
-
- def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
-     b, channels, length = x.size()
-     signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
-     return x + signal.to(dtype=x.dtype, device=x.device)
-
-
- def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
-     b, channels, length = x.size()
-     signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
-     return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
- def subsequent_mask(length):
-     mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
-     return mask
-
-
- @torch.jit.script
- def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
-     n_channels_int = n_channels[0]
-     in_act = input_a + input_b
-     t_act = torch.tanh(in_act[:, :n_channels_int, :])
-     s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
-     acts = t_act * s_act
-     return acts
-
-
- def convert_pad_shape(pad_shape):
-     l = pad_shape[::-1]
-     pad_shape = [item for sublist in l for item in sublist]
-     return pad_shape
-
-
- def shift_1d(x):
-     x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
-     return x
-
-
- def sequence_mask(length, max_length=None):
-     if max_length is None:
-         max_length = length.max()
-     x = torch.arange(max_length, dtype=length.dtype, device=length.device)
-     return x.unsqueeze(0) < length.unsqueeze(1)
-
-
- def generate_path(duration, mask):
-     """
-     duration: [b, 1, t_x]
-     mask: [b, 1, t_y, t_x]
-     """
-     device = duration.device
-
-     b, _, t_y, t_x = mask.shape
-     cum_duration = torch.cumsum(duration, -1)
-
-     cum_duration_flat = cum_duration.view(b * t_x)
-     path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
-     path = path.view(b, t_x, t_y)
-     path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
-     path = path.unsqueeze(1).transpose(2,3) * mask
-     return path
-
-
- def clip_grad_value_(parameters, clip_value, norm_type=2):
-     if isinstance(parameters, torch.Tensor):
-         parameters = [parameters]
-     parameters = list(filter(lambda p: p.grad is not None, parameters))
-     norm_type = float(norm_type)
-     if clip_value is not None:
-         clip_value = float(clip_value)
-
-     total_norm = 0
-     for p in parameters:
-         param_norm = p.grad.data.norm(norm_type)
-         total_norm += param_norm.item() ** norm_type
-         if clip_value is not None:
-             p.grad.data.clamp_(min=-clip_value, max=clip_value)
-     total_norm = total_norm ** (1. / norm_type)
-     return total_norm
 
spaces/Bart92/RVC_HF/Applio-RVC-Fork/utils/backups.py DELETED
@@ -1,141 +0,0 @@
- import os
- import shutil
- import hashlib
- import time
- import base64
-
-
-
-
- LOGS_FOLDER = '/content/Applio-RVC-Fork/logs'
- WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights'
- GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup'
-
- def import_google_drive_backup():
-     print("Importing Google Drive backup...")
-     weights_exist = False
-     for root, dirs, files in os.walk(GOOGLE_DRIVE_PATH):
-         for filename in files:
-             filepath = os.path.join(root, filename)
-             if os.path.isfile(filepath) and not filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')):
-                 backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH))
-                 backup_folderpath = os.path.dirname(backup_filepath)
-                 if not os.path.exists(backup_folderpath):
-                     os.makedirs(backup_folderpath)
-                     print(f'Created backup folder: {backup_folderpath}', flush=True)
-                 shutil.copy2(filepath, backup_filepath)  # copy file with metadata
-                 print(f'Imported file from Google Drive backup: (unknown)')
-             elif filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')) and filename.endswith('.pth'):
-                 weights_exist = True
-                 weights_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, os.path.join(GOOGLE_DRIVE_PATH, 'weights')))
-                 weights_folderpath = os.path.dirname(weights_filepath)
-                 if not os.path.exists(weights_folderpath):
-                     os.makedirs(weights_folderpath)
-                     print(f'Created weights folder: {weights_folderpath}', flush=True)
-                 shutil.copy2(filepath, weights_filepath)  # copy file with metadata
-                 print(f'Imported file from weights: (unknown)')
-     if weights_exist:
-         print("Copied weights from Google Drive backup to local weights folder.")
-     else:
-         print("No weights found in Google Drive backup.")
-     print("Google Drive backup import completed.")
-
- def get_md5_hash(file_path):
-     hash_md5 = hashlib.md5()
-     with open(file_path, "rb") as f:
-         for chunk in iter(lambda: f.read(4096), b""):
-             hash_md5.update(chunk)
-     return hash_md5.hexdigest()
-
- def copy_weights_folder_to_drive():
-     destination_folder = os.path.join(GOOGLE_DRIVE_PATH, 'weights')
-     try:
-         if not os.path.exists(destination_folder):
-             os.makedirs(destination_folder)
-
-         num_copied = 0
-         for filename in os.listdir(WEIGHTS_FOLDER):
-             if filename.endswith('.pth'):
-                 source_file = os.path.join(WEIGHTS_FOLDER, filename)
-                 destination_file = os.path.join(destination_folder, filename)
-                 if not os.path.exists(destination_file):
-                     shutil.copy2(source_file, destination_file)
-                     num_copied += 1
-                     print(f"Copied (unknown) to Google Drive!")
-
-         if num_copied == 0:
-             print("No new finished models found for copying.")
-         else:
-             print(f"Finished copying {num_copied} files to Google Drive!")
-
-     except Exception as e:
-         print(f"An error occurred while copying weights: {str(e)}")
-         # You can log the error or take appropriate actions here.
-
- def backup_files():
-     print("\nStarting backup loop...")
-     last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt')
-     fully_updated = False  # boolean to track if all files are up to date
-
-     while True:
-         try:
-             updated = False  # flag to check if any files were updated
-             last_backup_timestamps = {}
-
-             try:
-                 with open(last_backup_timestamps_path, 'r') as f:
-                     last_backup_timestamps = dict(line.strip().split(':') for line in f)
-             except FileNotFoundError:
-                 pass  # File does not exist yet, which is fine
-
-             for root, dirs, files in os.walk(LOGS_FOLDER):
-                 for filename in files:
-                     if filename != 'last_backup_timestamps.txt':
-                         filepath = os.path.join(root, filename)
-                         if os.path.isfile(filepath):
-                             backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
-                             backup_folderpath = os.path.dirname(backup_filepath)
-                             if not os.path.exists(backup_folderpath):
-                                 os.makedirs(backup_folderpath)
-                                 print(f'Created backup folder: {backup_folderpath}', flush=True)
-                             # check if file has changed since last backup
-                             last_backup_timestamp = last_backup_timestamps.get(filepath)
-                             current_timestamp = os.path.getmtime(filepath)
-                             if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp:
-                                 shutil.copy2(filepath, backup_filepath)  # copy file with metadata
-                                 last_backup_timestamps[filepath] = str(current_timestamp)  # update last backup timestamp
-                                 if last_backup_timestamp is None:
-                                     print(f'Backed up file: (unknown)')
-                                 else:
-                                     print(f'Updating backed up file: (unknown)')
-                                 updated = True
-                                 fully_updated = False  # if a file is updated, all files are not up to date
-
-             # check if any files were deleted in Colab and delete them from the backup drive
-             for filepath in list(last_backup_timestamps.keys()):
-                 if not os.path.exists(filepath):
-                     backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
-                     if os.path.exists(backup_filepath):
-                         os.remove(backup_filepath)
-                         print(f'Deleted file: {filepath}')
-                     del last_backup_timestamps[filepath]
-                     updated = True
-                     fully_updated = False  # if a file is deleted, all files are not up to date
-
-             if not updated and not fully_updated:
-                 print("Files are up to date.")
-                 fully_updated = True  # if all files are up to date, set the boolean to True
-                 copy_weights_folder_to_drive()
-                 sleep_time = 15
-             else:
-                 sleep_time = 0.1
-
-             with open(last_backup_timestamps_path, 'w') as f:
-                 for filepath, timestamp in last_backup_timestamps.items():
-                     f.write(f'{filepath}:{timestamp}\n')
-
-             time.sleep(sleep_time)  # wait for 15 seconds before checking again, or 0.1s if not fully up to date to speed up backups
-
-         except Exception as e:
-             print(f"An error occurred: {str(e)}")
-             # You can log the error or take appropriate actions here.
 
spaces/Bart92/RVC_HF/demucs/augment.py DELETED
@@ -1,106 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates.
- # All rights reserved.
- #
- # This source code is licensed under the license found in the
- # LICENSE file in the root directory of this source tree.
-
- import random
- import torch as th
- from torch import nn
-
-
- class Shift(nn.Module):
-     """
-     Randomly shift audio in time by up to `shift` samples.
-     """
-     def __init__(self, shift=8192):
-         super().__init__()
-         self.shift = shift
-
-     def forward(self, wav):
-         batch, sources, channels, time = wav.size()
-         length = time - self.shift
-         if self.shift > 0:
-             if not self.training:
-                 wav = wav[..., :length]
-             else:
-                 offsets = th.randint(self.shift, [batch, sources, 1, 1], device=wav.device)
-                 offsets = offsets.expand(-1, -1, channels, -1)
-                 indexes = th.arange(length, device=wav.device)
-                 wav = wav.gather(3, indexes + offsets)
-         return wav
-
-
- class FlipChannels(nn.Module):
-     """
-     Flip left-right channels.
-     """
-     def forward(self, wav):
-         batch, sources, channels, time = wav.size()
-         if self.training and wav.size(2) == 2:
-             left = th.randint(2, (batch, sources, 1, 1), device=wav.device)
-             left = left.expand(-1, -1, -1, time)
-             right = 1 - left
-             wav = th.cat([wav.gather(2, left), wav.gather(2, right)], dim=2)
-         return wav
-
-
- class FlipSign(nn.Module):
-     """
-     Random sign flip.
-     """
-     def forward(self, wav):
-         batch, sources, channels, time = wav.size()
-         if self.training:
-             signs = th.randint(2, (batch, sources, 1, 1), device=wav.device, dtype=th.float32)
-             wav = wav * (2 * signs - 1)
-         return wav
-
-
- class Remix(nn.Module):
-     """
-     Shuffle sources to make new mixes.
-     """
-     def __init__(self, group_size=4):
-         """
-         Shuffle sources within one batch.
-         Each batch is divided into groups of size `group_size` and shuffling is done within
-         each group separately. This allows keeping the same probability distribution no matter
-         the number of GPUs. Without this grouping, using more GPUs would lead to a higher
-         probability of keeping two sources from the same track together, which can impact
-         performance.
-         """
-         super().__init__()
-         self.group_size = group_size
-
-     def forward(self, wav):
-         batch, streams, channels, time = wav.size()
-         device = wav.device
-
-         if self.training:
-             group_size = self.group_size or batch
-             if batch % group_size != 0:
-                 raise ValueError(f"Batch size {batch} must be divisible by group size {group_size}")
-             groups = batch // group_size
-             wav = wav.view(groups, group_size, streams, channels, time)
-             permutations = th.argsort(th.rand(groups, group_size, streams, 1, 1, device=device),
-                                       dim=1)
-             wav = wav.gather(1, permutations.expand(-1, -1, -1, channels, time))
-             wav = wav.view(batch, streams, channels, time)
-         return wav
-
-
- class Scale(nn.Module):
-     def __init__(self, proba=1., min=0.25, max=1.25):
-         super().__init__()
-         self.proba = proba
-         self.min = min
-         self.max = max
-
-     def forward(self, wav):
-         batch, streams, channels, time = wav.size()
-         device = wav.device
-         if self.training and random.random() < self.proba:
-             scales = th.empty(batch, streams, 1, 1, device=device).uniform_(self.min, self.max)
-             wav *= scales
-         return wav
 
spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/layers_123812KB .py DELETED
@@ -1,118 +0,0 @@
1
- import torch
2
- from torch import nn
3
- import torch.nn.functional as F
4
-
5
- from . import spec_utils
6
-
7
-
8
- class Conv2DBNActiv(nn.Module):
9
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
10
- super(Conv2DBNActiv, self).__init__()
11
- self.conv = nn.Sequential(
12
- nn.Conv2d(
13
- nin,
14
- nout,
15
- kernel_size=ksize,
16
- stride=stride,
17
- padding=pad,
18
- dilation=dilation,
19
- bias=False,
20
- ),
21
- nn.BatchNorm2d(nout),
22
- activ(),
23
- )
24
-
25
- def __call__(self, x):
26
- return self.conv(x)
27
-
28
-
29
- class SeperableConv2DBNActiv(nn.Module):
30
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
31
- super(SeperableConv2DBNActiv, self).__init__()
32
- self.conv = nn.Sequential(
33
- nn.Conv2d(
34
- nin,
35
- nin,
36
- kernel_size=ksize,
37
- stride=stride,
38
- padding=pad,
39
- dilation=dilation,
40
- groups=nin,
41
- bias=False,
42
- ),
43
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
44
- nn.BatchNorm2d(nout),
45
- activ(),
46
- )
47
-
48
- def __call__(self, x):
49
- return self.conv(x)
50
-
51
-
52
- class Encoder(nn.Module):
53
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
54
- super(Encoder, self).__init__()
55
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
56
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
57
-
58
- def __call__(self, x):
59
- skip = self.conv1(x)
60
- h = self.conv2(skip)
61
-
62
- return h, skip
63
-
64
-
65
- class Decoder(nn.Module):
66
- def __init__(
67
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
68
- ):
69
- super(Decoder, self).__init__()
70
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
71
- self.dropout = nn.Dropout2d(0.1) if dropout else None
72
-
73
- def __call__(self, x, skip=None):
74
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
75
- if skip is not None:
76
- skip = spec_utils.crop_center(skip, x)
77
- x = torch.cat([x, skip], dim=1)
78
- h = self.conv(x)
79
-
80
- if self.dropout is not None:
81
- h = self.dropout(h)
82
-
83
- return h
84
-
85
-
86
- class ASPPModule(nn.Module):
87
- def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU):
88
- super(ASPPModule, self).__init__()
89
- self.conv1 = nn.Sequential(
90
- nn.AdaptiveAvgPool2d((1, None)),
91
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
92
- )
93
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
94
- self.conv3 = SeperableConv2DBNActiv(
95
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
96
- )
97
- self.conv4 = SeperableConv2DBNActiv(
98
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
99
- )
100
- self.conv5 = SeperableConv2DBNActiv(
101
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
102
- )
103
- self.bottleneck = nn.Sequential(
104
- Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
105
- )
106
-
107
- def forward(self, x):
108
- _, _, h, w = x.size()
109
- feat1 = F.interpolate(
110
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
111
- )
112
- feat2 = self.conv2(x)
113
- feat3 = self.conv3(x)
114
- feat4 = self.conv4(x)
115
- feat5 = self.conv5(x)
116
- out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1)
117
- bottle = self.bottleneck(out)
118
- return bottle
 
spaces/Bart92/RVC_HF/slicer2.py DELETED
@@ -1,260 +0,0 @@
- import numpy as np
-
-
- # This function is obtained from librosa.
- def get_rms(
-     y,
-     frame_length=2048,
-     hop_length=512,
-     pad_mode="constant",
- ):
-     padding = (int(frame_length // 2), int(frame_length // 2))
-     y = np.pad(y, padding, mode=pad_mode)
-
-     axis = -1
-     # put our new within-frame axis at the end for now
-     out_strides = y.strides + tuple([y.strides[axis]])
-     # Reduce the shape on the framing axis
-     x_shape_trimmed = list(y.shape)
-     x_shape_trimmed[axis] -= frame_length - 1
-     out_shape = tuple(x_shape_trimmed) + tuple([frame_length])
-     xw = np.lib.stride_tricks.as_strided(y, shape=out_shape, strides=out_strides)
-     if axis < 0:
-         target_axis = axis - 1
-     else:
-         target_axis = axis + 1
-     xw = np.moveaxis(xw, -1, target_axis)
-     # Downsample along the target axis
-     slices = [slice(None)] * xw.ndim
-     slices[axis] = slice(0, None, hop_length)
-     x = xw[tuple(slices)]
-
-     # Calculate power
-     power = np.mean(np.abs(x) ** 2, axis=-2, keepdims=True)
-
-     return np.sqrt(power)
-
-
- class Slicer:
-     def __init__(
-         self,
-         sr: int,
-         threshold: float = -40.0,
-         min_length: int = 5000,
-         min_interval: int = 300,
-         hop_size: int = 20,
-         max_sil_kept: int = 5000,
-     ):
-         if not min_length >= min_interval >= hop_size:
-             raise ValueError(
-                 "The following condition must be satisfied: min_length >= min_interval >= hop_size"
-             )
-         if not max_sil_kept >= hop_size:
-             raise ValueError(
-                 "The following condition must be satisfied: max_sil_kept >= hop_size"
-             )
-         min_interval = sr * min_interval / 1000
-         self.threshold = 10 ** (threshold / 20.0)
-         self.hop_size = round(sr * hop_size / 1000)
-         self.win_size = min(round(min_interval), 4 * self.hop_size)
-         self.min_length = round(sr * min_length / 1000 / self.hop_size)
-         self.min_interval = round(min_interval / self.hop_size)
-         self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size)
-
-     def _apply_slice(self, waveform, begin, end):
-         if len(waveform.shape) > 1:
-             return waveform[
-                 :, begin * self.hop_size : min(waveform.shape[1], end * self.hop_size)
-             ]
-         else:
-             return waveform[
-                 begin * self.hop_size : min(waveform.shape[0], end * self.hop_size)
-             ]
-
-     # @timeit
-     def slice(self, waveform):
-         if len(waveform.shape) > 1:
-             samples = waveform.mean(axis=0)
-         else:
-             samples = waveform
-         if samples.shape[0] <= self.min_length:
-             return [waveform]
-         rms_list = get_rms(
-             y=samples, frame_length=self.win_size, hop_length=self.hop_size
-         ).squeeze(0)
-         sil_tags = []
-         silence_start = None
-         clip_start = 0
-         for i, rms in enumerate(rms_list):
-             # Keep looping while frame is silent.
-             if rms < self.threshold:
-                 # Record start of silent frames.
-                 if silence_start is None:
-                     silence_start = i
-                 continue
-             # Keep looping while frame is not silent and silence start has not been recorded.
-             if silence_start is None:
-                 continue
-             # Clear recorded silence start if interval is not enough or clip is too short
-             is_leading_silence = silence_start == 0 and i > self.max_sil_kept
-             need_slice_middle = (
-                 i - silence_start >= self.min_interval
-                 and i - clip_start >= self.min_length
-             )
-             if not is_leading_silence and not need_slice_middle:
-                 silence_start = None
-                 continue
-             # Need slicing. Record the range of silent frames to be removed.
-             if i - silence_start <= self.max_sil_kept:
-                 pos = rms_list[silence_start : i + 1].argmin() + silence_start
-                 if silence_start == 0:
-                     sil_tags.append((0, pos))
-                 else:
-                     sil_tags.append((pos, pos))
-                 clip_start = pos
-             elif i - silence_start <= self.max_sil_kept * 2:
-                 pos = rms_list[
-                     i - self.max_sil_kept : silence_start + self.max_sil_kept + 1
-                 ].argmin()
-                 pos += i - self.max_sil_kept
-                 pos_l = (
-                     rms_list[
-                         silence_start : silence_start + self.max_sil_kept + 1
-                     ].argmin()
-                     + silence_start
-                 )
-                 pos_r = (
-                     rms_list[i - self.max_sil_kept : i + 1].argmin()
-                     + i
-                     - self.max_sil_kept
-                 )
-                 if silence_start == 0:
-                     sil_tags.append((0, pos_r))
-                     clip_start = pos_r
-                 else:
-                     sil_tags.append((min(pos_l, pos), max(pos_r, pos)))
-                     clip_start = max(pos_r, pos)
-             else:
-                 pos_l = (
-                     rms_list[
-                         silence_start : silence_start + self.max_sil_kept + 1
-                     ].argmin()
-                     + silence_start
-                 )
-                 pos_r = (
-                     rms_list[i - self.max_sil_kept : i + 1].argmin()
-                     + i
-                     - self.max_sil_kept
-                 )
-                 if silence_start == 0:
-                     sil_tags.append((0, pos_r))
-                 else:
-                     sil_tags.append((pos_l, pos_r))
-                 clip_start = pos_r
-             silence_start = None
-         # Deal with trailing silence.
-         total_frames = rms_list.shape[0]
-         if (
-             silence_start is not None
-             and total_frames - silence_start >= self.min_interval
-         ):
-             silence_end = min(total_frames, silence_start + self.max_sil_kept)
-             pos = rms_list[silence_start : silence_end + 1].argmin() + silence_start
-             sil_tags.append((pos, total_frames + 1))
-         # Apply and return slices.
-         if len(sil_tags) == 0:
-             return [waveform]
-         else:
-             chunks = []
-             if sil_tags[0][0] > 0:
-                 chunks.append(self._apply_slice(waveform, 0, sil_tags[0][0]))
-             for i in range(len(sil_tags) - 1):
-                 chunks.append(
-                     self._apply_slice(waveform, sil_tags[i][1], sil_tags[i + 1][0])
-                 )
-             if sil_tags[-1][1] < total_frames:
-                 chunks.append(
-                     self._apply_slice(waveform, sil_tags[-1][1], total_frames)
-                 )
-             return chunks
-
-
- def main():
-     import os.path
-     from argparse import ArgumentParser
-
-     import librosa
-     import soundfile
-
-     parser = ArgumentParser()
-     parser.add_argument("audio", type=str, help="The audio to be sliced")
-     parser.add_argument(
-         "--out", type=str, help="Output directory of the sliced audio clips"
-     )
-     parser.add_argument(
-         "--db_thresh",
-         type=float,
-         required=False,
-         default=-40,
-         help="The dB threshold for silence detection",
-     )
-     parser.add_argument(
-         "--min_length",
-         type=int,
-         required=False,
-         default=5000,
-         help="The minimum milliseconds required for each sliced audio clip",
-     )
-     parser.add_argument(
-         "--min_interval",
-         type=int,
-         required=False,
-         default=300,
-         help="The minimum milliseconds for a silence part to be sliced",
-     )
-     parser.add_argument(
-         "--hop_size",
-         type=int,
-         required=False,
-         default=10,
-         help="Frame length in milliseconds",
-     )
-     parser.add_argument(
-         "--max_sil_kept",
-         type=int,
-         required=False,
-         default=500,
-         help="The maximum silence length kept around the sliced clip, presented in milliseconds",
-     )
-     args = parser.parse_args()
-     out = args.out
-     if out is None:
-         out = os.path.dirname(os.path.abspath(args.audio))
-     audio, sr = librosa.load(args.audio, sr=None, mono=False)
-     slicer = Slicer(
-         sr=sr,
-         threshold=args.db_thresh,
-         min_length=args.min_length,
-         min_interval=args.min_interval,
-         hop_size=args.hop_size,
-         max_sil_kept=args.max_sil_kept,
-     )
-     chunks = slicer.slice(audio)
-     if not os.path.exists(out):
-         os.makedirs(out)
-     for i, chunk in enumerate(chunks):
-         if len(chunk.shape) > 1:
-             chunk = chunk.T
-         soundfile.write(
-             os.path.join(
-                 out,
-                 f"%s_%d.wav"
-                 % (os.path.basename(args.audio).rsplit(".", maxsplit=1)[0], i),
-             ),
-             chunk,
-             sr,
-         )
-
-
- if __name__ == "__main__":
-     main()
 
spaces/Benson/text-generation/Examples/Descargar El Zombie Caminar 1 Mod Apk.md DELETED
@@ -1,47 +0,0 @@
1
-
2
- <h1>Descargar El Zombie Caminar 1 Mod APK: Un divertido y emocionante juego de zombies</h1>
3
- <p>Si eres un fan de los juegos de zombis, es posible que hayas oído hablar de The Walking Zombie, un popular juego de acción que te permite experimentar la diversión del combate en un apocalipsis zombi. Pero ¿sabías que se puede descargar el zombi caminar 1 mod APK y disfrutar del juego con más características y beneficios? En este artículo, le diremos todo lo que necesita saber sobre The Walking Zombie 1 mod APK, incluyendo lo que es, por qué debe descargarlo, qué características ofrece, y cómo descargarlo e instalarlo en su dispositivo. Así que, vamos a empezar! </p>
4
- <h2>Introducción</h2>
5
- <p>Los zombies son uno de los temas más populares en los videojuegos, ya que proporcionan una experiencia emocionante y desafiante para los jugadores. Hay muchos juegos de zombies disponibles en el mercado, pero no todos ellos valen la pena su tiempo y atención. Algunos de ellos son aburridos, repetitivos o están mal diseñados. Por eso necesitas encontrar un juego de zombies divertido, emocionante y bien hecho. Uno de estos juegos es The Walking Zombie, un juego que ha recibido críticas positivas de críticos y jugadores por igual. </p>
6
- <h2>descargar el zombie caminar 1 mod apk</h2><br /><p><b><b>Download File</b> >>> <a href="https://bltlly.com/2v6KTF">https://bltlly.com/2v6KTF</a></b></p><br /><br />
7
- <h3>¿Qué es el zombi que camina 1?</h3>
8
- <p>The Walking Zombie 1 es un juego de acción desarrollado por Rodinia Games y lanzado en 2016. Es uno de los mejores juegos de zombies en Google Play, destaca por sus gráficos en 3D de alta resolución y efectos de sonido. El juego tiene lugar en un apocalipsis zombi, donde tienes que eliminar hordas de zombies en tres escenarios diferentes. Puedes usar tres armas diferentes: pistola, escopeta y ametralladora. Cada arma tiene sus propias ventajas y desventajas, como el número de balas por clip y el tiempo de recarga. Tienes que ser estratégico y cuidadoso al elegir tu arma y manejar tu munición. </p>
9
- <h3>Why download The Walking Zombie 1 mod APK?</h3>
10
-
11
- <p>The Walking Zombie 1 mod APK is a modified version of the original game that gives you more features and benefits. For example, you can get unlimited money and ammunition, which means you can buy any weapon you want and never run out of bullets. You can also enjoy the game without ads or interruptions. In addition, the mod APK can make the game easier and more fun for you, since you can kill zombies faster and survive longer.</p>
12
- <h2>Features of The Walking Zombie 1 mod APK</h2>
13
- <p>As we mentioned before, The Walking Zombie 1 mod APK offers many features that make the game better than the original version. Here are some of the main features you can enjoy when you download The Walking Zombie 1 mod APK:</p>
14
- <h3>High-resolution 3D graphics and sound effects</h3>
15
- <p>The Walking Zombie 1 mod APK retains the same high-quality graphics and sound effects as the original game. You can admire the realistic and detailed environments, such as the cemetery, the house of horror, and the ruined city. You can also hear the creepy, immersive sounds of zombies groaning, guns firing, and explosions going off. The graphics and sound effects create a spooky, thrilling atmosphere that will keep you on edge.</p> <h3>Three different weapons to choose from</h3>
16
- <p>The Walking Zombie 1 mod APK gives you access to three different weapons you can use to fight the zombies. You can choose between a pistol, a shotgun, and a machine gun. Each weapon has its own characteristics, such as damage, range, accuracy, and reload time. You can switch between the weapons depending on the situation and your preference. For example, you can use the pistol for long-range shots, the shotgun for close-range shots, and the machine gun for rapid bursts of fire.</p>
17
- <h3>Three different scenarios to survive in</h3>
18
-
19
- <h3>Unlimited money and ammunition</h3>
20
- <p>The Walking Zombie 1 mod APK gives you unlimited money and ammunition, which means you can buy any weapon you want and never run out of bullets. You do not have to watch ads or pay real money to get more resources. You can also upgrade your weapons to make them more powerful and effective. With unlimited money and ammunition, you can enjoy the game without limitations or frustrations.</p>
21
- <h2>How to download and install The Walking Zombie 1 mod APK</h2>
22
- <p>If you are interested in downloading The Walking Zombie 1 mod APK, you need to follow a few simple steps to ensure a smooth and safe installation. Here are the steps to follow:</p>
23
- <p></p>
24
- <h3>Step 1: Enable unknown sources on your device</h3>
25
- <p>Before you can install The Walking Zombie 1 mod APK, you need to enable unknown sources on your device. This will let you install apps from outside Google Play. To do this, go to your device settings, then security, then unknown sources. Turn the option on and confirm your choice.</p>
26
- <h3>Step 2: Download the mod APK file from a trusted source</h3>
27
- <p>Next, you need to download the mod APK file from a trusted source. There are many websites that offer The Walking Zombie 1 mod APK, but not all of them are reliable or safe. Some may contain viruses or malware that can damage your device or steal your data. That is why you need to be careful and choose a reputable website with positive reviews and feedback from other users. You can also scan the file with an antivirus app before opening it.</p>
28
- <h3>Step 3: Locate and install the mod APK file</h3>
29
- <p>After downloading the mod APK file, you need to locate it on your device and install it. You can use a file manager app to find the file in your downloads folder or wherever you saved it. Then, tap the file and follow the on-screen instructions to install it.</p>
30
- <h3>Step 4: Enjoy the game</h3>
31
-
32
- <h2>Conclusion</h2>
33
- <p>The Walking Zombie 1 is one of the best zombie games on Google Play, but it can be even better with The Walking Zombie 1 mod APK. The mod APK gives you unlimited money and ammunition, access to all weapons, no ads, and more fun and excitement. You can download The Walking Zombie 1 mod APK from a trusted source and install it on your device easily and safely. If you are looking for a fun and exciting zombie game, The Walking Zombie 1 mod APK is the perfect choice for you.</p>
34
- <h2>Frequently asked questions</h2>
35
- <p>Here are some frequently asked questions about The Walking Zombie 1 mod APK:</p>
36
- <h4>Q: Is The Walking Zombie 1 mod APK safe?</h4>
37
- <p>A: Yes, The Walking Zombie 1 mod APK is safe if you download it from a trusted source and scan it with an antivirus app before installing it. However, you should always be careful when downloading any mod APK from unknown sources, as they could contain viruses or malware that can damage your device or steal your data.</p>
38
- <h4>Q: Do I need to root my device to install The Walking Zombie 1 mod APK?</h4>
39
- <p>A: No, you do not need to root your device to install The Walking Zombie 1 mod APK. You just need to enable unknown sources in your device settings and follow the steps mentioned above.</p>
40
- <h4>Q: What is the difference between The Walking Zombie 1 and The Walking Zombie 2?</h4>
41
- <p>A: The Walking Zombie 1 and The Walking Zombie 2 are both zombie games developed by Rodinia Games, but they have some differences. The Walking Zombie 1 is a first-person shooter that focuses on combat and survival across three scenarios. The Walking Zombie 2 is a role-playing game that follows a story and lets you customize your character, explore an open world, and interact with other survivors.</p>
42
- <h4>Q: How can I get more money and ammunition in The Walking Zombie 1?</h4>
43
-
44
- <h4>Q: Can I play The Walking Zombie 1 offline?</h4>
45
- <p>A: Yes, you can play The Walking Zombie 1 without an internet connection. However, you may need to connect to the internet once to verify the game's license and download additional data.</p>
46
- <br />
47
- <br />
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/tags.py DELETED
@@ -1,487 +0,0 @@
1
- # This file is dual licensed under the terms of the Apache License, Version
2
- # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3
- # for complete details.
4
-
5
- import logging
6
- import platform
7
- import sys
8
- import sysconfig
9
- from importlib.machinery import EXTENSION_SUFFIXES
10
- from typing import (
11
- Dict,
12
- FrozenSet,
13
- Iterable,
14
- Iterator,
15
- List,
16
- Optional,
17
- Sequence,
18
- Tuple,
19
- Union,
20
- cast,
21
- )
22
-
23
- from . import _manylinux, _musllinux
24
-
25
- logger = logging.getLogger(__name__)
26
-
27
- PythonVersion = Sequence[int]
28
- MacVersion = Tuple[int, int]
29
-
30
- INTERPRETER_SHORT_NAMES: Dict[str, str] = {
31
- "python": "py", # Generic.
32
- "cpython": "cp",
33
- "pypy": "pp",
34
- "ironpython": "ip",
35
- "jython": "jy",
36
- }
37
-
38
-
39
- _32_BIT_INTERPRETER = sys.maxsize <= 2 ** 32
40
-
41
-
42
- class Tag:
43
- """
44
- A representation of the tag triple for a wheel.
45
-
46
- Instances are considered immutable and thus are hashable. Equality checking
47
- is also supported.
48
- """
49
-
50
- __slots__ = ["_interpreter", "_abi", "_platform", "_hash"]
51
-
52
- def __init__(self, interpreter: str, abi: str, platform: str) -> None:
53
- self._interpreter = interpreter.lower()
54
- self._abi = abi.lower()
55
- self._platform = platform.lower()
56
- # The __hash__ of every single element in a Set[Tag] will be evaluated each time
57
- # that a set calls its `.disjoint()` method, which may be called hundreds of
58
- # times when scanning a page of links for packages with tags matching that
59
- # Set[Tag]. Pre-computing the value here produces significant speedups for
60
- # downstream consumers.
61
- self._hash = hash((self._interpreter, self._abi, self._platform))
62
-
63
- @property
64
- def interpreter(self) -> str:
65
- return self._interpreter
66
-
67
- @property
68
- def abi(self) -> str:
69
- return self._abi
70
-
71
- @property
72
- def platform(self) -> str:
73
- return self._platform
74
-
75
- def __eq__(self, other: object) -> bool:
76
- if not isinstance(other, Tag):
77
- return NotImplemented
78
-
79
- return (
80
- (self._hash == other._hash) # Short-circuit ASAP for perf reasons.
81
- and (self._platform == other._platform)
82
- and (self._abi == other._abi)
83
- and (self._interpreter == other._interpreter)
84
- )
85
-
86
- def __hash__(self) -> int:
87
- return self._hash
88
-
89
- def __str__(self) -> str:
90
- return f"{self._interpreter}-{self._abi}-{self._platform}"
91
-
92
- def __repr__(self) -> str:
93
- return f"<{self} @ {id(self)}>"
94
-
95
-
96
- def parse_tag(tag: str) -> FrozenSet[Tag]:
97
- """
98
- Parses the provided tag (e.g. `py3-none-any`) into a frozenset of Tag instances.
99
-
100
- Returning a set is required due to the possibility that the tag is a
101
- compressed tag set.
102
- """
103
- tags = set()
104
- interpreters, abis, platforms = tag.split("-")
105
- for interpreter in interpreters.split("."):
106
- for abi in abis.split("."):
107
- for platform_ in platforms.split("."):
108
- tags.add(Tag(interpreter, abi, platform_))
109
- return frozenset(tags)
110
-
111
-
112
- def _get_config_var(name: str, warn: bool = False) -> Union[int, str, None]:
113
- value = sysconfig.get_config_var(name)
114
- if value is None and warn:
115
- logger.debug(
116
- "Config variable '%s' is unset, Python ABI tag may be incorrect", name
117
- )
118
- return value
119
-
120
-
121
- def _normalize_string(string: str) -> str:
122
- return string.replace(".", "_").replace("-", "_")
123
-
124
-
125
- def _abi3_applies(python_version: PythonVersion) -> bool:
126
- """
127
- Determine if the Python version supports abi3.
128
-
129
- PEP 384 was first implemented in Python 3.2.
130
- """
131
- return len(python_version) > 1 and tuple(python_version) >= (3, 2)
132
-
133
-
134
- def _cpython_abis(py_version: PythonVersion, warn: bool = False) -> List[str]:
135
- py_version = tuple(py_version) # To allow for version comparison.
136
- abis = []
137
- version = _version_nodot(py_version[:2])
138
- debug = pymalloc = ucs4 = ""
139
- with_debug = _get_config_var("Py_DEBUG", warn)
140
- has_refcount = hasattr(sys, "gettotalrefcount")
141
- # Windows doesn't set Py_DEBUG, so checking for support of debug-compiled
142
- # extension modules is the best option.
143
- # https://github.com/pypa/pip/issues/3383#issuecomment-173267692
144
- has_ext = "_d.pyd" in EXTENSION_SUFFIXES
145
- if with_debug or (with_debug is None and (has_refcount or has_ext)):
146
- debug = "d"
147
- if py_version < (3, 8):
148
- with_pymalloc = _get_config_var("WITH_PYMALLOC", warn)
149
- if with_pymalloc or with_pymalloc is None:
150
- pymalloc = "m"
151
- if py_version < (3, 3):
152
- unicode_size = _get_config_var("Py_UNICODE_SIZE", warn)
153
- if unicode_size == 4 or (
154
- unicode_size is None and sys.maxunicode == 0x10FFFF
155
- ):
156
- ucs4 = "u"
157
- elif debug:
158
- # Debug builds can also load "normal" extension modules.
159
- # We can also assume no UCS-4 or pymalloc requirement.
160
- abis.append(f"cp{version}")
161
- abis.insert(
162
- 0,
163
- "cp{version}{debug}{pymalloc}{ucs4}".format(
164
- version=version, debug=debug, pymalloc=pymalloc, ucs4=ucs4
165
- ),
166
- )
167
- return abis
168
-
169
-
170
- def cpython_tags(
171
- python_version: Optional[PythonVersion] = None,
172
- abis: Optional[Iterable[str]] = None,
173
- platforms: Optional[Iterable[str]] = None,
174
- *,
175
- warn: bool = False,
176
- ) -> Iterator[Tag]:
177
- """
178
- Yields the tags for a CPython interpreter.
179
-
180
- The tags consist of:
181
- - cp<python_version>-<abi>-<platform>
182
- - cp<python_version>-abi3-<platform>
183
- - cp<python_version>-none-<platform>
184
- - cp<less than python_version>-abi3-<platform> # Older Python versions down to 3.2.
185
-
186
- If python_version only specifies a major version then user-provided ABIs and
187
- the 'none' ABI tag will be used.
188
-
189
- If 'abi3' or 'none' are specified in 'abis' then they will be yielded at
190
- their normal position and not at the beginning.
191
- """
192
- if not python_version:
193
- python_version = sys.version_info[:2]
194
-
195
- interpreter = f"cp{_version_nodot(python_version[:2])}"
196
-
197
- if abis is None:
198
- if len(python_version) > 1:
199
- abis = _cpython_abis(python_version, warn)
200
- else:
201
- abis = []
202
- abis = list(abis)
203
- # 'abi3' and 'none' are explicitly handled later.
204
- for explicit_abi in ("abi3", "none"):
205
- try:
206
- abis.remove(explicit_abi)
207
- except ValueError:
208
- pass
209
-
210
- platforms = list(platforms or platform_tags())
211
- for abi in abis:
212
- for platform_ in platforms:
213
- yield Tag(interpreter, abi, platform_)
214
- if _abi3_applies(python_version):
215
- yield from (Tag(interpreter, "abi3", platform_) for platform_ in platforms)
216
- yield from (Tag(interpreter, "none", platform_) for platform_ in platforms)
217
-
218
- if _abi3_applies(python_version):
219
- for minor_version in range(python_version[1] - 1, 1, -1):
220
- for platform_ in platforms:
221
- interpreter = "cp{version}".format(
222
- version=_version_nodot((python_version[0], minor_version))
223
- )
224
- yield Tag(interpreter, "abi3", platform_)
225
-
226
-
227
- def _generic_abi() -> Iterator[str]:
228
- abi = sysconfig.get_config_var("SOABI")
229
- if abi:
230
- yield _normalize_string(abi)
231
-
232
-
233
- def generic_tags(
234
- interpreter: Optional[str] = None,
235
- abis: Optional[Iterable[str]] = None,
236
- platforms: Optional[Iterable[str]] = None,
237
- *,
238
- warn: bool = False,
239
- ) -> Iterator[Tag]:
240
- """
241
- Yields the tags for a generic interpreter.
242
-
243
- The tags consist of:
244
- - <interpreter>-<abi>-<platform>
245
-
246
- The "none" ABI will be added if it was not explicitly provided.
247
- """
248
- if not interpreter:
249
- interp_name = interpreter_name()
250
- interp_version = interpreter_version(warn=warn)
251
- interpreter = "".join([interp_name, interp_version])
252
- if abis is None:
253
- abis = _generic_abi()
254
- platforms = list(platforms or platform_tags())
255
- abis = list(abis)
256
- if "none" not in abis:
257
- abis.append("none")
258
- for abi in abis:
259
- for platform_ in platforms:
260
- yield Tag(interpreter, abi, platform_)
261
-
262
-
263
- def _py_interpreter_range(py_version: PythonVersion) -> Iterator[str]:
264
- """
265
- Yields Python versions in descending order.
266
-
267
- After the latest version, the major-only version will be yielded, and then
268
- all previous versions of that major version.
269
- """
270
- if len(py_version) > 1:
271
- yield f"py{_version_nodot(py_version[:2])}"
272
- yield f"py{py_version[0]}"
273
- if len(py_version) > 1:
274
- for minor in range(py_version[1] - 1, -1, -1):
275
- yield f"py{_version_nodot((py_version[0], minor))}"
276
-
277
-
278
- def compatible_tags(
279
- python_version: Optional[PythonVersion] = None,
280
- interpreter: Optional[str] = None,
281
- platforms: Optional[Iterable[str]] = None,
282
- ) -> Iterator[Tag]:
283
- """
284
- Yields the sequence of tags that are compatible with a specific version of Python.
285
-
286
- The tags consist of:
287
- - py*-none-<platform>
288
- - <interpreter>-none-any # ... if `interpreter` is provided.
289
- - py*-none-any
290
- """
291
- if not python_version:
292
- python_version = sys.version_info[:2]
293
- platforms = list(platforms or platform_tags())
294
- for version in _py_interpreter_range(python_version):
295
- for platform_ in platforms:
296
- yield Tag(version, "none", platform_)
297
- if interpreter:
298
- yield Tag(interpreter, "none", "any")
299
- for version in _py_interpreter_range(python_version):
300
- yield Tag(version, "none", "any")
301
-
302
-
303
- def _mac_arch(arch: str, is_32bit: bool = _32_BIT_INTERPRETER) -> str:
304
- if not is_32bit:
305
- return arch
306
-
307
- if arch.startswith("ppc"):
308
- return "ppc"
309
-
310
- return "i386"
311
-
312
-
313
- def _mac_binary_formats(version: MacVersion, cpu_arch: str) -> List[str]:
314
- formats = [cpu_arch]
315
- if cpu_arch == "x86_64":
316
- if version < (10, 4):
317
- return []
318
- formats.extend(["intel", "fat64", "fat32"])
319
-
320
- elif cpu_arch == "i386":
321
- if version < (10, 4):
322
- return []
323
- formats.extend(["intel", "fat32", "fat"])
324
-
325
- elif cpu_arch == "ppc64":
326
- # TODO: Need to care about 32-bit PPC for ppc64 through 10.2?
327
- if version > (10, 5) or version < (10, 4):
328
- return []
329
- formats.append("fat64")
330
-
331
- elif cpu_arch == "ppc":
332
- if version > (10, 6):
333
- return []
334
- formats.extend(["fat32", "fat"])
335
-
336
- if cpu_arch in {"arm64", "x86_64"}:
337
- formats.append("universal2")
338
-
339
- if cpu_arch in {"x86_64", "i386", "ppc64", "ppc", "intel"}:
340
- formats.append("universal")
341
-
342
- return formats
343
-
344
-
345
- def mac_platforms(
346
- version: Optional[MacVersion] = None, arch: Optional[str] = None
347
- ) -> Iterator[str]:
348
- """
349
- Yields the platform tags for a macOS system.
350
-
351
- The `version` parameter is a two-item tuple specifying the macOS version to
352
- generate platform tags for. The `arch` parameter is the CPU architecture to
353
- generate platform tags for. Both parameters default to the appropriate value
354
- for the current system.
355
- """
356
- version_str, _, cpu_arch = platform.mac_ver()
357
- if version is None:
358
- version = cast("MacVersion", tuple(map(int, version_str.split(".")[:2])))
359
- else:
360
- version = version
361
- if arch is None:
362
- arch = _mac_arch(cpu_arch)
363
- else:
364
- arch = arch
365
-
366
- if (10, 0) <= version and version < (11, 0):
367
- # Prior to Mac OS 11, each yearly release of Mac OS bumped the
368
- # "minor" version number. The major version was always 10.
369
- for minor_version in range(version[1], -1, -1):
370
- compat_version = 10, minor_version
371
- binary_formats = _mac_binary_formats(compat_version, arch)
372
- for binary_format in binary_formats:
373
- yield "macosx_{major}_{minor}_{binary_format}".format(
374
- major=10, minor=minor_version, binary_format=binary_format
375
- )
376
-
377
- if version >= (11, 0):
378
- # Starting with Mac OS 11, each yearly release bumps the major version
379
- # number. The minor versions are now the midyear updates.
380
- for major_version in range(version[0], 10, -1):
381
- compat_version = major_version, 0
382
- binary_formats = _mac_binary_formats(compat_version, arch)
383
- for binary_format in binary_formats:
384
- yield "macosx_{major}_{minor}_{binary_format}".format(
385
- major=major_version, minor=0, binary_format=binary_format
386
- )
387
-
388
- if version >= (11, 0):
389
- # Mac OS 11 on x86_64 is compatible with binaries from previous releases.
390
- # Arm64 support was introduced in 11.0, so no Arm binaries from previous
391
- # releases exist.
392
- #
393
- # However, the "universal2" binary format can have a
394
- # macOS version earlier than 11.0 when the x86_64 part of the binary supports
395
- # that version of macOS.
396
- if arch == "x86_64":
397
- for minor_version in range(16, 3, -1):
398
- compat_version = 10, minor_version
399
- binary_formats = _mac_binary_formats(compat_version, arch)
400
- for binary_format in binary_formats:
401
- yield "macosx_{major}_{minor}_{binary_format}".format(
402
- major=compat_version[0],
403
- minor=compat_version[1],
404
- binary_format=binary_format,
405
- )
406
- else:
407
- for minor_version in range(16, 3, -1):
408
- compat_version = 10, minor_version
409
- binary_format = "universal2"
410
- yield "macosx_{major}_{minor}_{binary_format}".format(
411
- major=compat_version[0],
412
- minor=compat_version[1],
413
- binary_format=binary_format,
414
- )
415
-
416
-
417
- def _linux_platforms(is_32bit: bool = _32_BIT_INTERPRETER) -> Iterator[str]:
418
- linux = _normalize_string(sysconfig.get_platform())
419
- if is_32bit:
420
- if linux == "linux_x86_64":
421
- linux = "linux_i686"
422
- elif linux == "linux_aarch64":
423
- linux = "linux_armv7l"
424
- _, arch = linux.split("_", 1)
425
- yield from _manylinux.platform_tags(linux, arch)
426
- yield from _musllinux.platform_tags(arch)
427
- yield linux
428
-
429
-
430
- def _generic_platforms() -> Iterator[str]:
431
- yield _normalize_string(sysconfig.get_platform())
432
-
433
-
434
- def platform_tags() -> Iterator[str]:
435
- """
436
- Provides the platform tags for this installation.
437
- """
438
- if platform.system() == "Darwin":
439
- return mac_platforms()
440
- elif platform.system() == "Linux":
441
- return _linux_platforms()
442
- else:
443
- return _generic_platforms()
444
-
445
-
446
- def interpreter_name() -> str:
447
- """
448
- Returns the name of the running interpreter.
449
- """
450
- name = sys.implementation.name
451
- return INTERPRETER_SHORT_NAMES.get(name) or name
452
-
453
-
454
- def interpreter_version(*, warn: bool = False) -> str:
455
- """
456
- Returns the version of the running interpreter.
457
- """
458
- version = _get_config_var("py_version_nodot", warn=warn)
459
- if version:
460
- version = str(version)
461
- else:
462
- version = _version_nodot(sys.version_info[:2])
463
- return version
464
-
465
-
466
- def _version_nodot(version: PythonVersion) -> str:
467
- return "".join(map(str, version))
468
-
469
-
470
- def sys_tags(*, warn: bool = False) -> Iterator[Tag]:
471
- """
472
- Returns the sequence of tag triples for the running interpreter.
473
-
474
- The order of the sequence corresponds to priority order for the
475
- interpreter, from most to least important.
476
- """
477
-
478
- interp_name = interpreter_name()
479
- if interp_name == "cp":
480
- yield from cpython_tags(warn=warn)
481
- else:
482
- yield from generic_tags()
483
-
484
- if interp_name == "pp":
485
- yield from compatible_tags(interpreter="pp3")
486
- else:
487
- yield from compatible_tags()
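
The `parse_tag` helper in the `tags.py` listing above expands a compressed wheel tag (dotted interpreters, ABIs, and platforms) into the full cross product of tag triples. A minimal self-contained sketch of that expansion, using plain tuples instead of the module's `Tag` class:

```python
from itertools import product

def expand_compressed_tag(tag: str) -> frozenset:
    """Expand a compressed wheel tag like 'py2.py3-none-any' into the
    frozenset of (interpreter, abi, platform) triples it denotes."""
    interpreters, abis, platforms = tag.split("-")
    # Each dash-separated component may itself be a dot-separated set;
    # the tag denotes the cross product of those sets.
    return frozenset(
        product(interpreters.split("."), abis.split("."), platforms.split("."))
    )

# A compressed tag with two interpreters expands to two triples:
# ('py2', 'none', 'any') and ('py3', 'none', 'any').
triples = expand_compressed_tag("py2.py3-none-any")
```

Returning a frozenset mirrors the real `parse_tag`: the expansion is an unordered set of triples, and hashability lets callers intersect it with the supported-tag set produced by `sys_tags()`.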
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/engine/defaults.py DELETED
@@ -1,543 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
3
-
4
- """
5
- This file contains components with some default boilerplate logic user may need
6
- in training / testing. They will not work for everyone, but many users may find them useful.
7
-
8
- The behavior of functions/classes in this file is subject to change,
9
- since they are meant to represent the "common default behavior" people need in their projects.
10
- """
11
-
12
- import argparse
13
- import logging
14
- import os
15
- import sys
16
- from collections import OrderedDict
17
- import torch
18
- from fvcore.common.file_io import PathManager
19
- from fvcore.nn.precise_bn import get_bn_modules
20
- from torch.nn.parallel import DistributedDataParallel
21
-
22
- import detectron2.data.transforms as T
23
- from detectron2.checkpoint import DetectionCheckpointer
24
- from detectron2.data import (
25
- MetadataCatalog,
26
- build_detection_test_loader,
27
- build_detection_train_loader,
28
- )
29
- from detectron2.evaluation import (
30
- DatasetEvaluator,
31
- inference_on_dataset,
32
- print_csv_format,
33
- verify_results,
34
- )
35
- from detectron2.modeling import build_model
36
- from detectron2.solver import build_lr_scheduler, build_optimizer
37
- from detectron2.utils import comm
38
- from detectron2.utils.collect_env import collect_env_info
39
- from detectron2.utils.env import seed_all_rng
40
- from detectron2.utils.events import CommonMetricPrinter, JSONWriter, TensorboardXWriter
41
- from detectron2.utils.logger import setup_logger
42
-
43
- from . import hooks
44
- from .train_loop import SimpleTrainer
45
-
46
- __all__ = [
47
- "default_argument_parser",
48
- "default_setup",
49
- "DefaultPredictor",
50
- "DefaultTrainer",
51
- ]
52
-
53
-
54
- def default_argument_parser():
55
- """
56
- Create a parser with some common arguments used by detectron2 users.
57
-
58
- Returns:
59
- argparse.ArgumentParser:
60
- """
61
- parser = argparse.ArgumentParser(description="Detectron2 Training")
62
- parser.add_argument(
63
- "--config-file", default="", metavar="FILE", help="path to config file"
64
- )
65
- parser.add_argument(
66
- "--resume",
67
- action="store_true",
68
- help="whether to attempt to resume from the checkpoint directory",
69
- )
70
- parser.add_argument(
71
- "--eval-only", action="store_true", help="perform evaluation only"
72
- )
73
- parser.add_argument(
74
- "--num-gpus", type=int, default=1, help="number of gpus *per machine*"
75
- )
76
- parser.add_argument("--num-machines", type=int, default=1)
77
- parser.add_argument(
78
- "--machine-rank",
79
- type=int,
80
- default=0,
81
- help="the rank of this machine (unique per machine)",
82
- )
83
-
84
- # PyTorch still may leave orphan processes in multi-gpu training.
85
- # Therefore we use a deterministic way to obtain port,
86
- # so that users are aware of orphan processes by seeing the port occupied.
87
- port = (
88
- 2 ** 15
89
- + 2 ** 14
90
- + hash(os.getuid() if sys.platform != "win32" else 1) % 2 ** 14
91
- )
92
- parser.add_argument("--dist-url", default="tcp://127.0.0.1:{}".format(port))
93
- parser.add_argument(
94
- "opts",
95
- help="Modify config options using the command-line",
96
- default=None,
97
- nargs=argparse.REMAINDER,
98
- )
99
- return parser
100
-
101
-
102
- def default_setup(cfg, args):
103
- """
104
- Perform some basic common setups at the beginning of a job, including:
105
-
106
- 1. Set up the detectron2 logger
107
- 2. Log basic information about environment, cmdline arguments, and config
108
- 3. Backup the config to the output directory
109
-
110
- Args:
111
- cfg (CfgNode): the full config to be used
112
- args (argparse.NameSpace): the command line arguments to be logged
113
- """
114
- output_dir = cfg.OUTPUT_DIR
115
- if comm.is_main_process() and output_dir:
116
- PathManager.mkdirs(output_dir)
117
-
118
- rank = comm.get_rank()
119
- setup_logger(output_dir, distributed_rank=rank, name="fvcore")
120
- logger = setup_logger(output_dir, distributed_rank=rank)
121
-
122
- logger.info(
123
- "Rank of current process: {}. World size: {}".format(
124
- rank, comm.get_world_size()
125
- )
126
- )
127
- logger.info("Environment info:\n" + collect_env_info())
128
-
129
- logger.info("Command line arguments: " + str(args))
130
- if hasattr(args, "config_file") and args.config_file != "":
131
- logger.info(
132
- "Contents of args.config_file={}:\n{}".format(
133
- args.config_file, PathManager.open(args.config_file, "r").read()
134
- )
135
- )
136
-
137
- logger.info("Running with full config:\n{}".format(cfg))
138
- if comm.is_main_process() and output_dir:
139
- # Note: some of our scripts may expect the existence of
140
- # config.yaml in output directory
141
- path = os.path.join(output_dir, "config.yaml")
142
- with PathManager.open(path, "w") as f:
143
- f.write(cfg.dump())
144
- logger.info("Full config saved to {}".format(path))
145
-
146
- # make sure each worker has a different, yet deterministic seed if specified
147
- seed_all_rng(None if cfg.SEED < 0 else cfg.SEED + rank)
148
-
149
- # cudnn benchmark has large overhead. It shouldn't be used considering the small size of
150
- # typical validation set.
151
- if not (hasattr(args, "eval_only") and args.eval_only):
152
- torch.backends.cudnn.benchmark = cfg.CUDNN_BENCHMARK
153
-
154
-
155
- class DefaultPredictor:
156
- """
157
- Create a simple end-to-end predictor with the given config that runs on
158
- single device for a single input image.
159
-
160
- Compared to using the model directly, this class does the following additions:
161
-
162
- 1. Load checkpoint from `cfg.MODEL.WEIGHTS`.
163
- 2. Always take BGR image as the input and apply conversion defined by `cfg.INPUT.FORMAT`.
164
- 3. Apply resizing defined by `cfg.INPUT.{MIN,MAX}_SIZE_TEST`.
165
- 4. Take one input image and produce a single output, instead of a batch.
166
-
167
- If you'd like to do anything more fancy, please refer to its source code
168
- as examples to build and use the model manually.
169
-
170
- Attributes:
171
- metadata (Metadata): the metadata of the underlying dataset, obtained from
172
- cfg.DATASETS.TEST.
173
-
174
- Examples:
175
-
176
- .. code-block:: python
177
-
178
- pred = DefaultPredictor(cfg)
179
- inputs = cv2.imread("input.jpg")
180
- outputs = pred(inputs)
181
- """
182
-
183
- def __init__(self, cfg):
184
- self.cfg = cfg.clone() # cfg can be modified by model
185
- self.model = build_model(self.cfg)
186
- self.model.eval()
187
- self.metadata = MetadataCatalog.get(cfg.DATASETS.TEST[0])
188
-
189
- checkpointer = DetectionCheckpointer(self.model)
190
- checkpointer.load(cfg.MODEL.WEIGHTS)
191
-
192
- self.transform_gen = T.ResizeShortestEdge(
193
- [cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST], cfg.INPUT.MAX_SIZE_TEST
194
- )
195
-
196
- self.input_format = cfg.INPUT.FORMAT
197
- assert self.input_format in ["RGB", "BGR"], self.input_format
198
-
199
- def __call__(self, original_image):
200
- """
201
- Args:
202
- original_image (np.ndarray): an image of shape (H, W, C) (in BGR order).
203
-
204
- Returns:
205
- predictions (dict):
206
- the output of the model for one image only.
207
- See :doc:`/tutorials/models` for details about the format.
208
- """
209
- with torch.no_grad(): # https://github.com/sphinx-doc/sphinx/issues/4258
210
- # Apply pre-processing to image.
211
- if self.input_format == "RGB":
212
- # whether the model expects BGR inputs or RGB
213
- original_image = original_image[:, :, ::-1]
214
- height, width = original_image.shape[:2]
215
- image = self.transform_gen.get_transform(original_image).apply_image(
216
- original_image
217
- )
218
- image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1))
219
-
220
- inputs = {"image": image, "height": height, "width": width}
221
- predictions, box_features = self.model([inputs])
222
- predictions = predictions[0]
223
- return predictions, box_features
-
-
- class DefaultTrainer(SimpleTrainer):
-     """
-     A trainer with default training logic. Compared to `SimpleTrainer`, it
-     contains the following logic in addition:
-
-     1. Create model, optimizer, scheduler, dataloader from the given config.
-     2. Load a checkpoint or `cfg.MODEL.WEIGHTS`, if it exists, when
-        `resume_or_load` is called.
-     3. Register a few common hooks.
-
-     It is created to simplify the **standard model training workflow** and reduce code boilerplate
-     for users who only need the standard training workflow, with standard features.
-     This class therefore makes *many assumptions* about your training logic that
-     may easily become invalid in new research. In fact, any assumptions beyond those made in
-     :class:`SimpleTrainer` are too much for research.
-
-     The code of this class has been annotated with the restrictive assumptions it makes.
-     When they do not work for you, you're encouraged to:
-
-     1. Overwrite methods of this class, OR:
-     2. Use :class:`SimpleTrainer`, which only does minimal SGD training and
-        nothing else. You can then add your own hooks if needed. OR:
-     3. Write your own training loop similar to `tools/plain_train_net.py`.
-
-     Also note that the behavior of this class, like other functions/classes in
-     this file, is not stable, since it is meant to represent the "common default behavior".
-     It is only guaranteed to work well with the standard models and training workflow in detectron2.
-     To obtain more stable behavior, write your own training logic with other public APIs.
-
-     Examples:
-
-     .. code-block:: python
-
-         trainer = DefaultTrainer(cfg)
-         trainer.resume_or_load()  # load last checkpoint or MODEL.WEIGHTS
-         trainer.train()
-
-     Attributes:
-         scheduler:
-         checkpointer (DetectionCheckpointer):
-         cfg (CfgNode):
-     """
-
-     def __init__(self, cfg):
-         """
-         Args:
-             cfg (CfgNode):
-         """
-         logger = logging.getLogger("detectron2")
-         if not logger.isEnabledFor(logging.INFO):  # setup_logger is not called for d2
-             setup_logger()
-         # Assume these objects must be constructed in this order.
-         model = self.build_model(cfg)
-         optimizer = self.build_optimizer(cfg, model)
-         data_loader = self.build_train_loader(cfg)
-
-         # For training, wrap with DDP. But don't need this for inference.
-         if comm.get_world_size() > 1:
-             model = DistributedDataParallel(
-                 model, device_ids=[comm.get_local_rank()], broadcast_buffers=False
-             )
-         super().__init__(model, data_loader, optimizer)
-
-         self.scheduler = self.build_lr_scheduler(cfg, optimizer)
-         # Assume no other objects need to be checkpointed.
-         # We can later make it checkpoint the stateful hooks.
-         self.checkpointer = DetectionCheckpointer(
-             # Assume you want to save checkpoints together with logs/statistics
-             model,
-             cfg.OUTPUT_DIR,
-             optimizer=optimizer,
-             scheduler=self.scheduler,
-         )
-         self.start_iter = 0
-         self.max_iter = cfg.SOLVER.MAX_ITER
-         self.cfg = cfg
-
-         self.register_hooks(self.build_hooks())
-
-     def resume_or_load(self, resume=True):
-         """
-         If `resume==True` and a last checkpoint exists, resume from it and load all
-         checkpointables (e.g. optimizer and scheduler).
-
-         Otherwise, load the model specified by the config (skip all checkpointables).
-
-         Args:
-             resume (bool): whether to attempt to resume from the last checkpoint
-         """
-         checkpoint = self.checkpointer.resume_or_load(
-             self.cfg.MODEL.WEIGHTS, resume=resume
-         )
-         self.start_iter = checkpoint.get("iteration", -1) if resume else -1
-         # The checkpoint stores the training iteration that just finished, thus we start
-         # at the next iteration (or iter zero if there's no checkpoint).
-         self.start_iter += 1
-
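The resume logic above hinges on an off-by-one detail: the checkpoint stores the iteration that just finished, so training must restart at the next one. A minimal standalone sketch of that arithmetic (the helper name `compute_start_iter` is ours, not detectron2's):

```python
def compute_start_iter(checkpoint: dict, resume: bool) -> int:
    """Mirror DefaultTrainer.resume_or_load's start-iteration bookkeeping.

    The checkpoint records the iteration that just finished; when resuming
    we continue at the next one, otherwise we start from iteration 0.
    """
    start_iter = checkpoint.get("iteration", -1) if resume else -1
    return start_iter + 1


# Resuming a checkpoint saved after iteration 99 continues at iteration 100.
print(compute_start_iter({"iteration": 99}, resume=True))   # 100
# A fresh run (or a checkpoint without an "iteration" key) starts at 0.
print(compute_start_iter({}, resume=True))                  # 0
print(compute_start_iter({"iteration": 99}, resume=False))  # 0
```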
-     def build_hooks(self):
-         """
-         Build a list of default hooks, including timing, evaluation,
-         checkpointing, lr scheduling, precise BN, writing events.
-
-         Returns:
-             list[HookBase]:
-         """
-         cfg = self.cfg.clone()
-         cfg.defrost()
-         cfg.DATALOADER.NUM_WORKERS = 0  # save some memory and time for PreciseBN
-
-         ret = [
-             hooks.IterationTimer(),
-             hooks.LRScheduler(self.optimizer, self.scheduler),
-             hooks.PreciseBN(
-                 # Run at the same freq as (but before) evaluation.
-                 cfg.TEST.EVAL_PERIOD,
-                 self.model,
-                 # Build a new data loader to not affect training
-                 self.build_train_loader(cfg),
-                 cfg.TEST.PRECISE_BN.NUM_ITER,
-             )
-             if cfg.TEST.PRECISE_BN.ENABLED and get_bn_modules(self.model)
-             else None,
-         ]
-
-         # Do PreciseBN before checkpointer, because it updates the model and needs to
-         # be saved by checkpointer.
-         # This is not always the best: if checkpointing has a different frequency,
-         # some checkpoints may have more precise statistics than others.
-         if comm.is_main_process():
-             ret.append(
-                 hooks.PeriodicCheckpointer(
-                     self.checkpointer, cfg.SOLVER.CHECKPOINT_PERIOD
-                 )
-             )
-
-         def test_and_save_results():
-             self._last_eval_results = self.test(self.cfg, self.model)
-             return self._last_eval_results
-
-         # Do evaluation after checkpointer, because then if it fails,
-         # we can use the saved checkpoint to debug.
-         ret.append(hooks.EvalHook(cfg.TEST.EVAL_PERIOD, test_and_save_results))
-
-         if comm.is_main_process():
-             # run writers in the end, so that evaluation metrics are written
-             ret.append(hooks.PeriodicWriter(self.build_writers(), period=20))
-         return ret
-
-     def build_writers(self):
-         """
-         Build a list of writers to be used. By default it contains
-         writers that write metrics to the screen,
-         a json file, and a tensorboard event file respectively.
-         If you'd like a different list of writers, you can overwrite it in
-         your trainer.
-
-         Returns:
-             list[EventWriter]: a list of :class:`EventWriter` objects.
-
-         It is now implemented by:
-
-         .. code-block:: python
-
-             return [
-                 CommonMetricPrinter(self.max_iter),
-                 JSONWriter(os.path.join(self.cfg.OUTPUT_DIR, "metrics.json")),
-                 TensorboardXWriter(self.cfg.OUTPUT_DIR),
-             ]
-
-         """
-         # Here the default print/log frequency of each writer is used.
-         return [
-             # It may not always print what you want to see, since it prints "common" metrics only.
-             CommonMetricPrinter(self.max_iter),
-             JSONWriter(os.path.join(self.cfg.OUTPUT_DIR, "metrics.json")),
-             TensorboardXWriter(self.cfg.OUTPUT_DIR),
-         ]
-
-     def train(self):
-         """
-         Run training.
-
-         Returns:
-             OrderedDict of results, if evaluation is enabled. Otherwise None.
-         """
-         super().train(self.start_iter, self.max_iter)
-         if len(self.cfg.TEST.EXPECTED_RESULTS) and comm.is_main_process():
-             assert hasattr(
-                 self, "_last_eval_results"
-             ), "No evaluation results obtained during training!"
-             verify_results(self.cfg, self._last_eval_results)
-             return self._last_eval_results
-
-     @classmethod
-     def build_model(cls, cfg):
-         """
-         Returns:
-             torch.nn.Module:
-
-         It now calls :func:`detectron2.modeling.build_model`.
-         Overwrite it if you'd like a different model.
-         """
-         model = build_model(cfg)
-         logger = logging.getLogger(__name__)
-         logger.info("Model:\n{}".format(model))
-         return model
-
-     @classmethod
-     def build_optimizer(cls, cfg, model):
-         """
-         Returns:
-             torch.optim.Optimizer:
-
-         It now calls :func:`detectron2.solver.build_optimizer`.
-         Overwrite it if you'd like a different optimizer.
-         """
-         return build_optimizer(cfg, model)
-
-     @classmethod
-     def build_lr_scheduler(cls, cfg, optimizer):
-         """
-         It now calls :func:`detectron2.solver.build_lr_scheduler`.
-         Overwrite it if you'd like a different scheduler.
-         """
-         return build_lr_scheduler(cfg, optimizer)
-
-     @classmethod
-     def build_train_loader(cls, cfg):
-         """
-         Returns:
-             iterable
-
-         It now calls :func:`detectron2.data.build_detection_train_loader`.
-         Overwrite it if you'd like a different data loader.
-         """
-         return build_detection_train_loader(cfg)
-
-     @classmethod
-     def build_test_loader(cls, cfg, dataset_name):
-         """
-         Returns:
-             iterable
-
-         It now calls :func:`detectron2.data.build_detection_test_loader`.
-         Overwrite it if you'd like a different data loader.
-         """
-         return build_detection_test_loader(cfg, dataset_name)
-
-     @classmethod
-     def build_evaluator(cls, cfg, dataset_name):
-         """
-         Returns:
-             DatasetEvaluator or None
-
-         It is not implemented by default.
-         """
-         raise NotImplementedError(
-             """
- If you want DefaultTrainer to automatically run evaluation,
- please implement `build_evaluator()` in subclasses (see train_net.py for example).
- Alternatively, you can call evaluation functions yourself (see Colab balloon tutorial for example).
- """
-         )
-
-     @classmethod
-     def test(cls, cfg, model, evaluators=None):
-         """
-         Args:
-             cfg (CfgNode):
-             model (nn.Module):
-             evaluators (list[DatasetEvaluator] or None): if None, will call
-                 :meth:`build_evaluator`. Otherwise, must have the same length as
-                 `cfg.DATASETS.TEST`.
-
-         Returns:
-             dict: a dict of result metrics
-         """
-         logger = logging.getLogger(__name__)
-         if isinstance(evaluators, DatasetEvaluator):
-             evaluators = [evaluators]
-         if evaluators is not None:
-             assert len(cfg.DATASETS.TEST) == len(evaluators), "{} != {}".format(
-                 len(cfg.DATASETS.TEST), len(evaluators)
-             )
-
-         results = OrderedDict()
-         for idx, dataset_name in enumerate(cfg.DATASETS.TEST):
-             data_loader = cls.build_test_loader(cfg, dataset_name)
-             # When evaluators are passed in as arguments,
-             # implicitly assume that evaluators can be created before data_loader.
-             if evaluators is not None:
-                 evaluator = evaluators[idx]
-             else:
-                 try:
-                     evaluator = cls.build_evaluator(cfg, dataset_name)
-                 except NotImplementedError:
-                     logger.warning(
-                         "No evaluator found. Use `DefaultTrainer.test(evaluators=)`, "
-                         "or implement its `build_evaluator` method."
-                     )
-                     results[dataset_name] = {}
-                     continue
-             results_i = inference_on_dataset(model, data_loader, evaluator)
-             results[dataset_name] = results_i
-             if comm.is_main_process():
-                 assert isinstance(
-                     results_i, dict
-                 ), "Evaluator must return a dict on the main process. Got {} instead.".format(
-                     results_i
-                 )
-                 logger.info(
-                     "Evaluation results for {} in csv format:".format(dataset_name)
-                 )
-                 print_csv_format(results_i)
-
-         if len(results) == 1:
-             results = list(results.values())[0]
-         return results
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/dev/run_instant_tests.sh DELETED
@@ -1,27 +0,0 @@
- #!/bin/bash -e
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
- BIN="python tools/train_net.py"
- OUTPUT="instant_test_output"
- NUM_GPUS=2
-
- CFG_LIST=( "${@:1}" )
- if [ ${#CFG_LIST[@]} -eq 0 ]; then
-     CFG_LIST=( ./configs/quick_schedules/*instant_test.yaml )
- fi
-
- echo "========================================================================"
- echo "Configs to run:"
- echo "${CFG_LIST[@]}"
- echo "========================================================================"
-
- for cfg in "${CFG_LIST[@]}"; do
-     echo "========================================================================"
-     echo "Running $cfg ..."
-     echo "========================================================================"
-     $BIN --num-gpus $NUM_GPUS --config-file "$cfg" \
-         SOLVER.IMS_PER_BATCH $(($NUM_GPUS * 2)) \
-         OUTPUT_DIR "$OUTPUT"
-     rm -rf "$OUTPUT"
- done
-
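The script's two small idioms, taking configs from the command line with a glob fallback, and deriving the total batch size from the GPU count, can be sketched in isolation. The yaml names below are placeholders, not real config files:

```shell
#!/bin/bash
# Collect all command-line arguments into an array; when none are given,
# fall back to a default list (the real script globs quick_schedules/*.yaml).
CFG_LIST=( "${@:1}" )
if [ ${#CFG_LIST[@]} -eq 0 ]; then
    CFG_LIST=( placeholder_a.yaml placeholder_b.yaml )
fi
echo "configs: ${CFG_LIST[*]}"

# The script runs each config with 2 images per GPU in total:
# SOLVER.IMS_PER_BATCH $(($NUM_GPUS * 2))
NUM_GPUS=2
echo "ims_per_batch: $(($NUM_GPUS * 2))"
```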
spaces/CVPR/MonoScene/monoscene/unet2d.py DELETED
@@ -1,198 +0,0 @@
- """
- Code adapted from https://github.com/shariqfarooq123/AdaBins/blob/main/models/unet_adaptive_bins.py
- """
- import torch
- import torch.nn as nn
- import torch.nn.functional as F
- import os
-
-
- class UpSampleBN(nn.Module):
-     def __init__(self, skip_input, output_features):
-         super(UpSampleBN, self).__init__()
-         self._net = nn.Sequential(
-             nn.Conv2d(skip_input, output_features, kernel_size=3, stride=1, padding=1),
-             nn.BatchNorm2d(output_features),
-             nn.LeakyReLU(),
-             nn.Conv2d(
-                 output_features, output_features, kernel_size=3, stride=1, padding=1
-             ),
-             nn.BatchNorm2d(output_features),
-             nn.LeakyReLU(),
-         )
-
-     def forward(self, x, concat_with):
-         up_x = F.interpolate(
-             x,
-             size=(concat_with.shape[2], concat_with.shape[3]),
-             mode="bilinear",
-             align_corners=True,
-         )
-         f = torch.cat([up_x, concat_with], dim=1)
-         return self._net(f)
-
-
- class DecoderBN(nn.Module):
-     def __init__(
-         self, num_features, bottleneck_features, out_feature, use_decoder=True
-     ):
-         super(DecoderBN, self).__init__()
-         features = int(num_features)
-         self.use_decoder = use_decoder
-
-         self.conv2 = nn.Conv2d(
-             bottleneck_features, features, kernel_size=1, stride=1, padding=1
-         )
-
-         self.out_feature_1_1 = out_feature
-         self.out_feature_1_2 = out_feature
-         self.out_feature_1_4 = out_feature
-         self.out_feature_1_8 = out_feature
-         self.out_feature_1_16 = out_feature
-         self.feature_1_16 = features // 2
-         self.feature_1_8 = features // 4
-         self.feature_1_4 = features // 8
-         self.feature_1_2 = features // 16
-         self.feature_1_1 = features // 32
-
-         if self.use_decoder:
-             self.resize_output_1_1 = nn.Conv2d(
-                 self.feature_1_1, self.out_feature_1_1, kernel_size=1
-             )
-             self.resize_output_1_2 = nn.Conv2d(
-                 self.feature_1_2, self.out_feature_1_2, kernel_size=1
-             )
-             self.resize_output_1_4 = nn.Conv2d(
-                 self.feature_1_4, self.out_feature_1_4, kernel_size=1
-             )
-             self.resize_output_1_8 = nn.Conv2d(
-                 self.feature_1_8, self.out_feature_1_8, kernel_size=1
-             )
-             self.resize_output_1_16 = nn.Conv2d(
-                 self.feature_1_16, self.out_feature_1_16, kernel_size=1
-             )
-
-             self.up16 = UpSampleBN(
-                 skip_input=features + 224, output_features=self.feature_1_16
-             )
-             self.up8 = UpSampleBN(
-                 skip_input=self.feature_1_16 + 80, output_features=self.feature_1_8
-             )
-             self.up4 = UpSampleBN(
-                 skip_input=self.feature_1_8 + 48, output_features=self.feature_1_4
-             )
-             self.up2 = UpSampleBN(
-                 skip_input=self.feature_1_4 + 32, output_features=self.feature_1_2
-             )
-             self.up1 = UpSampleBN(
-                 skip_input=self.feature_1_2 + 3, output_features=self.feature_1_1
-             )
-         else:
-             self.resize_output_1_1 = nn.Conv2d(3, out_feature, kernel_size=1)
-             self.resize_output_1_2 = nn.Conv2d(32, out_feature * 2, kernel_size=1)
-             self.resize_output_1_4 = nn.Conv2d(48, out_feature * 4, kernel_size=1)
-
-     def forward(self, features):
-         x_block0, x_block1, x_block2, x_block3, x_block4 = (
-             features[4],
-             features[5],
-             features[6],
-             features[8],
-             features[11],
-         )
-         bs = x_block0.shape[0]
-         x_d0 = self.conv2(x_block4)
-
-         if self.use_decoder:
-             x_1_16 = self.up16(x_d0, x_block3)
-             x_1_8 = self.up8(x_1_16, x_block2)
-             x_1_4 = self.up4(x_1_8, x_block1)
-             x_1_2 = self.up2(x_1_4, x_block0)
-             x_1_1 = self.up1(x_1_2, features[0])
-             return {
-                 "1_1": self.resize_output_1_1(x_1_1),
-                 "1_2": self.resize_output_1_2(x_1_2),
-                 "1_4": self.resize_output_1_4(x_1_4),
-                 "1_8": self.resize_output_1_8(x_1_8),
-                 "1_16": self.resize_output_1_16(x_1_16),
-             }
-         else:
-             x_1_1 = features[0]
-             x_1_2, x_1_4, x_1_8, x_1_16 = (
-                 features[4],
-                 features[5],
-                 features[6],
-                 features[8],
-             )
-             x_global = features[-1].reshape(bs, 2560, -1).mean(2)
-             return {
-                 "1_1": self.resize_output_1_1(x_1_1),
-                 "1_2": self.resize_output_1_2(x_1_2),
-                 "1_4": self.resize_output_1_4(x_1_4),
-                 "global": x_global,
-             }
-
-
- class Encoder(nn.Module):
-     def __init__(self, backend):
-         super(Encoder, self).__init__()
-         self.original_model = backend
-
-     def forward(self, x):
-         features = [x]
-         for k, v in self.original_model._modules.items():
-             if k == "blocks":
-                 for ki, vi in v._modules.items():
-                     features.append(vi(features[-1]))
-             else:
-                 features.append(v(features[-1]))
-         return features
-
-
- class UNet2D(nn.Module):
-     def __init__(self, backend, num_features, out_feature, use_decoder=True):
-         super(UNet2D, self).__init__()
-         self.use_decoder = use_decoder
-         self.encoder = Encoder(backend)
-         self.decoder = DecoderBN(
-             out_feature=out_feature,
-             use_decoder=use_decoder,
-             bottleneck_features=num_features,
-             num_features=num_features,
-         )
-
-     def forward(self, x, **kwargs):
-         encoded_feats = self.encoder(x)
-         unet_out = self.decoder(encoded_feats, **kwargs)
-         return unet_out
-
-     def get_encoder_params(self):  # lr/10 learning rate
-         return self.encoder.parameters()
-
-     def get_decoder_params(self):  # lr learning rate
-         return self.decoder.parameters()
-
-     @classmethod
-     def build(cls, **kwargs):
-         basemodel_name = "tf_efficientnet_b7_ns"
-         num_features = 2560
-
-         print("Loading base model ({})...".format(basemodel_name), end="")
-         basemodel = torch.hub.load(
-             "rwightman/gen-efficientnet-pytorch", basemodel_name, pretrained=True
-         )
-         print("Done.")
-
-         # Remove last two layers
-         print("Removing last two layers (global_pool & classifier).")
-         basemodel.global_pool = nn.Identity()
-         basemodel.classifier = nn.Identity()
-
-         # Building Encoder-Decoder model
-         print("Building Encoder-Decoder model..", end="")
-         m = cls(basemodel, num_features=num_features, **kwargs)
-         print("Done.")
-         return m
-
-
- if __name__ == '__main__':
-     model = UNet2D.build(out_feature=256, use_decoder=True)
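DecoderBN's stage widths follow a fixed halving schedule from `num_features` (2560 for the `tf_efficientnet_b7_ns` backbone used in `UNet2D.build`). A small sketch of that arithmetic; the helper name is ours, not part of the file:

```python
def decoder_stage_widths(num_features: int) -> dict:
    """Mirror DecoderBN's channel schedule: each finer scale halves the width."""
    features = int(num_features)
    return {
        "1_16": features // 2,
        "1_8": features // 4,
        "1_4": features // 8,
        "1_2": features // 16,
        "1_1": features // 32,
    }


# With the 2560-channel EfficientNet-B7 bottleneck used in UNet2D.build:
print(decoder_stage_widths(2560))
# {'1_16': 1280, '1_8': 640, '1_4': 320, '1_2': 160, '1_1': 80}
```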
spaces/Carlosito16/aitGPT/app_with_prompt_v2.py DELETED
@@ -1,256 +0,0 @@
- # This version is the same model with only a different UI, to be a chat-like experience
-
- import streamlit as st
- from streamlit_chat import message as st_message
- import pandas as pd
- import numpy as np
- import datetime
- import gspread
- import pickle
- import os
- import csv
- import json
- import time  # needed by update_worksheet_comment's time.sleep
- import torch
- from tqdm.auto import tqdm
- from langchain.text_splitter import RecursiveCharacterTextSplitter
-
-
- # from langchain.vectorstores import Chroma
- from langchain.vectorstores import FAISS
- from langchain.embeddings import HuggingFaceInstructEmbeddings
-
-
- from langchain import HuggingFacePipeline
- from langchain.chains import RetrievalQA
-
- from langchain.prompts import PromptTemplate
-
-
-
-
- prompt_template = """
-
- You are the chatbot and the face of Asian Institute of Technology (AIT). Your job is to give answers to prospective and current students about the school.
- Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know; don't try to make up an answer.
- Always make sure to be elaborate. And try to use a vibrant, positive tone to represent good branding of the school.
- Never answer with any unfinished response.
-
- {context}
-
- Question: {question}
-
- Always make sure to elaborate your response and use a vibrant, positive tone to represent good branding of the school.
- Never answer with any unfinished response.
-
-
- """
- PROMPT = PromptTemplate(
-     template=prompt_template, input_variables=["context", "question"]
- )
- chain_type_kwargs = {"prompt": PROMPT}
-
-
- st.set_page_config(
-     page_title = 'aitGPT',
-     page_icon = '✅')
-
-
-
-
- @st.cache_data
- def load_scraped_web_info():
-     with open("ait-web-document", "rb") as fp:
-         ait_web_documents = pickle.load(fp)
-
-     text_splitter = RecursiveCharacterTextSplitter(
-         # Set a really small chunk size, just to show.
-         chunk_size = 500,
-         chunk_overlap = 100,
-         length_function = len,
-     )
-
-     chunked_text = text_splitter.create_documents([doc for doc in tqdm(ait_web_documents)])
-
-
- @st.cache_resource
- def load_embedding_model():
-     embedding_model = HuggingFaceInstructEmbeddings(model_name='hkunlp/instructor-base',
-                                                     model_kwargs = {'device': torch.device('cuda' if torch.cuda.is_available() else 'cpu')})
-     return embedding_model
-
- @st.cache_data
- def load_faiss_index():
-     vector_database = FAISS.load_local("faiss_index_web_and_curri_new", embedding_model)  # CHANGE THIS FAISS EMBEDDED KNOWLEDGE
-     return vector_database
-
- @st.cache_resource
- def load_llm_model():
-     # llm = HuggingFacePipeline.from_model_id(model_id= 'lmsys/fastchat-t5-3b-v1.0',
-     #                                         task= 'text2text-generation',
-     #                                         model_kwargs={ "device_map": "auto",
-     #                                                        "load_in_8bit": True, "max_length": 256, "temperature": 0,
-     #                                                        "repetition_penalty": 1.5})
-
-     llm = HuggingFacePipeline.from_model_id(model_id= 'lmsys/fastchat-t5-3b-v1.0',
-                                             task= 'text2text-generation',
-                                             model_kwargs={ "max_length": 256, "temperature": 0,
-                                                            "torch_dtype": torch.float32,
-                                                            "repetition_penalty": 1.3})
-     return llm
-
-
- def load_retriever(llm, db):
-     qa_retriever = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff",
-                                                retriever=db.as_retriever(),
-                                                chain_type_kwargs= chain_type_kwargs)
-
-     return qa_retriever
-
- def retrieve_document(query_input):
-     related_doc = vector_database.similarity_search(query_input)
-     return related_doc
-
- def retrieve_answer():
-     prompt_answer = st.session_state.my_text_input + " " + "Try to elaborate as much as you can."
-     answer = qa_retriever.run(prompt_answer)
-     log = {"timestamp": datetime.datetime.now(),
-            "question": st.session_state.my_text_input,
-            "generated_answer": answer[6:],
-            "rating": 0}
-
-     st.session_state.history.append(log)
-     update_worksheet_qa()
-     st.session_state.chat_history.append({"message": st.session_state.my_text_input, "is_user": True})
-     st.session_state.chat_history.append({"message": answer[6:], "is_user": False})
-
-     st.session_state.my_text_input = ""
-
-     return answer[6:]  # this positional slicing helps remove "<pad> " at the beginning
-
- # def update_score():
- #     st.session_state.session_rating = st.session_state.rating
-
-
- def update_worksheet_qa():
-     # st.session_state.session_rating = st.session_state.rating
-     # This check used to validate the initial rating: if 0, the Google Sheet would not be updated.
-     # (edited) Now even with a score of 0, we still store the log, because some users never give a
-     # score and the logging should still complete.
-     # if st.session_state.session_rating == 0:
-     worksheet_qa.append_row([st.session_state.history[-1]['timestamp'].strftime(datetime_format),
-                              st.session_state.history[-1]['question'],
-                              st.session_state.history[-1]['generated_answer'],
-                              0])
-     # else:
-     #     worksheet_qa.append_row([st.session_state.history[-1]['timestamp'].strftime(datetime_format),
-     #                              st.session_state.history[-1]['question'],
-     #                              st.session_state.history[-1]['generated_answer'],
-     #                              st.session_state.session_rating
-     #                              ])
-
- def update_worksheet_comment():
-     worksheet_comment.append_row([datetime.datetime.now().strftime(datetime_format),
-                                   feedback_input])
-     success_message = st.success('Feedback successfully submitted, thank you', icon="✅")
-     time.sleep(3)
-     success_message.empty()
-
-
- def clean_chat_history():
-     st.session_state.chat_history = []
-
- #--------------
-
-
- if "history" not in st.session_state:  # this one is for the google sheet logging
-     st.session_state.history = []
-
-
- if "chat_history" not in st.session_state:  # this one is to pass previous messages into chat flow
-     st.session_state.chat_history = []
- # if "session_rating" not in st.session_state:
- #     st.session_state.session_rating = 0
-
-
- credentials = json.loads(st.secrets['google_sheet_credential'])
-
- service_account = gspread.service_account_from_dict(credentials)
- workbook = service_account.open("aitGPT-qa-log")
- worksheet_qa = workbook.worksheet("Sheet1")
- worksheet_comment = workbook.worksheet("Sheet2")
- datetime_format = "%Y-%m-%d %H:%M:%S"
-
-
-
- load_scraped_web_info()
- embedding_model = load_embedding_model()
- vector_database = load_faiss_index()
- llm_model = load_llm_model()
- qa_retriever = load_retriever(llm= llm_model, db= vector_database)
-
-
- print("all load done")
-
-
-
-
- st.write("# aitGPT 🤖 ")
- st.markdown("""
- #### The aitGPT project is a virtual assistant developed by the :green[Asian Institute of Technology] that contains a vast amount of information gathered from 205 AIT-related websites.
- The goal of this chatbot is to provide an alternative way for applicants and current students to access information about the institute, including admission procedures, campus facilities, and more.
- """)
- st.write(' ⚠️ Please expect to wait **~ 10 - 20 seconds per question**, as this app is running on CPU against a 3-billion-parameter LLM')
-
- st.markdown("---")
- st.write(" ")
- st.write("""
- ### ❔ Ask a question
- """)
-
-
- for chat in st.session_state.chat_history:
-     st_message(**chat)
-
- query_input = st.text_input(label= 'What would you like to know about AIT?', key = 'my_text_input', on_change= retrieve_answer)
- # generate_button = st.button(label = 'Ask question!')
-
- # if generate_button:
- #     answer = retrieve_answer(query_input)
- #     log = {"timestamp": datetime.datetime.now(),
- #            "question": query_input,
- #            "generated_answer": answer,
- #            "rating": 0}
-
- #     st.session_state.history.append(log)
- #     update_worksheet_qa()
- #     st.session_state.chat_history.append({"message": query_input, "is_user": True})
- #     st.session_state.chat_history.append({"message": answer, "is_user": False})
-
- #     print(st.session_state.chat_history)
-
-
- clear_button = st.button("Start new convo",
-                          on_click=clean_chat_history)
-
-
- st.write(" ")
- st.write(" ")
-
- st.markdown("---")
- st.write("""
- ### 💌 Your voice matters
- """)
-
- feedback_input = st.text_area(label= 'please leave your feedback or any ideas to make this bot more knowledgeable and fun')
- feedback_button = st.button(label = 'Submit feedback!')
-
- if feedback_button:
-     update_worksheet_comment()
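The `answer[6:]` slicing in `retrieve_answer` assumes every fastchat-t5 answer begins with the 6-character prefix `"<pad> "`. A slightly safer variant (a hypothetical helper, not part of the app) strips the prefix only when it is actually present:

```python
PAD_PREFIX = "<pad> "  # 6 characters, matching the app's answer[6:] slice


def strip_pad_prefix(answer: str) -> str:
    """Remove the leading "<pad> " token emitted by fastchat-t5, if present."""
    if answer.startswith(PAD_PREFIX):
        return answer[len(PAD_PREFIX):]
    return answer


print(strip_pad_prefix("<pad> AIT is located in Thailand."))  # AIT is located in Thailand.
print(strip_pad_prefix("No prefix here."))                    # No prefix here.
```

Unlike the unconditional slice, this does not corrupt answers that happen to start without the pad token.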
spaces/Cecil8352/vits-models/modules.py DELETED
@@ -1,388 +0,0 @@
1
- import math
2
- import numpy as np
3
- import torch
4
- from torch import nn
5
- from torch.nn import functional as F
6
-
7
- from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
8
- from torch.nn.utils import weight_norm, remove_weight_norm
9
-
10
- import commons
11
- from commons import init_weights, get_padding
12
- from transforms import piecewise_rational_quadratic_transform
13
-
14
-
15
- LRELU_SLOPE = 0.1
16
-
17
-
18
- class LayerNorm(nn.Module):
19
- def __init__(self, channels, eps=1e-5):
20
- super().__init__()
21
- self.channels = channels
22
- self.eps = eps
23
-
24
- self.gamma = nn.Parameter(torch.ones(channels))
25
- self.beta = nn.Parameter(torch.zeros(channels))
26
-
27
- def forward(self, x):
28
- x = x.transpose(1, -1)
29
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
30
- return x.transpose(1, -1)
31
-
32
-
33
- class ConvReluNorm(nn.Module):
34
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
35
- super().__init__()
36
- self.in_channels = in_channels
37
- self.hidden_channels = hidden_channels
38
- self.out_channels = out_channels
39
- self.kernel_size = kernel_size
40
- self.n_layers = n_layers
41
- self.p_dropout = p_dropout
42
- assert n_layers > 1, "Number of layers should be larger than 0."
43
-
44
- self.conv_layers = nn.ModuleList()
45
- self.norm_layers = nn.ModuleList()
46
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
47
- self.norm_layers.append(LayerNorm(hidden_channels))
48
- self.relu_drop = nn.Sequential(
49
- nn.ReLU(),
50
- nn.Dropout(p_dropout))
51
- for _ in range(n_layers-1):
52
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
53
- self.norm_layers.append(LayerNorm(hidden_channels))
54
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
55
- self.proj.weight.data.zero_()
56
- self.proj.bias.data.zero_()
57
-
58
- def forward(self, x, x_mask):
59
- x_org = x
60
- for i in range(self.n_layers):
61
- x = self.conv_layers[i](x * x_mask)
62
- x = self.norm_layers[i](x)
63
- x = self.relu_drop(x)
64
- x = x_org + self.proj(x)
65
- return x * x_mask
66
-
67
-
68
- class DDSConv(nn.Module):
-     """
-     Dilated and Depth-Separable Convolution
-     """
-     def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
-         super().__init__()
-         self.channels = channels
-         self.kernel_size = kernel_size
-         self.n_layers = n_layers
-         self.p_dropout = p_dropout
-
-         self.drop = nn.Dropout(p_dropout)
-         self.convs_sep = nn.ModuleList()
-         self.convs_1x1 = nn.ModuleList()
-         self.norms_1 = nn.ModuleList()
-         self.norms_2 = nn.ModuleList()
-         for i in range(n_layers):
-             dilation = kernel_size ** i
-             padding = (kernel_size * dilation - dilation) // 2
-             self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
-                                             groups=channels, dilation=dilation, padding=padding))
-             self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
-             self.norms_1.append(LayerNorm(channels))
-             self.norms_2.append(LayerNorm(channels))
-
-     def forward(self, x, x_mask, g=None):
-         if g is not None:
-             x = x + g
-         for i in range(self.n_layers):
-             y = self.convs_sep[i](x * x_mask)
-             y = self.norms_1[i](y)
-             y = F.gelu(y)
-             y = self.convs_1x1[i](y)
-             y = self.norms_2[i](y)
-             y = F.gelu(y)
-             y = self.drop(y)
-             x = x + y
-         return x * x_mask
-
-
- class WN(torch.nn.Module):
-     def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
-         super(WN, self).__init__()
-         assert kernel_size % 2 == 1
-         self.hidden_channels = hidden_channels
-         self.kernel_size = kernel_size
-         self.dilation_rate = dilation_rate
-         self.n_layers = n_layers
-         self.gin_channels = gin_channels
-         self.p_dropout = p_dropout
-
-         self.in_layers = torch.nn.ModuleList()
-         self.res_skip_layers = torch.nn.ModuleList()
-         self.drop = nn.Dropout(p_dropout)
-
-         if gin_channels != 0:
-             cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
-             self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
-         for i in range(n_layers):
-             dilation = dilation_rate ** i
-             padding = int((kernel_size * dilation - dilation) / 2)
-             in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
-                                        dilation=dilation, padding=padding)
-             in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
-             self.in_layers.append(in_layer)
-
-             # last one is not necessary
-             if i < n_layers - 1:
-                 res_skip_channels = 2 * hidden_channels
-             else:
-                 res_skip_channels = hidden_channels
-
-             res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
-             res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
-             self.res_skip_layers.append(res_skip_layer)
-
-     def forward(self, x, x_mask, g=None, **kwargs):
-         output = torch.zeros_like(x)
-         n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
-         if g is not None:
-             g = self.cond_layer(g)
-
-         for i in range(self.n_layers):
-             x_in = self.in_layers[i](x)
-             if g is not None:
-                 cond_offset = i * 2 * self.hidden_channels
-                 g_l = g[:, cond_offset:cond_offset+2*self.hidden_channels, :]
-             else:
-                 g_l = torch.zeros_like(x_in)
-
-             acts = commons.fused_add_tanh_sigmoid_multiply(
-                 x_in,
-                 g_l,
-                 n_channels_tensor)
-             acts = self.drop(acts)
-
-             res_skip_acts = self.res_skip_layers[i](acts)
-             if i < self.n_layers - 1:
-                 res_acts = res_skip_acts[:, :self.hidden_channels, :]
-                 x = (x + res_acts) * x_mask
-                 output = output + res_skip_acts[:, self.hidden_channels:, :]
-             else:
-                 output = output + res_skip_acts
-         return output * x_mask
-
-     def remove_weight_norm(self):
-         if self.gin_channels != 0:
-             torch.nn.utils.remove_weight_norm(self.cond_layer)
-         for l in self.in_layers:
-             torch.nn.utils.remove_weight_norm(l)
-         for l in self.res_skip_layers:
-             torch.nn.utils.remove_weight_norm(l)
-
-
- class ResBlock1(torch.nn.Module):
-     def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
-         super(ResBlock1, self).__init__()
-         self.convs1 = nn.ModuleList([
-             weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
-                                padding=get_padding(kernel_size, dilation[0]))),
-             weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
-                                padding=get_padding(kernel_size, dilation[1]))),
-             weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
-                                padding=get_padding(kernel_size, dilation[2])))
-         ])
-         self.convs1.apply(init_weights)
-
-         self.convs2 = nn.ModuleList([
-             weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
-                                padding=get_padding(kernel_size, 1))),
-             weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
-                                padding=get_padding(kernel_size, 1))),
-             weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
-                                padding=get_padding(kernel_size, 1)))
-         ])
-         self.convs2.apply(init_weights)
-
-     def forward(self, x, x_mask=None):
-         for c1, c2 in zip(self.convs1, self.convs2):
-             xt = F.leaky_relu(x, LRELU_SLOPE)
-             if x_mask is not None:
-                 xt = xt * x_mask
-             xt = c1(xt)
-             xt = F.leaky_relu(xt, LRELU_SLOPE)
-             if x_mask is not None:
-                 xt = xt * x_mask
-             xt = c2(xt)
-             x = xt + x
-         if x_mask is not None:
-             x = x * x_mask
-         return x
-
-     def remove_weight_norm(self):
-         for l in self.convs1:
-             remove_weight_norm(l)
-         for l in self.convs2:
-             remove_weight_norm(l)
-
-
- class ResBlock2(torch.nn.Module):
-     def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
-         super(ResBlock2, self).__init__()
-         self.convs = nn.ModuleList([
-             weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
-                                padding=get_padding(kernel_size, dilation[0]))),
-             weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
-                                padding=get_padding(kernel_size, dilation[1])))
-         ])
-         self.convs.apply(init_weights)
-
-     def forward(self, x, x_mask=None):
-         for c in self.convs:
-             xt = F.leaky_relu(x, LRELU_SLOPE)
-             if x_mask is not None:
-                 xt = xt * x_mask
-             xt = c(xt)
-             x = xt + x
-         if x_mask is not None:
-             x = x * x_mask
-         return x
-
-     def remove_weight_norm(self):
-         for l in self.convs:
-             remove_weight_norm(l)
-
-
- class Log(nn.Module):
-     def forward(self, x, x_mask, reverse=False, **kwargs):
-         if not reverse:
-             y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
-             logdet = torch.sum(-y, [1, 2])
-             return y, logdet
-         else:
-             x = torch.exp(x) * x_mask
-             return x
-
-
- class Flip(nn.Module):
-     def forward(self, x, *args, reverse=False, **kwargs):
-         x = torch.flip(x, [1])
-         if not reverse:
-             logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
-             return x, logdet
-         else:
-             return x
-
-
- class ElementwiseAffine(nn.Module):
-     def __init__(self, channels):
-         super().__init__()
-         self.channels = channels
-         self.m = nn.Parameter(torch.zeros(channels, 1))
-         self.logs = nn.Parameter(torch.zeros(channels, 1))
-
-     def forward(self, x, x_mask, reverse=False, **kwargs):
-         if not reverse:
-             y = self.m + torch.exp(self.logs) * x
-             y = y * x_mask
-             logdet = torch.sum(self.logs * x_mask, [1, 2])
-             return y, logdet
-         else:
-             x = (x - self.m) * torch.exp(-self.logs) * x_mask
-             return x
-
-
- class ResidualCouplingLayer(nn.Module):
-     def __init__(self,
-                  channels,
-                  hidden_channels,
-                  kernel_size,
-                  dilation_rate,
-                  n_layers,
-                  p_dropout=0,
-                  gin_channels=0,
-                  mean_only=False):
-         assert channels % 2 == 0, "channels should be divisible by 2"
-         super().__init__()
-         self.channels = channels
-         self.hidden_channels = hidden_channels
-         self.kernel_size = kernel_size
-         self.dilation_rate = dilation_rate
-         self.n_layers = n_layers
-         self.half_channels = channels // 2
-         self.mean_only = mean_only
-
-         self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
-         self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
-         self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
-         self.post.weight.data.zero_()
-         self.post.bias.data.zero_()
-
-     def forward(self, x, x_mask, g=None, reverse=False):
-         x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
-         h = self.pre(x0) * x_mask
-         h = self.enc(h, x_mask, g=g)
-         stats = self.post(h) * x_mask
-         if not self.mean_only:
-             m, logs = torch.split(stats, [self.half_channels] * 2, 1)
-         else:
-             m = stats
-             logs = torch.zeros_like(m)
-
-         if not reverse:
-             x1 = m + x1 * torch.exp(logs) * x_mask
-             x = torch.cat([x0, x1], 1)
-             logdet = torch.sum(logs, [1, 2])
-             return x, logdet
-         else:
-             x1 = (x1 - m) * torch.exp(-logs) * x_mask
-             x = torch.cat([x0, x1], 1)
-             return x
-
-
- class ConvFlow(nn.Module):
-     def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
-         super().__init__()
-         self.in_channels = in_channels
-         self.filter_channels = filter_channels
-         self.kernel_size = kernel_size
-         self.n_layers = n_layers
-         self.num_bins = num_bins
-         self.tail_bound = tail_bound
-         self.half_channels = in_channels // 2
-
-         self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
-         self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
-         self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
-         self.proj.weight.data.zero_()
-         self.proj.bias.data.zero_()
-
-     def forward(self, x, x_mask, g=None, reverse=False):
-         x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
-         h = self.pre(x0)
-         h = self.convs(h, x_mask, g=g)
-         h = self.proj(h) * x_mask
-
-         b, c, t = x0.shape
-         h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2)  # [b, c*?, t] -> [b, c, t, ?]
-
-         unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
-         unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
-         unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
-         x1, logabsdet = piecewise_rational_quadratic_transform(x1,
-                                                                unnormalized_widths,
-                                                                unnormalized_heights,
-                                                                unnormalized_derivatives,
-                                                                inverse=reverse,
-                                                                tails='linear',
-                                                                tail_bound=self.tail_bound)
-
-         x = torch.cat([x0, x1], 1) * x_mask
-         logdet = torch.sum(logabsdet * x_mask, [1, 2])
-         if not reverse:
-             return x, logdet
-         else:
-             return x
 
spaces/CofAI/chat/client/css/global.css DELETED
@@ -1,70 +0,0 @@
- @import url("https://fonts.googleapis.com/css2?family=Inter:wght@100;200;300;400;500;600;700;800;900&display=swap");
- * {
-     --font-1: "Inter", sans-serif;
-     --section-gap: 24px;
-     --border-radius-1: 8px;
-     margin: 0;
-     padding: 0;
-     box-sizing: border-box;
-     position: relative;
-     font-family: var(--font-1);
- }
-
- .theme-light {
-     --colour-1: #f5f5f5;
-     --colour-2: #000000;
-     --colour-3: #474747;
-     --colour-4: #949494;
-     --colour-5: #ebebeb;
-     --colour-6: #dadada;
-
-     --accent: #3a3a3a;
-     --blur-bg: #ffffff;
-     --blur-border: #dbdbdb;
-     --user-input: #282828;
-     --conversations: #666666;
- }
-
- .theme-dark {
-     --colour-1: #181818;
-     --colour-2: #ccc;
-     --colour-3: #dadada;
-     --colour-4: #f0f0f0;
-     --colour-5: #181818;
-     --colour-6: #242424;
-
-     --accent: #151718;
-     --blur-bg: #242627;
-     --blur-border: #242627;
-     --user-input: #f5f5f5;
-     --conversations: #555555;
- }
-
- html,
- body {
-     background: var(--colour-1);
-     color: var(--colour-3);
- }
-
- ol,
- ul {
-     padding-left: 20px;
- }
-
- .shown {
-     display: flex !important;
- }
-
- a:-webkit-any-link {
-     color: var(--accent);
- }
-
- pre {
-     white-space: pre-wrap;
- }
-
- @media screen and (max-height: 720px) {
-     :root {
-         --section-gap: 16px;
-     }
- }
 
spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/retinanet/__init__.py DELETED
File without changes