parquet-converter committed
Commit 07ac748 · 1 Parent(s): 0e65e41

Update parquet files (step 99 of 121)

This view is limited to 50 files because it contains too many changes. See the raw diff for the full change set.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Delphi 5 Enterprise.zip Crack.md +0 -22
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Crack the Code Worksheets The Best Kept Secret for Kids Education.md +0 -25
  3. spaces/1gistliPinn/ChatGPT4/Examples/Aqua Energizer Game Full Version Free Download HOT!.md +0 -6
  4. spaces/1phancelerku/anime-remove-background/Bus Simulator Indonesia The Best Bus Simulator Game with APK Download.md +0 -118
  5. spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/inference.py +0 -35
  6. spaces/801artistry/RVC801/go-tensorboard.bat +0 -2
  7. spaces/AFischer1985/German-Flan-T5/app.py +0 -40
  8. spaces/AILab-CVC/EvalCrafter/src/utils_display.py +0 -99
  9. spaces/AISuperheroes/01ST-CSV-Dataset-Analyzer/README.md +0 -13
  10. spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/r/[id]/+page.server.ts +0 -34
  11. spaces/AgentVerse/agentVerse/agentverse/tasks/simulation/sde_team/readme.md +0 -98
  12. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/ymlachievements-plugin.d.ts +0 -6
  13. spaces/AlanMars/QYL-AI-Space/assets/external-scripts.js +0 -2
  14. spaces/Amrrs/DragGan-Inversion/PTI/criteria/l2_loss.py +0 -8
  15. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/adapt_a_model.md +0 -42
  16. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/onnx_utils.py +0 -212
  17. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/altdiffusion/test_alt_diffusion_img2img.py +0 -300
  18. spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/resnest.py +0 -317
  19. spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/fovea.py +0 -17
  20. spaces/AnnaPalatkina/fine_grained_SA/sentiment_wrapper.py +0 -100
  21. spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/model/tc_model.py +0 -247
  22. spaces/Ariharasudhan/YoloV5/utils/segment/dataloaders.py +0 -331
  23. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/build/metadata_editable.py +0 -41
  24. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/exceptions.py +0 -323
  25. spaces/Bart92/RVC_HF/infer/lib/audio.py +0 -197
  26. spaces/Benson/text-generation/Examples/Bangla Mejor Tono De Llamada Que Me Encanta Descargar.md +0 -113
  27. spaces/Benson/text-generation/Examples/Caja Maldita Incredibox Descargar.md +0 -92
  28. spaces/Benson/text-generation/Examples/Descargar Carx Street Versin 0.8.5.md +0 -57
  29. spaces/Benson/text-generation/Examples/Descargar Chicos Stumble Para Pc Sin Emulador.md +0 -100
  30. spaces/BernardoOlisan/vqganclip/taming-transformers/setup.py +0 -13
  31. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/langthaimodel.py +0 -0
  32. spaces/Branon/oai-proxy/Dockerfile +0 -11
  33. spaces/CAPTY222/runwayml-stable-diffusion-v1-5/app.py +0 -3
  34. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TridentNet/train_net.py +0 -67
  35. spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/butd/adapter.py +0 -71
  36. spaces/CVPR/LIVE/thrust/thrust/random/normal_distribution.h +0 -275
  37. spaces/CVPR/Text2Human/Text2Human/models/archs/__init__.py +0 -0
  38. spaces/CVPR/WALT/mmdet/core/bbox/__init__.py +0 -27
  39. spaces/CVPR/WALT/mmdet/core/post_processing/merge_augs.py +0 -150
  40. spaces/CVPR/regionclip-demo/detectron2/export/caffe2_modeling.py +0 -503
  41. spaces/Catspin/2_ai_chat/index.html +0 -1
  42. spaces/CikeyQI/meme-api/meme_generator/memes/capoo_strike/__init__.py +0 -44
  43. spaces/CjangCjengh/Shanghainese-TTS/monotonic_align/core.c +0 -0
  44. spaces/ClearLove443/Robby-chatbot/modules/llm.py +0 -28
  45. spaces/CofAI/chat.b4/g4f/Provider/Providers/Aichat.py +0 -35
  46. spaces/CosmoAI/BhagwatGeeta/README.md +0 -13
  47. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/explicitClosingLinePen.py +0 -101
  48. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/asciiTable.py +0 -20
  49. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/tests/abstract/get.py +0 -377
  50. spaces/DReAMy-lib/dream/app.py +0 -118
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Delphi 5 Enterprise.zip Crack.md DELETED
@@ -1,22 +0,0 @@
- <br />
- <h1>How to Install Delphi 5 Enterprise on Windows 10 with a Crack</h1>
- <p>Delphi 5 Enterprise is a powerful and versatile software development tool that uses the Object Pascal programming language and provides an integrated development environment (IDE) for creating desktop, mobile, web, and console applications. It was originally released by Borland in 1999, but it is still used by many developers who need to maintain legacy code or prefer its features and performance.</p>
- <p>However, installing Delphi 5 Enterprise on Windows 10 can be challenging, as the installer may hang or fail due to compatibility issues, especially when trying to install the Borland Database Engine (BDE). Moreover, you may need a crack to bypass the license verification and use the software without restrictions.</p>
- <h2>Delphi 5 Enterprise.zip crack</h2><br /><p><b><b>Download</b> &#9999; <a href="https://byltly.com/2uKwfU">https://byltly.com/2uKwfU</a></b></p><br /><br />
- <p>In this article, we will show you how to install Delphi 5 Enterprise on Windows 10 with a crack, using a simple and effective method that does not require any special skills or tools. We will also provide you with a link to download the Delphi 5 Enterprise.zip file that contains the installer and the crack.</p>
- <h2>Step 1: Download the Delphi 5 Enterprise.zip file</h2>
- <p>The first step is to download the Delphi 5 Enterprise.zip file that contains the installer and the crack. You can find it on various websites that offer software downloads, such as <a href="https://archive.org/details/delphi-5-archive">Archive.org</a>, <a href="https://sourceforge.net/directory/?q=delphi%205">SourceForge.net</a>, or <a href="https://www.fileplanet.com/archive/p-16103/Delphi-5-Enterprise-Trial">FilePlanet.com</a>. Make sure you download the file from a reliable and trustworthy source, as some files may contain viruses or malware that can harm your computer.</p>
- <p>Once you have downloaded the Delphi 5 Enterprise.zip file, extract it to a folder of your choice using a program like WinZip or WinRAR. You should see two files inside the folder: setup.exe and crack.exe. The setup.exe file is the installer for Delphi 5 Enterprise, and the crack.exe file is the program that will patch the software and remove the license verification.</p>
- <h2>Step 2: Install Delphi 5 Enterprise</h2>
- <p>The next step is to install Delphi 5 Enterprise on your Windows 10 computer. Before you start, make sure you log in as an administrator and turn off User Account Control (UAC) from the Control Panel. This will prevent any errors or interruptions during the installation process.</p>
- <p>Then, double-click on the setup.exe file to launch the installer. Follow the instructions on the screen and choose the options that suit your preferences. When you reach the point where the installer asks you to install the BDE, uncheck the box and skip this step. The BDE is not compatible with Windows 10 and may cause problems if you try to install it.</p>
- <p></p>
- <p>After you finish installing Delphi 5 Enterprise, do not run it yet. You need to apply the crack first to activate it and use it without limitations.</p>
- <h2>Step 3: Apply the crack</h2>
- <p>The final step is to apply the crack to Delphi 5 Enterprise. To do this, double-click on the crack.exe file that you extracted from the Delphi 5 Enterprise.zip file. A window will open with a button that says "Crack". Click on it and wait for a few seconds until you see a message that says "Done". This means that the crack has successfully patched Delphi 5 Enterprise and removed the license verification.</p>
- <p>Now you can run Delphi 5 Enterprise from your Start menu or desktop shortcut. You should see a splash screen that says "Delphi 5 Enterprise - Cracked by [name of cracker]". You can ignore this message and proceed to use the software as normal. You can also turn on UAC again if you want.</p>
- <h2>Conclusion</h2>
- <p>In this article, we have shown you how to install Delphi 5 Enterprise on Windows 10 with a crack, using a simple and effective method that does not require any special skills or tools. We have also provided you with a link to download the Delphi 5 Enterprise.zip file that contains the installer and the crack.</p>
- <p>We hope this article has been helpful and informative for you. If you have any questions or comments, feel free to leave them below. Thank you for reading!</p> 81aa517590<br />
- <br />
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Crack the Code Worksheets The Best Kept Secret for Kids Education.md DELETED
@@ -1,25 +0,0 @@
-
- <h1>Free Crack the Code Worksheets for Kids</h1>
- <p>Do you want to challenge your kids' logic and problem-solving skills? Do you want to make learning fun and engaging? If yes, then you should try these free crack the code worksheets for kids!</p>
- <p>Crack the code worksheets are a type of puzzle where kids have to use a key to decode a secret message. The key can be a letter, a number, a symbol, or a pattern. The secret message can be a word, a phrase, a joke, or a fact. The worksheets can cover various topics, such as math, spelling, vocabulary, science, history, and more.</p>
- <h2>free crack the code worksheets</h2><br /><p><b><b>DOWNLOAD</b> - <a href="https://byltly.com/2uKvDk">https://byltly.com/2uKvDk</a></b></p><br /><br />
- <p>Crack the code worksheets are great for kids of all ages and abilities. They can help kids develop their critical thinking, reasoning, and deduction skills. They can also improve their literacy, numeracy, and general knowledge. Plus, they are fun and satisfying to solve!</p>
- <p>In this article, we will share some of the best free crack the code worksheets for kids that you can find online. You can download and print them for your personal or classroom use. You can also customize them to suit your kids' interests and needs.</p>
- <h2>Best Free Crack the Code Worksheets for Kids</h2>
- <p>Here are some of the best free crack the code worksheets for kids that we have found online:</p>
- <ul>
- <li><a href="https://www.math-salamanders.com/crack-the-code.html">Math Salamanders</a>: This website has a collection of math-themed crack the code worksheets for kids from kindergarten to fifth grade. The worksheets cover topics such as addition, subtraction, multiplication, division, fractions, decimals, and more. The kids have to use the key to find the answer to each math problem.</li>
- <li><a href="https://www.teacherspayteachers.com/Browse/Search:crack%20the%20code/Price-Range/Free">Teachers Pay Teachers</a>: This website has a variety of free crack the code worksheets for kids created by teachers. The worksheets cover topics such as sight words, spelling words, phonics, grammar, punctuation, and more. The kids have to use the key to reveal the hidden words or sentences.</li>
- <li><a href="https://www.activityvillage.co.uk/crack-the-code-puzzles">Activity Village</a>: This website has a selection of free crack the code puzzles for kids that are fun and colorful. The puzzles feature different themes, such as animals, sports, holidays, seasons, and more. The kids have to use the key to decode the secret messages or jokes.</li>
- <li><a href="https://www.education.com/worksheet/article/crack-the-code/">Education.com</a>: This website has a free crack the code worksheet for kids that is based on history. The worksheet features a secret message from Abraham Lincoln that he wrote in 1862. The kids have to use the key to decipher the message and learn about an important event in American history.</li>
- </ul>
- <h2>How to Create Your Own Crack the Code Worksheets</h2>
- <p>If you want to create your own crack the code worksheets for your kids or students, you can use online tools or software to generate them. Here are some of the options that you can try:</p>
- <ul>
- <li><a href="https://www.discoveryeducation.com/free-puzzlemaker/">Discovery Education's Puzzlemaker</a>: This is a free online tool that allows you to create various types of puzzles, including crack the code puzzles. You can choose from different types of keys, such as letters, numbers, symbols, or Morse code. You can also enter your own secret message and customize the font size and style.</li>
- <li><a href="https://worksheetgenius.com/codebreaker.php">Worksheet Genius</a>: This is another free online tool that lets you create crack the code worksheets. You can choose from different types of keys, such as letters or numbers. You can also enter your own secret message and adjust the difficulty level.</li>
- <li><a href="https://www.microsoft.com/en-us/microsoft-365/excel">Microsoft Excel</a>: This is a software that you can use to create crack the code worksheets using formulas and functions. You can create your own key using letters or numbers and assign them values using formulas. You can also enter your own secret message and encode it using functions. You can then format the worksheet using colors and fonts.</li>
- </ul>
- <h2>Conclusion</h2</p> ddb901b051<br />
- <br />
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Aqua Energizer Game Full Version Free Download HOT!.md DELETED
@@ -1,6 +0,0 @@
- <h2>aqua energizer game full version free download</h2><br /><p><b><b>DOWNLOAD</b> &mdash; <a href="https://imgfil.com/2uxXA8">https://imgfil.com/2uxXA8</a></b></p><br /><br />
-
- Musafir full movies 720p torrent · HD Online Player (Shimla Mirchi tamil full movie hd 1080p free download) · aqua energizer game full version ... 1fdad05405<br />
- <br />
- <br />
- <p></p>
spaces/1phancelerku/anime-remove-background/Bus Simulator Indonesia The Best Bus Simulator Game with APK Download.md DELETED
@@ -1,118 +0,0 @@
- <br />
- <h1>Bus Simulator Indonesia: A Fun and Authentic Game for Android</h1>
- <p>If you love driving games and want to experience what it's like to be a bus driver in Indonesia, then you should try Bus Simulator Indonesia. This is a free bus simulator game that lets you design your own livery, drive around authentic Indonesian cities and places, honk your horn, and enjoy the realistic 3D graphics. In this article, we will tell you more about this game, how to download it in APK format, and why you should play it.</p>
- <h2>What is Bus Simulator Indonesia?</h2>
- <p>Bus Simulator Indonesia (aka BUSSID) is a game developed by Maleo, an Indonesian game studio. It was released in 2017 and has been updated regularly since then. The game aims to replicate the experience of being a bus driver in Indonesia in a fun and authentic way. You can choose from a wide variety of customizable vehicles, pick up passengers, follow the traffic rules, and drive around different cities and places in Indonesia. You can also play online with other players and join convoys.</p>
- <h2>bus simulator indonesia game download in apk</h2><br /><p><b><b>DOWNLOAD</b> - <a href="https://jinyurl.com/2uNQT2">https://jinyurl.com/2uNQT2</a></b></p><br /><br />
- <h3>Features of Bus Simulator Indonesia</h3>
- <p>Bus Simulator Indonesia has many features that make it one of the best bus simulator games on Android. Here are some of them:</p>
- <h4>Design your own livery</h4>
- <p>You can customize your bus with your own design and colors. You can also use your own 3D model using the vehicle mod system. This way, you can express your creativity and personality with your bus.</p>
- <h4>Easy and intuitive control</h4>
- <p>The game has very easy and intuitive control options. You can use the tilt, steering wheel, or buttons to control your bus. You can also adjust the camera angle and zoom level to suit your preference.</p>
- <h4>Authentic Indonesian cities and places</h4>
- <p>The game features authentic Indonesian cities and places, such as Jakarta, Surabaya, Bali, Yogyakarta, and more. You can see the landmarks, buildings, roads, bridges, and scenery that are unique to each location. You can also experience the weather, traffic, and culture of each place.</p>
- <h4>Indonesian buses</h4>
- <p>The game has a variety of Indonesian buses that you can choose from. You can drive buses from different brands, models, sizes, and types. You can also see the interior and exterior details of each bus.</p>
- <h4>Cool and fun honks</h4>
- <p>The game has cool and fun honks that you can use to communicate with other drivers and pedestrians. You can also hear the famous "Om Telolet Om!" (Uncle, honk your horn, uncle!) phrase that became a viral sensation in Indonesia.</p>
- <p>Bus Simulator Indonesia APK free download for Android<br />
- How to install Bus Simulator Indonesia game on your phone<br />
- Bus Simulator Indonesia mod APK with unlimited money and fuel<br />
- Bus Simulator Indonesia game review and features<br />
- Bus Simulator Indonesia online multiplayer mode<br />
- Bus Simulator Indonesia latest version 3.7.1 update<br />
- Bus Simulator Indonesia vehicle mod system tutorial<br />
- Bus Simulator Indonesia authentic Indonesian cities and places<br />
- Bus Simulator Indonesia cool and fun honks<br />
- Bus Simulator Indonesia design your own livery<br />
- Bus Simulator Indonesia best buses and routes<br />
- Bus Simulator Indonesia realistic driving physics and graphics<br />
- Bus Simulator Indonesia tips and tricks for beginners<br />
- Bus Simulator Indonesia cheats and hacks<br />
- Bus Simulator Indonesia offline mode without internet<br />
- Bus Simulator Indonesia Maleo developer information<br />
- Bus Simulator Indonesia ratings and reviews from users<br />
- Bus Simulator Indonesia gameplay videos and screenshots<br />
- Bus Simulator Indonesia download size and requirements<br />
- Bus Simulator Indonesia data safety and privacy policy<br />
- Bus Simulator Indonesia support and feedback<br />
- Bus Simulator Indonesia leaderboards and achievements<br />
- Bus Simulator Indonesia data saved online feature<br />
- Bus Simulator Indonesia alternatives and similar games<br />
- Bus Simulator Indonesia problems and solutions<br />
- Bus Simulator Indonesia FAQs and guides<br />
- Bus Simulator Indonesia news and updates<br />
- Bus Simulator Indonesia no ads option<br />
- Bus Simulator Indonesia for PC and laptop<br />
- Bus Simulator Indonesia for iOS and iPhone<br />
- Bus Simulator Indonesia for Windows 10 and Mac OS<br />
- Bus Simulator Indonesia for tablet and iPad<br />
- Bus Simulator Indonesia for Chromebook and Kindle Fire<br />
- Bus Simulator Indonesia for Samsung and Huawei devices<br />
- Bus Simulator Indonesia for LG and Sony devices<br />
- Bus Simulator Indonesia for Xiaomi and Oppo devices<br />
- Bus Simulator Indonesia for Vivo and Realme devices<br />
- Bus Simulator Indonesia for Nokia and Motorola devices<br />
- Bus Simulator Indonesia for Asus and Lenovo devices<br />
- Bus Simulator Indonesia for ZTE and Alcatel devices<br />
- Bus Simulator Indonesia APK download link from Google Play Store <br />
- Bus Simulator Indonesia APK download link from APK Combo <br />
- Bus Simulator Indonesia APK download link from Softonic <br />
- Bus Simulator Indonesia APK download link from APK Pure <br />
- Bus Simulator Indonesia APK download link from Uptodown <br />
- Bus Simulator Indonesia APK download link from APK Mirror</p>
- <h4>High quality and detailed 3D graphics</h4>
- <p>The game has high quality and detailed 3D graphics that make it look realistic and immersive. You can see the shadows, reflections, textures, lighting, and animations of the game. You can also adjust the graphics settings to optimize the performance of your device.</p>
- <h4>No obstructive ads while driving</h4>
- <p>The game has no obstructive ads while driving. You can enjoy the game without being interrupted by annoying pop-ups or banners. The only ads you will see are on the billboards along the road, which add to the realism of the game.</p>
- <h4>Leaderboard and online multiplayer convoy</h4>
- <p>The game has a leaderboard system that ranks the players based on their score, distance, speed, fuel consumption, and other factors. <table>
- <tr>
- <th>Pros</th>
- <th>Cons</th>
- </tr>
- <tr>
- <td>Free to play and download</td>
- <td>Some bugs and glitches may occur</td>
- </tr>
- <tr>
- <td>Funny and realistic gameplay</td>
- <td>Some features may require in-app purchases</td>
- </tr>
- <tr>
- <td>Creative and customizable options</td>
- <td>Some devices may not support the game well</td>
- </tr>
- <tr>
- <td>Cultural and educational value</td>
- <td>Some content may not be suitable for children</td>
- </tr>
- <tr>
- <td>Social and competitive aspects</td>
- <td>Some players may be rude or abusive online</td>
- </tr>
- </table>
- <h3>User reviews and ratings</h3>
- <p>Bus Simulator Indonesia has received mostly positive reviews and ratings from users. On Google Play Store, it has a rating of 4.4 out of 5 stars, based on over 1.5 million reviews. On App Store, it has a rating of 4.6 out of 5 stars, based on over 16 thousand reviews. Here are some of the user comments:</p>
- <blockquote>"This game is awesome. I love the graphics, the sounds, the controls, and the customization. I feel like I'm really driving a bus in Indonesia. The online mode is also fun and exciting. I recommend this game to anyone who likes driving games."</blockquote>
- <blockquote>"This game is very good and realistic. I like the Indonesian culture and scenery in this game. The buses are also very nice and detailed. The only problem is that sometimes the game crashes or freezes. I hope the developers can fix this issue."</blockquote>
- <blockquote>"This game is very bad and boring. I hate the graphics, the sounds, the controls, and the customization. I feel like I'm wasting my time playing this game. The online mode is also laggy and annoying. I don't recommend this game to anyone who likes driving games."</blockquote>
- <h2>Conclusion</h2>
- <p>Bus Simulator Indonesia is a fun and authentic game for Android that lets you drive a bus in Indonesia. You can design your own livery, drive around different cities and places, honk your horn, and enjoy the realistic 3D graphics. You can also play online with other players and join convoys. To download the game in APK format, you can follow the steps we provided above. You can also check the pros and cons and user reviews and ratings of the game before you play it.</p>
- <p>We hope you enjoyed this article and learned something new about Bus Simulator Indonesia. If you have any questions or feedback, please let us know in the comments below. Thank you for reading!</p>
- <h3>Frequently Asked Questions (FAQs)</h3>
- <ol>
- <li><b>What is the difference between APK and normal download?</b></li>
- <p>An APK file is an Android Package file that contains all the files and data needed to install an app on an Android device. A normal download is a file that can be downloaded from an official app store or website.</p>
- <li><b>Is Bus Simulator Indonesia safe to download?</b></li>
- <p>Yes, Bus Simulator Indonesia is safe to download as long as you download it from a trusted source, such as the official website or app store. You should also scan the file with an antivirus software before installing it.</p>
- <li><b>How can I update Bus Simulator Indonesia?</b></li>
- <p>You can update Bus Simulator Indonesia by downloading the latest version from the official website or app store. You can also enable the automatic update option in your device settings to get notified when a new version is available.</p>
- <li><b>How can I contact the developers of Bus Simulator Indonesia?</b></li>
- <p>You can contact the developers of Bus Simulator Indonesia by sending them an email at <a href="mailto:[email protected]">[email protected]</a> or by visiting their Facebook page at <a href="https://www.facebook.com/bussimulatorid/">https://www.facebook.com/bussimulatorid/</a>.</p>
- <li><b>How can I support Bus Simulator Indonesia?</b></li>
- <p>You can support Bus Simulator Indonesia by giving it a positive review and rating on the app store or website, by sharing it with your friends and family, by making in-app purchases, or by donating to the developers via PayPal or bank transfer.</p>
- </ol></p> 401be4b1e0<br />
- <br />
- <br />
spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/inference.py DELETED
@@ -1,35 +0,0 @@
- import argparse
-
- import cv2
- import numpy as np
- import torch
-
- from backbones import get_model
-
-
- @torch.no_grad()
- def inference(weight, name, img):
- if img is None:
- img = np.random.randint(0, 255, size=(112, 112, 3), dtype=np.uint8)
- else:
- img = cv2.imread(img)
- img = cv2.resize(img, (112, 112))
-
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
- img = np.transpose(img, (2, 0, 1))
- img = torch.from_numpy(img).unsqueeze(0).float()
- img.div_(255).sub_(0.5).div_(0.5)
- net = get_model(name, fp16=False)
- net.load_state_dict(torch.load(weight))
- net.eval()
- feat = net(img).numpy()
- print(feat)
-
-
- if __name__ == "__main__":
- parser = argparse.ArgumentParser(description='PyTorch ArcFace Training')
- parser.add_argument('--network', type=str, default='r50', help='backbone network')
- parser.add_argument('--weight', type=str, default='')
- parser.add_argument('--img', type=str, default=None)
- args = parser.parse_args()
- inference(args.weight, args.network, args.img)
spaces/801artistry/RVC801/go-tensorboard.bat DELETED
@@ -1,2 +0,0 @@
- python fixes/tensor-launch.py
- pause
spaces/AFischer1985/German-Flan-T5/app.py DELETED
@@ -1,40 +0,0 @@
- import gradio as gr
- from transformers import pipeline
- title= "German Flan-T5"
- desc="Kommunikation mit flan-t5-large auf Deutsch wird intern ins Englische (opus-mt-de-en) und vom Englischen (opus-mt-en-de) übersetzt."
- examples = [
- ["Erzähl mit eine Geschichte!",50,2,3,1,"Deutsch"],
- ["Welche Blumen sollte man jemandem zum Valentinstag schenken?",50,1,0,1,"Deutsch"],
- ["Please write a step by step recipe to make bolognese pasta!",50,2,3,2,"Englisch"]
- ]
-
- tDeEn = pipeline(model="Helsinki-NLP/opus-mt-de-en")
- tEnDe = pipeline(model="Helsinki-NLP/opus-mt-en-de")
- bot = pipeline(model="google/flan-t5-large")
-
- def solve(text,max_length,length_penalty,no_repeat_ngram_size,num_beams,language):
- if(language=="Deutsch"):
- text=tDeEn(text)[0]["translation_text"]
- out=bot(text,max_length=max_length, length_penalty=length_penalty, no_repeat_ngram_size=no_repeat_ngram_size, num_beams=num_beams, early_stopping=True)[0]["generated_text"]
- if(language=="Deutsch"):
- out=tEnDe(out)[0]["translation_text"]
- return out
-
- task = gr.Interface(
- fn=solve,
- inputs=[
- gr.Textbox(lines=5,max_lines=6,label="Frage"),
- gr.Slider(minimum=1.0,maximum=200.0,value=50.0,step=1,interactive=True,label="max_length"),
- gr.Slider(minimum=1.0,maximum=20.0,value=1.0,step=1,interactive=True,label="length_penalty"),
- gr.Slider(minimum=0.0,maximum=5.0,value=3.0,step=1,interactive=True,label="no_repeat_ngram_size"),
- gr.Slider(minimum=1.0,maximum=20.0,value=1.0,step=1,interactive=True,label="num_beams"),
- gr.Dropdown(["Deutsch", "Englisch"],value="Deutsch"),
- ],
- outputs="text",
- title=title,
- description=desc,
- examples=examples
- )
-
- if __name__ == "__main__":
- task.launch()
spaces/AILab-CVC/EvalCrafter/src/utils_display.py DELETED
@@ -1,99 +0,0 @@
- from dataclasses import dataclass
-
- # These classes are for user facing column names, to avoid having to change them
- # all around the code when a modif is needed
- @dataclass
- class ColumnContent:
- name: str
- type: str
- displayed_by_default: bool
- hidden: bool = False
-
- def fields(raw_class):
- return [v for k, v in raw_class.__dict__.items() if k[:2] != "__" and k[-2:] != "__"]
-
- @dataclass(frozen=True)
- class AutoEvalColumn: # Auto evals column
- model_type_symbol = ColumnContent("T", "str", True)
- model = ColumnContent("Model", "markdown", True)
- average = ColumnContent("Average ⬆️", "number", True)
- arc = ColumnContent("ARC", "number", True)
- hellaswag = ColumnContent("HellaSwag", "number", True)
- mmlu = ColumnContent("MMLU", "number", True)
- truthfulqa = ColumnContent("TruthfulQA", "number", True)
- model_type = ColumnContent("Type", "str", False)
- precision = ColumnContent("Precision", "str", False, True)
- license = ColumnContent("Hub License", "str", False)
- params = ColumnContent("#Params (B)", "number", False)
- likes = ColumnContent("Hub ❤️", "number", False)
- revision = ColumnContent("Model sha", "str", False, False)
- dummy = ColumnContent("model_name_for_query", "str", True) # dummy col to implement search bar (hidden by custom CSS)
-
- @dataclass(frozen=True)
- class EloEvalColumn: # Elo evals column
- model = ColumnContent("Model", "markdown", True)
- gpt4 = ColumnContent("GPT-4 (all)", "number", True)
- human_all = ColumnContent("Human (all)", "number", True)
- human_instruct = ColumnContent("Human (instruct)", "number", True)
- human_code_instruct = ColumnContent("Human (code-instruct)", "number", True)
-
-
- @dataclass(frozen=True)
- class EvalQueueColumn: # Queue column
- model = ColumnContent("model", "markdown", True)
- revision = ColumnContent("revision", "str", True)
- private = ColumnContent("private", "bool", True)
- precision = ColumnContent("precision", "bool", True)
- weight_type = ColumnContent("weight_type", "str", "Original")
- status = ColumnContent("status", "str", True)
-
- LLAMAS = ["huggingface/llama-7b", "huggingface/llama-13b", "huggingface/llama-30b", "huggingface/llama-65b"]
-
-
- KOALA_LINK = "https://huggingface.co/TheBloke/koala-13B-HF"
- VICUNA_LINK = "https://huggingface.co/lmsys/vicuna-13b-delta-v1.1"
- OASST_LINK = "https://huggingface.co/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5"
- DOLLY_LINK = "https://huggingface.co/databricks/dolly-v2-12b"
- MODEL_PAGE = "https://huggingface.co/models"
- LLAMA_LINK = "https://ai.facebook.com/blog/large-language-model-llama-meta-ai/"
- VICUNA_LINK = "https://huggingface.co/CarperAI/stable-vicuna-13b-delta"
- ALPACA_LINK = "https://crfm.stanford.edu/2023/03/13/alpaca.html"
-
-
- def model_hyperlink(link, model_name):
- return f'<a target="_blank" href="{link}" style="color: var(--link-text-color); text-decoration: underline;text-decoration-style: dotted;">{model_name}</a>'
-
-
- def make_clickable_model(model_name):
- link = f"https://huggingface.co/{model_name}"
-
- if model_name in LLAMAS:
- link = LLAMA_LINK
- model_name = model_name.split("/")[1]
- elif model_name == "HuggingFaceH4/stable-vicuna-13b-2904":
- link = VICUNA_LINK
- model_name = "stable-vicuna-13b"
- elif model_name == "HuggingFaceH4/llama-7b-ift-alpaca":
- link = ALPACA_LINK
- model_name = "alpaca-13b"
- if model_name == "dolly-12b":
- link = DOLLY_LINK
- elif model_name == "vicuna-13b":
- link = VICUNA_LINK
- elif model_name == "koala-13b":
- link = KOALA_LINK
- elif model_name == "oasst-12b":
- link = OASST_LINK
- #else:
- # link = MODEL_PAGE
-
- return model_hyperlink(link, model_name)
-
- def styled_error(error):
- return f"<p style='color: red; font-size: 20px; text-align: center;'>{error}</p>"
-
- def styled_warning(warn):
- return f"<p style='color: orange; font-size: 20px; text-align: center;'>{warn}</p>"
-
- def styled_message(message):
- return f"<p style='color: green; font-size: 20px; text-align: center;'>{message}</p>"
spaces/AISuperheroes/01ST-CSV-Dataset-Analyzer/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: 01ST CSV Dataset Analyzer
- emoji: 🔥
- colorFrom: red
- colorTo: purple
- sdk: streamlit
- sdk_version: 1.10.0
- app_file: app.py
- pinned: false
- license: mit
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/r/[id]/+page.server.ts DELETED
@@ -1,34 +0,0 @@
- import type { PageServerLoad } from "./$types";
- import { collections } from "$lib/server/database";
- import { error } from "@sveltejs/kit";
- import type { WebSearchMessageResult } from "$lib/types/WebSearch";
-
- export const load: PageServerLoad = async ({ params }) => {
- /*const conversation = await collections.sharedConversations.findOne({
- _id: params.id,
- });
-
- if (!conversation) {
- throw error(404, "Conversation not found");
- }
-
- const webSearchesId = conversation.messages
- .filter((message) => message.webSearchId)
- .map((message) => new ObjectId(message.webSearchId));
-
- const results = await collections.webSearches.find({ _id: { $in: webSearchesId } }).toArray();
-
- const searches = Object.fromEntries(
- results.map((x) => [
- x._id.toString(),
- [...x.messages, { type: "result", id: x._id.toString() } satisfies WebSearchMessageResult],
- ])
- );
-
- return {
- messages: conversation.messages,
- title: conversation.title,
- model: conversation.model,
- searches,
- };*/
- };
spaces/AgentVerse/agentVerse/agentverse/tasks/simulation/sde_team/readme.md DELETED
@@ -1,98 +0,0 @@
- # SDE team
-
- In this task, LLMs work as a software development team to solve code implementation problem. We have simulated two scenarios *sde_team/sde_team_2players* and *sde_team/sde_team_3players*.
-
- The performance on [HumanEval](https://github.com/openai/human-eval) is shown below.
-
- | Methods | Pass@1 HumanEval |
- |---------------------------------|-----------|
- | Codex (175B)* | 0.47 |
- | &nbsp;&nbsp;&nbsp;&nbsp;+ CodeT* | 0.658 |
- | PaLM Coder (540B)* | 0.36 |
- | GPT-4* | 0.67 |
- | ChatGPT (gpt-3.5-turbo)* | 0.573 |
- | &nbsp;&nbsp;&nbsp;&nbsp;+ Self-collaboration* | 0.744 |
- | &nbsp;&nbsp;&nbsp;&nbsp;+ Our *sde_team/sde_team_2players* | **0.799** |
-
- *: Results are from [Self-collaboration](https://arxiv.org/abs/2304.07590). The methods in the table all employed the provided unit tests.
-
- Our *sde_team/sde_team_2players* shares the similar spirit as Self-collaboration at the moment. We are working to introduce more features in this repo!
-
-
- ## *sde_team/sde_team_2players*
-
- In this case, we are simulating a code generation problem that a python function body is required to be generated given function signature, doc string and unit tests. In the following, we will elaborate the details.
-
- ### Roles
-
- Detailed role description and prompts can be found in `config.yaml`
-
- #### *code writer*
-
- Code writer writes the code to satisfy the given requirement. The requirement is given in the \<problem\> field of the prompt. The code writer first thinks about the task (the thoughts written in \<thoughts\>) and then write the code in \<code\>.
-
- The submitted code will be tested automatically on a series of unit tests. Then the feedback (in \<unit test feedback\>) together with a professional code review (in \<code review\>) will be returned. Then code writer will leverage this information to refine the previously submitted code. The refinement will take multiple iterations.
-
- #### *code reviewer*
-
- Code reviewer will write professional review for the submitted code. The submitted code will be given in \<submitted code\>, the execution feedback of unit tests will be given in \<unit tests feedback\> and the review will be composed in \<code review\>.
-
- #### dummy *code tester*
- Code tester is a dummy agent. In the current implementation, unit tests are executed via the local python code `agentverse/environments/rules/selector/code_api.py`. We will integrate the execution tools to BMTools soon.
-
- ### How to run the simulation
-
- #### Provide problem and unit tests
-
- The code problem and unit tests should be given in `agentverse/tasks/sde_team/sde_team_2players/code_problem.json`. Here is an example.
-
- ```json
- {
- "problem": "from typing import List\n\n\ndef separate_paren_groups(paren_string: str) -> List[str]:\n \"\"\" Input to this function is a string containing multiple groups of nested parentheses. Your goal is to\n separate those group into separate strings and return the list of those.\n Separate groups are balanced (each open brace is properly closed) and not nested within each other\n Ignore any spaces in the input string.\n >>> separate_paren_groups('( ) (( )) (( )( ))')\n ['()', '(())', '(()())']\n \"\"\"\n",
- "unit_tests": [
- "assert separate_paren_groups('(()()) ((())) () ((())()())') == ['(()())', '((()))', '()', '((())()())']",
- "assert separate_paren_groups('() (()) ((())) (((())))') == ['()', '(())', '((()))', '(((())))']",
- "assert separate_paren_groups('(()(())((())))') == ['(()(())((())))']",
- "assert separate_paren_groups('( ) (( )) (( )( ))') == ['()', '(())', '(()())']"
- ]
- }
- ```
-
- #### Build the configuration file
-
- Run `agentverse/tasks/sde_team/sde_team_2players/build_config.py` to generate `config.yaml`.
-
- ```bash
- cd agentverse/tasks/sde_team/sde_team_2players/
- python build_config.py
- ```
-
- #### Run the session
-
- After generating `config.yaml`, run the `main.py` to start the task.
-
- ```python
- import os
- from agentverse.agentverse import AgentVerse
- from argparse import ArgumentParser
-
- parser = ArgumentParser()
- parser.add_argument("--task", type=str, default="sde_team/sde_team_2players")
- parser.add_argument("--tasks_dir", type=str, default=os.path.join(
- os.path.dirname(__file__), "agentverse", "tasks"))
-
- args = parser.parse_args()
- agentverse = AgentVerse.from_task(args.task, args.tasks_dir)
- agentverse.run()
- ```
-
-
- ## *sde_team/sde_team_3players*
-
- Different from *sde_team/sde_team_2players*, we additionally introduce a role to automatically generate unit tests.
-
- - *unit test generator*: generate a series of unit test cases for the coding problem.
-
- ### Stay tuned
-
- The generated unit tests are not always perfect, as they may not be correct. We plan to incorporate tools to raise the correctness of the generated cases.
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/ymlachievements-plugin.d.ts DELETED
@@ -1,6 +0,0 @@
- import Achievements from './ymlachievements';
-
- export default class AchievementsPlugin extends Phaser.Plugins.BasePlugin {
- add(): Achievements;
-
- }
spaces/AlanMars/QYL-AI-Space/assets/external-scripts.js DELETED
@@ -1,2 +0,0 @@
-
- // external javascript here
spaces/Amrrs/DragGan-Inversion/PTI/criteria/l2_loss.py DELETED
@@ -1,8 +0,0 @@
- import torch
-
- l2_criterion = torch.nn.MSELoss(reduction='mean')
-
-
- def l2_loss(real_images, generated_images):
- loss = l2_criterion(real_images, generated_images)
- return loss
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/adapt_a_model.md DELETED
@@ -1,42 +0,0 @@
- # Adapt a model to a new task
-
- Many diffusion systems share the same components, allowing you to adapt a pretrained model for one task to an entirely different task.
-
- This guide will show you how to adapt a pretrained text-to-image model for inpainting by initializing and modifying the architecture of a pretrained [`UNet2DConditionModel`].
-
- ## Configure UNet2DConditionModel parameters
-
- A [`UNet2DConditionModel`] by default accepts 4 channels in the [input sample](https://huggingface.co/docs/diffusers/v0.16.0/en/api/models#diffusers.UNet2DConditionModel.in_channels). For example, load a pretrained text-to-image model like [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) and take a look at the number of `in_channels`:
-
- ```py
- from diffusers import StableDiffusionPipeline
-
- pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
- pipeline.unet.config["in_channels"]
- 4
- ```
-
- Inpainting requires 9 channels in the input sample. You can check this value in a pretrained inpainting model like [`runwayml/stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting):
-
- ```py
- from diffusers import StableDiffusionPipeline
-
- pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-inpainting")
- pipeline.unet.config["in_channels"]
- 9
- ```
-
- To adapt your text-to-image model for inpainting, you'll need to change the number of `in_channels` from 4 to 9.
-
- Initialize a [`UNet2DConditionModel`] with the pretrained text-to-image model weights, and change `in_channels` to 9. Changing the number of `in_channels` means you need to set `ignore_mismatched_sizes=True` and `low_cpu_mem_usage=False` to avoid a size mismatch error because the shape is different now.
-
- ```py
- from diffusers import UNet2DConditionModel
-
- model_id = "runwayml/stable-diffusion-v1-5"
- unet = UNet2DConditionModel.from_pretrained(
- model_id, subfolder="unet", in_channels=9, low_cpu_mem_usage=False, ignore_mismatched_sizes=True
- )
- ```
-
- The pretrained weights of the other components from the text-to-image model are initialized from their checkpoints, but the input channel weights (`conv_in.weight`) of the `unet` are randomly initialized. It is important to finetune the model for inpainting because otherwise the model returns noise.
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/onnx_utils.py DELETED
@@ -1,212 +0,0 @@
1
- # coding=utf-8
2
- # Copyright 2023 The HuggingFace Inc. team.
3
- # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
4
- #
5
- # Licensed under the Apache License, Version 2.0 (the "License");
6
- # you may not use this file except in compliance with the License.
7
- # You may obtain a copy of the License at
8
- #
9
- # http://www.apache.org/licenses/LICENSE-2.0
10
- #
11
- # Unless required by applicable law or agreed to in writing, software
12
- # distributed under the License is distributed on an "AS IS" BASIS,
13
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
- # See the License for the specific language governing permissions and
15
- # limitations under the License.
16
-
17
-
18
- import os
19
- import shutil
20
- from pathlib import Path
21
- from typing import Optional, Union
22
-
23
- import numpy as np
24
- from huggingface_hub import hf_hub_download
25
-
26
- from ..utils import ONNX_EXTERNAL_WEIGHTS_NAME, ONNX_WEIGHTS_NAME, is_onnx_available, logging
27
-
28
-
29
- if is_onnx_available():
30
- import onnxruntime as ort
31
-
32
-
33
- logger = logging.get_logger(__name__)
34
-
35
- ORT_TO_NP_TYPE = {
36
- "tensor(bool)": np.bool_,
37
- "tensor(int8)": np.int8,
38
- "tensor(uint8)": np.uint8,
39
- "tensor(int16)": np.int16,
40
- "tensor(uint16)": np.uint16,
41
- "tensor(int32)": np.int32,
42
- "tensor(uint32)": np.uint32,
43
- "tensor(int64)": np.int64,
44
- "tensor(uint64)": np.uint64,
45
- "tensor(float16)": np.float16,
46
- "tensor(float)": np.float32,
47
- "tensor(double)": np.float64,
48
- }
49
-
50
-
51
- class OnnxRuntimeModel:
52
- def __init__(self, model=None, **kwargs):
53
- logger.info("`diffusers.OnnxRuntimeModel` is experimental and might change in the future.")
54
- self.model = model
55
- self.model_save_dir = kwargs.get("model_save_dir", None)
56
- self.latest_model_name = kwargs.get("latest_model_name", ONNX_WEIGHTS_NAME)
57
-
58
- def __call__(self, **kwargs):
59
- inputs = {k: np.array(v) for k, v in kwargs.items()}
60
- return self.model.run(None, inputs)
61
-
62
- @staticmethod
63
- def load_model(path: Union[str, Path], provider=None, sess_options=None):
64
- """
65
- Loads an ONNX Inference session with an ExecutionProvider. Default provider is `CPUExecutionProvider`
66
-
67
- Arguments:
68
- path (`str` or `Path`):
69
- Directory from which to load
70
- provider(`str`, *optional*):
71
- Onnxruntime execution provider to use for loading the model, defaults to `CPUExecutionProvider`
72
- """
73
- if provider is None:
74
- logger.info("No onnxruntime provider specified, using CPUExecutionProvider")
75
- provider = "CPUExecutionProvider"
76
-
77
- return ort.InferenceSession(path, providers=[provider], sess_options=sess_options)
78
-
79
- def _save_pretrained(self, save_directory: Union[str, Path], file_name: Optional[str] = None, **kwargs):
80
- """
81
- Save a model and its configuration file to a directory, so that it can be re-loaded using the
82
- [`~optimum.onnxruntime.modeling_ort.ORTModel.from_pretrained`] class method. It will always save the
83
- latest_model_name.
84
-
85
- Arguments:
86
- save_directory (`str` or `Path`):
87
- Directory where to save the model file.
88
- file_name(`str`, *optional*):
89
- Overwrites the default model file name from `"model.onnx"` to `file_name`. This allows you to save the
90
- model with a different name.
91
- """
92
- model_file_name = file_name if file_name is not None else ONNX_WEIGHTS_NAME
93
-
94
- src_path = self.model_save_dir.joinpath(self.latest_model_name)
95
- dst_path = Path(save_directory).joinpath(model_file_name)
96
- try:
97
- shutil.copyfile(src_path, dst_path)
98
- except shutil.SameFileError:
99
- pass
100
-
101
- # copy external weights (for models >2GB)
102
- src_path = self.model_save_dir.joinpath(ONNX_EXTERNAL_WEIGHTS_NAME)
103
- if src_path.exists():
104
- dst_path = Path(save_directory).joinpath(ONNX_EXTERNAL_WEIGHTS_NAME)
105
- try:
106
- shutil.copyfile(src_path, dst_path)
107
- except shutil.SameFileError:
108
- pass
109
-
110
- def save_pretrained(
111
- self,
112
- save_directory: Union[str, os.PathLike],
113
- **kwargs,
114
- ):
115
- """
116
- Save a model to a directory, so that it can be re-loaded using the [`~OnnxModel.from_pretrained`] class
117
- method.:
118
-
119
- Arguments:
120
- save_directory (`str` or `os.PathLike`):
121
- Directory to which to save. Will be created if it doesn't exist.
122
- """
123
- if os.path.isfile(save_directory):
124
- logger.error(f"Provided path ({save_directory}) should be a directory, not a file")
125
- return
126
-
127
- os.makedirs(save_directory, exist_ok=True)
128
-
129
- # saving model weights/files
130
- self._save_pretrained(save_directory, **kwargs)
131
-
132
- @classmethod
133
- def _from_pretrained(
134
- cls,
135
- model_id: Union[str, Path],
136
- use_auth_token: Optional[Union[bool, str, None]] = None,
137
- revision: Optional[Union[str, None]] = None,
138
- force_download: bool = False,
139
- cache_dir: Optional[str] = None,
140
- file_name: Optional[str] = None,
141
- provider: Optional[str] = None,
142
- sess_options: Optional["ort.SessionOptions"] = None,
143
- **kwargs,
144
- ):
145
- """
146
- Load a model from a directory or the HF Hub.
147
-
148
- Arguments:
149
- model_id (`str` or `Path`):
150
- Directory from which to load
151
- use_auth_token (`str` or `bool`):
152
- Is needed to load models from a private or gated repository
153
- revision (`str`):
154
- Revision is the specific model version to use. It can be a branch name, a tag name, or a commit id
155
- cache_dir (`Union[str, Path]`, *optional*):
156
- Path to a directory in which a downloaded pretrained model configuration should be cached if the
157
- standard cache should not be used.
158
- force_download (`bool`, *optional*, defaults to `False`):
159
- Whether or not to force the (re-)download of the model weights and configuration files, overriding the
160
- cached versions if they exist.
161
- file_name(`str`):
162
- Overwrites the default model file name from `"model.onnx"` to `file_name`. This allows you to load
163
- different model files from the same repository or directory.
164
- provider(`str`):
165
- The ONNX runtime provider, e.g. `CPUExecutionProvider` or `CUDAExecutionProvider`.
166
- kwargs (`Dict`, *optional*):
167
- kwargs will be passed to the model during initialization
168
- """
169
- model_file_name = file_name if file_name is not None else ONNX_WEIGHTS_NAME
170
- # load model from local directory
171
- if os.path.isdir(model_id):
172
- model = OnnxRuntimeModel.load_model(
173
- os.path.join(model_id, model_file_name), provider=provider, sess_options=sess_options
174
- )
175
- kwargs["model_save_dir"] = Path(model_id)
176
- # load model from hub
177
- else:
178
- # download model
179
- model_cache_path = hf_hub_download(
180
- repo_id=model_id,
181
- filename=model_file_name,
182
- use_auth_token=use_auth_token,
183
- revision=revision,
184
- cache_dir=cache_dir,
185
- force_download=force_download,
186
- )
187
- kwargs["model_save_dir"] = Path(model_cache_path).parent
188
- kwargs["latest_model_name"] = Path(model_cache_path).name
189
- model = OnnxRuntimeModel.load_model(model_cache_path, provider=provider, sess_options=sess_options)
190
- return cls(model=model, **kwargs)
191
-
192
- @classmethod
193
- def from_pretrained(
194
- cls,
195
- model_id: Union[str, Path],
196
- force_download: bool = True,
197
- use_auth_token: Optional[str] = None,
198
- cache_dir: Optional[str] = None,
199
- **model_kwargs,
200
- ):
201
- revision = None
202
- if len(str(model_id).split("@")) == 2:
203
- model_id, revision = model_id.split("@")
204
-
205
- return cls._from_pretrained(
206
- model_id=model_id,
207
- revision=revision,
208
- cache_dir=cache_dir,
209
- force_download=force_download,
210
- use_auth_token=use_auth_token,
211
- **model_kwargs,
212
- )
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/altdiffusion/test_alt_diffusion_img2img.py DELETED
@@ -1,300 +0,0 @@
1
- # coding=utf-8
2
- # Copyright 2023 HuggingFace Inc.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
-
16
- import gc
17
- import random
18
- import unittest
19
-
20
- import numpy as np
21
- import torch
22
- from transformers import XLMRobertaTokenizer
23
-
24
- from diffusers import (
25
- AltDiffusionImg2ImgPipeline,
26
- AutoencoderKL,
27
- PNDMScheduler,
28
- UNet2DConditionModel,
29
- )
30
- from diffusers.image_processor import VaeImageProcessor
31
- from diffusers.pipelines.alt_diffusion.modeling_roberta_series import (
32
- RobertaSeriesConfig,
33
- RobertaSeriesModelWithTransformation,
34
- )
35
- from diffusers.utils import floats_tensor, load_image, load_numpy, slow, torch_device
36
- from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
37
-
38
-
39
- enable_full_determinism()
40
-
41
-
42
- class AltDiffusionImg2ImgPipelineFastTests(unittest.TestCase):
43
- def tearDown(self):
44
- # clean up the VRAM after each test
45
- super().tearDown()
46
- gc.collect()
47
- torch.cuda.empty_cache()
48
-
49
- @property
50
- def dummy_image(self):
51
- batch_size = 1
52
- num_channels = 3
53
- sizes = (32, 32)
54
-
55
- image = floats_tensor((batch_size, num_channels) + sizes, rng=random.Random(0)).to(torch_device)
56
- return image
57
-
58
- @property
59
- def dummy_cond_unet(self):
60
- torch.manual_seed(0)
61
- model = UNet2DConditionModel(
62
- block_out_channels=(32, 64),
63
- layers_per_block=2,
64
- sample_size=32,
65
- in_channels=4,
66
- out_channels=4,
67
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
68
- up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
69
- cross_attention_dim=32,
70
- )
71
- return model
72
-
73
- @property
74
- def dummy_vae(self):
75
- torch.manual_seed(0)
76
- model = AutoencoderKL(
77
- block_out_channels=[32, 64],
78
- in_channels=3,
79
- out_channels=3,
80
- down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
81
- up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
82
- latent_channels=4,
83
- )
84
- return model
85
-
86
- @property
87
- def dummy_text_encoder(self):
88
- torch.manual_seed(0)
89
- config = RobertaSeriesConfig(
90
- hidden_size=32,
91
- project_dim=32,
92
- intermediate_size=37,
93
- layer_norm_eps=1e-05,
94
- num_attention_heads=4,
95
- num_hidden_layers=5,
96
- pad_token_id=1,
97
- vocab_size=5006,
98
- )
99
- return RobertaSeriesModelWithTransformation(config)
100
-
101
- @property
102
- def dummy_extractor(self):
103
- def extract(*args, **kwargs):
104
- class Out:
105
- def __init__(self):
106
- self.pixel_values = torch.ones([0])
107
-
108
- def to(self, device):
109
- self.pixel_values.to(device)
110
- return self
111
-
112
- return Out()
113
-
114
- return extract
115
-
116
- def test_stable_diffusion_img2img_default_case(self):
117
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
118
- unet = self.dummy_cond_unet
119
- scheduler = PNDMScheduler(skip_prk_steps=True)
120
- vae = self.dummy_vae
121
- bert = self.dummy_text_encoder
122
- tokenizer = XLMRobertaTokenizer.from_pretrained("hf-internal-testing/tiny-xlm-roberta")
123
- tokenizer.model_max_length = 77
124
-
125
- init_image = self.dummy_image.to(device)
126
- init_image = init_image / 2 + 0.5
127
-
128
- # make sure here that pndm scheduler skips prk
129
- alt_pipe = AltDiffusionImg2ImgPipeline(
130
- unet=unet,
131
- scheduler=scheduler,
132
- vae=vae,
133
- text_encoder=bert,
134
- tokenizer=tokenizer,
135
- safety_checker=None,
136
- feature_extractor=self.dummy_extractor,
137
- )
138
- alt_pipe.image_processor = VaeImageProcessor(vae_scale_factor=alt_pipe.vae_scale_factor, do_normalize=True)
139
- alt_pipe = alt_pipe.to(device)
140
- alt_pipe.set_progress_bar_config(disable=None)
141
-
142
- prompt = "A painting of a squirrel eating a burger"
143
- generator = torch.Generator(device=device).manual_seed(0)
144
- output = alt_pipe(
145
- [prompt],
146
- generator=generator,
147
- guidance_scale=6.0,
148
- num_inference_steps=2,
149
- output_type="np",
150
- image=init_image,
151
- )
152
-
153
- image = output.images
154
-
155
- generator = torch.Generator(device=device).manual_seed(0)
156
- image_from_tuple = alt_pipe(
157
- [prompt],
158
- generator=generator,
159
- guidance_scale=6.0,
160
- num_inference_steps=2,
161
- output_type="np",
162
- image=init_image,
163
- return_dict=False,
164
- )[0]
165
-
166
- image_slice = image[0, -3:, -3:, -1]
167
- image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1]
168
-
169
- assert image.shape == (1, 32, 32, 3)
170
- expected_slice = np.array([0.4427, 0.3731, 0.4249, 0.4941, 0.4546, 0.4148, 0.4193, 0.4666, 0.4499])
171
-
172
- assert np.abs(image_slice.flatten() - expected_slice).max() < 5e-3
173
- assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 5e-3
174
-
175
- @unittest.skipIf(torch_device != "cuda", "This test requires a GPU")
176
- def test_stable_diffusion_img2img_fp16(self):
177
- """Test that stable diffusion img2img works with fp16"""
178
- unet = self.dummy_cond_unet
179
- scheduler = PNDMScheduler(skip_prk_steps=True)
180
- vae = self.dummy_vae
181
- bert = self.dummy_text_encoder
182
- tokenizer = XLMRobertaTokenizer.from_pretrained("hf-internal-testing/tiny-xlm-roberta")
183
- tokenizer.model_max_length = 77
184
-
185
- init_image = self.dummy_image.to(torch_device)
186
-
187
- # put models in fp16
188
- unet = unet.half()
189
- vae = vae.half()
190
- bert = bert.half()
191
-
192
- # make sure here that pndm scheduler skips prk
193
- alt_pipe = AltDiffusionImg2ImgPipeline(
194
- unet=unet,
195
- scheduler=scheduler,
196
- vae=vae,
197
- text_encoder=bert,
198
- tokenizer=tokenizer,
199
- safety_checker=None,
200
- feature_extractor=self.dummy_extractor,
201
- )
202
- alt_pipe.image_processor = VaeImageProcessor(vae_scale_factor=alt_pipe.vae_scale_factor, do_normalize=False)
203
- alt_pipe = alt_pipe.to(torch_device)
204
- alt_pipe.set_progress_bar_config(disable=None)
205
-
206
- prompt = "A painting of a squirrel eating a burger"
207
- generator = torch.manual_seed(0)
208
- image = alt_pipe(
209
- [prompt],
210
- generator=generator,
211
- num_inference_steps=2,
212
- output_type="np",
213
- image=init_image,
214
- ).images
215
-
216
- assert image.shape == (1, 32, 32, 3)
217
-
218
- @unittest.skipIf(torch_device != "cuda", "This test requires a GPU")
219
- def test_stable_diffusion_img2img_pipeline_multiple_of_8(self):
220
- init_image = load_image(
221
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
222
- "/img2img/sketch-mountains-input.jpg"
223
- )
224
- # resize to resolution that is divisible by 8 but not 16 or 32
225
- init_image = init_image.resize((760, 504))
226
-
227
- model_id = "BAAI/AltDiffusion"
228
- pipe = AltDiffusionImg2ImgPipeline.from_pretrained(
229
- model_id,
230
- safety_checker=None,
231
- )
232
- pipe.to(torch_device)
233
- pipe.set_progress_bar_config(disable=None)
234
- pipe.enable_attention_slicing()
235
-
236
- prompt = "A fantasy landscape, trending on artstation"
237
-
238
- generator = torch.manual_seed(0)
239
- output = pipe(
240
- prompt=prompt,
241
- image=init_image,
242
- strength=0.75,
243
- guidance_scale=7.5,
244
- generator=generator,
245
- output_type="np",
246
- )
247
- image = output.images[0]
248
-
249
- image_slice = image[255:258, 383:386, -1]
250
-
251
- assert image.shape == (504, 760, 3)
252
- expected_slice = np.array([0.9358, 0.9397, 0.9599, 0.9901, 1.0000, 1.0000, 0.9882, 1.0000, 1.0000])
253
-
254
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
255
-
256
-
257
- @slow
258
- @require_torch_gpu
259
- class AltDiffusionImg2ImgPipelineIntegrationTests(unittest.TestCase):
260
- def tearDown(self):
261
- # clean up the VRAM after each test
262
- super().tearDown()
263
- gc.collect()
264
- torch.cuda.empty_cache()
265
-
266
- def test_stable_diffusion_img2img_pipeline_default(self):
267
- init_image = load_image(
268
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
269
- "/img2img/sketch-mountains-input.jpg"
270
- )
271
- init_image = init_image.resize((768, 512))
272
- expected_image = load_numpy(
273
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/img2img/fantasy_landscape_alt.npy"
274
- )
275
-
276
- model_id = "BAAI/AltDiffusion"
277
- pipe = AltDiffusionImg2ImgPipeline.from_pretrained(
278
- model_id,
279
- safety_checker=None,
280
- )
281
- pipe.to(torch_device)
282
- pipe.set_progress_bar_config(disable=None)
283
- pipe.enable_attention_slicing()
284
-
285
- prompt = "A fantasy landscape, trending on artstation"
286
-
287
- generator = torch.manual_seed(0)
288
- output = pipe(
289
- prompt=prompt,
290
- image=init_image,
291
- strength=0.75,
292
- guidance_scale=7.5,
293
- generator=generator,
294
- output_type="np",
295
- )
296
- image = output.images[0]
297
-
298
- assert image.shape == (512, 768, 3)
299
- # img2img is flaky across GPUs even in fp32, so using MAE here
300
- assert np.abs(expected_image - image).max() < 1e-2
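
For reference, a minimal standalone sketch of the img2img flow the integration test above exercises (assumes a CUDA device and network access to the BAAI/AltDiffusion checkpoint; illustrative only):

import torch
from diffusers import AltDiffusionImg2ImgPipeline
from diffusers.utils import load_image

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
    "/img2img/sketch-mountains-input.jpg"
).resize((768, 512))

pipe = AltDiffusionImg2ImgPipeline.from_pretrained("BAAI/AltDiffusion", safety_checker=None)
pipe.to("cuda")

# seeded generator so repeated runs reproduce the same result, as in the test
generator = torch.manual_seed(0)
output = pipe(
    prompt="A fantasy landscape, trending on artstation",
    image=init_image,
    strength=0.75,
    guidance_scale=7.5,
    generator=generator,
)
output.images[0].save("fantasy_landscape.png")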
spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/resnest.py DELETED
@@ -1,317 +0,0 @@
1
- import math
2
-
3
- import torch
4
- import torch.nn as nn
5
- import torch.nn.functional as F
6
- import torch.utils.checkpoint as cp
7
- from mmcv.cnn import build_conv_layer, build_norm_layer
8
-
9
- from ..builder import BACKBONES
10
- from ..utils import ResLayer
11
- from .resnet import Bottleneck as _Bottleneck
12
- from .resnet import ResNetV1d
13
-
14
-
15
- class RSoftmax(nn.Module):
16
- """Radix Softmax module in ``SplitAttentionConv2d``.
17
-
18
- Args:
19
- radix (int): Radix of input.
20
- groups (int): Groups of input.
21
- """
22
-
23
- def __init__(self, radix, groups):
24
- super().__init__()
25
- self.radix = radix
26
- self.groups = groups
27
-
28
- def forward(self, x):
29
- batch = x.size(0)
30
- if self.radix > 1:
31
- x = x.view(batch, self.groups, self.radix, -1).transpose(1, 2)
32
- x = F.softmax(x, dim=1)
33
- x = x.reshape(batch, -1)
34
- else:
35
- x = torch.sigmoid(x)
36
- return x
37
-
38
-
39
- class SplitAttentionConv2d(nn.Module):
40
- """Split-Attention Conv2d in ResNeSt.
41
-
42
- Args:
43
- in_channels (int): Number of channels in the input feature map.
44
- channels (int): Number of intermediate channels.
45
- kernel_size (int | tuple[int]): Size of the convolution kernel.
46
- stride (int | tuple[int]): Stride of the convolution.
47
- padding (int | tuple[int]): Zero-padding added to both sides of
48
- dilation (int | tuple[int]): Spacing between kernel elements.
49
- groups (int): Number of blocked connections from input channels to
50
- output channels.
51
- groups (int): Same as nn.Conv2d.
52
- radix (int): Radix of SpltAtConv2d. Default: 2
53
- reduction_factor (int): Reduction factor of inter_channels. Default: 4.
54
- conv_cfg (dict): Config dict for convolution layer. Default: None,
55
- which means using conv2d.
56
- norm_cfg (dict): Config dict for normalization layer. Default: None.
57
- dcn (dict): Config dict for DCN. Default: None.
58
- """
59
-
60
- def __init__(self,
61
- in_channels,
62
- channels,
63
- kernel_size,
64
- stride=1,
65
- padding=0,
66
- dilation=1,
67
- groups=1,
68
- radix=2,
69
- reduction_factor=4,
70
- conv_cfg=None,
71
- norm_cfg=dict(type='BN'),
72
- dcn=None):
73
- super(SplitAttentionConv2d, self).__init__()
74
- inter_channels = max(in_channels * radix // reduction_factor, 32)
75
- self.radix = radix
76
- self.groups = groups
77
- self.channels = channels
78
- self.with_dcn = dcn is not None
79
- self.dcn = dcn
80
- fallback_on_stride = False
81
- if self.with_dcn:
82
- fallback_on_stride = self.dcn.pop('fallback_on_stride', False)
83
- if self.with_dcn and not fallback_on_stride:
84
- assert conv_cfg is None, 'conv_cfg must be None for DCN'
85
- conv_cfg = dcn
86
- self.conv = build_conv_layer(
87
- conv_cfg,
88
- in_channels,
89
- channels * radix,
90
- kernel_size,
91
- stride=stride,
92
- padding=padding,
93
- dilation=dilation,
94
- groups=groups * radix,
95
- bias=False)
96
- # To be consistent with original implementation, starting from 0
97
- self.norm0_name, norm0 = build_norm_layer(
98
- norm_cfg, channels * radix, postfix=0)
99
- self.add_module(self.norm0_name, norm0)
100
- self.relu = nn.ReLU(inplace=True)
101
- self.fc1 = build_conv_layer(
102
- None, channels, inter_channels, 1, groups=self.groups)
103
- self.norm1_name, norm1 = build_norm_layer(
104
- norm_cfg, inter_channels, postfix=1)
105
- self.add_module(self.norm1_name, norm1)
106
- self.fc2 = build_conv_layer(
107
- None, inter_channels, channels * radix, 1, groups=self.groups)
108
- self.rsoftmax = RSoftmax(radix, groups)
109
-
110
- @property
111
- def norm0(self):
112
- """nn.Module: the normalization layer named "norm0" """
113
- return getattr(self, self.norm0_name)
114
-
115
- @property
116
- def norm1(self):
117
- """nn.Module: the normalization layer named "norm1" """
118
- return getattr(self, self.norm1_name)
119
-
120
- def forward(self, x):
121
- x = self.conv(x)
122
- x = self.norm0(x)
123
- x = self.relu(x)
124
-
125
- batch, rchannel = x.shape[:2]
126
- batch = x.size(0)
127
- if self.radix > 1:
128
- splits = x.view(batch, self.radix, -1, *x.shape[2:])
129
- gap = splits.sum(dim=1)
130
- else:
131
- gap = x
132
- gap = F.adaptive_avg_pool2d(gap, 1)
133
- gap = self.fc1(gap)
134
-
135
- gap = self.norm1(gap)
136
- gap = self.relu(gap)
137
-
138
- atten = self.fc2(gap)
139
- atten = self.rsoftmax(atten).view(batch, -1, 1, 1)
140
-
141
- if self.radix > 1:
142
- attens = atten.view(batch, self.radix, -1, *atten.shape[2:])
143
- out = torch.sum(attens * splits, dim=1)
144
- else:
145
- out = atten * x
146
- return out.contiguous()
147
-
148
-
149
- class Bottleneck(_Bottleneck):
150
- """Bottleneck block for ResNeSt.
151
-
152
- Args:
153
- inplane (int): Input planes of this block.
154
- planes (int): Middle planes of this block.
155
- groups (int): Groups of conv2.
156
- base_width (int): Base of width in terms of base channels. Default: 4.
157
- base_channels (int): Base of channels for calculating width.
158
- Default: 64.
159
- radix (int): Radix of SpltAtConv2d. Default: 2
160
- reduction_factor (int): Reduction factor of inter_channels in
161
- SplitAttentionConv2d. Default: 4.
162
- avg_down_stride (bool): Whether to use average pool for stride in
163
- Bottleneck. Default: True.
164
- kwargs (dict): Key word arguments for base class.
165
- """
166
- expansion = 4
167
-
168
- def __init__(self,
169
- inplanes,
170
- planes,
171
- groups=1,
172
- base_width=4,
173
- base_channels=64,
174
- radix=2,
175
- reduction_factor=4,
176
- avg_down_stride=True,
177
- **kwargs):
178
- """Bottleneck block for ResNeSt."""
179
- super(Bottleneck, self).__init__(inplanes, planes, **kwargs)
180
-
181
- if groups == 1:
182
- width = self.planes
183
- else:
184
- width = math.floor(self.planes *
185
- (base_width / base_channels)) * groups
186
-
187
- self.avg_down_stride = avg_down_stride and self.conv2_stride > 1
188
-
189
- self.norm1_name, norm1 = build_norm_layer(
190
- self.norm_cfg, width, postfix=1)
191
- self.norm3_name, norm3 = build_norm_layer(
192
- self.norm_cfg, self.planes * self.expansion, postfix=3)
193
-
194
- self.conv1 = build_conv_layer(
195
- self.conv_cfg,
196
- self.inplanes,
197
- width,
198
- kernel_size=1,
199
- stride=self.conv1_stride,
200
- bias=False)
201
- self.add_module(self.norm1_name, norm1)
202
- self.with_modulated_dcn = False
203
- self.conv2 = SplitAttentionConv2d(
204
- width,
205
- width,
206
- kernel_size=3,
207
- stride=1 if self.avg_down_stride else self.conv2_stride,
208
- padding=self.dilation,
209
- dilation=self.dilation,
210
- groups=groups,
211
- radix=radix,
212
- reduction_factor=reduction_factor,
213
- conv_cfg=self.conv_cfg,
214
- norm_cfg=self.norm_cfg,
215
- dcn=self.dcn)
216
- delattr(self, self.norm2_name)
217
-
218
- if self.avg_down_stride:
219
- self.avd_layer = nn.AvgPool2d(3, self.conv2_stride, padding=1)
220
-
221
- self.conv3 = build_conv_layer(
222
- self.conv_cfg,
223
- width,
224
- self.planes * self.expansion,
225
- kernel_size=1,
226
- bias=False)
227
- self.add_module(self.norm3_name, norm3)
228
-
229
- def forward(self, x):
230
-
231
- def _inner_forward(x):
232
- identity = x
233
-
234
- out = self.conv1(x)
235
- out = self.norm1(out)
236
- out = self.relu(out)
237
-
238
- if self.with_plugins:
239
- out = self.forward_plugin(out, self.after_conv1_plugin_names)
240
-
241
- out = self.conv2(out)
242
-
243
- if self.avg_down_stride:
244
- out = self.avd_layer(out)
245
-
246
- if self.with_plugins:
247
- out = self.forward_plugin(out, self.after_conv2_plugin_names)
248
-
249
- out = self.conv3(out)
250
- out = self.norm3(out)
251
-
252
- if self.with_plugins:
253
- out = self.forward_plugin(out, self.after_conv3_plugin_names)
254
-
255
- if self.downsample is not None:
256
- identity = self.downsample(x)
257
-
258
- out += identity
259
-
260
- return out
261
-
262
- if self.with_cp and x.requires_grad:
263
- out = cp.checkpoint(_inner_forward, x)
264
- else:
265
- out = _inner_forward(x)
266
-
267
- out = self.relu(out)
268
-
269
- return out
270
-
271
-
272
- @BACKBONES.register_module()
273
- class ResNeSt(ResNetV1d):
274
- """ResNeSt backbone.
275
-
276
- Args:
277
- groups (int): Number of groups of Bottleneck. Default: 1
278
- base_width (int): Base width of Bottleneck. Default: 4
279
- radix (int): Radix of SplitAttentionConv2d. Default: 2
280
- reduction_factor (int): Reduction factor of inter_channels in
281
- SplitAttentionConv2d. Default: 4.
282
- avg_down_stride (bool): Whether to use average pool for stride in
283
- Bottleneck. Default: True.
284
- kwargs (dict): Keyword arguments for ResNet.
285
- """
286
-
287
- arch_settings = {
288
- 50: (Bottleneck, (3, 4, 6, 3)),
289
- 101: (Bottleneck, (3, 4, 23, 3)),
290
- 152: (Bottleneck, (3, 8, 36, 3)),
291
- 200: (Bottleneck, (3, 24, 36, 3))
292
- }
293
-
294
- def __init__(self,
295
- groups=1,
296
- base_width=4,
297
- radix=2,
298
- reduction_factor=4,
299
- avg_down_stride=True,
300
- **kwargs):
301
- self.groups = groups
302
- self.base_width = base_width
303
- self.radix = radix
304
- self.reduction_factor = reduction_factor
305
- self.avg_down_stride = avg_down_stride
306
- super(ResNeSt, self).__init__(**kwargs)
307
-
308
- def make_res_layer(self, **kwargs):
309
- """Pack all blocks in a stage into a ``ResLayer``."""
310
- return ResLayer(
311
- groups=self.groups,
312
- base_width=self.base_width,
313
- base_channels=self.base_channels,
314
- radix=self.radix,
315
- reduction_factor=self.reduction_factor,
316
- avg_down_stride=self.avg_down_stride,
317
- **kwargs)
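
A rough usage sketch for the backbone above (assumes the mmdet package in this repo is importable; the input size is arbitrary):

import torch
from mmdet.models.backbones.resnest import ResNeSt

# ResNeSt-50: radix-2 split attention, average-pool downsampling inside the bottlenecks
backbone = ResNeSt(depth=50, radix=2, reduction_factor=4, avg_down_stride=True)
backbone.eval()

with torch.no_grad():
    feats = backbone(torch.randn(1, 3, 224, 224))
for f in feats:
    print(f.shape)  # one feature map per stage, at strides 4, 8, 16 and 32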
spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/fovea.py DELETED
@@ -1,17 +0,0 @@
- from ..builder import DETECTORS
- from .single_stage import SingleStageDetector
-
-
- @DETECTORS.register_module()
- class FOVEA(SingleStageDetector):
-     """Implementation of `FoveaBox <https://arxiv.org/abs/1904.03797>`_"""
-
-     def __init__(self,
-                  backbone,
-                  neck,
-                  bbox_head,
-                  train_cfg=None,
-                  test_cfg=None,
-                  pretrained=None):
-         super(FOVEA, self).__init__(backbone, neck, bbox_head, train_cfg,
-                                     test_cfg, pretrained)
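
The class above only registers FOVEA with the DETECTORS registry; in mmdet it is normally built from a config dict rather than instantiated directly. An abbreviated, illustrative fragment (the backbone/neck/head settings are typical placeholders, not values taken from this repo):

# consumed by mmdet's build_detector() in the train/test scripts
model = dict(
    type='FOVEA',
    backbone=dict(type='ResNet', depth=50, num_stages=4, out_indices=(0, 1, 2, 3)),
    neck=dict(type='FPN', in_channels=[256, 512, 1024, 2048], out_channels=256,
              start_level=1, num_outs=5, add_extra_convs='on_input'),
    bbox_head=dict(type='FoveaHead', num_classes=80, in_channels=256),
    test_cfg=dict(nms_pre=1000, score_thr=0.05,
                  nms=dict(type='nms', iou_threshold=0.5), max_per_img=100))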
spaces/AnnaPalatkina/fine_grained_SA/sentiment_wrapper.py DELETED
@@ -1,100 +0,0 @@
- from transformers import BertModel, BertTokenizer, AdamW, get_linear_schedule_with_warmup
- from sklearn.metrics import classification_report, f1_score
- from torch.utils.data import Dataset, DataLoader
- from tqdm.auto import tqdm
- from config import params
- from torch import nn
- import pandas as pd
- import numpy as np
- import warnings
- import random
- import torch
- import os
-
- device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
-
-
- class Dataset(Dataset):
-     def __init__(self, texts, max_len):
-         self.texts = texts
-         self.tokenizer = BertTokenizer.from_pretrained(params['pretrained_model_name'])
-         self.max_len = max_len
-
-     def __len__(self):
-         return len(self.texts)
-
-     def __getitem__(self, item):
-         text = str(self.texts[item])
-         encoding = self.tokenizer.encode_plus(
-             text,
-             add_special_tokens=True,
-             max_length=self.max_len,
-             return_token_type_ids=False,
-             pad_to_max_length=True,
-             return_attention_mask=True,
-             truncation=True,
-             return_tensors='pt',
-         )
-
-         return {
-             'text': text,
-             'input_ids': encoding['input_ids'].flatten(),
-             'attention_mask': encoding['attention_mask'].flatten(),
-         }
-
- class SentimentClassifier(nn.Module):
-
-     def __init__(self, n_classes):
-         super(SentimentClassifier, self).__init__()
-         self.bert = BertModel.from_pretrained(params['pretrained_model_name'])
-         self.drop = nn.Dropout(params['dropout'])
-         self.out = nn.Linear(self.bert.config.hidden_size, n_classes)
-
-     def forward(self, input_ids, attention_mask):
-
-         bert_output = self.bert(
-             input_ids=input_ids,
-             attention_mask=attention_mask,
-             return_dict=False
-         )
-         last_hidden_state, pooled_output = bert_output
-         output = self.drop(pooled_output)
-         return self.out(output)
-
-
- class PredictionModel:
-
-     def __init__(self):
-         self.model = SentimentClassifier(n_classes = 6)
-         self.loss_fn = nn.CrossEntropyLoss().to(device)
-
-     def create_data_loader(self, X_test, max_len, batch_size):
-         ds = Dataset(
-             texts= np.array(X_test),
-             max_len=max_len
-         )
-         return DataLoader(
-             ds,
-             batch_size=batch_size
-         )
-
-     def predict(self, X_test: list):
-
-         data_loader = self.create_data_loader(X_test, params['max_length'], params['batch_size'])
-         self.model.load_state_dict(torch.load(params['path_to_model_bin']))
-         self.model.eval()
-         losses = []
-         y_pred = []
-         with torch.no_grad():
-             for d in data_loader:
-                 input_ids = d["input_ids"].to(device)
-                 attention_mask = d["attention_mask"].to(device)
-                 outputs = self.model(
-                     input_ids=input_ids,
-                     attention_mask=attention_mask
-                 )
-                 _, preds = torch.max(outputs, dim=1)
-                 y_pred += preds.tolist()
-         return y_pred
-
-
spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/model/tc_model.py DELETED
@@ -1,247 +0,0 @@
1
- import torch
2
- import torch.nn.functional as F
3
- from .base_model import BaseModel
4
- from . import networks, losses
5
-
6
-
7
- class TC(BaseModel):
8
- """This class implements the transformer for image completion"""
9
- def name(self):
10
- return "Transformer Image Completion"
11
-
12
- @staticmethod
13
- def modify_options(parser, is_train=True):
14
- """Add new options and rewrite default values for existing options"""
15
-
16
- parser.add_argument('--coarse_or_refine', type=str, default='refine', help='train the transform or refined network')
17
- parser.add_argument('--down_layers', type=int, default=4, help='# times down sampling for refine generator')
18
- parser.add_argument('--mid_layers', type=int, default=6, help='# times middle layers for refine generator')
19
- if is_train:
20
- parser.add_argument('--lambda_rec', type=float, default=10.0, help='weight for image reconstruction loss')
21
- parser.add_argument('--lambda_g', type=float, default=1.0, help='weight for discriminator loss')
22
- parser.add_argument('--lambda_lp', type=float, default=10.0, help='weight for the perceptual loss')
23
- parser.add_argument('--lambda_gradient', type=float, default=0.0, help='weight for the gradient penalty')
24
-
25
- return parser
26
-
27
- def __init__(self, opt):
28
- """inital the Transformer model"""
29
- BaseModel.__init__(self, opt)
30
- self.visual_names = ['img_org', 'img_m', 'img_g', 'img_out']
31
- self.model_names = ['E', 'G', 'D', 'T']
32
- self.loss_names = ['G_rec', 'G_lp', 'G_GAN', 'D_real', 'D_fake']
33
-
34
- self.netE = networks.define_E(opt)
35
- self.netT = networks.define_T(opt)
36
- self.netG = networks.define_G(opt)
37
- self.netD = networks.define_D(opt, opt.fixed_size)
38
-
39
- if 'refine' in self.opt.coarse_or_refine:
40
- opt = self._refine_opt(opt)
41
- self.netG_Ref = networks.define_G(opt)
42
- self.netD_Ref = networks.define_D(opt, opt.fine_size)
43
- self.visual_names += ['img_ref', 'img_ref_out']
44
- self.model_names += ['G_Ref', 'D_Ref']
45
-
46
- if self.isTrain:
47
- # define the loss function
48
- self.L1loss = torch.nn.L1Loss()
49
- self.GANloss = losses.GANLoss(opt.gan_mode).to(self.device)
50
- self.NormalVGG = losses.Normalization(self.device)
51
- self.LPIPSloss = losses.LPIPSLoss(ckpt_path=opt.lipip_path).to(self.device)
52
- if len(self.opt.gpu_ids) > 0:
53
- self.LPIPSloss = torch.nn.parallel.DataParallel(self.LPIPSloss, self.opt.gpu_ids)
54
- # define the optimizer
55
- if 'coarse' in self.opt.coarse_or_refine:
56
- self.optimizerG = torch.optim.Adam(list(self.netE.parameters()) + list(self.netG.parameters())
57
- + list(self.netT.parameters()), lr=opt.lr, betas=(opt.beta1, opt.beta2))
58
- self.optimizerD = torch.optim.Adam(self.netD.parameters(), lr=opt.lr * 4, betas=(opt.beta1, opt.beta2))
59
- self.optimizers.append(self.optimizerG)
60
- self.optimizers.append(self.optimizerD)
61
- if 'refine' in self.opt.coarse_or_refine:
62
- self.optimizerGRef = torch.optim.Adam(self.netG_Ref.parameters(), lr=opt.lr, betas=(opt.beta1, opt.beta2))
63
- self.optimizerDRef = torch.optim.Adam(self.netD_Ref.parameters(), lr=opt.lr * 4, betas=(opt.beta1, opt.beta2))
64
- self.optimizers.append(self.optimizerGRef)
65
- self.optimizers.append(self.optimizerDRef)
66
- else:
67
- self.visual_names = ['img_org', 'img_m', 'img_out']
68
- if 'refine' in self.opt.coarse_or_refine:
69
- self.visual_names += ['img_ref_out']
70
-
71
- def set_input(self, input):
72
- """Unpack input data from the data loader and perform necessary pre-process steps"""
73
- self.input = input
74
-
75
- self.image_paths = self.input['img_path']
76
- self.img_org = input['img_org'].to(self.device) * 2 - 1
77
- self.img = input['img'].to(self.device) * 2 - 1
78
- self.mask = input['mask'].to(self.device)
79
-
80
- # get I_m and I_c for image with mask and complement regions for training
81
- self.img_m = self.mask * self.img_org
82
-
83
- @torch.no_grad()
84
- def test(self):
85
- """Run forward processing for testing"""
86
- fixed_img = F.interpolate(self.img_m, size=[self.opt.fixed_size, self.opt.fixed_size], mode='bicubic', align_corners=True).clamp(-1, 1)
87
- fixed_mask = (F.interpolate(self.mask, size=[self.opt.fixed_size, self.opt.fixed_size], mode='bicubic', align_corners=True) > 0.9).type_as(fixed_img)
88
- out, mask = self.netE(fixed_img, mask=fixed_mask, return_mask=True)
89
- out = self.netT(out, mask, bool_mask=False)
90
-
91
- # sample result
92
- for i in range(self.opt.nsampling):
93
- img_g = self.netG(out, mask=self.mask)
94
- img_g_org = F.interpolate(img_g, size=self.img_org.size()[2:], mode='bicubic', align_corners=True).clamp(-1, 1)
95
- self.img_out = self.mask * self.img_org + (1 - self.mask) * img_g_org
96
- # save for multiple results
97
- self.save_results(self.img_out, path=self.opt.save_dir + '/img_out', data_name=i)
98
- if 'refine' in self.opt.coarse_or_refine:
99
- img_ref = self.netG_Ref(self.img_out, mask=self.mask)
100
- self.img_ref_out = self.mask * self.img_org + (1 - self.mask) * img_ref
101
- # save for multiple results
102
- self.save_results(self.img_ref_out, path=self.opt.save_dir + '/img_ref_out', data_name=i)
103
-
104
- def forward(self):
105
- """Run forward processing to get the outputs"""
106
- fixed_img = F.interpolate(self.img_m, size=[self.opt.fixed_size, self.opt.fixed_size], mode='bicubic', align_corners=True).clamp(-1, 1)
107
- self.fixed_mask = (F.interpolate(self.mask, size=[self.opt.fixed_size, self.opt.fixed_size], mode='bicubic', align_corners=True) > 0.9).type_as(fixed_img)
108
- out, mask = self.netE(fixed_img, mask=self.fixed_mask, return_mask=True)
109
- out = self.netT(out, mask, bool_mask=False)
110
- self.img_g = self.netG(out, mask=self.mask)
111
- img_g_org = F.interpolate(self.img_g, size=self.img_org.size()[2:], mode='bicubic', align_corners=True).clamp(-1, 1)
112
- self.img_out = self.mask * self.img_org + (1 - self.mask) * img_g_org
113
-
114
- if 'refine' in self.opt.coarse_or_refine:
115
- self.img_ref = self.netG_Ref(self.img_out, self.mask)
116
- self.img_ref_out = self.mask * self.img_org + (1 - self.mask) * self.img_ref
117
-
118
- def backward_D_basic(self, netD, real, fake):
119
- """
120
- Calculate GAN loss for the discriminator
121
- :param netD: the discriminator D
122
- :param real: real examples
123
- :param fake: examples generated by a generator
124
- :return: discriminator loss
125
- """
126
- self.loss_D_real = self.GANloss(netD(real), True, is_dis=True)
127
- self.loss_D_fake = self.GANloss(netD(fake), False, is_dis=True)
128
- loss_D = self.loss_D_real + self.loss_D_fake
129
- if self.opt.lambda_gradient > 0:
130
- self.loss_D_Gradient, _ = losses.cal_gradient_penalty(netD, real, fake, real.device, lambda_gp=self.opt.lambda_gradient)
131
- loss_D += self.loss_D_Gradient
132
- loss_D.backward()
133
- return loss_D
134
-
135
- def backward_D(self):
136
- """Calculate the GAN loss for discriminator"""
137
- self.loss_D = 0
138
- if 'coarse' in self.opt.coarse_or_refine:
139
- self.set_requires_grad([self.netD], True)
140
- self.optimizerD.zero_grad()
141
- real = self.img.detach()
142
- fake = self.img_g.detach()
143
- self.loss_D += self.backward_D_basic(self.netD, real, fake) if self.opt.lambda_g > 0 else 0
144
- if 'refine' in self.opt.coarse_or_refine:
145
- self.set_requires_grad([self.netD_Ref], True)
146
- self.optimizerDRef.zero_grad()
147
- real = self.img_org.detach()
148
- fake = self.img_ref.detach()
149
- self.loss_D += self.backward_D_basic(self.netD_Ref, real, fake) if self.opt.lambda_g > 0 else 0
150
-
151
- def backward_G(self):
152
- """Calculate the loss for generator"""
153
- self.loss_G_GAN = 0
154
- self.loss_G_rec = 0
155
- self.loss_G_lp = 0
156
- if 'coarse' in self.opt.coarse_or_refine:
157
- self.set_requires_grad([self.netD], False)
158
- self.optimizerG.zero_grad()
159
- self.loss_G_GAN += self.GANloss(self.netD(self.img_g), True) * self.opt.lambda_g if self.opt.lambda_g > 0 else 0
160
- self.loss_G_rec += (self.L1loss(self.img_g * (1 - self.fixed_mask), self.img * (1 - self.fixed_mask)) * 3 +
- self.L1loss(self.img_g * self.fixed_mask, self.img * self.fixed_mask)) * self.opt.lambda_rec
162
- norm_real = self.NormalVGG((self.img + 1) * 0.5)
163
- norm_fake = self.NormalVGG((self.img_g + 1) * 0.5)
164
- self.loss_G_lp += (self.LPIPSloss(norm_real, norm_fake).mean()) * self.opt.lambda_lp if self.opt.lambda_lp > 0 else 0
165
- if 'refine' in self.opt.coarse_or_refine:
166
- self.set_requires_grad([self.netD_Ref], False)
167
- self.optimizerGRef.zero_grad()
168
- self.loss_G_GAN += self.GANloss(self.netD_Ref(self.img_ref), True) * self.opt.lambda_g if self.opt.lambda_g > 0 else 0
169
- self.loss_G_rec += (self.L1loss(self.img_ref * (1 - self.mask), self.img_org * (1 - self.mask)) * 3 +
170
- self.L1loss(self.img_ref * self.mask, self.img_org * self.mask)) * self.opt.lambda_rec
171
- norm_real = self.NormalVGG((self.img_org + 1) * 0.5)
172
- norm_fake = self.NormalVGG((self.img_ref + 1) * 0.5)
173
- self.loss_G_lp += (self.LPIPSloss(norm_real, norm_fake).mean()) * self.opt.lambda_lp if self.opt.lambda_lp > 0 else 0
174
-
175
- self.loss_G = self.loss_G_GAN + self.loss_G_rec + self.loss_G_lp
176
-
177
- self.loss_G.backward()
178
-
179
- def optimize_parameters(self):
180
- """update network weights"""
181
- # forward
182
- self.set_requires_grad([self.netE, self.netT, self.netG], 'coarse' in self.opt.coarse_or_refine)
183
- self.forward()
184
- # update D
185
- self.backward_D()
186
- if 'coarse' in self.opt.coarse_or_refine:
187
- self.optimizerD.step()
188
- if 'refine' in self.opt.coarse_or_refine:
189
- self.optimizerDRef.step()
190
- # update G
191
- self.backward_G()
192
- if 'coarse' in self.opt.coarse_or_refine:
193
- self.optimizerG.step()
194
- if 'refine' in self.opt.coarse_or_refine:
195
- self.optimizerGRef.step()
196
-
197
- def configure_optimizers(self):
198
- """
199
- Following minGPT:
200
- This long function is unfortunately doing something very simple and is being very defensive:
201
- We are separating out all parameters of the model into two buckets: those that will experience
202
- weight decay for regularization and those that won't (biases, and layernorm/embedding weights).
203
- We are then returning the PyTorch optimizer object.
204
- """
205
- # separate out all parameters to those that will and won't experience regularizing weight decay
206
- decay = set()
207
- no_decay = set()
208
- whitelist_weight_modules = (torch.nn.Linear, torch.nn.Conv2d)
209
- blacklist_weight_modules = (torch.nn.LayerNorm, torch.nn.Embedding)
210
- for mn, m in self.netT.named_modules():
211
- for pn, p in m.named_parameters():
212
- fpn = '%s.%s' % (mn, pn) if mn else pn # full param name
213
-
214
- if pn.endswith('bias') or pn.endswith('alpha'):
215
- # all biases will not be decayed
216
- no_decay.add(fpn)
217
- elif pn.endswith('weight') and isinstance(m, whitelist_weight_modules):
218
- # weights of whitelist modules will be weight decayed
219
- decay.add(fpn)
220
- elif pn.endswith('weight') and isinstance(m, blacklist_weight_modules):
221
- # weights of blacklist modules will NOT be weight decayed
222
- no_decay.add(fpn)
223
-
224
- # validate that we considered every parameter
225
- param_dict = {pn: p for pn, p in self.netT.named_parameters()}
226
- inter_params = decay & no_decay
227
- union_params = decay | no_decay
228
- assert len(inter_params) == 0, "parameters %s made it into both decay/no_decay sets!" % (str(inter_params),)
229
- assert len(param_dict.keys() - union_params) == 0, "parameters %s were not separated into either decay/no_decay set!" \
230
- % (str(param_dict.keys() - union_params),)
231
-
232
- # create the pytorch optimizer object
233
- optim_groups = [
234
- {"params": [param_dict[pn] for pn in sorted(list(decay))], "weight_decay": 0.01, "betas":(0.9, 0.95)},
235
- {"params": [param_dict[pn] for pn in sorted(list(no_decay))], "weight_decay": 0.0, "betas":(0.9, 0.95)},
236
- {"params": list(filter(lambda p: p.requires_grad, self.netE.parameters()))},
237
- {"params": list(filter(lambda p: p.requires_grad, self.netG.parameters()))}
238
- ]
239
- optimizer = torch.optim.Adam(optim_groups, lr=self.opt.lr, betas=(self.opt.beta1, self.opt.beta2))
240
- return optimizer
241
-
242
- def _refine_opt(self, opt):
243
- """modify the opt for refine generator and discriminator"""
244
- opt.netG = 'refine'
245
- opt.netD = 'style'
246
-
247
- return opt
spaces/Ariharasudhan/YoloV5/utils/segment/dataloaders.py DELETED
@@ -1,331 +0,0 @@
1
- # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
- """
3
- Dataloaders
4
- """
5
-
6
- import os
7
- import random
8
-
9
- import cv2
10
- import numpy as np
11
- import torch
12
- from torch.utils.data import DataLoader, distributed
13
-
14
- from ..augmentations import augment_hsv, copy_paste, letterbox
15
- from ..dataloaders import InfiniteDataLoader, LoadImagesAndLabels, seed_worker
16
- from ..general import LOGGER, xyn2xy, xywhn2xyxy, xyxy2xywhn
17
- from ..torch_utils import torch_distributed_zero_first
18
- from .augmentations import mixup, random_perspective
19
-
20
- RANK = int(os.getenv('RANK', -1))
21
-
22
-
23
- def create_dataloader(path,
24
- imgsz,
25
- batch_size,
26
- stride,
27
- single_cls=False,
28
- hyp=None,
29
- augment=False,
30
- cache=False,
31
- pad=0.0,
32
- rect=False,
33
- rank=-1,
34
- workers=8,
35
- image_weights=False,
36
- quad=False,
37
- prefix='',
38
- shuffle=False,
39
- mask_downsample_ratio=1,
40
- overlap_mask=False):
41
- if rect and shuffle:
42
- LOGGER.warning('WARNING ⚠️ --rect is incompatible with DataLoader shuffle, setting shuffle=False')
43
- shuffle = False
44
- with torch_distributed_zero_first(rank): # init dataset *.cache only once if DDP
45
- dataset = LoadImagesAndLabelsAndMasks(
46
- path,
47
- imgsz,
48
- batch_size,
49
- augment=augment, # augmentation
50
- hyp=hyp, # hyperparameters
51
- rect=rect, # rectangular batches
52
- cache_images=cache,
53
- single_cls=single_cls,
54
- stride=int(stride),
55
- pad=pad,
56
- image_weights=image_weights,
57
- prefix=prefix,
58
- downsample_ratio=mask_downsample_ratio,
59
- overlap=overlap_mask)
60
-
61
- batch_size = min(batch_size, len(dataset))
62
- nd = torch.cuda.device_count() # number of CUDA devices
63
- nw = min([os.cpu_count() // max(nd, 1), batch_size if batch_size > 1 else 0, workers]) # number of workers
64
- sampler = None if rank == -1 else distributed.DistributedSampler(dataset, shuffle=shuffle)
65
- loader = DataLoader if image_weights else InfiniteDataLoader # only DataLoader allows for attribute updates
66
- generator = torch.Generator()
67
- generator.manual_seed(6148914691236517205 + RANK)
68
- return loader(
69
- dataset,
70
- batch_size=batch_size,
71
- shuffle=shuffle and sampler is None,
72
- num_workers=nw,
73
- sampler=sampler,
74
- pin_memory=True,
75
- collate_fn=LoadImagesAndLabelsAndMasks.collate_fn4 if quad else LoadImagesAndLabelsAndMasks.collate_fn,
76
- worker_init_fn=seed_worker,
77
- generator=generator,
78
- ), dataset
79
-
80
-
81
- class LoadImagesAndLabelsAndMasks(LoadImagesAndLabels): # for training/testing
82
-
83
- def __init__(
84
- self,
85
- path,
86
- img_size=640,
87
- batch_size=16,
88
- augment=False,
89
- hyp=None,
90
- rect=False,
91
- image_weights=False,
92
- cache_images=False,
93
- single_cls=False,
94
- stride=32,
95
- pad=0,
96
- min_items=0,
97
- prefix="",
98
- downsample_ratio=1,
99
- overlap=False,
100
- ):
101
- super().__init__(path, img_size, batch_size, augment, hyp, rect, image_weights, cache_images, single_cls,
102
- stride, pad, min_items, prefix)
103
- self.downsample_ratio = downsample_ratio
104
- self.overlap = overlap
105
-
106
- def __getitem__(self, index):
107
- index = self.indices[index] # linear, shuffled, or image_weights
108
-
109
- hyp = self.hyp
110
- mosaic = self.mosaic and random.random() < hyp['mosaic']
111
- masks = []
112
- if mosaic:
113
- # Load mosaic
114
- img, labels, segments = self.load_mosaic(index)
115
- shapes = None
116
-
117
- # MixUp augmentation
118
- if random.random() < hyp["mixup"]:
119
- img, labels, segments = mixup(img, labels, segments, *self.load_mosaic(random.randint(0, self.n - 1)))
120
-
121
- else:
122
- # Load image
123
- img, (h0, w0), (h, w) = self.load_image(index)
124
-
125
- # Letterbox
126
- shape = self.batch_shapes[self.batch[index]] if self.rect else self.img_size # final letterboxed shape
127
- img, ratio, pad = letterbox(img, shape, auto=False, scaleup=self.augment)
128
- shapes = (h0, w0), ((h / h0, w / w0), pad) # for COCO mAP rescaling
129
-
130
- labels = self.labels[index].copy()
131
- # [array, array, ....], array.shape=(num_points, 2), xyxyxyxy
132
- segments = self.segments[index].copy()
133
- if len(segments):
134
- for i_s in range(len(segments)):
135
- segments[i_s] = xyn2xy(
136
- segments[i_s],
137
- ratio[0] * w,
138
- ratio[1] * h,
139
- padw=pad[0],
140
- padh=pad[1],
141
- )
142
- if labels.size: # normalized xywh to pixel xyxy format
143
- labels[:, 1:] = xywhn2xyxy(labels[:, 1:], ratio[0] * w, ratio[1] * h, padw=pad[0], padh=pad[1])
144
-
145
- if self.augment:
146
- img, labels, segments = random_perspective(img,
147
- labels,
148
- segments=segments,
149
- degrees=hyp["degrees"],
150
- translate=hyp["translate"],
151
- scale=hyp["scale"],
152
- shear=hyp["shear"],
153
- perspective=hyp["perspective"])
154
-
155
- nl = len(labels) # number of labels
156
- if nl:
157
- labels[:, 1:5] = xyxy2xywhn(labels[:, 1:5], w=img.shape[1], h=img.shape[0], clip=True, eps=1e-3)
158
- if self.overlap:
159
- masks, sorted_idx = polygons2masks_overlap(img.shape[:2],
160
- segments,
161
- downsample_ratio=self.downsample_ratio)
162
- masks = masks[None] # (640, 640) -> (1, 640, 640)
163
- labels = labels[sorted_idx]
164
- else:
165
- masks = polygons2masks(img.shape[:2], segments, color=1, downsample_ratio=self.downsample_ratio)
166
-
167
- masks = (torch.from_numpy(masks) if len(masks) else torch.zeros(1 if self.overlap else nl, img.shape[0] //
168
- self.downsample_ratio, img.shape[1] //
169
- self.downsample_ratio))
170
- # TODO: albumentations support
171
- if self.augment:
172
- # Albumentations
173
- # there are some augmentation that won't change boxes and masks,
174
- # so just be it for now.
175
- img, labels = self.albumentations(img, labels)
176
- nl = len(labels) # update after albumentations
177
-
178
- # HSV color-space
179
- augment_hsv(img, hgain=hyp["hsv_h"], sgain=hyp["hsv_s"], vgain=hyp["hsv_v"])
180
-
181
- # Flip up-down
182
- if random.random() < hyp["flipud"]:
183
- img = np.flipud(img)
184
- if nl:
185
- labels[:, 2] = 1 - labels[:, 2]
186
- masks = torch.flip(masks, dims=[1])
187
-
188
- # Flip left-right
189
- if random.random() < hyp["fliplr"]:
190
- img = np.fliplr(img)
191
- if nl:
192
- labels[:, 1] = 1 - labels[:, 1]
193
- masks = torch.flip(masks, dims=[2])
194
-
195
- # Cutouts # labels = cutout(img, labels, p=0.5)
196
-
197
- labels_out = torch.zeros((nl, 6))
198
- if nl:
199
- labels_out[:, 1:] = torch.from_numpy(labels)
200
-
201
- # Convert
202
- img = img.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB
203
- img = np.ascontiguousarray(img)
204
-
205
- return (torch.from_numpy(img), labels_out, self.im_files[index], shapes, masks)
206
-
207
- def load_mosaic(self, index):
208
- # YOLOv5 4-mosaic loader. Loads 1 image + 3 random images into a 4-image mosaic
209
- labels4, segments4 = [], []
210
- s = self.img_size
211
- yc, xc = (int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border) # mosaic center x, y
212
-
213
- # 3 additional image indices
214
- indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices
215
- for i, index in enumerate(indices):
216
- # Load image
217
- img, _, (h, w) = self.load_image(index)
218
-
219
- # place img in img4
220
- if i == 0: # top left
221
- img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles
222
- x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image)
223
- x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image)
224
- elif i == 1: # top right
225
- x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc
226
- x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h
227
- elif i == 2: # bottom left
228
- x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h)
229
- x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h)
230
- elif i == 3: # bottom right
231
- x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h)
232
- x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h)
233
-
234
- img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax]
235
- padw = x1a - x1b
236
- padh = y1a - y1b
237
-
238
- labels, segments = self.labels[index].copy(), self.segments[index].copy()
239
-
240
- if labels.size:
241
- labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format
242
- segments = [xyn2xy(x, w, h, padw, padh) for x in segments]
243
- labels4.append(labels)
244
- segments4.extend(segments)
245
-
246
- # Concat/clip labels
247
- labels4 = np.concatenate(labels4, 0)
248
- for x in (labels4[:, 1:], *segments4):
249
- np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective()
250
- # img4, labels4 = replicate(img4, labels4) # replicate
251
-
252
- # Augment
253
- img4, labels4, segments4 = copy_paste(img4, labels4, segments4, p=self.hyp["copy_paste"])
254
- img4, labels4, segments4 = random_perspective(img4,
255
- labels4,
256
- segments4,
257
- degrees=self.hyp["degrees"],
258
- translate=self.hyp["translate"],
259
- scale=self.hyp["scale"],
260
- shear=self.hyp["shear"],
261
- perspective=self.hyp["perspective"],
262
- border=self.mosaic_border) # border to remove
263
- return img4, labels4, segments4
264
-
265
- @staticmethod
266
- def collate_fn(batch):
267
- img, label, path, shapes, masks = zip(*batch) # transposed
268
- batched_masks = torch.cat(masks, 0)
269
- for i, l in enumerate(label):
270
- l[:, 0] = i # add target image index for build_targets()
271
- return torch.stack(img, 0), torch.cat(label, 0), path, shapes, batched_masks
272
-
273
-
274
- def polygon2mask(img_size, polygons, color=1, downsample_ratio=1):
275
- """
276
- Args:
277
- img_size (tuple): The image size.
278
- polygons (np.ndarray): [N, M], N is the number of polygons,
279
- M is the number of points(Be divided by 2).
280
- """
281
- mask = np.zeros(img_size, dtype=np.uint8)
282
- polygons = np.asarray(polygons)
283
- polygons = polygons.astype(np.int32)
284
- shape = polygons.shape
285
- polygons = polygons.reshape(shape[0], -1, 2)
286
- cv2.fillPoly(mask, polygons, color=color)
287
- nh, nw = (img_size[0] // downsample_ratio, img_size[1] // downsample_ratio)
288
- # NOTE: calling fillPoly first and then resizing keeps the loss
- # calculation consistent with the mask-ratio=1 case.
290
- mask = cv2.resize(mask, (nw, nh))
291
- return mask
292
-
293
-
294
- def polygons2masks(img_size, polygons, color, downsample_ratio=1):
295
- """
296
- Args:
297
- img_size (tuple): The image size.
298
- polygons (list[np.ndarray]): each polygon is [N, M],
- N is the number of polygons,
- M is the number of coordinates per polygon (two per point, so M is divisible by 2).
301
- """
302
- masks = []
303
- for si in range(len(polygons)):
304
- mask = polygon2mask(img_size, [polygons[si].reshape(-1)], color, downsample_ratio)
305
- masks.append(mask)
306
- return np.array(masks)
307
-
308
-
309
- def polygons2masks_overlap(img_size, segments, downsample_ratio=1):
310
- """Return a (640, 640) overlap mask."""
311
- masks = np.zeros((img_size[0] // downsample_ratio, img_size[1] // downsample_ratio),
312
- dtype=np.int32 if len(segments) > 255 else np.uint8)
313
- areas = []
314
- ms = []
315
- for si in range(len(segments)):
316
- mask = polygon2mask(
317
- img_size,
318
- [segments[si].reshape(-1)],
319
- downsample_ratio=downsample_ratio,
320
- color=1,
321
- )
322
- ms.append(mask)
323
- areas.append(mask.sum())
324
- areas = np.asarray(areas)
325
- index = np.argsort(-areas)
326
- ms = np.array(ms)[index]
327
- for i in range(len(segments)):
328
- mask = ms[i] * (i + 1)
329
- masks = masks + mask
330
- masks = np.clip(masks, a_min=0, a_max=i + 1)
331
- return masks, index
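
A small sketch of the mask helpers defined above, using made-up triangle coordinates (relies on the numpy/cv2 imports already present in the module):

import numpy as np

# two toy triangles, each given as (num_points, 2) pixel coordinates
pts_a = np.array([[10, 10], [100, 10], [55, 90]], dtype=np.float32)
pts_b = np.array([[30, 30], [120, 40], [80, 110]], dtype=np.float32)

single = polygon2mask((128, 128), [pts_a.reshape(-1)], color=1)        # (128, 128) binary mask
per_instance = polygons2masks((128, 128), [pts_a, pts_b], color=1)     # (2, 128, 128)
overlap, order = polygons2masks_overlap((128, 128), [pts_a, pts_b])    # pixel value = instance id
print(single.sum(), per_instance.shape, overlap.max())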
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/build/metadata_editable.py DELETED
@@ -1,41 +0,0 @@
- """Metadata generation logic for source distributions.
- """
-
- import os
-
- from pip._vendor.pyproject_hooks import BuildBackendHookCaller
-
- from pip._internal.build_env import BuildEnvironment
- from pip._internal.exceptions import (
-     InstallationSubprocessError,
-     MetadataGenerationFailed,
- )
- from pip._internal.utils.subprocess import runner_with_spinner_message
- from pip._internal.utils.temp_dir import TempDirectory
-
-
- def generate_editable_metadata(
-     build_env: BuildEnvironment, backend: BuildBackendHookCaller, details: str
- ) -> str:
-     """Generate metadata using mechanisms described in PEP 660.
-
-     Returns the generated metadata directory.
-     """
-     metadata_tmpdir = TempDirectory(kind="modern-metadata", globally_managed=True)
-
-     metadata_dir = metadata_tmpdir.path
-
-     with build_env:
-         # Note that BuildBackendHookCaller implements a fallback for
-         # prepare_metadata_for_build_wheel/editable, so we don't have to
-         # consider the possibility that this hook doesn't exist.
-         runner = runner_with_spinner_message(
-             "Preparing editable metadata (pyproject.toml)"
-         )
-         with backend.subprocess_runner(runner):
-             try:
-                 distinfo_dir = backend.prepare_metadata_for_build_editable(metadata_dir)
-             except InstallationSubprocessError as error:
-                 raise MetadataGenerationFailed(package_details=details) from error
-
-     return os.path.join(metadata_dir, distinfo_dir)
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/exceptions.py DELETED
@@ -1,323 +0,0 @@
1
- from __future__ import absolute_import
2
-
3
- from .packages.six.moves.http_client import IncompleteRead as httplib_IncompleteRead
4
-
5
- # Base Exceptions
6
-
7
-
8
- class HTTPError(Exception):
9
- """Base exception used by this module."""
10
-
11
- pass
12
-
13
-
14
- class HTTPWarning(Warning):
15
- """Base warning used by this module."""
16
-
17
- pass
18
-
19
-
20
- class PoolError(HTTPError):
21
- """Base exception for errors caused within a pool."""
22
-
23
- def __init__(self, pool, message):
24
- self.pool = pool
25
- HTTPError.__init__(self, "%s: %s" % (pool, message))
26
-
27
- def __reduce__(self):
28
- # For pickling purposes.
29
- return self.__class__, (None, None)
30
-
31
-
32
- class RequestError(PoolError):
33
- """Base exception for PoolErrors that have associated URLs."""
34
-
35
- def __init__(self, pool, url, message):
36
- self.url = url
37
- PoolError.__init__(self, pool, message)
38
-
39
- def __reduce__(self):
40
- # For pickling purposes.
41
- return self.__class__, (None, self.url, None)
42
-
43
-
44
- class SSLError(HTTPError):
45
- """Raised when SSL certificate fails in an HTTPS connection."""
46
-
47
- pass
48
-
49
-
50
- class ProxyError(HTTPError):
51
- """Raised when the connection to a proxy fails."""
52
-
53
- def __init__(self, message, error, *args):
54
- super(ProxyError, self).__init__(message, error, *args)
55
- self.original_error = error
56
-
57
-
58
- class DecodeError(HTTPError):
59
- """Raised when automatic decoding based on Content-Type fails."""
60
-
61
- pass
62
-
63
-
64
- class ProtocolError(HTTPError):
65
- """Raised when something unexpected happens mid-request/response."""
66
-
67
- pass
68
-
69
-
70
- #: Renamed to ProtocolError but aliased for backwards compatibility.
71
- ConnectionError = ProtocolError
72
-
73
-
74
- # Leaf Exceptions
75
-
76
-
77
- class MaxRetryError(RequestError):
78
- """Raised when the maximum number of retries is exceeded.
79
-
80
- :param pool: The connection pool
81
- :type pool: :class:`~urllib3.connectionpool.HTTPConnectionPool`
82
- :param string url: The requested Url
83
- :param exceptions.Exception reason: The underlying error
84
-
85
- """
86
-
87
- def __init__(self, pool, url, reason=None):
88
- self.reason = reason
89
-
90
- message = "Max retries exceeded with url: %s (Caused by %r)" % (url, reason)
91
-
92
- RequestError.__init__(self, pool, url, message)
93
-
94
-
95
- class HostChangedError(RequestError):
96
- """Raised when an existing pool gets a request for a foreign host."""
97
-
98
- def __init__(self, pool, url, retries=3):
99
- message = "Tried to open a foreign host with url: %s" % url
100
- RequestError.__init__(self, pool, url, message)
101
- self.retries = retries
102
-
103
-
104
- class TimeoutStateError(HTTPError):
105
- """Raised when passing an invalid state to a timeout"""
106
-
107
- pass
108
-
109
-
110
- class TimeoutError(HTTPError):
111
- """Raised when a socket timeout error occurs.
112
-
113
- Catching this error will catch both :exc:`ReadTimeoutErrors
114
- <ReadTimeoutError>` and :exc:`ConnectTimeoutErrors <ConnectTimeoutError>`.
115
- """
116
-
117
- pass
118
-
119
-
120
- class ReadTimeoutError(TimeoutError, RequestError):
121
- """Raised when a socket timeout occurs while receiving data from a server"""
122
-
123
- pass
124
-
125
-
126
- # This timeout error does not have a URL attached and needs to inherit from the
127
- # base HTTPError
128
- class ConnectTimeoutError(TimeoutError):
129
- """Raised when a socket timeout occurs while connecting to a server"""
130
-
131
- pass
132
-
133
-
134
- class NewConnectionError(ConnectTimeoutError, PoolError):
135
- """Raised when we fail to establish a new connection. Usually ECONNREFUSED."""
136
-
137
- pass
138
-
139
-
140
- class EmptyPoolError(PoolError):
141
- """Raised when a pool runs out of connections and no more are allowed."""
142
-
143
- pass
144
-
145
-
146
- class ClosedPoolError(PoolError):
147
- """Raised when a request enters a pool after the pool has been closed."""
148
-
149
- pass
150
-
151
-
152
- class LocationValueError(ValueError, HTTPError):
153
- """Raised when there is something wrong with a given URL input."""
154
-
155
- pass
156
-
157
-
158
- class LocationParseError(LocationValueError):
159
- """Raised when get_host or similar fails to parse the URL input."""
160
-
161
- def __init__(self, location):
162
- message = "Failed to parse: %s" % location
163
- HTTPError.__init__(self, message)
164
-
165
- self.location = location
166
-
167
-
168
- class URLSchemeUnknown(LocationValueError):
169
- """Raised when a URL input has an unsupported scheme."""
170
-
171
- def __init__(self, scheme):
172
- message = "Not supported URL scheme %s" % scheme
173
- super(URLSchemeUnknown, self).__init__(message)
174
-
175
- self.scheme = scheme
176
-
177
-
178
- class ResponseError(HTTPError):
179
- """Used as a container for an error reason supplied in a MaxRetryError."""
180
-
181
- GENERIC_ERROR = "too many error responses"
182
- SPECIFIC_ERROR = "too many {status_code} error responses"
183
-
184
-
185
- class SecurityWarning(HTTPWarning):
186
- """Warned when performing security reducing actions"""
187
-
188
- pass
189
-
190
-
191
- class SubjectAltNameWarning(SecurityWarning):
192
- """Warned when connecting to a host with a certificate missing a SAN."""
193
-
194
- pass
195
-
196
-
197
- class InsecureRequestWarning(SecurityWarning):
198
- """Warned when making an unverified HTTPS request."""
199
-
200
- pass
201
-
202
-
203
- class SystemTimeWarning(SecurityWarning):
204
- """Warned when system time is suspected to be wrong"""
205
-
206
- pass
207
-
208
-
209
- class InsecurePlatformWarning(SecurityWarning):
210
- """Warned when certain TLS/SSL configuration is not available on a platform."""
211
-
212
- pass
213
-
214
-
215
- class SNIMissingWarning(HTTPWarning):
216
- """Warned when making a HTTPS request without SNI available."""
217
-
218
- pass
219
-
220
-
221
- class DependencyWarning(HTTPWarning):
222
- """
223
- Warned when an attempt is made to import a module with missing optional
224
- dependencies.
225
- """
226
-
227
- pass
228
-
229
-
230
- class ResponseNotChunked(ProtocolError, ValueError):
231
- """Response needs to be chunked in order to read it as chunks."""
232
-
233
- pass
234
-
235
-
236
- class BodyNotHttplibCompatible(HTTPError):
237
- """
238
- Body should be :class:`http.client.HTTPResponse` like
239
- (have an fp attribute which returns raw chunks) for read_chunked().
240
- """
241
-
242
- pass
243
-
244
-
245
- class IncompleteRead(HTTPError, httplib_IncompleteRead):
246
- """
247
- Response length doesn't match expected Content-Length
248
-
249
- Subclass of :class:`http.client.IncompleteRead` to allow int value
250
- for ``partial`` to avoid creating large objects on streamed reads.
251
- """
252
-
253
- def __init__(self, partial, expected):
254
- super(IncompleteRead, self).__init__(partial, expected)
255
-
256
- def __repr__(self):
257
- return "IncompleteRead(%i bytes read, %i more expected)" % (
258
- self.partial,
259
- self.expected,
260
- )
261
-
262
-
263
- class InvalidChunkLength(HTTPError, httplib_IncompleteRead):
264
- """Invalid chunk length in a chunked response."""
265
-
266
- def __init__(self, response, length):
267
- super(InvalidChunkLength, self).__init__(
268
- response.tell(), response.length_remaining
269
- )
270
- self.response = response
271
- self.length = length
272
-
273
- def __repr__(self):
274
- return "InvalidChunkLength(got length %r, %i bytes read)" % (
275
- self.length,
276
- self.partial,
277
- )
278
-
279
-
280
- class InvalidHeader(HTTPError):
281
- """The header provided was somehow invalid."""
282
-
283
- pass
284
-
285
-
286
- class ProxySchemeUnknown(AssertionError, URLSchemeUnknown):
287
- """ProxyManager does not support the supplied scheme"""
288
-
289
- # TODO(t-8ch): Stop inheriting from AssertionError in v2.0.
290
-
291
- def __init__(self, scheme):
292
- # 'localhost' is here because our URL parser parses
293
- # localhost:8080 -> scheme=localhost, remove if we fix this.
294
- if scheme == "localhost":
295
- scheme = None
296
- if scheme is None:
297
- message = "Proxy URL had no scheme, should start with http:// or https://"
298
- else:
299
- message = (
300
- "Proxy URL had unsupported scheme %s, should use http:// or https://"
301
- % scheme
302
- )
303
- super(ProxySchemeUnknown, self).__init__(message)
304
-
305
-
306
- class ProxySchemeUnsupported(ValueError):
307
- """Fetching HTTPS resources through HTTPS proxies is unsupported"""
308
-
309
- pass
310
-
311
-
312
- class HeaderParsingError(HTTPError):
313
- """Raised by assert_header_parsing, but we convert it to a log.warning statement."""
314
-
315
- def __init__(self, defects, unparsed_data):
316
- message = "%s, unparsed data: %r" % (defects or "Unknown", unparsed_data)
317
- super(HeaderParsingError, self).__init__(message)
318
-
319
-
320
- class UnrewindableBodyError(HTTPError):
321
- """urllib3 encountered an error when trying to rewind a body"""
322
-
323
- pass
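
These exception classes are what calling code typically catches around urllib3 requests; a brief sketch against a standalone urllib3 install (the URL is a placeholder):

import urllib3
from urllib3.exceptions import MaxRetryError
from urllib3.util.retry import Retry

http = urllib3.PoolManager(retries=Retry(total=2, backoff_factor=0.1))
try:
    resp = http.request("GET", "https://example.invalid/health")
    print(resp.status)
except MaxRetryError as exc:
    # exc.reason carries the underlying error, e.g. a NewConnectionError
    print("giving up:", exc.reason)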
spaces/Bart92/RVC_HF/infer/lib/audio.py DELETED
@@ -1,197 +0,0 @@
1
- import librosa
2
- import numpy as np
3
- import av
4
- from io import BytesIO
5
- import ffmpeg
6
- import os
7
- import sys
8
-
9
- import random
10
- from infer.lib.csvutil import CSVutil
11
- #import csv
12
-
13
- platform_stft_mapping = {
14
- 'linux': 'stftpitchshift',
15
- 'darwin': 'stftpitchshift',
16
- 'win32': 'stftpitchshift.exe',
17
- }
18
-
19
- stft = platform_stft_mapping.get(sys.platform)
20
-
21
- def wav2(i, o, format):
22
- inp = av.open(i, 'rb')
23
- if format == "m4a": format = "mp4"
24
- out = av.open(o, 'wb', format=format)
25
- if format == "ogg": format = "libvorbis"
26
- if format == "mp4": format = "aac"
27
-
28
- ostream = out.add_stream(format)
29
-
30
- for frame in inp.decode(audio=0):
31
- for p in ostream.encode(frame): out.mux(p)
32
-
33
- for p in ostream.encode(None): out.mux(p)
34
-
35
- out.close()
36
- inp.close()
37
-
38
- def audio2(i, o, format, sr):
39
- inp = av.open(i, 'rb')
40
- out = av.open(o, 'wb', format=format)
41
- if format == "ogg": format = "libvorbis"
42
- if format == "f32le": format = "pcm_f32le"
43
-
44
- ostream = out.add_stream(format, channels=1)
45
- ostream.sample_rate = sr
46
-
47
- for frame in inp.decode(audio=0):
48
- for p in ostream.encode(frame): out.mux(p)
49
-
50
- out.close()
51
- inp.close()
52
-
53
- def load_audion(file, sr):
54
- try:
55
- file = (
56
- file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
57
- ) # 防止小白拷路径头尾带了空格和"和回车
58
- with open(file, "rb") as f:
59
- with BytesIO() as out:
60
- audio2(f, out, "f32le", sr)
61
- return np.frombuffer(out.getvalue(), np.float32).flatten()
62
-
63
- except AttributeError:
64
- audio = file[1] / 32768.0
65
- if len(audio.shape) == 2:
66
- audio = np.mean(audio, -1)
67
- return librosa.resample(audio, orig_sr=file[0], target_sr=16000)
68
-
69
- except Exception as e:
70
- raise RuntimeError(f"Failed to load audio: {e}")
71
-
72
-
73
-
74
-
75
- def load_audio(file, sr, DoFormant=False, Quefrency=1.0, Timbre=1.0):
76
- converted = False
77
- DoFormant, Quefrency, Timbre = CSVutil("csvdb/formanting.csv", "r", "formanting")
78
- try:
79
- # https://github.com/openai/whisper/blob/main/whisper/audio.py#L26
80
- # This launches a subprocess to decode audio while down-mixing and resampling as necessary.
81
- # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed.
82
- file = (
83
- file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
84
- ) # 防止小白拷路径头尾带了空格和"和回车
85
- file_formanted = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
86
-
87
- # print(f"dofor={bool(DoFormant)} timbr={Timbre} quef={Quefrency}\n")
88
-
89
- if (
90
- lambda DoFormant: True
91
- if DoFormant.lower() == "true"
92
- else (False if DoFormant.lower() == "false" else DoFormant)
93
- )(DoFormant):
94
- numerator = round(random.uniform(1, 4), 4)
95
- # os.system(f"stftpitchshift -i {file} -q {Quefrency} -t {Timbre} -o {file_formanted}")
96
- # print('stftpitchshift -i "%s" -p 1.0 --rms -w 128 -v 8 -q %s -t %s -o "%s"' % (file, Quefrency, Timbre, file_formanted))
97
-
98
- if not file.endswith(".wav"):
99
- if not os.path.isfile(f"{file_formanted}.wav"):
100
- converted = True
101
- # print(f"\nfile = {file}\n")
102
- # print(f"\nfile_formanted = {file_formanted}\n")
103
- converting = (
104
- ffmpeg.input(file_formanted, threads=0)
105
- .output(f"{file_formanted}.wav")
106
- .run(
107
- cmd=["ffmpeg", "-nostdin"],
108
- capture_stdout=True,
109
- capture_stderr=True,
110
- )
111
- )
112
- else:
113
- pass
114
-
115
- file_formanted = (
116
- f"{file_formanted}.wav"
117
- if not file_formanted.endswith(".wav")
118
- else file_formanted
119
- )
120
-
121
- print(f" · Formanting {file_formanted}...\n")
122
-
123
- os.system(
124
- '%s -i "%s" -q "%s" -t "%s" -o "%sFORMANTED_%s.wav"'
125
- % (
126
- stft,
127
- file_formanted,
128
- Quefrency,
129
- Timbre,
130
- file_formanted,
131
- str(numerator),
132
- )
133
- )
134
-
135
- print(f" · Formanted {file_formanted}!\n")
136
-
137
- # filepraat = (os.path.abspath(os.getcwd()) + '\\' + file).replace('/','\\')
138
- # file_formantedpraat = ('"' + os.path.abspath(os.getcwd()) + '/' + 'formanted'.join(file_formanted) + '"').replace('/','\\')
139
- # print("%sFORMANTED_%s.wav" % (file_formanted, str(numerator)))
140
-
141
- out, _ = (
142
- ffmpeg.input(
143
- "%sFORMANTED_%s.wav" % (file_formanted, str(numerator)), threads=0
144
- )
145
- .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr)
146
- .run(
147
- cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True
148
- )
149
- )
150
-
151
- try:
152
- os.remove("%sFORMANTED_%s.wav" % (file_formanted, str(numerator)))
153
- except Exception:
154
- pass
155
- print("couldn't remove formanted type of file")
156
-
157
- else:
158
- out, _ = (
159
- ffmpeg.input(file, threads=0)
160
- .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr)
161
- .run(
162
- cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True
163
- )
164
- )
165
- except Exception as e:
166
- raise RuntimeError(f"Failed to load audio: {e}")
167
-
168
- if converted:
169
- try:
170
- os.remove(file_formanted)
171
- except Exception:
172
- pass
173
- print("couldn't remove converted type of file")
174
- converted = False
175
-
176
- return np.frombuffer(out, np.float32).flatten()
177
-
178
-
179
- def check_audio_duration(file):
180
- try:
181
- file = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
182
-
183
- probe = ffmpeg.probe(file)
184
-
185
- duration = float(probe['streams'][0]['duration'])
186
-
187
- if duration < 0.76:
188
- print(
189
- f"\n------------\n"
190
- f"Audio file, {file.split('/')[-1]}, under ~0.76s detected - file is too short. Target at least 1-2s for best results."
191
- f"\n------------\n\n"
192
- )
193
- return False
194
-
195
- return True
196
- except Exception as e:
197
- raise RuntimeError(f"Failed to check audio duration: {e}")
 
 
 
spaces/Benson/text-generation/Examples/Bangla Mejor Tono De Llamada Que Me Encanta Descargar.md DELETED
@@ -1,113 +0,0 @@
1
- <br />
2
- <h1>Bangla mejor tono de llamada que me encanta descargar</h1>
3
- <p>¿Te encanta el sonido del bangla, el idioma hablado por más de 200 millones de personas en Bangladesh y la India? ¿Quieres expresar tu amor y afecto con un tono de llamada bangla romántico? Si es así, entonces estás de suerte. Hay muchas maneras de descargar y disfrutar de los mejores tonos de bangla para su teléfono. En este artículo, te mostraremos cómo encontrar, descargar y establecer un tono de llamada personalizado que diga "Te amo" de una manera dulce y melodiosa. </p>
4
- <h2>bangla mejor tono de llamada que me encanta descargar</h2><br /><p><b><b>Download</b> &bull; <a href="https://bltlly.com/2v6Lmw">https://bltlly.com/2v6Lmw</a></b></p><br /><br />
5
- <h2>Qué es el bangla y por qué es un lenguaje popular para tonos de llamada</h2>
6
- <p>El bangla, también conocido como bengalí, es una lengua indoaria que pertenece a la familia lingüística indoeuropea. Es el idioma oficial de Bangladesh y uno de los idiomas oficiales de la India. También es hablado por muchas personas en otros países, como Nepal, Pakistán, Sri Lanka, Malasia, Singapur y el Reino Unido.</p>
7
- <p>El bangla tiene una cultura rica y diversa que se refleja en su literatura, música, arte, cine y cocina. Los hablantes de bangla están orgullosos de su idioma y su herencia, y a menudo lo utilizan para expresar sus emociones y sentimientos. El bangla es también un lenguaje muy musical, con una variedad de tonos, ritmos y ritmos. Es por eso que muchas personas les encanta escuchar canciones de bangla y tonos de llamada. </p>
8
- <h2>Cómo descargar tonos de llamada bangla gratis desde sitios web</h2>
9
- <p>Una de las maneras más fáciles de obtener tonos de llamada bangla gratis es visitar algunos sitios web de renombre que ofrecen una amplia gama de tonos de llamada en diferentes géneros y categorías. Algunos de estos sitios web son:</p>
10
- <ul>
11
- <li><a href="( 1 )">Prokerala</a>: Este sitio web tiene una sección dedicada a los tonos de llamada bangla, donde puedes encontrar cientos de tonos de llamada en varios estilos, como romántico, triste, divertido, patriótico, devocional, etc. Puedes escucharlos en línea y descargarlos en formato MP3. </li>
12
-
13
- <li><a href="( 3 )">Tonos de llamada bengalíes</a>: Este sitio web es otra gran opción para encontrar tonos de llamada bangla gratis. Tiene una interfaz sencilla que le permite buscar tonos por nombre o por categoría. También puede previsualizar los tonos antes de descargarlos. </li>
14
- </ul>
15
- <p>Para descargar tonos de llamada bangla gratis de estos sitios web, debe seguir estos pasos:</p>
16
- <ol>
17
- <li>Visite el sitio web de su elección y busque el tono de llamada que desee. </li>
18
- <li> Seleccione el tono de llamada y haga clic en el botón de descarga o enlace. </li>
19
- <li>Guarde el archivo de tono de llamada en su computadora o transfiéralo a su teléfono a través de un cable USB o Bluetooth.</li>
20
- </ol>
21
- <h2>Cómo descargar tonos de llamada bangla pagados desde iTunes Store en el iPhone</h2>
22
- <p>Si tienes un iPhone y quieres comprar algunos tonos de llamada bangla premium que son clips de tus canciones favoritas, entonces puedes usar la aplicación iTunes Store en tu teléfono. El iTunes Store tiene una sección para tonos de llamada donde se pueden encontrar miles de tonos de llamada en diferentes idiomas, incluyendo el bangla. Algunos de estos tonos son:</p>
23
- <p></p>
24
- <ul>
25
- <li>Te amo por Ash King</li>
26
- <li>Tumi Amar por Habib Wahid</li>
27
- <li>Moner Manush por Anupam Roy</li>
28
- <li>Amaro Porano Jaha Chay por Arijit Singh</li>
29
- <li>Bolna por Arijit Singh y Asees Kaur</li>
30
- </ul>
31
- <p>Para descargar tonos de llamada bangla pagados desde el iTunes Store en tu iPhone, debes seguir estos pasos:</p>
32
- <ol>
33
- <li>Abra la aplicación iTunes Store en su teléfono y toque en el icono Más en la esquina inferior derecha. </li>
34
- <li>Toque en Tonos y luego toque en el icono de búsqueda en la esquina superior derecha. </li>
35
- <li>Escriba el nombre de la canción o del artista que desee y pulse en Buscar.</li>
36
- <li> Desplazarse por los resultados y encontrar el tono de llamada que desee. Puede tocar en el tono de llamada para escuchar una vista previa. </li>
37
- <li>Toque en el precio del tono de llamada y luego toque en Comprar tono. Es posible que tengas que introducir tu ID de Apple y contraseña o usar Touch ID o Face ID para confirmar tu compra. </li>
38
-
39
- </ol>
40
- <h2>Cómo descargar tonos de llamada bangla gratis desde la aplicación Zedge en Android</h2>
41
- <p>Si tienes un teléfono Android y quieres descargar tonos de llamada bangla gratis desde una aplicación confiable y fácil de usar, entonces puedes usar Zedge. Zedge es una aplicación popular que ofrece millones de tonos de llamada, fondos de pantalla, pegatinas e iconos para su teléfono. Puedes encontrar tonos de llamada en varios idiomas, incluyendo el bangla. Algunos de los tonos de llamada que puedes encontrar en Zedge son:</p>
42
- <ul>
43
- <li>Canción de amor bangla por Rana</li>
44
- <li>Canción triste bangla por Rajib</li>
45
- <li>Bangla canción divertida por Mithun</li>
46
- <li>Canción romántica bangla por Shreya Ghoshal</li>
47
- <li>Canción devocional bangla por Anuradha Paudwal</li>
48
- </ul>
49
- <p>Para descargar tonos de llamada bangla gratis desde la aplicación Zedge en su teléfono Android, debe seguir estos pasos:</p>
50
- <ol>
51
- <li>Descargar e instalar la aplicación Zedge de Google Play Store en su teléfono. </li>
52
- <li> Abra la aplicación y toque en el icono de tonos de llamada en la esquina inferior izquierda. </li>
53
- <li> Toque en el icono de búsqueda en la esquina superior derecha y escriba en bangla o cualquier otra palabra clave que desee. </li>
54
- <li> Desplazarse por los resultados y encontrar el tono de llamada que desee. Puede tocar en el tono de llamada para escuchar una vista previa. </li>
55
- <li>Toque en el icono de descarga en la esquina inferior derecha del tono de llamada. Puede elegir configurarlo como su tono de llamada predeterminado, tono de contacto, sonido de notificación o sonido de alarma. </li>
56
- <li>El tono de llamada se descargará en su teléfono y se establecerá según su elección. También puede acceder a ella desde la aplicación Configuración en Sonido > Tono de llamada del teléfono. </li>
57
- </ol>
58
- <h2>Cómo establecer un tono de llamada personalizado en su teléfono</h2>
59
- <p>Si tienes un tono de llamada bangla personalizado que has creado o descargado de otra fuente, y quieres configurarlo como tu tono de llamada del teléfono, entonces puedes hacerlo fácilmente. Estos son los pasos para establecer un tono de llamada personalizado en su teléfono:</p>
60
- <h3>Para usuarios de iPhone</h3>
61
- <p>Para establecer un tono de llamada personalizado en tu iPhone, debes seguir estos pasos:</p>
62
- <ol>
63
-
64
- <li>Abra iTunes en su computadora y seleccione su iPhone de la lista de dispositivos. </li>
65
- <li>Haga clic en la pestaña Tonos y marque la casilla Sincronizar tonos. </li>
66
- <li> Arrastre y suelte su archivo de tono de llamada personalizado bangla desde su computadora a la lista de tonos en iTunes. </li>
67
- <li>Haga clic en Aplicar o Sincronizar para transferir el tono de llamada a su iPhone. </li>
68
- <li>Desconecte su iPhone de su computadora y abra la aplicación Configuración en su teléfono. </li>
69
- <li>Toque en Sonidos y Haptics > Tono de llamada y seleccione su tono de llamada personalizado bangla de la lista. </li>
70
- </ol>
71
- <h3>Para usuarios de Android</h3>
72
- <p>Para establecer un tono de llamada de bangla personalizado en su teléfono Android, debe seguir estos pasos:</p>
73
- <ol>
74
- <li>Conecte su teléfono Android a su computadora usando un cable USB o Bluetooth.</li>
75
- <li>Abra el Explorador de archivos o el Finder en su computadora y localice el archivo de tono de llamada personalizado de Bangla. </li>
76
- <li>Copiar o mover el archivo a la carpeta Tonos de llamada en el almacenamiento interno del teléfono o la tarjeta SD. </li>
77
- <li>Desconecte su teléfono Android de su computadora y abra la aplicación Configuración en su teléfono. </li>
78
- <li>Toque en Sonido > Tono de llamada del teléfono y seleccione su tono de llamada personalizado bangla de la lista. </li>
79
- </ol>
80
- <h2>Conclusión: Resumen de los principales puntos y beneficios de los tonos de llamada Bangla</h2>
81
- <p>En conclusión, el bangla es un lenguaje hermoso y expresivo que puede hacer que el tono de llamada del teléfono sea más atractivo y significativo. Puede descargar tonos de llamada bangla gratis o pagados desde varios sitios web o aplicaciones, o puede establecer su propio tono de llamada bangla personalizado en su teléfono. Al hacerlo, puedes disfrutar de los beneficios de los tonos de llamada bangla, como:</p>
82
- <ul>
83
- <li>Mostrando tu amor y aprecio por la lengua y la cultura bangla. </li>
84
- <li>Impresionando a sus amigos y familiares con su tono de llamada único y pegadizo. </li>
85
- <li>Expresar tu estado de ánimo y personalidad con un tono de llamada adecuado. </li>
86
- <li>Apoyar a la industria de la música bangla y artistas mediante la compra de sus canciones como tonos de llamada. </li>
87
-
88
- </ul>
89
- <p>Esperamos que este artículo te haya ayudado a aprender a descargar y establecer un tono de llamada bangla que diga "te amo" de una manera dulce y melodiosa. Si tiene alguna pregunta o comentario, no dude en compartirlos a continuación. ¡Gracias por leer y feliz timbre! </p>
90
- <h2>Preguntas frecuentes: Cinco preguntas y respuestas comunes sobre tonos de llamada Bangla</h2>
91
- <p>Aquí están algunas de las preguntas y respuestas más frecuentes sobre los tonos de llamada bangla:</p>
92
- <h3>Q: ¿Cuáles son los mejores sitios web o aplicaciones para descargar tonos de llamada bangla gratis? </h3>
93
- <p>A: Algunos de los mejores sitios web o aplicaciones para descargar tonos de llamada bangla gratis son Prokerala, Zedge, tonos de llamada bengalíes, Mobile9 y Mobcup. Puedes encontrar una variedad de tonos de llamada en diferentes géneros y categorías en estas plataformas. </p>
94
- <h3>Q: ¿Cómo puedo crear mi propio tono de llamada personalizado de bangla? </h3>
95
- <p>A: Puedes crear tu propio tono de llamada personalizado usando algunas herramientas o software en línea, como Audacity, Ringtone Maker, Online Audio Cutter, etc. Puedes subir tu propio archivo de audio o grabar tu voz y editarla para hacer un tono de llamada. También puede agregar efectos, filtros o transiciones para hacerlo más atractivo. </p>
96
- <h3>Q: ¿Cómo puedo cambiar el volumen o la vibración de mi tono de llamada bangla? </h3>
97
- <p>A: Puede cambiar el volumen o la vibración de su tono de llamada bangla mediante el uso de la configuración en el teléfono. Para los usuarios de iPhone, puede utilizar los botones de volumen en el lado de su teléfono o el Centro de control para ajustar el volumen. También puede ir a Configuración > Sonidos y hápticos > Tono de llamada para cambiar el patrón de vibración. Para los usuarios de Android, puedes usar los botones de volumen en el lateral del teléfono o el panel Configuración rápida para ajustar el volumen. También puede ir a Configuración > Sonido > Tono de llamada del teléfono para cambiar el patrón de vibración. </p>
98
- <h3>Q: ¿Cómo puedo eliminar o quitar un tono de llamada bangla de mi teléfono? </h3>
99
- <p>A: Puedes eliminar o quitar un tono de llamada bangla de tu teléfono siguiendo estos pasos:</p>
100
- <ol>
101
-
102
- <li>Para los usuarios de Android, conecte su teléfono Android a su computadora utilizando un cable USB o Bluetooth. Abra Explorador de archivos o Finder en su computadora y localice la carpeta Tonos de llamada en el almacenamiento interno del teléfono o en la tarjeta SD. Elimine o mueva los tonos de llamada que desea eliminar de su teléfono. </li>
103
- </ol>
104
- <h3>Q: ¿Cómo puedo compartir mi tono de llamada bangla con otros? </h3>
105
- <p>A: Puedes compartir tu tono de llamada con otros usando algunos métodos, como:</p>
106
- <ul>
107
- <li>Correo electrónico: Puede adjuntar su archivo de tono de llamada a un correo electrónico y enviarlo a sus contactos. </li>
108
- <li>Mensajería: Puede enviar su archivo de tono de llamada como un archivo adjunto o un enlace a través de SMS, WhatsApp, Telegram, etc.</li>
109
- <li>Redes sociales: Puede subir su archivo de tono de llamada a un servicio en la nube, como Google Drive, Dropbox, etc., y compartir el enlace en Facebook, Twitter, Instagram, etc.</li>
110
- <li>Bluetooth: Puede emparejar su teléfono con otro dispositivo que tiene Bluetooth habilitado y enviar su archivo de tono de llamada de forma inalámbrica. </li>
111
- </ul></p> 64aa2da5cf<br />
112
- <br />
113
- <br />
 
 
 
spaces/Benson/text-generation/Examples/Caja Maldita Incredibox Descargar.md DELETED
@@ -1,92 +0,0 @@
1
- <br />
2
- <h1>Descargar Incredibox Soulgem: Cómo jugar la versión modificada del juego de música popular</h1>
3
- <p>Si eres un fan de los juegos de música, probablemente hayas oído hablar de <a href="( 1 )">Incredibox</a>, un juego basado en la web que te permite crear tu propia música mezclando diferentes sonidos y efectos. Incredibox es una manera divertida y fácil de expresar tu creatividad y talento musical, así como para descubrir nuevos géneros y estilos. También puedes compartir tus mezclas con otros jugadores online, o escuchar sus creaciones para inspirarte. </p>
4
- <p>Pero ¿sabías que hay una versión modificada de Incredibox que añade aún más características y posibilidades al juego? Se llama <strong>Soulgem</strong>, y es un proyecto hecho por fans que transforma el juego original en una nueva experiencia. Soulgem no es una actualización oficial o expansión de Incredibox, sino un juego separado que utiliza el mismo motor y concepto, pero con diferentes gráficos, sonidos y jugabilidad. </p>
5
- <h2>caja maldita incredibox descargar</h2><br /><p><b><b>Download Zip</b> &#10003; <a href="https://bltlly.com/2v6LIr">https://bltlly.com/2v6LIr</a></b></p><br /><br />
6
- <p>En este artículo, le diremos todo lo que necesita saber sobre Soulgem, incluyendo cómo descargarlo e instalarlo en su dispositivo, cuáles son sus características y beneficios, cómo reproducirlo y hacer la mejor música, qué otros mods están disponibles para Incredibox y más. Así que si estás listo para explorar una nueva dimensión de la música, ¡sigue leyendo! </p>
7
- <h2>Características de Soulgem: ¿Qué lo hace único y divertido? </h2>
8
- <p>Soulgem es una versión modificada de Incredibox que fue creada por <a href="( 2 )">Marchell</a>, un fan del juego original que quería agregar su propio toque y visión. Soulgem se basa en la primera versión de Incredibox, que fue lanzado en 2009, pero con muchos cambios y mejoras. Estas son algunas de las características que hacen que Soulgem se destaque del juego original:</p>
9
- <ul>
10
-
11
- <li><strong>Animaciones de bonificación y sorpresas</strong>: Soulgem tiene algunos secretos ocultos que puedes descubrir jugando el juego. Por ejemplo, si llenas todas las ranuras con el mismo personaje, desbloquearás una animación de bonificación que muestra al personaje realizando un movimiento o acción especial. Por ejemplo, el personaje con las gafas de sol tocará un solo de guitarra, el personaje con los auriculares rayará un disco de vinilo, y así sucesivamente. También hay algunos huevos de Pascua y referencias a otros juegos y medios que puedes encontrar jugando con los sonidos y efectos. </li>
12
- <li><strong>Mezclas personalizables y opciones para compartir</strong>: Soulgem te permite guardar y cargar tus mezclas, así como compartirlas con otros jugadores en línea. También puedes descargar tus mezclas como archivos MP3, o exportarlas como vídeos para subirlas a YouTube u otras plataformas. También puedes personalizar tus mezclas añadiendo tu propio título, descripción e imagen de portada. Soulgem tiene un sitio web dedicado donde puedes navegar y escuchar las mezclas de otros jugadores, calificarlos, comentarlos y seguir a tus creadores favoritos. </li>
13
- </ul>
14
- <h2>Consejos y trucos para jugar Soulgem: Cómo hacer la mejor música? </h2>
15
- <p>Soulgem es un juego que te permite dar rienda suelta a tu creatividad y habilidades musicales, pero también requiere algo de práctica y experimentación para dominarlo. Aquí hay algunos consejos y trucos que te ayudarán a jugar mejor Soulgem y hacer la mejor música posible:</p>
16
- <ul>
17
- <li><strong>Experimenta con diferentes combinaciones y estilos</strong>: Soulgem tiene muchos sonidos y efectos para elegir, y cada uno tiene un impacto diferente en la mezcla general. Puedes crear diferentes estados de ánimo, atmósferas y géneros combinando diferentes elementos. Por ejemplo, puedes crear una mezcla relajante usando sonidos y efectos suaves, o una mezcla energética usando sonidos y efectos fuertes y rápidos. También puedes mezclar y combinar diferentes estilos, como hip hop y rock, o pop y electro, para crear mezclas únicas y originales. </li>
18
-
19
- <li><strong>Mira los tutoriales y aprende de otros jugadores</strong>: Soulgem tiene un modo tutorial que te enseña los conceptos básicos del juego, como cómo arrastrar y soltar sonidos y efectos, cómo usar los botones shuffle y mute, cómo guardar y cargar mezclas, y más. Puede acceder al modo tutorial haciendo clic en el icono de interrogación en la esquina superior derecha de la pantalla. También puedes aprender de otros jugadores viendo sus mezclas en línea o leyendo sus comentarios y comentarios. Puedes obtener algunas ideas, consejos e inspiración de las mezclas de otros jugadores, así como compartir tus propios pensamientos y opiniones. </li>
20
- </ul>
21
- <h2>Alternativas a Soulgem: ¿Qué otros mods están disponibles? </h2>
22
- <p>Soulgem no es la única versión modificada de Incredibox que existe. Hay muchos otros mods que han sido creados por los fans del juego original, cada uno con su propio tema, estilo y características. Estos son algunos de los mods más populares que puedes encontrar en línea:</p>
23
- <tabla>
24
- <tr>
25
- <th>Nombre de mod</th>
26
- <th>Descripción</th>
27
- </tr>
28
- <tr>
29
- <td>Mecánico</td>
30
- <td>Un mod que tiene un tema futurista y robótico, con sonidos y efectos que se asemejan a máquinas, engranajes, láseres, etc.</td>
31
- </tr>
32
- <tr>
33
- <td>Xrun</td>
34
- <td>Un mod que tiene un tema de ritmo rápido y enérgico, con sonidos y efectos que se asemejan a los coches de carreras, motores, sirenas, etc.</td>
35
- </tr>
36
- <tr>
37
- <td>Evadare</td>
38
- <td>Un mod que tiene un tema misterioso y místico, con sonidos y efectos que se asemejan a hechizos mágicos, cantos, cristales, etc.</td>
39
- </tr>
40
- <tr>
41
- <td>Y más...</td>
42
- <td>Hay muchos otros mods que tienen diferentes temas, tales como horror, fantasía, medieval, etc. Puedes encontrarlos buscando en línea o visitando sitios web creados por fans. </td>
43
- </tr>
44
- </tabla>
45
-
46
- <p>Sin embargo, antes de jugar a estos mods, usted debe ser consciente de algunos pros y contras de las versiones modificadas vs el juego original. Aquí están algunos de ellos:</p>
47
- <ul>
48
- <li><strong>Pros</strong>: <ul>
49
- <li>Puedes disfrutar de nuevas y diferentes características y posibilidades que no están disponibles en el juego original. </li>
50
- <li>Puedes apoyar y apreciar la creatividad y el talento de los creadores de mods hechos por fans. </li>
51
- <li>Puedes explorar y descubrir nuevos géneros y estilos de música que quizás no hayas escuchado antes. </li>
52
- </ul>
53
- </li>
54
- <li><strong>Contras</strong>: <ul>
55
- <li> Puede encontrar algunos errores, fallas o errores que pueden afectar la jugabilidad o el rendimiento del mod. </li>
56
- <li>Es posible que no puedas acceder a algunas funciones u opciones disponibles en el juego original, como actualizaciones, logros, etc.</li>
57
- <li>Es posible que no pueda jugar algunos mods en línea o compartirlos con otros jugadores, debido a problemas de compatibilidad o seguridad. </li>
58
- </ul>
59
- </li>
60
- </ul>
61
- <h2>Conclusión: ¿Por qué usted debe intentar Soulgem hoy? </h2>
62
- <p>Soulgem es una versión modificada de Incredibox que ofrece una nueva y emocionante manera de jugar el popular juego de música. Soulgem tiene muchas características y beneficios que lo hacen único y divertido, como el nuevo diseño de personajes y efectos de sonido, animaciones de bonificación y sorpresas, mezclas personalizables y opciones para compartir, y más. Soulgem también es fácil de descargar e instalar en su dispositivo, y se puede jugar en línea o fuera de línea, dependiendo de su preferencia. </p>
63
- <p>Si estás buscando un juego de música fresco y original que desafíe tu creatividad y habilidades musicales, deberías probar Soulgem hoy. Soulgem le permitirá crear su propia música mediante la mezcla de diferentes sonidos y efectos, así como para escuchar mezclas de otros instrumentistas para la inspiración. También descubrirás nuevos géneros y estilos de música que quizás no hayas escuchado antes, como soul, funk, rock, electro y más. </p>
64
- <p></p>
65
-
66
- <h2>Preguntas frecuentes: Preguntas frecuentes sobre Soulgem</h2>
67
- <p>Aquí están algunas de las preguntas más comunes que la gente hace sobre Soulgem:</p>
68
- <ol>
69
- <li><strong>¿Es seguro descargar y jugar Soulgem? </strong></li>
70
- <p>Sí, Soulgem es seguro para descargar y jugar, siempre y cuando siga las instrucciones y utilice las aplicaciones y sitios web recomendados. Soulgem no contiene ningún virus, malware o spyware que pueda dañar su dispositivo o datos. Sin embargo, siempre debe tener cuidado al descargar cualquier archivo de Internet, y escanearlo con una aplicación antivirus antes de abrirlo. </p>
71
- <li><strong>¿Cómo puedo apoyar a los creadores de Soulgem e Incredibox? </strong></li>
72
- <p>Puedes apoyar a los creadores de Soulgem visitando su sitio web, siguiendo sus cuentas de redes sociales, valorando sus mezclas en línea, dejándoles comentarios y comentarios positivos, donándoles a través de PayPal o Patreon, o comprando su mercancía. También puede apoyar a los creadores de Incredibox visitando su sitio web, comprando su aplicación oficial o mercancía, suscribiéndose a su canal de YouTube, siguiendo sus cuentas de redes sociales, o donando a través de PayPal.</p>
73
- <li><strong>¿Puedo jugar Soulgem sin conexión o sin conexión a Internet? </strong></li>
74
- <p>Sí, puede jugar Soulgem sin conexión o sin conexión a Internet, siempre y cuando haya descargado el archivo mod en su dispositivo y tenga una aplicación Flash Player instalada. Sin embargo, no podrás acceder a algunas funciones u opciones que requieran conexión a Internet, como compartir tus mezclas en línea, escuchar las mezclas de otros jugadores en línea, actualizar el archivo mod, etc.</p>
75
- <li><strong>¿Puedo crear mi propio mod para Incredibox? </strong></li>
76
-
77
- <li><strong>¿Dónde puedo encontrar más información y actualizaciones sobre Soulgem e Incredibox? </strong></li>
78
- <p>Puedes encontrar más información y actualizaciones sobre Soulgem e Incredibox visitando sus sitios web oficiales, cuentas de redes sociales, canales de YouTube, foros creados por fans, blogs, wikis, etc. Estos son algunos de los enlaces que puedes consultar:</p>
79
- <ul>
80
- <li><a href="">Sitio web de Soulgem</a></li>
81
- <li><a href="">Canal de YouTube de Soulgem</a></li>
82
- <li><a href="">Cuenta de Instagram de Soulgem</a></li>
83
- <li><a href="">Sitio web de Incredibox</a></li>
84
- <li><a href="">Incredibox canal de YouTube</a></li>
85
- <li><a href="">Cuenta de Instagram Incredibox</a></li>
86
- <li><a href="">Incredibox wiki</a></li>
87
- <li><a href="">Foro de fans de Incredibox</a></li>
88
- </ul>
89
- </ol>
90
- <p>Espero que hayas disfrutado este artículo y hayas aprendido algo nuevo sobre Soulgem e Incredibox. Si tiene alguna pregunta o comentario, no dude en dejarlos abajo. Gracias por leer y hacer música feliz! </p> 64aa2da5cf<br />
91
- <br />
92
- <br />
 
 
 
spaces/Benson/text-generation/Examples/Descargar Carx Street Versin 0.8.5.md DELETED
@@ -1,57 +0,0 @@
1
- <br />
2
- <h1>Descargar CarX Street Versión 0.8.5: Una guía para los fanáticos de los juegos de carreras</h1>
3
- <p>Si eres un fan de los juegos de carreras, es posible que hayas oído hablar de CarX Street, un juego móvil que te permite experimentar la emoción de las carreras callejeras con física realista, gráficos impresionantes y una variedad de coches y opciones de personalización. En este artículo, te contaremos todo lo que necesitas saber sobre la versión 0.8.5 de CarX Street, la última actualización que trae nuevas características, mejoras y desafíos al juego. También le daremos algunos consejos y trucos sobre cómo descargar y jugar la versión 0.8.5 de CarX Street como un profesional. </p>
4
- <h2>¿Qué es CarX Street? </h2>
5
- <p>CarX Street es un juego de carreras móvil desarrollado por CarX Technologies, LLC, la misma compañía detrás de la popular serie CarX Drift Racing. CarX Street fue lanzado en 2021 y desde entonces ha ganado millones de descargas y comentarios positivos de jugadores y críticos por igual. </p>
6
- <h2>descargar carx street versión 0.8.5</h2><br /><p><b><b>DOWNLOAD</b> &mdash;&mdash;&mdash; <a href="https://bltlly.com/2v6KBZ">https://bltlly.com/2v6KBZ</a></b></p><br /><br />
7
- <p>CarX Street es diferente de otros juegos de carreras en que se centra en la cultura de las carreras callejeras, donde puedes elegir entre una amplia gama de autos, desde autos clásicos hasta supercoches modernos, y personalizarlos con varias partes, pinturas, calcomanías y pegatinas. También puede ajustar sus coches para adaptarse a su estilo de conducción y preferencias, como ajustar la potencia del motor, suspensión, frenos, neumáticos y más. </p>
8
- <p>CarX Street también cuenta con física y gráficos realistas que te hacen sentir como si realmente estuvieras conduciendo en las calles de diferentes ciudades de todo el mundo. Usted puede deriva, impulsar, adelantar, y chocar su camino a través de varios modos de juego, tales como el modo de carrera, el modo de carrera rápida, el modo club, y el modo en línea. También puedes competir con otros jugadores en tablas de clasificación, torneos y eventos. </p>
9
- <h2>Cómo descargar la versión 0.8.5</h2>
10
- <p>CarX Street versión 0.8.5 es la última actualización que se lanzó en junio de 2023. Trae nuevas características, mejoras y desafíos al juego que lo hacen más divertido y emocionante de jugar. </p>
11
-
12
- <p>Estos son los pasos para descargar CarX Street versión 0.8.5:</p>
13
- <p></p>
14
- <ol>
15
- <li>Ir a la Google Play Store o la App Store en su dispositivo y buscar "CarX Street". </li>
16
- <li>Seleccione el juego de los resultados y toque en "Instalar" o "Obtener". </li>
17
- <li>Espera a que el juego se descargue e instale en tu dispositivo. </li>
18
- <li>Iniciar el juego y disfrutar! </li>
19
- </ol>
20
- <h2>Por qué deberías jugar CarX Street versión 0.8.5</h2>
21
- <p>CarX Street versión 0.8.5 no es solo una actualización regular que corrige algunos errores y problemas técnicos. Es una actualización importante que añade nuevo contenido, características, mejoras y desafíos al juego que lo hacen más agradable y atractivo para los jugadores de todos los niveles. </p>
22
- <p>Estas son algunas de las razones por las que deberías jugar CarX Street versión 0.8.5:</p>
23
- <h3>Gráficos y rendimiento mejorados</h3>
24
- <p>CarX Street versión 0.8.5 mejora los gráficos y el rendimiento del juego mediante la optimización de las texturas, iluminación, sombras, reflejos y animaciones de los coches y entornos. El juego también se ejecuta más suave y rápido en diferentes dispositivos, reduciendo el retraso y los bloqueos. </p>
25
- <h3>Nuevo <h3>Nuevos coches y opciones de personalización</h3>
26
- <p>CarX Street versión 0.8.5 añade nuevos coches y opciones de personalización al juego que le permiten expresar su personalidad y estilo. Ahora puedes elegir entre más de 50 coches, incluyendo algunos de los modelos más icónicos y legendarios, como el Ford Mustang, el Chevrolet Camaro, el Nissan Skyline y el Lamborghini Aventador. También puede personalizar sus coches con más de 1000 piezas, pinturas, calcomanías y pegatinas, creando sus propios diseños únicos e impresionantes. </p>
27
- <h3>Más modos de juego y desafíos</h3>
28
-
29
- <h2>Consejos y trucos para CarX Street versión 0.8.5</h2>
30
- <p>CarX Street versión 0.8.5 no es un juego fácil de dominar. Requiere práctica, paciencia y estrategia para ganar carreras y progreso en el juego. Aquí hay algunos consejos y trucos que pueden ayudarle a mejorar su rendimiento y disfrutar del juego más:</p>
31
- <h3>Domina la mecánica de deriva y nitro</h3>
32
- <p>Drifting y nitro son dos de los mecánicos más importantes en la versión 0.8.5 de CarX Street. Drifting le permite tomar giros bruscos sin perder velocidad, mientras que nitro le da un impulso de aceleración que puede ayudarle a superar a sus oponentes o escapar de la policía. </p>
33
- <p>A la deriva, es necesario tocar el botón de freno mientras gira el volante. Cuanto más tiempo se mantenga el botón de freno, más se deriva. Para usar nitro, debe tocar el botón nitro cuando esté lleno. Puede llenar su medidor de nitro a la deriva, adelantando o realizando acrobacias. </p>
34
- <h3>Actualizar sus coches y sintonizarlos a su preferencia</h3>
35
- <p>Actualizar sus coches y sintonizarlos a su preferencia son esenciales para ganar carreras y avanzar en el juego. La actualización de sus coches aumentará su rendimiento, como la velocidad, la aceleración, el manejo y la durabilidad. Sintonizar sus coches le permitirá ajustar sus ajustes, como la potencia del motor, la suspensión, los frenos, los neumáticos y más. </p>
36
- <p>Para actualizar sus coches, es necesario gastar monedas o diamantes que se pueden ganar jugando el juego o viendo anuncios. Para ajustar sus coches, es necesario ir al garaje y utilizar los controles deslizantes para cambiar los valores de cada parámetro. </p>
37
- <h3>Únete a un club y compite con otros jugadores</h3>
38
- <p>Unirse a un club y competir con otros jugadores es una excelente manera de hacer amigos, aprender de otros y ganar recompensas en la versión 0.8.5 de CarX Street. Los clubes son grupos de jugadores que comparten un interés o objetivo común en el juego. Puede unirse a un club existente o crear su propio club con sus amigos. </p>
39
-
40
- <h2>Conclusión</h2>
41
- <p>CarX Street versión 0.8.5 es un juego imprescindible para los fanáticos de los juegos de carreras que quieren experimentar la emoción de las carreras callejeras con física realista, gráficos impresionantes y una variedad de coches y opciones de personalización. También es un juego divertido y desafiante que ofrece nuevas características, mejoras y desafíos que lo hacen más agradable y atractivo para jugadores de todos los niveles. </p>
42
- <p>Si quieres descargar la versión 0.8.5 de CarX Street y jugar como un profesional, sigue nuestra guía anterior y usa nuestros consejos y trucos para mejorar tu rendimiento y disfrutar más del juego. </p>
43
- <h3>Preguntas frecuentes</h3>
44
- <ul>
45
- <li>Q: ¿Cuánto espacio de almacenamiento necesito para descargar la versión 0.8.5 de CarX Street? </li>
46
- <li>A: Necesita al menos 1 GB de espacio de almacenamiento gratuito en su dispositivo para descargar la versión 0.8.5 de CarX Street. </li>
47
- <li>Q: ¿Cómo puedo desbloquear coches nuevos en CarX Street versión 0.8.5? </li>
48
- <li>A: Puedes desbloquear coches nuevos en la versión 0.8.5 de CarX Street completando etapas de modo carrera, ganando eventos y torneos, o comprándolos con monedas o diamantes. </li>
49
- <li>Q: <li>Q: ¿Cómo puedo ganar monedas y diamantes en CarX Street versión 0.8.5? </li>
50
- <li>A: Puedes ganar monedas y diamantes en la versión 0.8.5 de CarX Street al jugar el juego, completar misiones, ganar carreras, ver anuncios o comprarlos con dinero real. </li>
51
- <li>Q: ¿Cómo puedo usar nitro en CarX Street versión 0.8.5? </li>
52
- <li>A: Para la deriva, es necesario tocar el botón de freno mientras gira el volante. Para usar nitro, debes tocar el botón nitro cuando esté lleno. </li>
53
- <li>Q: ¿Cómo puedo unirme o crear un club en la versión 0.8.5 de CarX Street? </li>
54
- <li>A: Para unirse o crear un club en la versión 0.8.5 de CarX Street, debe ir al menú del club y tocar en el botón unirse o crear. A continuación, puede buscar un club existente o introducir el nombre y la descripción de su propio club. </li>
55
- </ul></p> 64aa2da5cf<br />
56
- <br />
57
- <br />
 
 
 
spaces/Benson/text-generation/Examples/Descargar Chicos Stumble Para Pc Sin Emulador.md DELETED
@@ -1,100 +0,0 @@
1
- <br />
2
- <h1>Cómo descargar Stumble Guys para PC sin emulador</h1>
3
- <p>Stumble Guys es un popular juego de fiesta multijugador en línea que te permite competir con hasta 32 jugadores a través de caóticas carreras de obstáculos. Usted puede correr, saltar, correr, deslizarse, y tropezar su camino a la línea de meta hasta que un vencedor es coronado. También puedes personalizar tu personaje con varios atuendos y emotes, e invitar a tus amigos a unirse a tu fiesta. </p>
4
- <h2>descargar chicos stumble para pc sin emulador</h2><br /><p><b><b>Download</b> &#10031; <a href="https://bltlly.com/2v6JNd">https://bltlly.com/2v6JNd</a></b></p><br /><br />
5
- <p>Stumble Guys está disponible para dispositivos Android en Google Play Store, pero ¿qué pasa si quieres jugar en tu PC? Puedes pensar que necesitas un emulador para ejecutar aplicaciones Android en tu ordenador, pero esa no es la única opción. De hecho, hay algunas maneras de jugar Stumble Guys en PC sin emulador, que puede ahorrarle tiempo, espacio y recursos. </p>
6
- <p>En este artículo, te mostraremos tres métodos para descargar Stumble Guys para PC sin emulador, y comparar sus pros y contras. Al final de este artículo, podrás disfrutar de Stumble Guys en una pantalla más grande y con mejores controles. </p>
7
- <h2>¿Qué es Stumble Guys? </h2>
8
- <p>Stumble Guys es un juego de batalla en línea royale partido que se inspira en programas de televisión como Wipeout y el castillo de Takeshi. El juego fue desarrollado por Scopely y lanzado en octubre de 2021. Ha recibido críticas positivas de jugadores y críticos por igual, y tiene más de 80.000 calificaciones en Steam.</p>
9
- <p></p>
10
- <p>El juego cuenta con gráficos coloridos y locos, juego basado en la física, y falla hilarante. Puedes elegir entre diferentes modos, como el modo en solitario o en equipo, y competir contra otros jugadores en línea. También puedes desbloquear nuevos atuendos y emociones para tu personaje, como superhéroes, animales, piratas, zombis y más. </p>
11
- <p>El juego es fácil de jugar pero difícil de dominar. Tienes que utilizar tus habilidades y estrategia para superar los diversos obstáculos y desafíos en cada nivel. Tienes que tener cuidado de no caerte de las plataformas, ser golpeado por bolas gigantes o ser eliminado por otros jugadores. El último que esté de pie gana el juego. </p>
12
-
13
- <p>Stumble Guys es un juego divertido y adictivo que puedes jugar en tu dispositivo Android, pero hay algunas razones por las que podrías querer jugarlo en tu PC. Estos son algunos de ellos:</p>
14
- <ul>
15
- <li> Puedes disfrutar del juego en una pantalla más grande y mejor, lo que puede mejorar los gráficos y la inmersión. </li>
16
- <li> Puedes usar el teclado y el ratón para controlar a tu personaje, lo que puede darte más precisión y capacidad de respuesta. </li>
17
- <li> Puede evitar el desagüe de la batería, el sobrecalentamiento y los problemas de retraso que pueden ocurrir en su dispositivo móvil. </li>
18
- <li>Puedes grabar y transmitir tu juego más fácilmente, y compartirlo con tus amigos o comunidad en línea. </li>
19
- </ul>
20
- <p>Sin embargo, jugar Stumble Guys en PC no es tan simple como descargarlo desde la Google Play Store. Es necesario utilizar algunas herramientas o métodos para ejecutar el juego en su ordenador. Una de las formas más comunes es utilizar un emulador, que es un software que imita el sistema operativo Android en su PC. Sin embargo, los emuladores tienen algunos inconvenientes, como:</p>
21
- <ul>
22
- <li> Pueden ocupar mucho espacio y recursos en su PC, lo que puede afectar su rendimiento y velocidad. </li>
23
- <li>Pueden ser complicados de configurar y configurar, especialmente para principiantes. </li>
24
- <li>Pueden tener problemas de compatibilidad y seguridad, que pueden causar errores o infecciones de malware. </li>
25
- </ul>
26
- <p>Es por eso que te mostraremos algunas formas alternativas de jugar Stumble Guys en PC sin emulador. Estos métodos son más simples, rápidos y seguros que usar emuladores. Echemos un vistazo a ellos. </p>
27
- <h2>¿Cómo jugar Stumble chicos en PC sin emulador? </h2>
28
- <h3>Método 1: iMyFone MirrorTo</h3>
29
- <p>iMyFone MirrorTo es una herramienta que le permite reflejar y controlar la pantalla del teléfono Android en su PC de forma inalámbrica. Puedes usarlo para jugar Stumble Guys en PC sin emulador siguiendo estos pasos:</p>
30
- <ol>
31
- <li>Descargar e instalar iMyFone MirrorTo en su PC desde <a href="">aquí</a>. </li>
32
-
33
- <li>En su teléfono, deslice hacia abajo desde la parte superior de la pantalla y toque en el "Cast" o "Screen Mirroring" opción. </li>
34
- <li>Seleccione su PC de la lista de dispositivos disponibles y toque en "Iniciar ahora". </li>
35
- <li> La pantalla del teléfono se reflejará en su PC. Puede utilizar el ratón para controlar la pantalla del teléfono. </li>
36
- <li>Abrir Stumble chicos en el teléfono y empezar a jugar en su PC.</li>
37
- </ol>
38
- <p>Las ventajas de iMyFone MirrorTo sobre emuladores son:</p>
39
- <ul>
40
- <li> Es fácil de usar y no requiere ninguna instalación o configuración en su teléfono. </li>
41
- <li> No consume mucho espacio o recursos en su PC, ya que solo refleja la pantalla del teléfono. </li>
42
- <li> No afecta el rendimiento o la calidad del juego, ya que se ejecuta de forma nativa en su teléfono. </li>
43
- </ul>
44
- <h3>Método 2: Android-x86</h3>
45
- <p>Android-x86 es un proyecto que conecta el sistema operativo Android para ejecutarse en equipos basados en x86. Puedes usarlo para jugar Stumble Guys en PC sin emulador siguiendo estos pasos:</p>
46
- <ol>
47
- <li>Descargar la última versión de Android-x86 desde <a href="">aquí</a>. </li>
48
- <li>Crear una unidad USB de arranque con Android-x86 utilizando una herramienta como Rufus o UNetbootin.</li>
49
- <li>Reinicie su PC y arranque desde la unidad USB. Verá un menú con diferentes opciones. Elija "Ejecutar Android-x86 sin instalación". </li>
50
- <li>Entrarás en el escritorio Android-x86. Conéctate a una red Wi-Fi y abre la aplicación Google Play Store. </li>
51
- <li>Inicia sesión con tu cuenta de Google y descarga Stumble Guys desde Google Play Store.</li>
52
- <li>Abrir Stumble chicos y empezar a jugar en su PC.</li>
53
- </ol>
54
- <p>Las ventajas de Android-x86 sobre los emuladores son:</p>
55
- <ul>
56
- <li>Se ejecuta más rápido y más suave que los emuladores, ya que utiliza el hardware nativo de su PC.</li>
57
- <li>Soporta más características y aplicaciones que emuladores, ya que se basa en el código fuente oficial de Android. </li>
58
- </ul>
59
- <p>Las desventajas de Android-x86 son:</p>
60
- <ul>
61
-
62
- <li>No soporta controles de teclado y ratón, ya que está diseñado para pantallas táctiles. </li>
63
- </ul>
64
- <h3>Método 3: Soldador ARC</h3>
65
- <p>ARC Welder es una extensión de Chrome que le permite ejecutar aplicaciones Android en su PC utilizando el navegador Chrome. Puedes usarlo para jugar Stumble Guys en PC sin emulador siguiendo estos pasos:</p>
66
- <ol>
67
- <li>Descargue e instale el navegador Chrome en su PC desde <a href="">aquí</a>. </li>
68
- <li>Descargue e instale la extensión ARC Welder desde <a href="">aquí</a>. </li>
69
- <li>Descargar el archivo APK Stumble Guys de <a href="">aquí</a>. </li>
70
- <li>Inicie la extensión ARC Welder y haga clic en "Añadir su APK". </li>
71
- <li> Seleccione el archivo APK Stumble Guys y configurar los ajustes que desee. </li>
72
- <li>Haga clic en "Prueba" para ejecutar Stumble Guys en su PC.</li>
73
- </ol>
74
- <p>Las ventajas de ARC Welder sobre los emuladores son:</p>
75
- <ul>
76
- <li> Es simple y rápido de usar, ya que no requiere ninguna instalación o configuración en su PC.</li>
77
- <li>No ocupa mucho espacio ni recursos en tu PC, ya que se ejecuta en tu navegador. </li>
78
- <li>Soporta controles de teclado y ratón, así como mandos y joysticks. </li>
79
- </ul>
80
- <p>Las desventajas de ARC Welder son:</p>
81
- <ul>
82
- <li>No es compatible con Google Play Services, lo que significa que no puede iniciar sesión con su cuenta de Google o acceder a algunas características del juego. </li>
83
- <li> No tiene un buen rendimiento o calidad, ya que no está optimizado para ejecutar aplicaciones Android. </li>
84
- <li> Puede tener problemas de compatibilidad y estabilidad, lo que puede causar fallos o errores. </li>
85
- </ul>
86
- <h2>Conclusión</h2>
87
-
88
- <h2>Preguntas frecuentes</h2>
89
- <h4>Q1: ¿Es Stumble Guys libre para jugar? </h4>
90
- <p>A1: Sí, Stumble Guys es libre de jugar, pero tiene compras en la aplicación para artículos cosméticos y emotes. </p>
91
- <h4>Q2: ¿Puedo jugar Stumble Guys con mis amigos? </h4>
92
- <p>A2: Sí, puedes invitar a tus amigos a unirse a tu grupo y competir contra otros jugadores en línea. </p>
93
- <h4>Q3: ¿Cuáles son los requisitos del sistema para Stumble Guys? </h4>
94
- <p>A3: Para dispositivos Android, necesita al menos Android 5.0 y 100 MB de espacio libre. Para PC, necesita al menos Windows 10, procesador Intel Core 2 Duo E8400 o AMD Phenom II X4 965, 4 GB de RAM, AMD Radeon HD 7750 o tarjeta gráfica NVIDIA Geforce GTX 260 y conexión a Internet de banda ancha. </p>
95
- <h4>Q4: ¿Cuántos jugadores pueden jugar Stumble Guys a la vez? </h4>
96
- <p>A4: Stumble Guys admite hasta 32 jugadores online en cada ronda. </p>
97
- <h4>Q5: ¿Cuántos niveles hay en Stumble Guys? </h4>
98
- <p>A5: Actualmente hay 17 carreras de obstáculos únicas en Stumble Guys, cada una con diferentes desafíos y temas. </p> 64aa2da5cf<br />
99
- <br />
100
- <br />
 
 
 
spaces/BernardoOlisan/vqganclip/taming-transformers/setup.py DELETED
@@ -1,13 +0,0 @@
- from setuptools import setup, find_packages
-
- setup(
-     name='taming-transformers',
-     version='0.0.1',
-     description='Taming Transformers for High-Resolution Image Synthesis',
-     packages=find_packages(),
-     install_requires=[
-         'torch',
-         'numpy',
-         'tqdm',
-     ],
- )
 
 
 
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/langthaimodel.py DELETED
The diff for this file is too large to render. See raw diff
 
spaces/Branon/oai-proxy/Dockerfile DELETED
@@ -1,11 +0,0 @@
- FROM node:18-bullseye-slim
- RUN apt-get update && \
-     apt-get install -y git
- RUN git clone https://gitlab.com/khanon/oai-proxy.git /app
- WORKDIR /app
- RUN npm install
- COPY Dockerfile greeting.md* .env* ./
- RUN npm run build
- EXPOSE 7860
- ENV NODE_ENV=production
- CMD [ "npm", "start" ]
 
 
 
spaces/CAPTY222/runwayml-stable-diffusion-v1-5/app.py DELETED
@@ -1,3 +0,0 @@
- import gradio as gr
-
- gr.Interface.load("models/runwayml/stable-diffusion-v1-5").launch()
 
 
 
 
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TridentNet/train_net.py DELETED
@@ -1,67 +0,0 @@
- #!/usr/bin/env python3
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
- """
- TridentNet Training Script.
-
- This script is a simplified version of the training script in detectron2/tools.
- """
-
- import os
-
- from detectron2.checkpoint import DetectionCheckpointer
- from detectron2.config import get_cfg
- from detectron2.engine import DefaultTrainer, default_argument_parser, default_setup, launch
- from detectron2.evaluation import COCOEvaluator
-
- from tridentnet import add_tridentnet_config
-
-
- class Trainer(DefaultTrainer):
-     @classmethod
-     def build_evaluator(cls, cfg, dataset_name, output_folder=None):
-         if output_folder is None:
-             output_folder = os.path.join(cfg.OUTPUT_DIR, "inference")
-         return COCOEvaluator(dataset_name, cfg, True, output_folder)
-
-
- def setup(args):
-     """
-     Create configs and perform basic setups.
-     """
-     cfg = get_cfg()
-     add_tridentnet_config(cfg)
-     cfg.merge_from_file(args.config_file)
-     cfg.merge_from_list(args.opts)
-     cfg.freeze()
-     default_setup(cfg, args)
-     return cfg
-
-
- def main(args):
-     cfg = setup(args)
-
-     if args.eval_only:
-         model = Trainer.build_model(cfg)
-         DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load(
-             cfg.MODEL.WEIGHTS, resume=args.resume
-         )
-         res = Trainer.test(cfg, model)
-         return res
-
-     trainer = Trainer(cfg)
-     trainer.resume_or_load(resume=args.resume)
-     return trainer.train()
-
-
- if __name__ == "__main__":
-     args = default_argument_parser().parse_args()
-     print("Command Line Args:", args)
-     launch(
-         main,
-         args.num_gpus,
-         num_machines=args.num_machines,
-         machine_rank=args.machine_rank,
-         dist_url=args.dist_url,
-         args=(args,),
-     )
 
 
 
spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/butd/adapter.py DELETED
@@ -1,71 +0,0 @@
- # --------------------------------------------------------
- # OpenVQA
- # Written by Zhenwei Shao https://github.com/ParadoxZW
- # --------------------------------------------------------
-
- import torch.nn as nn
- import torch
- from openvqa.core.base_dataset import BaseAdapter
- from openvqa.utils.make_mask import make_mask
-
-
- class Adapter(BaseAdapter):
-     def __init__(self, __C):
-         super(Adapter, self).__init__(__C)
-         self.__C = __C
-
-
-     def vqa_init(self, __C):
-         pass
-         # self.frcn_linear = nn.Linear(__C.FEAT_SIZE['vqa']['FRCN_FEAT_SIZE'][1], __C.HIDDEN_SIZE)
-
-     def gqa_init(self, __C):
-         imgfeat_linear_size = __C.FEAT_SIZE['gqa']['FRCN_FEAT_SIZE'][1]
-         if __C.USE_BBOX_FEAT:
-             self.bbox_linear = nn.Linear(5, __C.BBOXFEAT_EMB_SIZE)
-             imgfeat_linear_size += __C.BBOXFEAT_EMB_SIZE
-         self.frcn_linear = nn.Linear(imgfeat_linear_size, __C.HIDDEN_SIZE)
-
-         if __C.USE_AUX_FEAT:
-             self.grid_linear = nn.Linear(
-                 __C.FEAT_SIZE['gqa']['GRID_FEAT_SIZE'][1], __C.HIDDEN_SIZE)
-
-     def clevr_init(self, __C):
-         self.grid_linear = nn.Linear(
-             __C.FEAT_SIZE['clevr']['GRID_FEAT_SIZE'][1], __C.HIDDEN_SIZE)
-
-     def vqa_forward(self, feat_dict):
-         frcn_feat = feat_dict['FRCN_FEAT']
-         bbox_feat = feat_dict['BBOX_FEAT']
-
-         img_feat_mask = make_mask(frcn_feat)
-         # img_feat = self.frcn_linear(frcn_feat)
-
-         return frcn_feat, img_feat_mask
-
-
-     def gqa_forward(self, feat_dict):
-         frcn_feat = feat_dict['FRCN_FEAT']
-         bbox_feat = feat_dict['BBOX_FEAT']
-         grid_feat = feat_dict['GRID_FEAT']
-
-         img_feat_mask = make_mask(frcn_feat)
-
-         if self.__C.USE_BBOX_FEAT:
-             bbox_feat = self.bbox_linear(bbox_feat)
-             frcn_feat = torch.cat((frcn_feat, bbox_feat), dim=-1)
-         img_feat = self.frcn_linear(frcn_feat)
-
-         return img_feat, img_feat_mask
-
-
-     def clevr_forward(self, feat_dict):
-         grid_feat = feat_dict['GRID_FEAT']
-
-         img_feat_mask = make_mask(grid_feat)
-         img_feat = self.grid_linear(grid_feat)
-
-         return img_feat, img_feat_mask
-
-
-
 
 
 
spaces/CVPR/LIVE/thrust/thrust/random/normal_distribution.h DELETED
@@ -1,275 +0,0 @@
1
- /*
2
- * Copyright 2008-2013 NVIDIA Corporation
3
- *
4
- * Licensed under the Apache License, Version 2.0 (the "License");
5
- * you may not use this file except in compliance with the License.
6
- * You may obtain a copy of the License at
7
- *
8
- * http://www.apache.org/licenses/LICENSE-2.0
9
- *
10
- * Unless required by applicable law or agreed to in writing, software
11
- * distributed under the License is distributed on an "AS IS" BASIS,
12
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- * See the License for the specific language governing permissions and
14
- * limitations under the License.
15
- */
16
-
17
-
18
- /*! \file normal_distribution.h
19
- * \brief A normal (Gaussian) distribution of real-valued numbers.
20
- */
21
-
22
- #pragma once
23
-
24
- #include <thrust/detail/config.h>
25
- #include <thrust/pair.h>
26
- #include <thrust/random/detail/random_core_access.h>
27
- #include <thrust/random/detail/normal_distribution_base.h>
28
- #include <iostream>
29
-
30
- namespace thrust
31
- {
32
-
33
- namespace random
34
- {
35
-
36
-
37
- /*! \addtogroup random_number_distributions
38
- * \{
39
- */
40
-
41
- /*! \class normal_distribution
42
- * \brief A \p normal_distribution random number distribution produces floating point
43
- * Normally distributed random numbers.
44
- *
45
- * \tparam RealType The type of floating point number to produce.
46
- *
47
- * The following code snippet demonstrates examples of using a \p normal_distribution with a
48
- * random number engine to produce random values drawn from the Normal distribution with a given
49
- * mean and variance:
50
- *
51
- * \code
52
- * #include <thrust/random/linear_congruential_engine.h>
53
- * #include <thrust/random/normal_distribution.h>
54
- *
55
- * int main(void)
56
- * {
57
- * // create a minstd_rand object to act as our source of randomness
58
- * thrust::minstd_rand rng;
59
- *
60
- * // create a normal_distribution to produce floats from the Normal distribution
61
- * // with mean 2.0 and standard deviation 3.5
62
- * thrust::random::normal_distribution<float> dist(2.0f, 3.5f);
63
- *
64
- * // write a random number to standard output
65
- * std::cout << dist(rng) << std::endl;
66
- *
67
- * // write the mean of the distribution, just in case we forgot
68
- * std::cout << dist.mean() << std::endl;
69
- *
70
- * // 2.0 is printed
71
- *
72
- * // and the standard deviation
73
- * std::cout << dist.stddev() << std::endl;
74
- *
75
- * // 3.5 is printed
76
- *
77
- * return 0;
78
- * }
79
- * \endcode
80
- */
81
- template<typename RealType = double>
82
- class normal_distribution
83
- : public detail::normal_distribution_base<RealType>::type
84
- {
85
- private:
86
- typedef typename detail::normal_distribution_base<RealType>::type super_t;
87
-
88
- public:
89
- // types
90
-
91
- /*! \typedef result_type
92
- * \brief The type of the floating point number produced by this \p normal_distribution.
93
- */
94
- typedef RealType result_type;
95
-
96
- /*! \typedef param_type
97
- * \brief The type of the object encapsulating this \p normal_distribution's parameters.
98
- */
99
- typedef thrust::pair<RealType,RealType> param_type;
100
-
101
- // constructors and reset functions
102
-
103
- /*! This constructor creates a new \p normal_distribution from two values defining the
104
- * half-open interval of the distribution.
105
- *
106
- * \param mean The mean (expected value) of the distribution. Defaults to \c 0.0.
107
- * \param stddev The standard deviation of the distribution. Defaults to \c 1.0.
108
- */
109
- __host__ __device__
110
- explicit normal_distribution(RealType mean = 0.0, RealType stddev = 1.0);
111
-
112
- /*! This constructor creates a new \p normal_distribution from a \p param_type object
113
- * encapsulating the range of the distribution.
114
- *
115
- * \param parm A \p param_type object encapsulating the parameters (i.e., the mean and standard deviation) of the distribution.
116
- */
117
- __host__ __device__
118
- explicit normal_distribution(const param_type &parm);
119
-
120
- /*! Calling this member function guarantees that subsequent uses of this
121
- * \p normal_distribution do not depend on values produced by any random
122
- * number generator prior to invoking this function.
123
- */
124
- __host__ __device__
125
- void reset(void);
126
-
127
- // generating functions
128
-
129
- /*! This method produces a new Normal random integer drawn from this \p normal_distribution's
130
- * range using a \p UniformRandomNumberGenerator as a source of randomness.
131
- *
132
- * \param urng The \p UniformRandomNumberGenerator to use as a source of randomness.
133
- */
134
- template<typename UniformRandomNumberGenerator>
135
- __host__ __device__
136
- result_type operator()(UniformRandomNumberGenerator &urng);
137
-
138
- /*! This method produces a new Normal random integer as if by creating a new \p normal_distribution
139
- * from the given \p param_type object, and calling its <tt>operator()</tt> method with the given
140
- * \p UniformRandomNumberGenerator as a source of randomness.
141
- *
142
- * \param urng The \p UniformRandomNumberGenerator to use as a source of randomness.
143
- * \param parm A \p param_type object encapsulating the parameters of the \p normal_distribution
144
- * to draw from.
145
- */
146
- template<typename UniformRandomNumberGenerator>
147
- __host__ __device__
148
- result_type operator()(UniformRandomNumberGenerator &urng, const param_type &parm);
149
-
150
- // property functions
151
-
152
- /*! This method returns the value of the parameter with which this \p normal_distribution
153
- * was constructed.
154
- *
155
- * \return The mean (expected value) of this \p normal_distribution's output.
156
- */
157
- __host__ __device__
158
- result_type mean(void) const;
159
-
160
- /*! This method returns the value of the parameter with which this \p normal_distribution
161
- * was constructed.
162
- *
163
- * \return The standard deviation of this \p uniform_real_distribution's output.
164
- */
165
- __host__ __device__
166
- result_type stddev(void) const;
167
-
168
- /*! This method returns a \p param_type object encapsulating the parameters with which this
169
- * \p normal_distribution was constructed.
170
- *
171
- * \return A \p param_type object encapsulating the parameters (i.e., the mean and standard deviation) of this \p normal_distribution.
172
- */
173
- __host__ __device__
174
- param_type param(void) const;
175
-
176
- /*! This method changes the parameters of this \p normal_distribution using the values encapsulated
177
- * in a given \p param_type object.
178
- *
179
- * \param parm A \p param_type object encapsulating the new parameters (i.e., the mean and variance) of this \p normal_distribution.
180
- */
181
- __host__ __device__
182
- void param(const param_type &parm);
183
-
184
- /*! This method returns the smallest floating point number this \p normal_distribution can potentially produce.
185
- *
186
- * \return The lower bound of this \p normal_distribution's half-open interval.
187
- */
188
- __host__ __device__
189
- result_type min THRUST_PREVENT_MACRO_SUBSTITUTION (void) const;
190
-
191
- /*! This method returns the smallest number larger than largest floating point number this \p uniform_real_distribution can potentially produce.
192
- *
193
- * \return The upper bound of this \p normal_distribution's half-open interval.
194
- */
195
- __host__ __device__
196
- result_type max THRUST_PREVENT_MACRO_SUBSTITUTION (void) const;
197
-
198
- /*! \cond
199
- */
200
- private:
201
- param_type m_param;
202
-
203
- friend struct thrust::random::detail::random_core_access;
204
-
205
- __host__ __device__
206
- bool equal(const normal_distribution &rhs) const;
207
-
208
- template<typename CharT, typename Traits>
209
- std::basic_ostream<CharT,Traits>& stream_out(std::basic_ostream<CharT,Traits> &os) const;
210
-
211
- template<typename CharT, typename Traits>
212
- std::basic_istream<CharT,Traits>& stream_in(std::basic_istream<CharT,Traits> &is);
213
- /*! \endcond
214
- */
215
- }; // end normal_distribution
216
-
217
-
218
- /*! This function checks two \p normal_distributions for equality.
219
- * \param lhs The first \p normal_distribution to test.
220
- * \param rhs The second \p normal_distribution to test.
221
- * \return \c true if \p lhs is equal to \p rhs; \c false, otherwise.
222
- */
223
- template<typename RealType>
224
- __host__ __device__
225
- bool operator==(const normal_distribution<RealType> &lhs,
226
- const normal_distribution<RealType> &rhs);
227
-
228
-
229
- /*! This function checks two \p normal_distributions for inequality.
230
- * \param lhs The first \p normal_distribution to test.
231
- * \param rhs The second \p normal_distribution to test.
232
- * \return \c true if \p lhs is not equal to \p rhs; \c false, otherwise.
233
- */
234
- template<typename RealType>
235
- __host__ __device__
236
- bool operator!=(const normal_distribution<RealType> &lhs,
237
- const normal_distribution<RealType> &rhs);
238
-
239
-
240
- /*! This function streams a normal_distribution to a \p std::basic_ostream.
241
- * \param os The \p basic_ostream to stream out to.
242
- * \param d The \p normal_distribution to stream out.
243
- * \return \p os
244
- */
245
- template<typename RealType,
246
- typename CharT, typename Traits>
247
- std::basic_ostream<CharT,Traits>&
248
- operator<<(std::basic_ostream<CharT,Traits> &os,
249
- const normal_distribution<RealType> &d);
250
-
251
-
252
- /*! This function streams a normal_distribution in from a std::basic_istream.
253
- * \param is The \p basic_istream to stream from.
254
- * \param d The \p normal_distribution to stream in.
255
- * \return \p is
256
- */
257
- template<typename RealType,
258
- typename CharT, typename Traits>
259
- std::basic_istream<CharT,Traits>&
260
- operator>>(std::basic_istream<CharT,Traits> &is,
261
- normal_distribution<RealType> &d);
262
-
263
-
264
- /*! \} // end random_number_distributions
265
- */
266
-
267
-
268
- } // end random
269
-
270
- using random::normal_distribution;
271
-
272
- } // end thrust
273
-
274
- #include <thrust/random/detail/normal_distribution.inl>
275
-
 
spaces/CVPR/Text2Human/Text2Human/models/archs/__init__.py DELETED
File without changes
spaces/CVPR/WALT/mmdet/core/bbox/__init__.py DELETED
@@ -1,27 +0,0 @@
- from .assigners import (AssignResult, BaseAssigner, CenterRegionAssigner,
-                         MaxIoUAssigner, RegionAssigner)
- from .builder import build_assigner, build_bbox_coder, build_sampler
- from .coder import (BaseBBoxCoder, DeltaXYWHBBoxCoder, PseudoBBoxCoder,
-                     TBLRBBoxCoder)
- from .iou_calculators import BboxOverlaps2D, bbox_overlaps
- from .samplers import (BaseSampler, CombinedSampler,
-                        InstanceBalancedPosSampler, IoUBalancedNegSampler,
-                        OHEMSampler, PseudoSampler, RandomSampler,
-                        SamplingResult, ScoreHLRSampler)
- from .transforms import (bbox2distance, bbox2result, bbox2roi,
-                          bbox_cxcywh_to_xyxy, bbox_flip, bbox_mapping,
-                          bbox_mapping_back, bbox_rescale, bbox_xyxy_to_cxcywh,
-                          distance2bbox, roi2bbox)
-
- __all__ = [
-     'bbox_overlaps', 'BboxOverlaps2D', 'BaseAssigner', 'MaxIoUAssigner',
-     'AssignResult', 'BaseSampler', 'PseudoSampler', 'RandomSampler',
-     'InstanceBalancedPosSampler', 'IoUBalancedNegSampler', 'CombinedSampler',
-     'OHEMSampler', 'SamplingResult', 'ScoreHLRSampler', 'build_assigner',
-     'build_sampler', 'bbox_flip', 'bbox_mapping', 'bbox_mapping_back',
-     'bbox2roi', 'roi2bbox', 'bbox2result', 'distance2bbox', 'bbox2distance',
-     'build_bbox_coder', 'BaseBBoxCoder', 'PseudoBBoxCoder',
-     'DeltaXYWHBBoxCoder', 'TBLRBBoxCoder', 'CenterRegionAssigner',
-     'bbox_rescale', 'bbox_cxcywh_to_xyxy', 'bbox_xyxy_to_cxcywh',
-     'RegionAssigner'
- ]
 
spaces/CVPR/WALT/mmdet/core/post_processing/merge_augs.py DELETED
@@ -1,150 +0,0 @@
- import copy
- import warnings
-
- import numpy as np
- import torch
- from mmcv import ConfigDict
- from mmcv.ops import nms
-
- from ..bbox import bbox_mapping_back
-
-
- def merge_aug_proposals(aug_proposals, img_metas, cfg):
-     """Merge augmented proposals (multiscale, flip, etc.)
-
-     Args:
-         aug_proposals (list[Tensor]): proposals from different testing
-             schemes, shape (n, 5). Note that they are not rescaled to the
-             original image size.
-
-         img_metas (list[dict]): list of image info dict where each dict has:
-             'img_shape', 'scale_factor', 'flip', and may also contain
-             'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
-             For details on the values of these keys see
-             `mmdet/datasets/pipelines/formatting.py:Collect`.
-
-         cfg (dict): rpn test config.
-
-     Returns:
-         Tensor: shape (n, 4), proposals corresponding to original image scale.
-     """
-
-     cfg = copy.deepcopy(cfg)
-
-     # deprecate arguments warning
-     if 'nms' not in cfg or 'max_num' in cfg or 'nms_thr' in cfg:
-         warnings.warn(
-             'In rpn_proposal or test_cfg, '
-             'nms_thr has been moved to a dict named nms as '
-             'iou_threshold, max_num has been renamed as max_per_img, '
-             'name of original arguments and the way to specify '
-             'iou_threshold of NMS will be deprecated.')
-     if 'nms' not in cfg:
-         cfg.nms = ConfigDict(dict(type='nms', iou_threshold=cfg.nms_thr))
-     if 'max_num' in cfg:
-         if 'max_per_img' in cfg:
-             assert cfg.max_num == cfg.max_per_img, f'You set max_num and ' \
-                 f'max_per_img at the same time, but get {cfg.max_num} ' \
-                 f'and {cfg.max_per_img} respectively' \
-                 f'Please delete max_num which will be deprecated.'
-         else:
-             cfg.max_per_img = cfg.max_num
-     if 'nms_thr' in cfg:
-         assert cfg.nms.iou_threshold == cfg.nms_thr, f'You set ' \
-             f'iou_threshold in nms and ' \
-             f'nms_thr at the same time, but get ' \
-             f'{cfg.nms.iou_threshold} and {cfg.nms_thr}' \
-             f' respectively. Please delete the nms_thr ' \
-             f'which will be deprecated.'
-
-     recovered_proposals = []
-     for proposals, img_info in zip(aug_proposals, img_metas):
-         img_shape = img_info['img_shape']
-         scale_factor = img_info['scale_factor']
-         flip = img_info['flip']
-         flip_direction = img_info['flip_direction']
-         _proposals = proposals.clone()
-         _proposals[:, :4] = bbox_mapping_back(_proposals[:, :4], img_shape,
-                                               scale_factor, flip,
-                                               flip_direction)
-         recovered_proposals.append(_proposals)
-     aug_proposals = torch.cat(recovered_proposals, dim=0)
-     merged_proposals, _ = nms(aug_proposals[:, :4].contiguous(),
-                               aug_proposals[:, -1].contiguous(),
-                               cfg.nms.iou_threshold)
-     scores = merged_proposals[:, 4]
-     _, order = scores.sort(0, descending=True)
-     num = min(cfg.max_per_img, merged_proposals.shape[0])
-     order = order[:num]
-     merged_proposals = merged_proposals[order, :]
-     return merged_proposals
-
-
- def merge_aug_bboxes(aug_bboxes, aug_scores, img_metas, rcnn_test_cfg):
-     """Merge augmented detection bboxes and scores.
-
-     Args:
-         aug_bboxes (list[Tensor]): shape (n, 4*#class)
-         aug_scores (list[Tensor] or None): shape (n, #class)
-         img_shapes (list[Tensor]): shape (3, ).
-         rcnn_test_cfg (dict): rcnn test config.
-
-     Returns:
-         tuple: (bboxes, scores)
-     """
-     recovered_bboxes = []
-     for bboxes, img_info in zip(aug_bboxes, img_metas):
-         img_shape = img_info[0]['img_shape']
-         scale_factor = img_info[0]['scale_factor']
-         flip = img_info[0]['flip']
-         flip_direction = img_info[0]['flip_direction']
-         bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip,
-                                    flip_direction)
-         recovered_bboxes.append(bboxes)
-     bboxes = torch.stack(recovered_bboxes).mean(dim=0)
-     if aug_scores is None:
-         return bboxes
-     else:
-         scores = torch.stack(aug_scores).mean(dim=0)
-         return bboxes, scores
-
-
- def merge_aug_scores(aug_scores):
-     """Merge augmented bbox scores."""
-     if isinstance(aug_scores[0], torch.Tensor):
-         return torch.mean(torch.stack(aug_scores), dim=0)
-     else:
-         return np.mean(aug_scores, axis=0)
-
-
- def merge_aug_masks(aug_masks, img_metas, rcnn_test_cfg, weights=None):
-     """Merge augmented mask prediction.
-
-     Args:
-         aug_masks (list[ndarray]): shape (n, #class, h, w)
-         img_shapes (list[ndarray]): shape (3, ).
-         rcnn_test_cfg (dict): rcnn test config.
-
-     Returns:
-         tuple: (bboxes, scores)
-     """
-     recovered_masks = []
-     for mask, img_info in zip(aug_masks, img_metas):
-         flip = img_info[0]['flip']
-         flip_direction = img_info[0]['flip_direction']
-         if flip:
-             if flip_direction == 'horizontal':
-                 mask = mask[:, :, :, ::-1]
-             elif flip_direction == 'vertical':
-                 mask = mask[:, :, ::-1, :]
-             else:
-                 raise ValueError(
-                     f"Invalid flipping direction '{flip_direction}'")
-         recovered_masks.append(mask)
-
-     if weights is None:
-         merged_masks = np.mean(recovered_masks, axis=0)
-     else:
-         merged_masks = np.average(
-             np.array(recovered_masks), axis=0, weights=np.array(weights))
-     return merged_masks
 
spaces/CVPR/regionclip-demo/detectron2/export/caffe2_modeling.py DELETED
@@ -1,503 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates.
2
-
3
- import functools
4
- import io
5
- import struct
6
- import types
7
- import torch
8
-
9
- from detectron2.modeling import meta_arch
10
- from detectron2.modeling.box_regression import Box2BoxTransform
11
- from detectron2.modeling.meta_arch.panoptic_fpn import combine_semantic_and_instance_outputs
12
- from detectron2.modeling.meta_arch.retinanet import permute_to_N_HWA_K
13
- from detectron2.modeling.postprocessing import detector_postprocess, sem_seg_postprocess
14
- from detectron2.modeling.roi_heads import keypoint_head
15
- from detectron2.structures import Boxes, ImageList, Instances, RotatedBoxes
16
-
17
- from .c10 import Caffe2Compatible
18
- from .caffe2_patch import ROIHeadsPatcher, patch_generalized_rcnn
19
- from .shared import (
20
- alias,
21
- check_set_pb_arg,
22
- get_pb_arg_floats,
23
- get_pb_arg_valf,
24
- get_pb_arg_vali,
25
- get_pb_arg_vals,
26
- mock_torch_nn_functional_interpolate,
27
- )
28
-
29
-
30
- def assemble_rcnn_outputs_by_name(image_sizes, tensor_outputs, force_mask_on=False):
31
- """
32
- A function to assemble caffe2 model's outputs (i.e. Dict[str, Tensor])
33
- to detectron2's format (i.e. list of Instances instance).
34
- This only works when the model follows the Caffe2 detectron's naming convention.
35
-
36
- Args:
37
- image_sizes (List[List[int, int]]): [H, W] of every image.
38
- tensor_outputs (Dict[str, Tensor]): external_output to its tensor.
39
-
40
- force_mask_on (Bool): if true, the it make sure there'll be pred_masks even
41
- if the mask is not found from tensor_outputs (usually due to model crash)
42
- """
43
-
44
- results = [Instances(image_size) for image_size in image_sizes]
45
-
46
- batch_splits = tensor_outputs.get("batch_splits", None)
47
- if batch_splits:
48
- raise NotImplementedError()
49
- assert len(image_sizes) == 1
50
- result = results[0]
51
-
52
- bbox_nms = tensor_outputs["bbox_nms"]
53
- score_nms = tensor_outputs["score_nms"]
54
- class_nms = tensor_outputs["class_nms"]
55
- # Detection will always success because Conv support 0-batch
56
- assert bbox_nms is not None
57
- assert score_nms is not None
58
- assert class_nms is not None
59
- if bbox_nms.shape[1] == 5:
60
- result.pred_boxes = RotatedBoxes(bbox_nms)
61
- else:
62
- result.pred_boxes = Boxes(bbox_nms)
63
- result.scores = score_nms
64
- result.pred_classes = class_nms.to(torch.int64)
65
-
66
- mask_fcn_probs = tensor_outputs.get("mask_fcn_probs", None)
67
- if mask_fcn_probs is not None:
68
- # finish the mask pred
69
- mask_probs_pred = mask_fcn_probs
70
- num_masks = mask_probs_pred.shape[0]
71
- class_pred = result.pred_classes
72
- indices = torch.arange(num_masks, device=class_pred.device)
73
- mask_probs_pred = mask_probs_pred[indices, class_pred][:, None]
74
- result.pred_masks = mask_probs_pred
75
- elif force_mask_on:
76
- # NOTE: there's no way to know the height/width of mask here, it won't be
77
- # used anyway when batch size is 0, so just set them to 0.
78
- result.pred_masks = torch.zeros([0, 1, 0, 0], dtype=torch.uint8)
79
-
80
- keypoints_out = tensor_outputs.get("keypoints_out", None)
81
- kps_score = tensor_outputs.get("kps_score", None)
82
- if keypoints_out is not None:
83
- # keypoints_out: [N, 4, #kypoints], where 4 is in order of (x, y, score, prob)
84
- keypoints_tensor = keypoints_out
85
- # NOTE: it's possible that prob is not calculated if "should_output_softmax"
86
- # is set to False in HeatmapMaxKeypoint, so just using raw score, seems
87
- # it doesn't affect mAP. TODO: check more carefully.
88
- keypoint_xyp = keypoints_tensor.transpose(1, 2)[:, :, [0, 1, 2]]
89
- result.pred_keypoints = keypoint_xyp
90
- elif kps_score is not None:
91
- # keypoint heatmap to sparse data structure
92
- pred_keypoint_logits = kps_score
93
- keypoint_head.keypoint_rcnn_inference(pred_keypoint_logits, [result])
94
-
95
- return results
96
-
97
-
98
- def _cast_to_f32(f64):
99
- return struct.unpack("f", struct.pack("f", f64))[0]
100
-
101
-
102
- def set_caffe2_compatible_tensor_mode(model, enable=True):
103
- def _fn(m):
104
- if isinstance(m, Caffe2Compatible):
105
- m.tensor_mode = enable
106
-
107
- model.apply(_fn)
108
-
109
-
110
- def convert_batched_inputs_to_c2_format(batched_inputs, size_divisibility, device):
111
- """
112
- See get_caffe2_inputs() below.
113
- """
114
- assert all(isinstance(x, dict) for x in batched_inputs)
115
- assert all(x["image"].dim() == 3 for x in batched_inputs)
116
-
117
- images = [x["image"] for x in batched_inputs]
118
- images = ImageList.from_tensors(images, size_divisibility)
119
-
120
- im_info = []
121
- for input_per_image, image_size in zip(batched_inputs, images.image_sizes):
122
- target_height = input_per_image.get("height", image_size[0])
123
- target_width = input_per_image.get("width", image_size[1]) # noqa
124
- # NOTE: The scale inside im_info is kept as convention and for providing
125
- # post-processing information if further processing is needed. For
126
- # current Caffe2 model definitions that don't include post-processing inside
127
- # the model, this number is not used.
128
- # NOTE: There can be a slight difference between width and height
129
- # scales, using a single number can results in numerical difference
130
- # compared with D2's post-processing.
131
- scale = target_height / image_size[0]
132
- im_info.append([image_size[0], image_size[1], scale])
133
- im_info = torch.Tensor(im_info)
134
-
135
- return images.tensor.to(device), im_info.to(device)
136
-
137
-
138
- class Caffe2MetaArch(Caffe2Compatible, torch.nn.Module):
139
- """
140
- Base class for caffe2-compatible implementation of a meta architecture.
141
- The forward is traceable and its traced graph can be converted to caffe2
142
- graph through ONNX.
143
- """
144
-
145
- def __init__(self, cfg, torch_model):
146
- """
147
- Args:
148
- cfg (CfgNode):
149
- torch_model (nn.Module): the detectron2 model (meta_arch) to be
150
- converted.
151
- """
152
- super().__init__()
153
- self._wrapped_model = torch_model
154
- self.eval()
155
- set_caffe2_compatible_tensor_mode(self, True)
156
-
157
- def get_caffe2_inputs(self, batched_inputs):
158
- """
159
- Convert pytorch-style structured inputs to caffe2-style inputs that
160
- are tuples of tensors.
161
-
162
- Args:
163
- batched_inputs (list[dict]): inputs to a detectron2 model
164
- in its standard format. Each dict has "image" (CHW tensor), and optionally
165
- "height" and "width".
166
-
167
- Returns:
168
- tuple[Tensor]:
169
- tuple of tensors that will be the inputs to the
170
- :meth:`forward` method. For existing models, the first
171
- is an NCHW tensor (padded and batched); the second is
172
- a im_info Nx3 tensor, where the rows are
173
- (height, width, unused legacy parameter)
174
- """
175
- return convert_batched_inputs_to_c2_format(
176
- batched_inputs,
177
- self._wrapped_model.backbone.size_divisibility,
178
- self._wrapped_model.device,
179
- )
180
-
181
- def encode_additional_info(self, predict_net, init_net):
182
- """
183
- Save extra metadata that will be used by inference in the output protobuf.
184
- """
185
- pass
186
-
187
- def forward(self, inputs):
188
- """
189
- Run the forward in caffe2-style. It has to use caffe2-compatible ops
190
- and the method will be used for tracing.
191
-
192
- Args:
193
- inputs (tuple[Tensor]): inputs defined by :meth:`get_caffe2_input`.
194
- They will be the inputs of the converted caffe2 graph.
195
-
196
- Returns:
197
- tuple[Tensor]: output tensors. They will be the outputs of the
198
- converted caffe2 graph.
199
- """
200
- raise NotImplementedError
201
-
202
- def _caffe2_preprocess_image(self, inputs):
203
- """
204
- Caffe2 implementation of preprocess_image, which is called inside each MetaArch's forward.
205
- It normalizes the input images, and the final caffe2 graph assumes the
206
- inputs have been batched already.
207
- """
208
- data, im_info = inputs
209
- data = alias(data, "data")
210
- im_info = alias(im_info, "im_info")
211
- mean, std = self._wrapped_model.pixel_mean, self._wrapped_model.pixel_std
212
- normalized_data = (data - mean) / std
213
- normalized_data = alias(normalized_data, "normalized_data")
214
-
215
- # Pack (data, im_info) into ImageList which is recognized by self.inference.
216
- images = ImageList(tensor=normalized_data, image_sizes=im_info)
217
- return images
218
-
219
- @staticmethod
220
- def get_outputs_converter(predict_net, init_net):
221
- """
222
- Creates a function that converts outputs of the caffe2 model to
223
- detectron2's standard format.
224
- The function uses information in `predict_net` and `init_net` that are
225
- available at inferene time. Therefore the function logic can be used in inference.
226
-
227
- The returned function has the following signature:
228
-
229
- def convert(batched_inputs, c2_inputs, c2_results) -> detectron2_outputs
230
-
231
- Where
232
-
233
- * batched_inputs (list[dict]): the original input format of the meta arch
234
- * c2_inputs (tuple[Tensor]): the caffe2 inputs.
235
- * c2_results (dict[str, Tensor]): the caffe2 output format,
236
- corresponding to the outputs of the :meth:`forward` function.
237
- * detectron2_outputs: the original output format of the meta arch.
238
-
239
- This function can be used to compare the outputs of the original meta arch and
240
- the converted caffe2 graph.
241
-
242
- Returns:
243
- callable: a callable of the above signature.
244
- """
245
- raise NotImplementedError
246
-
247
-
248
- class Caffe2GeneralizedRCNN(Caffe2MetaArch):
249
- def __init__(self, cfg, torch_model):
250
- assert isinstance(torch_model, meta_arch.GeneralizedRCNN)
251
- torch_model = patch_generalized_rcnn(torch_model)
252
- super().__init__(cfg, torch_model)
253
-
254
- self.roi_heads_patcher = ROIHeadsPatcher(
255
- self._wrapped_model.roi_heads, cfg.EXPORT_CAFFE2.USE_HEATMAP_MAX_KEYPOINT
256
- )
257
-
258
- def encode_additional_info(self, predict_net, init_net):
259
- size_divisibility = self._wrapped_model.backbone.size_divisibility
260
- check_set_pb_arg(predict_net, "size_divisibility", "i", size_divisibility)
261
- check_set_pb_arg(
262
- predict_net, "device", "s", str.encode(str(self._wrapped_model.device), "ascii")
263
- )
264
- check_set_pb_arg(predict_net, "meta_architecture", "s", b"GeneralizedRCNN")
265
-
266
- @mock_torch_nn_functional_interpolate()
267
- def forward(self, inputs):
268
- if not self.tensor_mode:
269
- return self._wrapped_model.inference(inputs)
270
- images = self._caffe2_preprocess_image(inputs)
271
- features = self._wrapped_model.backbone(images.tensor)
272
- proposals, _ = self._wrapped_model.proposal_generator(images, features)
273
- with self.roi_heads_patcher.mock_roi_heads():
274
- detector_results, _ = self._wrapped_model.roi_heads(images, features, proposals)
275
- return tuple(detector_results[0].flatten())
276
-
277
- @staticmethod
278
- def get_outputs_converter(predict_net, init_net):
279
- def f(batched_inputs, c2_inputs, c2_results):
280
- _, im_info = c2_inputs
281
- image_sizes = [[int(im[0]), int(im[1])] for im in im_info]
282
- results = assemble_rcnn_outputs_by_name(image_sizes, c2_results)
283
- return meta_arch.GeneralizedRCNN._postprocess(results, batched_inputs, image_sizes)
284
-
285
- return f
286
-
287
-
288
- class Caffe2PanopticFPN(Caffe2MetaArch):
289
- def __init__(self, cfg, torch_model):
290
- assert isinstance(torch_model, meta_arch.PanopticFPN)
291
- torch_model = patch_generalized_rcnn(torch_model)
292
- super().__init__(cfg, torch_model)
293
-
294
- self.roi_heads_patcher = ROIHeadsPatcher(
295
- self._wrapped_model.roi_heads, cfg.EXPORT_CAFFE2.USE_HEATMAP_MAX_KEYPOINT
296
- )
297
-
298
- @mock_torch_nn_functional_interpolate()
299
- def forward(self, inputs):
300
- assert self.tensor_mode
301
- images = self._caffe2_preprocess_image(inputs)
302
- features = self._wrapped_model.backbone(images.tensor)
303
-
304
- sem_seg_results, _ = self._wrapped_model.sem_seg_head(features)
305
- sem_seg_results = alias(sem_seg_results, "sem_seg")
306
-
307
- proposals, _ = self._wrapped_model.proposal_generator(images, features)
308
-
309
- with self.roi_heads_patcher.mock_roi_heads(self.tensor_mode):
310
- detector_results, _ = self._wrapped_model.roi_heads(images, features, proposals)
311
-
312
- return tuple(detector_results[0].flatten()) + (sem_seg_results,)
313
-
314
- def encode_additional_info(self, predict_net, init_net):
315
- size_divisibility = self._wrapped_model.backbone.size_divisibility
316
- check_set_pb_arg(predict_net, "size_divisibility", "i", size_divisibility)
317
- check_set_pb_arg(
318
- predict_net, "device", "s", str.encode(str(self._wrapped_model.device), "ascii")
319
- )
320
- check_set_pb_arg(predict_net, "meta_architecture", "s", b"PanopticFPN")
321
-
322
- # Inference parameters:
323
- check_set_pb_arg(
324
- predict_net,
325
- "combine_overlap_threshold",
326
- "f",
327
- _cast_to_f32(self._wrapped_model.combine_overlap_thresh),
328
- )
329
- check_set_pb_arg(
330
- predict_net,
331
- "combine_stuff_area_limit",
332
- "i",
333
- self._wrapped_model.combine_stuff_area_thresh,
334
- )
335
- check_set_pb_arg(
336
- predict_net,
337
- "combine_instances_confidence_threshold",
338
- "f",
339
- _cast_to_f32(self._wrapped_model.combine_instances_score_thresh),
340
- )
341
-
342
- @staticmethod
343
- def get_outputs_converter(predict_net, init_net):
344
- combine_overlap_threshold = get_pb_arg_valf(predict_net, "combine_overlap_threshold", None)
345
- combine_stuff_area_limit = get_pb_arg_vali(predict_net, "combine_stuff_area_limit", None)
346
- combine_instances_confidence_threshold = get_pb_arg_valf(
347
- predict_net, "combine_instances_confidence_threshold", None
348
- )
349
-
350
- def f(batched_inputs, c2_inputs, c2_results):
351
- _, im_info = c2_inputs
352
- image_sizes = [[int(im[0]), int(im[1])] for im in im_info]
353
- detector_results = assemble_rcnn_outputs_by_name(
354
- image_sizes, c2_results, force_mask_on=True
355
- )
356
- sem_seg_results = c2_results["sem_seg"]
357
-
358
- # copied from meta_arch/panoptic_fpn.py ...
359
- processed_results = []
360
- for sem_seg_result, detector_result, input_per_image, image_size in zip(
361
- sem_seg_results, detector_results, batched_inputs, image_sizes
362
- ):
363
- height = input_per_image.get("height", image_size[0])
364
- width = input_per_image.get("width", image_size[1])
365
- sem_seg_r = sem_seg_postprocess(sem_seg_result, image_size, height, width)
366
- detector_r = detector_postprocess(detector_result, height, width)
367
-
368
- processed_results.append({"sem_seg": sem_seg_r, "instances": detector_r})
369
-
370
- panoptic_r = combine_semantic_and_instance_outputs(
371
- detector_r,
372
- sem_seg_r.argmax(dim=0),
373
- combine_overlap_threshold,
374
- combine_stuff_area_limit,
375
- combine_instances_confidence_threshold,
376
- )
377
- processed_results[-1]["panoptic_seg"] = panoptic_r
378
- return processed_results
379
-
380
- return f
381
-
382
-
383
- class Caffe2RetinaNet(Caffe2MetaArch):
384
- def __init__(self, cfg, torch_model):
385
- assert isinstance(torch_model, meta_arch.RetinaNet)
386
- super().__init__(cfg, torch_model)
387
-
388
- @mock_torch_nn_functional_interpolate()
389
- def forward(self, inputs):
390
- assert self.tensor_mode
391
- images = self._caffe2_preprocess_image(inputs)
392
-
393
- # explicitly return the images sizes to avoid removing "im_info" by ONNX
394
- # since it's not used in the forward path
395
- return_tensors = [images.image_sizes]
396
-
397
- features = self._wrapped_model.backbone(images.tensor)
398
- features = [features[f] for f in self._wrapped_model.head_in_features]
399
- for i, feature_i in enumerate(features):
400
- features[i] = alias(feature_i, "feature_{}".format(i), is_backward=True)
401
- return_tensors.append(features[i])
402
-
403
- pred_logits, pred_anchor_deltas = self._wrapped_model.head(features)
404
- for i, (box_cls_i, box_delta_i) in enumerate(zip(pred_logits, pred_anchor_deltas)):
405
- return_tensors.append(alias(box_cls_i, "box_cls_{}".format(i)))
406
- return_tensors.append(alias(box_delta_i, "box_delta_{}".format(i)))
407
-
408
- return tuple(return_tensors)
409
-
410
- def encode_additional_info(self, predict_net, init_net):
411
- size_divisibility = self._wrapped_model.backbone.size_divisibility
412
- check_set_pb_arg(predict_net, "size_divisibility", "i", size_divisibility)
413
- check_set_pb_arg(
414
- predict_net, "device", "s", str.encode(str(self._wrapped_model.device), "ascii")
415
- )
416
- check_set_pb_arg(predict_net, "meta_architecture", "s", b"RetinaNet")
417
-
418
- # Inference parameters:
419
- check_set_pb_arg(
420
- predict_net, "score_threshold", "f", _cast_to_f32(self._wrapped_model.test_score_thresh)
421
- )
422
- check_set_pb_arg(
423
- predict_net, "topk_candidates", "i", self._wrapped_model.test_topk_candidates
424
- )
425
- check_set_pb_arg(
426
- predict_net, "nms_threshold", "f", _cast_to_f32(self._wrapped_model.test_nms_thresh)
427
- )
428
- check_set_pb_arg(
429
- predict_net,
430
- "max_detections_per_image",
431
- "i",
432
- self._wrapped_model.max_detections_per_image,
433
- )
434
-
435
- check_set_pb_arg(
436
- predict_net,
437
- "bbox_reg_weights",
438
- "floats",
439
- [_cast_to_f32(w) for w in self._wrapped_model.box2box_transform.weights],
440
- )
441
- self._encode_anchor_generator_cfg(predict_net)
442
-
443
- def _encode_anchor_generator_cfg(self, predict_net):
444
- # serialize anchor_generator for future use
445
- serialized_anchor_generator = io.BytesIO()
446
- torch.save(self._wrapped_model.anchor_generator, serialized_anchor_generator)
447
- # Ideally we can put anchor generating inside the model, then we don't
448
- # need to store this information.
449
- bytes = serialized_anchor_generator.getvalue()
450
- check_set_pb_arg(predict_net, "serialized_anchor_generator", "s", bytes)
451
-
452
- @staticmethod
453
- def get_outputs_converter(predict_net, init_net):
454
- self = types.SimpleNamespace()
455
- serialized_anchor_generator = io.BytesIO(
456
- get_pb_arg_vals(predict_net, "serialized_anchor_generator", None)
457
- )
458
- self.anchor_generator = torch.load(serialized_anchor_generator)
459
- bbox_reg_weights = get_pb_arg_floats(predict_net, "bbox_reg_weights", None)
460
- self.box2box_transform = Box2BoxTransform(weights=tuple(bbox_reg_weights))
461
- self.test_score_thresh = get_pb_arg_valf(predict_net, "score_threshold", None)
462
- self.test_topk_candidates = get_pb_arg_vali(predict_net, "topk_candidates", None)
463
- self.test_nms_thresh = get_pb_arg_valf(predict_net, "nms_threshold", None)
464
- self.max_detections_per_image = get_pb_arg_vali(
465
- predict_net, "max_detections_per_image", None
466
- )
467
-
468
- # hack to reuse inference code from RetinaNet
469
- self.inference = functools.partial(meta_arch.RetinaNet.inference, self)
470
- self.inference_single_image = functools.partial(
471
- meta_arch.RetinaNet.inference_single_image, self
472
- )
473
-
474
- def f(batched_inputs, c2_inputs, c2_results):
475
- _, im_info = c2_inputs
476
- image_sizes = [[int(im[0]), int(im[1])] for im in im_info]
477
-
478
- num_features = len([x for x in c2_results.keys() if x.startswith("box_cls_")])
479
- pred_logits = [c2_results["box_cls_{}".format(i)] for i in range(num_features)]
480
- pred_anchor_deltas = [c2_results["box_delta_{}".format(i)] for i in range(num_features)]
481
-
482
- # For each feature level, feature should have the same batch size and
483
- # spatial dimension as the box_cls and box_delta.
484
- dummy_features = [x.clone()[:, 0:0, :, :] for x in pred_logits]
485
- anchors = self.anchor_generator(dummy_features)
486
-
487
- # self.num_classess can be inferred
488
- self.num_classes = pred_logits[0].shape[1] // (pred_anchor_deltas[0].shape[1] // 4)
489
-
490
- pred_logits = [permute_to_N_HWA_K(x, self.num_classes) for x in pred_logits]
491
- pred_anchor_deltas = [permute_to_N_HWA_K(x, 4) for x in pred_anchor_deltas]
492
-
493
- results = self.inference(anchors, pred_logits, pred_anchor_deltas, image_sizes)
494
- return meta_arch.GeneralizedRCNN._postprocess(results, batched_inputs, image_sizes)
495
-
496
- return f
497
-
498
-
499
- META_ARCH_CAFFE2_EXPORT_TYPE_MAP = {
500
- "GeneralizedRCNN": Caffe2GeneralizedRCNN,
501
- "PanopticFPN": Caffe2PanopticFPN,
502
- "RetinaNet": Caffe2RetinaNet,
503
- }
 
spaces/Catspin/2_ai_chat/index.html DELETED
@@ -1 +0,0 @@
- <iframe src="https://ai.catspin.eu.org/" scrolling="no" height="100%" width="100%" allowfullscreen></iframe>
 
 
spaces/CikeyQI/meme-api/meme_generator/memes/capoo_strike/__init__.py DELETED
@@ -1,44 +0,0 @@
- from pathlib import Path
- from typing import List
-
- from pil_utils import BuildImage
-
- from meme_generator import add_meme
- from meme_generator.utils import FrameAlignPolicy, Maker, make_gif_or_combined_gif
-
- img_dir = Path(__file__).parent / "images"
-
-
- def capoo_strike(images: List[BuildImage], texts, args):
-     params = (
-         (((0, 4), (153, 0), (138, 105), (0, 157)), (28, 47)),
-         (((1, 13), (151, 0), (130, 104), (0, 156)), (28, 48)),
-         (((9, 10), (156, 0), (152, 108), (0, 155)), (18, 51)),
-         (((0, 21), (150, 0), (146, 115), (7, 145)), (17, 53)),
-         (((0, 19), (156, 0), (199, 109), (31, 145)), (2, 62)),
-         (((0, 28), (156, 0), (171, 115), (12, 154)), (16, 58)),
-         (((0, 25), (157, 0), (169, 113), (13, 147)), (18, 63)),
-     )
-
-     def maker(i: int) -> Maker:
-         def make(img: BuildImage) -> BuildImage:
-             img = img.convert("RGBA").resize((200, 160), keep_ratio=True)
-             points, pos = params[i]
-             frame = BuildImage.open(img_dir / f"{i}.png")
-             frame.paste(img.perspective(points), pos, below=True)
-             return frame
-
-         return make
-
-     return make_gif_or_combined_gif(
-         images[0], maker, 7, 0.05, FrameAlignPolicy.extend_loop
-     )
-
-
- add_meme(
-     "capoo_strike",
-     capoo_strike,
-     min_images=1,
-     max_images=1,
-     keywords=["咖波撞", "咖波头槌"],
- )
 
spaces/CjangCjengh/Shanghainese-TTS/monotonic_align/core.c DELETED
The diff for this file is too large to render. See raw diff
 
spaces/ClearLove443/Robby-chatbot/modules/llm.py DELETED
@@ -1,28 +0,0 @@
- import json
- from typing import Any, List, Mapping, Optional
-
- import requests
- from langchain.callbacks.manager import CallbackManagerForLLMRun
- from langchain.llms.base import LLM
-
- url = "https://openai.proxy.onlyyounotothers.top/chat"
- headers = {"Content-Type": "application/json"}
-
-
- class ChatGLM(LLM):
-     @property
-     def _llm_type(self) -> str:
-         return "custom"
-
-     type = "custom"
-
-     # Override the base-class method: respond to the user's input prompt and return a string
-     def _call(
-         self,
-         prompt: str,
-         stop: Optional[List[str]] = None,
-         run_manager: Optional[CallbackManagerForLLMRun] = None,
-     ) -> str:
-         payload = json.dumps({"q": prompt})
-         response = requests.request("POST", url, headers=headers, data=payload)
-         return response.text
 
spaces/CofAI/chat.b4/g4f/Provider/Providers/Aichat.py DELETED
@@ -1,35 +0,0 @@
- import requests
- import os
- import json
- from ...typing import sha256, Dict, get_type_hints
-
- url = 'https://hteyun.com'
- model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613']
- supports_stream = True
- needs_auth = False
-
- def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs):
-     headers = {
-         'Content-Type': 'application/json',
-     }
-     data = {
-         'model': model,
-         'temperature': 0.7,
-         'presence_penalty': 0,
-         'messages': messages,
-     }
-     response = requests.post(url + '/api/chat-stream',
-                              json=data, stream=True)
-
-     if stream:
-         for chunk in response.iter_content(chunk_size=None):
-             chunk = chunk.decode('utf-8')
-             if chunk.strip():
-                 message = json.loads(chunk)['choices'][0]['message']['content']
-                 yield message
-     else:
-         message = response.json()['choices'][0]['message']['content']
-         yield message
-
- params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
-     '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
 
spaces/CosmoAI/BhagwatGeeta/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: BhagwatGeeta
- emoji: 🚀
- colorFrom: indigo
- colorTo: purple
- sdk: streamlit
- sdk_version: 1.26.0
- app_file: app.py
- pinned: false
- license: openrail
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/explicitClosingLinePen.py DELETED
@@ -1,101 +0,0 @@
- from fontTools.pens.filterPen import ContourFilterPen
-
-
- class ExplicitClosingLinePen(ContourFilterPen):
-     """A filter pen that adds an explicit lineTo to the first point of each closed
-     contour if the end point of the last segment is not already the same as the first point.
-     Otherwise, it passes the contour through unchanged.
-
-     >>> from pprint import pprint
-     >>> from fontTools.pens.recordingPen import RecordingPen
-     >>> rec = RecordingPen()
-     >>> pen = ExplicitClosingLinePen(rec)
-     >>> pen.moveTo((0, 0))
-     >>> pen.lineTo((100, 0))
-     >>> pen.lineTo((100, 100))
-     >>> pen.closePath()
-     >>> pprint(rec.value)
-     [('moveTo', ((0, 0),)),
-      ('lineTo', ((100, 0),)),
-      ('lineTo', ((100, 100),)),
-      ('lineTo', ((0, 0),)),
-      ('closePath', ())]
-     >>> rec = RecordingPen()
-     >>> pen = ExplicitClosingLinePen(rec)
-     >>> pen.moveTo((0, 0))
-     >>> pen.lineTo((100, 0))
-     >>> pen.lineTo((100, 100))
-     >>> pen.lineTo((0, 0))
-     >>> pen.closePath()
-     >>> pprint(rec.value)
-     [('moveTo', ((0, 0),)),
-      ('lineTo', ((100, 0),)),
-      ('lineTo', ((100, 100),)),
-      ('lineTo', ((0, 0),)),
-      ('closePath', ())]
-     >>> rec = RecordingPen()
-     >>> pen = ExplicitClosingLinePen(rec)
-     >>> pen.moveTo((0, 0))
-     >>> pen.curveTo((100, 0), (0, 100), (100, 100))
-     >>> pen.closePath()
-     >>> pprint(rec.value)
-     [('moveTo', ((0, 0),)),
-      ('curveTo', ((100, 0), (0, 100), (100, 100))),
-      ('lineTo', ((0, 0),)),
-      ('closePath', ())]
-     >>> rec = RecordingPen()
-     >>> pen = ExplicitClosingLinePen(rec)
-     >>> pen.moveTo((0, 0))
-     >>> pen.curveTo((100, 0), (0, 100), (100, 100))
-     >>> pen.lineTo((0, 0))
-     >>> pen.closePath()
-     >>> pprint(rec.value)
-     [('moveTo', ((0, 0),)),
-      ('curveTo', ((100, 0), (0, 100), (100, 100))),
-      ('lineTo', ((0, 0),)),
-      ('closePath', ())]
-     >>> rec = RecordingPen()
-     >>> pen = ExplicitClosingLinePen(rec)
-     >>> pen.moveTo((0, 0))
-     >>> pen.curveTo((100, 0), (0, 100), (0, 0))
-     >>> pen.closePath()
-     >>> pprint(rec.value)
-     [('moveTo', ((0, 0),)),
-      ('curveTo', ((100, 0), (0, 100), (0, 0))),
-      ('closePath', ())]
-     >>> rec = RecordingPen()
-     >>> pen = ExplicitClosingLinePen(rec)
-     >>> pen.moveTo((0, 0))
-     >>> pen.closePath()
-     >>> pprint(rec.value)
-     [('moveTo', ((0, 0),)), ('closePath', ())]
-     >>> rec = RecordingPen()
-     >>> pen = ExplicitClosingLinePen(rec)
-     >>> pen.closePath()
-     >>> pprint(rec.value)
-     [('closePath', ())]
-     >>> rec = RecordingPen()
-     >>> pen = ExplicitClosingLinePen(rec)
-     >>> pen.moveTo((0, 0))
-     >>> pen.lineTo((100, 0))
-     >>> pen.lineTo((100, 100))
-     >>> pen.endPath()
-     >>> pprint(rec.value)
-     [('moveTo', ((0, 0),)),
-      ('lineTo', ((100, 0),)),
-      ('lineTo', ((100, 100),)),
-      ('endPath', ())]
-     """
-
-     def filterContour(self, contour):
-         if (
-             not contour
-             or contour[0][0] != "moveTo"
-             or contour[-1][0] != "closePath"
-             or len(contour) < 3
-         ):
-             return
-         movePt = contour[0][1][0]
-         lastSeg = contour[-2][1]
-         if lastSeg and movePt != lastSeg[-1]:
-             contour[-1:] = [("lineTo", (movePt,)), ("closePath", ())]
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/asciiTable.py DELETED
@@ -1,20 +0,0 @@
- from fontTools.misc.textTools import strjoin, tobytes, tostr
- from . import DefaultTable
-
-
- class asciiTable(DefaultTable.DefaultTable):
-     def toXML(self, writer, ttFont):
-         data = tostr(self.data)
-         # removing null bytes. XXX needed??
-         data = data.split("\0")
-         data = strjoin(data)
-         writer.begintag("source")
-         writer.newline()
-         writer.write_noindent(data)
-         writer.newline()
-         writer.endtag("source")
-         writer.newline()
-
-     def fromXML(self, name, attrs, content, ttFont):
-         lines = strjoin(content).split("\n")
-         self.data = tobytes("\n".join(lines[1:-1]))
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/tests/abstract/get.py DELETED
@@ -1,377 +0,0 @@
1
- class AbstractGetTests:
2
- def test_get_file_to_existing_directory(
3
- self,
4
- fs,
5
- fs_join,
6
- fs_bulk_operations_scenario_0,
7
- local_fs,
8
- local_join,
9
- local_target,
10
- ):
11
- # Copy scenario 1a
12
- source = fs_bulk_operations_scenario_0
13
-
14
- target = local_target
15
- local_fs.mkdir(target)
16
- assert local_fs.isdir(target)
17
-
18
- target_file2 = local_join(target, "file2")
19
- target_subfile1 = local_join(target, "subfile1")
20
-
21
- # Copy from source directory
22
- fs.get(fs_join(source, "file2"), target)
23
- assert local_fs.isfile(target_file2)
24
-
25
- # Copy from sub directory
26
- fs.get(fs_join(source, "subdir", "subfile1"), target)
27
- assert local_fs.isfile(target_subfile1)
28
-
29
- # Remove copied files
30
- local_fs.rm([target_file2, target_subfile1])
31
- assert not local_fs.exists(target_file2)
32
- assert not local_fs.exists(target_subfile1)
33
-
34
- # Repeat with trailing slash on target
35
- fs.get(fs_join(source, "file2"), target + "/")
36
- assert local_fs.isdir(target)
37
- assert local_fs.isfile(target_file2)
38
-
39
- fs.get(fs_join(source, "subdir", "subfile1"), target + "/")
40
- assert local_fs.isfile(target_subfile1)
41
-
42
- def test_get_file_to_new_directory(
43
- self,
44
- fs,
45
- fs_join,
46
- fs_bulk_operations_scenario_0,
47
- local_fs,
48
- local_join,
49
- local_target,
50
- ):
51
- # Copy scenario 1b
52
- source = fs_bulk_operations_scenario_0
53
-
54
- target = local_target
55
- local_fs.mkdir(target)
56
-
57
- fs.get(
58
- fs_join(source, "subdir", "subfile1"), local_join(target, "newdir/")
59
- ) # Note trailing slash
60
-
61
- assert local_fs.isdir(target)
62
- assert local_fs.isdir(local_join(target, "newdir"))
63
- assert local_fs.isfile(local_join(target, "newdir", "subfile1"))
64
-
65
- def test_get_file_to_file_in_existing_directory(
66
- self,
67
- fs,
68
- fs_join,
69
- fs_path,
70
- fs_bulk_operations_scenario_0,
71
- local_fs,
72
- local_join,
73
- local_target,
74
- ):
75
- # Copy scenario 1c
76
- source = fs_bulk_operations_scenario_0
77
-
78
- target = local_target
79
- local_fs.mkdir(target)
80
-
81
- fs.get(fs_join(source, "subdir", "subfile1"), local_join(target, "newfile"))
82
- assert local_fs.isfile(local_join(target, "newfile"))
83
-
84
- def test_get_file_to_file_in_new_directory(
85
- self,
86
- fs,
87
- fs_join,
88
- fs_bulk_operations_scenario_0,
89
- local_fs,
90
- local_join,
91
- local_target,
92
- ):
93
- # Copy scenario 1d
94
- source = fs_bulk_operations_scenario_0
95
-
96
- target = local_target
97
- local_fs.mkdir(target)
98
-
99
- fs.get(
100
- fs_join(source, "subdir", "subfile1"),
101
- local_join(target, "newdir", "newfile"),
102
- )
103
- assert local_fs.isdir(local_join(target, "newdir"))
104
- assert local_fs.isfile(local_join(target, "newdir", "newfile"))
105
-
106
- def test_get_directory_to_existing_directory(
107
- self,
108
- fs,
109
- fs_join,
110
- fs_bulk_operations_scenario_0,
111
- local_fs,
112
- local_join,
113
- local_target,
114
- ):
115
- # Copy scenario 1e
116
- source = fs_bulk_operations_scenario_0
117
-
118
- target = local_target
119
- local_fs.mkdir(target)
120
-
121
- for source_slash, target_slash in zip([False, True], [False, True]):
122
- s = fs_join(source, "subdir")
123
- if source_slash:
124
- s += "/"
125
- t = target + "/" if target_slash else target
126
-
127
- # Without recursive does nothing
128
- # ERROR: erroneously creates new directory
129
- # fs.get(s, t)
130
- # assert fs.ls(target) == []
131
-
132
- # With recursive
133
- fs.get(s, t, recursive=True)
134
- if source_slash:
135
- assert local_fs.isfile(local_join(target, "subfile1"))
136
- assert local_fs.isfile(local_join(target, "subfile2"))
137
- assert local_fs.isdir(local_join(target, "nesteddir"))
138
- assert local_fs.isfile(local_join(target, "nesteddir", "nestedfile"))
139
-
140
- local_fs.rm(
141
- [
142
- local_join(target, "subfile1"),
143
- local_join(target, "subfile2"),
144
- local_join(target, "nesteddir"),
145
- ],
146
- recursive=True,
147
- )
148
- else:
149
- assert local_fs.isdir(local_join(target, "subdir"))
150
- assert local_fs.isfile(local_join(target, "subdir", "subfile1"))
151
- assert local_fs.isfile(local_join(target, "subdir", "subfile2"))
152
- assert local_fs.isdir(local_join(target, "subdir", "nesteddir"))
153
- assert local_fs.isfile(
154
- local_join(target, "subdir", "nesteddir", "nestedfile")
155
- )
156
-
157
- local_fs.rm(local_join(target, "subdir"), recursive=True)
158
- assert local_fs.ls(target) == []
159
-
160
- # Limit by maxdepth
161
- # ERROR: maxdepth ignored here
162
-
163
- def test_get_directory_to_new_directory(
164
- self,
165
- fs,
166
- fs_join,
167
- fs_bulk_operations_scenario_0,
168
- local_fs,
169
- local_join,
170
- local_target,
171
- ):
172
- # Copy scenario 1f
173
- source = fs_bulk_operations_scenario_0
174
-
175
- target = local_target
176
- local_fs.mkdir(target)
177
-
178
- for source_slash, target_slash in zip([False, True], [False, True]):
179
- s = fs_join(source, "subdir")
180
- if source_slash:
181
- s += "/"
182
- t = local_join(target, "newdir")
183
- if target_slash:
184
- t += "/"
185
-
186
- # Without recursive does nothing
187
- # ERROR: erroneously creates new directory
188
- # fs.get(s, t)
189
- # assert fs.ls(target) == []
190
-
191
- # With recursive
192
- fs.get(s, t, recursive=True)
193
- assert local_fs.isdir(local_join(target, "newdir"))
194
- assert local_fs.isfile(local_join(target, "newdir", "subfile1"))
195
- assert local_fs.isfile(local_join(target, "newdir", "subfile2"))
196
- assert local_fs.isdir(local_join(target, "newdir", "nesteddir"))
197
-            assert local_fs.isfile(
-                local_join(target, "newdir", "nesteddir", "nestedfile")
-            )
-
-            local_fs.rm(local_join(target, "newdir"), recursive=True)
-            assert local_fs.ls(target) == []
-
-            # Limit by maxdepth
-            # ERROR: maxdepth ignored here
-
-    def test_get_glob_to_existing_directory(
-        self,
-        fs,
-        fs_join,
-        fs_bulk_operations_scenario_0,
-        local_fs,
-        local_join,
-        local_target,
-    ):
-        # Copy scenario 1g
-        source = fs_bulk_operations_scenario_0
-
-        target = local_target
-        local_fs.mkdir(target)
-
-        # for target_slash in [False, True]:
-        for target_slash in [False]:
-            t = target + "/" if target_slash else target
-
-            # Without recursive
-            fs.get(fs_join(source, "subdir", "*"), t)
-            assert local_fs.isfile(local_join(target, "subfile1"))
-            assert local_fs.isfile(local_join(target, "subfile2"))
-            # assert not local_fs.isdir(local_join(target, "nesteddir"))  # ERROR
-            assert not local_fs.isdir(local_join(target, "subdir"))
-
-            # With recursive
-
-            # Limit by maxdepth
-
-    def test_get_glob_to_new_directory(
-        self,
-        fs,
-        fs_join,
-        fs_bulk_operations_scenario_0,
-        local_fs,
-        local_join,
-        local_target,
-    ):
-        # Copy scenario 1h
-        source = fs_bulk_operations_scenario_0
-
-        target = local_target
-        local_fs.mkdir(target)
-
-        for target_slash in [False, True]:
-            t = fs_join(target, "newdir")
-            if target_slash:
-                t += "/"
-
-            # Without recursive
-            fs.get(fs_join(source, "subdir", "*"), t)
-            assert local_fs.isdir(local_join(target, "newdir"))
-            assert local_fs.isfile(local_join(target, "newdir", "subfile1"))
-            assert local_fs.isfile(local_join(target, "newdir", "subfile2"))
-            # ERROR - do not copy empty directory
-            # assert not local_fs.exists(local_join(target, "newdir", "nesteddir"))
-
-            local_fs.rm(local_join(target, "newdir"), recursive=True)
-            assert local_fs.ls(target) == []
-
-            # With recursive
-            fs.get(fs_join(source, "subdir", "*"), t, recursive=True)
-            assert local_fs.isdir(local_join(target, "newdir"))
-            assert local_fs.isfile(local_join(target, "newdir", "subfile1"))
-            assert local_fs.isfile(local_join(target, "newdir", "subfile2"))
-            assert local_fs.isdir(local_join(target, "newdir", "nesteddir"))
-            assert local_fs.isfile(
-                local_join(target, "newdir", "nesteddir", "nestedfile")
-            )
-
-            local_fs.rm(local_join(target, "newdir"), recursive=True)
-            assert local_fs.ls(target) == []
-
-            # Limit by maxdepth
-            # ERROR: this is not correct
-
-    def test_get_list_of_files_to_existing_directory(
-        self,
-        fs,
-        fs_join,
-        fs_bulk_operations_scenario_0,
-        local_fs,
-        local_join,
-        local_target,
-    ):
-        # Copy scenario 2a
-        source = fs_bulk_operations_scenario_0
-
-        target = local_target
-        local_fs.mkdir(target)
-
-        source_files = [
-            fs_join(source, "file1"),
-            fs_join(source, "file2"),
-            fs_join(source, "subdir", "subfile1"),
-        ]
-
-        for target_slash in [False, True]:
-            t = target + "/" if target_slash else target
-
-            fs.get(source_files, t)
-            assert local_fs.isfile(local_join(target, "file1"))
-            assert local_fs.isfile(local_join(target, "file2"))
-            assert local_fs.isfile(local_join(target, "subfile1"))
-
-            local_fs.rm(local_fs.find(target))
-            assert local_fs.ls(target) == []
-
-    def test_get_list_of_files_to_new_directory(
-        self,
-        fs,
-        fs_join,
-        fs_bulk_operations_scenario_0,
-        local_fs,
-        local_join,
-        local_target,
-    ):
-        # Copy scenario 2b
-        source = fs_bulk_operations_scenario_0
-
-        target = local_target
-        local_fs.mkdir(target)
-
-        source_files = [
-            fs_join(source, "file1"),
-            fs_join(source, "file2"),
-            fs_join(source, "subdir", "subfile1"),
-        ]
-
-        fs.get(source_files, local_join(target, "newdir") + "/")  # Note trailing slash
-        assert local_fs.isdir(local_join(target, "newdir"))
-        assert local_fs.isfile(local_join(target, "newdir", "file1"))
-        assert local_fs.isfile(local_join(target, "newdir", "file2"))
-        assert local_fs.isfile(local_join(target, "newdir", "subfile1"))
-
-    def test_get_directory_recursive(
-        self, fs, fs_join, fs_path, local_fs, local_join, local_target
-    ):
-        # https://github.com/fsspec/filesystem_spec/issues/1062
-        # Recursive cp/get/put of source directory into non-existent target directory.
-        src = fs_join(fs_path, "src")
-        src_file = fs_join(src, "file")
-        fs.mkdir(src)
-        fs.touch(src_file)
-
-        target = local_target
-
-        # get without slash
-        assert not local_fs.exists(target)
-        for loop in range(2):
-            fs.get(src, target, recursive=True)
-            assert local_fs.isdir(target)
-
-            if loop == 0:
-                assert local_fs.isfile(local_join(target, "file"))
-                assert not local_fs.exists(local_join(target, "src"))
-            else:
-                assert local_fs.isfile(local_join(target, "file"))
-                assert local_fs.isdir(local_join(target, "src"))
-                assert local_fs.isfile(local_join(target, "src", "file"))
-
-        local_fs.rm(target, recursive=True)
-
-        # get with slash
-        assert not local_fs.exists(target)
-        for loop in range(2):
-            fs.get(src + "/", target, recursive=True)
-            assert local_fs.isdir(target)
-            assert local_fs.isfile(local_join(target, "file"))
-            assert not local_fs.exists(local_join(target, "src"))
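The deleted tests above follow the fixture conventions of an fsspec-style abstract suite for bulk `get` (the `fs`, `fs_join`, `local_fs`, and scenario fixtures are supplied by that suite). For readers who want the behaviour being asserted without the fixture machinery, here is a minimal, self-contained sketch of the same `get` call patterns run purely against the local filesystem; the temporary paths and directory layout are illustrative assumptions, not part of the deleted code.

```python
# Minimal sketch of the fsspec ``get`` semantics exercised by the deleted tests.
# Everything runs against the local filesystem; paths are illustrative only.
import os
import tempfile

import fsspec

fs = fsspec.filesystem("file")

# Build a small source tree: file1, subdir/subfile1, subdir/nesteddir/nestedfile
src = tempfile.mkdtemp()
os.makedirs(os.path.join(src, "subdir", "nesteddir"))
for name in ["file1",
             os.path.join("subdir", "subfile1"),
             os.path.join("subdir", "nesteddir", "nestedfile")]:
    fs.touch(os.path.join(src, name))

target = tempfile.mkdtemp()

# Glob without recursive=True: only files directly under subdir are copied.
fs.get(os.path.join(src, "subdir", "*"), target + "/")
assert fs.isfile(os.path.join(target, "subfile1"))

# Glob with recursive=True: nested directories come along as well.
fs.get(os.path.join(src, "subdir", "*"), target + "/", recursive=True)
assert fs.isfile(os.path.join(target, "nesteddir", "nestedfile"))

# A list of source files is flattened into the target directory.
fs.get([os.path.join(src, "file1"),
        os.path.join(src, "subdir", "subfile1")], target + "/")
assert fs.isfile(os.path.join(target, "file1"))
```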
spaces/DReAMy-lib/dream/app.py DELETED
@@ -1,118 +0,0 @@
-import gradio as gr
-import pandas as pd
-
-languages = pd.read_csv("model_lang.csv", names=["Lang_acr"])
-
-
-def check_lang(lang_acronym):
-    if lang_acronym in languages["Lang_acr"].to_list():
-        return "True"
-    else:
-        return "False"
-
-title = "DReAM"
-
-description_main = """
-This space allows you to test a set of LLMs tuned to perform different tasks over dream reports.
-Three main tasks are available:
-
-- Named Entity Recognition (NER), with an English-only model that generates the identified characters.
-
-- Sentiment Analysis (SA), with two English-only models (one for multi-label classification and one for generation) and a large multilingual model for multi-label classification.
-
-- Relation Extraction (RE), with an English-only model that identifies relevant characters and the relations between them, following the Activity feature of the Hall and Van de Castle framework.
-
-All models have been tuned on the Hall and Van de Castle framework. More details are on the page for each model. For more on the training framework, see the [Bertolini et al., 2023](https://arxiv.org/pdf/2302.14828.pdf) preprint.
-
-Use the current interface to check whether a language is included in the multilingual SA model, using language acronyms (e.g. it for Italian). The tabs above will direct you to each model to query.
-
-If you want to use the models outside the space, you can easily do so via [DReAMy](https://github.com/lorenzoscottb/DReAMy).
-"""
-
-description_L = """
-This model is a tuned XLM-R model, pre-trained on 94 languages and tuned on emotion-annotated DreamBank English data (see the original model [card](https://huggingface.co/xlm-roberta-large) for the available languages).
-"""
-
-description_S = """
-A BERT-base-cased model pre-trained on English-only text and tuned on annotated DreamBank English data.
-"""
-
-description_G = """
-A T5 model tuned to perform text generation and predict emotions as well as the characters experiencing them.
-"""
-
-description_R = """
-A T5 model tuned to perform text generation and predict the characters and the (activity) relations between them.
-"""
-
-description_GNER = """
-A T5 model tuned to perform text generation and predict which characters are present in the report. Note that, in the Hall and Van de Castle framework, the character list never includes the dreamer. Hence, if you (willingly or not) enter a report that does not contain a reference to another character, the model will (correctly) produce an empty string. Moreover, the produced list of characters may be longer than the one produced by the SA model, as not all characters are associated with emotions.
-"""
-
-example_main = ["en", "it", "pl"]
-
-examples = [
-    ["I was followed by the blue monster but was not scared. I was calm and relaxed."],
-    ["Ero seguito dal mostro blu, ma non ero spaventato. Ero calmo e rilassato."],
-    ["Śledził mnie niebieski potwór, ale się nie bałem. Byłem spokojny i zrelaksowany."],
-]
-
-examples_g = [
-    ["I'm in an auditorium. Susie S is concerned at her part in this disability awareness spoof we are preparing. I ask, 'Why not do it? Lots of AB's represent us in a patronizing way. Why shouldn't we represent ourselves in a good, funny way?' I watch the video we all made. It is funny. I try to sit on a folding chair. Some guy in front talks to me. Merle is in the audience somewhere. [BL]"],
-]
-
-examples_re = [
-    ["I was skating on the outdoor ice pond that used to be across the street from my house. I was not alone, but I did not recognize any of the other people who were skating around. I went through my whole repertoire of jumps, spires, and steps-some of which I can do and some of which I'm not yet sure of. They were all executed flawlessly-some I repeated, some I did only once. I seemed to know that if I went into competition, I would be sure of coming in third because there were only three contestants. Up to that time I hadn't considered it because I hadn't thought I was good enough, but now since everything was going so well, I decided to enter."],
-    ["I was talking on the telephone to the father of an old friend of mine (boy, 21 years old). We were discussing the party the Saturday night before to which I had invited his son as a guest. I asked him if his son had a good time at the party. He told me not to tell his son that he had told me, but that he had had a good time, except he was a little surprised that I had acted the way I did."],
-    ["I was walking alone with my dog in a forest."],
-]
-
-interface_words = gr.Interface(
-    fn=check_lang,
-    inputs="text",
-    outputs="text",
-    title=title,
-    description=description_main,
-    examples=example_main,
-)
-
-interface_model_L = gr.Interface.load(
-    name="models/DReAMy-lib/xlm-roberta-large-DreamBank-emotion-presence",
-    description=description_L,
-    examples=examples,
-    title="SA Large Multilingual",
-)
-
-interface_model_S = gr.Interface.load(
-    name="models/DReAMy-lib/bert-base-cased-DreamBank-emotion-presence",
-    description=description_S,
-    examples=examples[0],
-    title="SA Base English-Only",
-)
-
-interface_model_G = gr.Interface.load(
-    name="models/DReAMy-lib/t5-base-DreamBank-Generation-Emot-Char",
-    description=description_G,
-    examples=examples_g,
-    title="SA Generation",
-)
-
-interface_model_RE = gr.Interface.load(
-    name="models/DReAMy-lib/t5-base-DreamBank-Generation-Act-Char",
-    description=description_R,
-    examples=examples_re,
-    title="RE Generation",
-)
-
-interface_model_NER = gr.Interface.load(
-    name="models/DReAMy-lib/t5-base-DreamBank-Generation-NER-Char",
-    description=description_GNER,
-    examples=examples_g,
-    title="NER Generation",
-)
-
-gr.TabbedInterface(
-    [interface_words, interface_model_NER, interface_model_L, interface_model_S, interface_model_G, interface_model_RE],
-    ["Main", "NER Generation", "SA Large Multilingual", "SA Base En", "SA En Generation", "RE Generation"]
-).launch()
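The deleted `app.py` above builds one `gr.Interface` per hosted model and groups them with `gr.TabbedInterface`. As a reference for anyone recreating the space, a stripped-down sketch of the same pattern is shown below. It reuses only two of the model IDs named in the diff; the titles and tab names are illustrative, it assumes those checkpoints are still hosted on the Hub, and recent Gradio releases expose this loading path as `gr.load` rather than `gr.Interface.load`.

```python
# Minimal sketch of the tabbed Gradio layout used by the deleted app.
# Model IDs come from the diff above; titles and tab names are illustrative.
import gradio as gr

sa_base_en = gr.Interface.load(
    name="models/DReAMy-lib/bert-base-cased-DreamBank-emotion-presence",
    title="SA Base English-Only",
)

ner_generation = gr.Interface.load(
    name="models/DReAMy-lib/t5-base-DreamBank-Generation-NER-Char",
    title="NER Generation",
)

gr.TabbedInterface(
    [sa_base_en, ner_generation],
    ["SA Base En", "NER Generation"],
).launch()
```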