parquet-converter committed
Commit 6797e3e · 1 Parent(s): 2ae63ac

Update parquet files (step 35 of 397)

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. spaces/101-5/gpt4free/g4f/.v1/gui/README.md +0 -78
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Blaupunkt TravelPilot DX 2013 - 2014 The Best Navigation System for Europe[1].md +0 -145
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/El Silabario Salvadoreno Pdf Download La obra ilustrada que ensea el habla checa y otras.md +0 -98
  4. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Unlimited Vpn For Windows 10 Crack.md +0 -38
  5. spaces/1gistliPinn/ChatGPT4/Examples/Aramcoapprovedvendorlist.md +0 -6
  6. spaces/1gistliPinn/ChatGPT4/Examples/Audi Navigation Plus Rns D Bg Map Download [WORK].md +0 -7
  7. spaces/1gistliPinn/ChatGPT4/Examples/DownloadEbookFisikaDasarTipler [WORK].md +0 -33
  8. spaces/1line/AutoGPT/autogpt/memory/__init__.py +0 -99
  9. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Agar.io Mod Macro Download Enhance Your Gameplay with Agar Tool M PELEA.md +0 -146
  10. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CarX Drift Racing 2 How to Master the Art of Tandem Drifting.md +0 -87
  11. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Carros Rebaixados Online A game that lets you change the color wheels and glass of your car.md +0 -158
  12. spaces/1phancelerku/anime-remove-background/509-e - Saudades Mil (A Carta) 1999 Letra e Download Grtis.md +0 -156
  13. spaces/1phancelerku/anime-remove-background/Ada Ehi - The Final Say Download Mp3 and Lyrics.md +0 -134
  14. spaces/1phancelerku/anime-remove-background/Download MusicHQ.net The Ultimate Source for Full HD Movies and TV Series Online.md +0 -216
  15. spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_all_in_one.py +0 -1294
  16. spaces/7hao/bingo/src/lib/isomorphic/browser.ts +0 -11
  17. spaces/A00001/bingothoo/src/components/tailwind-indicator.tsx +0 -14
  18. spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/models_onnx.py +0 -819
  19. spaces/ASJMO/freegpt/g4f/Provider/Providers/helpers/you.py +0 -79
  20. spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet101_8xb32_in1k.py +0 -4
  21. spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32-mixup_in1k.py +0 -5
  22. spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/server/abortedGenerations.ts +0 -29
  23. spaces/Adapter/CoAdapter/ldm/modules/diffusionmodules/util.py +0 -270
  24. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/clock.d.ts +0 -2
  25. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/line.d.ts +0 -2
  26. spaces/AiMimicry/sovits-models/inference/infer_tool_grad.py +0 -160
  27. spaces/Alpaca233/ChatPDF-GUI/README.md +0 -8
  28. spaces/Amrrs/DragGan-Inversion/stylegan_human/training_scripts/sg3/training/dataset.py +0 -274
  29. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/ddim/__init__.py +0 -0
  30. spaces/Andy1621/uniformer_image_detection/configs/paa/paa_r50_fpn_mstrain_3x_coco.py +0 -20
  31. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/misc.py +0 -377
  32. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/video/processing.py +0 -160
  33. spaces/Benson/text-generation/Examples/3gp Video Download.md +0 -210
  34. spaces/Benson/text-generation/Examples/Apkadmin Entre Nosotros Men Mod.md +0 -105
  35. spaces/Benson/text-generation/Examples/Bubble Shooter 3 Descarga Gratuita.md +0 -47
  36. spaces/Benson/text-generation/Examples/Descargar Entre Nosotros Blackpink.md +0 -106
  37. spaces/BetterAPI/BetterChat/src/routes/settings/+server.ts +0 -34
  38. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/enums.py +0 -85
  39. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_windows_renderer.py +0 -56
  40. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/console.py +0 -0
  41. spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/tomli/__init__.py +0 -11
  42. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/dataset.py +0 -49
  43. spaces/CVPR/WALT/mmdet/models/dense_heads/paa_head.py +0 -671
  44. spaces/CVPR/regionclip-demo/detectron2/data/datasets/clip_prompt_utils.py +0 -441
  45. spaces/CVPR/regionclip-demo/detectron2/data/transforms/transform.py +0 -351
  46. spaces/CVPR/regionclip-demo/detectron2/modeling/poolers.py +0 -250
  47. spaces/CVPR/unicl-zero-shot-img-recog/README.md +0 -13
  48. spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/datasets/transforms.py +0 -311
  49. spaces/CarlDennis/Lovelive-VITS-JPZH/transforms.py +0 -193
  50. spaces/CikeyQI/meme-api/Dockerfile +0 -51
spaces/101-5/gpt4free/g4f/.v1/gui/README.md DELETED
@@ -1,78 +0,0 @@
1
- # gpt4free gui
2
-
3
- This code provides a Graphical User Interface (GUI) for gpt4free. Users can ask questions and get answers from GPT-4 APIs, utilizing multiple API implementations. The project contains two different Streamlit applications: `streamlit_app.py` and `streamlit_chat_app.py`.
4
-
5
- In addition, a new GUI script specifically implemented using PyWebIO has been added and can be found in the pywebio-gui folder. If there are errors with the Streamlit version, you can try using the PyWebIO version instead.
6
-
7
- Installation
8
- ------------
9
-
10
- 1. Clone the repository.
11
- 2. Install the required dependencies with: `pip install -r requirements.txt`.
12
- 3. To use `streamlit_chat_app.py`, note that it depends on a pull request (PR #24) from the https://github.com/AI-Yash/st-chat/ repository, which may change in the future. The current dependency library can be found at https://github.com/AI-Yash/st-chat/archive/refs/pull/24/head.zip.
13
-
14
- Analytics Disclaimer
15
- -----
16
- The Streamlit browser app collects heavy analytics even when running locally. This includes events for every page load and form submission, including metadata on queries (such as their length), plus browser and client information such as host IPs. These are all transmitted to a third-party analytics provider, Segment.com.
17
-
18
- Usage
19
- -----
20
-
21
- Choose one of the Streamlit applications to run:
22
-
23
- ### streamlit\_app.py
24
-
25
- This application provides a simple interface for asking GPT-4 questions and receiving answers.
26
-
27
- To run the application:
28
-
29
- run:
30
- ```bash
31
- streamlit run gui/streamlit_app.py
32
- ```
33
- <br>
34
-
35
- <img width="724" alt="image" src="https://user-images.githubusercontent.com/98614666/234232449-0d5cd092-a29d-4759-8197-e00ba712cb1a.png">
36
-
37
- <br>
38
- <br>
39
-
40
- preview:
41
-
42
- <img width="1125" alt="image" src="https://user-images.githubusercontent.com/98614666/234232398-09e9d3c5-08e6-4b8a-b4f2-0666e9790c7d.png">
43
-
44
-
45
- ### streamlit\_chat\_app.py
46
-
47
- This application provides a chat-like interface for asking GPT-4 questions and receiving answers. It supports multiple query methods, and users can select the desired API for their queries. The application also maintains a conversation history.
48
-
49
- To run the application:
50
-
51
- ```bash
52
- streamlit run streamlit_chat_app.py
53
- ```
54
-
55
- <br>
56
-
57
- <img width="724" alt="image" src="image1.png">
58
-
59
- <br>
60
- <br>
61
-
62
- preview:
63
-
64
- <img width="1125" alt="image" src="image2.png">
65
-
66
- Contributing
67
- ------------
68
-
69
- Feel free to submit pull requests, report bugs, or request new features by opening issues on the GitHub repository.
70
-
71
- Bug
72
- ----
73
- There is a bug in `streamlit_chat_app.py` right now that I haven't pinpointed yet; it is probably really simple, but I haven't had the time to look for it. Whenever you open a new conversation or access an old conversation, it only starts prompt-answering after the second time you submit something to the text input. Other than that, everything else seems to work as expected.
74
-
75
- License
76
- -------
77
-
78
- This project is licensed under the MIT License.
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Blaupunkt TravelPilot DX 2013 - 2014 The Best Navigation System for Europe[1].md DELETED
@@ -1,145 +0,0 @@
1
- <br />
2
- <h1>Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013</h1>
3
- <p>If you are looking for a reliable and convenient navigation system for your car, you might want to check out Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013. This is a digital map that covers all the countries in Europe and provides you with accurate and up-to-date information on roads, traffic, landmarks, and more. In this article, we will tell you everything you need to know about this navigation system, including how to download and install it, how to use it, what are its advantages and disadvantages, and how it compares with other navigation systems. Let's get started!</p>
4
- <h2>How to Download and Install Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013</h2>
5
- <p>The first step to use this navigation system is to download the torrent file from a reliable source. You can find many websites that offer this file for free or for a small fee. However, make sure that you choose a reputable and secure site that does not contain any viruses or malware. You can use a torrent client software such as uTorrent or BitTorrent to download the file.</p>
6
- <h2>Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013</h2><br /><p><b><b>DOWNLOAD</b> >> <a href="https://byltly.com/2uKyqE">https://byltly.com/2uKyqE</a></b></p><br /><br />
7
- <p>Once you have downloaded the torrent file, you need to extract the files and copy them to an SD card. You can use a software such as WinRAR or 7-Zip to unzip the files. You should see a folder named "TeleAtlas" that contains several subfolders and files. Copy this folder to your SD card. Make sure that your SD card has enough space (at least 4 GB) and is formatted in FAT32.</p>
8
- <p>The next step is to insert the SD card into your Blaupunkt Dx device and update the navigation software. To do this, you need to turn on your device and go to the main menu. Then, select "Settings" and then "System Update". The device will detect the SD card and ask you if you want to update. Confirm by pressing "Yes". The update process may take several minutes, so do not turn off your device or remove the SD card until it is finished. When it is done, you will see a message that says "Update Successful". Congratulations! You have successfully installed Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013.</p>
9
- <p>Download Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Torrent<br />
10
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Iso File<br />
11
- How to Install Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 on Car<br />
12
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Free Download<br />
13
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Crack<br />
14
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Update<br />
15
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Review<br />
16
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Compatibility<br />
17
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Map Coverage<br />
18
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Serial Number<br />
19
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Activation Code<br />
20
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 User Manual<br />
21
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Features<br />
22
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Price<br />
23
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Discount<br />
24
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Online Purchase<br />
25
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Delivery Time<br />
26
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Warranty<br />
27
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Customer Service<br />
28
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Troubleshooting<br />
29
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Error Codes<br />
30
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Software Version<br />
31
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Hardware Requirements<br />
32
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 System Settings<br />
33
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Voice Control<br />
34
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Speed Camera Alerts<br />
35
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Traffic Information<br />
36
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Route Planning<br />
37
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Destination Input<br />
38
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Point of Interest Search<br />
39
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Favorites Management<br />
40
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Map Display Options<br />
41
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Sound Settings<br />
42
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Language Settings<br />
43
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Time Settings<br />
44
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Units Settings<br />
45
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Security Settings<br />
46
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Backup and Restore<br />
47
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Factory Reset<br />
48
- Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 Test Mode<br />
49
- Compare Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 with Other Models<br />
50
- Benefits of Using Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 <br />
51
- Drawbacks of Using Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 <br />
52
- Alternatives to Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 <br />
53
- Tips and Tricks for Using Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 <br />
54
- Frequently Asked Questions about Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 <br />
55
- Customer Reviews of Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 <br />
56
- Expert Opinions on Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 <br />
57
- Blog Posts about Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 <br />
58
- Videos about Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 </p>
59
- <h2>How to Use Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013</h2>
60
- <p>Now that you have installed this navigation system, you can start using it right away. To access the main menu, press the "Menu" button on your device. You will see several options, such as "Navigation", "Media", "Phone", etc. Select "Navigation" to enter the map mode.</p>
61
- <p>In the map mode, you can select your desired destination by using one of these methods:</p>
62
- <ul>
63
- <li>Enter an address: Press the "Address" button and type in an address or a postcode using the keyboard on the screen. You can also select a country, a city, a street name, or a house number from a list.</li>
64
- <li>Select a point of interest: Press the "POI" button and choose a category such as "Gas Stations", "Restaurants", "Hotels", etc. You can also search for a specific name or keyword using the keyboard on the screen.</li>
65
- <li>Select a location from history: Press the "History" button and choose a location that you have previously entered or visited.</li>
66
- <li>Select a location from favorites: Press the "Favorites" button and choose a location that you have saved as a favorite.</li>
67
- <li>Select a location from coordinates: Press the "Coordinates" button and enter the latitude and longitude values using the keyboard on the screen.</li>
68
- </ul>
69
- <p>Once you have selected your destination, press the "Start" button to begin navigation. The device will calculate the best route for you based on your current location and preferences. You can also change your preferences by pressing the "Settings" button on your device. You can adjust things such as:</p>
70
- <ul>
71
- <li>Route type: Choose between fastest, shortest, economical, or easy routes.</li>
72
- <li>Avoidances: Choose whether to avoid toll roads, highways, ferries, unpaved roads, etc.</li>
73
- <li>Voice guidance: Choose whether to enable or disable voice guidance and select a language and a volume level.</li>
74
- <li>Map view: Choose between 2D or 3D view and select a day or night mode.</li>
75
- <li>Map details: Choose whether to display points of interest, traffic information, speed limits, etc.</li>
76
- </ul>
77
- <p>While navigating, you can follow the voice guidance and visual cues on your screen. The device will tell you when to turn left or right, when to enter or exit a highway, when to change lanes, etc. You can also see information such as distance remaining, time remaining, speed limit, current speed, etc. on your screen.</p>
78
- <p>If you encounter any traffic jams, road closures, or other hazards along your route, the device will alert you and suggest an alternative route if available. You can also press the "Traffic" button on your device to see more details about traffic conditions in your area. You can also press the "Detour" button if you want to manually change your route.</p>
79
- <h2>Advantages of Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013</h2>
80
- <p>Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 is one of the best navigation systems for drivers in Europe because it offers many advantages, such as:</p>
81
- <ul>
82
- <li>Accurate and up-to-date maps: This navigation system provides you with detailed and updated maps of all European countries. It covers more than 10 million kilometers of roads and more than 5 million points of interest. It also includes information about speed limits, lane guidance, junction views, cross-border planning, etc.</li>
83
- <li>Various points of interest: This navigation system offers you various points of interest, such as gas stations, hotels, museums, parks, etc. You can easily find and navigate to any place you want using the POI search function. You can also see ratings and reviews of some places from other users.</li>
84
- <li>Enhanced driving experience and safety: This navigation system enhances your driving experience and safety by providing you with real-time information on traffic, weather, speed cameras, etc. You can avoid delays and hazards and drive more smoothly and confidently. You can also use the hands-free function to make or receive calls using your device.</li>
85
- </ul>
86
- <h2>Disadvantages of Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013</h2>
87
- <p>However, Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 also has some disadvantages that you should be aware of, such as:</p>
88
- <ul>
89
- <li>Compatibility issues: This navigation system may not be compatible with some older models of Blaupunkt Dx devices. You should check the compatibility list before downloading and installing it. You may also need to update your device's firmware to make it work properly.</li>
90
- <li>Internet connection requirement: This navigation system may require a high-speed internet connection to download and update. The torrent file is about 3.5 GB in size, so it may take a long time to download depending on your connection speed. You may also incur additional data charges if you use a mobile network.</li>
91
- <li>Potential errors or glitches: This navigation system may have some errors or glitches in some areas or routes. For example, some roads or POIs may be missing or outdated, some voice commands or directions may be incorrect or unclear, some features or functions may not work properly, etc. You should always use this navigation system with caution and common sense.</li>
92
- </ul>
93
- <h2>Comparison with Other Navigation Systems</h2>
94
- <p>Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 is not the only navigation system available for drivers in Europe. There are other popular navigation systems, such as Garmin, TomTom, etc. How does Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 compare with them? Here is a table that summarizes the pros and cons of each system:</p>
95
- <table>
96
- <tr>
97
- <th>System</th>
98
- <th>Pros</th>
99
- <th>Cons</th>
100
- </tr>
101
- <tr>
102
- <td>Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013</td>
103
- <td>- Accurate and up-to-date maps of Europe<br>- Various points of interest<br>- Enhanced driving experience and safety<br>- Free or low-cost download</td>
104
- <td>- Compatibility issues with some devices<br>- Internet connection requirement<br>- Potential errors or glitches</td>
105
- </tr>
106
- <tr>
107
- <td>Garmin</td>
108
- <td>- High-quality maps of Europe<br>- Advanced features such as lane assist, junction view, photoReal, etc.<br>- Lifetime map updates<br>- Compatible with most devices</td>
109
- <td>- Expensive purchase<br>- Limited points of interest<br>- Slow performance and updates<br>- Occasional errors or glitches</td>
110
- </tr>
111
- <tr>
112
- <td>TomTom</td>
113
- <td>- Detailed maps of Europe<br>- Innovative features such as IQ Routes, HD Traffic, Map Share, etc.<br>- Regular map updates<br>- User-friendly interface and design</td>
114
- <td>- Costly purchase and subscription<br>- Compatibility issues with some devices<br>- Privacy concerns<br>- Frequent errors or glitches</td>
115
- </tr>
116
- </table>
117
- <h2>Conclusion</h2>
118
- <p>In conclusion, Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 is a great navigation system for drivers in Europe who want accurate and up-to-date maps, various points of interest, and enhanced driving experience and safety. It is also free or low-cost to download and easy to install and use. However, it also has some drawbacks, such as compatibility issues with some devices, internet connection requirement, and potential errors or glitches. Therefore, you should weigh the pros and cons carefully before deciding whether to use this navigation system or not.</p>
119
- <p>If you decide to use Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013, here are some tips and recommendations for getting the most out of it:</p>
120
- <ul>
121
- <li>Check the compatibility list before downloading and installing it.</li>
122
- <li>Use a reliable and secure source to download the torrent file.</li>
123
- <li>Use a high-speed internet connection to download and update the file.</li>
124
- <li>Use a software such as WinRAR or 7-Zip to extract the files.</li>
125
- <li>Use an SD card with enough space (at least 4 GB) and formatted in FAT32.</li>
126
- <li>Update your device's firmware if necessary.</li>
127
- <li>Adjust your settings and preferences according to your needs.</li>
128
- <li>Use the hands-free function to make or receive calls safely.</li>
129
- <li>Follow the voice guidance and visual cues carefully.</li>
130
- <li>Avoid traffic jams, road closures, and other hazards using the real-time information.</li>
131
- <li>Use caution and common sense when using this navigation system.</li>
132
- <li>Share your feedback with other users.</li>
133
- </ul>
134
- <p>We hope that this article has helped you understand more about Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 and how to use it effectively. If you have any questions or comments, please feel free to contact us. We would love to hear from you!</p>
135
- <h2>FAQs</h2>
136
- <p>Here are some frequently asked questions about Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013:</p>
137
- <ol>
138
- <li><b>What is Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013?</b><br>Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 is a digital map that covers all the countries in Europe and provides you with accurate and up-to-date information on roads, traffic, landmarks, and more. It is compatible with most models of Blaupunkt Dx devices.</li>
139
- <li><b>How do I download and install Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013?</b><br>You need to download the torrent file from a reliable source using a torrent client software such as uTorrent or BitTorrent. Then, you need to extract the files using a software such as WinRAR or 7-Zip and copy them to an SD card formatted in FAT32. Finally, you need to insert the SD card into your device and update the navigation software from the main menu.</li>
140
- <li><b>How do I use Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013?</b><br>You need to access the main menu on your device and select "Navigation" to enter the map mode. Then, you need to select your destination by entering an address, selecting a point of interest, choosing a location from history or favorites, or entering coordinates. Then, you need to press "Start" to begin navigation. You can follow the voice guidance and visual cues on your screen and adjust your settings and preferences as needed. You can also avoid traffic jams, road closures, and other hazards using the real-time information.</li>
141
- <li><b>What are the advantages of Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013?</b><br>Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 offers many advantages, such as accurate and up-to-date maps of Europe, various points of interest, enhanced driving experience and safety, free or low-cost download, etc.</li>
142
- <li><b>What are the disadvantages of Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013?</b><br>Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 also has some disadvantages, such as compatibility issues with some devices, internet connection requirement, potential errors or glitches, etc.</li>
143
- </p> 0a6ba089eb<br />
144
- <br />
145
- <br />
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/El Silabario Salvadoreno Pdf Download La obra ilustrada que ensea el habla checa y otras.md DELETED
@@ -1,98 +0,0 @@
1
- <br />
2
- <h1>El Silabario Salvadoreño: A Classic Book for Learning to Read and Write in Spanish</h1>
3
- <p>If you are looking for a simple, effective, and fun way to learn Spanish, you might want to check out El Silabario Salvadoreño. This is a classic book that has been used by generations of children and adults in El Salvador and other Latin American countries to learn the basics of reading and writing in Spanish. In this article, we will tell you everything you need to know about El Silabario Salvadoreño, including its history, content, benefits, and tips for using it. Whether you are a beginner or an intermediate Spanish learner, you will find this book useful and enjoyable.</p>
4
- <h2>History of El Silabario Salvadoreño</h2>
5
- <p>El Silabario Salvadoreño was created by Adrián Dufflocq Galdames, a Chilean educator who dedicated his life to teaching literacy. He developed a phonetic-sensorial-objective-synthetic method that aimed to teach reading and writing through sounds, images, objects, and words. He published his first silabario (syllabary) in 1894, which was later adapted and improved by other authors. His silabarios were widely used in Chile and other Latin American countries throughout the 20th century.</p>
6
- <h2>El Silabario Salvadoreno Pdf Download</h2><br /><p><b><b>DOWNLOAD</b> &#11088; <a href="https://byltly.com/2uKwQ9">https://byltly.com/2uKwQ9</a></b></p><br /><br />
7
- <p>One of the most popular versions of his silabarios was El Silabario Salvadoreño, which was published in 1960 by Editorial Dufflocq. This version was specially designed for El Salvador, taking into account its culture, geography, history, and vocabulary. It was also updated with new illustrations, exercises, and texts. El Silabario Salvadoreño became a staple in many Salvadoran schools and homes, helping millions of people learn to read and write in Spanish.</p>
8
- <h2>Content of El Silabario Salvadoreño</h2>
9
- <p>El Silabario Salvadoreño consists of 84 pages that cover the Spanish alphabet and syllables. Each page has a letter or a syllable at the top, followed by a word that starts with that letter or syllable, an image that represents that word, a sentence that uses that word, and some exercises that reinforce the learning. For example, the page for the letter A has the word "árbol" (tree), an image of a tree, the sentence "El árbol es verde" (The tree is green), and some exercises that ask the reader to identify the letter A in different words.</p>
10
- <p>The book follows a logical progression from simple to complex sounds and words. It starts with the vowels (A, E, I, O, U), then moves on to consonants (B, C, D, F, G, H...), then to syllables (BA, BE, BI...), then to words (BANCO, BELLO...), then to sentences (EL BANCO ES DE MADERA...). The book also introduces some special sounds (CH, LL...) and some diacritical marks (Á...). By the end of the book, the reader should be able to read and write simple texts in Spanish.</p>
11
- <p>El Silabario Salvadoreno Pdf Free Download<br />
12
- How to Download El Silabario Salvadoreno Pdf<br />
13
- El Silabario Salvadoreno Pdf Online<br />
14
- El Silabario Salvadoreno Pdf Book<br />
15
- El Silabario Salvadoreno Pdf Gratis<br />
16
- El Silabario Salvadoreno Pdf Descargar<br />
17
- El Silabario Salvadoreno Pdf Full<br />
18
- El Silabario Salvadoreno Pdf Version<br />
19
- El Silabario Salvadoreno Pdf File<br />
20
- El Silabario Salvadoreno Pdf Document<br />
21
- El Silabario Salvadoreno Pdf Ebook<br />
22
- El Silabario Salvadoreno Pdf Reader<br />
23
- El Silabario Salvadoreno Pdf Format<br />
24
- El Silabario Salvadoreno Pdf Print<br />
25
- El Silabario Salvadoreno Pdf Copy<br />
26
- El Silabario Salvadoreno Pdf Scan<br />
27
- El Silabario Salvadoreno Pdf Edit<br />
28
- El Silabario Salvadoreno Pdf Convert<br />
29
- El Silabario Salvadoreno Pdf Share<br />
30
- El Silabario Salvadoreno Pdf Link<br />
31
- El Silabario Salvadoreno Pdf Zip<br />
32
- El Silabario Salvadoreno Pdf Torrent<br />
33
- El Silabario Salvadoreno Pdf Google Drive<br />
34
- El Silabario Salvadoreno Pdf Dropbox<br />
35
- El Silabario Salvadoreno Pdf Mega<br />
36
- El Silabario Salvadoreno Pdf Mediafire<br />
37
- El Silabario Salvadoreno Pdf 4shared<br />
38
- El Silabario Salvadoreno Pdf Rapidshare<br />
39
- El Silabario Salvadoreno Pdf Zippyshare<br />
40
- El Silabario Salvadoreno Pdf Uploaded<br />
41
- El Silabario Salvadoreno Pdf Download Site<br />
42
- El Silabario Salvadoreno Pdf Download Page<br />
43
- El Silabario Salvadoreno Pdf Download Link<br />
44
- El Silabario Salvadoreno Pdf Download Button<br />
45
- El Silabario Salvadoreno Pdf Download Code<br />
46
- El Silabario Salvadoreno Pdf Download Password<br />
47
- El Silabario Salvadoreno Pdf Download Crack<br />
48
- El Silabario Salvadoreno Pdf Download Keygen<br />
49
- El Silabario Salvadoreno Pdf Download Serial Number<br />
50
- El Silabario Salvadoreno Pdf Download License Key<br />
51
- El Silabario Salvadoreno Pdf Download Activation Key<br />
52
- El Silabario Salvadoreno Pdf Download Generator<br />
53
- El Silabario Salvadoreno Pdf Download Software<br />
54
- El Silabario Salvadoreno Pdf Download Program<br />
55
- El Silabario Salvadoreno Pdf Download Application<br />
56
- El Silabario Salvadoreno Pdf Download Tool<br />
57
- El Silabario Salvadoreno Pdf Download Review<br />
58
- El Silabario Salvadoreno Pdf Download Rating<br />
59
- El Silabario Salvadoreno Pdf Download Feedback</p>
60
- <h2>Benefits of Using El Silabario Salvadoreño for Spanish Learners</h2>
61
- <p>El Silabario Salvadoreño has many benefits for anyone who wants to learn Spanish. Here are some of them:</p>
62
- <ul>
63
- <li><b>Simplicity:</b> The book is easy to follow and understand. It uses clear images, short words, simple sentences, and engaging exercises. It does not require any prior knowledge of Spanish or any other language. It is suitable for children and adults alike.</li>
64
- <li><b>Effectiveness:</b> The book teaches reading and writing through a proven method that focuses on sounds, images, objects, and words. It helps develop phonetic awareness, vocabulary acquisition, comprehension skills, spelling skills, and writing skills. It also exposes the reader to authentic Spanish texts from different sources.</li>
65
- <li><b>Availability:</b> The book is widely available online in PDF format. You can download it for free from various websites or buy it for a low price from online stores. You can also print it or use it on your computer or mobile device.</li>
66
- </ul>
67
- <h3>Tips and Tricks for Using El Silabario Salvadoreño</h3>
68
- <p>If you want to make the most out of El Silabario Salvadoreño, here are some tips and tricks for using it:</p>
69
- <ul>
70
- <li><b>Practice:</b> The key to learning anything is practice. Try to use El Silabario Salvadoreño regularly and consistently. Set a goal for yourself (for example, one page per day) and stick to it. Review what you have learned frequently.</li>
71
- <li><b>Supplement:</b> While El Silabario Salvadoreño is a great resource for learning Spanish, it is not enough by itself. You should also use other resources and methods for learning Spanish. For example, you can listen to podcasts or songs in Spanish; watch videos or movies in Spanish; read books or articles in Spanish; speak with native speakers or other learners; use apps or websites that teach grammar or vocabulary; etc.</li>
72
- <li><b>Review:</b> To measure your progress and improvement with El Silabario Salvadoreño, you should test yourself periodically. You can use the exercises at the end of each page or create your own tests based on what you have learned. You can also ask someone else to check your work or give you feedback.</li>
73
- </ul>
74
- <h4>Conclusion</h4>
75
- <p>In conclusion,</p>
76
- <ul>
77
- <li>El Silabario Salvadoreño is a classic book that has been used by generations of people in El Salvador and other Latin American countries to learn to read and write in Spanish.</li>
78
- <li>The book covers the Spanish alphabet and syllables through sounds, images, objects, words.</li>
79
- <li>The book has many benefits for anyone who wants to learn Spanish, such as simplicity, effectiveness, and availability.</li>
80
- <li>To make the most out of El Silabario Salvadoreño, one should practice, supplement, and review.</li>
81
- </ul>
82
- <p>If you are interested in learning Spanish with El Silabario Salvadoreño, we encourage you to download it today, start using it, and have fun!</p>
83
- <h5>Frequently Asked Questions</h5>
84
- <ol>
85
- <li><b>What is El Silabario Salvadoreño?</b></li>
86
- <p>El Silabario Salvadoreño is a classic book that teaches reading and writing in Spanish through sounds, images, objects, and words.</p>
87
- <li><b>Who created El Silabario Salvadoreño?</b></li>
88
- <p>El Silabario Salvadoreño was created by Adrián Dufflocq Galdames a Chilean educator who developed a phonetic-sensorial-objective-synthetic method for teaching literacy.</p>
89
- <li><b>How many pages does El Silabario Salvadoreño have?</b></li>
90
- <p>El Silabario Salvadoreño has 84 pages that cover the Spanish alphabet syllables.</p>
91
- <li><b>Where can I download El Silabario Salvadoreño?</b></li>
92
- <p>You can download El Silabario Salvadoreño online in PDF format from various websites or buy it from online stores.</p>
93
- <li><b>How can I use El Silabario Salvadoreño effectively?</b></li>
94
- <p>You can use El Silabario Salvadoreño effectively by practicing regularly, supplementing it with other resources, and reviewing your progress.</p>
95
- </ol>
96
- </p> 0a6ba089eb<br />
97
- <br />
98
- <br />
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Unlimited Vpn For Windows 10 Crack.md DELETED
@@ -1,38 +0,0 @@
1
-
2
- <h1>Free Unlimited VPN for Windows 10 Crack: Is It Worth It?</h1>
3
- <p>If you are looking for a free unlimited VPN for Windows 10 crack, you may be tempted by the promises of some websites that offer cracked versions of popular VPN software. However, before you download and install any of these programs, you should be aware of the risks and limitations involved. In this article, we will explain why you should avoid free unlimited VPN for Windows 10 crack and what are the best alternatives for your online security and privacy.</p>
4
- <h2>What is a VPN and why do you need one?</h2>
5
- <p>A VPN (Virtual Private Network) is a service that creates a secure and encrypted connection between your device and a remote server. By using a VPN, you can hide your real IP address and location, bypass geo-restrictions and censorship, access blocked websites and streaming services, protect your data from hackers and snoopers, and enjoy a faster and more stable internet connection.</p>
6
- <h2>free unlimited vpn for windows 10 crack</h2><br /><p><b><b>Download</b> &#10004; <a href="https://byltly.com/2uKyz2">https://byltly.com/2uKyz2</a></b></p><br /><br />
7
- <p>There are many reasons why you may need a VPN for your Windows 10 PC. For example, you may want to:</p>
8
- <ul>
9
- <li>Watch Netflix, Hulu, BBC iPlayer, or other streaming platforms that are not available in your country.</li>
10
- <li>Download torrents or use P2P file-sharing without exposing your identity or activity to your ISP or authorities.</li>
11
- <li>Use public Wi-Fi networks without worrying about your personal information being stolen or intercepted.</li>
12
- <li>Access websites or apps that are blocked by your school, workplace, or government.</li>
13
- <li>Protect your online privacy and anonymity from advertisers, trackers, hackers, or anyone who wants to spy on you.</li>
14
- </ul>
15
- <h2>What is a free unlimited VPN for Windows 10 crack?</h2>
16
- <p>A free unlimited VPN for Windows 10 crack is a modified version of a paid VPN software that claims to offer the same features and benefits without any cost or limitations. These cracks are usually distributed by third-party websites that host illegal downloads of various software programs.</p>
17
- <p>Some of the most common free unlimited VPN for Windows 10 cracks are:</p>
18
- <ul>
19
- <li>Betternet VPN Premium Crack</li>
20
- <li>Turbo VPN Crack</li>
21
- <li>KeepSolid VPN Unlimited Crack</li>
22
- </ul>
23
- <h2>What are the risks and limitations of using a free unlimited VPN for Windows 10 crack?</h2>
24
- <p>While using a free unlimited VPN for Windows 10 crack may seem like a good idea at first glance, it actually comes with many drawbacks and dangers. Here are some of the main reasons why you should avoid using a free unlimited VPN for Windows 10 crack:</p>
25
- <ul>
26
- <li><b>It may contain malware or viruses:</b> The websites that offer cracked VPN software are often shady and unreliable. They may infect your PC with malware or viruses that can damage your system, steal your data, or hijack your resources. You may also expose yourself to phishing scams, ransomware attacks, or identity theft.</li>
27
- <li><b>It may not work properly:</b> The cracked VPN software may not function as intended or advertised. It may have bugs, errors, glitches, or compatibility issues that can affect your user experience and performance. It may also lack important features or updates that are available in the official version.</li>
28
- <li><b>It may compromise your security and privacy:</b> The cracked VPN software may not provide the same level of encryption, protection, or anonymity as the original one. It may leak your IP address, DNS requests, or traffic data to third parties. It may also log your online activity or sell your information to advertisers or hackers.</li>
29
- <li><b>It may violate the law:</b> The cracked VPN software may infringe the intellectual property rights of the original developer. By downloading and using it, you may be breaking the law and risking legal consequences. You may also face fines, lawsuits, or even jail time.</li>
30
- </ul>
31
- <h2>What are the best alternatives to a free unlimited VPN for Windows 10 crack?</h2>
32
- <p>The best alternatives to a free unlimited VPN for Windows 10 crack are either reputable free VPNs or premium VPNs with money-back guarantees. These options are safer, more reliable, and more trustworthy than any cracked VPN software.</p>
33
- <p>Some of the best free VPNs for Windows 10 are:</p>
34
- <p></p>
35
- <ul>
36
- <li><a href="</p> ddb901b051<br />
37
- <br />
38
- <br />
 
spaces/1gistliPinn/ChatGPT4/Examples/Aramcoapprovedvendorlist.md DELETED
@@ -1,6 +0,0 @@
1
- <h2>aramcoapprovedvendorlist</h2><br /><p><b><b>Download Zip</b> > <a href="https://imgfil.com/2uy0lF">https://imgfil.com/2uy0lF</a></b></p><br /><br />
2
-
3
- To be considered for opportunities with Saudi Aramco, suppliers must first register with us. READ MORE. Existing suppliers. Our existing suppliers can manage ... 1fdad05405<br />
4
- <br />
5
- <br />
6
- <p></p>
 
spaces/1gistliPinn/ChatGPT4/Examples/Audi Navigation Plus Rns D Bg Map Download [WORK].md DELETED
@@ -1,7 +0,0 @@
1
- <br />
2
- <p>First of all, download the Audi A6 MMI 3GP Navigation Maps Disc Europe ISO file. Insert an empty disc into your computer and burn the ISO file to the disc using Nero at 4x speed. Once completed, go to the car, turn the ignition on and insert the disc. Next, enter the engineering menu by pressing the <strong>CAR</strong> button and, immediately after, the <strong>BACK</strong> button. Hold both buttons pressed for a few seconds. Now press the Update option using the MMI Control Panel. A new menu asking you to choose a source appears; choose CD/DVD. Select the map by pressing OK and wait. From now on, the maps install and activate automatically.</p>
3
- <p>Hello and welcome to our website. If you own an <strong>Audi A4</strong> and your maps are outdated or you don't have them installed, we are happy to announce that new maps have just arrived. <strong>Audi A4 MMI 2G Navigation DVD Western Europe</strong> can be downloaded for free, and any Audi A4 owner can now <em>update his GPS navigation maps</em>. This DVD contains only Western European countries; you can see a list of them below. If you need Eastern Europe, here it is: Eastern Europe Maps Audi A4</p>
4
- <h2>Audi Navigation Plus Rns D Bg Map Download</h2><br /><p><b><b>Download</b> &#10022; <a href="https://imgfil.com/2uy0Z7">https://imgfil.com/2uy0Z7</a></b></p><br /><br />
5
- <p>If you choose to download and update your maps, it is very important to know which countries are available. Here is the list: Albania, Bosnia and Herzegovina, Bulgaria, Denmark, Germany, Estonia, Finland, France, Greece, Italy, Croatia, Latvia, Liechtenstein, Lithuania, Macedonia,<br /> Montenegro, Norway, Austria, Poland, Romania, San Marino, Sweden, Switzerland, Serbia, Slovenia, Slovakia, Czech Republic, Hungary, Vatican City, Great Britain, Andorra, Austria, Belgium, France, Germany, Gibraltar, Great Britain, Ireland, Liechtenstein, Luxembourg, Monaco, Netherlands, Portugal, Spain, Switzerland.</p> 899543212b<br />
6
- <br />
7
- <br />
 
spaces/1gistliPinn/ChatGPT4/Examples/DownloadEbookFisikaDasarTipler [WORK].md DELETED
@@ -1,33 +0,0 @@
1
- <br />
2
- <h1>How to Download Ebook Fisika Dasar Tipler for Free</h1>
3
- <p>If you are looking for a free ebook on physics, you might be interested in downloading Ebook Fisika Dasar Tipler. This ebook is based on the popular textbook Physics for Scientists and Engineers by Paul A. Tipler and Gene Mosca. It covers topics such as mechanics, thermodynamics, electromagnetism, optics, relativity, and quantum physics.</p>
4
- <h2>DownloadEbookFisikaDasarTipler</h2><br /><p><b><b>Download Zip</b> &#127775; <a href="https://imgfil.com/2uxYT3">https://imgfil.com/2uxYT3</a></b></p><br /><br />
5
- <p>Downloading Ebook Fisika Dasar Tipler is easy and convenient. You just need to follow these steps:</p>
6
- <ol>
7
- <li>Visit the website <a href="https://www.ebookfisikadasartipler.com">www.ebookfisikadasartipler.com</a>.</li>
8
- <li>Click on the button "Download Now" and enter your email address.</li>
9
- <li>Check your inbox for a confirmation email and click on the link provided.</li>
10
- <li>Enjoy reading Ebook Fisika Dasar Tipler on your device of choice.</li>
11
- </ol>
12
- <p>By downloading Ebook Fisika Dasar Tipler, you will benefit from:</p>
13
- <ul>
14
- <li>A comprehensive and updated introduction to physics.</li>
15
- <li>A clear and engaging writing style that makes physics accessible and interesting.</li>
16
- <li>A variety of examples, exercises, and problems that test your understanding and challenge your creativity.</li>
17
- <li>A digital format that allows you to read anywhere and anytime.</li>
18
- </ul>
19
- <p>Don't miss this opportunity to download Ebook Fisika Dasar Tipler for free. It is a valuable resource for students, teachers, and anyone who wants to learn more about physics. Download it today and start exploring the wonders of the physical world.</p>
20
- <p></p>
21
-
22
- <p>Ebook Fisika Dasar Tipler is based on the textbook Physics for Scientists and Engineers by Paul A. Tipler and Gene Mosca. This textbook is widely used in universities around the world for teaching physics to science and engineering students. It has been translated into several languages, including Indonesian.</p>
23
- <p>The textbook covers all the major topics of physics, from classical mechanics to modern physics. It explains the concepts and principles of physics with clarity and rigor, using examples and applications from various fields of science and technology. It also provides numerous exercises and problems that help students practice and master their skills.</p>
24
- <p>Ebook Fisika Dasar Tipler is a digital version of the textbook that can be downloaded for free from the website <a href="https://www.ebookfisikadasartipler.com">www.ebookfisikadasartipler.com</a>. By downloading Ebook Fisika Dasar Tipler, you will get access to the following features:</p>
25
- <ul>
26
- <li>A complete and updated content of the textbook, with high-quality graphics and illustrations.</li>
27
- <li>A searchable and interactive interface that allows you to navigate through the chapters and sections easily.</li>
28
- <li>A bookmark and highlight function that lets you mark and save important points and notes.</li>
29
- <li>A quiz and review function that tests your understanding and gives you feedback.</li>
30
- <li>A link to online resources and references that supplement your learning.</li>
31
- </ul></p> d5da3c52bf<br />
32
- <br />
33
- <br />
 
spaces/1line/AutoGPT/autogpt/memory/__init__.py DELETED
@@ -1,99 +0,0 @@
- from autogpt.memory.local import LocalCache
- from autogpt.memory.no_memory import NoMemory
-
- # List of supported memory backends
- # Add a backend to this list if the import attempt is successful
- supported_memory = ["local", "no_memory"]
-
- try:
-     from autogpt.memory.redismem import RedisMemory
-
-     supported_memory.append("redis")
- except ImportError:
-     # print("Redis not installed. Skipping import.")
-     RedisMemory = None
-
- try:
-     from autogpt.memory.pinecone import PineconeMemory
-
-     supported_memory.append("pinecone")
- except ImportError:
-     # print("Pinecone not installed. Skipping import.")
-     PineconeMemory = None
-
- try:
-     from autogpt.memory.weaviate import WeaviateMemory
-
-     supported_memory.append("weaviate")
- except ImportError:
-     # print("Weaviate not installed. Skipping import.")
-     WeaviateMemory = None
-
- try:
-     from autogpt.memory.milvus import MilvusMemory
-
-     supported_memory.append("milvus")
- except ImportError:
-     # print("pymilvus not installed. Skipping import.")
-     MilvusMemory = None
-
-
- def get_memory(cfg, init=False):
-     memory = None
-     if cfg.memory_backend == "pinecone":
-         if not PineconeMemory:
-             print(
-                 "Error: Pinecone is not installed. Please install pinecone"
-                 " to use Pinecone as a memory backend."
-             )
-         else:
-             memory = PineconeMemory(cfg)
-             if init:
-                 memory.clear()
-     elif cfg.memory_backend == "redis":
-         if not RedisMemory:
-             print(
-                 "Error: Redis is not installed. Please install redis-py to"
-                 " use Redis as a memory backend."
-             )
-         else:
-             memory = RedisMemory(cfg)
-     elif cfg.memory_backend == "weaviate":
-         if not WeaviateMemory:
-             print(
-                 "Error: Weaviate is not installed. Please install weaviate-client to"
-                 " use Weaviate as a memory backend."
-             )
-         else:
-             memory = WeaviateMemory(cfg)
-     elif cfg.memory_backend == "milvus":
-         if not MilvusMemory:
-             print(
-                 "Error: Milvus sdk is not installed."
-                 "Please install pymilvus to use Milvus as memory backend."
-             )
-         else:
-             memory = MilvusMemory(cfg)
-     elif cfg.memory_backend == "no_memory":
-         memory = NoMemory(cfg)
-
-     if memory is None:
-         memory = LocalCache(cfg)
-         if init:
-             memory.clear()
-     return memory
-
-
- def get_supported_memory_backends():
-     return supported_memory
-
-
- __all__ = [
-     "get_memory",
-     "LocalCache",
-     "RedisMemory",
-     "PineconeMemory",
-     "NoMemory",
-     "MilvusMemory",
-     "WeaviateMemory",
- ]
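
The deleted module above is a small factory: `get_memory(cfg)` picks a backend based on `cfg.memory_backend` and falls back to `LocalCache` when the requested backend is missing or not installed. For context, here is a minimal, hypothetical usage sketch; the `SimpleNamespace` standing in for Auto-GPT's `Config` object and the chosen backend name are illustrative assumptions, and the concrete backend classes expect additional configuration fields that are not visible in this diff.

```python
# Hypothetical usage sketch for the deleted autogpt.memory helpers (not part of this commit).
from types import SimpleNamespace

from autogpt.memory import get_memory, get_supported_memory_backends

# "local" and "no_memory" are always listed; redis/pinecone/weaviate/milvus
# only appear if their optional imports in the module above succeeded.
print(get_supported_memory_backends())

# memory_backend is the only attribute get_memory() reads directly; a real
# Auto-GPT Config object carries many more fields used by the backends themselves.
cfg = SimpleNamespace(memory_backend="no_memory")
memory = get_memory(cfg, init=True)  # unknown or unavailable backends fall back to LocalCache
```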
 
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Agar.io Mod Macro Download Enhance Your Gameplay with Agar Tool M PELEA.md DELETED
@@ -1,146 +0,0 @@
1
- <br />
2
- <h1>Download Agar io Mod Macro: How to Enhance Your Gameplay Experience</h1>
3
- <p>If you are a fan of online multiplayer games, you might have heard of or played <a href="(^1^)">Agar io</a>, a simple but addictive browser game where you control a cell and try to eat other cells to grow bigger. But did you know that you can also download and install a mod macro for Agar io, which can give you more features and advantages in the game? In this article, we will explain what Agar io is, what a mod macro is, how to download and install it, and how to use it effectively. By the end of this article, you will be able to enjoy Agar io with a new level of fun and excitement.</p>
4
- <h2>download agar io mod macro</h2><br /><p><b><b>Download</b> &#128504; <a href="https://urlin.us/2uSVJt">https://urlin.us/2uSVJt</a></b></p><br /><br />
5
- <h2>What is Agar io?</h2>
6
- <p>Agar io is a massively multiplayer online action game that was released in 2015 by a Brazilian developer named Matheus Valadares. The game is inspired by the biological phenomenon of agar, which is a gelatinous substance used to culture bacteria. In the game, players control a cell that can move around a map and eat smaller cells, while avoiding being eaten by larger cells. The goal is to become the largest cell in the map and dominate the leaderboard.</p>
7
- <h3>The basic gameplay of Agar io</h3>
8
- <p>The gameplay of Agar io is very simple and intuitive. You can use your mouse to move your cell around the map, and use the spacebar to split your cell into two smaller cells, which can help you escape from predators or catch prey. You can also use the W key to eject some mass from your cell, which can be used to feed other cells, either as an act of kindness or as a bait. You can also interact with various objects on the map, such as viruses, which can split larger cells into smaller pieces, or pellets, which are small food particles that can increase your mass.</p>
9
- <h3>The popularity and challenges of Agar io</h3>
10
- <p>Agar io quickly became one of the most popular online games in 2015, attracting millions of players from all over the world. The game is praised for its simplicity, accessibility, and addictiveness, as well as its social aspect, as players can chat with each other and form teams or alliances. However, the game also poses some challenges for players, such as lagging, hacking, teaming, or trolling, which can affect the fairness and enjoyment of the game. Moreover, some players may find the game too repetitive or boring after a while, as there is no end goal or progression system in the game.</p>
11
- <h2>What is a mod macro?</h2>
12
- <p>A mod macro is a modification or extension that adds new features or functions to a game or software. A mod macro can enhance the performance, functionality, or appearance of a game or software, as well as provide some advantages or conveniences for the user. A mod macro can be created by the original developer or by third-party developers or users.</p>
13
- <h3>The definition and benefits of a mod macro</h3>
14
- <p>A mod macro for Agar io is a user script that modifies or extends the original game code to provide new features or functions for the player. A mod macro can offer various benefits for Agar io players, such as:</p>
15
- <ul>
16
- <li>Zooming in or out of the map to see more or less details</li>
17
- <li>Ejecting mass faster or slower with different keys</li>
18
- <li>Splitting into multiple cells with one key</li> <li>Changing the skin or color of your cell</li>
19
- <li>Showing the coordinates, mass, or speed of your cell</li>
20
- <li>Showing the leaderboard, chat, or statistics of the game</li>
21
- <li>Using bots or scripts to automate some actions or movements</li>
22
- </ul>
23
- <p>A mod macro can make Agar io more fun, easy, or challenging, depending on your preference and play style. However, a mod macro can also be considered as a cheat or a hack by some players or developers, as it can give you an unfair advantage over other players who do not use a mod macro. Therefore, you should be careful and respectful when using a mod macro, and avoid using it in servers or modes that prohibit it.</p>
24
- <p>download agar io mod macro zoom<br />
25
- download agar io mod macro split<br />
26
- download agar io mod macro feed<br />
27
- download agar io mod macro eject<br />
28
- download agar io mod macro script<br />
29
- download agar io mod macro extension<br />
30
- download agar io mod macro hack<br />
31
- download agar io mod macro cheat<br />
32
- download agar io mod macro free<br />
33
- download agar io mod macro apk<br />
34
- download agar io mod macro android<br />
35
- download agar io mod macro ios<br />
36
- download agar io mod macro pc<br />
37
- download agar io mod macro chrome<br />
38
- download agar io mod macro firefox<br />
39
- download agar io mod macro tampermonkey<br />
40
- download agar io mod macro greasyfork<br />
41
- download agar io mod macro delta<br />
42
- download agar io mod macro ogario<br />
43
- download agar io mod macro agartool<br />
44
- download agar io mod macro fps booster<br />
45
- download agar io mod macro unlimited zoom<br />
46
- download agar io mod macro double split<br />
47
- download agar io mod macro triple split<br />
48
- download agar io mod macro tricksplit<br />
49
- download agar io mod macro popsplit<br />
50
- download agar io mod macro x16 split<br />
51
- download agar io mod macro fast mass<br />
52
- download agar io mod macro auto respawn<br />
53
- download agar io mod macro stop movement<br />
54
- download agar io mod macro interactive color<br />
55
- download agar io mod macro color change<br />
56
- download agar io mod macro attack range<br />
57
- download agar io mod macro map border<br />
58
- download agar io mod macro sector label<br />
59
- download agar io mod macro mini map<br />
60
- download agar io mod macro fps control<br />
61
- download agar io mod macro hot keys<br />
62
- download agar io mod macro chat <br />
63
- download agar io mod macro helpers <br />
64
- download agar io mo</p>
65
- <h3>The types and features of mod macros for Agar io</h3>
66
- <p>There are many types and features of mod macros for Agar io, each with different functions and purposes. Some of the most popular and widely used mod macros for Agar io are:</p>
67
- <table>
68
- <tr>
69
- <th>Mod Macro Name</th>
70
- <th>Mod Macro Features</th>
71
- </tr>
72
- <tr>
73
- <td><a href="">Agar Tool</a></td>
74
- <td>- Zoom in or out with the mouse wheel<br>- Eject mass with E, R, T, P, or Q keys<br>- Split with A, S, D, F, G, H, J, K, L, Z, X, C, V, B keys<br>- Change skin with W key<br>- Show mass and speed with M key<br>- Show coordinates with C key<br>- Show leaderboard with L key<br>- Show chat with Enter key<br>- Show statistics with S key<br>- Use bots with B key</td>
75
- </tr>
76
- <tr>
77
- <td><a href="">Agar.io Powerups</a></td>
78
- <td>- Zoom in or out with the mouse wheel<br>- Eject mass faster with E key<br>- Split into 16 cells with Z key<br>- Change skin with W key<br>- Show mass and speed with M key<br>- Show coordinates with C key<br>- Show leaderboard with L key<br>- Show chat with Enter key<br>- Show statistics with S key<br>- Use bots with B key</td>
79
- </tr>
80
- <tr>
81
- <td><a href="">Legend Mod</a></td>
82
- <td>- Zoom in or out with the mouse wheel<br>- Eject mass faster with E key<br>- Split into 16 cells with Z key<br>- Change skin with W key<br>- Show mass and speed with M key<br>- Show coordinates with C key<br>- Show leaderboard with L key<br>- Show chat with Enter key<br>- Show statistics with S key<br>- Use scripts to customize the game interface and functions</td>
83
- </tr>
84
- <tr>
85
- <td><a href="">OGARio by szymy</a></td>
86
- <td>- Zoom in or out with the mouse wheel<br>- Eject mass faster with E key<br>- Split into 16 cells with Z key<br>- Change skin with W key<br>- Show mass and speed with M key<br>- Show coordinates with C key<br>- Show leaderboard with L key<br>- Show chat with Enter key<br>- Show statistics with S key<br>- Use bots to play for you or help you</td>
87
- </tr>
88
- </table>
89
- <h2>How to download and install Agar io mod macro?</h2>
90
- <p>If you want to download and install a mod macro for Agar io, you will need some tools and steps to do it. Here are the general sources and requirements for Agar io mod macro:</p>
91
- <h3>The sources and requirements for Agar io mod macro</h3>
92
- <p>To download and install a mod macro for Agar io, you will need the following sources and requirements:</p>
93
- <ul>
94
- <li>A web browser that supports user scripts, such as Chrome, Firefox, Opera, or Safari.</li>
95
- <li>A user script manager extension for your web browser, such as Tampermonkey, Greasemonkey, Violentmonkey, or NinjaKit.</li>
96
- <li>A mod macro user script for Agar io from a reliable and safe website, such as <a href="">Greasy Fork</a>, <a href="">OpenUserJS</a>, or <a href="">GitHub</a>.</li>
97
- <li>An internet connection and an Agar io account.</li>
98
- </ul>
99
- <h3>The steps and tips for downloading and installing Agar io mod macro</h3>
100
- <p>To download and install a mod macro for Agar io, you can follow these steps and tips:</p>
101
- <ol>
102
- <li>Open your web browser and go to the website of the user script manager extension that you want to use. For example, if you want to use Tampermonkey for Chrome, go to <a href="https://chrome.google.com/webstore/detail/tampermonkey/dhdgffkkebhmkfjojejmpbldmpobfkfo">https://chrome.google.com/webstore/detail/tampermonkey/dhdgffkkebhmkfjojejmpbldmpobfkfo</a> and click on the "Add to Chrome" button.</li>
103
- <li>After installing the user script manager extension, go to the website of the mod macro user script that you want to use. For example, if you want to use Agar Tool, go to <a href="https://greasyfork.org/en/scripts/370575-agar-tool">https://greasyfork.org/en/scripts/370575-agar-tool</a> and click on the "Install this script" button.</li>
104
- <li>After installing the mod macro user script, go to the Agar io website at <a href="https://agar.io/">https://agar.io/</a> and log in with your account. You should see a new menu or interface on the game screen that indicates that the mod macro is working.</li>
105
- <li>You can now customize and use the mod macro features and functions according to your preference and play style. You can also enable or disable the mod macro by clicking on the user script manager icon on your web browser and toggling the switch next to the mod macro name.</li>
106
- </ol>
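- <p>For readers curious about what actually gets installed in step 2, the sketch below shows the general shape of a Tampermonkey user script. It is an illustrative skeleton only, not any of the real mod macros named above: the script name, the E hotkey, and the logged message are made up for this example, and a real mod macro would replace the body with its own zoom, eject, split, or interface logic.</p>
- <pre><code>// ==UserScript==
- // @name         Example Agar.io mod macro (skeleton)
- // @description  Illustrative skeleton only; a real mod macro adds its own features here
- // @match        *://agar.io/*
- // @grant        none
- // ==/UserScript==
- 
- (function () {
-   'use strict';
- 
-   // A mod macro is just a user script that runs on the game page.
-   // This skeleton only registers a hotkey and logs a message;
-   // real mods hook into the page to zoom, eject mass, split, and so on.
-   document.addEventListener('keydown', function (event) {
-     if (event.key === 'e') {
-       console.log('Macro hotkey pressed');
-     }
-   });
- })();
- </code></pre>
- <p>Once a script like this is saved in the user script manager, it runs automatically whenever the matched page loads, which is why toggling the switch next to the mod macro name in step 4 is enough to turn the whole mod on or off.</p>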
107
- <p>Some tips for downloading and installing Agar io mod macro are:</p>
108
- <ul>
109
- <li>Make sure that you download and install a mod macro from a trusted and updated source, as some mod macros may contain viruses, malware, or outdated code that can harm your device or game account.</li>
110
- <li>Make sure that you read and follow the instructions and requirements of the mod macro carefully, as some mod macros may have different or additional steps or tools for installation or usage.</li>
111
- <li>Make sure that you respect the rules and policies of Agar io and other players, as some mod macros may be banned or frowned upon by the game developer or community. Do not use a mod macro to cheat, hack, or harass other players, as this can ruin the game experience for everyone and get you banned or reported.</li>
112
- </ul>
113
- <h2>How to use Agar io mod macro effectively?</h2>
114
- <p>After downloading and installing a mod macro for Agar io, you may wonder how to use it effectively to enhance your gameplay experience. Here are some common and advanced commands and functions of Agar io mod macro, as well as some best practices and strategies for using it.</p>
115
- <h3>The common and advanced commands and functions of Agar io mod macro</h3>
116
- <p>The common and advanced commands and functions of Agar io mod macro vary depending on the type and feature of the mod macro that you use. However, some of the most common and useful commands and functions are:</p>
117
- <ul>
118
- <li>Zooming in or out of the map: This can help you see more or less details of the map, such as the location of other cells, viruses, or pellets. You can use this to plan your movements, avoid dangers, or find opportunities. You can usually zoom in or out with the mouse wheel or by pressing a key.</li>
119
- <li>Ejecting mass faster or slower: This can help you control the amount of mass that you eject from your cell, which can be used for various purposes, such as feeding other cells, baiting other cells, or escaping from other cells. You can usually eject mass faster or slower with different keys, such as E, R, T, P, or Q.</li>
120
- <li>Splitting into multiple cells: This can help you split your cell into more than two smaller cells, which can be used for various purposes, such as catching other cells, dodging other cells, or spreading your mass. You can usually split into multiple cells with one key, such as A, S, D, F, G, H, J, K, L, Z, X, C, V, B.</li>
121
- <li>Changing the skin or color of your cell: This can help you change the appearance of your cell, which can be used for various purposes, such as expressing your personality, showing your affiliation, or disguising your identity. You can usually change the skin or color of your cell with the W key or by selecting a skin from the menu.</li>
122
- <li>Showing the coordinates, mass, or speed of your cell: This can help you see the exact position, size, or velocity of your cell, which can be used for various purposes, such as navigating the map, measuring your growth, or adjusting your movement. You can usually show the coordinates, mass, or speed of your cell with the M or C keys or by enabling an option from the menu.</li>
123
- <li>Showing the leaderboard, chat, or statistics of the game: This can help you see the ranking, communication, or performance of yourself and other players, which can be used for various purposes, such as competing, socializing, or improving. You can usually show the leaderboard, chat, or statistics of the game with the L, Enter, or S keys or by enabling an option from the menu.</li>
124
- <li>Using bots or scripts to automate some actions or movements: This can help you use artificial intelligence or code to perform some tasks or behaviors for you or assist you in the game, which can be used for various purposes, such as playing when you are away, helping you when you are stuck, or testing some strategies. You can usually use bots or scripts with the B key or by installing a script from a website.</li>
125
- </ul>
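- <p>Two of the entries above, faster mass ejection and script-driven automation, usually come down to the same hold-to-repeat pattern. The snippet below is a rough sketch of that idea and is not taken from any particular mod macro: ejectOnce() is a hypothetical placeholder for however a given mod actually ejects mass, and the E hotkey and 50 ms interval are arbitrary example values.</p>
- <pre><code>// Hypothetical hold-to-repeat macro: while E is held, call ejectOnce() repeatedly.
- // ejectOnce() is a placeholder; a real mod would trigger the game's own feed action here.
- function ejectOnce() {
-   console.log('eject');
- }
- 
- var repeatTimer = null;
- 
- document.addEventListener('keydown', function (event) {
-   if (event.key === 'e') {
-     if (repeatTimer === null) {
-       repeatTimer = setInterval(ejectOnce, 50); // roughly 20 ejects per second while held
-     }
-   }
- });
- 
- document.addEventListener('keyup', function (event) {
-   if (event.key === 'e') {
-     if (repeatTimer !== null) {
-       clearInterval(repeatTimer);
-       repeatTimer = null;
-     }
-   }
- });
- </code></pre>
- <p>The interval length is the main tuning knob in a sketch like this: a shorter delay ejects mass faster but floods the game with input, which is part of why some servers and players treat aggressive macros as cheating.</p>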
126
- <h3>The best practices and strategies for using Agar io mod macro</h3>
127
- <p>The best practices and strategies for using Agar io mod macro depend on your personal preference and play style. However, some of the general tips and advice are:</p>
128
- <ul>
129
- <li>Use a mod macro that suits your needs and goals: There are many types and features of mod macros for Agar io, but not all of them may be useful or enjoyable for you. You should choose a mod macro that offers the features and functions that you want and need in the game, and avoid using a mod macro that has unnecessary or unwanted features and functions.</li>
130
- <li>Use a mod macro that is compatible and safe: There are many sources and websites that offer mod macros for Agar io, but not all of them may be reliable or secure. You should download and install a mod macro that is compatible with your web browser and user script manager extension, and avoid downloading and installing a mod macro that may contain viruses, malware, or outdated code.</li>
131
- <li>Use a mod macro that is respectful and ethical: There are many benefits and advantages that a mod macro can provide for Agar io players, but not all of them may be fair or acceptable. You should use a mod macro that is respectful and ethical to other players and the game developer, and avoid using a mod macro that may be banned or frowned upon by the game rules and policies.</li>
132
- <li>Use a mod macro that is fun and challenging: There are many features and functions that a mod macro can offer for Agar io players, but not all of them may be fun or challenging. You should use a mod macro that is fun and challenging to enhance your gameplay experience, and avoid using a mod macro that may make the game too easy or boring.</li>
133
- </ul>
134
- <h2>Conclusion</h2>
135
- <p>Agar io is a simple but addictive online multiplayer game where you control a cell and try to eat other cells to grow bigger. However, if you want to have more features and advantages in the game, you can also download and install a mod macro for Agar io, which can enhance your performance, functionality, or appearance in the game. In this article, we explained what Agar io is, what a mod macro is, how to download and install it, and how to use it effectively. We hope that this article was helpful and informative for you, and that you will enjoy Agar io with a new level of fun and excitement. If you have any questions or feedback, please feel free to leave a comment below. Happy gaming!</p>
136
- <h3>FAQs</h3>
137
- <p>Here are some frequently asked questions and answers about Agar io mod macro:</p>
138
- <ol>
139
- <li>Q: Is Agar io mod macro legal or illegal?<br>A: Agar io mod macro is not illegal, but it may be against the game rules or policies. You should check the terms of service and privacy policy of Agar io before using a mod macro, and respect the rights and wishes of the game developer and other players.</li>
140
- <li>Q: Is Agar io mod macro safe or risky?<br>A: A mod macro is generally safe, but it becomes risky if you get it from the wrong place. You should download and install a mod macro from a trusted and updated source, and avoid any mod macro that may contain viruses, malware, or outdated code. You should also scan your device and game account regularly for any potential threats or issues.</li>
141
- <li>Q: Is Agar io mod macro free or paid?<br>A: Most Agar io mod macros are free, but some are paid. You should check the price and payment method of the mod macro before downloading and installing it, and avoid any mod macro that may charge you without your consent or knowledge. You should also support the original game developer by purchasing the game or in-game items if you can.</li>
142
- <li>Q: Is Agar io mod macro easy or hard?<br>A: Installing a mod macro is usually easy, but getting the most out of it can be hard. You should follow the instructions and requirements of the mod macro carefully, and avoid skipping or missing any steps or tools for installation or usage. You should also practice and experiment with the mod macro features and functions until you master them.</li>
143
- <li>Q: Is Agar io mod macro fun or boring?<br>A: A mod macro is usually fun, but it can become boring if it removes all of the challenge. You should choose a mod macro that suits your needs and goals, and avoid one with unnecessary or unwanted features and functions. You should also prefer a mod macro that keeps the game fun and challenging, rather than one that makes it too easy or repetitive.</li>
144
- </ol>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CarX Drift Racing 2 How to Master the Art of Tandem Drifting.md DELETED
@@ -1,87 +0,0 @@
1
-
2
- <h1>CarX Drift Racing 2: A Review of the Best Drift Racing Game</h1>
3
- <p>If you are a fan of drift racing, you might have heard of CarX Drift Racing 2, the sequel of the most desired drift-game in the world. This game offers an unprecedented and realistic experience of driving real sports cars on one of many race tracks available throughout the game. In this article, we will review CarX Drift Racing 2 and tell you why you should play it, what features it has, and what are its pros and cons.</p>
4
- <h2>car x drift racing 2</h2><br /><p><b><b>DOWNLOAD</b> ->->->-> <a href="https://urlin.us/2uT1uP">https://urlin.us/2uT1uP</a></b></p><br /><br />
5
- <h2>Introduction</h2>
6
- <h3>What is CarX Drift Racing 2?</h3>
7
- <p>CarX Drift Racing 2 is a mobile game developed by CarX Technologies, a company that specializes in creating realistic car physics and graphics for games. It is the second installment of the CarX Drift Racing series, which has over 100 million fans around the world. The game was released in December 2018 for Android and iOS devices, and has since received many updates and improvements.</p>
8
- <h3>Why should you play CarX Drift Racing 2?</h3>
9
- <p>CarX Drift Racing 2 is not just another racing game. It is a game that lets you experience the thrill and excitement of drifting, a driving technique where the driver intentionally oversteers the car to make it slide sideways. Drifting requires skill, precision, and practice, and CarX Drift Racing 2 gives you the opportunity to master it. You can compete against real people in online championships, race in tandems with other players, customize your car and track, and enjoy the realistic graphics and physics of the game. Whether you are a beginner or a pro, CarX Drift Racing 2 will challenge you and keep you entertained for hours.</p>
10
- <h2>Features of CarX Drift Racing 2</h2>
11
- <h3>Online Rooms</h3>
12
- <p>This is the game mode that you have been waiting for. You can now drift in real time with your friends or other players from around the world. You can create or join an online room, pick a location, drift, and earn points. You can also watch other players drift using the drone camera. You can earn valuable rewards for achieving different ranks in online rooms.</p>
13
- <h3>Visual Auto Tuning</h3>
14
- <p>This feature allows you to customize your car's appearance to suit your style and preferences. You can replace mirrors, lights, running boards, bumpers, and many other parts. You can also create a unique image of your car with body kits, rims, vinyls, and more. The possibilities are endless.</p>
15
- <p>car x drift racing 2 online rooms<br />
16
- car x drift racing 2 visual auto tuning<br />
17
- car x drift racing 2 improved performance tuning<br />
18
- car x drift racing 2 realistic driving physics<br />
19
- car x drift racing 2 XDS mode<br />
20
- car x drift racing 2 multiplayer championships<br />
21
- car x drift racing 2 premium subscription<br />
22
- car x drift racing 2 cars list and how to unlock them<br />
23
- car x drift racing 2 best cars for drifting<br />
24
- car x drift racing 2 tips and tricks<br />
25
- car x drift racing 2 download for android<br />
26
- car x drift racing 2 download for ios<br />
27
- car x drift racing 2 apk mod unlimited money<br />
28
- car x drift racing 2 cheats and hacks<br />
29
- car x drift racing 2 latest update<br />
30
- car x drift racing 2 review and rating<br />
31
- car x drift racing 2 gameplay videos<br />
32
- car x drift racing 2 screenshots and wallpapers<br />
33
- car x drift racing 2 official website and social media<br />
34
- car x drift racing 2 support and feedback<br />
35
- car x drift racing 2 forums and community<br />
36
- car x drift racing 2 news and events<br />
37
- car x drift racing 2 guides and walkthroughs<br />
38
- car x drift racing 2 codes and coupons<br />
39
- car x drift racing 2 free coins and rewards<br />
40
- car x drift racing 2 custom vinyls and decals<br />
41
- car x drift racing 2 body kits and rims<br />
42
- car x drift racing 2 suspension and tyre pressure settings<br />
43
- car x drift racing 2 engine and turbo tuning<br />
44
- car x drift racing 2 gear box and brakes tuning<br />
45
- car x drift racing 2 steering and control settings<br />
46
- car x drift racing 2 different surfaces and tracks<br />
47
- car x drift racing 2 tandem drifting and evaluation system<br />
48
- car x drift racing 2 leader and follower roles<br />
49
- car x drift racing 2 top-32 tournament mode<br />
50
- car x drift racing 2 league ranking and rewards<br />
51
- car x drift racing 2 drone camera and replays<br />
52
- car x drift racing 2 muscle cars and sports cars<br />
53
- car x drift racing 2 real life drift cars and telemetric data<br />
54
- car x drift racing 2 addictive gameplay and warning message</p>
55
- <h3>Improved Performance Tuning</h3>
56
- <p>This feature allows you to fine-tune your car's performance to match your driving skills and needs. You can adjust your suspension, springs, tyre pressure, wheel angle, engine, turbine pressure, gear box, brakes, locking differential, and more. You can show some quality drift only if you have your car fine-tuned to your needs.</p>
57
- <h3>The Most True to Life Racing on a Mobile Platform</h3>
58
- <p>This feature makes CarX Drift Racing 2 stand out from other racing games. The game has improved steering control that is perfect for quick side changing, backwards and drift donuts. The game also shows how tyre pressure affects driving physics. The game developers ran a number of field tests with real drift cars to collect data and improve the game physics. The game also has realistic sound effects that make you feel like you are driving a real car. You can hear the sound of engine, turbo, tyres, and exhaust.</p>
59
- <h3>XDS Mode</h3>
60
- <p>This feature allows you to enjoy tandem drifting with artificial intelligence. You can select from different modes of difficulty and learn how to drift from the best drivers. You can also improve your own skills by following the leader or leading the follower. You can earn coins and reputation points by performing well in XDS mode.</p>
61
- <h3>Top-32 Mode</h3>
62
- <p>This feature allows you to compete in the world championships of drift racing. You can qualify for the Top-32 list of the best drivers from all over the world. You can then challenge them in head-to-head battles and prove your skills. You can win trophies and prizes by advancing in the Top-32 mode.</p>
63
- <h3>Multiplayer Mode</h3>
64
- <p>This feature allows you to race against other players in real time. You can join a random race or create your own lobby. You can choose from different modes such as Classic, Time Attack, or Drift Race. You can also chat with other players and make friends. You can earn coins and reputation points by winning races in multiplayer mode.</p>
65
- <h2>Pros and Cons of CarX Drift Racing 2</h2>
66
- <h3>Pros</h3>
67
- <h4>Realistic graphics and physics</h4>
68
- <p>The game has stunning graphics that make you feel like you are in a real race track. The game also has realistic physics that simulate the behaviour of real cars and tyres. The game is a feast for your eyes and ears.</p>
69
- <h4>Customizable cars and tracks</h4>
70
- <p>The game has a wide range of cars and tracks that you can choose from. You can also customize your car's appearance and performance to suit your style and preferences. You can create your own unique car and track with the visual auto tuning and track editor features.</p>
71
- <h4>Challenging and fun gameplay</h4>
72
- <p>The game has various game modes that offer different levels of challenge and fun. You can drift solo or with other players, race against time or opponents, or compete in championships. The game also has a dynamic scoring system that rewards you for your style, skill, and speed.</p>
73
- <h3>Cons</h3>
74
- <h4>High battery consumption</h4>
75
- <p>The game has high-quality graphics and physics that require a lot of processing power from your device. This means that the game drains your battery faster than other games. You might need to charge your device more often or lower the graphics settings to save battery life.</p>
76
- <h4>In-app purchases and ads</h4>
77
- <p>The game is free to download and play, but it also has in-app purchases and ads that might affect your gaming experience. You might need to spend real money to unlock some cars, tracks, or features, or watch ads to earn some coins or bonuses. You can also disable the ads by purchasing the premium version of the game.</p>
78
- <h4>Steep learning curve</h4>
79
- <p>The game is not easy to master, especially for beginners. Drifting requires a lot of practice and patience, and the game does not have a tutorial or a guide to help you learn the basics. You might need to watch some videos or read some tips online to improve your skills.</p>
80
- <h2>Conclusion</h2>
81
- <h3>Summary of the main points</h3>
82
- <p>In conclusion, CarX Drift Racing 2 is a drift racing game that offers an unprecedented and realistic experience of driving real sports cars on one of many race tracks available throughout the game. The game has many features such as online rooms, visual auto tuning, improved performance tuning, XDS mode, Top-32 mode, multiplayer mode, realistic graphics and physics, customizable cars and tracks, challenging and fun gameplay, etc. The game also has some drawbacks such as high battery consumption, in-app purchases and ads, steep learning curve, etc.</p>
83
- <h3>Recommendation and rating</h3>
84
- <p>We recommend CarX Drift Racing 2 to anyone who loves drift racing or wants to try something new and exciting. The game is suitable for both beginners and pros, as it offers different levels of difficulty and challenge. The game is also free to download and play, so you have nothing to lose by giving it a try. We rate CarX Drift Racing 2 4.5 out of 5 stars for its amazing graphics, physics, gameplay, features, etc.</p>
85
- <h3>FAQs</h3>
- <ol>
- <li>Q: How do I download CarX Drift Racing 2?<br>A: You can download CarX Drift Racing 2 from the Google Play Store for Android devices or the App Store for iOS devices.</li>
- <li>Q: How do I control my car in CarX Drift Racing 2?<br>A: You can control your car using different options such as tilt, buttons, or steering wheel. You can also adjust the sensitivity and position of the controls in the settings menu.</li>
- <li>Q: How do I earn coins and reputation points in CarX Drift Racing 2?<br>A: You can earn coins and reputation points by drifting, racing, and competing in different game modes. You can also watch ads or complete offers to get some extra coins or bonuses.</li>
- <li>Q: How do I unlock new cars and tracks in CarX Drift Racing 2?<br>A: You can unlock new cars and tracks by spending coins or real money, or by achieving certain ranks or completing certain tasks in the game.</li>
- <li>Q: How do I customize my car and track in CarX Drift Racing 2?<br>A: You can use the visual auto tuning and track editor features to change the appearance and performance of your car and to create your own unique track with different objects and settings.</li>
- <li>Q: How do I improve my skills in CarX Drift Racing 2?<br>A: You can improve your skills by practicing and learning from other players. You can also use the XDS mode to drift with artificial intelligence, or watch videos and read tips online to get some advice and tricks.</li>
- </ol>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Carros Rebaixados Online A game that lets you change the color wheels and glass of your car.md DELETED
@@ -1,158 +0,0 @@
1
-
2
- <h1>Carros Rebaixados Online APK: A Fun and Customizable Simulation Game</h1>
3
- <p>If you are a fan of cars and simulation games, you might want to check out <strong>Carros Rebaixados Online APK</strong>, a game that lets you customize and show off your car to your friends. This game is developed by Sebby Games, a Brazilian studio that specializes in creating realistic and immersive car games. In this game, you can choose from various models of cars, modify them according to your preferences, and drive them around in different scenarios. You can also play online with other players, chat with them, and compete with them. In this article, we will tell you everything you need to know about this game, including how to download and install it, what features it offers, how to play it, what are its pros and cons, how it compares to other similar games, and some tips to improve your experience.</p>
4
- <h2>How to download and install Carros Rebaixados Online APK on your Android device?</h2>
5
- <p>Downloading and installing Carros Rebaixados Online APK is very easy and straightforward. You can follow these simple steps:</p>
6
- <h2>carros rebaixados online apk</h2><br /><p><b><b>Download</b> ->>->>->> <a href="https://urlin.us/2uSZEI">https://urlin.us/2uSZEI</a></b></p><br /><br />
7
- <ol>
8
- <li>Go to [this link] or [this link] on your Android device's browser.</li>
9
- <li>Tap on the download button and wait for the APK file to be downloaded.</li>
10
- <li>Once the download is complete, tap on the file and allow the installation from unknown sources if prompted.</li>
11
- <li>Follow the instructions on the screen and wait for the installation to finish.</li>
12
- <li>Launch the game from your app drawer or home screen and enjoy!</li>
13
- </ol>
14
- <h2>Features of Carros Rebaixados Online APK</h2>
15
- <p>Carros Rebaixados Online APK is a game that offers a lot of features for car enthusiasts. Here are some of them:</p>
16
- <h3>Detailed car models and customization options</h3>
17
- <p>The game features several models of cars that are completely detailed and realistic. You can customize your car in various ways, such as changing its color, wheels, glass, xenon, neon, speakers, LED, etc. You can also choose the size of the car wheel rim and turn up the bass of the song. You can make your car unique and express your personality through it.</p>
18
- <h3>First or third person perspective and 360 degrees car interiors</h3>
19
- <p>The game allows you to drive your car from either a first or a third person perspective. You can switch between them anytime you want. You can also see the car interiors in 360 degrees, which adds to the realism and immersion of the game. You can see every detail of your car's dashboard, seats, steering wheel, etc.</p>
20
- <h3>Interactive elements and realistic physics</h3>
21
- <p>The game also features many interactive elements in cars, such as opening car doors, hood, trunk, and windows, turning on the car, turning on the lights, etc. The game also has realistic physics that make the car behave according to its weight, speed, suspension, etc. You can feel the difference between driving on asphalt, dirt, or grass.</p>
22
- <h3>Day and night mode and camera filters</h3>
23
- <p>The game also has a day and night mode that changes the lighting and atmosphere of the game. You can drive your car in different times of the day and see how it affects the visibility and mood of the game. You can also use different camera filters to change the color and contrast of the game. You can choose from sepia, black and white, vintage, etc.</p>
24
- <h3>Music and sound effects</h3>
25
- <p>The game also has a great soundtrack that features various genres of music, such as rap, funk, pop, rock, etc. You can listen to your favorite songs while driving your car and enjoy the rhythm and vibe of the game. You can also hear realistic sound effects of your car's engine, brakes, horn, etc. The game also supports Bluetooth speakers and headphones for a better audio experience.</p>
26
- <h3>Multiple wheels, neon, speakers, and LED</h3>
27
- <p>The game also offers multiple options for wheels, neon, speakers, and LED for your car. You can choose from different types and colors of wheels that suit your car's style and performance. You can also add neon lights to your car's body and wheels to make it glow in the dark. You can also install speakers and LED in your car's trunk to create a party atmosphere.</p>
28
- <h3>Steering wheel, accelerometer, or arrows control</h3>
29
- <p>The game also gives you three options for controlling your car: steering wheel, accelerometer, or arrows. You can choose the one that you prefer and that is more comfortable for you. You can also adjust the sensitivity and position of the controls according to your preference.</p>
30
- <p>carros rebaixados online apk download<br />
31
- carros rebaixados online apk mod<br />
32
- carros rebaixados online apk atualizado<br />
33
- carros rebaixados online apk hack<br />
34
- carros rebaixados online apk dinheiro infinito<br />
35
- carros rebaixados online apk android<br />
36
- carros rebaixados online apk 2023<br />
37
- carros rebaixados online apk para pc<br />
38
- carros rebaixados online apk mediafıre<br />
39
- carros rebaixados online apk uptodown<br />
40
- carros rebaixados online apk versão antiga<br />
41
- carros rebaixados online apk obb<br />
42
- carros rebaixados online apk revdl<br />
43
- carros rebaixados online apk unlimited money<br />
44
- carros rebaixados online apk offline<br />
45
- carros rebaixados online apk free<br />
46
- carros rebaixados online apk latest version<br />
47
- carros rebaixados online apk mega mod<br />
48
- carros rebaixados online apk tudo liberado<br />
49
- carros rebaixados online apk com som automotivo<br />
50
- carros rebaixados online apk sem internet<br />
51
- carros rebaixados online apk com neon<br />
52
- carros rebaixados online apk com casas<br />
53
- carros rebaixados online apk com motos<br />
54
- carros rebaixados online apk com graficos realistas<br />
55
- carros rebaixados online apk com suspensão a ar<br />
56
- carros rebaixados online apk com multiplayer<br />
57
- carros rebaixados online apk com mapas brasileiros<br />
58
- carros rebaixados online apk com novos veiculos<br />
59
- carros rebaixados online apk com rodas originais<br />
60
- carros rebaixados online apk com musicas brasileiras<br />
61
- carros rebaixados online apk com customização completa<br />
62
- carros rebaixados online apk com fisica realista<br />
63
- carros rebaixados online apk com chat de voz<br />
64
- carros rebaixados online apk com camera 360 graus<br />
65
- carros rebaixados online apk com controle de volante<br />
66
- carros rebaixados online apk com modo dia e noite<br />
67
- carros rebaixados online apk com filtros para a camera<br />
68
- carros rebaixados online apk com xenon colorido<br />
69
- carros rebaixados online apk com interior detalhado dos veiculos</p>
70
- <h3>Online mode with friends and other players</h3>
71
- <p>The game also has an online mode that allows you to play with your friends and other players from around the world. You can join or create rooms with up to 10 players and chat with them using text or voice messages. You can also challenge them to races or show off your car's modifications. You can also see their cars' details and stats.</p>
72
- <h2>Gameplay of Carros Rebaixados Online APK</h2>
73
- <p>Carros Rebaixados Online APK is a game that is easy to play but hard to master. Here are some tips on how to play the game:</p>
74
- <h3>How to start and play the game?</h3>
75
- <p>To start the game, you need to choose a car model from the garage. You can see the details and stats of each car before choosing it. You can also modify your car in the garage by tapping on the wrench icon. Once you are ready, you can tap on the play button to enter the game world. You can choose from different scenarios, such as city, beach, farm, etc. You can also choose whether you want to play offline or online.</p>
76
- <h3>How to modify and show off your car?</h3>
77
- <p>To modify your car, you need to tap on the wrench icon in the garage or in the game world. You can then access various options for customization, such as color, wheels, glass, xenon, neon, speakers, LED, etc. You can also adjust the size of the wheel rim and the bass of the song. You can see the changes in real time and preview them before applying them. To show off your car, you can drive it around in the game world and interact with other cars and objects. You can also use the camera icon to take screenshots or videos of your car and share them with your friends or on social media.</p>
78
- <h3>How to interact with other cars and objects?</h3>
79
- <p>To interact with other cars and objects, you need to tap on the hand icon in the game world. You can then access various options for interaction, such as opening car doors, hood, trunk, and windows, turning on the car, turning on the lights, honking the horn, etc. You can also use the chat icon to communicate with other players using text or voice messages. You can also use the emoji icon to express your emotions or reactions.</p>
80
- <h3>How to switch between modes and perspectives?</h3>
81
- <p>To switch between modes and perspectives, you need to tap on the gear icon in the game world. You can then access various options for settings, such as day and night mode, camera filters, sound and music volume, language, etc. You can also switch between first or third person perspective by tapping on the eye icon. You can also switch between steering wheel, accelerometer, or arrows control by tapping on the controller icon.</p>
82
- <h2>Review of Carros Rebaixados Online APK</h2>
83
- <p>Carros Rebaixados Online APK is a game that has received a lot of positive feedback from its users. Here are some of its pros and cons, ratings and reviews, and comparison with other similar games:</p>
84
- <h3>Pros and cons of Carros Rebaixados Online APK</h3>
85
- <p>The game has many pros, such as:</p>
86
- <ul>
87
- <li>It has realistic and detailed graphics and physics.</li>
88
- <li>It has a lot of customization options for cars.</li>
89
- <li>It has an online mode with chat and voice messages.</li>
90
- <li>It has a great soundtrack and sound effects.</li>
91
- <li>It has a simple and intuitive interface and controls.</li>
92
- </ul>
93
- <p>The game also has some cons, such as:</p>
94
- <ul>
95
- <li>It may have some bugs and glitches.</li>
96
- <li>It may consume a lot of battery and data.</li>
97
- <li>It may have some ads and in-app purchases.</li>
98
- <li>It may not be compatible with some devices or regions.</li>
99
- <li>It may not have a lot of variety in scenarios or cars.</li>
100
- </ul>
101
- <h3>Ratings and reviews of Carros Rebaixados Online APK on Google Play Store</h3>
102
- <p>The game has a rating of 4.4 out of 5 stars on Google Play Store based on more than 100 thousand reviews. Here are some of the reviews from the users:</p>
103
- <table style="border: 1px solid black;">
104
- <tr style="border: 1px solid black;">
105
- <th style="border: 1px solid black;">User</th>
106
- <th style="border: 1px solid black;">Rating</th>
107
- <th style="border: 1px solid black;">Review</th>
108
- </tr>
109
- <tr style="border: 1px solid black;">
110
- <td style="border: 1px solid black;">Lucas Santos</td>
111
- <td style="border: 1px solid black;">5 stars</td>
112
- <td style="border: 1px solid black;">"This game is very good, I recommend it to everyone who likes cars and simulation games. The graphics are amazing, the cars are very realistic, and the online mode is very fun. I love this game!"</td>
113
- </tr>
114
- <tr style="border: 1px solid black;">
115
- <td style="border: 1px solid black;">Maria Silva</td>
116
- <td style="border: 1px solid black;">4 stars</td>
117
- <td style="border: 1px solid black;">"I like this game a lot, it is very entertaining and addictive. The only thing I don't like is that it has too many ads and it consumes a lot of battery. But other than that, it is a great game."</td>
118
- </tr>
119
- <tr style="border: 1px solid black;">
120
- <td style="border: 1px solid black;">Pedro Oliveira</td>
121
- <td style="border: 1px solid black;">3 stars</td>
122
- <td style="border: 1px solid black;">"The game is good, but it could be better. It needs more scenarios, more cars, more customization options, more interaction options, etc. It also has some bugs and glitches that need to be fixed."</td>
123
- </tr>
124
- </table>
- <h3>How to get more resources and items in the game?</h3>
125
- <p>To get more resources and items in the game, you can do the following:</p>
126
- <ul>
127
- <li>Complete missions and races to earn money and rewards.</li>
128
- <li>Watch ads and videos to get free coins and gems.</li>
129
- <li>Use promo codes and coupons to get discounts and bonuses.</li>
130
- <li>Join events and contests to win prizes and gifts.</li>
131
- <li>Invite your friends and share the game to get referrals and rewards.</li>
132
- </ul>
133
- <h2>Conclusion</h2>
134
- <p>Carros Rebaixados Online APK is a fun and customizable simulation game that lets you drive and modify your car in different scenarios. You can also play online with your friends and other players, chat with them, and compete with them. The game has realistic and detailed graphics and physics, a lot of customization options for cars, an online mode with chat and voice messages, a great soundtrack and sound effects, a simple and intuitive interface and controls, and more. The game also has some drawbacks, such as bugs and glitches, battery and data consumption, ads and in-app purchases, compatibility issues, and lack of variety. However, these can be overcome by following some tips and tricks that we have provided in this article. If you are looking for a car simulation game that is unique and immersive, you should give Carros Rebaixados Online APK a try. You can download it from [this link] or [this link] and enjoy!</p>
135
- <h2>FAQs</h2>
136
- <p>Here are some frequently asked questions about Carros Rebaixados Online APK:</p>
137
- <h3>Q: Is Carros Rebaixados Online APK safe to download and install?</h3>
138
- <p>A: Yes, Carros Rebaixados Online APK is safe to download and install. It does not contain any viruses or malware that can harm your device or data. However, you should always download it from a trusted source or link, such as [this link] or [this link], to avoid any potential risks.</p>
139
- <h3>Q: Is Carros Rebaixados Online APK free to play?</h3>
140
- <p>A: Yes, Carros Rebaixados Online APK is free to play. You can download it from [this link] or [this link] without paying anything. However, the game also has some ads and in-app purchases that can enhance your experience or unlock more features. You can choose to watch the ads or buy the in-app purchases if you want, but they are not mandatory or necessary.</p>
141
- <h3>Q: How can I play Carros Rebaixados Online APK on my PC or laptop?</h3>
142
- <p>A: Carros Rebaixados Online APK is designed for Android devices only. However, you can also play it on your PC or laptop by using an Android emulator. An Android emulator is a software that allows you to run Android apps on your PC or laptop. Some of the popular Android emulators are BlueStacks, NoxPlayer, MEmu, etc. You can download any of them from their official websites and follow their instructions to install them on your PC or laptop. Then, you can download Carros Rebaixados Online APK from [this link] or [this link] on your PC or laptop's browser and open it with the emulator. You can then play the game on your PC or laptop as if you were playing it on your Android device.</p>
143
- <h3>Q: How can I update Carros Rebaixados Online APK to the latest version?</h3>
144
- <p>A: To update Carros Rebaixados Online APK to the latest version, you can do the following:</p>
145
- <ul>
146
- <li>If you downloaded the game from Google Play Store, you can check for updates on the app's page on the store. If there is an update available, you can tap on the update button and wait for the update to finish.</li>
147
- <li>If you downloaded the game from [this link] or [this link], you can check for updates on these links' pages. If there is an update available, you can tap on the download button and wait for the new APK file to be downloaded. Then, you can uninstall the old version of the game from your device and install the new version using the same steps as before.</li>
148
- </ul>
149
- <h3>Q: How can I contact the developer of Carros Rebaixados Online APK?</h3>
150
- <p>A: If you have any questions, feedback, suggestions, or issues regarding Carros Rebaixados Online APK, you can contact the developer of the game by using one of these methods:</p> <ul>
151
- <li>Email: [email protected]</li>
152
- <li>Facebook: https://www.facebook.com/sebbygames</li>
153
- <li>Instagram: https://www.instagram.com/sebbygames</li>
154
- <li>YouTube: https://www.youtube.com/channel/UCvCsjKptd1gM9BhfDkGQGNg</li>
155
- </ul>
156
- <p>I hope you found this article helpful and informative. If you did, please share it with your friends and family who might be interested in Carros Rebaixados Online APK. Also, don't forget to leave a comment below and let us know what you think about the game. Thank you for reading and have a great day!</p>
spaces/1phancelerku/anime-remove-background/509-e - Saudades Mil (A Carta) 1999 Letra e Download Grtis.md DELETED
@@ -1,156 +0,0 @@
1
- <br />
2
- <h1>Lyric 509-E - Saudades Mil - A Carta 1999 (Letra+Download) Unknown</h1>
3
- <p>If you are a fan of Brazilian rap music, you might have heard of the song Saudades Mil by 509-E. This song is a classic example of how rap can tell powerful stories and convey deep emotions. In this article, we will explore what this song is about, who are the artists behind it, and how you can download it for free.</p>
4
- <h2>lyric 509-e - saudades mil - a carta 1999 (letra+download) unknown</h2><br /><p><b><b>Download File</b> &#10003; <a href="https://jinyurl.com/2uNU8K">https://jinyurl.com/2uNU8K</a></b></p><br /><br />
5
- <h2>What is the song Saudades Mil about?</h2>
6
- <p>Saudades Mil is a Portuguese expression that means "a thousand sorrows" or "a thousand longings". It is often used to express nostalgia, sadness, or missing someone or something. The song Saudades Mil by 509-E is a letter from a prisoner to his friend, who is also in jail. The prisoner tells his friend about his life, his memories, his regrets, and his hopes. He also expresses his sorrow for losing his wife, his friend's husband, and another inmate. He ends the letter by saying that he will see his friend soon, when they both get out of prison.</p>
7
- <h3>The story behind the song</h3>
8
- <p>The song Saudades Mil was released in 1999 as part of the album Provérbios 13 by 509-E. The group name stands for "5th floor, cell number 9, east wing", which was where the two members of the group, Dexter and Afro-X, were incarcerated in Carandiru Penitentiary in São Paulo. They started making rap music in prison as a way to cope with their situation and to denounce the injustices and violence they faced. They recorded their songs using a cassette recorder and smuggled them out of prison with the help of other inmates and visitors. Their songs became popular in the underground rap scene and eventually reached mainstream audiences.</p>
9
- <h3>The meaning of the lyrics</h3>
10
- <p>The lyrics of Saudades Mil are written in a mix of Portuguese and slang, which reflects the culture and reality of the Brazilian urban poor. The lyrics are full of references to places, people, events, and expressions that are familiar to those who live in the favelas (slums) or in prison. Some examples are:</p>
11
- <ul>
12
- <li>Diadema: A city in the metropolitan area of São Paulo, where Dexter was born and raised.</li>
13
- <li>Laisla: Dexter's daughter, who was born when he was already in prison.</li>
14
- <li>Amarildo: Afro-X's brother-in-law, who was killed by rival gang members.</li>
15
- <li>Jorge: Jorge Ben Jor, a famous Brazilian singer-songwriter, who wrote a song called Charles Anjo 45, about a criminal who escapes from prison.</li>
16
- <li>Charles: A reference to Charles Anjo 45, as well as to Charles Bronson, an American actor who starred in movies about vigilantes and outlaws.</li>
17
- </ul>
18
- <p>The lyrics also convey a range of emotions, such as anger, sadness, frustration, hope, love, and gratitude. The prisoner expresses his anger at the system that put him in jail, his sadness for losing his loved ones, his frustration for wasting his life, his hope for getting out of prison and starting over, his love for his daughter and his friends, and his gratitude for receiving a letter from his friend.</p>
19
- <h3>The impact of the song</h3>
20
- <p>The song Saudades Mil had a huge impact on the Brazilian rap scene and society. It was one of the first songs to expose the harsh reality of life in prison and the social problems that lead to crime and violence. It also showed the potential of rap as a form of artistic expression and social criticism. The song inspired many other rap artists to tell their stories and to use rap as a tool for education and empowerment. The song also raised awareness and sympathy among the public and the authorities for the situation of prisoners and their families. The song was praised by critics and fans alike for its authenticity, creativity, and emotion.</p>
21
- <h2>Who are 509-E and what is their style?</h2>
22
- <p>509-E is a Brazilian rap group formed by Dexter and Afro-X in 1998, while they were serving time in Carandiru Penitentiary. They are considered one of the pioneers and most influential groups of Brazilian rap music.</p>
23
- <p>509-e saudades mil letra e download<br />
24
- saudades mil a carta 1999 rap nacional<br />
25
- letra da música saudades mil de 509-e<br />
26
- download mp3 509-e saudades mil a carta<br />
27
- 509-e provérbios 13 saudades mil letra<br />
28
- saudades mil 509-e youtube video<br />
29
- a carta 1999 509-e letra e música<br />
30
- como baixar saudades mil de 509-e<br />
31
- saudades mil a carta dexter e afro-x<br />
32
- letra de saudades mil 509-e com tradução<br />
33
- 509-e saudades mil a carta instrumental<br />
34
- saudades mil a carta 1999 história real<br />
35
- significado da música saudades mil de 509-e<br />
36
- download grátis 509-e saudades mil a carta<br />
37
- 509-e saudades mil a carta remix<br />
38
- saudades mil a carta 1999 cifra<br />
39
- letra completa de saudades mil de 509-e<br />
40
- download zip 509-e saudades mil a carta<br />
41
- 509-e saudades mil a carta karaoke<br />
42
- saudades mil a carta 1999 análise<br />
43
- ouvir online 509-e saudades mil a carta<br />
44
- saudades mil a carta 1999 spotify<br />
45
- letra original de saudades mil de 509-e<br />
46
- download flac 509-e saudades mil a carta<br />
47
- 509-e saudades mil a carta acapella<br />
48
- saudades mil a carta 1999 clipe oficial<br />
49
- letra em inglês de saudades mil de 509-e<br />
50
- download wav 509-e saudades mil a carta<br />
51
- 509-e saudades mil a carta cover<br />
52
- saudades mil a carta 1999 letras.mus.br [^1^]<br />
53
- assistir online 509-e saudades mil a carta<br />
54
- saudades mil a carta 1999 deezer<br />
55
- letra em espanhol de saudades mil de 509-e<br />
56
- download m4a 509-e saudades mil a carta<br />
57
- 509-e saudades mil a carta live<br />
58
- saudades mil a carta 1999 apple music<br />
59
- letra em francês de saudades mil de 509-e<br />
60
- download ogg 509-e saudades mil a carta<br />
61
- 509-e saudades mil a carta piano tutorial<br />
62
- saudades mil a carta 1999 soundcloud<br />
63
- letra em italiano de saudades mil de 509-e<br />
64
- download wma 509-e saudades mil a carta<br />
65
- 509-e saudades mil a carta guitar tabs<br />
66
- saudades mil a carta 1999 shazam<br />
67
- letra em português de saudades mil de 509-e</p>
68
- <h3>The origin and history of 509-E</h3>
69
- <p>Dexter and Afro-X were both born and raised in poor neighborhoods of São Paulo, where they were exposed to crime, violence, drugs, and racism. They both started rapping at a young age, influenced by American rap artists such as Public Enemy, N.W.A., and Tupac Shakur. They also joined gangs and got involved in criminal activities, which led them to prison. Dexter was arrested for robbery and Afro-X for drug trafficking. They met in prison and decided to form a rap group, using their cell number as their name. They wrote songs about their experiences, their opinions, their dreams, and their struggles. They recorded their songs using a cassette recorder and smuggled them out of prison with the help of other inmates and visitors. They released their first album, Provérbios 13, in 1999, which included the song Saudades Mil. The album was a success and earned them recognition and respect in the rap scene. They continued to make music while in prison, releasing two more albums: MMII DC (2002) (2002 AD) and É Nóis Que Tá (2006) (It's Us Who Are Here). They also participated in several rap festivals and events, such as Hutúz Rap Festival, Rap é Compromisso (Rap is Commitment), and Hip Hop Manifesto. They were released from prison in 2007 and 2008, respectively, after serving their sentences. They resumed their musical careers, both as solo artists and as a group. They also engaged in social projects and initiatives, such as Rap na Escola (Rap in School), Rap na Quebrada (Rap in the Hood), Rap na Febem (Rap in the Juvenile Detention Center), Rap na Cadeia (Rap in Prison), Rap na Rua (Rap on the Street), Rap na Igreja (Rap in Church), Rap na Paz (Rap for Peace), Rap na Vida (Rap for Life), Rap na Luta (Rap for Struggle), Rap na Arte (Rap for Art), Rap na Cultura (Rap for Culture), Rap na História (Rap for History), Rap na Educação (Rap for Education), Rap na Consciência (Rap for Consciousness), Rap na Liberdade (Rap for Freedom), Rap na Esperança (Rap for Hope), Rap na Fé (Rap for Faith), Rap na União (Rap for Unity), Rap na Diversidade (Rap for Diversity), Rap na Resistência (Rap for Resistance), Rap na Transformação (Rap for Transformation), Rap na Revolução (Rap for Revolution), Rap no Amor (Rap for Love), Rap no Respeito (Rap for Respect), Rap no Perdão (Rap for Forgiveness), Rap no Reconhecimento (Rap for Recognition), Rap no Sucesso (Rap for Success), Rap no Futuro (Rap for Future).</p>
70
- <h3>The influences and inspirations of 509-E</h3>
71
- <p>509-E is influenced by various musical genres, such as funk, soul, reggae, rock, samba, bossa nova, MPB (Musica Popular Brasileira), and gospel. They are also inspired by various rap artists, such as Racionais MC's, Sabotage, Facção Central, MV Bill, GOG, RZO, Thaíde e DJ Hum, SNJ, Rappin' Hood, Emicida, Criolo, Projota, Rashid, and many others. They also draw inspiration from other sources, such as literature, cinema, philosophy, religion, politics, history, and culture. Some of their references are Machado de Assis, Paulo Freire, Malcolm X, Martin Luther King Jr., Nelson Mandela, Che Guevara, Bob Marley, Jesus Christ, Buddha, Gandhi, Zumbi dos Palmares, Dandara dos Palmares, Chico Mendes, Carlos Marighella, Carlos Drummond de Andrade, Clarice Lispector, Fernando Pessoa, Luís de Camões, Jorge Amado, Gabriel García Márquez, Pablo Neruda, Mario Vargas Llosa, Gabriel O Pensador, Cidade de Deus (City of God), Tropa de Elite (Elite Squad), Carandiru (Carandiru), Pixote (Pixote), O Auto da Compadecida (A Dog's Will), O Pagador de Promessas (The Given Word), O Quatrilho (The Quatrilho), Central do Brasil (Central Station), O Som ao Redor (Neighboring Sounds), Bacurau (Bacurau), Aquarius (Aquarius), Sócrates (Socrates), Platão (Plato), Aristóteles (Aristotle), Descartes (Descartes), Kant (Kant), Hegel (Hegel), Marx (Marx), Nietzsche (Nietzsche), Sartre (Sartre), Foucault (Foucault), Derrida (Derrida), Deleuze (Deleuze), Baudrillard (Baudrillard), Bauman (Bauman), Freire (Freire), Gramsci (Gramsci), Fanon (Fanon), Said (Said), Spivak (Spivak), Bhabha (Bhabha), Hall (Hall), Butler (Butler), hooks (hooks), Lorde (Lorde), Davis (Davis), Anzaldúa (Anzaldúa), Moraga (Moraga), Crenshaw (Crenshaw), and many others.</p>
72
- <h3>The themes and messages of 509-E</h3>
73
- <p>509-E is known for addressing various themes and messages in their songs, such as social injustice, racism, violence, poverty, prison, drugs, corruption, education, culture, identity, spirituality, hope, love, friendship, family, and freedom. They use rap as a way to express their feelings, opinions, experiences, and visions. They also use rap as a way to educate, inform, inspire, and empower their listeners. They aim to raise awareness and consciousness about the problems and challenges that affect their communities and society. They also aim to promote positive values and attitudes, such as respect, solidarity, dignity, courage, resilience, creativity, and peace. They believe that rap can be a force for change and transformation.</p>
74
- <h2>How to download the song Saudades Mil for free?</h2>
75
- <p>If you want to download the song Saudades Mil by 509-E for free, you need to be aware of some legal and ethical issues. You also need to know the best sites and apps to download music. And you need to follow some simple steps to download the song.</p>
76
- <h3>The legal and ethical issues of downloading music</h3>
77
- <p>Downloading music for free can be considered illegal and unethical in some cases. This is because it can violate the intellectual property rights of the artists and the music industry. Intellectual property rights are the legal rights that protect the creations and inventions of individuals and organizations. They include copyrights, trademarks, patents, and trade secrets. By downloading music for free, you can be infringing on these rights and causing harm to the creators and owners of the music. You can also be exposing yourself to legal risks and penalties.</p>
78
- <p>However, downloading music for free can also be considered legal and ethical in some cases. This is because it can fall under the exceptions and limitations of intellectual property rights. These are the situations where the use of protected works is allowed without permission or payment. They include fair use, fair dealing, public domain, creative commons, and copyleft. By downloading music for free under these situations, you can be respecting the rights of the artists and the music industry. You can also be supporting the culture and the public interest.</p>
79
- <p>Therefore, before downloading music for free, you should check the legal status and ethical implications of your actions. You should also respect the wishes and interests of the artists and the music industry. You should also acknowledge and credit the sources of the music you download.</p>
80
- <h3>The best sites and apps to download music</h3>
81
- <p>There are many sites and apps that allow you to download music for free. However, not all of them are safe, reliable, or legal, and some of them may contain viruses or malware. Three sites that are safe and legal to download music from are Bandcamp, Jamendo Music, and Internet Archive. These sites offer free music downloads under Creative Commons licenses or in the public domain. They also have a variety of genres, artists, and songs to choose from. Here is a brief description of each site and how to download Saudades Mil from them.</p>
82
- <h3>Bandcamp</h3>
83
- <p>Bandcamp is a site that allows artists to upload their music and set their own prices. You can browse by genre, tag, location, or popularity. You can also stream music online or download it as MP3, FLAC, ALAC, AAC, Ogg Vorbis, WAV, or AIFF files. To download Saudades Mil from Bandcamp, you need to follow these steps:</p>
84
- <ol>
85
- <li>Go to the Bandcamp homepage and type "Saudades Mil" in the search box.</li>
86
- <li>Select the song by 509-E from the results.</li>
87
- <li>Click on the "Buy Digital Track" button.</li>
88
- <li>Enter "0" in the name your price field and click on "Download to your computer".</li>
89
- <li>Choose your preferred format and click on "Download".</li>
90
- <li>Save the file to your device and enjoy.</li>
91
- </ol>
92
- <h3>Jamendo Music</h3>
93
- <p>Jamendo Music is a site that offers free music downloads under Creative Commons licenses. You can discover new music by browsing through curated playlists, genres, moods, or trending songs. You can also stream music online or download it as MP3 files. To download Saudades Mil from Jamendo Music, you need to follow these steps:</p>
94
- <ol>
95
- <li>Go to the Jamendo Music homepage and type "Saudades Mil" in the search box.</li>
96
- <li>Select the song by 509-E from the results.</li>
97
- <li>Click on the "Download" button below the song title.</li>
98
- <li>Create a free account or log in with your existing account.</li>
99
- <li>Choose your preferred quality and click on "Download".</li>
100
- <li>Save the file to your device and enjoy.</li>
101
- </ol>
102
- <h3>Internet Archive</h3>
103
- <p>Internet Archive is a site that offers free access to millions of digital files, including music, audio, podcasts, radio programs, and more. You can search by keyword, collection, creator, date, language, or media type. You can also stream music online or download it as MP3, OGG Vorbis, FLAC, or other formats. To download Saudades Mil from Internet Archive, you need to follow these steps:</p>
104
- <ol>
105
- <li>Go to the Internet Archive homepage and type "Saudades Mil" in the search box.</li>
106
- <li>Select the song by 509-E from the results.</li>
107
- <li>Click on the "VBR MP3" link under the Download Options section.</li>
108
- <li>Save the file to your device and enjoy.</li>
109
- </ol>
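- <p>If you are comfortable with a little scripting, you can also save the file directly once you have copied the "VBR MP3" link from step 3. The following Python sketch uses only the standard library; the URL and file name are placeholders, so replace them with the actual link you copied and the name you want.</p>
- <pre><code>
- # Minimal sketch: save an MP3 from a direct download link (placeholder URL).
- import urllib.request
- 
- url = "https://example.org/path/to/saudades-mil-vbr.mp3"  # replace with the copied link
- urllib.request.urlretrieve(url, "saudades-mil.mp3")
- print("Saved saudades-mil.mp3")
- </code></pre>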
110
- <h2>Conclusion</h2>
111
- <p>In this article, we have learned about the song Saudades Mil by 509-E, one of the most influential rap groups in Brazil. We have explored what this song is about, who are the artists behind it, and how you can download it for free. We have also learned about some legal and ethical issues of downloading music, as well as some of the best sites and apps to do so. We hope you have enjoyed this article and found it useful. If you want to learn more about Brazilian rap music or 509-E, you can check out these links:</p>
112
- <ul>
113
- <li>[The History of Brazilian Rap Music]</li>
114
- <li>[509-E Official Website]</li>
115
- <li>[509-E YouTube Channel]</li>
116
- </ul>
117
- <p>Thank you for reading this article. If you liked it, please share it with your friends and leave a comment below. We would love to hear your feedback and suggestions. And don't forget to check out our other articles on rap music and culture.</p>
118
- <h3>Frequently Asked Questions</h3>
119
- <p>Here are some of the most common questions that people ask about Saudades Mil and 509-E:</p>
120
- <ol>
121
- <li><b>What does 509-E mean?</b></li>
122
- <p>509-E is the name of a Brazilian rap group formed by Dexter and Afro-X in 1998. The name stands for "5th floor, cell number 9, east wing", which was where they were incarcerated in Carandiru Penitentiary in São Paulo.</p>
123
- <li><b>What does Saudades Mil mean?</b></li>
124
- <p>Saudades Mil is a Portuguese expression that means "a thousand longings" or "a thousand sorrows". It is often used to express nostalgia, sadness, or missing someone or something. The song Saudades Mil by 509-E is a letter from a prisoner to his friend, who is also in jail.</p>
125
- <li><b>How can I listen to Saudades Mil online?</b></li>
126
- <p>You can listen to Saudades Mil online by streaming it on various platforms, such as YouTube, Spotify, Apple Music, Deezer, or SoundCloud. You can also watch the official video of the song on YouTube.</p>
127
- <li><b>Is Saudades Mil based on a true story?</b></li>
128
- <p>Yes, Saudades Mil is based on a true story. The song is a letter from Dexter to Afro-X, who were both imprisoned in Carandiru Penitentiary in São Paulo. The song tells the story of their lives, their memories, their regrets, and their hopes. The song also mentions real people and events that happened to them or around them.</p>
129
- <li><b>What are some other songs by 509-E that I should listen to?</b></li>
130
- <p>Some other songs by 509-E that you should listen to are:</p>
131
- <ul>
132
- <li>Oitavo Anjo (Eighth Angel)</li>
133
- <li>Milagre (Miracle)</li>
134
- <li>Só Os Fortes (Only The Strong)</li>
135
- <li>Depois da Meia Noite (After Midnight)</li>
136
- <li>Saudosa Maloca (Nostalgic Shack)</li>
137
- </ul>
138
- <li><b>What are some other Brazilian rap artists that I should listen to?</b></li>
139
- <p>Some other Brazilian rap artists that you should listen to are:</p>
140
- <ul>
141
- <li>Racionais MC's</li>
142
- <li>Sabotage</li>
143
- <li>Facção Central</li>
144
- <li>MV Bill</li>
145
- <li>GOG</li>
146
- <li>RZO</li>
147
- <li>Thaíde e DJ Hum</li>
148
- <li>SNJ</li>
149
- <li>Rappin' Hood</li>
150
- <li>Emicida</li>
151
- <li>Criolo</li>
152
- <li>Projota</li>
153
- <li>Rashid</li>
154
- </ul>
- </ol>
 
spaces/1phancelerku/anime-remove-background/Ada Ehi - The Final Say Download Mp3 and Lyrics.md DELETED
@@ -1,134 +0,0 @@
1
- <br />
2
- <h1>Download The Final Say by Ada Mp3</h1>
3
- <p>If you are looking for a powerful and uplifting gospel song to inspire your faith and remind you of God's love, then you should download The Final Say by Ada mp3. This song is one of the tracks from ADA's EP (Vol.1), a collection of five amazing songs by the Nigerian gospel singer and songwriter Ada Ehi. In this article, we will tell you what this song is about, why you should download it, and how to do it easily and safely.</p>
4
- <h2>download the final say by ada mp3</h2><br /><p><b><b>Download File</b> &rarr;&rarr;&rarr; <a href="https://jinyurl.com/2uNT9M">https://jinyurl.com/2uNT9M</a></b></p><br /><br />
5
- <h2>What is The Final Say by Ada?</h2>
6
- <p>The Final Say by Ada is a gospel song that celebrates the sovereignty and supremacy of Jesus Christ over every situation. It declares that Jesus has the final say in everything, and that nothing can stop His plans and purposes for His children. It also expresses gratitude and praise to God for His love, grace, and power.</p>
7
- <h3>The message of the song</h3>
8
- <p>The message of the song is based on the biblical truth that God is in control of everything, and that He works all things together for good for those who love Him and are called according to His purpose (Romans 8:28). It encourages believers to trust in God's promises and His faithfulness, and to not be afraid or discouraged by the challenges and trials they may face in life. It also reminds them that they are more than conquerors through Christ who loves them (Romans 8:37), and that they have victory over sin, death, and the devil through His blood and resurrection.</p>
9
- <h3>The lyrics of the song</h3>
10
- <p>The lyrics of the song are simple yet profound, using repetition and rhyme to create a catchy and memorable tune. Here are some of the lines from the chorus:</p>
11
- <pre><code>
12
- Jesus, You have the final say
- Jesus, You have the final say
- You have the final say
- No matter what may come my way
- You have the final say
- </code></pre>
13
- <p>You can find the full lyrics of the song on [Genius] or [GospelJingle].</p>
14
- <h2>Why you should download The Final Say by Ada mp3</h2>
15
- <p>There are many reasons why you should download The Final Say by Ada mp3, but here are some of the most important ones:</p>
16
- <h3>The benefits of listening to gospel music</h3>
17
- <p>Gospel music is not just entertainment, but also a form of worship and ministry. Listening to gospel music can help you to:</p>
58
- <ul>
59
- <li>Strengthen your faith and relationship with God</li>
60
- <li>Receive comfort, peace, joy, and hope from His presence</li>
61
- <li>Learn more about His word and His character</li>
62
- <li>Be inspired to live a godly and fruitful life</li>
63
- <li>Share the gospel with others through music</li>
64
- </ul>
65
- <h3>The quality and availability of the mp3 file</h3>
66
- <p>When you download The Final Say by Ada mp3, you will get a high-quality audio file that you can enjoy on any device. You will also be able to access it anytime and anywhere, without needing an internet connection or a streaming service. You can also create your own playlist or mixtape with other songs by Ada or other gospel artists.</p>
67
- <h2>How to download The Final Say by Ada mp3</h2>
68
- <p>Downloading The Final Say by Ada mp3 is very easy and fast, as long as you follow these steps:</p>
69
- <h3>The steps to follow</h3>
70
- <ol>
71
- <li>Go to one of the sources that offer the mp3 file for free or for a small fee. We will recommend some of the best sources in the next section.</li>
72
- <li>Find the song on the website or app, and click on the download button or link. You may need to sign up or log in to some of the sources before you can download.</li>
73
- <li>Choose the format and quality of the mp3 file that you want to download. The higher the quality, the larger the file size. We suggest you choose at least 128 kbps for a good sound quality.</li>
74
- <li>Wait for the download to complete, and then save the file to your device or cloud storage. You can also transfer the file to other devices using a USB cable, Bluetooth, or Wi-Fi.</li>
75
- <li>Enjoy listening to The Final Say by Ada mp3 anytime and anywhere!</li>
76
- </ol>
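- <p>Step 3 above suggests choosing at least 128 kbps. If you want to confirm the bitrate and length of the file you saved, the small Python sketch below can help; it relies on the third-party mutagen library (installed separately with pip), and the file name is just a placeholder for your download.</p>
- <pre><code>
- # Minimal sketch: inspect a downloaded MP3 (file name is a placeholder).
- from mutagen.mp3 import MP3
- 
- audio = MP3("the-final-say.mp3")
- print("Bitrate:", audio.info.bitrate // 1000, "kbps")
- print("Length:", round(audio.info.length), "seconds")
- </code></pre>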
77
- <h3>The best sources to download from</h3>
78
- <p>There are many sources that offer The Final Say by Ada mp3 for download, but not all of them are reliable and safe. Some of them may contain viruses, malware, or spam that can harm your device or compromise your privacy. To avoid these risks, we recommend you to download from these trusted and verified sources:</p>
79
- <table>
80
- <tr>
81
- <th>Source</th>
82
- <th>Link</th>
83
- <th>Price</th>
84
- <th>Features</th>
85
- </tr>
86
- <tr>
87
- <td>iTunes</td>
88
- <td></td>
89
- <td>$0.99</td>
90
- <td>- High-quality mp3 file<br>- Supports Apple devices<br>- Syncs with iCloud<br>- Supports Ada's ministry</td>
91
- </tr>
92
- <tr>
93
- <td>Amazon Music</td>
94
- <td></td>
95
- <td>$0.99</td>
96
- <td>- High-quality mp3 file<br>- Supports various devices<br>- Syncs with Amazon account<br>- Supports Ada's ministry</td>
97
- </tr>
98
- <tr>
99
- <td>GospelJingle</td>
100
- <td></td>
101
- <td>Free</td>
102
- <td>- Medium-quality mp3 file<br>- Supports various devices<br>- Easy and fast download<br>- No sign up required</td>
103
- </tr>
104
- <tr>
105
- <td>NaijaMusic</td>
106
- <td></td>
107
- <td>Free</td>
108
- <td>- Medium-quality mp3 file<br>- Supports various devices<br>- Easy and fast download<br>- No sign up required</td>
109
- </tr>
- </table>
- <h2>Conclusion</h2>
110
- <p>We hope that this article has helped you to learn more about The Final Say by Ada, and how to download it as an mp3 file. This song is a wonderful way to worship God and to declare His lordship over your life. It will also bless you with peace, joy, and hope as you listen to it.</p>
111
- <h4>Summary of the main points</h4>
112
- <p>Here are the main points that we covered in this article:</p>
113
- <ul>
114
- <li>The Final Say by Ada is a gospel song that celebrates the sovereignty and supremacy of Jesus Christ over every situation.</li>
115
- <li>The song has a powerful message, based on the biblical truth that God is in control of everything, and that He works all things together for good for those who love Him.</li>
116
- <li>The song has simple yet profound lyrics, using repetition and rhyme to create a catchy and memorable tune.</li>
117
- <li>Downloading The Final Say by Ada mp3 has many benefits, such as strengthening your faith, receiving comfort and hope, learning more about God, and supporting Ada's ministry.</li>
118
- <li>Downloading The Final Say by Ada mp3 is easy and fast, as long as you follow the steps and use the trusted sources that we recommended.</li>
119
- </ul>
120
- <h4>Call to action</h4>
121
- <p>Now that you know how to download The Final Say by Ada mp3, what are you waiting for? Go ahead and download it today, and enjoy listening to this amazing song. You can also share it with your friends and family, and let them know about the goodness and greatness of God. You will not regret it!</p>
122
- <h2>FAQs</h2>
123
- <h4>Q1: Who is Ada Ehi?</h4>
124
- <p>A1: Ada Ehi is a Nigerian gospel singer, songwriter, recording and performing artist. She started her musical career at the age of 10 as a backup singer for Tosin Jegede. She later joined the Christ Embassy Church and became a member of the LoveWorld music team. She has released several albums and singles, such as Future Now, Born of God, Only You Jesus, I Testify, and many more. She is also a wife and a mother of two children.</p>
125
- <h4>Q2: What is ADA's EP (Vol.1)?</h4>
126
- <p>A2: ADA's EP (Vol.1) is a collection of five songs by Ada Ehi, released in 2019. The songs are The Final Say, Beautiful, See What The Lord Has Done, The Faithful God, and No One Like You. The EP showcases Ada's versatility and creativity as a gospel artist, as well as her passion for God and His people.</p>
127
- <h4>Q3: How can I watch the official video of The Final Say by Ada?</h4>
128
- <p>A3: You can watch the official video of The Final Say by Ada on [YouTube] or [Vimeo]. The video features Ada singing and dancing with joy and confidence, surrounded by colorful backgrounds and props. It also has some scenes of people celebrating God's goodness and faithfulness in their lives.</p>
129
- <h4>Q4: How can I support Ada's ministry?</h4>
130
- <p>A4: You can support Ada's ministry by downloading her songs, watching her videos, following her on social media, subscribing to her newsletter, attending her concerts and events, praying for her and her family, and giving generously to her projects and causes. You can also share her songs and messages with others, and encourage them to do the same.</p>
131
- <h4>Q5: Where can I find more songs by Ada?</h4>
132
- <p>A5: You can find more songs by Ada on her [official website], [Spotify], [Apple Music], [Deezer], [SoundCloud], [Boomplay], [Audiomack], [Napster], [Tidal], or any other music streaming platform. You can also buy her CDs or DVDs from online or offline stores.</p>
 
spaces/1phancelerku/anime-remove-background/Download MusicHQ.net The Ultimate Source for Full HD Movies and TV Series Online.md DELETED
@@ -1,216 +0,0 @@
1
- <br />
2
- <h1>Download MusicHQ.net: A Guide to Watch Full HD Movies Online</h1>
3
- <p>If you are a movie lover, you might have heard of MusicHQ.net, a commercial-free video streaming service that offers full HD movies and TV series online. But did you know that you can also download MusicHQ.net and watch your favorite movies offline? In this article, we will show you what MusicHQ.net is, why you should download it, how to download it, and what are some of the best alternatives to it.</p>
4
- <h2>download musichq.net</h2><br /><p><b><b>Download Zip</b> &#9745; <a href="https://jinyurl.com/2uNPlr">https://jinyurl.com/2uNPlr</a></b></p><br /><br />
5
- <h2>What is MusicHQ.net?</h2>
6
- <p>MusicHQ.net is a website that provides free access to thousands of movies and TV shows in various genres and languages. You can watch them online with full subtitles and 1080p quality, or you can download them to your device and watch them anytime, anywhere. MusicHQ.net was created in 2019 and has gained popularity among movie fans around the world.</p>
7
- <h3>Features of MusicHQ.net</h3>
8
- <p>Some of the features that make MusicHQ.net stand out from other streaming sites are:</p>
9
- <ul>
10
- <li>It has a simple and user-friendly interface that allows you to browse and search for movies easily.</li>
11
- <li>It has a large and diverse collection of movies and TV shows, from classics to new releases, from Hollywood to Bollywood, from action to comedy.</li>
12
- <li>It updates its content regularly and adds new movies and episodes as soon as they are available.</li>
13
- <li>It supports multiple devices, such as computers, smartphones, tablets, smart TVs, etc.</li>
14
- <li>It does not require any registration or subscription to use its service.</li>
15
- <li>It does not show any annoying ads or pop-ups that interrupt your viewing experience.</li>
16
- </ul>
17
- <h3>How to access MusicHQ.net?</h3>
18
- <p>To access MusicHQ.net, you need to have a stable internet connection and a web browser. You can visit the official website of MusicHQ.net at www.musichq.net and start watching or downloading movies for free. However, you should be aware that MusicHQ.net may be blocked or restricted in some countries or regions due to legal issues or copyright infringement. In that case, you may need to use a VPN (virtual private network) service or a proxy server to bypass the geo-restrictions and access MusicHQ.net safely and anonymously.</p>
64
- <h2>Why download MusicHQ.net?</h2>
65
- <p>While watching movies online on MusicHQ.net is convenient and enjoyable, there are some reasons why you may want to download MusicHQ.net instead. Here are some of them:</p>
66
- <h3>Benefits of downloading MusicHQ.net</h3>
67
- <ul>
68
- <li>You can watch movies offline without worrying about internet speed, bandwidth, or data usage.</li>
69
- <li>You can save movies on your device and watch them anytime, anywhere, even when you don't have access to the internet.</li>
70
- <li>You can share movies with your friends and family without any hassle.</li>
71
- <li>You can avoid buffering, lagging, or crashing issues that may occur when streaming movies online.</li>
72
- <li>You can have more control over the quality, format, size, and storage of the movies you download.</li>
73
- </ul>
74
- <h3>Risks of downloading MusicHQ.net</h3>
75
- <ul>
76
- <li>You may encounter malware, viruses, or spyware that may harm your device or compromise your privacy.</li>
77
- <li>You may violate the intellectual property rights of the movie owners or distributors and face legal consequences.</li>
78
- <li>You may consume a lot of storage space on your device and slow down its performance.</li>
79
- <li>You may not be able to download some movies due to technical issues or copyright restrictions.</li>
80
- </ul>
81
- <h2>How to download MusicHQ.net?</h2> <p>If you have decided to download MusicHQ.net, you need to follow some simple steps to do it successfully. Here are the instructions:</p>
82
- <h3>Step-by-step instructions</h3>
83
- <ol>
84
- <li>Go to the official website of MusicHQ.net at www.musichq.net and find the movie or TV show you want to download.</li>
85
- <li>Click on the movie or TV show poster and you will be redirected to a new page with more details and options.</li>
86
- <li>Scroll down and look for the download button below the video player. It may have different labels, such as "Download", "Download HD", "Download Full Movie", etc.</li>
87
- <li>Click on the download button and you will see a pop-up window with different links and formats to choose from. You can select the quality, size, and format of the movie you want to download, such as 1080p, 720p, MP4, MKV, etc.</li>
88
- <li>Click on the link that suits your preference and you will be taken to another page where you can start the download process. You may need to click on another button or link that says "Download Now", "Start Download", "Confirm Download", etc.</li>
89
- <li>Wait for the download to finish and enjoy your movie offline.</li>
90
- </ol>
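- <p>Because movie files are large, a plain browser download can fail halfway through. If you already have a direct link to the file from step 5, the following Python sketch streams it to disk in chunks so an interrupted transfer does not exhaust your memory; it uses the third-party requests library, and the URL and file name are placeholders.</p>
- <pre><code>
- # Minimal sketch: stream a large file to disk in chunks (placeholder URL).
- import requests
- 
- url = "https://example.org/path/to/movie.mp4"  # replace with the real link
- with requests.get(url, stream=True, timeout=30) as resp:
-     resp.raise_for_status()
-     with open("movie.mp4", "wb") as f:
-         for chunk in resp.iter_content(chunk_size=1024 * 1024):
-             f.write(chunk)
- print("Saved movie.mp4")
- </code></pre>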
91
- <h3>Tips and tricks</h3>
92
- <p>Here are some tips and tricks that can help you download MusicHQ.net more easily and safely:</p>
93
- <ul>
94
- <li>Use a VPN service or a proxy server to access MusicHQ.net if it is blocked or restricted in your country or region.</li>
95
- <li>Use an ad-blocker or a pop-up blocker to avoid annoying ads or pop-ups that may interfere with your download.</li>
96
- <li>Use a reliable antivirus or anti-malware software to scan the downloaded files and protect your device from any potential threats.</li>
97
- <li>Use a download manager or a downloader app to speed up the download, resume it if it is interrupted, and manage it more efficiently.</li>
98
- <li>Check the reviews and ratings of the movies or TV shows before downloading them to make sure they are of good quality and match your expectations.</li>
99
- </ul>
100
- <h2>Alternatives to MusicHQ.net</h2>
101
- <p>If you are looking for some other websites or apps that can offer similar or better services than MusicHQ.net, you may want to check out these alternatives:</p>
102
- <h3>List of top 10 alternatives</h3>
103
- <ul>
104
- <li><a href="">Netflix</a>: The most popular and widely used streaming service that offers original and exclusive movies and TV shows, as well as a huge library of licensed content. You can watch online or download offline with a paid subscription.</li>
105
- <li><a href="">Amazon Prime Video</a>: Another popular and widely used streaming service that offers original and exclusive movies and TV shows, as well as a huge library of licensed content. You can watch online or download offline with a paid subscription.</li>
106
- <li><a href="">Hulu</a>: A streaming service that offers original and exclusive movies and TV shows, as well as a huge library of licensed content. You can watch online or download offline with a paid subscription.</li>
107
- <li><a href="">Disney+</a>: A streaming service that offers original and exclusive movies and TV shows from Disney, Pixar, Marvel, Star Wars, National Geographic, and more. You can watch online or download offline with a paid subscription.</li>
108
- <li><a href="">HBO Max</a>: A streaming service that offers original and exclusive movies and TV shows from HBO, Warner Bros., DC, Cartoon Network, Adult Swim, and more. You can watch online or download offline with a paid subscription.</li>
109
- <li><a href="">YouTube</a>: The most popular and widely used video-sharing platform that offers millions of user-generated videos, as well as some original and licensed content. You can watch online for free or download offline with a paid subscription.</li>
110
- <li><a href="">Tubi</a>: A free streaming service that offers thousands of movies and TV shows in various genres and languages. You can watch online for free but you cannot download offline.</li>
111
- <li><a href="">Crackle</a>: A free streaming service that offers thousands of movies and TV shows in various genres and languages. You can watch online for free but you cannot download offline.</li>
112
- <li><a href="">Popcornflix</a>: A free streaming service that offers thousands of movies and TV shows in various genres and languages. You can watch online for free but you cannot download offline.</li>
113
- <li><a href="">Vudu</a>: A streaming service that offers thousands of movies and TV shows in various genres and languages. You can watch online for free or download offline with a paid subscription.</li>
114
- </ul>
115
- <h3>Comparison table</h3>
116
- <table>
117
- <tr>
118
- <th>Name</th>
119
- <th>Price</th>
120
- <th>Content</th>
121
- <th>Quality</th>
122
- <th>Download</th>
123
- </tr>
124
- <tr>
125
- <td>MusicHQ.net</td>
126
- <td>Free</td>
127
- <td>Thousands of movies and TV shows in various genres and languages</td>
128
- <td>Full HD (1080p)</td>
129
- <td>Yes</td>
130
- </tr>
131
- <tr>
132
- <td>Netflix</td>
133
- <td>$8.99-$17.99 per month</td>
134
- <td>Original and exclusive movies and TV shows, as well as a huge library of licensed content</td>
135
- <td>Full HD (1080p) or Ultra HD (4K)</td>
136
- <td>Yes</td>
137
- </tr>
138
- <tr>
139
- <td>Amazon Prime Video</td>
140
- <td>$8.99 per month or $119 per year</td>
141
- <td>Original and exclusive movies and TV shows, as well as a huge library of licensed content</td>
142
- <td>Full HD (1080p) or Ultra HD (4K)</td>
143
- <td>Yes</td>
144
- </tr>
145
- <tr>
146
- <td>Hulu</td>
147
- <td>$5.99-$11.99 per month or $64.99-$70.99 per month with live TV</td>
148
- <td>Original and exclusive movies and TV shows, as well as a huge library of licensed content</td>
149
- <td>Full HD (1080p) or Ultra HD (4K)</td>
150
- <td>Yes</td>
151
- </tr>
152
- <tr>
153
- <td>Disney+</td>
154
- <td>$7.99 per month or $79.99 per year</td>
155
- <td>Original and exclusive movies and TV shows from Disney, Pixar, Marvel, Star Wars, National Geographic, and more</td>
156
- <td>Full HD (1080p) or Ultra HD (4K)</td>
157
- <td>Yes</td>
158
- </tr>
159
- <tr> <td>HBO Max</td>
160
- <td>$9.99-$14.99 per month</td>
161
- <td>Original and exclusive movies and TV shows from HBO, Warner Bros., DC, Cartoon Network, Adult Swim, and more</td>
162
- <td>Full HD (1080p) or Ultra HD (4K)</td>
163
- <td>Yes</td>
164
- </tr>
165
- <tr>
166
- <td>YouTube</td>
167
- <td>Free or $11.99 per month for YouTube Premium</td>
168
- <td>Millions of user-generated videos, as well as some original and licensed content</td>
169
- <td>Full HD (1080p) or Ultra HD (4K)</td>
170
- <td>Yes</td>
171
- </tr>
172
- <tr>
173
- <td>Tubi</td>
174
- <td>Free</td>
175
- <td>Thousands of movies and TV shows in various genres and languages</td>
176
- <td>Full HD (1080p)</td>
177
- <td>No</td>
178
- </tr>
179
- <tr>
180
- <td>Crackle</td>
181
- <td>Free</td>
182
- <td>Thousands of movies and TV shows in various genres and languages</td>
183
- <td>Full HD (1080p)</td>
184
- <td>No</td>
185
- </tr>
186
- <tr>
187
- <td>Popcornflix</td>
188
- <td>Free</td>
189
- <td>Thousands of movies and TV shows in various genres and languages</td>
190
- <td>Full HD (1080p)</td>
191
- <td>No</td>
192
- </tr>
193
- <tr>
194
- <td>Vudu</td>
195
- <td>Free or $3.99-$19.99 per movie or TV show</td>
196
- <td>Thousands of movies and TV shows in various genres and languages</td>
197
- <td>Full HD (1080p) or Ultra HD (4K)</td>
198
- <td>Yes</td>
199
- </tr>
- </table>
200
- <h2>Conclusion</h2>
201
- <p>In conclusion, MusicHQ.net is a great website to watch full HD movies online for free. However, if you want to enjoy your movies offline, you can also download MusicHQ.net and save them on your device. You just need to follow some simple steps and tips to do it safely and easily. However, you should also be aware of the risks and legal issues that may arise from downloading MusicHQ.net. If you are looking for some alternatives to MusicHQ.net, you can check out the list and comparison table above and choose the one that suits your needs and preferences.</p>
202
- <h2>FAQs</h2>
203
- <p>Here are some of the frequently asked questions about MusicHQ.net:</p>
204
- <ol>
205
- <li><b>Is MusicHQ.net legal?</b></li>
206
- <p>The legality of MusicHQ.net depends on your country or region's laws and regulations regarding streaming and downloading copyrighted content. In some countries or regions, MusicHQ.net may be considered illegal and may be blocked or restricted by the authorities. In that case, you should use a VPN service or a proxy server to access MusicHQ.net safely and anonymously.</p>
207
- <li><b>Is MusicHQ.net safe?</b></li>
208
- <p>The safety of MusicHQ.net depends on the source and quality of the files you download from it. Some files may contain malware, viruses, or spyware that may harm your device or compromise your privacy. To avoid this, you should use a reliable antivirus or anti-malware software to scan the downloaded files before opening them. You should also use an ad-blocker or a pop-up blocker to avoid annoying ads or pop-ups that may interfere with your download.</p>
209
- <li><b>How can I download MusicHQ.net faster?</b></li>
210
- <p>The speed of downloading MusicHQ.net depends on several factors, such as your internet connection, bandwidth, data usage, file size, format, quality, etc. To download MusicHQ.net faster, you should use a download manager or a downloader app that can speed up the download, resume it if it is interrupted, and manage it more efficiently. You should also choose the file size, format, and quality that match your device's specifications and storage capacity.</p>
211
- <li><b>How can I watch MusicHQ.net on my TV?</b></li>
212
- <p>To watch MusicHQ.net on your TV, you need to have a smart TV that supports web browsing or a streaming device that can connect your TV to the internet. You can then visit the official website of MusicHQ.net at www.musichq.net and watch your favorite movies online. Alternatively, you can download MusicHQ.net on your computer or smartphone and transfer the files to a USB drive or an external hard drive. You can then plug the USB drive or the external hard drive into your TV and watch your movies offline.</p>
213
- <li><b>How can I request a movie or TV show on MusicHQ.net?</b></li>
214
- <p>To request a movie or TV show on MusicHQ.net, you need to contact the website's administrators via email or social media. You can find their contact information on the website's homepage or footer. You can send them your request and they will try to add it to their collection as soon as possible. However, there is no guarantee that your request will be fulfilled, as it depends on the availability and legality of the movie or TV show you want.</p>
- </ol>
 
spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_all_in_one.py DELETED
@@ -1,1294 +0,0 @@
1
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2
- # Copyright 2022 The HuggingFace Team. All rights reserved.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
-
16
- import inspect
17
- import os
18
- import random
19
- import re
20
- import time
21
- from typing import Callable, List, Optional, Union
22
-
23
- import numpy as np
24
- import paddle
25
- import PIL
26
- import PIL.Image
27
- from packaging import version
28
-
29
- from paddlenlp.transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
30
-
31
- from ...configuration_utils import FrozenDict
32
- from ...models import AutoencoderKL, UNet2DConditionModel
33
- from ...pipeline_utils import DiffusionPipeline
34
- from ...schedulers import (
35
- DDIMScheduler,
36
- DPMSolverMultistepScheduler,
37
- EulerAncestralDiscreteScheduler,
38
- EulerDiscreteScheduler,
39
- LMSDiscreteScheduler,
40
- PNDMScheduler,
41
- )
42
- from ...utils import PIL_INTERPOLATION, deprecate, logging
43
- from ...utils.testing_utils import load_image
44
- from . import StableDiffusionPipelineOutput
45
- from .safety_checker import StableDiffusionSafetyChecker
46
-
47
- logger = logging.get_logger(__name__) # pylint: disable=invalid-name
48
-
49
-
50
- def save_all(images, FORMAT="jpg", OUTDIR="./outputs/"):
51
- if not isinstance(images, (list, tuple)):
52
- images = [images]
53
- for image in images:
54
- PRECISION = "fp32"
55
- argument = image.argument
56
- os.makedirs(OUTDIR, exist_ok=True)
57
- epoch_time = argument["epoch_time"]
58
- PROMPT = argument["prompt"]
59
- NEGPROMPT = argument["negative_prompt"]
60
- HEIGHT = argument["height"]
61
- WIDTH = argument["width"]
62
- SEED = argument["seed"]
63
- STRENGTH = argument.get("strength", 1)
64
- INFERENCE_STEPS = argument["num_inference_steps"]
65
- GUIDANCE_SCALE = argument["guidance_scale"]
66
-
67
- filename = f"{str(epoch_time)}_scale_{GUIDANCE_SCALE}_steps_{INFERENCE_STEPS}_seed_{SEED}.{FORMAT}"
68
- filedir = f"{OUTDIR}/{filename}"
69
- image.save(filedir)
70
- with open(f"{OUTDIR}/{epoch_time}_prompt.txt", "w") as file:
71
- file.write(
72
- f"PROMPT: {PROMPT}\nNEG_PROMPT: {NEGPROMPT}\n\nINFERENCE_STEPS: {INFERENCE_STEPS}\nHeight: {HEIGHT}\nWidth: {WIDTH}\nSeed: {SEED}\n\nPrecision: {PRECISION}\nSTRENGTH: {STRENGTH}\nGUIDANCE_SCALE: {GUIDANCE_SCALE}"
73
- )
74
-
75
-
76
- re_attention = re.compile(
77
- r"""
78
- \\\(|
79
- \\\)|
80
- \\\[|
81
- \\]|
82
- \\\\|
83
- \\|
84
- \(|
85
- \[|
86
- :([+-]?[.\d]+)\)|
87
- \)|
88
- ]|
89
- [^\\()\[\]:]+|
90
- :
91
- """,
92
- re.X,
93
- )
94
-
95
-
96
- def parse_prompt_attention(text):
97
- """
98
- Parses a string with attention tokens and returns a list of pairs: text and its associated weight.
99
- Accepted tokens are:
100
- (abc) - increases attention to abc by a multiplier of 1.1
101
- (abc:3.12) - increases attention to abc by a multiplier of 3.12
102
- [abc] - decreases attention to abc by a multiplier of 1.1
103
- \( - literal character '('
104
- \[ - literal character '['
105
- \) - literal character ')'
106
- \] - literal character ']'
107
- \\ - literal character '\'
108
- anything else - just text
109
- >>> parse_prompt_attention('normal text')
110
- [['normal text', 1.0]]
111
- >>> parse_prompt_attention('an (important) word')
112
- [['an ', 1.0], ['important', 1.1], [' word', 1.0]]
113
- >>> parse_prompt_attention('(unbalanced')
114
- [['unbalanced', 1.1]]
115
- >>> parse_prompt_attention('\(literal\]')
116
- [['(literal]', 1.0]]
117
- >>> parse_prompt_attention('(unnecessary)(parens)')
118
- [['unnecessaryparens', 1.1]]
119
- >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).')
120
- [['a ', 1.0],
121
- ['house', 1.5730000000000004],
122
- [' ', 1.1],
123
- ['on', 1.0],
124
- [' a ', 1.1],
125
- ['hill', 0.55],
126
- [', sun, ', 1.1],
127
- ['sky', 1.4641000000000006],
128
- ['.', 1.1]]
129
- """
130
-
131
- res = []
132
- round_brackets = []
133
- square_brackets = []
134
-
135
- round_bracket_multiplier = 1.1
136
- square_bracket_multiplier = 1 / 1.1
137
-
138
- def multiply_range(start_position, multiplier):
139
- for p in range(start_position, len(res)):
140
- res[p][1] *= multiplier
141
-
142
- for m in re_attention.finditer(text):
143
- text = m.group(0)
144
- weight = m.group(1)
145
-
146
- if text.startswith("\\"):
147
- res.append([text[1:], 1.0])
148
- elif text == "(":
149
- round_brackets.append(len(res))
150
- elif text == "[":
151
- square_brackets.append(len(res))
152
- elif weight is not None and len(round_brackets) > 0:
153
- multiply_range(round_brackets.pop(), float(weight))
154
- elif text == ")" and len(round_brackets) > 0:
155
- multiply_range(round_brackets.pop(), round_bracket_multiplier)
156
- elif text == "]" and len(square_brackets) > 0:
157
- multiply_range(square_brackets.pop(), square_bracket_multiplier)
158
- else:
159
- res.append([text, 1.0])
160
-
161
- for pos in round_brackets:
162
- multiply_range(pos, round_bracket_multiplier)
163
-
164
- for pos in square_brackets:
165
- multiply_range(pos, square_bracket_multiplier)
166
-
167
- if len(res) == 0:
168
- res = [["", 1.0]]
169
-
170
- # merge runs of identical weights
171
- i = 0
172
- while i + 1 < len(res):
173
- if res[i][1] == res[i + 1][1]:
174
- res[i][0] += res[i + 1][0]
175
- res.pop(i + 1)
176
- else:
177
- i += 1
178
-
179
- return res
180
-
181
-
182
- def get_prompts_with_weights(pipe: DiffusionPipeline, prompt: List[str], max_length: int):
183
- r"""
184
- Tokenize a list of prompts and return its tokens with weights of each token.
185
-
186
- No padding, starting or ending token is included.
187
- """
188
- tokens = []
189
- weights = []
190
- for text in prompt:
191
- texts_and_weights = parse_prompt_attention(text)
192
- text_token = []
193
- text_weight = []
194
- for word, weight in texts_and_weights:
195
- # tokenize and discard the starting and the ending token
196
- token = pipe.tokenizer(word).input_ids[1:-1]
197
- text_token += token
198
-
199
- # copy the weight by length of token
200
- text_weight += [weight] * len(token)
201
-
202
- # stop if the text is too long (longer than truncation limit)
203
- if len(text_token) > max_length:
204
- break
205
-
206
- # truncate
207
- if len(text_token) > max_length:
208
- text_token = text_token[:max_length]
209
- text_weight = text_weight[:max_length]
210
-
211
- tokens.append(text_token)
212
- weights.append(text_weight)
213
- return tokens, weights
214
-
215
-
216
- def pad_tokens_and_weights(tokens, weights, max_length, bos, eos, pad, no_boseos_middle=True, chunk_length=77):
217
- r"""
218
- Pad the tokens (with starting and ending tokens) and weights (with 1.0) to max_length.
219
- """
220
- max_embeddings_multiples = (max_length - 2) // (chunk_length - 2)
221
- weights_length = max_length if no_boseos_middle else max_embeddings_multiples * chunk_length
222
- for i in range(len(tokens)):
223
- tokens[i] = [bos] + tokens[i] + [eos] + [pad] * (max_length - 2 - len(tokens[i]))
224
- if no_boseos_middle:
225
- weights[i] = [1.0] + weights[i] + [1.0] * (max_length - 1 - len(weights[i]))
226
- else:
227
- w = []
228
- if len(weights[i]) == 0:
229
- w = [1.0] * weights_length
230
- else:
231
- for j in range((len(weights[i]) - 1) // chunk_length + 1):
232
- w.append(1.0) # weight for starting token in this chunk
233
- w += weights[i][j * chunk_length : min(len(weights[i]), (j + 1) * chunk_length)]
234
- w.append(1.0) # weight for ending token in this chunk
235
- w += [1.0] * (weights_length - len(w))
236
- weights[i] = w[:]
237
-
238
- return tokens, weights
239
-
240
-
241
- def get_unweighted_text_embeddings(
242
- pipe: DiffusionPipeline, text_input: paddle.Tensor, chunk_length: int, no_boseos_middle: Optional[bool] = True
243
- ):
244
- """
245
- When the length of tokens is a multiple of the capacity of the text encoder,
246
- it should be split into chunks and sent to the text encoder individually.
247
- """
248
- max_embeddings_multiples = (text_input.shape[1] - 2) // (chunk_length - 2)
249
- if max_embeddings_multiples > 1:
250
- text_embeddings = []
251
- for i in range(max_embeddings_multiples):
252
- # extract the i-th chunk
253
- text_input_chunk = text_input[:, i * (chunk_length - 2) : (i + 1) * (chunk_length - 2) + 2].clone()
254
-
255
- # cover the head and the tail by the starting and the ending tokens
256
- text_input_chunk[:, 0] = text_input[0, 0]
257
- text_input_chunk[:, -1] = text_input[0, -1]
258
-
259
- attention_mask = paddle.ones_like(text_input_chunk)
260
- text_embedding = pipe.text_encoder(text_input_chunk, attention_mask=attention_mask)[0]
261
-
262
- if no_boseos_middle:
263
- if i == 0:
264
- # discard the ending token
265
- text_embedding = text_embedding[:, :-1]
266
- elif i == max_embeddings_multiples - 1:
267
- # discard the starting token
268
- text_embedding = text_embedding[:, 1:]
269
- else:
270
- # discard both starting and ending tokens
271
- text_embedding = text_embedding[:, 1:-1]
272
-
273
- text_embeddings.append(text_embedding)
274
- text_embeddings = paddle.concat(text_embeddings, axis=1)
275
- else:
276
- attention_mask = paddle.ones_like(text_input)
277
- text_embeddings = pipe.text_encoder(text_input, attention_mask=attention_mask)[0]
278
- return text_embeddings
279
-
280
-
281
- def get_weighted_text_embeddings(
282
- pipe: DiffusionPipeline,
283
- prompt: Union[str, List[str]],
284
- uncond_prompt: Optional[Union[str, List[str]]] = None,
285
- max_embeddings_multiples: Optional[int] = 1,
286
- no_boseos_middle: Optional[bool] = False,
287
- skip_parsing: Optional[bool] = False,
288
- skip_weighting: Optional[bool] = False,
289
- **kwargs
290
- ):
291
- r"""
292
- Prompts can be assigned with local weights using brackets. For example,
293
- prompt 'A (very beautiful) masterpiece' highlights the words 'very beautiful',
294
- and the embedding tokens corresponding to the words get multiplied by a constant, 1.1.
295
-
296
- Also, to regularize of the embedding, the weighted embedding would be scaled to preserve the original mean.
297
-
298
- Args:
299
- pipe (`DiffusionPipeline`):
300
- Pipe to provide access to the tokenizer and the text encoder.
301
- prompt (`str` or `List[str]`):
302
- The prompt or prompts to guide the image generation.
303
- uncond_prompt (`str` or `List[str]`):
304
- The unconditional prompt or prompts to guide the image generation. If an unconditional prompt
305
- is provided, the embeddings of prompt and uncond_prompt are concatenated.
306
- max_embeddings_multiples (`int`, *optional*, defaults to `1`):
307
- The max multiple length of prompt embeddings compared to the max output length of text encoder.
308
- no_boseos_middle (`bool`, *optional*, defaults to `False`):
309
- If the length of text token is multiples of the capacity of text encoder, whether reserve the starting and
310
- ending token in each of the chunk in the middle.
311
- skip_parsing (`bool`, *optional*, defaults to `False`):
312
- Skip the parsing of brackets.
313
- skip_weighting (`bool`, *optional*, defaults to `False`):
314
- Skip the weighting. When the parsing is skipped, it is forced True.
315
- """
316
- max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
317
- if isinstance(prompt, str):
318
- prompt = [prompt]
319
-
320
- if not skip_parsing:
321
- prompt_tokens, prompt_weights = get_prompts_with_weights(pipe, prompt, max_length - 2)
322
- if uncond_prompt is not None:
323
- if isinstance(uncond_prompt, str):
324
- uncond_prompt = [uncond_prompt]
325
- uncond_tokens, uncond_weights = get_prompts_with_weights(pipe, uncond_prompt, max_length - 2)
326
- else:
327
- prompt_tokens = [
328
- token[1:-1] for token in pipe.tokenizer(prompt, max_length=max_length, truncation=True).input_ids
329
- ]
330
- prompt_weights = [[1.0] * len(token) for token in prompt_tokens]
331
- if uncond_prompt is not None:
332
- if isinstance(uncond_prompt, str):
333
- uncond_prompt = [uncond_prompt]
334
- uncond_tokens = [
335
- token[1:-1]
336
- for token in pipe.tokenizer(uncond_prompt, max_length=max_length, truncation=True).input_ids
337
- ]
338
- uncond_weights = [[1.0] * len(token) for token in uncond_tokens]
339
-
340
- # round up the longest length of tokens to a multiple of (model_max_length - 2)
341
- max_length = max([len(token) for token in prompt_tokens])
342
- if uncond_prompt is not None:
343
- max_length = max(max_length, max([len(token) for token in uncond_tokens]))
344
-
345
- max_embeddings_multiples = min(
346
- max_embeddings_multiples, (max_length - 1) // (pipe.tokenizer.model_max_length - 2) + 1
347
- )
348
- max_embeddings_multiples = max(1, max_embeddings_multiples)
349
- max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
350
-
351
- # pad the length of tokens and weights
352
- # support bert tokenizer
353
- bos = pipe.tokenizer.bos_token_id if pipe.tokenizer.bos_token_id is not None else pipe.tokenizer.cls_token_id
354
- eos = pipe.tokenizer.eos_token_id if pipe.tokenizer.eos_token_id is not None else pipe.tokenizer.sep_token_id
355
- pad = pipe.tokenizer.pad_token_id
356
- prompt_tokens, prompt_weights = pad_tokens_and_weights(
357
- prompt_tokens,
358
- prompt_weights,
359
- max_length,
360
- bos,
361
- eos,
362
- pad,
363
- no_boseos_middle=no_boseos_middle,
364
- chunk_length=pipe.tokenizer.model_max_length,
365
- )
366
- prompt_tokens = paddle.to_tensor(prompt_tokens)
367
- if uncond_prompt is not None:
368
- uncond_tokens, uncond_weights = pad_tokens_and_weights(
369
- uncond_tokens,
370
- uncond_weights,
371
- max_length,
372
- bos,
373
- eos,
374
- pad,
375
- no_boseos_middle=no_boseos_middle,
376
- chunk_length=pipe.tokenizer.model_max_length,
377
- )
378
- uncond_tokens = paddle.to_tensor(uncond_tokens)
379
-
380
- # get the embeddings
381
- text_embeddings = get_unweighted_text_embeddings(
382
- pipe, prompt_tokens, pipe.tokenizer.model_max_length, no_boseos_middle=no_boseos_middle
383
- )
384
- prompt_weights = paddle.to_tensor(prompt_weights, dtype=text_embeddings.dtype)
385
- if uncond_prompt is not None:
386
- uncond_embeddings = get_unweighted_text_embeddings(
387
- pipe, uncond_tokens, pipe.tokenizer.model_max_length, no_boseos_middle=no_boseos_middle
388
- )
389
- uncond_weights = paddle.to_tensor(uncond_weights, dtype=uncond_embeddings.dtype)
390
-
391
- # assign weights to the prompts and normalize in the sense of mean
392
- # TODO: should we normalize by chunk or in a whole (current implementation)?
393
- if (not skip_parsing) and (not skip_weighting):
394
- previous_mean = text_embeddings.mean(axis=[-2, -1])
395
- text_embeddings *= prompt_weights.unsqueeze(-1)
396
- text_embeddings *= previous_mean / text_embeddings.mean(axis=[-2, -1])
397
- if uncond_prompt is not None:
398
- previous_mean = uncond_embeddings.mean(axis=[-2, -1])
399
- uncond_embeddings *= uncond_weights.unsqueeze(-1)
400
- uncond_embeddings *= previous_mean / uncond_embeddings.mean(axis=[-2, -1])
401
-
402
- # For classifier free guidance, we need to do two forward passes.
403
- # Here we concatenate the unconditional and text embeddings into a single batch
404
- # to avoid doing two forward passes
405
- if uncond_prompt is not None:
406
- text_embeddings = paddle.concat([uncond_embeddings, text_embeddings])
407
-
408
- return text_embeddings
409
-
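- # Illustrative usage sketch (assumes a fully constructed pipeline object `pipe` with a CLIP
- # tokenizer and text encoder; the prompt strings are arbitrary examples):
- #
- #     text_embeddings = get_weighted_text_embeddings(
- #         pipe,
- #         prompt="a (very beautiful:1.2) landscape",
- #         uncond_prompt="",
- #         max_embeddings_multiples=3,
- #     )
- #
- # When uncond_prompt is given, the returned tensor stacks the unconditional and conditional
- # embeddings along the batch axis, which is the layout expected for classifier-free guidance.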
410
-
411
- def preprocess_image(image):
412
- w, h = image.size
413
- w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
414
- image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"])
415
- image = np.array(image).astype(np.float32) / 255.0
416
- image = image[None].transpose(0, 3, 1, 2)
417
- image = paddle.to_tensor(image)
418
- return 2.0 * image - 1.0
419
-
420
-
421
- def preprocess_mask(mask):
422
- mask = mask.convert("L")
423
- w, h = mask.size
424
- w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
425
- mask = mask.resize((w // 8, h // 8), resample=PIL_INTERPOLATION["nearest"])
426
- mask = np.array(mask).astype(np.float32) / 255.0
427
- mask = np.tile(mask, (4, 1, 1))
428
- mask = mask[None].transpose(0, 1, 2, 3) # add a batch dimension; the (0, 1, 2, 3) transpose is an identity permutation and leaves the layout unchanged
429
- mask = 1 - mask # repaint white, keep black
430
- mask = paddle.to_tensor(mask)
431
- return mask
432
-
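- # Shape summary for the two preprocessing helpers above:
- # - preprocess_image: PIL image -> paddle tensor of shape [1, 3, H, W] scaled to [-1, 1],
- #   with H and W rounded down to multiples of 32.
- # - preprocess_mask: PIL mask -> paddle tensor of shape [1, 4, H // 8, W // 8] (the 4-channel
- #   latent resolution), with values inverted so white mask pixels become 0 and black become 1.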
433
-
434
- class StableDiffusionPipelineAllinOne(DiffusionPipeline):
435
- r"""
436
- Pipeline for text-to-image, image-to-image and inpainting generation using Stable Diffusion.
437
-
438
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
439
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
440
-
441
- Args:
442
- vae ([`AutoencoderKL`]):
443
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
444
- text_encoder ([`CLIPTextModel`]):
445
- Frozen text-encoder. Stable Diffusion uses the text portion of
446
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
447
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
448
- tokenizer (`CLIPTokenizer`):
449
- Tokenizer of class
450
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
451
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
452
- scheduler ([`SchedulerMixin`]):
453
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
454
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], [`PNDMScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`]
455
- or [`DPMSolverMultistepScheduler`].
456
- safety_checker ([`StableDiffusionSafetyChecker`]):
457
- Classification module that estimates whether generated images could be considered offensive or harmful.
458
- Please, refer to the [model card](https://huggingface.co/junnyu/stable-diffusion-v1-4-paddle) for details.
459
- feature_extractor ([`CLIPFeatureExtractor`]):
460
- Model that extracts features from generated images to be used as inputs for the `safety_checker`.
461
- """
462
- _optional_components = ["safety_checker", "feature_extractor"]
463
-
464
- def __init__(
465
- self,
466
- vae: AutoencoderKL,
467
- text_encoder: CLIPTextModel,
468
- tokenizer: CLIPTokenizer,
469
- unet: UNet2DConditionModel,
470
- scheduler: Union[
471
- DDIMScheduler,
472
- PNDMScheduler,
473
- LMSDiscreteScheduler,
474
- EulerDiscreteScheduler,
475
- EulerAncestralDiscreteScheduler,
476
- DPMSolverMultistepScheduler,
477
- ],
478
- safety_checker: StableDiffusionSafetyChecker,
479
- feature_extractor: CLIPFeatureExtractor,
480
- requires_safety_checker: bool = False,
481
- ):
482
- if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
483
- deprecation_message = (
484
- f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
485
- f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
486
- "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
487
- " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
488
- " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
489
- " file"
490
- )
491
- deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
492
- new_config = dict(scheduler.config)
493
- new_config["steps_offset"] = 1
494
- scheduler._internal_dict = FrozenDict(new_config)
495
-
496
- if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True:
497
- deprecation_message = (
498
- f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`."
499
- " `clip_sample` should be set to False in the configuration file. Please make sure to update the"
500
- " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in"
501
- " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very"
502
- " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file"
503
- )
504
- deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
505
- new_config = dict(scheduler.config)
506
- new_config["clip_sample"] = False
507
- scheduler._internal_dict = FrozenDict(new_config)
508
-
509
- if safety_checker is None and requires_safety_checker:
510
- logger.warning(
511
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
512
- " that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered"
513
- " results in services or applications open to the public. PaddleNLP team, diffusers team and Hugging Face"
514
- " strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling"
515
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
516
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
517
- )
518
- if safety_checker is not None and feature_extractor is None:
519
- raise ValueError(
520
- f"Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
521
- " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
522
- )
523
- is_unet_version_less_0_9_0 = hasattr(unet.config, "_ppdiffusers_version") and version.parse(
524
- version.parse(unet.config._ppdiffusers_version).base_version
525
- ) < version.parse("0.9.0.dev0")
526
- is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
527
- if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64:
528
- deprecation_message = (
529
- "The configuration file of the unet has set the default `sample_size` to smaller than"
530
- " 64 which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the"
531
- " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-"
532
- " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5"
533
- " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the"
534
- " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`"
535
- " in the config might lead to incorrect results in future versions. If you have downloaded this"
536
- " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for"
537
- " the `unet/config.json` file"
538
- )
539
- deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False)
540
- new_config = dict(unet.config)
541
- new_config["sample_size"] = 64
542
- unet._internal_dict = FrozenDict(new_config)
543
-
544
- self.register_modules(
545
- vae=vae,
546
- text_encoder=text_encoder,
547
- tokenizer=tokenizer,
548
- unet=unet,
549
- scheduler=scheduler,
550
- safety_checker=safety_checker,
551
- feature_extractor=feature_extractor,
552
- )
553
- self.register_to_config(requires_safety_checker=requires_safety_checker)
554
-
555
- def __call__(self, *args, **kwargs):
556
- return self.text2image(*args, **kwargs)
557
-
558
- def text2img(self, *args, **kwargs):
559
- return self.text2image(*args, **kwargs)
560
-
561
- def _encode_prompt(
562
- self,
563
- prompt,
564
- negative_prompt,
565
- max_embeddings_multiples,
566
- no_boseos_middle,
567
- skip_parsing,
568
- skip_weighting,
569
- do_classifier_free_guidance,
570
- num_images_per_prompt,
571
- ):
572
- if do_classifier_free_guidance and negative_prompt is None:
573
- negative_prompt = ""
574
- text_embeddings = get_weighted_text_embeddings(
575
- self, prompt, negative_prompt, max_embeddings_multiples, no_boseos_middle, skip_parsing, skip_weighting
576
- )
577
-
578
- bs_embed, seq_len, _ = text_embeddings.shape
579
- text_embeddings = text_embeddings.tile([1, num_images_per_prompt, 1])
580
- text_embeddings = text_embeddings.reshape([bs_embed * num_images_per_prompt, seq_len, -1])
581
- return text_embeddings
582
-
583
- def run_safety_checker(self, image, dtype):
584
- if self.safety_checker is not None:
585
- safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pd")
586
- image, has_nsfw_concept = self.safety_checker(
587
- images=image, clip_input=safety_checker_input.pixel_values.cast(dtype)
588
- )
589
- else:
590
- has_nsfw_concept = None
591
- return image, has_nsfw_concept
592
-
593
- def decode_latents(self, latents):
594
- latents = 1 / 0.18215 * latents
595
- image = self.vae.decode(latents).sample
596
- image = (image / 2 + 0.5).clip(0, 1)
597
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
598
- image = image.transpose([0, 2, 3, 1]).cast("float32").numpy()
599
- return image
600
-
601
- def prepare_extra_step_kwargs(self, eta):
602
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
603
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
604
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
605
- # and should be between [0, 1]
606
-
607
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
608
- extra_step_kwargs = {}
609
- if accepts_eta:
610
- extra_step_kwargs["eta"] = eta
611
-
612
- return extra_step_kwargs
613
-
614
- def check_inputs_text2img(self, prompt, height, width, callback_steps):
615
- if not isinstance(prompt, str) and not isinstance(prompt, list):
616
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
617
-
618
- if height % 8 != 0 or width % 8 != 0:
619
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
620
-
621
- if (callback_steps is None) or (
622
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
623
- ):
624
- raise ValueError(
625
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
626
- f" {type(callback_steps)}."
627
- )
628
-
629
- def check_inputs_img2img_inpaint(self, prompt, strength, callback_steps):
630
- if not isinstance(prompt, str) and not isinstance(prompt, list):
631
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
632
-
633
- if strength < 0 or strength > 1:
634
- raise ValueError(f"The value of strength should be in [0.0, 1.0] but is {strength}")
635
-
636
- if (callback_steps is None) or (
637
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
638
- ):
639
- raise ValueError(
640
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
641
- f" {type(callback_steps)}."
642
- )
643
-
644
- def prepare_latents_text2img(self, batch_size, num_channels_latents, height, width, dtype, latents=None):
645
- shape = [batch_size, num_channels_latents, height // 8, width // 8]
646
- if latents is None:
647
- latents = paddle.randn(shape, dtype=dtype)
648
- else:
649
- if latents.shape != shape:
650
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
651
-
652
- # scale the initial noise by the standard deviation required by the scheduler
653
- latents = latents * self.scheduler.init_noise_sigma
654
- return latents
655
-
656
- def prepare_latents_img2img(self, image, timestep, num_images_per_prompt, dtype):
657
- image = image.cast(dtype=dtype)
658
- init_latent_dist = self.vae.encode(image).latent_dist
659
- init_latents = init_latent_dist.sample()
660
- init_latents = 0.18215 * init_latents
661
-
662
- b, c, h, w = init_latents.shape
663
- init_latents = init_latents.tile([1, num_images_per_prompt, 1, 1])
664
- init_latents = init_latents.reshape([b * num_images_per_prompt, c, h, w])
665
-
666
- # add noise to latents using the timesteps
667
- noise = paddle.randn(init_latents.shape, dtype=dtype)
668
-
669
- # get latents
670
- init_latents = self.scheduler.add_noise(init_latents, noise, timestep)
671
- latents = init_latents
672
-
673
- return latents
674
-
675
- def get_timesteps(self, num_inference_steps, strength):
676
- # get the original timestep using init_timestep
677
- offset = self.scheduler.config.get("steps_offset", 0)
678
- init_timestep = int(num_inference_steps * strength) + offset
679
- init_timestep = min(init_timestep, num_inference_steps)
680
-
681
- t_start = max(num_inference_steps - init_timestep + offset, 0)
682
- timesteps = self.scheduler.timesteps[t_start:]
683
-
684
- return timesteps
685
-
686
- def prepare_latents_inpaint(self, image, timestep, num_images_per_prompt, dtype):
687
- image = image.cast(dtype)
688
- init_latent_dist = self.vae.encode(image).latent_dist
689
- init_latents = init_latent_dist.sample()
690
- init_latents = 0.18215 * init_latents
691
-
692
- b, c, h, w = init_latents.shape
693
- init_latents = init_latents.tile([1, num_images_per_prompt, 1, 1])
694
- init_latents = init_latents.reshape([b * num_images_per_prompt, c, h, w])
695
-
696
- init_latents_orig = init_latents
697
-
698
- # add noise to latents using the timesteps
699
- noise = paddle.randn(init_latents.shape, dtype=dtype)
700
- init_latents = self.scheduler.add_noise(init_latents, noise, timestep)
701
- latents = init_latents
702
- return latents, init_latents_orig, noise
703
-
704
- @paddle.no_grad()
705
- def text2image(
706
- self,
707
- prompt: Union[str, List[str]],
708
- height: int = 512,
709
- width: int = 512,
710
- num_inference_steps: int = 50,
711
- guidance_scale: float = 7.5,
712
- negative_prompt: Optional[Union[str, List[str]]] = None,
713
- num_images_per_prompt: Optional[int] = 1,
714
- eta: float = 0.0,
715
- seed: Optional[int] = None,
716
- latents: Optional[paddle.Tensor] = None,
717
- output_type: Optional[str] = "pil",
718
- return_dict: bool = True,
719
- callback: Optional[Callable[[int, int, paddle.Tensor], None]] = None,
720
- callback_steps: Optional[int] = 1,
721
- # new add
722
- max_embeddings_multiples: Optional[int] = 1,
723
- no_boseos_middle: Optional[bool] = False,
724
- skip_parsing: Optional[bool] = False,
725
- skip_weighting: Optional[bool] = False,
726
- **kwargs,
727
- ):
728
- r"""
729
- Function invoked when calling the pipeline for generation.
730
-
731
- Args:
732
- prompt (`str` or `List[str]`):
733
- The prompt or prompts to guide the image generation.
734
- height (`int`, *optional*, defaults to 512):
735
- The height in pixels of the generated image.
736
- width (`int`, *optional*, defaults to 512):
737
- The width in pixels of the generated image.
738
- num_inference_steps (`int`, *optional*, defaults to 50):
739
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
740
- expense of slower inference.
741
- guidance_scale (`float`, *optional*, defaults to 7.5):
742
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
743
- `guidance_scale` is defined as `w` in equation 2 of the [Imagen
744
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
745
- 1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
746
- usually at the expense of lower image quality.
747
- negative_prompt (`str` or `List[str]`, *optional*):
748
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
749
- if `guidance_scale` is less than `1`).
750
- num_images_per_prompt (`int`, *optional*, defaults to 1):
751
- The number of images to generate per prompt.
752
- eta (`float`, *optional*, defaults to 0.0):
753
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
754
- [`schedulers.DDIMScheduler`], will be ignored for others.
755
- seed (`int`, *optional*):
756
- Random number seed.
757
- latents (`paddle.Tensor`, *optional*):
758
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
759
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
760
- tensor will be generated by sampling using the supplied random `seed`.
761
- output_type (`str`, *optional*, defaults to `"pil"`):
762
- The output format of the generated image. Choose between
763
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
764
- return_dict (`bool`, *optional*, defaults to `True`):
765
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
766
- plain tuple.
767
- callback (`Callable`, *optional*):
768
- A function that will be called every `callback_steps` steps during inference. The function will be
769
- called with the following arguments: `callback(step: int, timestep: int, latents: paddle.Tensor)`.
770
- callback_steps (`int`, *optional*, defaults to 1):
771
- The frequency at which the `callback` function will be called. If not specified, the callback will be
772
- called at every step.
773
-
774
- Returns:
775
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
776
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
777
- When returning a tuple, the first element is a list with the generated images, and the second element is a
778
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
779
- (nsfw) content, according to the `safety_checker`.
780
- """
781
- seed = random.randint(0, 2**32) if seed is None else seed
782
- argument = dict(
783
- prompt=prompt,
784
- negative_prompt=negative_prompt,
785
- height=height,
786
- width=width,
787
- num_inference_steps=num_inference_steps,
788
- guidance_scale=guidance_scale,
789
- num_images_per_prompt=num_images_per_prompt,
790
- eta=eta,
791
- seed=seed,
792
- latents=latents,
793
- max_embeddings_multiples=max_embeddings_multiples,
794
- no_boseos_middle=no_boseos_middle,
795
- skip_parsing=skip_parsing,
796
- skip_weighting=skip_weighting,
797
- epoch_time=time.time(),
798
- )
799
- paddle.seed(seed)
800
- # 1. Check inputs. Raise error if not correct
801
- self.check_inputs_text2img(prompt, height, width, callback_steps)
802
-
803
- # 2. Define call parameters
804
- batch_size = 1 if isinstance(prompt, str) else len(prompt)
805
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
806
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
807
- # corresponds to doing no classifier free guidance.
808
- do_classifier_free_guidance = guidance_scale > 1.0
809
-
810
- # 3. Encode input prompt
811
- text_embeddings = self._encode_prompt(
812
- prompt,
813
- negative_prompt,
814
- max_embeddings_multiples,
815
- no_boseos_middle,
816
- skip_parsing,
817
- skip_weighting,
818
- do_classifier_free_guidance,
819
- num_images_per_prompt,
820
- )
821
-
822
- # 4. Prepare timesteps
823
- self.scheduler.set_timesteps(num_inference_steps)
824
- timesteps = self.scheduler.timesteps
825
-
826
- # 5. Prepare latent variables
827
- num_channels_latents = self.unet.in_channels
828
- latents = self.prepare_latents_text2img(
829
- batch_size * num_images_per_prompt,
830
- num_channels_latents,
831
- height,
832
- width,
833
- text_embeddings.dtype,
834
- latents,
835
- )
836
-
837
- # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
838
- extra_step_kwargs = self.prepare_extra_step_kwargs(eta)
839
-
840
- # 7. Denoising loop
841
- for i, t in enumerate(self.progress_bar(timesteps)):
842
- # expand the latents if we are doing classifier free guidance
843
- latent_model_input = paddle.concat([latents] * 2) if do_classifier_free_guidance else latents
844
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
845
-
846
- # predict the noise residual
847
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
848
-
849
- # perform guidance
850
- if do_classifier_free_guidance:
851
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
852
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
853
-
854
- # compute the previous noisy sample x_t -> x_t-1
855
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
856
-
857
- # call the callback, if provided
858
- if callback is not None and i % callback_steps == 0:
859
- callback(i, t, latents)
860
-
861
- # 8. Post-processing
862
- image = self.decode_latents(latents)
863
-
864
- # 9. Run safety checker
865
- image, has_nsfw_concept = self.run_safety_checker(image, text_embeddings.dtype)
866
-
867
- # 10. Convert to PIL
868
- if output_type == "pil":
869
- image = self.numpy_to_pil(image, argument=argument)
870
-
871
- if not return_dict:
872
- return (image, has_nsfw_concept)
873
-
874
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
875
-
876
- @paddle.no_grad()
877
- def img2img(
878
- self,
879
- prompt: Union[str, List[str]],
880
- image: Union[paddle.Tensor, PIL.Image.Image],
881
- strength: float = 0.8,
882
- height=None,
883
- width=None,
884
- num_inference_steps: Optional[int] = 50,
885
- guidance_scale: Optional[float] = 7.5,
886
- negative_prompt: Optional[Union[str, List[str]]] = None,
887
- num_images_per_prompt: Optional[int] = 1,
888
- eta: Optional[float] = 0.0,
889
- seed: Optional[int] = None,
890
- output_type: Optional[str] = "pil",
891
- return_dict: bool = True,
892
- callback: Optional[Callable[[int, int, paddle.Tensor], None]] = None,
893
- callback_steps: Optional[int] = 1,
894
- # new add
895
- max_embeddings_multiples: Optional[int] = 1,
896
- no_boseos_middle: Optional[bool] = False,
897
- skip_parsing: Optional[bool] = False,
898
- skip_weighting: Optional[bool] = False,
899
- **kwargs,
900
- ):
901
- r"""
902
- Function invoked when calling the pipeline for generation.
903
-
904
- Args:
905
- prompt (`str` or `List[str]`):
906
- The prompt or prompts to guide the image generation.
907
- image (`paddle.Tensor` or `PIL.Image.Image`):
908
- `Image`, or tensor representing an image batch, that will be used as the starting point for the
909
- process.
910
- strength (`float`, *optional*, defaults to 0.8):
911
- Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1.
912
- `image` will be used as a starting point, adding more noise to it the larger the `strength`. The
913
- number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
914
- noise will be maximum and the denoising process will run for the full number of iterations specified in
915
- `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
916
- num_inference_steps (`int`, *optional*, defaults to 50):
917
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
918
- expense of slower inference. This parameter will be modulated by `strength`.
919
- guidance_scale (`float`, *optional*, defaults to 7.5):
920
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
921
- `guidance_scale` is defined as `w` in equation 2 of the [Imagen
922
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
923
- 1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
924
- usually at the expense of lower image quality.
925
- negative_prompt (`str` or `List[str]`, *optional*):
926
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
927
- if `guidance_scale` is less than `1`).
928
- num_images_per_prompt (`int`, *optional*, defaults to 1):
929
- The number of images to generate per prompt.
930
- eta (`float`, *optional*, defaults to 0.0):
931
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
932
- [`schedulers.DDIMScheduler`], will be ignored for others.
933
- seed (`int`, *optional*):
934
- A random seed.
935
- output_type (`str`, *optional*, defaults to `"pil"`):
936
- The output format of the generated image. Choose between
937
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
938
- return_dict (`bool`, *optional*, defaults to `True`):
939
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
940
- plain tuple.
941
- callback (`Callable`, *optional*):
942
- A function that will be called every `callback_steps` steps during inference. The function will be
943
- called with the following arguments: `callback(step: int, timestep: int, latents: paddle.Tensor)`.
944
- callback_steps (`int`, *optional*, defaults to 1):
945
- The frequency at which the `callback` function will be called. If not specified, the callback will be
946
- called at every step.
947
-
948
- Returns:
949
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
950
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
951
- When returning a tuple, the first element is a list with the generated images, and the second element is a
952
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
953
- (nsfw) content, according to the `safety_checker`.
954
- """
955
- seed = random.randint(0, 2**32) if seed is None else seed
956
- image_str = image
957
- if isinstance(image_str, str):
958
- image = load_image(image_str)
959
-
960
- if height is None and width is None:
961
- width = (image.size[0] // 8) * 8
962
- height = (image.size[1] // 8) * 8
963
- elif height is None and width is not None:
964
- height = (image.size[1] // 8) * 8
965
- elif width is None and height is not None:
966
- width = (image.size[0] // 8) * 8
967
- else:
968
- height = height
969
- width = width
970
-
971
- argument = dict(
972
- prompt=prompt,
973
- image=image_str,
974
- negative_prompt=negative_prompt,
975
- height=height,
976
- width=width,
977
- strength=strength,
978
- num_inference_steps=num_inference_steps,
979
- guidance_scale=guidance_scale,
980
- num_images_per_prompt=num_images_per_prompt,
981
- eta=eta,
982
- seed=seed,
983
- max_embeddings_multiples=max_embeddings_multiples,
984
- no_boseos_middle=no_boseos_middle,
985
- skip_parsing=skip_parsing,
986
- skip_weighting=skip_weighting,
987
- epoch_time=time.time(),
988
- )
989
- paddle.seed(seed)
990
-
991
- # 1. Check inputs
992
- self.check_inputs_img2img_inpaint(prompt, strength, callback_steps)
993
-
994
- # 2. Define call parameters
995
- batch_size = 1 if isinstance(prompt, str) else len(prompt)
996
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
997
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
998
- # corresponds to doing no classifier free guidance.
999
- do_classifier_free_guidance = guidance_scale > 1.0
1000
-
1001
- # 3. Encode input prompt
1002
- text_embeddings = self._encode_prompt(
1003
- prompt,
1004
- negative_prompt,
1005
- max_embeddings_multiples,
1006
- no_boseos_middle,
1007
- skip_parsing,
1008
- skip_weighting,
1009
- do_classifier_free_guidance,
1010
- num_images_per_prompt,
1011
- )
1012
-
1013
- # 4. Preprocess image
1014
- if isinstance(image, PIL.Image.Image):
1015
- image = image.resize((width, height))
1016
- image = preprocess_image(image)
1017
-
1018
- # 5. set timesteps
1019
- self.scheduler.set_timesteps(num_inference_steps)
1020
- timesteps = self.get_timesteps(num_inference_steps, strength)
1021
- latent_timestep = timesteps[:1].tile([batch_size * num_images_per_prompt])
1022
-
1023
- # 6. Prepare latent variables
1024
- latents = self.prepare_latents_img2img(image, latent_timestep, num_images_per_prompt, text_embeddings.dtype)
1025
-
1026
- # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
1027
- extra_step_kwargs = self.prepare_extra_step_kwargs(eta)
1028
-
1029
- # 8. Denoising loop
1030
- for i, t in enumerate(self.progress_bar(timesteps)):
1031
- # expand the latents if we are doing classifier free guidance
1032
- latent_model_input = paddle.concat([latents] * 2) if do_classifier_free_guidance else latents
1033
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
1034
-
1035
- # predict the noise residual
1036
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
1037
-
1038
- # perform guidance
1039
- if do_classifier_free_guidance:
1040
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
1041
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
1042
-
1043
- # compute the previous noisy sample x_t -> x_t-1
1044
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
1045
-
1046
- # call the callback, if provided
1047
- if callback is not None and i % callback_steps == 0:
1048
- callback(i, t, latents)
1049
-
1050
- # 9. Post-processing
1051
- image = self.decode_latents(latents)
1052
-
1053
- # 10. Run safety checker
1054
- image, has_nsfw_concept = self.run_safety_checker(image, text_embeddings.dtype)
1055
-
1056
- # 11. Convert to PIL
1057
- if output_type == "pil":
1058
- image = self.numpy_to_pil(image, argument=argument)
1059
-
1060
- if not return_dict:
1061
- return (image, has_nsfw_concept)
1062
-
1063
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
1064
-
1065
- @paddle.no_grad()
1066
- def inpaint(
1067
- self,
1068
- prompt: Union[str, List[str]],
1069
- image: Union[paddle.Tensor, PIL.Image.Image],
1070
- mask_image: Union[paddle.Tensor, PIL.Image.Image],
1071
- height=None,
1072
- width=None,
1073
- strength: float = 0.8,
1074
- num_inference_steps: Optional[int] = 50,
1075
- guidance_scale: Optional[float] = 7.5,
1076
- negative_prompt: Optional[Union[str, List[str]]] = None,
1077
- num_images_per_prompt: Optional[int] = 1,
1078
- eta: Optional[float] = 0.0,
1079
- seed: Optional[int] = None,
1080
- output_type: Optional[str] = "pil",
1081
- return_dict: bool = True,
1082
- callback: Optional[Callable[[int, int, paddle.Tensor], None]] = None,
1083
- callback_steps: Optional[int] = 1,
1084
- # new add
1085
- max_embeddings_multiples: Optional[int] = 1,
1086
- no_boseos_middle: Optional[bool] = False,
1087
- skip_parsing: Optional[bool] = False,
1088
- skip_weighting: Optional[bool] = False,
1089
- **kwargs,
1090
- ):
1091
- r"""
1092
- Function invoked when calling the pipeline for generation.
1093
-
1094
- Args:
1095
- prompt (`str` or `List[str]`):
1096
- The prompt or prompts to guide the image generation.
1097
- image (`paddle.Tensor` or `PIL.Image.Image`):
1098
- `Image`, or tensor representing an image batch, that will be used as the starting point for the
1099
- process. This is the image whose masked region will be inpainted.
1100
- mask_image (`paddle.Tensor` or `PIL.Image.Image`):
1101
- `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
1102
- replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a
1103
- PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should
1104
- contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.
1105
- strength (`float`, *optional*, defaults to 0.8):
1106
- Conceptually, indicates how much to inpaint the masked area. Must be between 0 and 1. When `strength`
1107
- is 1, the denoising process will be run on the masked area for the full number of iterations specified
1108
- in `num_inference_steps`. `image` will be used as a reference for the masked area, adding more
1109
- noise to that region the larger the `strength`. If `strength` is 0, no inpainting will occur.
1110
- num_inference_steps (`int`, *optional*, defaults to 50):
1111
- The reference number of denoising steps. More denoising steps usually lead to a higher quality image at
1112
- the expense of slower inference. This parameter will be modulated by `strength`, as explained above.
1113
- guidance_scale (`float`, *optional*, defaults to 7.5):
1114
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
1115
- `guidance_scale` is defined as `w` in equation 2 of the [Imagen
1116
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
1117
- 1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
1118
- usually at the expense of lower image quality.
1119
- negative_prompt (`str` or `List[str]`, *optional*):
1120
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
1121
- if `guidance_scale` is less than `1`).
1122
- num_images_per_prompt (`int`, *optional*, defaults to 1):
1123
- The number of images to generate per prompt.
1124
- eta (`float`, *optional*, defaults to 0.0):
1125
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
1126
- [`schedulers.DDIMScheduler`], will be ignored for others.
1127
- seed (`int`, *optional*):
1128
- A random seed.
1129
- output_type (`str`, *optional*, defaults to `"pil"`):
1130
- The output format of the generated image. Choose between
1131
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
1132
- return_dict (`bool`, *optional*, defaults to `True`):
1133
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
1134
- plain tuple.
1135
- callback (`Callable`, *optional*):
1136
- A function that will be called every `callback_steps` steps during inference. The function will be
1137
- called with the following arguments: `callback(step: int, timestep: int, latents: paddle.Tensor)`.
1138
- callback_steps (`int`, *optional*, defaults to 1):
1139
- The frequency at which the `callback` function will be called. If not specified, the callback will be
1140
- called at every step.
1141
-
1142
- Returns:
1143
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
1144
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
1145
- When returning a tuple, the first element is a list with the generated images, and the second element is a
1146
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
1147
- (nsfw) content, according to the `safety_checker`.
1148
- """
1149
- seed = random.randint(0, 2**32) if seed is None else seed
1150
- image_str = image
1151
- mask_image_str = mask_image
1152
-
1153
- if isinstance(image_str, str):
1154
- image = load_image(image_str)
1155
- if isinstance(mask_image_str, str):
1156
- mask_image = load_image(mask_image_str)
1157
-
1158
- if height is None and width is None:
1159
- width = (image.size[0] // 8) * 8
1160
- height = (image.size[1] // 8) * 8
1161
- elif height is None and width is not None:
1162
- height = (image.size[1] // 8) * 8
1163
- elif width is None and height is not None:
1164
- width = (image.size[0] // 8) * 8
1165
- else:
1166
- height = height
1167
- width = width
1168
-
1169
- argument = dict(
1170
- prompt=prompt,
1171
- image=image_str,
1172
- mask_image=mask_image_str,
1173
- negative_prompt=negative_prompt,
1174
- height=height,
1175
- width=width,
1176
- strength=strength,
1177
- num_inference_steps=num_inference_steps,
1178
- guidance_scale=guidance_scale,
1179
- num_images_per_prompt=num_images_per_prompt,
1180
- eta=eta,
1181
- seed=seed,
1182
- max_embeddings_multiples=max_embeddings_multiples,
1183
- no_boseos_middle=no_boseos_middle,
1184
- skip_parsing=skip_parsing,
1185
- skip_weighting=skip_weighting,
1186
- epoch_time=time.time(),
1187
- )
1188
- paddle.seed(seed)
1189
-
1190
- # 1. Check inputs
1191
- self.check_inputs_img2img_inpaint(prompt, strength, callback_steps)
1192
-
1193
- # 2. Define call parameters
1194
- batch_size = 1 if isinstance(prompt, str) else len(prompt)
1195
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
1196
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
1197
- # corresponds to doing no classifier free guidance.
1198
- do_classifier_free_guidance = guidance_scale > 1.0
1199
-
1200
- # 3. Encode input prompt
1201
- text_embeddings = self._encode_prompt(
1202
- prompt,
1203
- negative_prompt,
1204
- max_embeddings_multiples,
1205
- no_boseos_middle,
1206
- skip_parsing,
1207
- skip_weighting,
1208
- do_classifier_free_guidance,
1209
- num_images_per_prompt,
1210
- )
1211
-
1212
- if not isinstance(image, paddle.Tensor):
1213
- image = image.resize((width, height))
1214
- image = preprocess_image(image)
1215
-
1216
- if not isinstance(mask_image, paddle.Tensor):
1217
- mask_image = mask_image.resize((width, height))
1218
- mask_image = preprocess_mask(mask_image)
1219
-
1220
- # 5. set timesteps
1221
- self.scheduler.set_timesteps(num_inference_steps)
1222
- timesteps = self.get_timesteps(num_inference_steps, strength)
1223
- latent_timestep = timesteps[:1].tile([batch_size * num_images_per_prompt])
1224
-
1225
- # 6. Prepare latent variables
1226
- # encode the init image into latents and scale the latents
1227
- latents, init_latents_orig, noise = self.prepare_latents_inpaint(
1228
- image, latent_timestep, num_images_per_prompt, text_embeddings.dtype
1229
- )
1230
-
1231
- # 7. Prepare mask latent
1232
- mask = mask_image.cast(latents.dtype)
1233
- mask = paddle.concat([mask] * batch_size * num_images_per_prompt)
1234
-
1235
- # 8. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
1236
- extra_step_kwargs = self.prepare_extra_step_kwargs(eta)
1237
-
1238
- # 9. Denoising loop
1239
- for i, t in enumerate(self.progress_bar(timesteps)):
1240
- # expand the latents if we are doing classifier free guidance
1241
- latent_model_input = paddle.concat([latents] * 2) if do_classifier_free_guidance else latents
1242
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
1243
-
1244
- # predict the noise residual
1245
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
1246
-
1247
- # perform guidance
1248
- if do_classifier_free_guidance:
1249
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
1250
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
1251
-
1252
- # compute the previous noisy sample x_t -> x_t-1
1253
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
1254
- # masking
1255
- init_latents_proper = self.scheduler.add_noise(init_latents_orig, noise, t)
1256
-
1257
- latents = (init_latents_proper * mask) + (latents * (1 - mask))
1258
-
1259
- # call the callback, if provided
1260
- if callback is not None and i % callback_steps == 0:
1261
- callback(i, t, latents)
1262
-
1263
- # 10. Post-processing
1264
- image = self.decode_latents(latents)
1265
-
1266
- # 11. Run safety checker
1267
- image, has_nsfw_concept = self.run_safety_checker(image, text_embeddings.dtype)
1268
-
1269
- # 12. Convert to PIL
1270
- if output_type == "pil":
1271
- image = self.numpy_to_pil(image, argument=argument)
1272
-
1273
- if not return_dict:
1274
- return (image, has_nsfw_concept)
1275
-
1276
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
1277
-
1278
- @staticmethod
1279
- def numpy_to_pil(images, **kwargs):
1280
- """
1281
- Convert a numpy image or a batch of images to a PIL image.
1282
- """
1283
- if images.ndim == 3:
1284
- images = images[None, ...]
1285
- images = (images * 255).round().astype("uint8")
1286
- pil_images = []
1287
- argument = kwargs.pop("argument", None)
1288
- for image in images:
1289
- image = PIL.Image.fromarray(image)
1290
- if argument is not None:
1291
- image.argument = argument
1292
- pil_images.append(image)
1293
-
1294
- return pil_images
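For reference, the deleted file above exposes text2image (aliased by __call__ and text2img), img2img, and inpaint on a single pipeline object. The following is a minimal usage sketch, not the project's documented API: the export name `StableDiffusionPipelineAllinOne` and its import path are assumptions (the class header is outside this excerpt), while the method names, keyword arguments, the output `.images` attribute, and the `junnyu/stable-diffusion-v1-4-paddle` checkpoint are taken from the code and docstrings shown above.

from PIL import Image
from ppdiffusers import StableDiffusionPipelineAllinOne  # assumed export name, for illustration only

# Load the all-in-one pipeline; the checkpoint is the model card cited in the docstring above.
pipe = StableDiffusionPipelineAllinOne.from_pretrained(
    "junnyu/stable-diffusion-v1-4-paddle",
    safety_checker=None,  # optional component, see _optional_components above
)

# Text-to-image: __call__ forwards to text2image().
out = pipe(
    "a photo of an astronaut riding a horse",
    height=512,
    width=512,
    num_inference_steps=50,
    guidance_scale=7.5,
    seed=42,
)
out.images[0].save("text2img.png")

# Image-to-image: strength in [0.0, 1.0] controls how much noise is added to the init image.
init_image = Image.open("init.png").convert("RGB")
pipe.img2img("a watercolor landscape", image=init_image, strength=0.6, seed=42).images[0].save("img2img.png")

# Inpainting: white pixels in the mask are repainted, black pixels are preserved.
mask = Image.open("mask.png").convert("L")
pipe.inpaint("a red sports car", image=init_image, mask_image=mask, strength=0.8, seed=42).images[0].save("inpaint.png")

Because `requires_safety_checker` defaults to False in this constructor, passing `safety_checker=None` is accepted without triggering the warning above.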
 
spaces/7hao/bingo/src/lib/isomorphic/browser.ts DELETED
@@ -1,11 +0,0 @@
1
- 'use client'
2
-
3
- const debug = console.info.bind(console)
4
-
5
- class WebSocketAlias extends WebSocket {
6
- constructor(address: string | URL, ...args: any) {
7
- super(address)
8
- }
9
- }
10
-
11
- export default { fetch, WebSocket: WebSocketAlias, debug }
 
spaces/A00001/bingothoo/src/components/tailwind-indicator.tsx DELETED
@@ -1,14 +0,0 @@
1
- export function TailwindIndicator() {
2
- if (process.env.NODE_ENV === 'production') return null
3
-
4
- return (
5
- <div className="fixed bottom-1 left-1 z-50 flex h-6 w-6 items-center justify-center rounded-full bg-gray-800 p-3 font-mono text-xs text-white">
6
- <div className="block sm:hidden">xs</div>
7
- <div className="hidden sm:block md:hidden">sm</div>
8
- <div className="hidden md:block lg:hidden">md</div>
9
- <div className="hidden lg:block xl:hidden">lg</div>
10
- <div className="hidden xl:block 2xl:hidden">xl</div>
11
- <div className="hidden 2xl:block">2xl</div>
12
- </div>
13
- )
14
- }
 
spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/models_onnx.py DELETED
@@ -1,819 +0,0 @@
1
- import math, pdb, os
2
- from time import time as ttime
3
- import torch
4
- from torch import nn
5
- from torch.nn import functional as F
6
- from infer_pack import modules
7
- from infer_pack import attentions
8
- from infer_pack import commons
9
- from infer_pack.commons import init_weights, get_padding
10
- from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
11
- from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
12
- from infer_pack.commons import init_weights
13
- import numpy as np
14
- from infer_pack import commons
15
-
16
-
17
- class TextEncoder256(nn.Module):
18
- def __init__(
19
- self,
20
- out_channels,
21
- hidden_channels,
22
- filter_channels,
23
- n_heads,
24
- n_layers,
25
- kernel_size,
26
- p_dropout,
27
- f0=True,
28
- ):
29
- super().__init__()
30
- self.out_channels = out_channels
31
- self.hidden_channels = hidden_channels
32
- self.filter_channels = filter_channels
33
- self.n_heads = n_heads
34
- self.n_layers = n_layers
35
- self.kernel_size = kernel_size
36
- self.p_dropout = p_dropout
37
- self.emb_phone = nn.Linear(256, hidden_channels)
38
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
39
- if f0:
40
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
41
- self.encoder = attentions.Encoder(
42
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
43
- )
44
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
45
-
46
- def forward(self, phone, pitch, lengths):
47
- if pitch is None:
48
- x = self.emb_phone(phone)
49
- else:
50
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
51
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
52
- x = self.lrelu(x)
53
- x = torch.transpose(x, 1, -1) # [b, h, t]
54
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
55
- x.dtype
56
- )
57
- x = self.encoder(x * x_mask, x_mask)
58
- stats = self.proj(x) * x_mask
59
-
60
- m, logs = torch.split(stats, self.out_channels, dim=1)
61
- return m, logs, x_mask
62
-
63
-
64
- class TextEncoder768(nn.Module):
65
- def __init__(
66
- self,
67
- out_channels,
68
- hidden_channels,
69
- filter_channels,
70
- n_heads,
71
- n_layers,
72
- kernel_size,
73
- p_dropout,
74
- f0=True,
75
- ):
76
- super().__init__()
77
- self.out_channels = out_channels
78
- self.hidden_channels = hidden_channels
79
- self.filter_channels = filter_channels
80
- self.n_heads = n_heads
81
- self.n_layers = n_layers
82
- self.kernel_size = kernel_size
83
- self.p_dropout = p_dropout
84
- self.emb_phone = nn.Linear(768, hidden_channels)
85
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
86
- if f0:
87
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
88
- self.encoder = attentions.Encoder(
89
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
90
- )
91
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
92
-
93
- def forward(self, phone, pitch, lengths):
94
- if pitch is None:
95
- x = self.emb_phone(phone)
96
- else:
97
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
98
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
99
- x = self.lrelu(x)
100
- x = torch.transpose(x, 1, -1) # [b, h, t]
101
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
102
- x.dtype
103
- )
104
- x = self.encoder(x * x_mask, x_mask)
105
- stats = self.proj(x) * x_mask
106
-
107
- m, logs = torch.split(stats, self.out_channels, dim=1)
108
- return m, logs, x_mask
109
-
110
-
111
- class ResidualCouplingBlock(nn.Module):
112
- def __init__(
113
- self,
114
- channels,
115
- hidden_channels,
116
- kernel_size,
117
- dilation_rate,
118
- n_layers,
119
- n_flows=4,
120
- gin_channels=0,
121
- ):
122
- super().__init__()
123
- self.channels = channels
124
- self.hidden_channels = hidden_channels
125
- self.kernel_size = kernel_size
126
- self.dilation_rate = dilation_rate
127
- self.n_layers = n_layers
128
- self.n_flows = n_flows
129
- self.gin_channels = gin_channels
130
-
131
- self.flows = nn.ModuleList()
132
- for i in range(n_flows):
133
- self.flows.append(
134
- modules.ResidualCouplingLayer(
135
- channels,
136
- hidden_channels,
137
- kernel_size,
138
- dilation_rate,
139
- n_layers,
140
- gin_channels=gin_channels,
141
- mean_only=True,
142
- )
143
- )
144
- self.flows.append(modules.Flip())
145
-
146
- def forward(self, x, x_mask, g=None, reverse=False):
147
- if not reverse:
148
- for flow in self.flows:
149
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
150
- else:
151
- for flow in reversed(self.flows):
152
- x = flow(x, x_mask, g=g, reverse=reverse)
153
- return x
154
-
155
- def remove_weight_norm(self):
156
- for i in range(self.n_flows):
157
- self.flows[i * 2].remove_weight_norm()
158
-
159
-
160
- class PosteriorEncoder(nn.Module):
161
- def __init__(
162
- self,
163
- in_channels,
164
- out_channels,
165
- hidden_channels,
166
- kernel_size,
167
- dilation_rate,
168
- n_layers,
169
- gin_channels=0,
170
- ):
171
- super().__init__()
172
- self.in_channels = in_channels
173
- self.out_channels = out_channels
174
- self.hidden_channels = hidden_channels
175
- self.kernel_size = kernel_size
176
- self.dilation_rate = dilation_rate
177
- self.n_layers = n_layers
178
- self.gin_channels = gin_channels
179
-
180
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
181
- self.enc = modules.WN(
182
- hidden_channels,
183
- kernel_size,
184
- dilation_rate,
185
- n_layers,
186
- gin_channels=gin_channels,
187
- )
188
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
189
-
190
- def forward(self, x, x_lengths, g=None):
191
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
192
- x.dtype
193
- )
194
- x = self.pre(x) * x_mask
195
- x = self.enc(x, x_mask, g=g)
196
- stats = self.proj(x) * x_mask
197
- m, logs = torch.split(stats, self.out_channels, dim=1)
198
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
199
- return z, m, logs, x_mask
200
-
201
- def remove_weight_norm(self):
202
- self.enc.remove_weight_norm()
203
-
204
-
205
- class Generator(torch.nn.Module):
206
- def __init__(
207
- self,
208
- initial_channel,
209
- resblock,
210
- resblock_kernel_sizes,
211
- resblock_dilation_sizes,
212
- upsample_rates,
213
- upsample_initial_channel,
214
- upsample_kernel_sizes,
215
- gin_channels=0,
216
- ):
217
- super(Generator, self).__init__()
218
- self.num_kernels = len(resblock_kernel_sizes)
219
- self.num_upsamples = len(upsample_rates)
220
- self.conv_pre = Conv1d(
221
- initial_channel, upsample_initial_channel, 7, 1, padding=3
222
- )
223
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
224
-
225
- self.ups = nn.ModuleList()
226
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
227
- self.ups.append(
228
- weight_norm(
229
- ConvTranspose1d(
230
- upsample_initial_channel // (2**i),
231
- upsample_initial_channel // (2 ** (i + 1)),
232
- k,
233
- u,
234
- padding=(k - u) // 2,
235
- )
236
- )
237
- )
238
-
239
- self.resblocks = nn.ModuleList()
240
- for i in range(len(self.ups)):
241
- ch = upsample_initial_channel // (2 ** (i + 1))
242
- for j, (k, d) in enumerate(
243
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
244
- ):
245
- self.resblocks.append(resblock(ch, k, d))
246
-
247
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
248
- self.ups.apply(init_weights)
249
-
250
- if gin_channels != 0:
251
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
252
-
253
- def forward(self, x, g=None):
254
- x = self.conv_pre(x)
255
- if g is not None:
256
- x = x + self.cond(g)
257
-
258
- for i in range(self.num_upsamples):
259
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
260
- x = self.ups[i](x)
261
- xs = None
262
- for j in range(self.num_kernels):
263
- if xs is None:
264
- xs = self.resblocks[i * self.num_kernels + j](x)
265
- else:
266
- xs += self.resblocks[i * self.num_kernels + j](x)
267
- x = xs / self.num_kernels
268
- x = F.leaky_relu(x)
269
- x = self.conv_post(x)
270
- x = torch.tanh(x)
271
-
272
- return x
273
-
274
- def remove_weight_norm(self):
275
- for l in self.ups:
276
- remove_weight_norm(l)
277
- for l in self.resblocks:
278
- l.remove_weight_norm()
279
-
280
-
281
- class SineGen(torch.nn.Module):
282
- """Definition of sine generator
283
- SineGen(samp_rate, harmonic_num = 0,
284
- sine_amp = 0.1, noise_std = 0.003,
285
- voiced_threshold = 0,
286
- flag_for_pulse=False)
287
- samp_rate: sampling rate in Hz
288
- harmonic_num: number of harmonic overtones (default 0)
289
- sine_amp: amplitude of sine waveform (default 0.1)
290
- noise_std: std of Gaussian noise (default 0.003)
291
- voiced_threshold: F0 threshold for U/V classification (default 0)
292
- flag_for_pulse: this SinGen is used inside PulseGen (default False)
293
- Note: when flag_for_pulse is True, the first time step of a voiced
294
- segment is always sin(np.pi) or cos(0)
295
- """
296
-
297
- def __init__(
298
- self,
299
- samp_rate,
300
- harmonic_num=0,
301
- sine_amp=0.1,
302
- noise_std=0.003,
303
- voiced_threshold=0,
304
- flag_for_pulse=False,
305
- ):
306
- super(SineGen, self).__init__()
307
- self.sine_amp = sine_amp
308
- self.noise_std = noise_std
309
- self.harmonic_num = harmonic_num
310
- self.dim = self.harmonic_num + 1
311
- self.sampling_rate = samp_rate
312
- self.voiced_threshold = voiced_threshold
313
-
314
- def _f02uv(self, f0):
315
- # generate uv signal
316
- uv = torch.ones_like(f0)
317
- uv = uv * (f0 > self.voiced_threshold)
318
- return uv
319
-
320
- def forward(self, f0, upp):
321
- """sine_tensor, uv = forward(f0)
322
- input F0: tensor(batchsize=1, length, dim=1)
323
- f0 for unvoiced steps should be 0
324
- output sine_tensor: tensor(batchsize=1, length, dim)
325
- output uv: tensor(batchsize=1, length, 1)
326
- """
327
- with torch.no_grad():
328
- f0 = f0[:, None].transpose(1, 2)
329
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
330
- # fundamental component
331
- f0_buf[:, :, 0] = f0[:, :, 0]
332
- for idx in np.arange(self.harmonic_num):
333
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
334
- idx + 2
335
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
336
- rad_values = (f0_buf / self.sampling_rate) % 1  # the % 1 means the harmonic multiples cannot be folded into later post-processing
337
- rand_ini = torch.rand(
338
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
339
- )
340
- rand_ini[:, 0] = 0
341
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
342
- tmp_over_one = torch.cumsum(rad_values, 1)  # applying % 1 here would prevent the cumsum below from being optimized
343
- tmp_over_one *= upp
344
- tmp_over_one = F.interpolate(
345
- tmp_over_one.transpose(2, 1),
346
- scale_factor=upp,
347
- mode="linear",
348
- align_corners=True,
349
- ).transpose(2, 1)
350
- rad_values = F.interpolate(
351
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
352
- ).transpose(
353
- 2, 1
354
- ) #######
355
- tmp_over_one %= 1
356
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
357
- cumsum_shift = torch.zeros_like(rad_values)
358
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
359
- sine_waves = torch.sin(
360
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
361
- )
362
- sine_waves = sine_waves * self.sine_amp
363
- uv = self._f02uv(f0)
364
- uv = F.interpolate(
365
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
366
- ).transpose(2, 1)
367
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
368
- noise = noise_amp * torch.randn_like(sine_waves)
369
- sine_waves = sine_waves * uv + noise
370
- return sine_waves, uv, noise
371
-
372
-
373
- class SourceModuleHnNSF(torch.nn.Module):
374
- """SourceModule for hn-nsf
375
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
376
- add_noise_std=0.003, voiced_threshod=0)
377
- sampling_rate: sampling_rate in Hz
378
- harmonic_num: number of harmonic above F0 (default: 0)
379
- sine_amp: amplitude of sine source signal (default: 0.1)
380
- add_noise_std: std of additive Gaussian noise (default: 0.003)
381
- note that amplitude of noise in unvoiced is decided
382
- by sine_amp
383
- voiced_threshold: threshold to set U/V given F0 (default: 0)
384
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
385
- F0_sampled (batchsize, length, 1)
386
- Sine_source (batchsize, length, 1)
387
- noise_source (batchsize, length, 1)
388
- uv (batchsize, length, 1)
389
- """
390
-
391
- def __init__(
392
- self,
393
- sampling_rate,
394
- harmonic_num=0,
395
- sine_amp=0.1,
396
- add_noise_std=0.003,
397
- voiced_threshod=0,
398
- is_half=True,
399
- ):
400
- super(SourceModuleHnNSF, self).__init__()
401
-
402
- self.sine_amp = sine_amp
403
- self.noise_std = add_noise_std
404
- self.is_half = is_half
405
- # to produce sine waveforms
406
- self.l_sin_gen = SineGen(
407
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
408
- )
409
-
410
- # to merge source harmonics into a single excitation
411
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
412
- self.l_tanh = torch.nn.Tanh()
413
-
414
- def forward(self, x, upp=None):
415
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
416
- if self.is_half:
417
- sine_wavs = sine_wavs.half()
418
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
419
- return sine_merge, None, None # noise, uv
420
-
421
-
422
- class GeneratorNSF(torch.nn.Module):
423
- def __init__(
424
- self,
425
- initial_channel,
426
- resblock,
427
- resblock_kernel_sizes,
428
- resblock_dilation_sizes,
429
- upsample_rates,
430
- upsample_initial_channel,
431
- upsample_kernel_sizes,
432
- gin_channels,
433
- sr,
434
- is_half=False,
435
- ):
436
- super(GeneratorNSF, self).__init__()
437
- self.num_kernels = len(resblock_kernel_sizes)
438
- self.num_upsamples = len(upsample_rates)
439
-
440
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
441
- self.m_source = SourceModuleHnNSF(
442
- sampling_rate=sr, harmonic_num=0, is_half=is_half
443
- )
444
- self.noise_convs = nn.ModuleList()
445
- self.conv_pre = Conv1d(
446
- initial_channel, upsample_initial_channel, 7, 1, padding=3
447
- )
448
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
449
-
450
- self.ups = nn.ModuleList()
451
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
452
- c_cur = upsample_initial_channel // (2 ** (i + 1))
453
- self.ups.append(
454
- weight_norm(
455
- ConvTranspose1d(
456
- upsample_initial_channel // (2**i),
457
- upsample_initial_channel // (2 ** (i + 1)),
458
- k,
459
- u,
460
- padding=(k - u) // 2,
461
- )
462
- )
463
- )
464
- if i + 1 < len(upsample_rates):
465
- stride_f0 = np.prod(upsample_rates[i + 1 :])
466
- self.noise_convs.append(
467
- Conv1d(
468
- 1,
469
- c_cur,
470
- kernel_size=stride_f0 * 2,
471
- stride=stride_f0,
472
- padding=stride_f0 // 2,
473
- )
474
- )
475
- else:
476
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
477
-
478
- self.resblocks = nn.ModuleList()
479
- for i in range(len(self.ups)):
480
- ch = upsample_initial_channel // (2 ** (i + 1))
481
- for j, (k, d) in enumerate(
482
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
483
- ):
484
- self.resblocks.append(resblock(ch, k, d))
485
-
486
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
487
- self.ups.apply(init_weights)
488
-
489
- if gin_channels != 0:
490
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
491
-
492
- self.upp = np.prod(upsample_rates)
493
-
494
- def forward(self, x, f0, g=None):
495
- har_source, noi_source, uv = self.m_source(f0, self.upp)
496
- har_source = har_source.transpose(1, 2)
497
- x = self.conv_pre(x)
498
- if g is not None:
499
- x = x + self.cond(g)
500
-
501
- for i in range(self.num_upsamples):
502
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
503
- x = self.ups[i](x)
504
- x_source = self.noise_convs[i](har_source)
505
- x = x + x_source
506
- xs = None
507
- for j in range(self.num_kernels):
508
- if xs is None:
509
- xs = self.resblocks[i * self.num_kernels + j](x)
510
- else:
511
- xs += self.resblocks[i * self.num_kernels + j](x)
512
- x = xs / self.num_kernels
513
- x = F.leaky_relu(x)
514
- x = self.conv_post(x)
515
- x = torch.tanh(x)
516
- return x
517
-
518
- def remove_weight_norm(self):
519
- for l in self.ups:
520
- remove_weight_norm(l)
521
- for l in self.resblocks:
522
- l.remove_weight_norm()
523
-
524
-
525
- sr2sr = {
526
- "32k": 32000,
527
- "40k": 40000,
528
- "48k": 48000,
529
- }
530
-
531
-
532
- class SynthesizerTrnMsNSFsidM(nn.Module):
533
- def __init__(
534
- self,
535
- spec_channels,
536
- segment_size,
537
- inter_channels,
538
- hidden_channels,
539
- filter_channels,
540
- n_heads,
541
- n_layers,
542
- kernel_size,
543
- p_dropout,
544
- resblock,
545
- resblock_kernel_sizes,
546
- resblock_dilation_sizes,
547
- upsample_rates,
548
- upsample_initial_channel,
549
- upsample_kernel_sizes,
550
- spk_embed_dim,
551
- gin_channels,
552
- sr,
553
- version,
554
- **kwargs
555
- ):
556
- super().__init__()
557
- if type(sr) == type("strr"):
558
- sr = sr2sr[sr]
559
- self.spec_channels = spec_channels
560
- self.inter_channels = inter_channels
561
- self.hidden_channels = hidden_channels
562
- self.filter_channels = filter_channels
563
- self.n_heads = n_heads
564
- self.n_layers = n_layers
565
- self.kernel_size = kernel_size
566
- self.p_dropout = p_dropout
567
- self.resblock = resblock
568
- self.resblock_kernel_sizes = resblock_kernel_sizes
569
- self.resblock_dilation_sizes = resblock_dilation_sizes
570
- self.upsample_rates = upsample_rates
571
- self.upsample_initial_channel = upsample_initial_channel
572
- self.upsample_kernel_sizes = upsample_kernel_sizes
573
- self.segment_size = segment_size
574
- self.gin_channels = gin_channels
575
- # self.hop_length = hop_length#
576
- self.spk_embed_dim = spk_embed_dim
577
- if version == "v1":
578
- self.enc_p = TextEncoder256(
579
- inter_channels,
580
- hidden_channels,
581
- filter_channels,
582
- n_heads,
583
- n_layers,
584
- kernel_size,
585
- p_dropout,
586
- )
587
- else:
588
- self.enc_p = TextEncoder768(
589
- inter_channels,
590
- hidden_channels,
591
- filter_channels,
592
- n_heads,
593
- n_layers,
594
- kernel_size,
595
- p_dropout,
596
- )
597
- self.dec = GeneratorNSF(
598
- inter_channels,
599
- resblock,
600
- resblock_kernel_sizes,
601
- resblock_dilation_sizes,
602
- upsample_rates,
603
- upsample_initial_channel,
604
- upsample_kernel_sizes,
605
- gin_channels=gin_channels,
606
- sr=sr,
607
- is_half=kwargs["is_half"],
608
- )
609
- self.enc_q = PosteriorEncoder(
610
- spec_channels,
611
- inter_channels,
612
- hidden_channels,
613
- 5,
614
- 1,
615
- 16,
616
- gin_channels=gin_channels,
617
- )
618
- self.flow = ResidualCouplingBlock(
619
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
620
- )
621
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
622
- self.speaker_map = None
623
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
624
-
625
- def remove_weight_norm(self):
626
- self.dec.remove_weight_norm()
627
- self.flow.remove_weight_norm()
628
- self.enc_q.remove_weight_norm()
629
-
630
- def construct_spkmixmap(self, n_speaker):
631
- self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels))
632
- for i in range(n_speaker):
633
- self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]]))
634
- self.speaker_map = self.speaker_map.unsqueeze(0)
635
-
636
- def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None):
637
- if self.speaker_map is not None: # [N, S] * [S, B, 1, H]
638
- g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1]
639
- g = g * self.speaker_map # [N, S, B, 1, H]
640
- g = torch.sum(g, dim=1) # [N, 1, B, 1, H]
641
- g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N]
642
- else:
643
- g = g.unsqueeze(0)
644
- g = self.emb_g(g).transpose(1, 2)
645
-
646
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
647
- z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
648
- z = self.flow(z_p, x_mask, g=g, reverse=True)
649
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
650
- return o
651
-
652
-
653
- class MultiPeriodDiscriminator(torch.nn.Module):
654
- def __init__(self, use_spectral_norm=False):
655
- super(MultiPeriodDiscriminator, self).__init__()
656
- periods = [2, 3, 5, 7, 11, 17]
657
- # periods = [3, 5, 7, 11, 17, 23, 37]
658
-
659
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
660
- discs = discs + [
661
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
662
- ]
663
- self.discriminators = nn.ModuleList(discs)
664
-
665
- def forward(self, y, y_hat):
666
- y_d_rs = [] #
667
- y_d_gs = []
668
- fmap_rs = []
669
- fmap_gs = []
670
- for i, d in enumerate(self.discriminators):
671
- y_d_r, fmap_r = d(y)
672
- y_d_g, fmap_g = d(y_hat)
673
- # for j in range(len(fmap_r)):
674
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
675
- y_d_rs.append(y_d_r)
676
- y_d_gs.append(y_d_g)
677
- fmap_rs.append(fmap_r)
678
- fmap_gs.append(fmap_g)
679
-
680
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
681
-
682
-
683
- class MultiPeriodDiscriminatorV2(torch.nn.Module):
684
- def __init__(self, use_spectral_norm=False):
685
- super(MultiPeriodDiscriminatorV2, self).__init__()
686
- # periods = [2, 3, 5, 7, 11, 17]
687
- periods = [2, 3, 5, 7, 11, 17, 23, 37]
688
-
689
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
690
- discs = discs + [
691
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
692
- ]
693
- self.discriminators = nn.ModuleList(discs)
694
-
695
- def forward(self, y, y_hat):
696
- y_d_rs = [] #
697
- y_d_gs = []
698
- fmap_rs = []
699
- fmap_gs = []
700
- for i, d in enumerate(self.discriminators):
701
- y_d_r, fmap_r = d(y)
702
- y_d_g, fmap_g = d(y_hat)
703
- # for j in range(len(fmap_r)):
704
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
705
- y_d_rs.append(y_d_r)
706
- y_d_gs.append(y_d_g)
707
- fmap_rs.append(fmap_r)
708
- fmap_gs.append(fmap_g)
709
-
710
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
711
-
712
-
713
- class DiscriminatorS(torch.nn.Module):
714
- def __init__(self, use_spectral_norm=False):
715
- super(DiscriminatorS, self).__init__()
716
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
717
- self.convs = nn.ModuleList(
718
- [
719
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
720
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
721
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
722
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
723
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
724
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
725
- ]
726
- )
727
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
728
-
729
- def forward(self, x):
730
- fmap = []
731
-
732
- for l in self.convs:
733
- x = l(x)
734
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
735
- fmap.append(x)
736
- x = self.conv_post(x)
737
- fmap.append(x)
738
- x = torch.flatten(x, 1, -1)
739
-
740
- return x, fmap
741
-
742
-
743
- class DiscriminatorP(torch.nn.Module):
744
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
745
- super(DiscriminatorP, self).__init__()
746
- self.period = period
747
- self.use_spectral_norm = use_spectral_norm
748
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
749
- self.convs = nn.ModuleList(
750
- [
751
- norm_f(
752
- Conv2d(
753
- 1,
754
- 32,
755
- (kernel_size, 1),
756
- (stride, 1),
757
- padding=(get_padding(kernel_size, 1), 0),
758
- )
759
- ),
760
- norm_f(
761
- Conv2d(
762
- 32,
763
- 128,
764
- (kernel_size, 1),
765
- (stride, 1),
766
- padding=(get_padding(kernel_size, 1), 0),
767
- )
768
- ),
769
- norm_f(
770
- Conv2d(
771
- 128,
772
- 512,
773
- (kernel_size, 1),
774
- (stride, 1),
775
- padding=(get_padding(kernel_size, 1), 0),
776
- )
777
- ),
778
- norm_f(
779
- Conv2d(
780
- 512,
781
- 1024,
782
- (kernel_size, 1),
783
- (stride, 1),
784
- padding=(get_padding(kernel_size, 1), 0),
785
- )
786
- ),
787
- norm_f(
788
- Conv2d(
789
- 1024,
790
- 1024,
791
- (kernel_size, 1),
792
- 1,
793
- padding=(get_padding(kernel_size, 1), 0),
794
- )
795
- ),
796
- ]
797
- )
798
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
799
-
800
- def forward(self, x):
801
- fmap = []
802
-
803
- # 1d to 2d
804
- b, c, t = x.shape
805
- if t % self.period != 0: # pad first
806
- n_pad = self.period - (t % self.period)
807
- x = F.pad(x, (0, n_pad), "reflect")
808
- t = t + n_pad
809
- x = x.view(b, c, t // self.period, self.period)
810
-
811
- for l in self.convs:
812
- x = l(x)
813
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
814
- fmap.append(x)
815
- x = self.conv_post(x)
816
- fmap.append(x)
817
- x = torch.flatten(x, 1, -1)
818
-
819
- return x, fmap
 
spaces/ASJMO/freegpt/g4f/Provider/Providers/helpers/you.py DELETED
@@ -1,79 +0,0 @@
1
- import sys
2
- import json
3
- import urllib.parse
4
-
5
- from curl_cffi import requests
6
-
7
- config = json.loads(sys.argv[1])
8
- messages = config['messages']
9
- prompt = ''
10
-
11
-
12
- def transform(messages: list) -> list:
13
- result = []
14
- i = 0
15
-
16
- while i < len(messages):
17
- if messages[i]['role'] == 'user':
18
- question = messages[i]['content']
19
- i += 1
20
-
21
- if i < len(messages) and messages[i]['role'] == 'assistant':
22
- answer = messages[i]['content']
23
- i += 1
24
- else:
25
- answer = ''
26
-
27
- result.append({'question': question, 'answer': answer})
28
-
29
- elif messages[i]['role'] == 'assistant':
30
- result.append({'question': '', 'answer': messages[i]['content']})
31
- i += 1
32
-
33
- elif messages[i]['role'] == 'system':
34
- result.append({'question': messages[i]['content'], 'answer': ''})
35
- i += 1
36
-
37
- return result
38
-
39
- headers = {
40
- 'Content-Type': 'application/x-www-form-urlencoded',
41
- 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
42
- 'Sec-Fetch-Site': 'same-origin',
43
- 'Accept-Language': 'en-GB,en;q=0.9',
44
- 'Sec-Fetch-Mode': 'navigate',
45
- 'Host': 'you.com',
46
- 'Origin': 'https://you.com',
47
- 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.4 Safari/605.1.15',
48
- 'Referer': 'https://you.com/api/streamingSearch?q=nice&safeSearch=Moderate&onShoppingPage=false&mkt=&responseFilter=WebPages,Translations,TimeZone,Computation,RelatedSearches&domain=youchat&queryTraceId=7a6671f8-5881-404d-8ea3-c3f8301f85ba&chat=%5B%7B%22question%22%3A%22hi%22%2C%22answer%22%3A%22Hello!%20How%20can%20I%20assist%20you%20today%3F%22%7D%5D&chatId=7a6671f8-5881-404d-8ea3-c3f8301f85ba&__cf_chl_tk=ex2bw6vn5vbLsUm8J5rDYUC0Bjzc1XZqka6vUl6765A-1684108495-0-gaNycGzNDtA',
49
- 'Connection': 'keep-alive',
50
- 'Sec-Fetch-Dest': 'document',
51
- 'Priority': 'u=0, i',
52
- }
53
-
54
- if messages[-1]['role'] == 'user':
55
- prompt = messages[-1]['content']
56
- messages = messages[:-1]
57
-
58
- params = urllib.parse.urlencode({
59
- 'q': prompt,
60
- 'domain': 'youchat',
61
- 'chat': transform(messages)
62
- })
63
-
64
- def output(chunk):
65
- if b'"youChatToken"' in chunk:
66
- chunk_json = json.loads(chunk.decode().split('data: ')[1])
67
-
68
- print(chunk_json['youChatToken'], flush=True, end = '')
69
-
70
- while True:
71
- try:
72
- response = requests.get(f'https://you.com/api/streamingSearch?{params}',
73
- headers=headers, content_callback=output, impersonate='safari15_5')
74
-
75
- exit(0)
76
-
77
- except Exception as e:
78
- print('an error occured, retrying... |', e, flush=True)
79
- continue
 
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet101_8xb32_in1k.py DELETED
@@ -1,4 +0,0 @@
1
- _base_ = [
2
- '../_base_/models/resnet101.py', '../_base_/datasets/imagenet_bs32.py',
3
- '../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
4
- ]
 
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32-mixup_in1k.py DELETED
@@ -1,5 +0,0 @@
1
- _base_ = [
2
- '../_base_/models/resnet50_mixup.py',
3
- '../_base_/datasets/imagenet_bs32.py',
4
- '../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
5
- ]
 
spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/server/abortedGenerations.ts DELETED
@@ -1,29 +0,0 @@
1
- // Shouldn't be needed if we dove into sveltekit internals, see https://github.com/huggingface/chat-ui/pull/88#issuecomment-1523173850
2
-
3
- import { setTimeout } from "node:timers/promises";
4
- import { collections } from "./database";
5
-
6
- let closed = false;
7
- process.on("SIGINT", () => {
8
- closed = true;
9
- });
10
-
11
- export let abortedGenerations: Map<string, Date> = new Map();
12
-
13
- async function maintainAbortedGenerations() {
14
- while (!closed) {
15
- await setTimeout(1000);
16
-
17
- try {
18
- const aborts = await collections.abortedGenerations.find({}).sort({ createdAt: 1 }).toArray();
19
-
20
- abortedGenerations = new Map(
21
- aborts.map(({ conversationId, createdAt }) => [conversationId.toString(), createdAt])
22
- );
23
- } catch (err) {
24
- console.error(err);
25
- }
26
- }
27
- }
28
-
29
- maintainAbortedGenerations();
 
spaces/Adapter/CoAdapter/ldm/modules/diffusionmodules/util.py DELETED
@@ -1,270 +0,0 @@
1
- # adopted from
2
- # https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py
3
- # and
4
- # https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
5
- # and
6
- # https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py
7
- #
8
- # thanks!
9
-
10
-
11
- import os
12
- import math
13
- import torch
14
- import torch.nn as nn
15
- import numpy as np
16
- from einops import repeat
17
-
18
- from ldm.util import instantiate_from_config
19
-
20
-
21
- def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
22
- if schedule == "linear":
23
- betas = (
24
- torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2
25
- )
26
-
27
- elif schedule == "cosine":
28
- timesteps = (
29
- torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s
30
- )
31
- alphas = timesteps / (1 + cosine_s) * np.pi / 2
32
- alphas = torch.cos(alphas).pow(2)
33
- alphas = alphas / alphas[0]
34
- betas = 1 - alphas[1:] / alphas[:-1]
35
- betas = np.clip(betas, a_min=0, a_max=0.999)
36
-
37
- elif schedule == "sqrt_linear":
38
- betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64)
39
- elif schedule == "sqrt":
40
- betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5
41
- else:
42
- raise ValueError(f"schedule '{schedule}' unknown.")
43
- return betas.numpy()
44
-
45
-
46
- def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True):
47
- if ddim_discr_method == 'uniform':
48
- c = num_ddpm_timesteps // num_ddim_timesteps
49
- ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c)))
50
- elif ddim_discr_method == 'quad':
51
- ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int)
52
- else:
53
- raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"')
54
-
55
- # assert ddim_timesteps.shape[0] == num_ddim_timesteps
56
- # add one to get the final alpha values right (the ones from first scale to data during sampling)
57
- steps_out = ddim_timesteps + 1
58
- if verbose:
59
- print(f'Selected timesteps for ddim sampler: {steps_out}')
60
- return steps_out
61
-
62
-
63
- def make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True):
64
- # select alphas for computing the variance schedule
65
- alphas = alphacums[ddim_timesteps]
66
- alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist())
67
-
68
- # according the the formula provided in https://arxiv.org/abs/2010.02502
69
- sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev))
70
- if verbose:
71
- print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}')
72
- print(f'For the chosen value of eta, which is {eta}, '
73
- f'this results in the following sigma_t schedule for ddim sampler {sigmas}')
74
- return sigmas, alphas, alphas_prev
75
-
76
-
77
- def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999):
78
- """
79
- Create a beta schedule that discretizes the given alpha_t_bar function,
80
- which defines the cumulative product of (1-beta) over time from t = [0,1].
81
- :param num_diffusion_timesteps: the number of betas to produce.
82
- :param alpha_bar: a lambda that takes an argument t from 0 to 1 and
83
- produces the cumulative product of (1-beta) up to that
84
- part of the diffusion process.
85
- :param max_beta: the maximum beta to use; use values lower than 1 to
86
- prevent singularities.
87
- """
88
- betas = []
89
- for i in range(num_diffusion_timesteps):
90
- t1 = i / num_diffusion_timesteps
91
- t2 = (i + 1) / num_diffusion_timesteps
92
- betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))
93
- return np.array(betas)
94
-
95
-
96
- def extract_into_tensor(a, t, x_shape):
97
- b, *_ = t.shape
98
- out = a.gather(-1, t)
99
- return out.reshape(b, *((1,) * (len(x_shape) - 1)))
100
-
101
-
102
- def checkpoint(func, inputs, params, flag):
103
- """
104
- Evaluate a function without caching intermediate activations, allowing for
105
- reduced memory at the expense of extra compute in the backward pass.
106
- :param func: the function to evaluate.
107
- :param inputs: the argument sequence to pass to `func`.
108
- :param params: a sequence of parameters `func` depends on but does not
109
- explicitly take as arguments.
110
- :param flag: if False, disable gradient checkpointing.
111
- """
112
- if flag:
113
- args = tuple(inputs) + tuple(params)
114
- return CheckpointFunction.apply(func, len(inputs), *args)
115
- else:
116
- return func(*inputs)
117
-
118
-
119
- class CheckpointFunction(torch.autograd.Function):
120
- @staticmethod
121
- def forward(ctx, run_function, length, *args):
122
- ctx.run_function = run_function
123
- ctx.input_tensors = list(args[:length])
124
- ctx.input_params = list(args[length:])
125
- ctx.gpu_autocast_kwargs = {"enabled": torch.is_autocast_enabled(),
126
- "dtype": torch.get_autocast_gpu_dtype(),
127
- "cache_enabled": torch.is_autocast_cache_enabled()}
128
- with torch.no_grad():
129
- output_tensors = ctx.run_function(*ctx.input_tensors)
130
- return output_tensors
131
-
132
- @staticmethod
133
- def backward(ctx, *output_grads):
134
- ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors]
135
- with torch.enable_grad(), \
136
- torch.cuda.amp.autocast(**ctx.gpu_autocast_kwargs):
137
- # Fixes a bug where the first op in run_function modifies the
138
- # Tensor storage in place, which is not allowed for detach()'d
139
- # Tensors.
140
- shallow_copies = [x.view_as(x) for x in ctx.input_tensors]
141
- output_tensors = ctx.run_function(*shallow_copies)
142
- input_grads = torch.autograd.grad(
143
- output_tensors,
144
- ctx.input_tensors + ctx.input_params,
145
- output_grads,
146
- allow_unused=True,
147
- )
148
- del ctx.input_tensors
149
- del ctx.input_params
150
- del output_tensors
151
- return (None, None) + input_grads
152
-
153
-
154
- def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False):
155
- """
156
- Create sinusoidal timestep embeddings.
157
- :param timesteps: a 1-D Tensor of N indices, one per batch element.
158
- These may be fractional.
159
- :param dim: the dimension of the output.
160
- :param max_period: controls the minimum frequency of the embeddings.
161
- :return: an [N x dim] Tensor of positional embeddings.
162
- """
163
- if not repeat_only:
164
- half = dim // 2
165
- freqs = torch.exp(
166
- -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half
167
- ).to(device=timesteps.device)
168
- args = timesteps[:, None].float() * freqs[None]
169
- embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
170
- if dim % 2:
171
- embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)
172
- else:
173
- embedding = repeat(timesteps, 'b -> b d', d=dim)
174
- return embedding
175
-
176
-
177
- def zero_module(module):
178
- """
179
- Zero out the parameters of a module and return it.
180
- """
181
- for p in module.parameters():
182
- p.detach().zero_()
183
- return module
184
-
185
-
186
- def scale_module(module, scale):
187
- """
188
- Scale the parameters of a module and return it.
189
- """
190
- for p in module.parameters():
191
- p.detach().mul_(scale)
192
- return module
193
-
194
-
195
- def mean_flat(tensor):
196
- """
197
- Take the mean over all non-batch dimensions.
198
- """
199
- return tensor.mean(dim=list(range(1, len(tensor.shape))))
200
-
201
-
202
- def normalization(channels):
203
- """
204
- Make a standard normalization layer.
205
- :param channels: number of input channels.
206
- :return: an nn.Module for normalization.
207
- """
208
- return GroupNorm32(32, channels)
209
-
210
-
211
- # PyTorch 1.7 has SiLU, but we support PyTorch 1.5.
212
- class SiLU(nn.Module):
213
- def forward(self, x):
214
- return x * torch.sigmoid(x)
215
-
216
-
217
- class GroupNorm32(nn.GroupNorm):
218
- def forward(self, x):
219
- return super().forward(x.float()).type(x.dtype)
220
-
221
- def conv_nd(dims, *args, **kwargs):
222
- """
223
- Create a 1D, 2D, or 3D convolution module.
224
- """
225
- if dims == 1:
226
- return nn.Conv1d(*args, **kwargs)
227
- elif dims == 2:
228
- return nn.Conv2d(*args, **kwargs)
229
- elif dims == 3:
230
- return nn.Conv3d(*args, **kwargs)
231
- raise ValueError(f"unsupported dimensions: {dims}")
232
-
233
-
234
- def linear(*args, **kwargs):
235
- """
236
- Create a linear module.
237
- """
238
- return nn.Linear(*args, **kwargs)
239
-
240
-
241
- def avg_pool_nd(dims, *args, **kwargs):
242
- """
243
- Create a 1D, 2D, or 3D average pooling module.
244
- """
245
- if dims == 1:
246
- return nn.AvgPool1d(*args, **kwargs)
247
- elif dims == 2:
248
- return nn.AvgPool2d(*args, **kwargs)
249
- elif dims == 3:
250
- return nn.AvgPool3d(*args, **kwargs)
251
- raise ValueError(f"unsupported dimensions: {dims}")
252
-
253
-
254
- class HybridConditioner(nn.Module):
255
-
256
- def __init__(self, c_concat_config, c_crossattn_config):
257
- super().__init__()
258
- self.concat_conditioner = instantiate_from_config(c_concat_config)
259
- self.crossattn_conditioner = instantiate_from_config(c_crossattn_config)
260
-
261
- def forward(self, c_concat, c_crossattn):
262
- c_concat = self.concat_conditioner(c_concat)
263
- c_crossattn = self.crossattn_conditioner(c_crossattn)
264
- return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]}
265
-
266
-
267
- def noise_like(shape, device, repeat=False):
268
- repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))
269
- noise = lambda: torch.randn(shape, device=device)
270
- return repeat_noise() if repeat else noise()
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/clock.d.ts DELETED
@@ -1,2 +0,0 @@
1
- import Clock from './time/clock/Clock';
2
- export default Clock;
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/line.d.ts DELETED
@@ -1,2 +0,0 @@
1
- import Line from './gameobjects/rendertexture/line/Line.js';
2
- export default Line;
 
spaces/AiMimicry/sovits-models/inference/infer_tool_grad.py DELETED
@@ -1,160 +0,0 @@
1
- import hashlib
2
- import json
3
- import logging
4
- import os
5
- import time
6
- from pathlib import Path
7
- import io
8
- import librosa
9
- import maad
10
- import numpy as np
11
- from inference import slicer
12
- import parselmouth
13
- import soundfile
14
- import torch
15
- import torchaudio
16
-
17
- from hubert import hubert_model
18
- import utils
19
- from models import SynthesizerTrn
20
- logging.getLogger('numba').setLevel(logging.WARNING)
21
- logging.getLogger('matplotlib').setLevel(logging.WARNING)
22
-
23
- def resize2d_f0(x, target_len):
24
- source = np.array(x)
25
- source[source < 0.001] = np.nan
26
- target = np.interp(np.arange(0, len(source) * target_len, len(source)) / target_len, np.arange(0, len(source)),
27
- source)
28
- res = np.nan_to_num(target)
29
- return res
30
-
31
- def get_f0(x, p_len,f0_up_key=0):
32
-
33
- time_step = 160 / 16000 * 1000
34
- f0_min = 50
35
- f0_max = 1100
36
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
37
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
38
-
39
- f0 = parselmouth.Sound(x, 16000).to_pitch_ac(
40
- time_step=time_step / 1000, voicing_threshold=0.6,
41
- pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency']
42
-
43
- pad_size=(p_len - len(f0) + 1) // 2
44
- if(pad_size>0 or p_len - len(f0) - pad_size>0):
45
- f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant')
46
-
47
- f0 *= pow(2, f0_up_key / 12)
48
- f0_mel = 1127 * np.log(1 + f0 / 700)
49
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1
50
- f0_mel[f0_mel <= 1] = 1
51
- f0_mel[f0_mel > 255] = 255
52
- f0_coarse = np.rint(f0_mel).astype(np.int)
53
- return f0_coarse, f0
54
-
55
- def clean_pitch(input_pitch):
56
- num_nan = np.sum(input_pitch == 1)
57
- if num_nan / len(input_pitch) > 0.9:
58
- input_pitch[input_pitch != 1] = 1
59
- return input_pitch
60
-
61
-
62
- def plt_pitch(input_pitch):
63
- input_pitch = input_pitch.astype(float)
64
- input_pitch[input_pitch == 1] = np.nan
65
- return input_pitch
66
-
67
-
68
- def f0_to_pitch(ff):
69
- f0_pitch = 69 + 12 * np.log2(ff / 440)
70
- return f0_pitch
71
-
72
-
73
- def fill_a_to_b(a, b):
74
- if len(a) < len(b):
75
- for _ in range(0, len(b) - len(a)):
76
- a.append(a[0])
77
-
78
-
79
- def mkdir(paths: list):
80
- for path in paths:
81
- if not os.path.exists(path):
82
- os.mkdir(path)
83
-
84
-
85
- class VitsSvc(object):
86
- def __init__(self):
87
- self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
88
- self.SVCVITS = None
89
- self.hps = None
90
- self.speakers = None
91
- self.hubert_soft = utils.get_hubert_model()
92
-
93
- def set_device(self, device):
94
- self.device = torch.device(device)
95
- self.hubert_soft.to(self.device)
96
- if self.SVCVITS != None:
97
- self.SVCVITS.to(self.device)
98
-
99
- def loadCheckpoint(self, path):
100
- self.hps = utils.get_hparams_from_file(f"checkpoints/{path}/config.json")
101
- self.SVCVITS = SynthesizerTrn(
102
- self.hps.data.filter_length // 2 + 1,
103
- self.hps.train.segment_size // self.hps.data.hop_length,
104
- **self.hps.model)
105
- _ = utils.load_checkpoint(f"checkpoints/{path}/model.pth", self.SVCVITS, None)
106
- _ = self.SVCVITS.eval().to(self.device)
107
- self.speakers = self.hps.spk
108
-
109
- def get_units(self, source, sr):
110
- source = source.unsqueeze(0).to(self.device)
111
- with torch.inference_mode():
112
- units = self.hubert_soft.units(source)
113
- return units
114
-
115
-
116
- def get_unit_pitch(self, in_path, tran):
117
- source, sr = torchaudio.load(in_path)
118
- source = torchaudio.functional.resample(source, sr, 16000)
119
- if len(source.shape) == 2 and source.shape[1] >= 2:
120
- source = torch.mean(source, dim=0).unsqueeze(0)
121
- soft = self.get_units(source, sr).squeeze(0).cpu().numpy()
122
- f0_coarse, f0 = get_f0(source.cpu().numpy()[0], soft.shape[0]*2, tran)
123
- return soft, f0
124
-
125
- def infer(self, speaker_id, tran, raw_path):
126
- speaker_id = self.speakers[speaker_id]
127
- sid = torch.LongTensor([int(speaker_id)]).to(self.device).unsqueeze(0)
128
- soft, pitch = self.get_unit_pitch(raw_path, tran)
129
- f0 = torch.FloatTensor(clean_pitch(pitch)).unsqueeze(0).to(self.device)
130
- stn_tst = torch.FloatTensor(soft)
131
- with torch.no_grad():
132
- x_tst = stn_tst.unsqueeze(0).to(self.device)
133
- x_tst = torch.repeat_interleave(x_tst, repeats=2, dim=1).transpose(1, 2)
134
- audio = self.SVCVITS.infer(x_tst, f0=f0, g=sid)[0,0].data.float()
135
- return audio, audio.shape[-1]
136
-
137
- def inference(self,srcaudio,chara,tran,slice_db):
138
- sampling_rate, audio = srcaudio
139
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
140
- if len(audio.shape) > 1:
141
- audio = librosa.to_mono(audio.transpose(1, 0))
142
- if sampling_rate != 16000:
143
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
144
- soundfile.write("tmpwav.wav", audio, 16000, format="wav")
145
- chunks = slicer.cut("tmpwav.wav", db_thresh=slice_db)
146
- audio_data, audio_sr = slicer.chunks2audio("tmpwav.wav", chunks)
147
- audio = []
148
- for (slice_tag, data) in audio_data:
149
- length = int(np.ceil(len(data) / audio_sr * self.hps.data.sampling_rate))
150
- raw_path = io.BytesIO()
151
- soundfile.write(raw_path, data, audio_sr, format="wav")
152
- raw_path.seek(0)
153
- if slice_tag:
154
- _audio = np.zeros(length)
155
- else:
156
- out_audio, out_sr = self.infer(chara, tran, raw_path)
157
- _audio = out_audio.cpu().numpy()
158
- audio.extend(list(_audio))
159
- audio = (np.array(audio) * 32768.0).astype('int16')
160
- return (self.hps.data.sampling_rate,audio)
 
spaces/Alpaca233/ChatPDF-GUI/README.md DELETED
@@ -1,8 +0,0 @@
1
- ---
2
- sdk: gradio
3
- emoji: 🚀
4
- colorFrom: red
5
- colorTo: red
6
- pinned: false
7
- app_file: app.py
8
- ---
 
spaces/Amrrs/DragGan-Inversion/stylegan_human/training_scripts/sg3/training/dataset.py DELETED
@@ -1,274 +0,0 @@
1
- # Copyright (c) SenseTime Research. All rights reserved.
2
-
3
- # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
4
- #
5
- # NVIDIA CORPORATION and its licensors retain all intellectual property
6
- # and proprietary rights in and to this software, related documentation
7
- # and any modifications thereto. Any use, reproduction, disclosure or
8
- # distribution of this software and related documentation without an express
9
- # license agreement from NVIDIA CORPORATION is strictly prohibited.
10
-
11
- """Streaming images and labels from datasets created with dataset_tool.py."""
12
-
13
- import os
14
- import numpy as np
15
- import zipfile
16
- import PIL.Image
17
- import json
18
- import torch
19
- import dnnlib
20
- from petrel_client.client import Client
21
- import cv2
22
-
23
-
24
- try:
25
- import pyspng
26
- except ImportError:
27
- pyspng = None
28
-
29
- # ----------------------------------------------------------------------------
30
-
31
-
32
- class Dataset(torch.utils.data.Dataset):
33
- def __init__(self,
34
- name, # Name of the dataset.
35
- raw_shape, # Shape of the raw image data (NCHW).
36
- # Artificially limit the size of the dataset. None = no limit. Applied before xflip.
37
- max_size=None,
38
- # Enable conditioning labels? False = label dimension is zero.
39
- use_labels=False,
40
- # Artificially double the size of the dataset via x-flips. Applied after max_size.
41
- xflip=False,
42
- # Random seed to use when applying max_size.
43
- random_seed=0,
44
- square=False,
45
- ):
46
- print('Inside Dataset')
47
- self._name = name
48
- self._raw_shape = list(raw_shape)
49
- self._use_labels = use_labels
50
- self._raw_labels = None
51
- self._label_shape = None
52
- self._square = square
53
- print("inside dataset, _square: ", self._square)
54
-
55
- # Apply max_size.
56
- self._raw_idx = np.arange(self._raw_shape[0], dtype=np.int64)
57
- if (max_size is not None) and (self._raw_idx.size > max_size):
58
- np.random.RandomState(random_seed).shuffle(self._raw_idx)
59
- self._raw_idx = np.sort(self._raw_idx[:max_size])
60
-
61
- # Apply xflip.
62
- self._xflip = np.zeros(self._raw_idx.size, dtype=np.uint8)
63
- if xflip:
64
- self._raw_idx = np.tile(self._raw_idx, 2)
65
- self._xflip = np.concatenate(
66
- [self._xflip, np.ones_like(self._xflip)])
67
-
68
- def _get_raw_labels(self):
69
- if self._raw_labels is None:
70
- self._raw_labels = self._load_raw_labels() if self._use_labels else None
71
- if self._raw_labels is None:
72
- self._raw_labels = np.zeros(
73
- [self._raw_shape[0], 0], dtype=np.float32)
74
- assert isinstance(self._raw_labels, np.ndarray)
75
- assert self._raw_labels.shape[0] == self._raw_shape[0]
76
- assert self._raw_labels.dtype in [np.float32, np.int64]
77
- if self._raw_labels.dtype == np.int64:
78
- assert self._raw_labels.ndim == 1
79
- assert np.all(self._raw_labels >= 0)
80
- return self._raw_labels
81
-
82
- def close(self): # to be overridden by subclass
83
- pass
84
-
85
- def _load_raw_image(self, raw_idx): # to be overridden by subclass
86
- raise NotImplementedError
87
-
88
- def _load_raw_labels(self): # to be overridden by subclass
89
- raise NotImplementedError
90
-
91
- def __getstate__(self):
92
- return dict(self.__dict__, _raw_labels=None)
93
-
94
- def __del__(self):
95
- try:
96
- self.close()
97
- except:
98
- pass
99
-
100
- def __len__(self):
101
- return self._raw_idx.size
102
-
103
- def __getitem__(self, idx):
104
- image = self._load_raw_image(self._raw_idx[idx])
105
- assert isinstance(image, np.ndarray)
106
- assert list(image.shape) == self.image_shape
107
- assert image.dtype == np.uint8
108
- if self._xflip[idx]:
109
- assert image.ndim == 3 # CHW
110
- image = image[:, :, ::-1]
111
- return image.copy(), self.get_label(idx)
112
-
113
- def get_label(self, idx):
114
- label = self._get_raw_labels()[self._raw_idx[idx]]
115
- if label.dtype == np.int64:
116
- onehot = np.zeros(self.label_shape, dtype=np.float32)
117
- onehot[label] = 1
118
- label = onehot
119
- return label.copy()
120
-
121
- def get_details(self, idx):
122
- d = dnnlib.EasyDict()
123
- d.raw_idx = int(self._raw_idx[idx])
124
- d.xflip = (int(self._xflip[idx]) != 0)
125
- d.raw_label = self._get_raw_labels()[d.raw_idx].copy()
126
- return d
127
-
128
- @property
129
- def name(self):
130
- return self._name
131
-
132
- @property
133
- def image_shape(self):
134
- return list(self._raw_shape[1:])
135
-
136
- @property
137
- def num_channels(self):
138
- assert len(self.image_shape) == 3 # CHW
139
- return self.image_shape[0]
140
-
141
- @property
142
- def resolution(self):
143
- assert len(self.image_shape) == 3 # CHW
144
- if self._square:
145
- assert self.image_shape[1] == self.image_shape[2]
146
- else:
147
- assert self.image_shape[1] == self.image_shape[2] * 2
148
- return self.image_shape[1]
149
-
150
- @property
151
- def label_shape(self):
152
- if self._label_shape is None:
153
- raw_labels = self._get_raw_labels()
154
- if raw_labels.dtype == np.int64:
155
- self._label_shape = [int(np.max(raw_labels)) + 1]
156
- else:
157
- self._label_shape = raw_labels.shape[1:]
158
- return list(self._label_shape)
159
-
160
- @property
161
- def label_dim(self):
162
- assert len(self.label_shape) == 1
163
- return self.label_shape[0]
164
-
165
- @property
166
- def has_labels(self):
167
- return any(x != 0 for x in self.label_shape)
168
-
169
- @property
170
- def has_onehot_labels(self):
171
- return self._get_raw_labels().dtype == np.int64
172
-
173
- # ----------------------------------------------------------------------------
174
-
175
-
176
- class ImageFolderDataset(Dataset):
177
- def __init__(self,
178
- path, # Path to directory or zip.
179
- # Ensure specific resolution, None = highest available.
180
- resolution=None,
181
- ceph=False,
182
- square=False,
183
- # Additional arguments for the Dataset base class.
184
- **super_kwargs,
185
- ):
186
- self._path = path
187
- self._zipfile = None
188
- self._square = square
189
-
190
- if os.path.isdir(self._path):
191
- self._type = 'dir'
192
- self._all_fnames = {os.path.relpath(os.path.join(
193
- root, fname), start=self._path) for root, _dirs, files in os.walk(self._path) for fname in files}
194
- elif self._file_ext(self._path) == '.zip':
195
- self._type = 'zip'
196
- self._all_fnames = set(self._get_zipfile().namelist())
197
- else:
198
- raise IOError('Path must point to a directory or zip')
199
-
200
- PIL.Image.init()
201
- self._image_fnames = sorted(
202
- fname for fname in self._all_fnames if self._file_ext(fname) in PIL.Image.EXTENSION)
203
- if len(self._image_fnames) == 0:
204
- raise IOError('No image files found in the specified path')
205
-
206
- name = os.path.splitext(os.path.basename(self._path))[0]
207
- raw_shape = [len(self._image_fnames)] + \
208
- list(self._load_raw_image(0).shape)
209
- # if resolution is not None and (raw_shape[2] != resolution or raw_shape[3] != resolution):
210
- # raise IOError('Image files do not match the specified resolution')
211
- if resolution is not None:
212
- if self._square:
213
- raw_shape[2] = raw_shape[3] = resolution
214
- else:
215
- raw_shape[2] = resolution
216
- raw_shape[3] = resolution // 2
217
- # print(raw_shape)
218
- super().__init__(name=name, raw_shape=raw_shape, square=square, **super_kwargs)
219
-
220
- @staticmethod
221
- def _file_ext(fname):
222
- return os.path.splitext(fname)[1].lower()
223
-
224
- def _get_zipfile(self):
225
- assert self._type == 'zip'
226
- if self._zipfile is None:
227
- self._zipfile = zipfile.ZipFile(self._path)
228
- return self._zipfile
229
-
230
- def _open_file(self, fname):
231
- if self._type == 'dir':
232
- return open(os.path.join(self._path, fname), 'rb')
233
- if self._type == 'zip':
234
- return self._get_zipfile().open(fname, 'r')
235
- return None
236
-
237
- def close(self):
238
- try:
239
- if self._zipfile is not None:
240
- self._zipfile.close()
241
- finally:
242
- self._zipfile = None
243
-
244
- def __getstate__(self):
245
- return dict(super().__getstate__(), _zipfile=None)
246
-
247
- def _load_raw_image(self, raw_idx):
248
- fname = self._image_fnames[raw_idx]
249
- with self._open_file(fname) as f:
250
- if pyspng is not None and self._file_ext(fname) == '.png':
251
- image = pyspng.load(f.read())
252
- else:
253
- image = np.array(PIL.Image.open(f))
254
- if image.ndim == 2:
255
- image = image[:, :, np.newaxis] # HW => HWC
256
- image = image.transpose(2, 0, 1) # HWC => CHW
257
- return image
258
-
259
- def _load_raw_labels(self):
260
- fname = 'dataset.json'
261
- if fname not in self._all_fnames:
262
- return None
263
- with self._open_file(fname) as f:
264
- labels = json.load(f)['labels']
265
- if labels is None:
266
- return None
267
- labels = dict(labels)
268
- labels = [labels[fname.replace('\\', '/')]
269
- for fname in self._image_fnames]
270
- labels = np.array(labels)
271
- labels = labels.astype({1: np.int64, 2: np.float32}[labels.ndim])
272
- return labels
273
-
274
- # ----------------------------------------------------------------------------
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/ddim/__init__.py DELETED
File without changes
spaces/Andy1621/uniformer_image_detection/configs/paa/paa_r50_fpn_mstrain_3x_coco.py DELETED
@@ -1,20 +0,0 @@
1
- _base_ = './paa_r50_fpn_1x_coco.py'
2
- img_norm_cfg = dict(
3
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
4
- train_pipeline = [
5
- dict(type='LoadImageFromFile'),
6
- dict(type='LoadAnnotations', with_bbox=True),
7
- dict(
8
- type='Resize',
9
- img_scale=[(1333, 640), (1333, 800)],
10
- multiscale_mode='range',
11
- keep_ratio=True),
12
- dict(type='RandomFlip', flip_ratio=0.5),
13
- dict(type='Normalize', **img_norm_cfg),
14
- dict(type='Pad', size_divisor=32),
15
- dict(type='DefaultFormatBundle'),
16
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
17
- ]
18
- data = dict(train=dict(pipeline=train_pipeline))
19
- lr_config = dict(step=[28, 34])
20
- runner = dict(type='EpochBasedRunner', max_epochs=36)
 
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/misc.py DELETED
@@ -1,377 +0,0 @@
1
- # Copyright (c) OpenMMLab. All rights reserved.
2
- import collections.abc
3
- import functools
4
- import itertools
5
- import subprocess
6
- import warnings
7
- from collections import abc
8
- from importlib import import_module
9
- from inspect import getfullargspec
10
- from itertools import repeat
11
-
12
-
13
- # From PyTorch internals
14
- def _ntuple(n):
15
-
16
- def parse(x):
17
- if isinstance(x, collections.abc.Iterable):
18
- return x
19
- return tuple(repeat(x, n))
20
-
21
- return parse
22
-
23
-
24
- to_1tuple = _ntuple(1)
25
- to_2tuple = _ntuple(2)
26
- to_3tuple = _ntuple(3)
27
- to_4tuple = _ntuple(4)
28
- to_ntuple = _ntuple
29
-
30
-
31
- def is_str(x):
32
- """Whether the input is an string instance.
33
-
34
- Note: This method is deprecated since python 2 is no longer supported.
35
- """
36
- return isinstance(x, str)
37
-
38
-
39
- def import_modules_from_strings(imports, allow_failed_imports=False):
40
- """Import modules from the given list of strings.
41
-
42
- Args:
43
- imports (list | str | None): The given module names to be imported.
44
- allow_failed_imports (bool): If True, the failed imports will return
45
- None. Otherwise, an ImportError is raise. Default: False.
46
-
47
- Returns:
48
- list[module] | module | None: The imported modules.
49
-
50
- Examples:
51
- >>> osp, sys = import_modules_from_strings(
52
- ... ['os.path', 'sys'])
53
- >>> import os.path as osp_
54
- >>> import sys as sys_
55
- >>> assert osp == osp_
56
- >>> assert sys == sys_
57
- """
58
- if not imports:
59
- return
60
- single_import = False
61
- if isinstance(imports, str):
62
- single_import = True
63
- imports = [imports]
64
- if not isinstance(imports, list):
65
- raise TypeError(
66
- f'custom_imports must be a list but got type {type(imports)}')
67
- imported = []
68
- for imp in imports:
69
- if not isinstance(imp, str):
70
- raise TypeError(
71
- f'{imp} is of type {type(imp)} and cannot be imported.')
72
- try:
73
- imported_tmp = import_module(imp)
74
- except ImportError:
75
- if allow_failed_imports:
76
- warnings.warn(f'{imp} failed to import and is ignored.',
77
- UserWarning)
78
- imported_tmp = None
79
- else:
80
- raise ImportError
81
- imported.append(imported_tmp)
82
- if single_import:
83
- imported = imported[0]
84
- return imported
85
-
86
-
87
- def iter_cast(inputs, dst_type, return_type=None):
88
- """Cast elements of an iterable object into some type.
89
-
90
- Args:
91
- inputs (Iterable): The input object.
92
- dst_type (type): Destination type.
93
- return_type (type, optional): If specified, the output object will be
94
- converted to this type, otherwise an iterator.
95
-
96
- Returns:
97
- iterator or specified type: The converted object.
98
- """
99
- if not isinstance(inputs, abc.Iterable):
100
- raise TypeError('inputs must be an iterable object')
101
- if not isinstance(dst_type, type):
102
- raise TypeError('"dst_type" must be a valid type')
103
-
104
- out_iterable = map(dst_type, inputs)
105
-
106
- if return_type is None:
107
- return out_iterable
108
- else:
109
- return return_type(out_iterable)
110
-
111
-
112
- def list_cast(inputs, dst_type):
113
- """Cast elements of an iterable object into a list of some type.
114
-
115
- A partial method of :func:`iter_cast`.
116
- """
117
- return iter_cast(inputs, dst_type, return_type=list)
118
-
119
-
120
- def tuple_cast(inputs, dst_type):
121
- """Cast elements of an iterable object into a tuple of some type.
122
-
123
- A partial method of :func:`iter_cast`.
124
- """
125
- return iter_cast(inputs, dst_type, return_type=tuple)
126
-
127
-
128
- def is_seq_of(seq, expected_type, seq_type=None):
129
- """Check whether it is a sequence of some type.
130
-
131
- Args:
132
- seq (Sequence): The sequence to be checked.
133
- expected_type (type): Expected type of sequence items.
134
- seq_type (type, optional): Expected sequence type.
135
-
136
- Returns:
137
- bool: Whether the sequence is valid.
138
- """
139
- if seq_type is None:
140
- exp_seq_type = abc.Sequence
141
- else:
142
- assert isinstance(seq_type, type)
143
- exp_seq_type = seq_type
144
- if not isinstance(seq, exp_seq_type):
145
- return False
146
- for item in seq:
147
- if not isinstance(item, expected_type):
148
- return False
149
- return True
150
-
151
-
152
- def is_list_of(seq, expected_type):
153
- """Check whether it is a list of some type.
154
-
155
- A partial method of :func:`is_seq_of`.
156
- """
157
- return is_seq_of(seq, expected_type, seq_type=list)
158
-
159
-
160
- def is_tuple_of(seq, expected_type):
161
- """Check whether it is a tuple of some type.
162
-
163
- A partial method of :func:`is_seq_of`.
164
- """
165
- return is_seq_of(seq, expected_type, seq_type=tuple)
166
-
167
-
168
- def slice_list(in_list, lens):
169
- """Slice a list into several sub lists by a list of given length.
170
-
171
- Args:
172
- in_list (list): The list to be sliced.
173
- lens(int or list): The expected length of each out list.
174
-
175
- Returns:
176
- list: A list of sliced list.
177
- """
178
- if isinstance(lens, int):
179
- assert len(in_list) % lens == 0
180
- lens = [lens] * int(len(in_list) / lens)
181
- if not isinstance(lens, list):
182
- raise TypeError('"indices" must be an integer or a list of integers')
183
- elif sum(lens) != len(in_list):
184
- raise ValueError('sum of lens and list length does not '
185
- f'match: {sum(lens)} != {len(in_list)}')
186
- out_list = []
187
- idx = 0
188
- for i in range(len(lens)):
189
- out_list.append(in_list[idx:idx + lens[i]])
190
- idx += lens[i]
191
- return out_list
192
-
193
-
194
- def concat_list(in_list):
195
- """Concatenate a list of list into a single list.
196
-
197
- Args:
198
- in_list (list): The list of list to be merged.
199
-
200
- Returns:
201
- list: The concatenated flat list.
202
- """
203
- return list(itertools.chain(*in_list))
204
-
205
-
206
- def check_prerequisites(
207
- prerequisites,
208
- checker,
209
- msg_tmpl='Prerequisites "{}" are required in method "{}" but not '
210
- 'found, please install them first.'): # yapf: disable
211
- """A decorator factory to check if prerequisites are satisfied.
212
-
213
- Args:
214
- prerequisites (str of list[str]): Prerequisites to be checked.
215
- checker (callable): The checker method that returns True if a
216
- prerequisite is meet, False otherwise.
217
- msg_tmpl (str): The message template with two variables.
218
-
219
- Returns:
220
- decorator: A specific decorator.
221
- """
222
-
223
- def wrap(func):
224
-
225
- @functools.wraps(func)
226
- def wrapped_func(*args, **kwargs):
227
- requirements = [prerequisites] if isinstance(
228
- prerequisites, str) else prerequisites
229
- missing = []
230
- for item in requirements:
231
- if not checker(item):
232
- missing.append(item)
233
- if missing:
234
- print(msg_tmpl.format(', '.join(missing), func.__name__))
235
- raise RuntimeError('Prerequisites not meet.')
236
- else:
237
- return func(*args, **kwargs)
238
-
239
- return wrapped_func
240
-
241
- return wrap
242
-
243
-
244
- def _check_py_package(package):
245
- try:
246
- import_module(package)
247
- except ImportError:
248
- return False
249
- else:
250
- return True
251
-
252
-
253
- def _check_executable(cmd):
254
- if subprocess.call(f'which {cmd}', shell=True) != 0:
255
- return False
256
- else:
257
- return True
258
-
259
-
260
- def requires_package(prerequisites):
261
- """A decorator to check if some python packages are installed.
262
-
263
- Example:
264
- >>> @requires_package('numpy')
265
- >>> func(arg1, args):
266
- >>> return numpy.zeros(1)
267
- array([0.])
268
- >>> @requires_package(['numpy', 'non_package'])
269
- >>> func(arg1, args):
270
- >>> return numpy.zeros(1)
271
- ImportError
272
- """
273
- return check_prerequisites(prerequisites, checker=_check_py_package)
274
-
275
-
276
- def requires_executable(prerequisites):
277
- """A decorator to check if some executable files are installed.
278
-
279
- Example:
280
- >>> @requires_executable('ffmpeg')
281
- >>> func(arg1, args):
282
- >>> print(1)
283
- 1
284
- """
285
- return check_prerequisites(prerequisites, checker=_check_executable)
286
-
287
-
288
- def deprecated_api_warning(name_dict, cls_name=None):
289
- """A decorator to check if some arguments are deprecate and try to replace
290
- deprecate src_arg_name to dst_arg_name.
291
-
292
- Args:
293
- name_dict(dict):
294
- key (str): Deprecate argument names.
295
- val (str): Expected argument names.
296
-
297
- Returns:
298
- func: New function.
299
- """
300
-
301
- def api_warning_wrapper(old_func):
302
-
303
- @functools.wraps(old_func)
304
- def new_func(*args, **kwargs):
305
- # get the arg spec of the decorated method
306
- args_info = getfullargspec(old_func)
307
- # get name of the function
308
- func_name = old_func.__name__
309
- if cls_name is not None:
310
- func_name = f'{cls_name}.{func_name}'
311
- if args:
312
- arg_names = args_info.args[:len(args)]
313
- for src_arg_name, dst_arg_name in name_dict.items():
314
- if src_arg_name in arg_names:
315
- warnings.warn(
316
- f'"{src_arg_name}" is deprecated in '
317
- f'`{func_name}`, please use "{dst_arg_name}" '
318
- 'instead')
319
- arg_names[arg_names.index(src_arg_name)] = dst_arg_name
320
- if kwargs:
321
- for src_arg_name, dst_arg_name in name_dict.items():
322
- if src_arg_name in kwargs:
323
-
324
- assert dst_arg_name not in kwargs, (
325
- f'The expected behavior is to replace '
326
- f'the deprecated key `{src_arg_name}` to '
327
- f'new key `{dst_arg_name}`, but got them '
328
- f'in the arguments at the same time, which '
329
- f'is confusing. `{src_arg_name} will be '
330
- f'deprecated in the future, please '
331
- f'use `{dst_arg_name}` instead.')
332
-
333
- warnings.warn(
334
- f'"{src_arg_name}" is deprecated in '
335
- f'`{func_name}`, please use "{dst_arg_name}" '
336
- 'instead')
337
- kwargs[dst_arg_name] = kwargs.pop(src_arg_name)
338
-
339
- # apply converted arguments to the decorated method
340
- output = old_func(*args, **kwargs)
341
- return output
342
-
343
- return new_func
344
-
345
- return api_warning_wrapper
346
-
347
-
348
- def is_method_overridden(method, base_class, derived_class):
349
- """Check if a method of base class is overridden in derived class.
350
-
351
- Args:
352
- method (str): the method name to check.
353
- base_class (type): the class of the base class.
354
- derived_class (type | Any): the class or instance of the derived class.
355
- """
356
- assert isinstance(base_class, type), \
357
- "base_class doesn't accept instance, Please pass class instead."
358
-
359
- if not isinstance(derived_class, type):
360
- derived_class = derived_class.__class__
361
-
362
- base_method = getattr(base_class, method)
363
- derived_method = getattr(derived_class, method)
364
- return derived_method != base_method
365
-
366
-
367
- def has_method(obj: object, method: str) -> bool:
368
- """Check whether the object has a method.
369
-
370
- Args:
371
- method (str): The method name to check.
372
- obj (object): The object to check.
373
-
374
- Returns:
375
- bool: True if the object has the method else False.
376
- """
377
- return hasattr(obj, method) and callable(getattr(obj, method))
 
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/video/processing.py DELETED
@@ -1,160 +0,0 @@
- # Copyright (c) OpenMMLab. All rights reserved.
- import os
- import os.path as osp
- import subprocess
- import tempfile
-
- from annotator.uniformer.mmcv.utils import requires_executable
-
-
- @requires_executable('ffmpeg')
- def convert_video(in_file,
-                   out_file,
-                   print_cmd=False,
-                   pre_options='',
-                   **kwargs):
-     """Convert a video with ffmpeg.
-
-     This provides a general api to ffmpeg, the executed command is::
-
-         `ffmpeg -y <pre_options> -i <in_file> <options> <out_file>`
-
-     Options(kwargs) are mapped to ffmpeg commands with the following rules:
-
-     - key=val: "-key val"
-     - key=True: "-key"
-     - key=False: ""
-
-     Args:
-         in_file (str): Input video filename.
-         out_file (str): Output video filename.
-         pre_options (str): Options appears before "-i <in_file>".
-         print_cmd (bool): Whether to print the final ffmpeg command.
-     """
-     options = []
-     for k, v in kwargs.items():
-         if isinstance(v, bool):
-             if v:
-                 options.append(f'-{k}')
-         elif k == 'log_level':
-             assert v in [
-                 'quiet', 'panic', 'fatal', 'error', 'warning', 'info',
-                 'verbose', 'debug', 'trace'
-             ]
-             options.append(f'-loglevel {v}')
-         else:
-             options.append(f'-{k} {v}')
-     cmd = f'ffmpeg -y {pre_options} -i {in_file} {" ".join(options)} ' \
-           f'{out_file}'
-     if print_cmd:
-         print(cmd)
-     subprocess.call(cmd, shell=True)
-
-
- @requires_executable('ffmpeg')
- def resize_video(in_file,
-                  out_file,
-                  size=None,
-                  ratio=None,
-                  keep_ar=False,
-                  log_level='info',
-                  print_cmd=False):
-     """Resize a video.
-
-     Args:
-         in_file (str): Input video filename.
-         out_file (str): Output video filename.
-         size (tuple): Expected size (w, h), eg, (320, 240) or (320, -1).
-         ratio (tuple or float): Expected resize ratio, (2, 0.5) means
-             (w*2, h*0.5).
-         keep_ar (bool): Whether to keep original aspect ratio.
-         log_level (str): Logging level of ffmpeg.
-         print_cmd (bool): Whether to print the final ffmpeg command.
-     """
-     if size is None and ratio is None:
-         raise ValueError('expected size or ratio must be specified')
-     if size is not None and ratio is not None:
-         raise ValueError('size and ratio cannot be specified at the same time')
-     options = {'log_level': log_level}
-     if size:
-         if not keep_ar:
-             options['vf'] = f'scale={size[0]}:{size[1]}'
-         else:
-             options['vf'] = f'scale=w={size[0]}:h={size[1]}:' \
-                             'force_original_aspect_ratio=decrease'
-     else:
-         if not isinstance(ratio, tuple):
-             ratio = (ratio, ratio)
-         options['vf'] = f'scale="trunc(iw*{ratio[0]}):trunc(ih*{ratio[1]})"'
-     convert_video(in_file, out_file, print_cmd, **options)
-
-
- @requires_executable('ffmpeg')
- def cut_video(in_file,
-               out_file,
-               start=None,
-               end=None,
-               vcodec=None,
-               acodec=None,
-               log_level='info',
-               print_cmd=False):
-     """Cut a clip from a video.
-
-     Args:
-         in_file (str): Input video filename.
-         out_file (str): Output video filename.
-         start (None or float): Start time (in seconds).
-         end (None or float): End time (in seconds).
-         vcodec (None or str): Output video codec, None for unchanged.
-         acodec (None or str): Output audio codec, None for unchanged.
-         log_level (str): Logging level of ffmpeg.
-         print_cmd (bool): Whether to print the final ffmpeg command.
-     """
-     options = {'log_level': log_level}
-     if vcodec is None:
-         options['vcodec'] = 'copy'
-     if acodec is None:
-         options['acodec'] = 'copy'
-     if start:
-         options['ss'] = start
-     else:
-         start = 0
-     if end:
-         options['t'] = end - start
-     convert_video(in_file, out_file, print_cmd, **options)
-
-
- @requires_executable('ffmpeg')
- def concat_video(video_list,
-                  out_file,
-                  vcodec=None,
-                  acodec=None,
-                  log_level='info',
-                  print_cmd=False):
-     """Concatenate multiple videos into a single one.
-
-     Args:
-         video_list (list): A list of video filenames
-         out_file (str): Output video filename
-         vcodec (None or str): Output video codec, None for unchanged
-         acodec (None or str): Output audio codec, None for unchanged
-         log_level (str): Logging level of ffmpeg.
-         print_cmd (bool): Whether to print the final ffmpeg command.
-     """
-     tmp_filehandler, tmp_filename = tempfile.mkstemp(suffix='.txt', text=True)
-     with open(tmp_filename, 'w') as f:
-         for filename in video_list:
-             f.write(f'file {osp.abspath(filename)}\n')
-     options = {'log_level': log_level}
-     if vcodec is None:
-         options['vcodec'] = 'copy'
-     if acodec is None:
-         options['acodec'] = 'copy'
-     convert_video(
-         tmp_filename,
-         out_file,
-         print_cmd,
-         pre_options='-f concat -safe 0',
-         **options)
-     os.close(tmp_filehandler)
-     os.remove(tmp_filename)
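
The module deleted above is a thin wrapper around the ffmpeg CLI: each keyword argument is mapped to a flag (key=val becomes `-key val`, key=True becomes `-key`, key=False is dropped). A hedged usage sketch follows, assuming ffmpeg is on PATH and that the module, removed in this commit, is still available to import from the path shown in the diff; every file name below is a placeholder.

```python
from annotator.uniformer.mmcv.video.processing import (
    concat_video, convert_video, cut_video, resize_video)

# Builds and runs: ffmpeg -y  -i in.mp4 -vcodec libx264 -an out.mp4
convert_video('in.mp4', 'out.mp4', print_cmd=True, vcodec='libx264', an=True)

# Scale to 640 px wide; ffmpeg chooses the height because -1 is passed.
resize_video('in.mp4', 'small.mp4', size=(640, -1))

# Keep only the 5 s - 20 s window; codecs default to 'copy', so no re-encode.
cut_video('in.mp4', 'clip.mp4', start=5, end=20)

# Join clips through ffmpeg's concat demuxer via a temporary list file.
concat_video(['clip_a.mp4', 'clip_b.mp4'], 'joined.mp4')
```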
 
 
spaces/Benson/text-generation/Examples/3gp Video Download.md DELETED
@@ -1,210 +0,0 @@
1
-
2
- <h1>Cómo descargar vídeos 3GP desde Internet</h1>
3
- <p>¿Desea ver videos en su teléfono móvil sin preocuparse por el uso de datos o el espacio de almacenamiento? Si es así, es posible que esté interesado en descargar vídeos en formato 3GP. 3GP es un formato de archivo multimedia que fue desarrollado por el Proyecto de Asociación de Tercera Generación (3GPP) para su uso en teléfonos móviles 3G. Es un formato comprimido que puede almacenar secuencias de vídeo y audio con bajo ancho de banda y requisitos de datos. También es compatible con algunos teléfonos 2G y 4G. </p>
4
- <h2>3gp video download</h2><br /><p><b><b>Download Zip</b> &#127379; <a href="https://bltlly.com/2v6Myk">https://bltlly.com/2v6Myk</a></b></p><br /><br />
5
- <p>Descargar vídeos en formato 3GP puede ser útil por varias razones. Puedes guardar tus videos favoritos de YouTube, Facebook, Instagram y otros sitios para verlos sin conexión. También puede convertir sus videos existentes a formato 3GP para ahorrar espacio en su teléfono o compartirlos con sus amigos. Sin embargo, no todos los sitios web o software soportan formato 3GP, por lo que es posible que necesite ayuda para encontrar los mejores sitios o herramientas para descargar videos 3GP. </p>
6
- <p>En este artículo, le presentaremos los 9 mejores sitios para descargar películas y videos 3GP de Internet. También explicaremos qué es un archivo 3GP, cómo abrirlo y cómo convertirlo a otros formatos. Al final de este artículo, podrás descargar cualquier video que quieras en formato 3GP con facilidad. </p>
7
- <h2>Los 9 mejores sitios para descargar películas y videos 3GP</h2>
8
- <p>Hay muchos sitios web que ofrecen descargas de video gratuitas en varios formatos, incluyendo 3GP. Sin embargo, no todos son confiables, seguros o fáciles de usar. Algunos pueden tener cargos ocultos, malware o anuncios molestos. Algunos pueden tener opciones limitadas, baja calidad o velocidad lenta. Para ayudarle a evitar estos problemas, hemos seleccionado los 9 mejores sitios que creemos que son los mejores para descargar películas y videos 3GP. Aquí están:</p>
9
- <p></p>
10
- <h3>Descargar 4.cc</h3>
11
-
12
- <p>Algunas características de Download4.cc son:</p>
13
- <ul>
14
- <li>Soporta más de 1000 sitios, incluyendo YouTube, Twitter, Facebook, Instagram, TikTok, Vimeo, Dailymotion, etc.</li>
15
- <li>Puede descargar vídeos en varios formatos, como MP4, MP3, AVI, MOV, WAV, etc.</li>
16
- <li>Puede descargar vídeos en modo batch, hasta cinco a la vez. </li>
17
- <li>Puede recortar y combinar sus vídeos descargados. </li>
18
- <li> Tiene un rendimiento rápido y estable. </li>
19
- </ul>
20
- <h3>HitPaw</h3>
21
- <p>HitPaw es otro gran sitio web para descargar películas 3GP en pasos fáciles. Es una guía completa que le proporciona instrucciones detalladas sobre cómo descargar películas de diferentes fuentes, como YouTube, Netflix, Amazon Prime Video, Hulu, Disney, etc. También le da consejos sobre cómo elegir el mejor software descargador de películas, cómo evitar problemas legales y cómo disfrutar de sus películas descargadas en diferentes dispositivos. </p>
22
- <p>Algunas características de HitPaw son:</p>
23
- <ul>
24
- <li>Cubre varios géneros, como acción, comedia, terror, romance, ciencia ficción, etc.</li>
25
- <li> Proporciona capturas de pantalla y vídeos para ilustrar los pasos. </li>
26
- <li>Recomienda el mejor software de descarga de películas para cada fuente, como 4K Video Downloader, Y2Mate, VideoSolo Inovideo, etc.</li>
27
- <li>Explica los pros y los contras de cada software, tales como precio, velocidad, calidad, características, etc.</li>
28
- <li>Ofrece una prueba gratuita para algunos de los programas. </li>
29
- </ul>
30
- <h3>SaveTheVideo</h3>
31
- <p>SaveTheVideo es un descargador y convertidor de video en línea que puede ayudarlo a descargar videos 3GP de Instagram, Vimeo, Dailymotion y más. También es gratis, en línea y fácil de usar. Solo tiene que introducir la URL del vídeo que desea descargar en el sitio web, y haga clic en el botón Descargar. A continuación, puede seleccionar el formato de salida como 3GP y la calidad como HD o SD. El sitio web comenzará a descargar el video en unos segundos. </p>
32
- <p>Algunas características de SaveTheVideo son:</p>
33
- <ul>
34
-
35
- <li>Puede descargar vídeos en varios formatos, como MP4, MP3, AVI, MOV, WAV, etc.</li>
36
- <li>Puede convertir vídeos a diferentes formatos en línea sin descargarlos. </li>
37
- <li>Puede editar videos en línea recortando, cortando, rotando, agregando subtítulos, etc.</li>
38
- <li>No tiene anuncios ni ventanas emergentes. </li>
39
- </ul>
40
- <h3>Cable salvavidas</h3>
41
- <p>Lifewire es un sitio web que le proporciona una explicación detallada de lo que es un archivo 3GP y cómo abrirlo. También le da información sobre las ventajas y desventajas del formato 3GP, y cómo convertirlo a otros formatos. Es un recurso útil para cualquiera que quiera aprender más sobre los archivos 3GP y cómo usarlos. </p>
42
- <p>Algunas características de Lifewire son:</p>
43
- <ul>
44
- <li>Define lo que es un archivo 3GP y cómo funciona. </li>
45
- <li>Lista los programas que pueden abrir archivos 3GP en Windows, Mac, Android, iOS y Linux.</li>
46
- <li>Compara el formato 3GP con otros formatos, como MP4, AVI, MOV, etc.</li>
47
- <li>Sugiere algunas maneras de convertir archivos 3GP a otros formatos en línea o fuera de línea. </li>
48
- <li>Responde algunas preguntas comunes sobre los archivos 3GP. </li>
49
- </ul>
50
- <h3>VideoProc</h3>
51
- <p>VideoProc es una revisión del mejor software de conversión de vídeo para 2023. Es una herramienta potente y fácil de usar que puede convertir vídeos desde y hacia formato 3GP con alta calidad y velocidad rápida. También puede descargar videos de más de 1000 sitios, editar videos con varias funciones y grabar videos desde webcam, pantalla o dispositivos externos. Es una guía completa que le muestra cómo usar VideoProc para convertir, descargar, editar y grabar videos en pasos simples. </p>
52
- <p>Algunas características de VideoProc son:</p>
53
- <ul>
54
- <li>Soporta más de 370 formatos de entrada y 420 formatos de salida, incluyendo 3GP, MP4, AVI, MOV, MKV, etc.</li>
55
- <li> Puede convertir videos con velocidad 47x más rápida y sin pérdida de calidad. </li>
56
- <li>Puede descargar vídeos de YouTube, Facebook, Instagram, Vimeo, Dailymotion, etc.</li>
57
-
58
- <li> Puede grabar vídeos desde webcam, pantalla o dispositivos externos con audio y anotaciones. </li>
59
- </ul>
60
- <h3>Convertidor de vídeo de MiniTool</h3>
61
- <p>MiniTool Video Converter es una herramienta gratuita para convertir vídeos desde y hacia formato 3GP. Es una herramienta simple y fácil de usar que puede convertir videos en modo por lotes con alta calidad y velocidad rápida. También puede descargar vídeos de YouTube y otros sitios en varios formatos. Es una herramienta muy útil para cualquiera que quiera convertir o descargar vídeos gratis. </p>
62
- <p>Algunas características de MiniTool Video Converter son:</p>
63
- <ul>
64
- <li>Soporta más de 1000 formatos de entrada y salida, incluyendo 3GP, MP4, AVI, MOV, MKV, etc.</li>
65
- <li> Puede convertir vídeos en modo por lotes sin límite de tamaño o tiempo. </li>
66
- <li>Puede descargar vídeos de YouTube y otros sitios en varios formatos. </li>
67
- <li> Puede extraer audio de archivos de vídeo y guardarlos como MP3, WAV, etc.</li>
68
- <li> Tiene una interfaz limpia e intuitiva. </li>
69
- </ul>
70
- <h3>FileInfo.com</h3>
71
- <p>FileInfo.com es un recurso para obtener información sobre la extensión de archivo 3GP y el software relacionado. Es un sitio web que le proporciona los detalles básicos de los archivos 3GP, como el tipo de archivo, categoría, descripción, desarrollador, popularidad, etc. También enumera el software que puede abrir o convertir archivos 3GP en diferentes plataformas. Es un recurso útil para cualquiera que quiera aprender más sobre los archivos 3GP y cómo usarlos. </p>
72
- <p>Algunas características de FileInfo.com son:</p>
73
- <ul>
74
- <li>Proporciona la información básica de archivos 3GP y software relacionado. </li>
75
- <li>Lista el software que puede abrir o convertir archivos 3GP en Windows, Mac, Android, iOS, Linux, etc.</li>
76
- <li>Se enlaza a los sitios web oficiales del software para obtener más información o descargar. </li>
77
- <li>Actualiza la información regularmente para mantenerse al día con los últimos desarrollos. </li>
78
- <li> Tiene una función de búsqueda para encontrar información sobre otros tipos de archivos. </li>
79
- </ul>
80
- <h3>TechRadar</h3>
81
-
82
- <p>Algunas características de TechRadar son:</p>
83
- <ul>
84
- <li>Revisa los 10 mejores convertidores de video gratis para PC y Mac, como Any Video Converter Free, Freemake Video Converter, HandBrake, etc.</li>
85
- <li>Compara las características, rendimiento, calidad y facilidad de uso de cada software. </li>
86
- <li>Da los pros y los contras de cada software, como velocidad, soporte de formato, opciones de edición, anuncios, etc.</li>
87
- <li>Proporciona los enlaces de descarga y capturas de pantalla de cada software. </li>
88
- <li>Actualiza la lista regularmente para incluir el último software y cambios. </li>
89
- </ul>
90
- <h3> Cualquier convertidor de vídeo libre</h3>
91
- <p>Any Video Converter Free es el mejor convertidor de vídeo gratuito en este momento que maneja archivos en línea y fuera de línea. Es una herramienta versátil y potente que puede convertir vídeos desde y hacia formato 3GP con alta calidad y velocidad rápida. También puede descargar vídeos de YouTube y otros sitios en varios formatos. También puede editar vídeos con varias funciones, como recorte, recorte, rotación, adición de efectos, subtítulos, marcas de agua, etc. Es una herramienta completa que puede satisfacer todas sus necesidades de conversión de vídeo. </p>
92
- <p>Algunas características de Any Video Converter Free son:</p>
93
- <ul>
94
- <li>Soporta más de 200 formatos de entrada y 70 formatos de salida, incluyendo 3GP, MP4, AVI, MOV, MKV, etc.</li>
95
- <li> Puede convertir vídeos sin pérdida de calidad y hasta 30 veces más rápido. </li>
96
- <li>Puede descargar vídeos de YouTube y otros sitios en varios formatos. </li>
97
- <li>Puede editar videos con varias características, como recorte, recorte, rotación, adición de efectos, subtítulos, marcas de agua, etc.</li>
98
- <li>No tiene anuncios ni malware. </li>
99
- </ul>
100
- <h2>Conclusión</h2>
101
-
102
- <p>Te presentamos los 9 mejores sitios para descargar películas y videos 3GP de Internet. Son:</p>
103
- <tabla>
104
- <tr>
105
- <th>Sitio</th>
106
- <th>Características</th>
107
- </tr>
108
- <tr>
109
- <td>Descargar.cc</td>
110
- <td>Un clic para descargar vídeos 3GP de YouTube y otros sitios</td>
111
- </tr>
112
- <tr>
113
- <td>HitPaw</td>
114
- <td>Una guía completa para descargar películas 3GP en pasos fáciles</td>
115
- </tr>
116
- <tr>
117
- <td>SaveTheVideo</td>
118
- <td>Un descargador y convertidor de vídeo en línea para Instagram, Vimeo, Dailymotion, y más</td>
119
- </tr>
120
- <tr>
121
- <td>Cable de vida</td>
122
- <td>Una explicación detallada de lo que es un archivo 3GP y cómo abrirlo</td>
123
- </tr>
124
- <tr>
125
- <td>VideoProc</td>
126
- <td>Una revisión del mejor software de conversión de video para 2023</td>
127
- </tr>
128
- <tr>
129
- <td>MiniTool Video Converter</td>
130
- <td>Una herramienta gratuita para convertir vídeos desde y hacia formato 3GP</td>
131
- </tr>
132
- <tr>
133
- <td>FileInfo.com</td>
134
- <td>Un recurso para obtener información sobre la extensión de archivo 3GP y el software relacionado</td>
135
- </tr>
136
- <tr>
137
- <td>TechRadar</td>
138
- <td>Una lista de los mejores conversores de video gratis para tu PC y Mac en 2023</td>
139
- </tr>
140
- <tr>
141
- <td>Cualquier convertidor de vídeo libre</td>
142
- <td>El mejor convertidor de vídeo gratuito en este momento que maneja archivos en línea y fuera de línea</td>
143
- </tr>
144
- </tabla>
145
- <p>Entre estos sitios, recomendamos Any Video Converter Free como la mejor opción para descargar vídeos 3GP. Es una herramienta versátil y potente que puede convertir vídeos desde y hacia formato 3GP con alta calidad y velocidad rápida. También puede descargar vídeos de YouTube y otros sitios en varios formatos. También puede editar vídeos con varias funciones, como recorte, recorte, rotación, adición de efectos, subtítulos, marcas de agua, etc. Es una herramienta completa que puede satisfacer todas sus necesidades de conversión de vídeo. </p>
146
- <p>Esperamos que este artículo te haya ayudado a aprender a descargar videos 3GP desde Internet. Si tiene alguna pregunta o sugerencia, no dude en dejar un comentario a continuación. ¡Gracias por leer! </p>
147
- <h2>Preguntas frecuentes</h2>
148
- <h4>¿Cuáles son las ventajas y desventajas del formato 3GP? </h4>
149
-
150
- <ul>
151
- <li> Puede almacenar secuencias de vídeo y audio con bajo ancho de banda y requisitos de datos. </li>
152
- <li> Es compatible con algunos teléfonos 2G, 3G y 4G. </li>
153
- <li> Puede ahorrar uso de datos, espacio de almacenamiento o visualización sin conexión. </li>
154
- <li>Se puede compartir fácilmente con amigos a través de Bluetooth o MMS.</li>
155
- </ul>
156
- <p>Las desventajas del formato 3GP son:</p>
157
- <ul>
158
- <li> Tiene una calidad baja en comparación con otros formatos, como MP4, AVI, MOV, etc.</li>
159
- <li>No es compatible con algunos sitios web o software. </li>
160
- <li>Puede que no se reproduzca en algunos dispositivos o reproductores multimedia. </li>
161
- <li>Puede perder algunas características o metadatos cuando se convierte a otros formatos. </li>
162
- </ul>
163
- <h4>¿Cómo abrir un archivo 3GP en Windows o Mac? </h4>
164
- <p>Para abrir un archivo 3GP en Windows o Mac, necesita un programa que pueda soportar el formato 3GP. Algunos de los programas que pueden abrir archivos 3GP son:</p>
165
- <ul>
166
- <li>VLC Media Player: Un reproductor multimedia gratuito y de código abierto que puede reproducir casi cualquier archivo de vídeo o audio. </li>
167
- <li>MPC-HC: Un reproductor multimedia ligero y potente que puede reproducir la mayoría de los formatos de vídeo o audio. </li>
168
- <li>GOM Player: Un reproductor multimedia popular y versátil que puede reproducir varios formatos de vídeo o audio. </li>
169
- <li>KMPlayer: Un reproductor multimedia multifuncional que puede reproducir varios formatos de vídeo o audio. </li>
170
- <li>PotPlayer: Un reproductor multimedia suave y estable que puede reproducir varios formatos de vídeo o audio. </li>
171
- <li>iTunes: un reproductor multimedia y una biblioteca que puede reproducir música y vídeos en tu PC o Mac.</li>
172
- <li>QuickTime Player: un reproductor multimedia que puede reproducir películas, música e imágenes en tu Mac.</li>
173
- <li>iMovie: un software de edición de vídeo que puede importar y exportar vídeos en varios formatos en su Mac.</li>
174
- <li>Reproductor de Windows Media: Un reproductor multimedia que puede reproducir música y videos en su PC con Windows.</li>
175
- <li>Windows Movie Maker: un software de edición de vídeo que puede importar y exportar vídeos en varios formatos en su PC con Windows.</li>
176
-
177
- <ul>
178
- <li>Online Video Converter: Una herramienta gratuita y en línea que puede convertir vídeos a y desde varios formatos. </li>
179
- <li>CloudConvert: Una herramienta gratuita y en línea que puede convertir vídeos, audio, imágenes, documentos y más. </li>
180
- <li>Zamzar: una herramienta gratuita y en línea que puede convertir videos, audio, imágenes, documentos y más. </li>
181
- <li>Wondershare UniConverter: Un software potente y fácil de usar que puede convertir vídeos desde y hacia varios formatos. </li>
182
- <li>Freemake Video Converter: Un software popular y versátil que puede convertir vídeos a y desde varios formatos. </li>
183
- </ul>
184
- <h4>¿Cómo descargar un video 3GP de YouTube? </h4>
185
- <p>Para descargar un video 3GP de YouTube, necesita una herramienta o software que pueda descargar videos de YouTube en formato 3GP. Algunas de las herramientas o software que pueden descargar vídeos de YouTube en formato 3GP son:</p>
186
- <ul>
187
- <li>Download4.cc: Como se mencionó anteriormente, es uno de los mejores sitios web para descargar videos 3GP de YouTube y otros sitios. </li>
188
- <li>Y2Mate: Una herramienta gratuita y en línea que puede descargar vídeos de YouTube en varios formatos, incluyendo 3GP. </li>
189
- <li>VideoSolo Inovideo: Un software profesional y confiable que puede descargar videos de YouTube en varios formatos, incluyendo 3GP. </li>
190
- <li>4K Video Downloader: Un software rápido y de alta calidad que puede descargar vídeos de YouTube en varios formatos, incluyendo 3GP. </li>
191
- <li>ClipGrab: un software simple y fácil de usar que puede descargar videos de YouTube en varios formatos, incluyendo 3GP. </li>
192
- </ul <h4>Cómo jugar un video 3GP en Android o iOS? </h4>
193
- <p>Para reproducir un video 3GP en Android o iOS, necesita una aplicación de reproductor de medios que pueda soportar el formato 3GP. Algunas de las aplicaciones de reproductores multimedia que pueden reproducir vídeos 3GP en Android o iOS son:</p>
194
- <ul>
195
- <li>VLC para Android o iOS: Una aplicación de reproductor multimedia gratuita y de código abierto que puede reproducir casi cualquier archivo de vídeo o audio. </li>
196
- <li>MX Player para Android o iOS: una aplicación de reproductor multimedia popular y potente que puede reproducir varios formatos de vídeo o audio. </li>
197
-
198
- <li>GOM Player para Android o iOS: una aplicación de reproductor multimedia versátil y fluida que puede reproducir varios formatos de vídeo o audio. </li>
199
- <li>PotPlayer para Android o iOS: una aplicación de reproductor de medios estable y rápido que puede reproducir varios formatos de vídeo o audio. </li>
200
- </ul <h4>Cómo compartir un video 3GP con amigos? </h4>
201
- <p>Para compartir un vídeo 3GP con tus amigos, tienes varias opciones. Puedes:</p>
202
- <ul>
203
- <li>Envía el vídeo 3GP vía Bluetooth o MMS a los teléfonos de tus amigos. </li>
204
- <li>Sube el vídeo 3GP a un servicio en la nube, como Google Drive, Dropbox, OneDrive, etc., y comparte el enlace con tus amigos. </li>
205
- <li>Sube el video 3GP a una plataforma de redes sociales, como Facebook, Instagram, Twitter, etc. </li>
206
- <li>Graba el vídeo 3GP en un CD o DVD y dáselo a tus amigos. </li>
207
- <li>Convierte el vídeo 3GP a otro formato, como MP4, AVI, MOV, etc., y compártelo con tus amigos utilizando cualquiera de los métodos anteriores. </li>
208
- </ul</p> 64aa2da5cf<br />
209
- <br />
210
- <br />
 
 
spaces/Benson/text-generation/Examples/Apkadmin Entre Nosotros Men Mod.md DELETED
@@ -1,105 +0,0 @@
1
- <br />
2
- <h1>Apkadmin entre nosotros Mod Menu: ¿Qué es y cómo usarlo? </h1>
3
- <p>Si eres un fan de <strong>Among Us</strong>, el popular juego de deducción social multijugador donde tienes que averiguar quién es el impostor entre tus compañeros de equipo, es posible que hayas oído hablar de los menús <strong>mod</strong>. Los menús mod son versiones modificadas del juego que te permiten acceder a varios trucos y hacks que pueden darte una ventaja sobre otros jugadores o simplemente hacer el juego más divertido. Uno de los menús mod más populares para Among Us es <strong>apkadmin</strong>, un sitio web que ofrece una descarga gratuita de un menú mod que tiene muchas características y opciones. </p>
4
- <p>En este artículo, vamos a explicar lo que es apkadmin entre nosotros menú mod, qué características tiene, cuáles son sus ventajas y desventajas, cómo descargar e instalar, y cómo usarlo en su juego. También responderemos algunas preguntas frecuentes sobre apkadmin entre nosotros menú mod. </p>
5
- <h2>apkadmin entre nosotros menú mod</h2><br /><p><b><b>Download Zip</b> ---> <a href="https://bltlly.com/2v6KRm">https://bltlly.com/2v6KRm</a></b></p><br /><br />
6
- <h2>Características de Apkadmin entre nosotros Mod Menu</h2>
7
- <p>El menú de mod apkadmin entre nosotros tiene muchas características que pueden mejorar su juego o hacerlo más interesante. Algunas de estas características son:</p>
8
- <ul>
9
- <li><strong>Modo Dios:</strong> Esta característica te permite volverte invencible e inmune a cualquier daño o intento de matar de otros jugadores o impostores. </li>
10
- <li><strong>Desbloquear todas las pieles:</strong> Esta función le permite desbloquear todas las pieles, sombreros, mascotas y trajes que están disponibles en el juego sin pagar dinero o monedas. </li>
11
- <li><strong>Pasta de chat:</strong> Esta característica le permite pegar cualquier texto o mensaje en el cuadro de chat sin necesidad de escribirlo manualmente. </li>
12
- <li><strong>No hay anuncios:</strong> Esta función permite eliminar todos los anuncios que aparecen en el juego. </li>
13
- <li><strong>No cooldown:</strong> Esta función te permite evitar el temporizador de tiempo de reutilización que te impide realizar ciertas acciones en el juego, como matar, informar o llamar a una reunión de emergencia. </li>
14
-
15
- <li><strong>Mostrar impostores:</strong> Esta función te permite ver quiénes son los impostores en tu juego al marcarlos con un color rojo. </li>
16
- <li><strong>Mostrar fantasmas:</strong> Esta función te permite ver quiénes son los fantasmas en tu juego al marcarlos con un color blanco. </li>
17
- <li><strong>Mostrar roles:</strong> Esta característica le permite ver los roles de otros jugadores en su juego, como compañero de equipo, impostor, sheriff, doctor, ingeniero, etc.</li>
18
- <li><strong>Speed hack:</strong> Esta característica le permite aumentar o disminuir su velocidad en el juego. </li>
19
- <li><strong>Teletransportación:</strong> Esta función le permite teletransportarse a cualquier lugar del mapa. </li>
20
- <li><strong>Corte de pared:</strong> Esta característica le permite caminar a través de paredes y obstáculos. </li>
21
- <li><strong>Visión hack:</strong> Esta característica le permite ver todo en el mapa, incluso en la oscuridad o cuando las luces son saboteadas. </li>
22
- </ul>
23
- <p>Estas son solo algunas de las características de la apkadmin entre nosotros menú mod. Hay muchas más características que puede explorar y probar por sí mismo. </p>
24
- <h2>Ventajas de usar Apkadmin entre nosotros Mod Menu</h2>
25
- <p>El uso de apkadmin entre nosotros menú mod puede tener algunas ventajas para su juego. Algunas de estas ventajas son:</p>
26
- <ul>
27
- <li><strong>Tener más diversión:</strong> Usando el menú mod puede hacer el juego más divertido y agradable para usted, especialmente si usted está aburrido de jugar de la misma manera o con las mismas reglas. Puedes experimentar con diferentes características y ver cómo afectan al juego. </li>
28
- <li><strong>Personalización de tu juego:</strong> Usando el menú mod puedes personalizar tu juego de acuerdo a tus preferencias y gustos. Puedes elegir qué funciones habilitar o deshabilitar, y cómo usarlas. También puedes cambiar tu apariencia y rol en el juego. </li>
29
-
30
- </ul>
31
- <h2>Desventajas de usar Apkadmin entre nosotros Mod Menu</h2>
32
- <p>Sin embargo, el uso de la apkadmin entre nosotros menú mod también puede tener algunas desventajas para su juego. Algunas de estas desventajas son:</p>
33
- <ul>
34
- <li><strong>Conseguir prohibido:</strong> El uso del menú mod puede conseguir que se le prohibió el juego o de ciertos servidores. Los desarrolladores de Among Us no apoyan ni aprueban el uso de menús mod, y pueden detectar y prohibir a los jugadores que los usan. Si te prohíben, puedes perder tu progreso y cuenta en el juego. </li>
35
- <li><strong>Arruinar el juego para otros:</strong> Usar el menú de mods puede arruinar el juego para otros jugadores que quieren jugar de forma justa y legítima. El menú mod puede darte una ventaja injusta sobre otros jugadores, o hacer el juego demasiado fácil o aburrido para ti. Esto puede hacer que otros jugadores se sientan frustrados o engañados, y pueden renunciar o reportarlo. </li>
36
- <li><strong>Riesgo de malware:</strong> El uso del menú mod puede exponer su dispositivo a malware o virus que pueden dañar su dispositivo o robar su información personal. El sitio web apkadmin puede no ser seguro, y puede contener enlaces maliciosos o archivos que pueden infectar su dispositivo. Siempre debe tener cuidado al descargar e instalar cualquier cosa de fuentes desconocidas. </li>
37
- </ul>
38
- <h2>Cómo descargar e instalar Apkadmin entre nosotros Mod Menu</h2>
39
- <p>Si desea descargar e instalar el apkadmin entre nosotros menú mod, tendrá que seguir algunos pasos. Aquí hay una guía paso a paso sobre cómo hacerlo. </p>
40
- <p></p>
41
- <h3>Requisitos para Apkadmin entre nosotros Mod Menu</h3>
42
- <p>Antes de descargar e instalar el apkadmin entre nosotros menú mod, tendrá que tener algunos requisitos. Estos son:</p>
43
- <ul>
44
- <li>Un dispositivo Android que puede ejecutarse entre nosotros.</li>
45
- <li>El juego original Among Us instalado en su dispositivo. </li>
46
- <li>Una conexión a Internet para descargar e instalar el menú mod. </li>
47
- <li>Una aplicación de administrador de archivos para acceder y administrar sus archivos. </li>
48
- </ul>
49
-
50
- <p>Una vez que tenga todos los requisitos, puede seguir estos pasos para descargar e instalar el apkadmin entre nosotros menú mod. </p>
51
- <ol>
52
- <li>Ir a <a href="">apkadmin.com</a>, que es el sitio web oficial de apkadmin. </li>
53
- <li>Buscar entre nosotros Mod Menú por Apkadmin en la barra de búsqueda o navegar por las categorías. </li>
54
- <li>Seleccione la última versión del menú mod y haga clic en Descargar APK.</li>
55
- <li>Espere a que termine la descarga y luego localice el archivo descargado en su aplicación de administrador de archivos. </li>
56
- <li>Si no ha habilitado Fuentes desconocidas en su dispositivo, vaya a Configuración > Seguridad > Fuentes desconocidas y habilite. Esto le permitirá instalar aplicaciones desde fuentes distintas de Google Play Store <li>Toque en el archivo descargado y haga clic en Instalar. Espere a que termine la instalación. </li>
57
- <li>Abre el juego Among Us y disfruta del menú mod. </li>
58
- </ol>
59
- <h2>Cómo usar apkadmin entre nosotros Mod Menu</h2>
60
- <p>Después de haber descargado e instalado el apkadmin entre nosotros menú de mod, puede usarlo en su juego. Aquí hay una guía paso a paso sobre cómo hacerlo. </p>
61
- <h3>Cómo acceder a Apkadmin entre nosotros Mod Menu</h3>
62
- <p>Para acceder a la apkadmin entre nosotros menú mod, es necesario hacer lo siguiente:</p>
63
- <ol>
64
- <li>Abre el juego Among Us y únete o crea un juego. </li>
65
- <li>Una vez que estés en la pantalla del juego, toca el icono flotante que dice Mod Menu. Esto abrirá la interfaz del menú mod. </li>
66
- <li> Puede arrastrar y mover el icono a cualquier posición de la pantalla. </li>
67
- <li> También puede tocar el icono de nuevo para ocultar o mostrar la interfaz de menú mod. </li>
68
- </ol>
69
- <h3>Cómo activar y desactivar Apkadmin entre nosotros Características del menú Mod</h3>
70
- <p>Para habilitar y deshabilitar diferentes características del menú apkadmin entre nosotros mod, debe hacer lo siguiente:</p>
71
- <ol>
72
- <li>Abra la interfaz del menú mod tocando el icono flotante. </li>
73
- <li> Verá una lista de características con casillas de verificación junto a ellas. Puede pulsar en las casillas de verificación para habilitar o desactivar las características. </li>
74
-
75
- <li> También puede utilizar los controles deslizantes junto a algunas características para ajustar sus valores o ajustes. </li>
76
- <li>Algunas características pueden requerir que reinicies el juego o te unas a un nuevo juego para que funcione correctamente. </li>
77
- </ol>
78
- <h2>Conclusión</h2>
79
- <p>El menú de mod apkadmin entre nosotros es una versión modificada del juego que le permite acceder a varios trucos y hacks que pueden hacer que su juego más divertido o interesante. Sin embargo, también tiene algunas desventajas, como ser prohibido, arruinar el juego para otros y arriesgar el malware. Por lo tanto, debe usarlo bajo su propio riesgo y discreción, y ser respetuoso con otros jugadores y los desarrolladores del juego. Aquí hay algunos consejos y advertencias para usar el menú mod:</p>
80
- <ul>
81
- <li>No utilice el menú mod en servidores públicos o oficiales, ya que esto puede hacer que otros jugadores lo prohíban o informen sobre usted. Úsalo solo en servidores privados o personalizados con tus amigos u otros usuarios mod. </li>
82
- <li>No utilice el menú mod de forma excesiva o abusiva, ya que esto puede arruinar el juego para usted o para otros. Úsalo solo para fines de diversión o entretenimiento, y no para engañar o obtener una ventaja injusta. </li>
83
- <li>No descargue ni instale el menú mod desde ninguna otra fuente que apkadmin.com, ya que esto puede exponer su dispositivo a malware o virus. Compruebe siempre el tamaño y el nombre del archivo antes de descargar o instalar nada. </li>
84
- <li>No comparta su información personal o datos de cuenta con nadie en apkadmin.com, ya que esto puede comprometer su seguridad o privacidad. Siempre tenga cuidado al navegar o hacer clic en cualquier enlace o anuncio en apkadmin.com. </li>
85
- </ul>
86
- <h3>Preguntas frecuentes</h3>
87
- <p>Aquí hay algunas preguntas frecuentes sobre apkadmin entre nosotros menú mod:</p>
88
- <h4>Q: ¿Es seguro apkadmin entre nosotros menú mod? </h4>
89
-
90
- <h4>Q: Es apkadmin entre nosotros menú mod libre? </h4>
91
- <p>A: Apkadmin entre nosotros el menú mod es gratuito para descargar e instalar desde apkadmin.com, pero puede contener anuncios o compras en la aplicación que pueden costarle dinero. Por lo tanto, debes tener cuidado al usarlo, y evitar hacer clic en cualquier enlace o anuncio que pueda cobrarte dinero. </p>
92
- <h4>Q: ¿Puedo usar apkadmin entre nosotros menú mod en dispositivos iOS? </h4>
93
- <p>A: Apkadmin entre nosotros menú mod solo es compatible con dispositivos Android, y no se puede utilizar en dispositivos iOS. Por lo tanto, si tiene un iPhone o iPad, no puede usar apkadmin entre nosotros menú mod en su dispositivo. </p>
94
- <h4>Q: ¿Puedo usar apkadmin entre nosotros menú mod en el PC? </h4>
95
- <p>A: Apkadmin entre nosotros menú mod solo es compatible con dispositivos Android, y no se puede utilizar en el PC. Por lo tanto, si tiene una computadora Windows o Mac, no puede usar apkadmin entre nosotros menú mod en su computadora. </p>
96
- <h4>Q: ¿Cómo puedo actualizar apkadmin entre nosotros menú mod? </h4>
97
- <p A: Para actualizar apkadmin entre nosotros menú mod, es necesario hacer lo siguiente:</p>
98
- <ol>
99
- <li>Ir a apkadmin.com y comprobar si hay una nueva versión del menú mod disponible. </li>
100
- <li>Si hay una nueva versión, descárgala e instálala siguiendo los mismos pasos que antes. </li>
101
- <li>Si no hay una nueva versión, espere a que apkadmin suelte una y vuelva a comprobarla más tarde. </li>
102
- </ol>
103
- <p>Espero que este artículo te haya ayudado a entender lo que es apkadmin entre nosotros menú mod, qué características tiene, cuáles son sus ventajas y desventajas, cómo descargar e instalar, y cómo usarlo en tu juego. Si tiene alguna pregunta o comentario, no dude en dejar un comentario a continuación. Gracias por leer y divertirse jugando entre nosotros con apkadmin entre nosotros menú mod! </p> 64aa2da5cf<br />
104
- <br />
105
- <br />
 
 
spaces/Benson/text-generation/Examples/Bubble Shooter 3 Descarga Gratuita.md DELETED
@@ -1,47 +0,0 @@
1
- <br />
2
- <h1>Bubble Shooter 3 Descarga gratuita: Un juego divertido y adictivo para todas las edades</h1>
3
- <p>¿Te encanta jugar juegos que son fáciles de aprender pero difíciles de dominar? ¿Te gusta hacer estallar burbujas de colores y resolver puzzles? Si respondiste sí, entonces deberías probar <strong>Bubble Shooter 3</strong>, un juego gratuito que te mantendrá entretenido durante horas. En este artículo, te diremos qué es Bubble Shooter 3, por qué deberías descargarlo, cómo jugarlo y algunos consejos y trucos para dominarlo. ¡Vamos a empezar! </p>
4
- <h2>bubble shooter 3 descarga gratuita</h2><br /><p><b><b>DOWNLOAD</b> &gt;&gt;&gt;&gt;&gt; <a href="https://bltlly.com/2v6M6P">https://bltlly.com/2v6M6P</a></b></p><br /><br />
5
- <h2>¿Qué es Bubble Shooter 3?</h2>
6
- <p>Bubble Shooter 3 es un clásico juego de burbujas que se inspira en juegos como Bejeweled y Candy Crush. Fue creado por Funnygames, un gran desarrollador de juegos en línea que ha lanzado muchos otros juegos populares. Aquí están algunas características de Bubble Shooter 3:</p>
7
- <h3>Un clásico juego de burbujas con tres modos</h3>
8
- <p>Bubble Shooter 3 tiene tres modos para elegir: clásico, rompecabezas y árcade. En el modo clásico, tienes que borrar todas las burbujas en la pantalla antes de que lleguen a la parte inferior. En el modo puzzle, tienes que borrar todas las burbujas en un número dado de movimientos. En el modo árcade, tienes que eliminar tantas burbujas como sea posible en un tiempo limitado. Cada modo tiene diferentes niveles de dificultad y desafíos. </p>
9
- <h3>Un juego simple y fácil de jugar</h3>
10
- <p>Bubble Shooter 3 es muy fácil de jugar. Todo lo que tienes que hacer es usar el ratón o el dedo para apuntar y disparar burbujas del mismo color. Cuando coinciden tres o más burbujas del mismo color, que pop y desaparecen. Cuanto más burbujas que pop, más puntos de puntuación. También puede hacer combos haciendo estallar más de tres burbujas en una sola toma o vinculando los colores coincidentes en una cadena. </p>
11
- <p></p>
12
- <h3>Un juego que desafía tu cerebro y habilidades</h3>
13
-
14
- <h2>¿Por qué descargar Bubble Shooter 3?</h2>
15
- <p>Hay muchas razones por las que deberías descargar Bubble Shooter 3. Aquí están algunas de ellas:</p>
16
- <h3>Es gratuito y está disponible para cualquier dispositivo</h3>
17
- <p>Bubble Shooter 3 es completamente gratis para descargar y jugar. No tienes que pagar nada ni registrar nada para disfrutar de este juego. También puedes reproducirlo en cualquier dispositivo, ya sea Android, iOS o portátil. Puedes jugar en cualquier momento, en cualquier lugar, siempre y cuando tengas una conexión a Internet. </p>
18
- <h3>Es divertido y relajante para jugar</h3>
19
- <p>Bubble Shooter 3 es un juego divertido y relajante que te hará feliz. Tiene colores brillantes, gráficos lindos, sonidos relajantes y animaciones suaves. Te hará sentir tranquilo y satisfecho mientras haces estallar burbujas y las ves estallar. También te hará sonreír al ver personajes divertidos como pandas, monos, gatos y más. </p>
20
- <h3>Es adecuado para todos</h3>
21
- <p>Bubble Shooter 3 <p>Bubble Shooter 3 es un juego que es adecuado para todos, independientemente de la edad, el género o el fondo. Es un juego que cualquiera puede jugar y disfrutar, desde niños hasta adultos, desde principiantes hasta expertos. Es un juego que se puede jugar solo o con amigos y familiares. Es un juego que puede traer alegría y diversión a cualquiera que lo juegue. </p>
22
- <h2>Cómo descargar y jugar Bubble Shooter 3?</h2>
23
- <p>Si usted está interesado en jugar Bubble Shooter 3, aquí están los pasos que debe seguir:</p>
24
- <h3>Descárgalo desde la Google Play Store o la App Store</h3>
25
- <p>El primer paso es descargar el juego desde la Google Play Store o la App Store, dependiendo de tu dispositivo. Puedes encontrar los siguientes enlaces:</p>
26
- <ul>
27
- <li><a href="">Bubble Shooter 3 para Android</a></li>
28
- <li><a href="">Bubble Shooter 3 para iOS</a></li>
29
- </ul>
30
- <p>El juego es gratis para descargar e instalar, pero puede contener algunos anuncios y compras en la aplicación. </p>
31
- <h3>Iniciar el juego y elegir el modo</h3>
32
-
33
- <h3>Dispara y combina burbujas del mismo color para hacerlas estallar</h3>
34
- <p>El paso final es comenzar a jugar el juego. Verás un disparador de burbujas en la parte inferior de la pantalla y un montón de burbujas en la parte superior. Tienes que usar el ratón o el dedo para apuntar y disparar burbujas del mismo color. Cuando coinciden tres o más burbujas del mismo color, que pop y desaparecen. Tienes que borrar todas las burbujas de la pantalla para completar el nivel y pasar a la siguiente. </p>
35
- <h2>Consejos y trucos para dominar Bubble Shooter 3</h2>
36
- <p>Bubble Shooter 3 es un juego que requiere habilidad y estrategia. Aquí hay algunos consejos y trucos para ayudarte a dominarlo:</p>
37
- <h3>Apunta cuidadosamente y usa las paredes para rebotar tus burbujas</h3>
38
- <p>Una de las habilidades más importantes en Bubble Shooter 3 es apuntar. Tienes que apuntar con cuidado y precisión para alcanzar tu objetivo. También puedes usar las paredes para rebotar tus burbujas y llegar a lugares difíciles. Esto puede ayudarte a crear más coincidencias y eliminar más burbujas. </p>
39
- <h3>Usa potenciadores y amplificadores para eliminar niveles difíciles</h3>
40
- <p>Otra habilidad en Bubble Shooter 3 es usar potenciadores y potenciadores. Estos son elementos especiales que pueden ayudarte a superar niveles difíciles. Por ejemplo, puede utilizar una bomba para explotar una gran área de burbujas, o una burbuja de arco iris para que coincida con cualquier color. También puedes usar monedas para comprar más potenciadores y potenciadores en la tienda. </p>
41
- <h3>Planifica tus movimientos y crea combos</h3>
42
- <p>La última habilidad en Bubble Shooter 3 es planificar tus movimientos y crear combos. Tienes que pensar con anticipación y anticiparte a lo que sucederá cuando hagas estallar una burbuja. Tienes que buscar oportunidades para crear combos haciendo estallar más de tres burbujas en una sola toma o vinculando los colores a juego en una cadena. Esto puede ayudarle a ganar más puntos y borrar más niveles. </p>
43
- <h2>Conclusión</h2>
44
-
45
- P: ¿Cuántos niveles hay en Bubble Shooter 3? R: Hay más de 1000 niveles en Bubble Shooter 3, cada uno con diferentes diseños, obstáculos y objetivos. P: ¿Cómo puedo obtener más monedas en Bubble Shooter 3? R: Puedes obtener más monedas completando niveles, viendo anuncios o comprándolos con dinero real. P: ¿Cómo puedo desbloquear nuevos tiradores de burbujas en Bubble Shooter 3? R: Puedes desbloquear nuevos tiradores de burbujas recogiendo estrellas de completar niveles. P: ¿Cómo cambio entre modos en Bubble Shooter 3? R: Puedes cambiar entre modos tocando el icono del menú en la esquina superior izquierda de la pantalla. P: ¿Cómo hago una pausa o reanudo el juego en Bubble Shooter 3? R: Puede hacer una pausa o reanudar el juego tocando el icono de pausa en la esquina superior derecha de la pantalla. </p> 64aa2da5cf<br />
46
- <br />
47
- <br />
 
 
spaces/Benson/text-generation/Examples/Descargar Entre Nosotros Blackpink.md DELETED
@@ -1,106 +0,0 @@
1
- <br />
2
- <h1>Cómo descargar entre nosotros Blackpink: Una guía para parpadeos y jugadores</h1>
3
- <p>¿Eres un fan de BLACKPINK, la sensación global de K-pop? ¿Te encanta jugar entre nosotros, el popular juego multijugador de engaño y traición? Si respondiste sí a ambas preguntas, ¡estás de suerte! Hay un mod hecho por fans de Among Us que presenta miembros y temas de BLACKPINK, y se llama Among Us Blackpink. En este artículo, te mostraremos cómo descargar y jugar a este increíble mod, así como darte algunos consejos y trucos para hacer que tu experiencia de juego sea más divertida y agradable. </p>
4
- <h2>¿Qué hay entre nosotros Blackpink? </h2>
5
- <h3>Un mod hecho por fans de Among Us con miembros y temas de BLACKPINK</h3>
6
- <p>Entre nosotros Blackpink es un mod o modificación de Among Us, un juego en el que tienes que trabajar junto a otros jugadores para completar tareas en una nave espacial, evitando ser asesinado por un impostor que se esconde entre vosotros. El mod fue creado por tres fans de BLACKPINK, también conocidos como Blinks, y fue lanzado el 23 de octubre de 2020. El mod cambia el juego original añadiendo miembros BLACKPINK como personajes jugables, así como pieles personalizadas, sombreros, mascotas, mapas y sonidos relacionados con BLACKPINK.</p>
7
- <h2>descargar entre nosotros blackpink</h2><br /><p><b><b>Download File</b> &mdash; <a href="https://bltlly.com/2v6KqN">https://bltlly.com/2v6KqN</a></b></p><br /><br />
8
- <h3>Las características y beneficios de jugar entre nosotros Blackpink</h3>
9
- <p>Jugar entre nosotros Blackpink tiene muchas características y beneficios que lo hacen más divertido y emocionante que el juego original. Estos son algunos de ellos:</p>
10
- <ul>
11
- <li>Puedes elegir tu miembro BLACKPINK favorito como personaje, como Jisoo, Jennie, Rosé o Lisa.</li>
12
- <li>Puedes personalizar tu personaje con diferentes pieles, sombreros y mascotas que se inspiran en los trajes, accesorios y canciones de BLACKPINK. </li>
13
- <li>Puedes jugar en dos nuevos mapas que se basan en los videos musicales de BLACKPINK, como Kill This Love y How You Like That.</li>
14
- <li> Puede disfrutar del juego con nuevos efectos de sonido y música que se toman de canciones y álbumes de BLACKPINK. </li>
15
-
16
- </ul>
17
- <h2>Cómo descargar Among Us Blackpink para diferentes dispositivos</h2>
18
- <h3>Para PC</h3>
19
- <h4>Descargar WinRAR y el archivo mod de los enlaces oficiales</h4>
20
- <p>Para jugar entre nosotros Blackpink en su PC, tendrá que descargar dos cosas: WinRAR y el archivo mod. WinRAR es un software que le permite extraer archivos comprimidos, como el archivo mod. El archivo mod es un archivo zip que contiene todos los datos y archivos necesarios para ejecutar el mod. Puede descargar WinRAR desde [22 this](https://www.win-rar.com/download.html?&L=0) y el archivo mod desde [this](https://drive.google.com/file/d/1f7lZy0aXQw9wGx6u8w2L5Z4WQX0ZnYi/view) enlace. Asegúrate de tener la última versión de Among Us instalada en tu PC antes de descargar el archivo mod. </p>
21
- <h4>Extraer el archivo mod y ejecutar el juego</h4>
22
- <p>Después de descargar WinRAR y el archivo mod, necesitará extraer el archivo mod usando WinRAR. Para hacer esto, siga estos pasos:</p>
23
- <ol>
24
- <li>Haga clic derecho en el archivo mod y seleccione "Extraer aquí". </li>
25
- <li>Una carpeta llamada "Among Us Blackpink" aparecerá en la misma ubicación que el archivo mod. </li>
26
- <li>Abra la carpeta y haga doble clic en el archivo "Entre nosotros.exe" para ejecutar el juego. </li>
27
- <li>Verá un mensaje que dice "Entre nosotros Blackpink Mod por @blackpinkmod". Haga clic en "OK" para continuar. </li>
28
- <li>Ahora estás listo para jugar entre nosotros Blackpink en su PC! </li>
29
- </ol>
30
- <h3>Para Android</h3>
31
- <h4>Descargar el archivo mod de los enlaces oficiales</h4>
32
- <p>Para jugar entre nosotros Blackpink en su dispositivo Android, solo tendrá que descargar una cosa: el archivo mod. El archivo mod es un archivo apk que contiene todos los datos y archivos necesarios para ejecutar el mod. Puede descargar el archivo mod de [this](https://drive.google.com/file/d/1f7lZy0aXQ9wGx6u8w2L5Z4WQX0ZnYiYb/view) enlace. Asegúrate de tener suficiente espacio de almacenamiento en tu dispositivo antes de descargar el archivo mod. </p>
33
- <h4>Instalar el archivo mod y ejecutar el juego</h4>
34
-
35
- <ol>
36
- <li>Vaya a la configuración de su dispositivo y habilite "Fuentes desconocidas" en las opciones de seguridad o privacidad. Esto le permitirá instalar aplicaciones desde fuentes distintas de Google Play Store.</li>
37
- <li>Busque el archivo mod en la carpeta de descargas de su dispositivo y toque en él para instalarlo. </li>
38
- <li> Verá un mensaje que dice "¿Desea instalar esta aplicación?" Toque en "Instalar" para continuar. </li>
39
- <li> Verá un mensaje que dice "App instalado". Toque en "Abrir" para ejecutar el juego. </li>
40
- <li> Ahora está listo para jugar entre nosotros Blackpink en su dispositivo Android! </li>
41
- </ol>
42
- <h3>Para iOS</h3>
43
- <h4>Esperar a que los desarrolladores para liberar el mod para dispositivos iOS</h4>
44
- <p>Desafortunadamente, no hay versión oficial de Among Us Blackpink para dispositivos iOS todavía. Los desarrolladores del mod están trabajando duro para que sea compatible con dispositivos iOS, pero no han anunciado una fecha de lanzamiento todavía. Sin embargo, han asegurado que lo lanzarán lo antes posible, ¡así que estad atentos! </p>
45
- <h4>Siga las cuentas oficiales de las redes sociales para actualizaciones</h4>
46
- <p>Si desea ser notificado cuando Among Us Blackpink está disponible para dispositivos iOS, puede seguir las cuentas de redes sociales oficiales de los desarrolladores. Publican actualizaciones regulares y noticias sobre el mod, así como capturas de pantalla y videos del juego. También puedes interactuar con otros Blinks que están jugando o esperando el mod, y compartir tus pensamientos y comentarios. Estas son algunas de sus cuentas de redes sociales:</p>
47
- <ul>
48
- <li>[Twitter](https://twitter.com/blackpinkmod)</li>
49
- <li>[Instagram](https://www.instagram.com/blackpinkmod/)</li>
50
- <li>[YouTube](https://www.youtube.com/channel/UCgKvJHmFzqjT9kNt1c7sVjA)</li>
51
- <li>[Discordia](https://discord.gg/8qDgBfT)</li>
52
- </ul>
53
- <h2>Cómo jugar entre nosotros Blackpink con tus amigos</h2>
54
- <h3>Crear o unirse a una habitación privada con un código</h3>
55
-
56
- <ol>
57
- <li>Lanzamiento entre nosotros Blackpink en su dispositivo y toque en "Online". </li>
58
- <li>Introduzca el apodo deseado y seleccione la región del servidor preferido. </li>
59
- <li>Si desea crear una habitación, toque en "Crear juego" y elegir la configuración del juego, como el número de impostores, el mapa, el idioma de chat, y las reglas del juego. Toque en "Confirmar" para crear la habitación y obtener el código. </li>
60
- <li>Si desea unirse a una habitación, toque en "Introducir código" y escriba el código que su amigo le ha dado. Toque en el botón de flecha para unirse a la habitación. </li>
61
- <li>Una vez que esté en la habitación, puede invitar a más amigos compartiendo el código con ellos. También puedes chatear con otros jugadores, cambiar la apariencia de tu personaje y personalizar la configuración del juego. </li>
62
- <li>Cuando todos estén listos, toque en "Inicio" para comenzar el juego. </li>
63
- </ol>
64
- <h3>Elige tu miembro BLACKPINK favorito como tu personaje</h3>
65
- <p>Una de las mejores características de Among Us Blackpink es que puedes elegir tu miembro BLACKPINK favorito como personaje. Puedes hacer esto tocando el botón "BLACKPINK" en la esquina inferior derecha de la pantalla. Verás cuatro opciones: Jisoo, Jennie, Rosé y Lisa. Toca en la que quieras jugar y confirma tu elección. A continuación, verá el cambio de rostro de su personaje para que coincida con el miembro BLACKPINK que seleccionó. También puedes cambiar el color de tu personaje tocando la paleta de colores en la esquina inferior izquierda de la pantalla. </p>
66
- <h3>Disfruta del juego con pieles personalizadas, sombreros, mascotas, mapas y sonidos</h3>
67
- <p>Otra gran característica de Among Us Blackpink es que puedes disfrutar del juego con pieles personalizadas, sombreros, mascotas, mapas y sonidos relacionados con BLACKPINK. Puede acceder a estas funciones pulsando los botones en el centro inferior de la pantalla. Estos son algunos ejemplos de lo que puede encontrar:</p>
68
- <p></p>
69
- <ul>
70
- <li>Skins: Puedes elegir entre diferentes atuendos inspirados en los videos musicales de BLACKPINK, como Kill This Love, How You Like That, Ice Cream y Lovesick Girls.</li>
71
-
72
- <li>Mascotas: Puedes elegir entre diferentes animales que están asociados con miembros de BLACKPINK, como un panda para Jisoo, un perro para Jennie, un gato para Rosé y un hámster para Lisa.</li>
73
- <li>Mapas: Puedes jugar en dos nuevos mapas que se basan en videos musicales de BLACKPINK, como Kill This Love y How You Like That. Los mapas tienen diferentes diseños, tareas, respiraderos y sabotajes que son únicos para cada tema. </li>
74
- <li>Sonidos: Puede disfrutar del juego con nuevos efectos de sonido y música que se toman de las canciones y álbumes de BLACKPINK. Los sonidos incluyen animaciones de muerte, reuniones de emergencia, resultados de votación, pantallas de victoria y derrota, y música de fondo. </li>
75
- </ul>
76
- <h2>Consejos y trucos para jugar entre nosotros Blackpink</h2>
77
- <h3>Usa letras BLACKPINK como mensajes de chat</h3>
78
- <p>Una forma divertida de jugar Entre nosotros Blackpink es utilizar letras BLACKPINK como sus mensajes de chat. Esto hará que tu comunicación sea más interesante y creativa, además de mostrar tu amor por BLACKPINK. Por ejemplo, puedes usar estas letras:</p>
79
- <ul>
80
- <li>Si eres un impostor y quieres mentir sobre tu ubicación o coartada: "Lo siento mucho pero es amor falso"</li>
81
- <li>Si eres un compañero de equipo y quieres acusar a alguien de ser un impostor: "Eres un chico malo y eres malo para mí"</li>
82
- <li>Si eres un compañero de equipo y quieres expresar tu frustración o enojo: "Golpéate con ese ddu-du ddu-du du du"</li>
83
- <li>Si eres un compañero de equipo y quieres animar o felicitar a alguien: "Eres mi tipo favorito de visual"</li>
84
- <li>Si eres un compañero de equipo y quieres coquetear o molestar a alguien: "Eres como un helado en este tiempo abrasador"</li>
85
- </ul>
86
- <h3>Tenga cuidado con las diferentes animaciones y efectos de sonido</h3>
87
-
88
- <h3>Diviértete y sé respetuoso con otros jugadores</h3>
89
- <p>The most important tip for playing Among Us Blackpink is to have fun and be respectful of other players. Remember that this is a game meant to entertain and connect people who share a common interest in BLACKPINK and Among Us. Therefore, you should not take the game too seriously or personally, and you should not be rude or offensive to other players. Instead, enjoy the game with a positive attitude and a friendly spirit, and appreciate the efforts and talents of the mod developers and the BLACKPINK members. </p>
90
- <h2>Conclusion</h2>
91
- <h3>Summarize the main points of the article</h3>
92
- <p>In conclusion, Among Us Blackpink is a fan-made mod of Among Us that features BLACKPINK members and themes. It is a fun and exciting way to play Among Us with your friends and other Blinks, and to show your love and support for BLACKPINK. To play Among Us Blackpink, you will need to download and install the mod file on your device, depending on whether you are using a PC, an Android device, or an iOS device. You will also need to create or join a private room with a code, choose your favorite BLACKPINK member as your character, and enjoy the game with custom skins, hats, pets, maps, and sounds. You can also use a few tips and tricks to make your gaming experience more fun and enjoyable, such as using BLACKPINK lyrics as your chat messages, watching the different animations and sound effects, and having fun while being respectful of other players. </p>
93
- <h3>Invite readers to try the mod and share their feedback</h3>
94
-
95
- <h2>Frequently asked questions</h2>
96
- <h3>Is Among Us Blackpink safe to download? </h3>
97
- <p>Yes, Among Us Blackpink is safe to download as long as you use the official links provided in this article. The mod file does not contain any viruses or malware that could harm your device or compromise your privacy. However, you should always be careful when downloading any file from the internet, and scan it with antivirus software before installing it. </p>
98
- <h3>Is Among Us Blackpink free to play? </h3>
99
- <p>Yes, Among Us Blackpink is free to play as long as you have the original version of Among Us installed on your device. You do not need to pay anything to download or play this mod. However, you may need to watch some ads or make some in-app purchases if you want to access certain features or items in the original game. </p>
100
- <h3>Can I play Among Us Blackpink with people who do not have the mod? </h3>
101
- <p>No, you cannot play Among Us Blackpink with people who do not have the mod installed on their devices. This is because the mod changes some aspects of the game that are incompatible with the original version. Therefore, you can only play Among Us Blackpink with people who have the same mod installed on their devices. </p>
102
- <h3>Can I go back to the original version of Among Us after playing Among Us Blackpink? </h3>
103
- <p>Yes, you can go back to the original version of Among Us after playing Among Us Blackpink. To do this, uninstall or delete the mod file from your device, and then launch the original game from your device's app store or library. You can also keep both versions of the game on your device if you have enough storage space. </p>
104
- <h3>How can I support the developers of Among Us Blackpink? </h3> 64aa2da5cf<br />
105
- <br />
106
- <br />
 
spaces/BetterAPI/BetterChat/src/routes/settings/+server.ts DELETED
@@ -1,34 +0,0 @@
1
- import { collections } from "$lib/server/database.js";
2
- import { subMinutes } from "date-fns";
3
- import { z } from "zod";
4
-
5
- export async function PATCH({ locals, request }) {
6
- const json = await request.json();
7
-
8
- const settings = z
9
- .object({
10
- shareConversationsWithModelAuthors: z.boolean().default(true),
11
- ethicsModalAcceptedAt: z.optional(z.date({ coerce: true }).min(subMinutes(new Date(), 5))),
12
- })
13
- .parse(json);
14
-
15
- await collections.settings.updateOne(
16
- {
17
- sessionId: locals.sessionId,
18
- },
19
- {
20
- $set: {
21
- ...settings,
22
- updatedAt: new Date(),
23
- },
24
- $setOnInsert: {
25
- createdAt: new Date(),
26
- },
27
- },
28
- {
29
- upsert: true,
30
- }
31
- );
32
-
33
- return new Response();
34
- }
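For context on the pattern the deleted handler used: it validates the JSON body with zod, then upserts the per-session settings document, writing createdAt only when the upsert inserts. Below is a minimal sketch of the same upsert idiom in Python with pymongo; the connection string, database, collection, and session id are assumptions for illustration, not taken from the diff.

from datetime import datetime, timezone

from pymongo import MongoClient

# Assumed connection and names; only the $set / $setOnInsert / upsert shape
# mirrors the deleted handler.
settings = MongoClient("mongodb://localhost:27017")["chat"]["settings"]
session_id = "example-session"

now = datetime.now(timezone.utc)
settings.update_one(
    {"sessionId": session_id},
    {
        "$set": {"shareConversationsWithModelAuthors": True, "updatedAt": now},
        "$setOnInsert": {"createdAt": now},  # written only when the upsert creates the document
    },
    upsert=True,
)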
 
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/enums.py DELETED
@@ -1,85 +0,0 @@
1
- """
2
- All of the Enums that are used throughout the chardet package.
3
-
4
- :author: Dan Blanchard ([email protected])
5
- """
6
-
7
- from enum import Enum, Flag
8
-
9
-
10
- class InputState:
11
- """
12
- This enum represents the different states a universal detector can be in.
13
- """
14
-
15
- PURE_ASCII = 0
16
- ESC_ASCII = 1
17
- HIGH_BYTE = 2
18
-
19
-
20
- class LanguageFilter(Flag):
21
- """
22
- This enum represents the different language filters we can apply to a
23
- ``UniversalDetector``.
24
- """
25
-
26
- NONE = 0x00
27
- CHINESE_SIMPLIFIED = 0x01
28
- CHINESE_TRADITIONAL = 0x02
29
- JAPANESE = 0x04
30
- KOREAN = 0x08
31
- NON_CJK = 0x10
32
- ALL = 0x1F
33
- CHINESE = CHINESE_SIMPLIFIED | CHINESE_TRADITIONAL
34
- CJK = CHINESE | JAPANESE | KOREAN
35
-
36
-
37
- class ProbingState(Enum):
38
- """
39
- This enum represents the different states a prober can be in.
40
- """
41
-
42
- DETECTING = 0
43
- FOUND_IT = 1
44
- NOT_ME = 2
45
-
46
-
47
- class MachineState:
48
- """
49
- This enum represents the different states a state machine can be in.
50
- """
51
-
52
- START = 0
53
- ERROR = 1
54
- ITS_ME = 2
55
-
56
-
57
- class SequenceLikelihood:
58
- """
59
- This enum represents the likelihood of a character following the previous one.
60
- """
61
-
62
- NEGATIVE = 0
63
- UNLIKELY = 1
64
- LIKELY = 2
65
- POSITIVE = 3
66
-
67
- @classmethod
68
- def get_num_categories(cls) -> int:
69
- """:returns: The number of likelihood categories in the enum."""
70
- return 4
71
-
72
-
73
- class CharacterCategory:
74
- """
75
- This enum represents the different categories language models for
76
- ``SingleByteCharsetProber`` put characters into.
77
-
78
- Anything less than CONTROL is considered a letter.
79
- """
80
-
81
- UNDEFINED = 255
82
- LINE_BREAK = 254
83
- SYMBOL = 253
84
- DIGIT = 252
85
- CONTROL = 251
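For context, LanguageFilter above is a Flag enum, so its members compose with bitwise OR, which is exactly how the CHINESE and CJK aliases are built. A minimal usage sketch through chardet's public detector follows, assuming a standard chardet install; the sample text and printed result are illustrative.

from chardet.enums import LanguageFilter
from chardet.universaldetector import UniversalDetector

# Flag members combine with |, matching the CHINESE / CJK aliases above.
assert (LanguageFilter.CHINESE | LanguageFilter.JAPANESE | LanguageFilter.KOREAN) == LanguageFilter.CJK

# Restrict detection to CJK encodings via the filter.
detector = UniversalDetector(lang_filter=LanguageFilter.CJK)
detector.feed(("こんにちは、世界。" * 20).encode("euc_jp"))
detector.close()
print(detector.result)  # e.g. {'encoding': 'EUC-JP', 'confidence': ..., 'language': 'Japanese'}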
 
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_windows_renderer.py DELETED
@@ -1,56 +0,0 @@
1
- from typing import Iterable, Sequence, Tuple, cast
2
-
3
- from pip._vendor.rich._win32_console import LegacyWindowsTerm, WindowsCoordinates
4
- from pip._vendor.rich.segment import ControlCode, ControlType, Segment
5
-
6
-
7
- def legacy_windows_render(buffer: Iterable[Segment], term: LegacyWindowsTerm) -> None:
8
- """Makes appropriate Windows Console API calls based on the segments in the buffer.
9
-
10
- Args:
11
- buffer (Iterable[Segment]): Iterable of Segments to convert to Win32 API calls.
12
- term (LegacyWindowsTerm): Used to call the Windows Console API.
13
- """
14
- for text, style, control in buffer:
15
- if not control:
16
- if style:
17
- term.write_styled(text, style)
18
- else:
19
- term.write_text(text)
20
- else:
21
- control_codes: Sequence[ControlCode] = control
22
- for control_code in control_codes:
23
- control_type = control_code[0]
24
- if control_type == ControlType.CURSOR_MOVE_TO:
25
- _, x, y = cast(Tuple[ControlType, int, int], control_code)
26
- term.move_cursor_to(WindowsCoordinates(row=y - 1, col=x - 1))
27
- elif control_type == ControlType.CARRIAGE_RETURN:
28
- term.write_text("\r")
29
- elif control_type == ControlType.HOME:
30
- term.move_cursor_to(WindowsCoordinates(0, 0))
31
- elif control_type == ControlType.CURSOR_UP:
32
- term.move_cursor_up()
33
- elif control_type == ControlType.CURSOR_DOWN:
34
- term.move_cursor_down()
35
- elif control_type == ControlType.CURSOR_FORWARD:
36
- term.move_cursor_forward()
37
- elif control_type == ControlType.CURSOR_BACKWARD:
38
- term.move_cursor_backward()
39
- elif control_type == ControlType.CURSOR_MOVE_TO_COLUMN:
40
- _, column = cast(Tuple[ControlType, int], control_code)
41
- term.move_cursor_to_column(column - 1)
42
- elif control_type == ControlType.HIDE_CURSOR:
43
- term.hide_cursor()
44
- elif control_type == ControlType.SHOW_CURSOR:
45
- term.show_cursor()
46
- elif control_type == ControlType.ERASE_IN_LINE:
47
- _, mode = cast(Tuple[ControlType, int], control_code)
48
- if mode == 0:
49
- term.erase_end_of_line()
50
- elif mode == 1:
51
- term.erase_start_of_line()
52
- elif mode == 2:
53
- term.erase_line()
54
- elif control_type == ControlType.SET_WINDOW_TITLE:
55
- _, title = cast(Tuple[ControlType, str], control_code)
56
- term.set_title(title)
 
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/console.py DELETED
The diff for this file is too large to render. See raw diff
 
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/tomli/__init__.py DELETED
@@ -1,11 +0,0 @@
1
- # SPDX-License-Identifier: MIT
2
- # SPDX-FileCopyrightText: 2021 Taneli Hukkinen
3
- # Licensed to PSF under a Contributor Agreement.
4
-
5
- __all__ = ("loads", "load", "TOMLDecodeError")
6
- __version__ = "2.0.1" # DO NOT EDIT THIS LINE MANUALLY. LET bump2version UTILITY DO IT
7
-
8
- from ._parser import TOMLDecodeError, load, loads
9
-
10
- # Pretend this exception was created here.
11
- TOMLDecodeError.__module__ = __name__
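For context, the deleted file only re-exports loads, load, and TOMLDecodeError from the private parser. A minimal usage sketch of those three names (the pyproject.toml path in the comment is just an example):

import tomli

config = tomli.loads('name = "demo"\n\n[tool]\nretries = 3\n')
print(config["tool"]["retries"])  # -> 3

# tomli.load() wants a *binary* file handle; "pyproject.toml" is just an example path.
# with open("pyproject.toml", "rb") as f:
#     project = tomli.load(f)

try:
    tomli.loads("retries = ")  # invalid TOML: missing value
except tomli.TOMLDecodeError as exc:
    print("parse error:", exc)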
 
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/dataset.py DELETED
@@ -1,49 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
- import os
3
-
4
- from detectron2.data import DatasetCatalog, MetadataCatalog
5
- from detectron2.data.datasets import load_coco_json
6
-
7
- _URL_PREFIX = "https://dl.fbaipublicfiles.com/densepose/data/"
8
-
9
-
10
- def get_densepose_metadata():
11
- meta = {
12
- "thing_classes": ["person"],
13
- "densepose_transform_src": _URL_PREFIX + "UV_symmetry_transforms.mat",
14
- "densepose_smpl_subdiv": _URL_PREFIX + "SMPL_subdiv.mat",
15
- "densepose_smpl_subdiv_transform": _URL_PREFIX + "SMPL_SUBDIV_TRANSFORM.mat",
16
- }
17
- return meta
18
-
19
-
20
- SPLITS = {
21
- "densepose_coco_2014_train": ("coco/train2014", "coco/annotations/densepose_train2014.json"),
22
- "densepose_coco_2014_minival": ("coco/val2014", "coco/annotations/densepose_minival2014.json"),
23
- "densepose_coco_2014_minival_100": (
24
- "coco/val2014",
25
- "coco/annotations/densepose_minival2014_100.json",
26
- ),
27
- "densepose_coco_2014_valminusminival": (
28
- "coco/val2014",
29
- "coco/annotations/densepose_valminusminival2014.json",
30
- ),
31
- }
32
-
33
- DENSEPOSE_KEYS = ["dp_x", "dp_y", "dp_I", "dp_U", "dp_V", "dp_masks"]
34
-
35
- for key, (image_root, json_file) in SPLITS.items():
36
- # Assume pre-defined datasets live in `./datasets`.
37
- json_file = os.path.join("datasets", json_file)
38
- image_root = os.path.join("datasets", image_root)
39
-
40
- DatasetCatalog.register(
41
- key,
42
- lambda key=key, json_file=json_file, image_root=image_root: load_coco_json(
43
- json_file, image_root, key, extra_annotation_keys=DENSEPOSE_KEYS
44
- ),
45
- )
46
-
47
- MetadataCatalog.get(key).set(
48
- json_file=json_file, image_root=image_root, **get_densepose_metadata()
49
- )
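For context, the loop above registers each split lazily at import time. A hedged sketch of consuming one registered split follows; it assumes the densepose package is importable under that name and that the COCO DensePose images and annotations are actually present under ./datasets.

from densepose import dataset  # noqa: F401  (assumed import path; importing runs the registration loop above)
from detectron2.data import DatasetCatalog, MetadataCatalog

name = "densepose_coco_2014_minival"
print(MetadataCatalog.get(name).thing_classes)   # -> ['person']

dataset_dicts = DatasetCatalog.get(name)         # triggers load_coco_json lazily
sample = dataset_dicts[0]
print(sample["file_name"], len(sample["annotations"]))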
 
spaces/CVPR/WALT/mmdet/models/dense_heads/paa_head.py DELETED
@@ -1,671 +0,0 @@
1
- import numpy as np
2
- import torch
3
- from mmcv.runner import force_fp32
4
-
5
- from mmdet.core import multi_apply, multiclass_nms
6
- from mmdet.core.bbox.iou_calculators import bbox_overlaps
7
- from mmdet.models import HEADS
8
- from mmdet.models.dense_heads import ATSSHead
9
-
10
- EPS = 1e-12
11
- try:
12
- import sklearn.mixture as skm
13
- except ImportError:
14
- skm = None
15
-
16
-
17
- def levels_to_images(mlvl_tensor):
18
- """Concat multi-level feature maps by image.
19
-
20
- [feature_level0, feature_level1...] -> [feature_image0, feature_image1...]
21
- Convert the shape of each element in mlvl_tensor from (N, C, H, W) to
22
- (N, H*W , C), then split the element to N elements with shape (H*W, C), and
23
- concat elements in same image of all level along first dimension.
24
-
25
- Args:
26
- mlvl_tensor (list[torch.Tensor]): list of Tensor which collect from
27
- corresponding level. Each element is of shape (N, C, H, W)
28
-
29
- Returns:
30
- list[torch.Tensor]: A list that contains N tensors and each tensor is
31
- of shape (num_elements, C)
32
- """
33
- batch_size = mlvl_tensor[0].size(0)
34
- batch_list = [[] for _ in range(batch_size)]
35
- channels = mlvl_tensor[0].size(1)
36
- for t in mlvl_tensor:
37
- t = t.permute(0, 2, 3, 1)
38
- t = t.view(batch_size, -1, channels).contiguous()
39
- for img in range(batch_size):
40
- batch_list[img].append(t[img])
41
- return [torch.cat(item, 0) for item in batch_list]
42
-
43
-
44
- @HEADS.register_module()
45
- class PAAHead(ATSSHead):
46
- """Head of PAAAssignment: Probabilistic Anchor Assignment with IoU
47
- Prediction for Object Detection.
48
-
49
- Code is modified from the `official github repo
50
- <https://github.com/kkhoot/PAA/blob/master/paa_core
51
- /modeling/rpn/paa/loss.py>`_.
52
-
53
- More details can be found in the `paper
54
- <https://arxiv.org/abs/2007.08103>`_ .
55
-
56
- Args:
57
- topk (int): Select topk samples with smallest loss in
58
- each level.
59
- score_voting (bool): Whether to use score voting in post-process.
60
- covariance_type : String describing the type of covariance parameters
61
- to be used in :class:`sklearn.mixture.GaussianMixture`.
62
- It must be one of:
63
-
64
- - 'full': each component has its own general covariance matrix
65
- - 'tied': all components share the same general covariance matrix
66
- - 'diag': each component has its own diagonal covariance matrix
67
- - 'spherical': each component has its own single variance
68
- Default: 'diag'. From 'full' to 'spherical', the gmm fitting
69
- process is faster yet the performance could be influenced. For most
70
- cases, 'diag' should be a good choice.
71
- """
72
-
73
- def __init__(self,
74
- *args,
75
- topk=9,
76
- score_voting=True,
77
- covariance_type='diag',
78
- **kwargs):
79
- # topk used in paa reassign process
80
- self.topk = topk
81
- self.with_score_voting = score_voting
82
- self.covariance_type = covariance_type
83
- super(PAAHead, self).__init__(*args, **kwargs)
84
-
85
- @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'iou_preds'))
86
- def loss(self,
87
- cls_scores,
88
- bbox_preds,
89
- iou_preds,
90
- gt_bboxes,
91
- gt_labels,
92
- img_metas,
93
- gt_bboxes_ignore=None):
94
- """Compute losses of the head.
95
-
96
- Args:
97
- cls_scores (list[Tensor]): Box scores for each scale level
98
- Has shape (N, num_anchors * num_classes, H, W)
99
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
100
- level with shape (N, num_anchors * 4, H, W)
101
- iou_preds (list[Tensor]): iou_preds for each scale
102
- level with shape (N, num_anchors * 1, H, W)
103
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
104
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
105
- gt_labels (list[Tensor]): class indices corresponding to each box
106
- img_metas (list[dict]): Meta information of each image, e.g.,
107
- image size, scaling factor, etc.
108
- gt_bboxes_ignore (list[Tensor] | None): Specify which bounding
109
- boxes can be ignored when are computing the loss.
110
-
111
- Returns:
112
- dict[str, Tensor]: A dictionary of loss gmm_assignment.
113
- """
114
-
115
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
116
- assert len(featmap_sizes) == self.anchor_generator.num_levels
117
-
118
- device = cls_scores[0].device
119
- anchor_list, valid_flag_list = self.get_anchors(
120
- featmap_sizes, img_metas, device=device)
121
- label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1
122
- cls_reg_targets = self.get_targets(
123
- anchor_list,
124
- valid_flag_list,
125
- gt_bboxes,
126
- img_metas,
127
- gt_bboxes_ignore_list=gt_bboxes_ignore,
128
- gt_labels_list=gt_labels,
129
- label_channels=label_channels,
130
- )
131
- (labels, labels_weight, bboxes_target, bboxes_weight, pos_inds,
132
- pos_gt_index) = cls_reg_targets
133
- cls_scores = levels_to_images(cls_scores)
134
- cls_scores = [
135
- item.reshape(-1, self.cls_out_channels) for item in cls_scores
136
- ]
137
- bbox_preds = levels_to_images(bbox_preds)
138
- bbox_preds = [item.reshape(-1, 4) for item in bbox_preds]
139
- iou_preds = levels_to_images(iou_preds)
140
- iou_preds = [item.reshape(-1, 1) for item in iou_preds]
141
- pos_losses_list, = multi_apply(self.get_pos_loss, anchor_list,
142
- cls_scores, bbox_preds, labels,
143
- labels_weight, bboxes_target,
144
- bboxes_weight, pos_inds)
145
-
146
- with torch.no_grad():
147
- reassign_labels, reassign_label_weight, \
148
- reassign_bbox_weights, num_pos = multi_apply(
149
- self.paa_reassign,
150
- pos_losses_list,
151
- labels,
152
- labels_weight,
153
- bboxes_weight,
154
- pos_inds,
155
- pos_gt_index,
156
- anchor_list)
157
- num_pos = sum(num_pos)
158
- # convert all tensor list to a flatten tensor
159
- cls_scores = torch.cat(cls_scores, 0).view(-1, cls_scores[0].size(-1))
160
- bbox_preds = torch.cat(bbox_preds, 0).view(-1, bbox_preds[0].size(-1))
161
- iou_preds = torch.cat(iou_preds, 0).view(-1, iou_preds[0].size(-1))
162
- labels = torch.cat(reassign_labels, 0).view(-1)
163
- flatten_anchors = torch.cat(
164
- [torch.cat(item, 0) for item in anchor_list])
165
- labels_weight = torch.cat(reassign_label_weight, 0).view(-1)
166
- bboxes_target = torch.cat(bboxes_target,
167
- 0).view(-1, bboxes_target[0].size(-1))
168
-
169
- pos_inds_flatten = ((labels >= 0)
170
- &
171
- (labels < self.num_classes)).nonzero().reshape(-1)
172
-
173
- losses_cls = self.loss_cls(
174
- cls_scores,
175
- labels,
176
- labels_weight,
177
- avg_factor=max(num_pos, len(img_metas))) # avoid num_pos=0
178
- if num_pos:
179
- pos_bbox_pred = self.bbox_coder.decode(
180
- flatten_anchors[pos_inds_flatten],
181
- bbox_preds[pos_inds_flatten])
182
- pos_bbox_target = bboxes_target[pos_inds_flatten]
183
- iou_target = bbox_overlaps(
184
- pos_bbox_pred.detach(), pos_bbox_target, is_aligned=True)
185
- losses_iou = self.loss_centerness(
186
- iou_preds[pos_inds_flatten],
187
- iou_target.unsqueeze(-1),
188
- avg_factor=num_pos)
189
- losses_bbox = self.loss_bbox(
190
- pos_bbox_pred,
191
- pos_bbox_target,
192
- iou_target.clamp(min=EPS),
193
- avg_factor=iou_target.sum())
194
- else:
195
- losses_iou = iou_preds.sum() * 0
196
- losses_bbox = bbox_preds.sum() * 0
197
-
198
- return dict(
199
- loss_cls=losses_cls, loss_bbox=losses_bbox, loss_iou=losses_iou)
200
-
201
- def get_pos_loss(self, anchors, cls_score, bbox_pred, label, label_weight,
202
- bbox_target, bbox_weight, pos_inds):
203
- """Calculate loss of all potential positive samples obtained from first
204
- match process.
205
-
206
- Args:
207
- anchors (list[Tensor]): Anchors of each scale.
208
- cls_score (Tensor): Box scores of single image with shape
209
- (num_anchors, num_classes)
210
- bbox_pred (Tensor): Box energies / deltas of single image
211
- with shape (num_anchors, 4)
212
- label (Tensor): classification target of each anchor with
213
- shape (num_anchors,)
214
- label_weight (Tensor): Classification loss weight of each
215
- anchor with shape (num_anchors).
216
- bbox_target (dict): Regression target of each anchor with
217
- shape (num_anchors, 4).
218
- bbox_weight (Tensor): Bbox weight of each anchor with shape
219
- (num_anchors, 4).
220
- pos_inds (Tensor): Index of all positive samples got from
221
- first assign process.
222
-
223
- Returns:
224
- Tensor: Losses of all positive samples in single image.
225
- """
226
- if not len(pos_inds):
227
- return cls_score.new([]),
228
- anchors_all_level = torch.cat(anchors, 0)
229
- pos_scores = cls_score[pos_inds]
230
- pos_bbox_pred = bbox_pred[pos_inds]
231
- pos_label = label[pos_inds]
232
- pos_label_weight = label_weight[pos_inds]
233
- pos_bbox_target = bbox_target[pos_inds]
234
- pos_bbox_weight = bbox_weight[pos_inds]
235
- pos_anchors = anchors_all_level[pos_inds]
236
- pos_bbox_pred = self.bbox_coder.decode(pos_anchors, pos_bbox_pred)
237
-
238
- # to keep loss dimension
239
- loss_cls = self.loss_cls(
240
- pos_scores,
241
- pos_label,
242
- pos_label_weight,
243
- avg_factor=self.loss_cls.loss_weight,
244
- reduction_override='none')
245
-
246
- loss_bbox = self.loss_bbox(
247
- pos_bbox_pred,
248
- pos_bbox_target,
249
- pos_bbox_weight,
250
- avg_factor=self.loss_cls.loss_weight,
251
- reduction_override='none')
252
-
253
- loss_cls = loss_cls.sum(-1)
254
- pos_loss = loss_bbox + loss_cls
255
- return pos_loss,
256
-
257
- def paa_reassign(self, pos_losses, label, label_weight, bbox_weight,
258
- pos_inds, pos_gt_inds, anchors):
259
- """Fit loss to GMM distribution and separate positive, ignore, negative
260
- samples again with GMM model.
261
-
262
- Args:
263
- pos_losses (Tensor): Losses of all positive samples in
264
- single image.
265
- label (Tensor): classification target of each anchor with
266
- shape (num_anchors,)
267
- label_weight (Tensor): Classification loss weight of each
268
- anchor with shape (num_anchors).
269
- bbox_weight (Tensor): Bbox weight of each anchor with shape
270
- (num_anchors, 4).
271
- pos_inds (Tensor): Index of all positive samples got from
272
- first assign process.
273
- pos_gt_inds (Tensor): Gt_index of all positive samples got
274
- from first assign process.
275
- anchors (list[Tensor]): Anchors of each scale.
276
-
277
- Returns:
278
- tuple: Usually returns a tuple containing learning targets.
279
-
280
- - label (Tensor): classification target of each anchor after
281
- paa assign, with shape (num_anchors,)
282
- - label_weight (Tensor): Classification loss weight of each
283
- anchor after paa assign, with shape (num_anchors).
284
- - bbox_weight (Tensor): Bbox weight of each anchor with shape
285
- (num_anchors, 4).
286
- - num_pos (int): The number of positive samples after paa
287
- assign.
288
- """
289
- if not len(pos_inds):
290
- return label, label_weight, bbox_weight, 0
291
- label = label.clone()
292
- label_weight = label_weight.clone()
293
- bbox_weight = bbox_weight.clone()
294
- num_gt = pos_gt_inds.max() + 1
295
- num_level = len(anchors)
296
- num_anchors_each_level = [item.size(0) for item in anchors]
297
- num_anchors_each_level.insert(0, 0)
298
- inds_level_interval = np.cumsum(num_anchors_each_level)
299
- pos_level_mask = []
300
- for i in range(num_level):
301
- mask = (pos_inds >= inds_level_interval[i]) & (
302
- pos_inds < inds_level_interval[i + 1])
303
- pos_level_mask.append(mask)
304
- pos_inds_after_paa = [label.new_tensor([])]
305
- ignore_inds_after_paa = [label.new_tensor([])]
306
- for gt_ind in range(num_gt):
307
- pos_inds_gmm = []
308
- pos_loss_gmm = []
309
- gt_mask = pos_gt_inds == gt_ind
310
- for level in range(num_level):
311
- level_mask = pos_level_mask[level]
312
- level_gt_mask = level_mask & gt_mask
313
- value, topk_inds = pos_losses[level_gt_mask].topk(
314
- min(level_gt_mask.sum(), self.topk), largest=False)
315
- pos_inds_gmm.append(pos_inds[level_gt_mask][topk_inds])
316
- pos_loss_gmm.append(value)
317
- pos_inds_gmm = torch.cat(pos_inds_gmm)
318
- pos_loss_gmm = torch.cat(pos_loss_gmm)
319
- # fix gmm need at least two sample
320
- if len(pos_inds_gmm) < 2:
321
- continue
322
- device = pos_inds_gmm.device
323
- pos_loss_gmm, sort_inds = pos_loss_gmm.sort()
324
- pos_inds_gmm = pos_inds_gmm[sort_inds]
325
- pos_loss_gmm = pos_loss_gmm.view(-1, 1).cpu().numpy()
326
- min_loss, max_loss = pos_loss_gmm.min(), pos_loss_gmm.max()
327
- means_init = np.array([min_loss, max_loss]).reshape(2, 1)
328
- weights_init = np.array([0.5, 0.5])
329
- precisions_init = np.array([1.0, 1.0]).reshape(2, 1, 1) # full
330
- if self.covariance_type == 'spherical':
331
- precisions_init = precisions_init.reshape(2)
332
- elif self.covariance_type == 'diag':
333
- precisions_init = precisions_init.reshape(2, 1)
334
- elif self.covariance_type == 'tied':
335
- precisions_init = np.array([[1.0]])
336
- if skm is None:
337
- raise ImportError('Please run "pip install sklearn" '
338
- 'to install sklearn first.')
339
- gmm = skm.GaussianMixture(
340
- 2,
341
- weights_init=weights_init,
342
- means_init=means_init,
343
- precisions_init=precisions_init,
344
- covariance_type=self.covariance_type)
345
- gmm.fit(pos_loss_gmm)
346
- gmm_assignment = gmm.predict(pos_loss_gmm)
347
- scores = gmm.score_samples(pos_loss_gmm)
348
- gmm_assignment = torch.from_numpy(gmm_assignment).to(device)
349
- scores = torch.from_numpy(scores).to(device)
350
-
351
- pos_inds_temp, ignore_inds_temp = self.gmm_separation_scheme(
352
- gmm_assignment, scores, pos_inds_gmm)
353
- pos_inds_after_paa.append(pos_inds_temp)
354
- ignore_inds_after_paa.append(ignore_inds_temp)
355
-
356
- pos_inds_after_paa = torch.cat(pos_inds_after_paa)
357
- ignore_inds_after_paa = torch.cat(ignore_inds_after_paa)
358
- reassign_mask = (pos_inds.unsqueeze(1) != pos_inds_after_paa).all(1)
359
- reassign_ids = pos_inds[reassign_mask]
360
- label[reassign_ids] = self.num_classes
361
- label_weight[ignore_inds_after_paa] = 0
362
- bbox_weight[reassign_ids] = 0
363
- num_pos = len(pos_inds_after_paa)
364
- return label, label_weight, bbox_weight, num_pos
365
-
366
- def gmm_separation_scheme(self, gmm_assignment, scores, pos_inds_gmm):
367
- """A general separation scheme for gmm model.
368
-
369
- It separates a GMM distribution of candidate samples into three
370
- parts, 0 1 and uncertain areas, and you can implement other
371
- separation schemes by rewriting this function.
372
-
373
- Args:
374
- gmm_assignment (Tensor): The prediction of GMM which is of shape
375
- (num_samples,). The 0/1 value indicates the distribution
376
- that each sample comes from.
377
- scores (Tensor): The probability of sample coming from the
378
- fit GMM distribution. The tensor is of shape (num_samples,).
379
- pos_inds_gmm (Tensor): All the indexes of samples which are used
380
- to fit GMM model. The tensor is of shape (num_samples,)
381
-
382
- Returns:
383
- tuple[Tensor]: The indices of positive and ignored samples.
384
-
385
- - pos_inds_temp (Tensor): Indices of positive samples.
386
- - ignore_inds_temp (Tensor): Indices of ignore samples.
387
- """
388
- # The implementation is (c) in Fig.3 in origin paper instead of (b).
389
- # You can refer to issues such as
390
- # https://github.com/kkhoot/PAA/issues/8 and
391
- # https://github.com/kkhoot/PAA/issues/9.
392
- fgs = gmm_assignment == 0
393
- pos_inds_temp = fgs.new_tensor([], dtype=torch.long)
394
- ignore_inds_temp = fgs.new_tensor([], dtype=torch.long)
395
- if fgs.nonzero().numel():
396
- _, pos_thr_ind = scores[fgs].topk(1)
397
- pos_inds_temp = pos_inds_gmm[fgs][:pos_thr_ind + 1]
398
- ignore_inds_temp = pos_inds_gmm.new_tensor([])
399
- return pos_inds_temp, ignore_inds_temp
400
-
401
- def get_targets(
402
- self,
403
- anchor_list,
404
- valid_flag_list,
405
- gt_bboxes_list,
406
- img_metas,
407
- gt_bboxes_ignore_list=None,
408
- gt_labels_list=None,
409
- label_channels=1,
410
- unmap_outputs=True,
411
- ):
412
- """Get targets for PAA head.
413
-
414
- This method is almost the same as `AnchorHead.get_targets()`. We direct
415
- return the results from _get_targets_single instead map it to levels
416
- by images_to_levels function.
417
-
418
- Args:
419
- anchor_list (list[list[Tensor]]): Multi level anchors of each
420
- image. The outer list indicates images, and the inner list
421
- corresponds to feature levels of the image. Each element of
422
- the inner list is a tensor of shape (num_anchors, 4).
423
- valid_flag_list (list[list[Tensor]]): Multi level valid flags of
424
- each image. The outer list indicates images, and the inner list
425
- corresponds to feature levels of the image. Each element of
426
- the inner list is a tensor of shape (num_anchors, )
427
- gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image.
428
- img_metas (list[dict]): Meta info of each image.
429
- gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be
430
- ignored.
431
- gt_labels_list (list[Tensor]): Ground truth labels of each box.
432
- label_channels (int): Channel of label.
433
- unmap_outputs (bool): Whether to map outputs back to the original
434
- set of anchors.
435
-
436
- Returns:
437
- tuple: Usually returns a tuple containing learning targets.
438
-
439
- - labels (list[Tensor]): Labels of all anchors, each with
440
- shape (num_anchors,).
441
- - label_weights (list[Tensor]): Label weights of all anchor.
442
- each with shape (num_anchors,).
443
- - bbox_targets (list[Tensor]): BBox targets of all anchors.
444
- each with shape (num_anchors, 4).
445
- - bbox_weights (list[Tensor]): BBox weights of all anchors.
446
- each with shape (num_anchors, 4).
447
- - pos_inds (list[Tensor]): Contains all index of positive
448
- sample in all anchor.
449
- - gt_inds (list[Tensor]): Contains all gt_index of positive
450
- sample in all anchor.
451
- """
452
-
453
- num_imgs = len(img_metas)
454
- assert len(anchor_list) == len(valid_flag_list) == num_imgs
455
- concat_anchor_list = []
456
- concat_valid_flag_list = []
457
- for i in range(num_imgs):
458
- assert len(anchor_list[i]) == len(valid_flag_list[i])
459
- concat_anchor_list.append(torch.cat(anchor_list[i]))
460
- concat_valid_flag_list.append(torch.cat(valid_flag_list[i]))
461
-
462
- # compute targets for each image
463
- if gt_bboxes_ignore_list is None:
464
- gt_bboxes_ignore_list = [None for _ in range(num_imgs)]
465
- if gt_labels_list is None:
466
- gt_labels_list = [None for _ in range(num_imgs)]
467
- results = multi_apply(
468
- self._get_targets_single,
469
- concat_anchor_list,
470
- concat_valid_flag_list,
471
- gt_bboxes_list,
472
- gt_bboxes_ignore_list,
473
- gt_labels_list,
474
- img_metas,
475
- label_channels=label_channels,
476
- unmap_outputs=unmap_outputs)
477
-
478
- (labels, label_weights, bbox_targets, bbox_weights, valid_pos_inds,
479
- valid_neg_inds, sampling_result) = results
480
-
481
- # Due to valid flag of anchors, we have to calculate the real pos_inds
482
- # in origin anchor set.
483
- pos_inds = []
484
- for i, single_labels in enumerate(labels):
485
- pos_mask = (0 <= single_labels) & (
486
- single_labels < self.num_classes)
487
- pos_inds.append(pos_mask.nonzero().view(-1))
488
-
489
- gt_inds = [item.pos_assigned_gt_inds for item in sampling_result]
490
- return (labels, label_weights, bbox_targets, bbox_weights, pos_inds,
491
- gt_inds)
492
-
493
- def _get_targets_single(self,
494
- flat_anchors,
495
- valid_flags,
496
- gt_bboxes,
497
- gt_bboxes_ignore,
498
- gt_labels,
499
- img_meta,
500
- label_channels=1,
501
- unmap_outputs=True):
502
- """Compute regression and classification targets for anchors in a
503
- single image.
504
-
505
- This method is same as `AnchorHead._get_targets_single()`.
506
- """
507
- assert unmap_outputs, 'We must map outputs back to the original' \
508
- 'set of anchors in PAAhead'
509
- return super(ATSSHead, self)._get_targets_single(
510
- flat_anchors,
511
- valid_flags,
512
- gt_bboxes,
513
- gt_bboxes_ignore,
514
- gt_labels,
515
- img_meta,
516
- label_channels=1,
517
- unmap_outputs=True)
518
-
519
- def _get_bboxes(self,
520
- cls_scores,
521
- bbox_preds,
522
- iou_preds,
523
- mlvl_anchors,
524
- img_shapes,
525
- scale_factors,
526
- cfg,
527
- rescale=False,
528
- with_nms=True):
529
- """Transform outputs for a single batch item into labeled boxes.
530
-
531
- This method is almost same as `ATSSHead._get_bboxes()`.
532
- We use sqrt(iou_preds * cls_scores) in NMS process instead of just
533
- cls_scores. Besides, score voting is used when `` score_voting``
534
- is set to True.
535
- """
536
- assert with_nms, 'PAA only supports "with_nms=True" now'
537
- assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors)
538
- batch_size = cls_scores[0].shape[0]
539
-
540
- mlvl_bboxes = []
541
- mlvl_scores = []
542
- mlvl_iou_preds = []
543
- for cls_score, bbox_pred, iou_preds, anchors in zip(
544
- cls_scores, bbox_preds, iou_preds, mlvl_anchors):
545
- assert cls_score.size()[-2:] == bbox_pred.size()[-2:]
546
-
547
- scores = cls_score.permute(0, 2, 3, 1).reshape(
548
- batch_size, -1, self.cls_out_channels).sigmoid()
549
- bbox_pred = bbox_pred.permute(0, 2, 3,
550
- 1).reshape(batch_size, -1, 4)
551
- iou_preds = iou_preds.permute(0, 2, 3, 1).reshape(batch_size,
552
- -1).sigmoid()
553
-
554
- nms_pre = cfg.get('nms_pre', -1)
555
- if nms_pre > 0 and scores.shape[1] > nms_pre:
556
- max_scores, _ = (scores * iou_preds[..., None]).sqrt().max(-1)
557
- _, topk_inds = max_scores.topk(nms_pre)
558
- batch_inds = torch.arange(batch_size).view(
559
- -1, 1).expand_as(topk_inds).long()
560
- anchors = anchors[topk_inds, :]
561
- bbox_pred = bbox_pred[batch_inds, topk_inds, :]
562
- scores = scores[batch_inds, topk_inds, :]
563
- iou_preds = iou_preds[batch_inds, topk_inds]
564
- else:
565
- anchors = anchors.expand_as(bbox_pred)
566
-
567
- bboxes = self.bbox_coder.decode(
568
- anchors, bbox_pred, max_shape=img_shapes)
569
- mlvl_bboxes.append(bboxes)
570
- mlvl_scores.append(scores)
571
- mlvl_iou_preds.append(iou_preds)
572
-
573
- batch_mlvl_bboxes = torch.cat(mlvl_bboxes, dim=1)
574
- if rescale:
575
- batch_mlvl_bboxes /= batch_mlvl_bboxes.new_tensor(
576
- scale_factors).unsqueeze(1)
577
- batch_mlvl_scores = torch.cat(mlvl_scores, dim=1)
578
- # Add a dummy background class to the backend when using sigmoid
579
- # remind that we set FG labels to [0, num_class-1] since mmdet v2.0
580
- # BG cat_id: num_class
581
- padding = batch_mlvl_scores.new_zeros(batch_size,
582
- batch_mlvl_scores.shape[1], 1)
583
- batch_mlvl_scores = torch.cat([batch_mlvl_scores, padding], dim=-1)
584
- batch_mlvl_iou_preds = torch.cat(mlvl_iou_preds, dim=1)
585
- batch_mlvl_nms_scores = (batch_mlvl_scores *
586
- batch_mlvl_iou_preds[..., None]).sqrt()
587
-
588
- det_results = []
589
- for (mlvl_bboxes, mlvl_scores) in zip(batch_mlvl_bboxes,
590
- batch_mlvl_nms_scores):
591
- det_bbox, det_label = multiclass_nms(
592
- mlvl_bboxes,
593
- mlvl_scores,
594
- cfg.score_thr,
595
- cfg.nms,
596
- cfg.max_per_img,
597
- score_factors=None)
598
- if self.with_score_voting and len(det_bbox) > 0:
599
- det_bbox, det_label = self.score_voting(
600
- det_bbox, det_label, mlvl_bboxes, mlvl_scores,
601
- cfg.score_thr)
602
- det_results.append(tuple([det_bbox, det_label]))
603
-
604
- return det_results
605
-
606
- def score_voting(self, det_bboxes, det_labels, mlvl_bboxes,
607
- mlvl_nms_scores, score_thr):
608
- """Implementation of score voting method works on each remaining boxes
609
- after NMS procedure.
610
-
611
- Args:
612
- det_bboxes (Tensor): Remaining boxes after NMS procedure,
613
- with shape (k, 5), each dimension means
614
- (x1, y1, x2, y2, score).
615
- det_labels (Tensor): The label of remaining boxes, with shape
616
- (k, 1),Labels are 0-based.
617
- mlvl_bboxes (Tensor): All boxes before the NMS procedure,
618
- with shape (num_anchors,4).
619
- mlvl_nms_scores (Tensor): The scores of all boxes which is used
620
- in the NMS procedure, with shape (num_anchors, num_class)
621
- mlvl_iou_preds (Tensor): The predictions of IOU of all boxes
622
- before the NMS procedure, with shape (num_anchors, 1)
623
- score_thr (float): The score threshold of bboxes.
624
-
625
- Returns:
626
- tuple: Usually returns a tuple containing voting results.
627
-
628
- - det_bboxes_voted (Tensor): Remaining boxes after
629
- score voting procedure, with shape (k, 5), each
630
- dimension means (x1, y1, x2, y2, score).
631
- - det_labels_voted (Tensor): Label of remaining bboxes
632
- after voting, with shape (num_anchors,).
633
- """
634
- candidate_mask = mlvl_nms_scores > score_thr
635
- candidate_mask_nonzeros = candidate_mask.nonzero()
636
- candidate_inds = candidate_mask_nonzeros[:, 0]
637
- candidate_labels = candidate_mask_nonzeros[:, 1]
638
- candidate_bboxes = mlvl_bboxes[candidate_inds]
639
- candidate_scores = mlvl_nms_scores[candidate_mask]
640
- det_bboxes_voted = []
641
- det_labels_voted = []
642
- for cls in range(self.cls_out_channels):
643
- candidate_cls_mask = candidate_labels == cls
644
- if not candidate_cls_mask.any():
645
- continue
646
- candidate_cls_scores = candidate_scores[candidate_cls_mask]
647
- candidate_cls_bboxes = candidate_bboxes[candidate_cls_mask]
648
- det_cls_mask = det_labels == cls
649
- det_cls_bboxes = det_bboxes[det_cls_mask].view(
650
- -1, det_bboxes.size(-1))
651
- det_candidate_ious = bbox_overlaps(det_cls_bboxes[:, :4],
652
- candidate_cls_bboxes)
653
- for det_ind in range(len(det_cls_bboxes)):
654
- single_det_ious = det_candidate_ious[det_ind]
655
- pos_ious_mask = single_det_ious > 0.01
656
- pos_ious = single_det_ious[pos_ious_mask]
657
- pos_bboxes = candidate_cls_bboxes[pos_ious_mask]
658
- pos_scores = candidate_cls_scores[pos_ious_mask]
659
- pis = (torch.exp(-(1 - pos_ious)**2 / 0.025) *
660
- pos_scores)[:, None]
661
- voted_box = torch.sum(
662
- pis * pos_bboxes, dim=0) / torch.sum(
663
- pis, dim=0)
664
- voted_score = det_cls_bboxes[det_ind][-1:][None, :]
665
- det_bboxes_voted.append(
666
- torch.cat((voted_box[None, :], voted_score), dim=1))
667
- det_labels_voted.append(cls)
668
-
669
- det_bboxes_voted = torch.cat(det_bboxes_voted, dim=0)
670
- det_labels_voted = det_labels.new_tensor(det_labels_voted)
671
- return det_bboxes_voted, det_labels_voted
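For context, the heart of paa_reassign above is fitting a two-component Gaussian mixture to the per-anchor losses and keeping the low-loss component as positives. Below is a standalone sketch of that split on synthetic losses, mirroring the means/precisions initialisation used in the head; all numbers are illustrative.

import numpy as np
import sklearn.mixture as skm

rng = np.random.default_rng(0)
losses = np.concatenate([rng.normal(0.3, 0.05, 20),   # low-loss candidates
                         rng.normal(1.5, 0.20, 20)])  # high-loss candidates
losses = np.sort(losses).reshape(-1, 1)

gmm = skm.GaussianMixture(
    2,
    weights_init=np.array([0.5, 0.5]),
    means_init=np.array([losses.min(), losses.max()]).reshape(2, 1),
    precisions_init=np.array([1.0, 1.0]).reshape(2, 1),  # 'diag' shape, as in the head
    covariance_type='diag',
)
gmm.fit(losses)
assignment = gmm.predict(losses)   # component 0 was initialised at the loss minimum
pos_mask = assignment == 0         # kept as positives, as in gmm_separation_scheme
print(int(pos_mask.sum()), 'of', len(losses), 'candidates kept as positives')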
 
spaces/CVPR/regionclip-demo/detectron2/data/datasets/clip_prompt_utils.py DELETED
@@ -1,441 +0,0 @@
1
- import gzip
2
- import html
3
- import os
4
- from functools import lru_cache
5
-
6
- import ftfy
7
- import regex as re
8
- import torch
9
- import numpy as np
10
- from typing import Union, List
11
-
12
- from .lvis_v1_categories import LVIS_CATEGORIES as LVIS_V1_CATEGORIES
13
- from .coco_zeroshot_categories import COCO_UNSEEN_CLS, COCO_SEEN_CLS, COCO_OVD_ALL_CLS, COCO_80_ALL_CLS
14
-
15
- # https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py
16
- @lru_cache()
17
- def default_bpe():
18
- return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz")
19
-
20
-
21
- @lru_cache()
22
- def bytes_to_unicode():
23
- """
24
- Returns list of utf-8 byte and a corresponding list of unicode strings.
25
- The reversible bpe codes work on unicode strings.
26
- This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
27
- When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
28
- This is a signficant percentage of your normal, say, 32K bpe vocab.
29
- To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
30
- And avoids mapping to whitespace/control characters the bpe code barfs on.
31
- """
32
- bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
33
- cs = bs[:]
34
- n = 0
35
- for b in range(2**8):
36
- if b not in bs:
37
- bs.append(b)
38
- cs.append(2**8+n)
39
- n += 1
40
- cs = [chr(n) for n in cs]
41
- return dict(zip(bs, cs))
42
-
43
-
44
- def get_pairs(word):
45
- """Return set of symbol pairs in a word.
46
- Word is represented as tuple of symbols (symbols being variable-length strings).
47
- """
48
- pairs = set()
49
- prev_char = word[0]
50
- for char in word[1:]:
51
- pairs.add((prev_char, char))
52
- prev_char = char
53
- return pairs
54
-
55
-
56
- def basic_clean(text):
57
- text = ftfy.fix_text(text)
58
- text = html.unescape(html.unescape(text))
59
- return text.strip()
60
-
61
-
62
- def whitespace_clean(text):
63
- text = re.sub(r'\s+', ' ', text)
64
- text = text.strip()
65
- return text
66
-
67
-
68
- class SimpleTokenizer(object):
69
- def __init__(self, bpe_path: str = default_bpe()):
70
- self.byte_encoder = bytes_to_unicode()
71
- self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
72
- merges = gzip.open(bpe_path).read().decode("utf-8").split('\n')
73
- merges = merges[1:49152-256-2+1]
74
- merges = [tuple(merge.split()) for merge in merges]
75
- vocab = list(bytes_to_unicode().values())
76
- vocab = vocab + [v+'</w>' for v in vocab]
77
- self.vocab = vocab
78
- for merge in merges:
79
- vocab.append(''.join(merge))
80
- vocab.extend(['<|startoftext|>', '<|endoftext|>'])
81
- self.encoder = dict(zip(vocab, range(len(vocab))))
82
- self.decoder = {v: k for k, v in self.encoder.items()}
83
- self.bpe_ranks = dict(zip(merges, range(len(merges))))
84
- self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
85
- self.pat = re.compile(r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE)
86
-
87
- def bpe(self, token):
88
- if token in self.cache:
89
- return self.cache[token]
90
- word = tuple(token[:-1]) + ( token[-1] + '</w>',)
91
- pairs = get_pairs(word)
92
-
93
- if not pairs:
94
- return token+'</w>'
95
-
96
- while True:
97
- bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf')))
98
- if bigram not in self.bpe_ranks:
99
- break
100
- first, second = bigram
101
- new_word = []
102
- i = 0
103
- while i < len(word):
104
- try:
105
- j = word.index(first, i)
106
- new_word.extend(word[i:j])
107
- i = j
108
- except:
109
- new_word.extend(word[i:])
110
- break
111
-
112
- if word[i] == first and i < len(word)-1 and word[i+1] == second:
113
- new_word.append(first+second)
114
- i += 2
115
- else:
116
- new_word.append(word[i])
117
- i += 1
118
- new_word = tuple(new_word)
119
- word = new_word
120
- if len(word) == 1:
121
- break
122
- else:
123
- pairs = get_pairs(word)
124
- word = ' '.join(word)
125
- self.cache[token] = word
126
- return word
127
-
128
- def encode(self, text, return_link=False):
129
- bpe_tokens = []
130
- text = whitespace_clean(basic_clean(text)).lower()
131
- str2id_links = [] # link original sentence word to the tokenized ids of its subwords
132
- for token in re.findall(self.pat, text):
133
- this_link = [token]
134
- token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
135
- ids = [self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' ')]
136
- bpe_tokens.extend(ids)
137
- this_link.append(ids)
138
- str2id_links.append(this_link)
139
- if return_link:
140
- return bpe_tokens, str2id_links
141
- return bpe_tokens
142
-
143
- def decode(self, tokens):
144
- text = ''.join([self.decoder[token] for token in tokens])
145
- text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
146
- return text
147
-
148
-
149
- # https://github.com/openai/CLIP/blob/main/clip/clip.py
150
- #_tokenizer = SimpleTokenizer()
151
-
152
- def tokenize(texts: Union[str, List[str]], context_length: int = 77):
153
- if isinstance(texts, str):
154
- texts = [texts]
155
-
156
- sot_token = _tokenizer.encoder["<|startoftext|>"]
157
- eot_token = _tokenizer.encoder["<|endoftext|>"]
158
- all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts]
159
- result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
160
-
161
- for i, tokens in enumerate(all_tokens):
162
- if len(tokens) > context_length:
163
- raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
164
- result[i, :len(tokens)] = torch.tensor(tokens)
165
-
166
- return result
167
-
168
-
169
- # prompt_engineering.py
170
- def get_prompt_templates():
171
- # prompt_templates = [
172
- # 'There is a {} in the scene.',
173
- # 'There is the {} in the scene.',
174
- # 'a photo of a {} in the scene.',
175
- # 'a photo of the {} in the scene.',
176
- # 'a photo of one {} in the scene.',
177
-
178
- # 'itap of a {}.',
179
- # 'itap of my {}.', # itap: I took a picture of
180
- # 'itap of the {}.',
181
- # 'a photo of a {}.',
182
- # 'a photo of my {}.',
183
- # 'a photo of the {}.',
184
- # 'a photo of one {}.',
185
- # 'a photo of many {}.',
186
-
187
- # 'a good photo of a {}.',
188
- # 'a good photo of the {}.',
189
- # 'a bad photo of a {}.',
190
- # 'a bad photo of the {}.',
191
- # 'a photo of a nice {}.',
192
- # 'a photo of the nice {}.',
193
- # 'a photo of a cool {}.',
194
- # 'a photo of the cool {}.',
195
- # 'a photo of a weird {}.',
196
- # 'a photo of the weird {}.',
197
-
198
- # 'a photo of a small {}.',
199
- # 'a photo of the small {}.',
200
- # 'a photo of a large {}.',
201
- # 'a photo of the large {}.',
202
-
203
- # 'a photo of a clean {}.',
204
- # 'a photo of the clean {}.',
205
- # 'a photo of a dirty {}.',
206
- # 'a photo of the dirty {}.',
207
-
208
- # 'a bright photo of a {}.',
209
- # 'a bright photo of the {}.',
210
- # 'a dark photo of a {}.',
211
- # 'a dark photo of the {}.',
212
-
213
- # 'a photo of a hard to see {}.',
214
- # 'a photo of the hard to see {}.',
215
- # 'a low resolution photo of a {}.',
216
- # 'a low resolution photo of the {}.',
217
- # 'a cropped photo of a {}.',
218
- # 'a cropped photo of the {}.',
219
- # 'a close-up photo of a {}.',
220
- # 'a close-up photo of the {}.',
221
- # 'a jpeg corrupted photo of a {}.',
222
- # 'a jpeg corrupted photo of the {}.',
223
- # 'a blurry photo of a {}.',
224
- # 'a blurry photo of the {}.',
225
- # 'a pixelated photo of a {}.',
226
- # 'a pixelated photo of the {}.',
227
-
228
- # 'a black and white photo of the {}.',
229
- # 'a black and white photo of a {}.',
230
-
231
- # 'a plastic {}.',
232
- # 'the plastic {}.',
233
-
234
- # 'a toy {}.',
235
- # 'the toy {}.',
236
- # 'a plushie {}.',
237
- # 'the plushie {}.',
238
- # 'a cartoon {}.',
239
- # 'the cartoon {}.',
240
-
241
- # 'an embroidered {}.',
242
- # 'the embroidered {}.',
243
-
244
- # 'a painting of the {}.',
245
- # 'a painting of a {}.',
246
- # ]
247
-
248
- prompt_templates = [
249
- '{}.',
250
- 'a photo of a {}.',
251
- 'a bad photo of a {}.',
252
- 'a photo of many {}.',
253
- 'a sculpture of a {}.',
254
- 'a photo of the hard to see {}.',
255
- 'a low resolution photo of the {}.',
256
- 'a rendering of a {}.',
257
- 'graffiti of a {}.',
258
- 'a bad photo of the {}.',
259
- 'a cropped photo of the {}.',
260
- 'a tattoo of a {}.',
261
- 'the embroidered {}.',
262
- 'a photo of a hard to see {}.',
263
- 'a bright photo of a {}.',
264
- 'a photo of a clean {}.',
265
- 'a photo of a dirty {}.',
266
- 'a dark photo of the {}.',
267
- 'a drawing of a {}.',
268
- 'a photo of my {}.',
269
- 'the plastic {}.',
270
- 'a photo of the cool {}.',
271
- 'a close-up photo of a {}.',
272
- 'a black and white photo of the {}.',
273
- 'a painting of the {}.',
274
- 'a painting of a {}.',
275
- 'a pixelated photo of the {}.',
276
- 'a sculpture of the {}.',
277
- 'a bright photo of the {}.',
278
- 'a cropped photo of a {}.',
279
- 'a plastic {}.',
280
- 'a photo of the dirty {}.',
281
- 'a jpeg corrupted photo of a {}.',
282
- 'a blurry photo of the {}.',
283
- 'a photo of the {}.',
284
- 'a good photo of the {}.',
285
- 'a rendering of the {}.',
286
- 'a {} in a video game.',
287
- 'a photo of one {}.',
288
- 'a doodle of a {}.',
289
- 'a close-up photo of the {}.',
290
- 'the origami {}.',
291
- 'the {} in a video game.',
292
- 'a sketch of a {}.',
293
- 'a doodle of the {}.',
294
- 'a origami {}.',
295
- 'a low resolution photo of a {}.',
296
- 'the toy {}.',
297
- 'a rendition of the {}.',
298
- 'a photo of the clean {}.',
299
- 'a photo of a large {}.',
300
- 'a rendition of a {}.',
301
- 'a photo of a nice {}.',
302
- 'a photo of a weird {}.',
303
- 'a blurry photo of a {}.',
304
- 'a cartoon {}.',
305
- 'art of a {}.',
306
- 'a sketch of the {}.',
307
- 'a embroidered {}.',
308
- 'a pixelated photo of a {}.',
309
- 'itap of the {}.',
310
- 'a jpeg corrupted photo of the {}.',
311
- 'a good photo of a {}.',
312
- 'a plushie {}.',
313
- 'a photo of the nice {}.',
314
- 'a photo of the small {}.',
315
- 'a photo of the weird {}.',
316
- 'the cartoon {}.',
317
- 'art of the {}.',
318
- 'a drawing of the {}.',
319
- 'a photo of the large {}.',
320
- 'a black and white photo of a {}.',
321
- 'the plushie {}.',
322
- 'a dark photo of a {}.',
323
- 'itap of a {}.',
324
- 'graffiti of the {}.',
325
- 'a toy {}.',
326
- 'itap of my {}.',
327
- 'a photo of a cool {}.',
328
- 'a photo of a small {}.',
329
- 'a tattoo of the {}.',
330
- ]
331
- return prompt_templates
332
-
333
- def prompt_engineering(classnames, template=""):
334
- return template.replace('{}', classnames.replace(',', '').replace('+', ' '))
335
-
336
- # clip_img_tsv.py
337
- def convert_example_to_features_bpe(text, tokenizer, sot_token, eot_token, context_length=77):
338
- """
339
- Convert a raw sample (pair of sentences as tokenized strings) into a proper training sample.
340
- :param tokenizer: Tokenizer
341
- :return: List, a list containing token id, padded by 0
342
- """
343
- assert isinstance(text, str)
344
- input_ids = [sot_token] + tokenizer.encode(text) + [eot_token]
345
- if len(input_ids) > context_length:
346
- input_ids = input_ids[:context_length]
347
- input_ids = np.array(input_ids)
348
-
349
- pad_input_ids = np.zeros(context_length)
350
- pad_input_ids[:input_ids.shape[0]] = input_ids
351
-
352
- return pad_input_ids
353
-
354
- def get_cls_names(filter_novel=False, coco=None, from_file=False):
355
- """ return a list of strings with each string as name of a class
356
- """
357
- # the names are stored in a txt file
358
- if from_file:
359
- # coco_det_cls = {COCO_80_ALL_CLS[key]: key for key in COCO_80_ALL_CLS}
360
- # # not found in nouns {'skis': 31, 'sports ball': 33, 'hot dog': 53, 'potted plant': 59, 'scissors': 77, 'hair drier': 79}
361
- # coco_det_cls['ski'] = 81
362
- # coco_det_cls['scissor'] = 82
363
- # with open('/home/v-yiwuzhong/projects/azureblobs/vyiwuzhong_phillytools/trained_models/concept_pool/COCO_Caption_nouns_4688.txt','w') as g:
364
- # with open(from_file, 'r') as f:
365
- # cnt = 0
366
- # for row in f:
367
- # if row.split(",")[0] not in coco_det_cls:
368
- # g.write(row)
369
- # cnt += 1
370
- # else:
371
- # coco_det_cls.pop(row.split(",")[0])
372
- names = []
373
- with open(from_file, 'r') as f:
374
- for row in f:
375
- names.append(row.split(",")[0])
376
- return names
377
- # classes' names
378
- if coco == 'target':
379
- return COCO_UNSEEN_CLS
380
- elif coco == 'base':
381
- return COCO_SEEN_CLS
382
- elif coco == 'all':
383
- return COCO_OVD_ALL_CLS
384
- elif coco == 'all_80':
385
- return [COCO_80_ALL_CLS[i+1] for i in range(80)]
386
- assert len(LVIS_V1_CATEGORIES) == 1203
387
- cat_ids = [k["id"] for k in LVIS_V1_CATEGORIES]
388
- assert min(cat_ids) == 1 and max(cat_ids) == len(
389
- cat_ids
390
- ), "Category ids are not in [1, #categories], as expected"
391
- # Ensure that the category list is sorted by id
392
- lvis_categories = sorted(LVIS_V1_CATEGORIES, key=lambda x: x["id"])
393
- if filter_novel:
394
- class_names = [cls_meta['name'] for cls_meta in lvis_categories if cls_meta['frequency'] != 'r']
395
- else:
396
- class_names = [cls_meta['name'] for cls_meta in lvis_categories]
397
-
398
- # remove or replace special symbols
399
- class_names = [cls_n.replace("_", " ") for cls_n in class_names]
400
- class_names = [cls_n.replace("(", "") for cls_n in class_names]
401
- class_names = [cls_n.replace(")", "") for cls_n in class_names]
402
- return class_names
403
-
404
- def pre_tokenize(class_names):
405
- """
406
- pre-tokenize class names
407
- :param class_names: List, a list of class names
408
- :param tokenizer: Tokenizer, SimpleTokenizer()
409
- :return: Tensor, containing all prompts for all classes, [#cls, #prompts, context_length]
410
- """
411
- # tokenizer
412
- tokenizer = SimpleTokenizer()
413
- sot_token = tokenizer.encoder["<|startoftext|>"]
414
- eot_token = tokenizer.encoder["<|endoftext|>"]
415
-
416
- # prompt engineering
417
- prompt_templates = get_prompt_templates()
418
- input_ids_all = []
419
- for k in range(len(class_names)):
420
- v = class_names[k]
421
- if isinstance(v, str):
422
- vs = [v]
423
- elif isinstance(v, list):
424
- vs = v
425
- t1s = []
426
- for v in vs:
427
- for pt in prompt_templates:
428
- t1s.append(prompt_engineering(v, template=pt))
429
- input_ids = []
430
- for t1 in t1s:
431
- this_input_ids = convert_example_to_features_bpe(t1, tokenizer, sot_token, eot_token)
432
- input_ids.append(torch.tensor(this_input_ids, dtype=torch.long))
433
-
434
- input_ids_all.append(torch.stack(input_ids, 0))
435
-
436
- input_ids_all_classes = torch.stack(input_ids_all, 0)
437
- return input_ids_all_classes
438
-
439
-
440
- if __name__ == "__main__":
441
- flatten_input_ids = pre_tokenize()
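For context, a hedged sketch of the entry points defined above. It assumes the module is importable as clip_prompt_utils and that bpe_simple_vocab_16e6.txt.gz sits next to it, as default_bpe() expects; pre_tokenize is exercised rather than the module-level tokenize, whose _tokenizer instance is commented out above.

# Assumed import name; in the repo this file lives under detectron2/data/datasets/.
from clip_prompt_utils import SimpleTokenizer, get_prompt_templates, pre_tokenize

tok = SimpleTokenizer()
ids = tok.encode("a photo of a traffic light.")
print(tok.decode(ids))            # decodes back to the cleaned, lower-cased text (modulo spacing)

templates = get_prompt_templates()
prompts = pre_tokenize(["person", "traffic light"])
print(prompts.shape)              # -> (num_classes, len(templates), 77), padded token ids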
 
spaces/CVPR/regionclip-demo/detectron2/data/transforms/transform.py DELETED
@@ -1,351 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
- # Copyright (c) Facebook, Inc. and its affiliates.
3
-
4
- """
5
- See "Data Augmentation" tutorial for an overview of the system:
6
- https://detectron2.readthedocs.io/tutorials/augmentation.html
7
- """
8
-
9
- import numpy as np
10
- import torch
11
- import torch.nn.functional as F
12
- from fvcore.transforms.transform import (
13
- CropTransform,
14
- HFlipTransform,
15
- NoOpTransform,
16
- Transform,
17
- TransformList,
18
- )
19
- from PIL import Image
20
-
21
- try:
22
- import cv2 # noqa
23
- except ImportError:
24
- # OpenCV is an optional dependency at the moment
25
- pass
26
-
27
- __all__ = [
28
- "ExtentTransform",
29
- "ResizeTransform",
30
- "RotationTransform",
31
- "ColorTransform",
32
- "PILColorTransform",
33
- ]
34
-
35
-
36
- class ExtentTransform(Transform):
37
- """
38
- Extracts a subregion from the source image and scales it to the output size.
39
-
40
- The fill color is used to map pixels from the source rect that fall outside
41
- the source image.
42
-
43
- See: https://pillow.readthedocs.io/en/latest/PIL.html#PIL.ImageTransform.ExtentTransform
44
- """
45
-
46
- def __init__(self, src_rect, output_size, interp=Image.LINEAR, fill=0):
47
- """
48
- Args:
49
- src_rect (x0, y0, x1, y1): src coordinates
50
- output_size (h, w): dst image size
51
- interp: PIL interpolation methods
52
- fill: Fill color used when src_rect extends outside image
53
- """
54
- super().__init__()
55
- self._set_attributes(locals())
56
-
57
- def apply_image(self, img, interp=None):
58
- h, w = self.output_size
59
- if len(img.shape) > 2 and img.shape[2] == 1:
60
- pil_image = Image.fromarray(img[:, :, 0], mode="L")
61
- else:
62
- pil_image = Image.fromarray(img)
63
- pil_image = pil_image.transform(
64
- size=(w, h),
65
- method=Image.EXTENT,
66
- data=self.src_rect,
67
- resample=interp if interp else self.interp,
68
- fill=self.fill,
69
- )
70
- ret = np.asarray(pil_image)
71
- if len(img.shape) > 2 and img.shape[2] == 1:
72
- ret = np.expand_dims(ret, -1)
73
- return ret
74
-
75
- def apply_coords(self, coords):
76
- # Transform image center from source coordinates into output coordinates
77
- # and then map the new origin to the corner of the output image.
78
- h, w = self.output_size
79
- x0, y0, x1, y1 = self.src_rect
80
- new_coords = coords.astype(np.float32)
81
- new_coords[:, 0] -= 0.5 * (x0 + x1)
82
- new_coords[:, 1] -= 0.5 * (y0 + y1)
83
- new_coords[:, 0] *= w / (x1 - x0)
84
- new_coords[:, 1] *= h / (y1 - y0)
85
- new_coords[:, 0] += 0.5 * w
86
- new_coords[:, 1] += 0.5 * h
87
- return new_coords
88
-
89
- def apply_segmentation(self, segmentation):
90
- segmentation = self.apply_image(segmentation, interp=Image.NEAREST)
91
- return segmentation
92
-
93
-
94
- class ResizeTransform(Transform):
95
- """
96
- Resize the image to a target size.
97
- """
98
-
99
- def __init__(self, h, w, new_h, new_w, interp=None):
100
- """
101
- Args:
102
- h, w (int): original image size
103
- new_h, new_w (int): new image size
104
- interp: PIL interpolation methods, defaults to bilinear.
105
- """
106
- # TODO decide on PIL vs opencv
107
- super().__init__()
108
- if interp is None:
109
- interp = Image.BILINEAR
110
- self._set_attributes(locals())
111
-
112
- def apply_image(self, img, interp=None):
113
- assert img.shape[:2] == (self.h, self.w)
114
- assert len(img.shape) <= 4
115
- interp_method = interp if interp is not None else self.interp
116
-
117
- if img.dtype == np.uint8:
118
- if len(img.shape) > 2 and img.shape[2] == 1:
119
- pil_image = Image.fromarray(img[:, :, 0], mode="L")
120
- else:
121
- pil_image = Image.fromarray(img)
122
- pil_image = pil_image.resize((self.new_w, self.new_h), interp_method)
123
- ret = np.asarray(pil_image)
124
- if len(img.shape) > 2 and img.shape[2] == 1:
125
- ret = np.expand_dims(ret, -1)
126
- else:
127
- # PIL only supports uint8
128
- if any(x < 0 for x in img.strides):
129
- img = np.ascontiguousarray(img)
130
- img = torch.from_numpy(img)
131
- shape = list(img.shape)
132
- shape_4d = shape[:2] + [1] * (4 - len(shape)) + shape[2:]
133
- img = img.view(shape_4d).permute(2, 3, 0, 1) # hw(c) -> nchw
134
- _PIL_RESIZE_TO_INTERPOLATE_MODE = {
135
- Image.NEAREST: "nearest",
136
- Image.BILINEAR: "bilinear",
137
- Image.BICUBIC: "bicubic",
138
- }
139
- mode = _PIL_RESIZE_TO_INTERPOLATE_MODE[interp_method]
140
- align_corners = None if mode == "nearest" else False
141
- img = F.interpolate(
142
- img, (self.new_h, self.new_w), mode=mode, align_corners=align_corners
143
- )
144
- shape[:2] = (self.new_h, self.new_w)
145
- ret = img.permute(2, 3, 0, 1).view(shape).numpy() # nchw -> hw(c)
146
-
147
- return ret
148
-
149
- def apply_coords(self, coords):
150
- coords[:, 0] = coords[:, 0] * (self.new_w * 1.0 / self.w)
151
- coords[:, 1] = coords[:, 1] * (self.new_h * 1.0 / self.h)
152
- return coords
153
-
154
- def apply_segmentation(self, segmentation):
155
- segmentation = self.apply_image(segmentation, interp=Image.NEAREST)
156
- return segmentation
157
-
158
- def inverse(self):
159
- return ResizeTransform(self.new_h, self.new_w, self.h, self.w, self.interp)
160
-
161
-
162
- class RotationTransform(Transform):
163
- """
164
- Rotate the image by the given number of degrees
165
- counterclockwise around its center.
166
- """
167
-
168
- def __init__(self, h, w, angle, expand=True, center=None, interp=None):
169
- """
170
- Args:
171
- h, w (int): original image size
172
- angle (float): degrees for rotation
173
- expand (bool): choose if the image should be resized to fit the whole
174
- rotated image (default), or simply cropped
175
- center (tuple (width, height)): coordinates of the rotation center
176
- if left as None, the center will be set to the center of each image
177
- center has no effect if expand=True because it only affects shifting
178
- interp: cv2 interpolation method, default cv2.INTER_LINEAR
179
- """
180
- super().__init__()
181
- image_center = np.array((w / 2, h / 2))
182
- if center is None:
183
- center = image_center
184
- if interp is None:
185
- interp = cv2.INTER_LINEAR
186
- abs_cos, abs_sin = (abs(np.cos(np.deg2rad(angle))), abs(np.sin(np.deg2rad(angle))))
187
- if expand:
188
- # find the new width and height bounds
189
- bound_w, bound_h = np.rint(
190
- [h * abs_sin + w * abs_cos, h * abs_cos + w * abs_sin]
191
- ).astype(int)
192
- else:
193
- bound_w, bound_h = w, h
194
-
195
- self._set_attributes(locals())
196
- self.rm_coords = self.create_rotation_matrix()
197
- # Needed because of this problem https://github.com/opencv/opencv/issues/11784
198
- self.rm_image = self.create_rotation_matrix(offset=-0.5)
199
-
200
- def apply_image(self, img, interp=None):
201
- """
202
- img should be a numpy array, formatted as Height * Width * Nchannels
203
- """
204
- if len(img) == 0 or self.angle % 360 == 0:
205
- return img
206
- assert img.shape[:2] == (self.h, self.w)
207
- interp = interp if interp is not None else self.interp
208
- return cv2.warpAffine(img, self.rm_image, (self.bound_w, self.bound_h), flags=interp)
209
-
210
- def apply_coords(self, coords):
211
- """
212
- coords should be a N * 2 array-like, containing N couples of (x, y) points
213
- """
214
- coords = np.asarray(coords, dtype=float)
215
- if len(coords) == 0 or self.angle % 360 == 0:
216
- return coords
217
- return cv2.transform(coords[:, np.newaxis, :], self.rm_coords)[:, 0, :]
218
-
219
- def apply_segmentation(self, segmentation):
220
- segmentation = self.apply_image(segmentation, interp=cv2.INTER_NEAREST)
221
- return segmentation
222
-
223
- def create_rotation_matrix(self, offset=0):
224
- center = (self.center[0] + offset, self.center[1] + offset)
225
- rm = cv2.getRotationMatrix2D(tuple(center), self.angle, 1)
226
- if self.expand:
227
- # Find the coordinates of the center of rotation in the new image
228
- # The only point for which we know the future coordinates is the center of the image
229
- rot_im_center = cv2.transform(self.image_center[None, None, :] + offset, rm)[0, 0, :]
230
- new_center = np.array([self.bound_w / 2, self.bound_h / 2]) + offset - rot_im_center
231
- # shift the rotation center to the new coordinates
232
- rm[:, 2] += new_center
233
- return rm
234
-
235
- def inverse(self):
236
- """
237
- The inverse is to rotate it back with expand, and crop to get the original shape.
238
- """
239
- if not self.expand: # Not possible to inverse if a part of the image is lost
240
- raise NotImplementedError()
241
- rotation = RotationTransform(
242
- self.bound_h, self.bound_w, -self.angle, True, None, self.interp
243
- )
244
- crop = CropTransform(
245
- (rotation.bound_w - self.w) // 2, (rotation.bound_h - self.h) // 2, self.w, self.h
246
- )
247
- return TransformList([rotation, crop])
248
-
249
-
250
- class ColorTransform(Transform):
251
- """
252
- Generic wrapper for any photometric transforms.
253
- These transformations should only affect the color space and
254
- not the coordinate space of the image (e.g. annotation
255
- coordinates such as bounding boxes should not be changed)
256
- """
257
-
258
- def __init__(self, op):
259
- """
260
- Args:
261
- op (Callable): operation to be applied to the image,
262
- which takes in an ndarray and returns an ndarray.
263
- """
264
- if not callable(op):
265
- raise ValueError("op parameter should be callable")
266
- super().__init__()
267
- self._set_attributes(locals())
268
-
269
- def apply_image(self, img):
270
- return self.op(img)
271
-
272
- def apply_coords(self, coords):
273
- return coords
274
-
275
- def inverse(self):
276
- return NoOpTransform()
277
-
278
- def apply_segmentation(self, segmentation):
279
- return segmentation
280
-
281
-
282
- class PILColorTransform(ColorTransform):
283
- """
284
- Generic wrapper for PIL Photometric image transforms,
285
- which affect the color space and not the coordinate
286
- space of the image
287
- """
288
-
289
- def __init__(self, op):
290
- """
291
- Args:
292
- op (Callable): operation to be applied to the image,
293
- which takes in a PIL Image and returns a transformed
294
- PIL Image.
295
- For reference on possible operations see:
296
- - https://pillow.readthedocs.io/en/stable/
297
- """
298
- if not callable(op):
299
- raise ValueError("op parameter should be callable")
300
- super().__init__(op)
301
-
302
- def apply_image(self, img):
303
- img = Image.fromarray(img)
304
- return np.asarray(super().apply_image(img))
305
-
306
-
307
- def HFlip_rotated_box(transform, rotated_boxes):
308
- """
309
- Apply the horizontal flip transform on rotated boxes.
310
-
311
- Args:
312
- rotated_boxes (ndarray): Nx5 floating point array of
313
- (x_center, y_center, width, height, angle_degrees) format
314
- in absolute coordinates.
315
- """
316
- # Transform x_center
317
- rotated_boxes[:, 0] = transform.width - rotated_boxes[:, 0]
318
- # Transform angle
319
- rotated_boxes[:, 4] = -rotated_boxes[:, 4]
320
- return rotated_boxes
321
-
322
-
323
- def Resize_rotated_box(transform, rotated_boxes):
324
- """
325
- Apply the resizing transform on rotated boxes. For details of how these (approximation)
326
- formulas are derived, please refer to :meth:`RotatedBoxes.scale`.
327
-
328
- Args:
329
- rotated_boxes (ndarray): Nx5 floating point array of
330
- (x_center, y_center, width, height, angle_degrees) format
331
- in absolute coordinates.
332
- """
333
- scale_factor_x = transform.new_w * 1.0 / transform.w
334
- scale_factor_y = transform.new_h * 1.0 / transform.h
335
- rotated_boxes[:, 0] *= scale_factor_x
336
- rotated_boxes[:, 1] *= scale_factor_y
337
- theta = rotated_boxes[:, 4] * np.pi / 180.0
338
- c = np.cos(theta)
339
- s = np.sin(theta)
340
- rotated_boxes[:, 2] *= np.sqrt(np.square(scale_factor_x * c) + np.square(scale_factor_y * s))
341
- rotated_boxes[:, 3] *= np.sqrt(np.square(scale_factor_x * s) + np.square(scale_factor_y * c))
342
- rotated_boxes[:, 4] = np.arctan2(scale_factor_x * s, scale_factor_y * c) * 180 / np.pi
343
-
344
- return rotated_boxes
345
-
346
-
347
- HFlipTransform.register_type("rotated_box", HFlip_rotated_box)
348
- ResizeTransform.register_type("rotated_box", Resize_rotated_box)
349
-
350
- # not necessary any more with latest fvcore
351
- NoOpTransform.register_type("rotated_box", lambda t, x: x)
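
For quick reference, a minimal usage sketch of the transforms defined in the file above; the image shape, point coordinates, and target sizes are made-up illustration values, and only the constructors and apply_* methods shown in the file are assumed:

import numpy as np

# Illustrative only: a dummy HxWxC image and two (x, y) points.
img = np.zeros((480, 640, 3), dtype=np.uint8)
coords = np.array([[100.0, 50.0], [320.0, 240.0]], dtype=np.float32)

resize = ResizeTransform(480, 640, 800, 1066)    # (h, w) -> (new_h, new_w)
rotate = RotationTransform(800, 1066, angle=15)  # operates on the resized image

out_img = rotate.apply_image(resize.apply_image(img))
out_coords = rotate.apply_coords(resize.apply_coords(coords))

# RotationTransform.inverse() returns a TransformList([rotation, crop]) that
# undoes the rotation (only possible because expand=True by default).
inverse = rotate.inverse()

Because apply_coords composes the same way as apply_image, boxes and keypoints stay aligned with the resized and rotated image.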
 
spaces/CVPR/regionclip-demo/detectron2/modeling/poolers.py DELETED
@@ -1,250 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates.
2
- import math
3
- from typing import List
4
- import torch
5
- from torch import nn
6
- from torchvision.ops import RoIPool
7
-
8
- from detectron2.layers import ROIAlign, ROIAlignRotated, cat, nonzero_tuple
9
- from detectron2.structures import Boxes
10
-
11
- """
12
- To export ROIPooler to torchscript, in this file, variables that should be annotated with
13
- `Union[List[Boxes], List[RotatedBoxes]]` are only annotated with `List[Boxes]`.
14
-
15
- TODO: Correct these annotations when torchscript support `Union`.
16
- https://github.com/pytorch/pytorch/issues/41412
17
- """
18
-
19
- __all__ = ["ROIPooler"]
20
-
21
-
22
- def assign_boxes_to_levels(
23
- box_lists: List[Boxes],
24
- min_level: int,
25
- max_level: int,
26
- canonical_box_size: int,
27
- canonical_level: int,
28
- ):
29
- """
30
- Map each box in `box_lists` to a feature map level index and return the assignment
31
- vector.
32
-
33
- Args:
34
- box_lists (list[Boxes] | list[RotatedBoxes]): A list of N Boxes or N RotatedBoxes,
35
- where N is the number of images in the batch.
36
- min_level (int): Smallest feature map level index. The input is considered index 0,
37
- the output of stage 1 is index 1, and so on.
38
- max_level (int): Largest feature map level index.
39
- canonical_box_size (int): A canonical box size in pixels (sqrt(box area)).
40
- canonical_level (int): The feature map level index on which a canonically-sized box
41
- should be placed.
42
-
43
- Returns:
44
- A tensor of length M, where M is the total number of boxes aggregated over all
45
- N batch images. The memory layout corresponds to the concatenation of boxes
46
- from all images. Each element is the feature map index, as an offset from
47
- `self.min_level`, for the corresponding box (so value i means the box is at
48
- `self.min_level + i`).
49
- """
50
- box_sizes = torch.sqrt(cat([boxes.area() for boxes in box_lists]))
51
- # Eqn.(1) in FPN paper
52
- level_assignments = torch.floor(
53
- canonical_level + torch.log2(box_sizes / canonical_box_size + 1e-8)
54
- )
55
- # clamp level to (min, max), in case the box size is too large or too small
56
- # for the available feature maps
57
- level_assignments = torch.clamp(level_assignments, min=min_level, max=max_level)
58
- return level_assignments.to(torch.int64) - min_level
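
As a quick numeric illustration of the level-assignment rule above (Eqn.(1) of the FPN paper), using the default canonical box size of 224 at level 4; the box sizes below are hypothetical:

import torch

canonical_box_size, canonical_level = 224, 4
box_sizes = torch.tensor([112.0, 224.0, 448.0])   # sqrt(box area) for three example boxes
levels = torch.floor(canonical_level + torch.log2(box_sizes / canonical_box_size + 1e-8))
# -> tensor([3., 4., 5.]): a half-sized box maps one level lower, a double-sized box one higher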
59
-
60
-
61
- def _fmt_box_list(box_tensor, batch_index: int):
62
- repeated_index = torch.full_like(
63
- box_tensor[:, :1], batch_index, dtype=box_tensor.dtype, device=box_tensor.device
64
- )
65
- return cat((repeated_index, box_tensor), dim=1)
66
-
67
-
68
- def convert_boxes_to_pooler_format(box_lists: List[Boxes]):
69
- """
70
- Convert all boxes in `box_lists` to the low-level format used by ROI pooling ops
71
- (see description under Returns).
72
-
73
- Args:
74
- box_lists (list[Boxes] | list[RotatedBoxes]):
75
- A list of N Boxes or N RotatedBoxes, where N is the number of images in the batch.
76
-
77
- Returns:
78
- When input is list[Boxes]:
79
- A tensor of shape (M, 5), where M is the total number of boxes aggregated over all
80
- N batch images.
81
- The 5 columns are (batch index, x0, y0, x1, y1), where batch index
82
- is the index in [0, N) identifying which batch image the box with corners at
83
- (x0, y0, x1, y1) comes from.
84
- When input is list[RotatedBoxes]:
85
- A tensor of shape (M, 6), where M is the total number of boxes aggregated over all
86
- N batch images.
87
- The 6 columns are (batch index, x_ctr, y_ctr, width, height, angle_degrees),
88
- where batch index is the index in [0, N) identifying which batch image the
89
- rotated box (x_ctr, y_ctr, width, height, angle_degrees) comes from.
90
- """
91
- pooler_fmt_boxes = cat(
92
- [_fmt_box_list(box_list.tensor, i) for i, box_list in enumerate(box_lists)], dim=0
93
- )
94
-
95
- return pooler_fmt_boxes
96
-
97
-
98
- class ROIPooler(nn.Module):
99
- """
100
- Region of interest feature map pooler that supports pooling from one or more
101
- feature maps.
102
- """
103
-
104
- def __init__(
105
- self,
106
- output_size,
107
- scales,
108
- sampling_ratio,
109
- pooler_type,
110
- canonical_box_size=224,
111
- canonical_level=4,
112
- ):
113
- """
114
- Args:
115
- output_size (int, tuple[int] or list[int]): output size of the pooled region,
116
- e.g., 14 x 14. If tuple or list is given, the length must be 2.
117
- scales (list[float]): The scale for each low-level pooling op relative to
118
- the input image. For a feature map with stride s relative to the input
119
- image, scale is defined as 1/s. The stride must be power of 2.
120
- When there are multiple scales, they must form a pyramid, i.e. they must be
121
- a monotonically decreasing geometric sequence with a factor of 1/2.
122
- sampling_ratio (int): The `sampling_ratio` parameter for the ROIAlign op.
123
- pooler_type (string): Name of the type of pooling operation that should be applied.
124
- For instance, "ROIPool" or "ROIAlignV2".
125
- canonical_box_size (int): A canonical box size in pixels (sqrt(box area)). The default
126
- is heuristically defined as 224 pixels in the FPN paper (based on ImageNet
127
- pre-training).
128
- canonical_level (int): The feature map level index from which a canonically-sized box
129
- should be placed. The default is defined as level 4 (stride=16) in the FPN paper,
130
- i.e., a box of size 224x224 will be placed on the feature with stride=16.
131
- The box placement for all boxes will be determined from their sizes w.r.t
132
- canonical_box_size. For example, a box whose area is 4x that of a canonical box
133
- should be used to pool features from feature level ``canonical_level+1``.
134
-
135
- Note that the actual input feature maps given to this module may not have
136
- sufficiently many levels for the input boxes. If the boxes are too large or too
137
- small for the input feature maps, the closest level will be used.
138
- """
139
- super().__init__()
140
-
141
- if isinstance(output_size, int):
142
- output_size = (output_size, output_size)
143
- assert len(output_size) == 2
144
- assert isinstance(output_size[0], int) and isinstance(output_size[1], int)
145
- self.output_size = output_size
146
-
147
- if pooler_type == "ROIAlign":
148
- self.level_poolers = nn.ModuleList(
149
- ROIAlign(
150
- output_size, spatial_scale=scale, sampling_ratio=sampling_ratio, aligned=False
151
- )
152
- for scale in scales
153
- )
154
- elif pooler_type == "ROIAlignV2":
155
- self.level_poolers = nn.ModuleList(
156
- ROIAlign(
157
- output_size, spatial_scale=scale, sampling_ratio=sampling_ratio, aligned=True
158
- )
159
- for scale in scales
160
- )
161
- elif pooler_type == "ROIPool":
162
- self.level_poolers = nn.ModuleList(
163
- RoIPool(output_size, spatial_scale=scale) for scale in scales
164
- )
165
- elif pooler_type == "ROIAlignRotated":
166
- self.level_poolers = nn.ModuleList(
167
- ROIAlignRotated(output_size, spatial_scale=scale, sampling_ratio=sampling_ratio)
168
- for scale in scales
169
- )
170
- else:
171
- raise ValueError("Unknown pooler type: {}".format(pooler_type))
172
-
173
- # Map scale (defined as 1 / stride) to its feature map level under the
174
- # assumption that stride is a power of 2.
175
- min_level = -(math.log2(scales[0]))
176
- max_level = -(math.log2(scales[-1]))
177
- assert math.isclose(min_level, int(min_level)) and math.isclose(
178
- max_level, int(max_level)
179
- ), "Featuremap stride is not power of 2!"
180
- self.min_level = int(min_level)
181
- self.max_level = int(max_level)
182
- assert (
183
- len(scales) == self.max_level - self.min_level + 1
184
- ), "[ROIPooler] Sizes of input featuremaps do not form a pyramid!"
185
- assert 0 <= self.min_level and self.min_level <= self.max_level
186
- self.canonical_level = canonical_level
187
- assert canonical_box_size > 0
188
- self.canonical_box_size = canonical_box_size
189
-
190
- def forward(self, x: List[torch.Tensor], box_lists: List[Boxes]):
191
- """
192
- Args:
193
- x (list[Tensor]): A list of feature maps of NCHW shape, with scales matching those
194
- used to construct this module.
195
- box_lists (list[Boxes] | list[RotatedBoxes]):
196
- A list of N Boxes or N RotatedBoxes, where N is the number of images in the batch.
197
- The box coordinates are defined on the original image and
198
- will be scaled by the `scales` argument of :class:`ROIPooler`.
199
-
200
- Returns:
201
- Tensor:
202
- A tensor of shape (M, C, output_size, output_size) where M is the total number of
203
- boxes aggregated over all N batch images and C is the number of channels in `x`.
204
- """
205
- num_level_assignments = len(self.level_poolers)
206
-
207
- assert isinstance(x, list) and isinstance(
208
- box_lists, list
209
- ), "Arguments to pooler must be lists"
210
- assert (
211
- len(x) == num_level_assignments
212
- ), "unequal value, num_level_assignments={}, but x is list of {} Tensors".format(
213
- num_level_assignments, len(x)
214
- )
215
-
216
- assert len(box_lists) == x[0].size(
217
- 0
218
- ), "unequal value, x[0] batch dim 0 is {}, but box_list has length {}".format(
219
- x[0].size(0), len(box_lists)
220
- )
221
- if len(box_lists) == 0:
222
- return torch.zeros(
223
- (0, x[0].shape[1]) + self.output_size, device=x[0].device, dtype=x[0].dtype
224
- )
225
-
226
- pooler_fmt_boxes = convert_boxes_to_pooler_format(box_lists)
227
-
228
- if num_level_assignments == 1:
229
- return self.level_poolers[0](x[0], pooler_fmt_boxes)
230
-
231
- level_assignments = assign_boxes_to_levels(
232
- box_lists, self.min_level, self.max_level, self.canonical_box_size, self.canonical_level
233
- )
234
-
235
- num_boxes = pooler_fmt_boxes.size(0)
236
- num_channels = x[0].shape[1]
237
- output_size = self.output_size[0]
238
-
239
- dtype, device = x[0].dtype, x[0].device
240
- output = torch.zeros(
241
- (num_boxes, num_channels, output_size, output_size), dtype=dtype, device=device
242
- )
243
-
244
- for level, pooler in enumerate(self.level_poolers):
245
- inds = nonzero_tuple(level_assignments == level)[0]
246
- pooler_fmt_boxes_level = pooler_fmt_boxes[inds]
247
- # Use index_put_ instead of advance indexing, to avoid pytorch/issues/49852
248
- output.index_put_((inds,), pooler(x[level], pooler_fmt_boxes_level))
249
-
250
- return output
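
A hedged usage sketch for ROIPooler as defined above; the feature-map shapes, channel count, and the single example box are invented for illustration, and only the constructor arguments and forward signature documented in the class are assumed:

import torch
from detectron2.structures import Boxes

pooler = ROIPooler(
    output_size=7,
    scales=(1 / 4, 1 / 8, 1 / 16, 1 / 32),  # strides 4..32 form the required power-of-2 pyramid
    sampling_ratio=0,
    pooler_type="ROIAlignV2",
)

# One image in the batch, 256-channel FPN features at four levels.
feats = [
    torch.randn(1, 256, 200, 256),
    torch.randn(1, 256, 100, 128),
    torch.randn(1, 256, 50, 64),
    torch.randn(1, 256, 25, 32),
]
boxes = [Boxes(torch.tensor([[10.0, 10.0, 200.0, 150.0]]))]  # (x0, y0, x1, y1) per image

roi_feats = pooler(feats, boxes)  # -> tensor of shape (1, 256, 7, 7)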
 
spaces/CVPR/unicl-zero-shot-img-recog/README.md DELETED
@@ -1,13 +0,0 @@
1
- ---
2
- title: Unicl Zero-Shot Image Recognition Demo
3
- emoji: 🏢
4
- colorFrom: red
5
- colorTo: purple
6
- sdk: gradio
7
- sdk_version: 3.0.13
8
- app_file: app.py
9
- pinned: false
10
- license: mit
11
- ---
12
-
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/datasets/transforms.py DELETED
@@ -1,311 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
- """
3
- Transforms and data augmentation for both image + bbox.
4
- """
5
- import os
6
- import random
7
-
8
- import PIL
9
- import torch
10
- import torchvision.transforms as T
11
- import torchvision.transforms.functional as F
12
-
13
- from groundingdino.util.box_ops import box_xyxy_to_cxcywh
14
- from groundingdino.util.misc import interpolate
15
-
16
-
17
- def crop(image, target, region):
18
- cropped_image = F.crop(image, *region)
19
-
20
- target = target.copy()
21
- i, j, h, w = region
22
-
23
- # should we do something wrt the original size?
24
- target["size"] = torch.tensor([h, w])
25
-
26
- fields = ["labels", "area", "iscrowd", "positive_map"]
27
-
28
- if "boxes" in target:
29
- boxes = target["boxes"]
30
- max_size = torch.as_tensor([w, h], dtype=torch.float32)
31
- cropped_boxes = boxes - torch.as_tensor([j, i, j, i])
32
- cropped_boxes = torch.min(cropped_boxes.reshape(-1, 2, 2), max_size)
33
- cropped_boxes = cropped_boxes.clamp(min=0)
34
- area = (cropped_boxes[:, 1, :] - cropped_boxes[:, 0, :]).prod(dim=1)
35
- target["boxes"] = cropped_boxes.reshape(-1, 4)
36
- target["area"] = area
37
- fields.append("boxes")
38
-
39
- if "masks" in target:
40
- # FIXME should we update the area here if there are no boxes?
41
- target["masks"] = target["masks"][:, i : i + h, j : j + w]
42
- fields.append("masks")
43
-
44
- # remove elements for which the boxes or masks have zero area
45
- if "boxes" in target or "masks" in target:
46
- # favor boxes selection when defining which elements to keep
47
- # this is compatible with previous implementation
48
- if "boxes" in target:
49
- cropped_boxes = target["boxes"].reshape(-1, 2, 2)
50
- keep = torch.all(cropped_boxes[:, 1, :] > cropped_boxes[:, 0, :], dim=1)
51
- else:
52
- keep = target["masks"].flatten(1).any(1)
53
-
54
- for field in fields:
55
- if field in target:
56
- target[field] = target[field][keep]
57
-
58
- if os.environ.get("IPDB_SHILONG_DEBUG", None) == "INFO":
59
- # for debug and visualization only.
60
- if "strings_positive" in target:
61
- target["strings_positive"] = [
62
- _i for _i, _j in zip(target["strings_positive"], keep) if _j
63
- ]
64
-
65
- return cropped_image, target
66
-
67
-
68
- def hflip(image, target):
69
- flipped_image = F.hflip(image)
70
-
71
- w, h = image.size
72
-
73
- target = target.copy()
74
- if "boxes" in target:
75
- boxes = target["boxes"]
76
- boxes = boxes[:, [2, 1, 0, 3]] * torch.as_tensor([-1, 1, -1, 1]) + torch.as_tensor(
77
- [w, 0, w, 0]
78
- )
79
- target["boxes"] = boxes
80
-
81
- if "masks" in target:
82
- target["masks"] = target["masks"].flip(-1)
83
-
84
- return flipped_image, target
85
-
86
-
87
- def resize(image, target, size, max_size=None):
88
- # size can be min_size (scalar) or (w, h) tuple
89
-
90
- def get_size_with_aspect_ratio(image_size, size, max_size=None):
91
- w, h = image_size
92
- if max_size is not None:
93
- min_original_size = float(min((w, h)))
94
- max_original_size = float(max((w, h)))
95
- if max_original_size / min_original_size * size > max_size:
96
- size = int(round(max_size * min_original_size / max_original_size))
97
-
98
- if (w <= h and w == size) or (h <= w and h == size):
99
- return (h, w)
100
-
101
- if w < h:
102
- ow = size
103
- oh = int(size * h / w)
104
- else:
105
- oh = size
106
- ow = int(size * w / h)
107
-
108
- return (oh, ow)
109
-
110
- def get_size(image_size, size, max_size=None):
111
- if isinstance(size, (list, tuple)):
112
- return size[::-1]
113
- else:
114
- return get_size_with_aspect_ratio(image_size, size, max_size)
115
-
116
- size = get_size(image.size, size, max_size)
117
- rescaled_image = F.resize(image, size)
118
-
119
- if target is None:
120
- return rescaled_image, None
121
-
122
- ratios = tuple(float(s) / float(s_orig) for s, s_orig in zip(rescaled_image.size, image.size))
123
- ratio_width, ratio_height = ratios
124
-
125
- target = target.copy()
126
- if "boxes" in target:
127
- boxes = target["boxes"]
128
- scaled_boxes = boxes * torch.as_tensor(
129
- [ratio_width, ratio_height, ratio_width, ratio_height]
130
- )
131
- target["boxes"] = scaled_boxes
132
-
133
- if "area" in target:
134
- area = target["area"]
135
- scaled_area = area * (ratio_width * ratio_height)
136
- target["area"] = scaled_area
137
-
138
- h, w = size
139
- target["size"] = torch.tensor([h, w])
140
-
141
- if "masks" in target:
142
- target["masks"] = (
143
- interpolate(target["masks"][:, None].float(), size, mode="nearest")[:, 0] > 0.5
144
- )
145
-
146
- return rescaled_image, target
147
-
148
-
149
- def pad(image, target, padding):
150
- # assumes that we only pad on the bottom right corners
151
- padded_image = F.pad(image, (0, 0, padding[0], padding[1]))
152
- if target is None:
153
- return padded_image, None
154
- target = target.copy()
155
- # should we do something wrt the original size?
156
- target["size"] = torch.tensor(padded_image.size[::-1])
157
- if "masks" in target:
158
- target["masks"] = torch.nn.functional.pad(target["masks"], (0, padding[0], 0, padding[1]))
159
- return padded_image, target
160
-
161
-
162
- class ResizeDebug(object):
163
- def __init__(self, size):
164
- self.size = size
165
-
166
- def __call__(self, img, target):
167
- return resize(img, target, self.size)
168
-
169
-
170
- class RandomCrop(object):
171
- def __init__(self, size):
172
- self.size = size
173
-
174
- def __call__(self, img, target):
175
- region = T.RandomCrop.get_params(img, self.size)
176
- return crop(img, target, region)
177
-
178
-
179
- class RandomSizeCrop(object):
180
- def __init__(self, min_size: int, max_size: int, respect_boxes: bool = False):
181
- # respect_boxes: True to keep all boxes
182
- # False to tolerate boxes being filtered by the crop
183
- self.min_size = min_size
184
- self.max_size = max_size
185
- self.respect_boxes = respect_boxes
186
-
187
- def __call__(self, img: PIL.Image.Image, target: dict):
188
- init_boxes = len(target["boxes"])
189
- max_patience = 10
190
- for i in range(max_patience):
191
- w = random.randint(self.min_size, min(img.width, self.max_size))
192
- h = random.randint(self.min_size, min(img.height, self.max_size))
193
- region = T.RandomCrop.get_params(img, [h, w])
194
- result_img, result_target = crop(img, target, region)
195
- if (
196
- not self.respect_boxes
197
- or len(result_target["boxes"]) == init_boxes
198
- or i == max_patience - 1
199
- ):
200
- return result_img, result_target
201
- return result_img, result_target
202
-
203
-
204
- class CenterCrop(object):
205
- def __init__(self, size):
206
- self.size = size
207
-
208
- def __call__(self, img, target):
209
- image_width, image_height = img.size
210
- crop_height, crop_width = self.size
211
- crop_top = int(round((image_height - crop_height) / 2.0))
212
- crop_left = int(round((image_width - crop_width) / 2.0))
213
- return crop(img, target, (crop_top, crop_left, crop_height, crop_width))
214
-
215
-
216
- class RandomHorizontalFlip(object):
217
- def __init__(self, p=0.5):
218
- self.p = p
219
-
220
- def __call__(self, img, target):
221
- if random.random() < self.p:
222
- return hflip(img, target)
223
- return img, target
224
-
225
-
226
- class RandomResize(object):
227
- def __init__(self, sizes, max_size=None):
228
- assert isinstance(sizes, (list, tuple))
229
- self.sizes = sizes
230
- self.max_size = max_size
231
-
232
- def __call__(self, img, target=None):
233
- size = random.choice(self.sizes)
234
- return resize(img, target, size, self.max_size)
235
-
236
-
237
- class RandomPad(object):
238
- def __init__(self, max_pad):
239
- self.max_pad = max_pad
240
-
241
- def __call__(self, img, target):
242
- pad_x = random.randint(0, self.max_pad)
243
- pad_y = random.randint(0, self.max_pad)
244
- return pad(img, target, (pad_x, pad_y))
245
-
246
-
247
- class RandomSelect(object):
248
- """
249
- Randomly selects between transforms1 and transforms2,
250
- with probability p for transforms1 and (1 - p) for transforms2
251
- """
252
-
253
- def __init__(self, transforms1, transforms2, p=0.5):
254
- self.transforms1 = transforms1
255
- self.transforms2 = transforms2
256
- self.p = p
257
-
258
- def __call__(self, img, target):
259
- if random.random() < self.p:
260
- return self.transforms1(img, target)
261
- return self.transforms2(img, target)
262
-
263
-
264
- class ToTensor(object):
265
- def __call__(self, img, target):
266
- return F.to_tensor(img), target
267
-
268
-
269
- class RandomErasing(object):
270
- def __init__(self, *args, **kwargs):
271
- self.eraser = T.RandomErasing(*args, **kwargs)
272
-
273
- def __call__(self, img, target):
274
- return self.eraser(img), target
275
-
276
-
277
- class Normalize(object):
278
- def __init__(self, mean, std):
279
- self.mean = mean
280
- self.std = std
281
-
282
- def __call__(self, image, target=None):
283
- image = F.normalize(image, mean=self.mean, std=self.std)
284
- if target is None:
285
- return image, None
286
- target = target.copy()
287
- h, w = image.shape[-2:]
288
- if "boxes" in target:
289
- boxes = target["boxes"]
290
- boxes = box_xyxy_to_cxcywh(boxes)
291
- boxes = boxes / torch.tensor([w, h, w, h], dtype=torch.float32)
292
- target["boxes"] = boxes
293
- return image, target
294
-
295
-
296
- class Compose(object):
297
- def __init__(self, transforms):
298
- self.transforms = transforms
299
-
300
- def __call__(self, image, target):
301
- for t in self.transforms:
302
- image, target = t(image, target)
303
- return image, target
304
-
305
- def __repr__(self):
306
- format_string = self.__class__.__name__ + "("
307
- for t in self.transforms:
308
- format_string += "\n"
309
- format_string += " {0}".format(t)
310
- format_string += "\n)"
311
- return format_string
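
A minimal sketch of composing the transforms defined above on a PIL image and target dict; the image size and box are illustrative, and the ImageNet mean/std values are the constants commonly paired with these transforms rather than something taken from this file:

import torch
from PIL import Image

transform = Compose([
    RandomHorizontalFlip(p=0.5),
    RandomResize([800], max_size=1333),
    ToTensor(),
    Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

image = Image.new("RGB", (640, 480))                              # dummy input image
target = {
    "boxes": torch.tensor([[50.0, 40.0, 300.0, 200.0]]),          # xyxy, absolute pixels
    "labels": torch.tensor([1]),
}

image_t, target_t = transform(image, target)
# image_t: normalized CxHxW tensor; target_t["boxes"]: cxcywh in relative coordinates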
 
spaces/CarlDennis/Lovelive-VITS-JPZH/transforms.py DELETED
@@ -1,193 +0,0 @@
1
- import torch
2
- from torch.nn import functional as F
3
-
4
- import numpy as np
5
-
6
-
7
- DEFAULT_MIN_BIN_WIDTH = 1e-3
8
- DEFAULT_MIN_BIN_HEIGHT = 1e-3
9
- DEFAULT_MIN_DERIVATIVE = 1e-3
10
-
11
-
12
- def piecewise_rational_quadratic_transform(inputs,
13
- unnormalized_widths,
14
- unnormalized_heights,
15
- unnormalized_derivatives,
16
- inverse=False,
17
- tails=None,
18
- tail_bound=1.,
19
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
20
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
21
- min_derivative=DEFAULT_MIN_DERIVATIVE):
22
-
23
- if tails is None:
24
- spline_fn = rational_quadratic_spline
25
- spline_kwargs = {}
26
- else:
27
- spline_fn = unconstrained_rational_quadratic_spline
28
- spline_kwargs = {
29
- 'tails': tails,
30
- 'tail_bound': tail_bound
31
- }
32
-
33
- outputs, logabsdet = spline_fn(
34
- inputs=inputs,
35
- unnormalized_widths=unnormalized_widths,
36
- unnormalized_heights=unnormalized_heights,
37
- unnormalized_derivatives=unnormalized_derivatives,
38
- inverse=inverse,
39
- min_bin_width=min_bin_width,
40
- min_bin_height=min_bin_height,
41
- min_derivative=min_derivative,
42
- **spline_kwargs
43
- )
44
- return outputs, logabsdet
45
-
46
-
47
- def searchsorted(bin_locations, inputs, eps=1e-6):
48
- bin_locations[..., -1] += eps
49
- return torch.sum(
50
- inputs[..., None] >= bin_locations,
51
- dim=-1
52
- ) - 1
53
-
54
-
55
- def unconstrained_rational_quadratic_spline(inputs,
56
- unnormalized_widths,
57
- unnormalized_heights,
58
- unnormalized_derivatives,
59
- inverse=False,
60
- tails='linear',
61
- tail_bound=1.,
62
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
63
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
64
- min_derivative=DEFAULT_MIN_DERIVATIVE):
65
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
66
- outside_interval_mask = ~inside_interval_mask
67
-
68
- outputs = torch.zeros_like(inputs)
69
- logabsdet = torch.zeros_like(inputs)
70
-
71
- if tails == 'linear':
72
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
73
- constant = np.log(np.exp(1 - min_derivative) - 1)
74
- unnormalized_derivatives[..., 0] = constant
75
- unnormalized_derivatives[..., -1] = constant
76
-
77
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
78
- logabsdet[outside_interval_mask] = 0
79
- else:
80
- raise RuntimeError('{} tails are not implemented.'.format(tails))
81
-
82
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
83
- inputs=inputs[inside_interval_mask],
84
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
85
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
86
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
87
- inverse=inverse,
88
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
89
- min_bin_width=min_bin_width,
90
- min_bin_height=min_bin_height,
91
- min_derivative=min_derivative
92
- )
93
-
94
- return outputs, logabsdet
95
-
96
- def rational_quadratic_spline(inputs,
97
- unnormalized_widths,
98
- unnormalized_heights,
99
- unnormalized_derivatives,
100
- inverse=False,
101
- left=0., right=1., bottom=0., top=1.,
102
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
103
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
104
- min_derivative=DEFAULT_MIN_DERIVATIVE):
105
- if torch.min(inputs) < left or torch.max(inputs) > right:
106
- raise ValueError('Input to a transform is not within its domain')
107
-
108
- num_bins = unnormalized_widths.shape[-1]
109
-
110
- if min_bin_width * num_bins > 1.0:
111
- raise ValueError('Minimal bin width too large for the number of bins')
112
- if min_bin_height * num_bins > 1.0:
113
- raise ValueError('Minimal bin height too large for the number of bins')
114
-
115
- widths = F.softmax(unnormalized_widths, dim=-1)
116
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
117
- cumwidths = torch.cumsum(widths, dim=-1)
118
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
119
- cumwidths = (right - left) * cumwidths + left
120
- cumwidths[..., 0] = left
121
- cumwidths[..., -1] = right
122
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
123
-
124
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
125
-
126
- heights = F.softmax(unnormalized_heights, dim=-1)
127
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
128
- cumheights = torch.cumsum(heights, dim=-1)
129
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
130
- cumheights = (top - bottom) * cumheights + bottom
131
- cumheights[..., 0] = bottom
132
- cumheights[..., -1] = top
133
- heights = cumheights[..., 1:] - cumheights[..., :-1]
134
-
135
- if inverse:
136
- bin_idx = searchsorted(cumheights, inputs)[..., None]
137
- else:
138
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
139
-
140
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
141
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
142
-
143
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
144
- delta = heights / widths
145
- input_delta = delta.gather(-1, bin_idx)[..., 0]
146
-
147
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
148
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
149
-
150
- input_heights = heights.gather(-1, bin_idx)[..., 0]
151
-
152
- if inverse:
153
- a = (((inputs - input_cumheights) * (input_derivatives
154
- + input_derivatives_plus_one
155
- - 2 * input_delta)
156
- + input_heights * (input_delta - input_derivatives)))
157
- b = (input_heights * input_derivatives
158
- - (inputs - input_cumheights) * (input_derivatives
159
- + input_derivatives_plus_one
160
- - 2 * input_delta))
161
- c = - input_delta * (inputs - input_cumheights)
162
-
163
- discriminant = b.pow(2) - 4 * a * c
164
- assert (discriminant >= 0).all()
165
-
166
- root = (2 * c) / (-b - torch.sqrt(discriminant))
167
- outputs = root * input_bin_widths + input_cumwidths
168
-
169
- theta_one_minus_theta = root * (1 - root)
170
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
171
- * theta_one_minus_theta)
172
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
173
- + 2 * input_delta * theta_one_minus_theta
174
- + input_derivatives * (1 - root).pow(2))
175
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
176
-
177
- return outputs, -logabsdet
178
- else:
179
- theta = (inputs - input_cumwidths) / input_bin_widths
180
- theta_one_minus_theta = theta * (1 - theta)
181
-
182
- numerator = input_heights * (input_delta * theta.pow(2)
183
- + input_derivatives * theta_one_minus_theta)
184
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
185
- * theta_one_minus_theta)
186
- outputs = input_cumheights + numerator / denominator
187
-
188
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
189
- + 2 * input_delta * theta_one_minus_theta
190
- + input_derivatives * (1 - theta).pow(2))
191
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
192
-
193
- return outputs, logabsdet
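
A small smoke test for the piecewise rational-quadratic spline above; the tensor shapes follow the usual VITS-style convention of num_bins widths/heights and num_bins - 1 derivatives when tails='linear', and all values are random:

import torch

num_bins = 10
shape = (2, 192, 100)                              # e.g. (batch, channels, time)

x = torch.rand(*shape) * 2 - 1                     # samples inside the [-1, 1] tail bound
w = torch.randn(*shape, num_bins)
h = torch.randn(*shape, num_bins)
d = torch.randn(*shape, num_bins - 1)              # padded internally when tails='linear'

y, logdet = piecewise_rational_quadratic_transform(
    x, w, h, d, inverse=False, tails='linear', tail_bound=1.0)
x_rec, _ = piecewise_rational_quadratic_transform(
    y, w, h, d, inverse=True, tails='linear', tail_bound=1.0)
assert torch.allclose(x, x_rec, atol=1e-3)         # the inverse pass approximately recovers the input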
 
spaces/CikeyQI/meme-api/Dockerfile DELETED
@@ -1,51 +0,0 @@
1
- FROM python:3.10 as tmp
2
-
3
- WORKDIR /tmp
4
-
5
- ENV PATH="${PATH}:/root/.local/bin"
6
-
7
- COPY ./pyproject.toml ./poetry.lock* /tmp/
8
- RUN pip install poetry \
9
- && poetry config virtualenvs.in-project true \
10
- && poetry install --only main --no-interaction --no-ansi
11
-
12
- FROM python:3.10-slim as app
13
-
14
- WORKDIR /app
15
-
16
- EXPOSE 7860
17
-
18
- VOLUME /data
19
-
20
- COPY --from=tmp /tmp/.venv /app/.venv
21
-
22
- COPY ./resources/fonts/* /usr/share/fonts/meme-fonts/
23
- RUN apt-get update \
24
- && apt-get install -y --no-install-recommends locales fontconfig fonts-noto-cjk fonts-noto-color-emoji gettext \
25
- && localedef -i zh_CN -c -f UTF-8 -A /usr/share/locale/locale.alias zh_CN.UTF-8 \
26
- && fc-cache -fv \
27
- && apt-get purge -y --auto-remove \
28
- && rm -rf /var/lib/apt/lists/*
29
-
30
- ENV TZ=Asia/Shanghai \
31
- LC_ALL=zh_CN.UTF-8 \
32
- PATH="/app/.venv/bin:${PATH}" \
33
- VIRTUAL_ENV="/app/.venv" \
34
- LOAD_BUILTIN_MEMES=true \
35
- MEME_DIRS="[\"/data/memes\"]" \
36
- MEME_DISABLED_LIST="[]" \
37
- GIF_MAX_SIZE=10.0 \
38
- GIF_MAX_FRAMES=100 \
39
- BAIDU_TRANS_APPID="" \
40
- BAIDU_TRANS_APIKEY="" \
41
- LOG_LEVEL="INFO"
42
-
43
- COPY ./meme_generator /app/meme_generator
44
-
45
- COPY ./docker/config.toml.template /app/config.toml.template
46
- COPY ./docker/start.sh /app/start.sh
47
- RUN mkdir -p /.config
48
- RUN chmod -R 777 /.config
49
- RUN chmod +x /app/start.sh
50
-
51
- CMD ["/app/start.sh"]