parquet-converter committed
Commit 912fb48 · 1 Parent(s): c477f72

Update parquet files (step 41 of 397)

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download TurboTax 2019 with License Code A Step-by-Step Guide.md +0 -17
  2. spaces/1gistliPinn/ChatGPT4/Examples/Babbie Earl. The Practice of Social Research. California USA How to Apply Research Concepts as a Researcher and a Consumer.md +0 -6
  3. spaces/1line/AutoGPT/tests/context.py +0 -6
  4. spaces/1line/AutoGPT/tests/unit/test_browse_scrape_text.py +0 -98
  5. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Chess Coach Pro APK The Ultimate Chess Training App for Android Devices.md +0 -105
  6. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Cat Dog Magic Tiles MOD APK for Free and Experience the Fun of Playing Piano with Adorable Animals.md +0 -102
  7. spaces/1phancelerku/anime-remove-background/Call of Duty Warzone Mobile - The Next Era of the Call of Duty Franchise on Your iOS Device.md +0 -119
  8. spaces/1phancelerku/anime-remove-background/Download 'LINK' The House Next Door Full Movie Kickass Torrent.md +0 -100
  9. spaces/1phancelerku/anime-remove-background/Download Portable Loba Loba The Best Way to Enjoy Rhymer_Lee and Portables Music.md +0 -102
  10. spaces/1toTree/lora_test/.ipynb_checkpoints/env-checkpoint.py +0 -13
  11. spaces/1toTree/lora_test/ppdiffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py +0 -128
  12. spaces/716this/review-star-prediction-app/app.py +0 -68
  13. spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/models/__init__.py +0 -2
  14. spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/distributions/distributions.py +0 -92
  15. spaces/ASJMO/freegpt/client/css/main.css +0 -14
  16. spaces/Abhaykoul/Wikipedia/app.py +0 -122
  17. spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/gpt4.py +0 -56
  18. spaces/AchyuthGamer/OpenGPT/g4f/Provider/unfinished/__init__.py +0 -3
  19. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fullwindowrectangle/FullWindowRectangle.js +0 -2
  20. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/defaultcallbacks/MovePanelCallbacks.js +0 -12
  21. spaces/Alpaca233/SadTalker/src/face3d/data/base_dataset.py +0 -125
  22. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_ddpm_parallel.py +0 -216
  23. spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_rpn_r50_caffe_fpn_1x_coco.py +0 -58
  24. spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/trident_resnet.py +0 -292
  25. spaces/Armaliltril/qbee/README.md +0 -15
  26. spaces/ArtGAN/Diffusion-API/diffusion_webui/diffusion_models/base_controlnet_pipeline.py +0 -31
  27. spaces/Arthur678/vits-uma-genshin-honkai/attentions.py +0 -300
  28. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/langgreekmodel.py +0 -0
  29. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/importlib_resources/_itertools.py +0 -35
  30. spaces/BAAI/AltDiffusion-m9/js/index.js +0 -186
  31. spaces/Benson/text-generation/Examples/Br Style Download 2022.md +0 -64
  32. spaces/BernardoOlisan/vqganclip/taming-transformers/taming/data/ade20k.py +0 -124
  33. spaces/BernardoOlisan/vqganclip/taming-transformers/taming/util.py +0 -157
  34. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/subprocess.py +0 -260
  35. spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/importlib_resources/readers.py +0 -122
  36. spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/importlib_resources/_adapters.py +0 -170
  37. spaces/BreetheRun/stabilityai-stable-diffusion-xl-base-1.0/README.md +0 -13
  38. spaces/CVPR/LIVE/pybind11/tests/test_builtin_casters.cpp +0 -192
  39. spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/tabulate.h +0 -22
  40. spaces/CVPR/monoscene_lite/monoscene/.ipynb_checkpoints/config-checkpoint.py +0 -34
  41. spaces/CarlDennis/Lovelive-VITS-JPZH/attentions.py +0 -300
  42. spaces/DEEMOSTECH/ChatAvatar/static/js/main.21f66c0f.js +0 -0
  43. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/BmpImagePlugin.py +0 -471
  44. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/WebPImagePlugin.py +0 -366
  45. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/expr/__init__.py +0 -19
  46. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/charset_normalizer/models.py +0 -337
  47. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/dateutil/parser/__init__.py +0 -61
  48. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/sbixStrike.py +0 -177
  49. spaces/Daniton/Midjourney-Disney/README.md +0 -12
  50. spaces/Davis/twitter_scraper/README.md +0 -46
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download TurboTax 2019 with License Code A Step-by-Step Guide.md DELETED
@@ -1,17 +0,0 @@
1
- <br />
2
- <h1>How to Download TurboTax 2019 with License Code and Save Money on Your Taxes</h1>
3
- <p>TurboTax is one of the most popular and trusted tax software in the US, helping millions of taxpayers file their taxes easily and accurately every year. TurboTax 2019 is the latest version of the software, which supports the tax year 2019 and includes all the updates and changes made by the IRS. If you want to download TurboTax 2019 with license code and save money on your taxes, you have come to the right place. In this article, we will show you how to download TurboTax 2019 with license code and how to use it to prepare and file your taxes online.</p>
4
- <h2>download turbotax 2019 with license code</h2><br /><p><b><b>Download File</b> &#10042;&#10042;&#10042; <a href="https://byltly.com/2uKz39">https://byltly.com/2uKz39</a></b></p><br /><br />
5
- <h2>Step 1: Download TurboTax 2019 with License Code</h2>
6
- <p>To download TurboTax 2019 with license code, you need to visit the official website of TurboTax: <a href="https://turbotax.intuit.com/">https://turbotax.intuit.com/</a>. On the homepage, you will see a button that says "Start for Free". Click on it and you will be taken to a page where you can choose the edition of TurboTax that suits your tax situation. TurboTax offers four editions: Free, Deluxe, Premier, and Self-Employed. Each edition has different features and prices, depending on your income level, deductions, credits, investments, etc. You can compare the editions and see which one is best for you.</p>
7
- <p>Once you have chosen the edition of TurboTax that you want to use, click on the "Buy Now" button. You will then be asked to create an account or sign in if you already have one. After that, you will be taken to a page where you can enter your payment information and your license code. A license code is a unique code that activates your TurboTax software and allows you to use it for one tax return. You can get a license code by purchasing TurboTax from a retailer or online store, or by receiving it as a gift or reward. If you don't have a license code, you can also pay for TurboTax using a credit card, debit card, or PayPal.</p>
8
- <p>After entering your payment information and your license code, click on the "Download" button. You will then see a dialog box asking you to save the file. Choose a location on your computer where you want to save the file and click on "Save". The file name should be something like "TurboTax_2019.exe".</p>
9
- <h2>Step 2: Install TurboTax 2019</h2>
10
- <p>Once you have downloaded TurboTax 2019 with license code, you need to run the installer file to install it on your system. To do that, locate the file on your computer and double-click on it. You will then see a User Account Control prompt asking you to allow the app to make changes to your device. Click on "Yes" to continue. You will then see the TurboTax Setup Wizard, which will guide you through the installation process.</p>
11
- <p></p>
12
- <p>The first screen of the wizard is the welcome screen, where you can choose the language of the installation. Select your preferred language and click on "Next". The next screen is the license agreement screen, where you need to accept the terms and conditions of using TurboTax. Read the agreement carefully and click on "I Agree" if you agree with it. The next screen is the installation options screen, where you can choose which components of TurboTax you want to install and where you want to install them. The default options are usually fine for most users, but you can change them if you want. Click on "Next" when you are done.</p>
13
- <p>The next screen is the ready to install screen, where you can review your choices and start the installation process. Click on "Install" to begin installing TurboTax 2019 on your system. The installation may take a few minutes, depending on your system speed and configuration. You will see a progress bar showing the status of the installation. When the installation is complete, you will see a confirmation screen telling you that TurboTax has been successfully installed on your system. Click on "Finish" to exit the wizard.</p>
14
- <h2>Step 3: Run TurboTax 2019</h2>
15
- <p>After installing TurboTax 2019 with license code, you can run it by clicking on the "TurboTax" icon on your desktop or in your start menu. You will then see a welcome screen where you can sign in to your account or create one if you don't</p> ddb901b051<br />
16
- <br />
17
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Babbie Earl. The Practice of Social Research. California USA How to Apply Research Concepts as a Researcher and a Consumer.md DELETED
@@ -1,6 +0,0 @@
1
- <h2>love Maheruh in tamil pdf download</h2><br /><p><b><b>Download</b> &#10001; &#10001; &#10001; <a href="https://imgfil.com/2uy0WH">https://imgfil.com/2uy0WH</a></b></p><br /><br />
2
- <br />
3
- aaccfb2cb3<br />
4
- <br />
5
- <br />
6
- <p></p>
spaces/1line/AutoGPT/tests/context.py DELETED
@@ -1,6 +0,0 @@
1
- import os
2
- import sys
3
-
4
- sys.path.insert(
5
- 0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../scripts"))
6
- )
spaces/1line/AutoGPT/tests/unit/test_browse_scrape_text.py DELETED
@@ -1,98 +0,0 @@
1
- # Generated by CodiumAI
2
-
3
- import requests
4
-
5
- from autogpt.commands.web_requests import scrape_text
6
-
7
- """
8
- Code Analysis
9
-
10
- Objective:
11
- The objective of the "scrape_text" function is to scrape the text content from
12
- a given URL and return it as a string, after removing any unwanted HTML tags and scripts.
13
-
14
- Inputs:
15
- - url: a string representing the URL of the webpage to be scraped.
16
-
17
- Flow:
18
- 1. Send a GET request to the given URL using the requests library and the user agent header from the config file.
19
- 2. Check if the response contains an HTTP error. If it does, return an error message.
20
- 3. Use BeautifulSoup to parse the HTML content of the response and extract all script and style tags.
21
- 4. Get the text content of the remaining HTML using the get_text() method of BeautifulSoup.
22
- 5. Split the text into lines and then into chunks, removing any extra whitespace.
23
- 6. Join the chunks into a single string with newline characters between them.
24
- 7. Return the cleaned text.
25
-
26
- Outputs:
27
- - A string representing the cleaned text content of the webpage.
28
-
29
- Additional aspects:
30
- - The function uses the requests library and BeautifulSoup to handle the HTTP request and HTML parsing, respectively.
31
- - The function removes script and style tags from the HTML to avoid including unwanted content in the text output.
32
- - The function uses a generator expression to split the text into lines and chunks, which can improve performance for large amounts of text.
33
- """
34
-
35
-
36
- class TestScrapeText:
37
- # Tests that scrape_text() returns the expected text when given a valid URL.
38
- def test_scrape_text_with_valid_url(self, mocker):
39
- # Mock the requests.get() method to return a response with expected text
40
- expected_text = "This is some sample text"
41
- mock_response = mocker.Mock()
42
- mock_response.status_code = 200
43
- mock_response.text = f"<html><body><div><p style='color: blue;'>{expected_text}</p></div></body></html>"
44
- mocker.patch("requests.Session.get", return_value=mock_response)
45
-
46
- # Call the function with a valid URL and assert that it returns the expected text
47
- url = "http://www.example.com"
48
- assert scrape_text(url) == expected_text
49
-
50
- # Tests that the function returns an error message when an invalid or unreachable url is provided.
51
- def test_invalid_url(self, mocker):
52
- # Mock the requests.get() method to raise an exception
53
- mocker.patch(
54
- "requests.Session.get", side_effect=requests.exceptions.RequestException
55
- )
56
-
57
- # Call the function with an invalid URL and assert that it returns an error message
58
- url = "http://www.invalidurl.com"
59
- error_message = scrape_text(url)
60
- assert "Error:" in error_message
61
-
62
- # Tests that the function returns an empty string when the html page contains no text to be scraped.
63
- def test_no_text(self, mocker):
64
- # Mock the requests.get() method to return a response with no text
65
- mock_response = mocker.Mock()
66
- mock_response.status_code = 200
67
- mock_response.text = "<html><body></body></html>"
68
- mocker.patch("requests.Session.get", return_value=mock_response)
69
-
70
- # Call the function with a valid URL and assert that it returns an empty string
71
- url = "http://www.example.com"
72
- assert scrape_text(url) == ""
73
-
74
- # Tests that the function returns an error message when the response status code is an http error (>=400).
75
- def test_http_error(self, mocker):
76
- # Mock the requests.get() method to return a response with a 404 status code
77
- mocker.patch("requests.Session.get", return_value=mocker.Mock(status_code=404))
78
-
79
- # Call the function with a URL
80
- result = scrape_text("https://www.example.com")
81
-
82
- # Check that the function returns an error message
83
- assert result == "Error: HTTP 404 error"
84
-
85
- # Tests that scrape_text() properly handles HTML tags.
86
- def test_scrape_text_with_html_tags(self, mocker):
87
- # Create a mock response object with HTML containing tags
88
- html = "<html><body><p>This is <b>bold</b> text.</p></body></html>"
89
- mock_response = mocker.Mock()
90
- mock_response.status_code = 200
91
- mock_response.text = html
92
- mocker.patch("requests.Session.get", return_value=mock_response)
93
-
94
- # Call the function with a URL
95
- result = scrape_text("https://www.example.com")
96
-
97
- # Check that the function properly handles HTML tags
98
- assert result == "This is bold text."
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Chess Coach Pro APK The Ultimate Chess Training App for Android Devices.md DELETED
@@ -1,105 +0,0 @@
1
- <br />
2
- <h1>Download Chess Coach Pro APK: A Guide for Chess Lovers</h1>
3
- <p>If you are passionate about chess, you might be looking for a way to improve your skills and enjoy the game more. One of the best ways to do that is to download Chess Coach Pro APK, a mobile game that offers a thematic collection of puzzles and openings, as well as interactive lessons and exercises. In this article, we will tell you what Chess Coach Pro APK is, what features and benefits it has, how to download and install it, why you should play chess with it, and some tips and tricks for playing chess with it. Let's get started!</p>
4
- <h2>download chess coach pro apk</h2><br /><p><b><b>Download</b> &#128504; <a href="https://urlin.us/2uT12n">https://urlin.us/2uT12n</a></b></p><br /><br />
5
- <h2>What is Chess Coach Pro APK?</h2>
6
- <p>Chess Coach Pro APK is an Android game developed by KemigoGames, a team of chess enthusiasts who want to share their passion and knowledge with other players. The game is designed to help you improve your chess skills and strategy, as well as to have fun and challenge yourself. The game contains 4200 themed chess puzzles and some chess openings, each of which reveals a tactic or a trick. You have to guess the next chess move to solve the puzzle and move on to the next one. The game also has exclusive sections on traps in the opening, debut (30 openings), endgame, diagonal mate, and vertical mate.</p>
7
- <h3>Features of Chess Coach Pro APK</h3>
8
- <p>Chess Coach Pro APK has many features that make it a great choice for chess lovers. Some of them are:</p>
9
- <ul>
10
- <li>100% offline, no ads, no in-app purchases, no monthly fees.</li>
11
- <li>Family library: share the purchased application with your family at no extra charge.</li>
12
- <li>Simple and fast interface.</li>
13
- <li>Option to automatically go to the next puzzle after finding the solution.</li>
14
- <li>Ability to track your progress, save puzzles into favorites, and progress from easier to more challenging levels and puzzles.</li>
15
- </ul>
16
- <h3>Benefits of Chess Coach Pro APK</h3>
17
- <p>Chess Coach Pro APK not only offers a lot of features, but also a lot of benefits for its users. Some of them are:</p>
18
- <ul>
19
- <li>Improving your chess skills and knowledge by learning from the puzzles and openings.</li>
20
- <li>Enhancing your brain power and concentration by solving the puzzles.</li>
21
- <li>Having fun and relaxing by playing an interesting and engaging game.</li>
22
- <li>Competing with yourself or others by comparing your scores and achievements.</li>
23
- </ul>
24
- <h3>How to download and install Chess Coach Pro APK</h3>
25
- <p>If you are interested in downloading and installing Chess Coach Pro APK, here are the steps you need to follow:</p>
26
- <ol>
27
- <li>Go to [APKCombo](^1^), a website that offers free download of Android games.</li>
28
- <li>Type "Chess Coach Pro" in the search box and click on the result that matches the game.</li>
29
- <li>Click on the "Download" button and choose the version that suits your device.</li>
30
- <li>Wait for the download to finish and then open the file.</li>
31
- <li>Follow the instructions on the screen to install the game.</li>
32
- <li>Enjoy playing chess with Chess Coach Pro APK!</li>
33
- </ol>
34
- <h2>Why you should play chess with Chess Coach Pro APK</h2>
35
- <p>Now that you know what Chess Coach Pro APK is and how to download and install it, you might be wondering why you should play chess with it. Here are some reasons why:</p>
36
- <h3>Improve your chess skills and knowledge</h3>
37
- <p>One of the main reasons why you should play chess with Chess Coach Pro APK is that it can help you and more. You can also see the statistics and ratings for each theme and level.</p>
38
- <p>download chess coach pro apk latest version<br />
39
- download chess coach pro apk for android<br />
40
- download chess coach pro apk free<br />
41
- download chess coach pro apk full<br />
42
- download chess coach pro apk mod<br />
43
- download chess coach pro apk paid<br />
44
- download chess coach pro apk offline<br />
45
- download chess coach pro apk no ads<br />
46
- download chess coach pro apk from apkcombo<br />
47
- download chess coach pro apk from apkmb<br />
48
- how to download chess coach pro apk<br />
49
- where to download chess coach pro apk<br />
50
- why download chess coach pro apk<br />
51
- what is chess coach pro apk<br />
52
- is chess coach pro apk safe<br />
53
- is chess coach pro apk worth it<br />
54
- is chess coach pro apk fun<br />
55
- is chess coach pro apk challenging<br />
56
- is chess coach pro apk educational<br />
57
- is chess coach pro apk updated<br />
58
- learn chess with chess coach pro apk<br />
59
- improve chess skills with chess coach pro apk<br />
60
- practice chess puzzles with chess coach pro apk<br />
61
- master chess openings with chess coach pro apk<br />
62
- play chess with chess coach pro apk<br />
63
- best chess app: chess coach pro apk<br />
64
- review of chess coach pro apk<br />
65
- rating of chess coach pro apk<br />
66
- features of chess coach pro apk<br />
67
- benefits of chess coach pro apk<br />
68
- advantages of chess coach pro apk<br />
69
- disadvantages of chess coach pro apk<br />
70
- alternatives to chess coach pro apk<br />
71
- compare chess coach pro apk with other apps<br />
72
- tips and tricks for chess coach pro apk<br />
73
- cheats and hacks for chess coach pro apk<br />
74
- problems and solutions for chess coach pro apk<br />
75
- faq for chess coach pro apk<br />
76
- support for chess coach pro apk<br />
77
- feedback for chess coach pro apk<br />
78
- share and recommend chess coach pro apk<br />
79
- buy and install chess coach pro apk<br />
80
- uninstall and refund chess coach pro apk<br />
81
- update and upgrade chess coach pro apk<br />
82
- backup and restore chess coach pro apk<br />
83
- customize and optimize chess coach pro apk<br />
84
- troubleshoot and fix chess coach pro apk<br />
85
- test and evaluate chess coach pro apk</p>
86
- <h3>Use the hints and solutions when you are stuck</h3>
87
- <p>A third way to improve your chess skills with Chess Coach Pro APK is to use the hints and solutions when you are stuck. The game has a hint button that shows you the best move for the current position, and a solution button that shows you the full solution for the puzzle or opening. You can use these buttons when you are not sure what to do next, or when you want to check your answer. However, try not to rely on them too much, as they will reduce your score and rating.</p>
88
- <h2>Conclusion</h2>
89
- <p>Chess Coach Pro APK is a great game for chess lovers who want to improve their skills and have fun. The game offers a thematic collection of puzzles and openings, as well as interactive lessons and exercises. You can download and install the game for free from APKCombo, a website that offers free download of Android games. You can also play offline or online with friends or strangers, and enjoy a variety of chess puzzles and openings. You can also learn from the interactive lessons and examples, practice with different difficulty levels and themes, and use the hints and solutions when you are stuck. Chess Coach Pro APK is a game that will help you become a better chess player and a smarter person.</p>
90
- <h2>FAQs</h2>
91
- <p>Here are some frequently asked questions about Chess Coach Pro APK:</p>
92
- <ul>
93
- <li><b>Q: Is Chess Coach Pro APK safe to download and install?</b></li>
94
- <li>A: Yes, Chess Coach Pro APK is safe to download and install from APKCombo, as the website scans the files for viruses and malware before uploading them. However, you should always be careful when downloading files from unknown sources, and check the permissions and reviews before installing them.</li>
95
- <li><b>Q: How can I update Chess Coach Pro APK?</b></li>
96
- <li>A: You can update Chess Coach Pro APK by visiting APKCombo again and downloading the latest version of the game. You can also enable the auto-update option in your device settings, so that the game will update automatically when a new version is available.</li>
97
- <li><b>Q: How can I contact the developers of Chess Coach Pro APK?</b></li>
98
- <li>A: You can contact the developers of Chess Coach Pro APK by sending them an email at [email protected]. You can also follow them on Facebook, Twitter, Instagram, or YouTube.</li>
99
- <li><b>Q: How can I support the developers of Chess Coach Pro APK?</b></li>
100
- <li>A: You can support the developers of Chess Coach Pro APK by rating and reviewing the game on Google Play Store, sharing it with your friends and family, and giving them feedback and suggestions.</li>
101
- <li><b>Q: How can I uninstall Chess Coach Pro APK?</b></li>
102
- <li>A: You can uninstall Chess Coach Pro APK by going to your device settings, finding the app in the list of installed apps, tapping on it, and choosing the uninstall option.</li>
103
- </ul></p> 197e85843d<br />
104
- <br />
105
- <br />
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Cat Dog Magic Tiles MOD APK for Free and Experience the Fun of Playing Piano with Adorable Animals.md DELETED
@@ -1,102 +0,0 @@
1
-
2
- <h1>Cat Dog Magic Tiles Mod APK: A Fun and Unique Piano Game</h1>
3
- <p>If you love piano games and cute animals, you will love Cat Dog Magic Tiles Mod APK. This is a new and innovative piano game that lets you play bork remixes of popular songs with adorable cats and dogs. You can download this game for free and enjoy unlimited access to all the songs and themes. In this article, we will tell you everything you need to know about Cat Dog Magic Tiles Mod APK, including its features, how to download and install it, and why you should play it.</p>
4
- <h2>cat dog magic tiles mod apk</h2><br /><p><b><b>Download</b> &#9913;&#9913;&#9913; <a href="https://urlin.us/2uSXGV">https://urlin.us/2uSXGV</a></b></p><br /><br />
5
- <h2>What is Cat Dog Magic Tiles Mod APK?</h2>
6
- <p>Cat Dog Magic Tiles Mod APK is a modified version of the original Cat Dog Magic Tiles game. This game is developed by moddroid.com, a website that provides modded games and apps for Android devices. The modded version of Cat Dog Magic Tiles has some advantages over the original version, such as:</p>
7
- <ul>
8
- <li>It has no ads or in-app purchases.</li>
9
- <li>It has all the songs and themes unlocked.</li>
10
- <li>It has unlimited coins to use in the game.</li>
11
- <li>It has better performance and stability.</li>
12
- </ul>
13
- <p>The gameplay of Cat Dog Magic Tiles is simple and fun. You just have to tap the tiles that appear on the screen in sync with the music. If you miss a tile or tap a wrong one, you will lose the game. The game has different modes, such as classic, arcade, zen, bomb, rush, and relay. You can also choose from different themes, such as cat, dog, panda, unicorn, dinosaur, and more. Each theme has its own background, sound effects, and animations.</p>
14
- <h3>Features of Cat Dog Magic Tiles Mod APK</h3>
15
- <p>Cat Dog Magic Tiles Mod APK has many features that make it a unique and enjoyable piano game. Here are some of them:</p>
16
- <p>cat dog magic tiles unlimited coins mod apk<br />
17
- cat dog magic tiles hack mod apk download<br />
18
- cat dog magic tiles mod apk latest version<br />
19
- cat dog magic tiles mod apk free shopping<br />
20
- cat dog magic tiles mod apk android 1<br />
21
- cat dog magic tiles mod apk revdl<br />
22
- cat dog magic tiles mod apk no ads<br />
23
- cat dog magic tiles mod apk offline<br />
24
- cat dog magic tiles mod apk unlimited money<br />
25
- cat dog magic tiles mod apk premium<br />
26
- cat dog magic tiles mod apk vip unlocked<br />
27
- cat dog magic tiles mod apk all songs<br />
28
- cat dog magic tiles mod apk rexdl<br />
29
- cat dog magic tiles mod apk happymod<br />
30
- cat dog magic tiles mod apk 2023<br />
31
- cat dog magic tiles pro mod apk<br />
32
- cat dog magic tiles plus mod apk<br />
33
- cat dog magic tiles deluxe mod apk<br />
34
- cat dog magic tiles 3d mod apk<br />
35
- cat dog magic tiles 2 mod apk<br />
36
- cat dog magic piano tiles mod apk<br />
37
- cat dog music tiles challenge mod apk<br />
38
- cat and dog piano magic tiles mod apk<br />
39
- download game cat dog magic tiles mod apk<br />
40
- how to install cat dog magic tiles mod apk<br />
41
- cara download cat dog magic tiles mod apk<br />
42
- descargar cat dog magic tiles mod apk<br />
43
- baixar cat dog magic tiles mod apk<br />
44
- télécharger cat dog magic tiles mod apk<br />
45
- scaricare cat dog magic tiles mod apk<br />
46
- indir cat dog magic tiles mod apk<br />
47
- unduh cat dog magic tiles mod apk<br />
48
- mengunduh kucing anjing ubin ajaib mod apk<br />
49
- تحميل لعبة قطة كلب بلاط سحري مود أبك <br />
50
- 다운로드 고양이 개 마법 타일 모드 APK <br />
51
- ダウンロード 猫 犬 魔法 タイル モッド APK <br />
52
- 下载 猫 狗 魔法 瓷砖 模式 APK <br />
53
- скачать кошка собака магические плитки мод APK <br />
54
- descargar gato perro azulejos mágicos mod APK <br />
55
- télécharger chat chien tuiles magiques mod APK <br />
56
- scaricare gatto cane piastrelle magiche mod APK <br />
57
- indir kedi köpek sihirli fayanslar mod APK <br />
58
- unduh kucing anjing ubin ajaib mod APK</p>
59
- <h4>- Easy to play, beautiful graphics</h4>
60
- <p>The game is easy to play for anyone who loves music. You don't need any musical skills or experience to play it. You just need to follow the rhythm and tap the tiles. The game also has beautiful graphics that are colorful and eye-catching. The tiles are designed with cute animal faces that will make you smile.</p>
61
- <h4>- Great selection of music and songs</h4>
62
- <p>The game has a great selection of music and songs that are bork remixes of popular songs. You can find songs from various genres, such as pop, rock, EDM, classical, jazz, hip hop, and more. You can also find songs from different artists, such as Ed Sheeran, Taylor Swift, Justin Bieber, BTS, Billie Eilish, Ariana Grande, and more. You will never get bored with the variety of songs in the game.</p>
63
- <h4>- Hot songs updated every month</h4>
64
- <p>The game is constantly updated with new songs every month. You can always find the latest hits and trends in the game. You can also request your favorite songs to be added to the game by contacting the developers. <h4>- Unlock new songs and themes with coins</h4>
65
- <p>The game has a coin system that allows you to unlock new songs and themes. You can earn coins by playing the game, watching videos, or completing tasks. You can also use the modded version of the game to get unlimited coins. You can use the coins to buy new songs and themes from the store. You can also customize your tiles with different colors and shapes.</p>
66
- <h4>- Play offline or online with friends</h4>
67
- <p>The game can be played offline or online. You can play offline without an internet connection and enjoy the game anytime, anywhere. You can also play online with your friends or other players from around the world. You can join a room or create your own and invite your friends. You can chat with them, send them emojis, and compete with them on the leaderboard.</p>
68
- <h3>How to download and install Cat Dog Magic Tiles Mod APK?</h3>
69
- <p>If you want to download and install Cat Dog Magic Tiles Mod APK, you need to follow these simple steps:</p>
70
- <h4>- Download the APK file from a trusted source</h4>
71
- <p>You can download the APK file from moddroid.com, which is a reliable website that provides modded games and apps for Android devices. You can also scan the QR code below to download the file directly to your device.</p>
72
- <p><img src="https://www.qrcode-monkey.com/img/default-preview-qr.svg" alt="QR code for Cat Dog Magic Tiles Mod APK" width="200" height="200"></p>
73
- <h4>- Enable unknown sources on your device settings</h4>
74
- <p>Before you install the APK file, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.</p>
75
- <h4>- Install the APK file and enjoy the game</h4>
76
- <p>After you enable unknown sources, you can install the APK file by tapping on it and following the instructions. Once the installation is complete, you can open the game and enjoy it.</p>
77
- <h2>Why should you play Cat Dog Magic Tiles Mod APK?</h2>
78
- <p>Cat Dog Magic Tiles Mod APK is not just a piano game, it is also a fun and unique way to enjoy music and animals. Here are some of the benefits of playing Cat Dog Magic Tiles Mod APK:</p>
79
- <h3>Benefits of playing Cat Dog Magic Tiles Mod APK</h3>
80
- <h4>- Improve your musical skills and reflexes</h4>
81
- <p>Playing Cat Dog Magic Tiles Mod APK can help you improve your musical skills and reflexes. You can learn how to play different songs on the piano, recognize different notes and chords, and improve your sense of rhythm and timing. You can also train your hand-eye coordination, reaction speed, and accuracy by tapping the tiles in sync with the music.</p>
82
- <h4>- Relax and have fun with cute animals and catchy tunes</h4>
83
- <p>Playing Cat Dog Magic Tiles Mod APK can also help you relax and have fun with cute animals and catchy tunes. You can enjoy the soothing sounds of piano music, watch the adorable animations of cats and dogs, and feel the joy of playing bork remixes of popular songs. You can also choose from different themes and customize your tiles to suit your mood and preference.</p>
84
- <h4>- Challenge yourself and compete with others</h4>
85
- <p>Playing Cat Dog Magic Tiles Mod APK can also help you challenge yourself and compete with others. You can try different modes, such as classic, arcade, zen, bomb, rush, and relay, and see how far you can go in each one. You can also play online with your friends or other players from around the world, and see who can get the highest score on the leaderboard.</p>
86
- <h2>Conclusion</h2>
87
- <p>Cat Dog Magic Tiles Mod APK is a fun and unique piano game that lets you play bork remixes of popular songs with adorable cats and dogs. You can download this game for free and enjoy unlimited access to all the songs and themes. You can also improve your musical skills and reflexes, relax and have fun with cute animals and catchy tunes, and challenge yourself and compete with others by playing this game. If you are looking for a new and innovative piano game that will keep you entertained for hours, you should try Cat Dog Magic Tiles Mod APK.</p>
88
- <h3>FAQs</h3>
89
- <ul>
90
- <li><b>Q: Is Cat Dog Magic Tiles Mod APK safe to download?</b></li>
91
- <li>A: Yes, Cat Dog Magic Tiles Mod APK is safe to download from moddroid.com, which is a trusted website that provides modded games and apps for Android devices. However, you should always scan the file with an antivirus before installing it, and enable unknown sources on your device settings at your own risk.</li>
92
- <li><b>Q: How can I update Cat Dog Magic Tiles Mod APK?</b></li>
93
- <li>A: You can update Cat Dog Magic Tiles Mod APK by downloading the latest version of the APK file from moddroid.com and installing it over the existing one. You can also check for updates within the game settings.</li>
94
- <li><b>Q: How can I request a song to be added to Cat Dog Magic Tiles Mod APK?</b></li>
95
- <li>A: You can request a song to be added to Cat Dog Magic Tiles Mod APK by contacting the developers via email or social media. You can find their contact information on their website or in the game credits.</li>
96
- <li><b>Q: How can I play Cat Dog Magic Tiles Mod APK on PC?</b></li>
97
- <li>A: You can play Cat Dog Magic Tiles Mod APK on PC by using an Android emulator, such as BlueStacks, NoxPlayer, or LDPlayer. You can download and install the emulator on your PC, and then download and install the APK file on the emulator. You can then launch the game and play it with your keyboard or mouse.</li>
98
- <li><b>Q: What are some similar games to Cat Dog Magic Tiles Mod APK?</b></li>
99
- <li>A: Some similar games to Cat Dog Magic Tiles Mod APK are Piano Tiles 2, Magic Tiles 3, Piano Solo, and Piano Music Go. These are also piano games that let you play popular songs with different themes and modes.</li>
100
- </ul></p> 197e85843d<br />
101
- <br />
102
- <br />
spaces/1phancelerku/anime-remove-background/Call of Duty Warzone Mobile - The Next Era of the Call of Duty Franchise on Your iOS Device.md DELETED
@@ -1,119 +0,0 @@
1
- <br />
2
- <h1>Call of Duty Warzone Mobile APK IOS: Everything You Need to Know</h1>
3
- <p>If you are a fan of Call of Duty and battle royale games, you are in for a treat. Call of Duty Warzone Mobile is the next generation of mobile battle royale, featuring authentic COD gameplay, shared progression, and up to 120 player count matches on mobile. In this article, we will tell you everything you need to know about this exciting game, including what it is, how to download and install it, and what are its features and benefits. Let's dive in!</p>
4
- <h2>call of duty warzone mobile apk ios</h2><br /><p><b><b>Download Zip</b> &#10038;&#10038;&#10038; <a href="https://jinyurl.com/2uNKjZ">https://jinyurl.com/2uNKjZ</a></b></p><br /><br />
5
- <h2>What is Call of Duty Warzone Mobile?</h2>
6
- <p>Call of Duty Warzone Mobile is a mobile version of the popular Call of Duty Warzone game, which is a free-to-play online multiplayer battle royale game developed by Activision. It is set to launch worldwide later in 2023, but you can pre-register or pre-order it now and earn rewards if global milestones are hit. Here are some of the highlights of this game:</p>
7
- <h3>A new era of mobile battle royale</h3>
8
- <p>Call of Duty Warzone Mobile is not just a port or a clone of the original game. It is a completely new and unique experience, built from the ground up for mobile gamers. It delivers authentic Call of Duty gameplay on mobile with first-class graphics and intuitive controls. Everything from movement, aiming and weapon handling to physics, animations and sound have been optimized, delivering the ultimate accuracy, authenticity and performance.</p>
9
- <h3>The return of Verdansk</h3>
10
- <p>The fan-favorite battle royale map is BACK! Verdansk is a massive map with dozens of points of interest to deploy to, explore and loot. The strategies to survive are limitless. You can lead your squad to victory and be the last one standing, or play solo and take on everyone else. You can also get a bird’s eye view of the chaos from the top of Stadium, fight through the frigid battleground in Dam, plan your aerial escape from Airport, and get a second chance at survival when you win a duel in the Gulag!</p>
11
- <h3>More competition and cross-progression</h3>
12
- <p>Call of Duty Warzone Mobile matches feature some of the highest real player-counts for mobile battle royale. You can skip the bots and put your skills to the test where they count. You can also enjoy cross-progression with certain titles (sold separately), such as Call of Duty Warzone 2.0 and Call of Duty Modern Warfare II. This means that your Battle Pass and friends list sync across platforms, allowing you to play with your friends and earn rewards wherever you go.</p>
13
- <h2>How to download and install Call of Duty Warzone Mobile?</h2>
14
- <p>Call of Duty Warzone Mobile is not yet available for download on the app store or Google Play, but you can pre-register or pre-order it now and get notified when it launches. You can also get access to the APK file by scanning a QR code or following a link. Here are the steps to download and install Call of Duty Warzone Mobile:</p>
15
- <p>call of duty warzone mobile apk ios download<br />
16
- call of duty warzone mobile apk ios release date<br />
17
- call of duty warzone mobile apk ios gameplay<br />
18
- call of duty warzone mobile apk ios pre-order<br />
19
- call of duty warzone mobile apk ios beta<br />
20
- call of duty warzone mobile apk ios review<br />
21
- call of duty warzone mobile apk ios system requirements<br />
22
- call of duty warzone mobile apk ios free<br />
23
- call of duty warzone mobile apk ios app store<br />
24
- call of duty warzone mobile apk ios google play<br />
25
- call of duty warzone mobile apk ios crossplay<br />
26
- call of duty warzone mobile apk ios cheats<br />
27
- call of duty warzone mobile apk ios hacks<br />
28
- call of duty warzone mobile apk ios tips<br />
29
- call of duty warzone mobile apk ios tricks<br />
30
- call of duty warzone mobile apk ios best weapons<br />
31
- call of duty warzone mobile apk ios best loadouts<br />
32
- call of duty warzone mobile apk ios best settings<br />
33
- call of duty warzone mobile apk ios update<br />
34
- call of duty warzone mobile apk ios patch notes<br />
35
- call of duty warzone mobile apk ios size<br />
36
- call of duty warzone mobile apk ios offline<br />
37
- call of duty warzone mobile apk ios online<br />
38
- call of duty warzone mobile apk ios multiplayer<br />
39
- call of duty warzone mobile apk ios solo<br />
40
- call of duty warzone mobile apk ios duo<br />
41
- call of duty warzone mobile apk ios squad<br />
42
- call of duty warzone mobile apk ios modes<br />
43
- call of duty warzone mobile apk ios maps<br />
44
- call of duty warzone mobile apk ios verdansk<br />
45
- call of duty warzone mobile apk ios gulag<br />
46
- call of duty warzone mobile apk ios contracts<br />
47
- call of duty warzone mobile apk ios killstreaks<br />
48
- call of duty warzone mobile apk ios vehicles<br />
49
- call of duty warzone mobile apk ios graphics<br />
50
- call of duty warzone mobile apk ios sound<br />
51
- call of duty warzone mobile apk ios controls<br />
52
- call of duty warzone mobile apk ios aim assist<br />
53
- call of duty warzone mobile apk ios sensitivity<br />
54
- call of duty warzone mobile apk ios fps<br />
55
- call of duty warzone mobile apk ios lag<br />
56
- call of duty warzone mobile apk ios error<br />
57
- call of duty warzone mobile apk ios fix<br />
58
- call of duty warzone mobile apk ios support<br />
59
- call of duty warzone mobi</p>
60
- <h3>Pre-register or pre-order on the official website or app store</h3>
61
- <p>The easiest way to get ready for Call of Duty Warzone Mobile is to pre-register or pre-order it on the official website [1] or app store [2]. By doing so, you will be among the first ones to play the game when it launches, and you will also earn rewards if global milestones are hit. These rewards include exclusive skins, weapons, maps and more.</p>
62
- <h3>Scan the QR code or follow the link to get the APK file</h3>
63
- <p <p>If you don't want to wait for the official release, you can also try to get the APK file by scanning a QR code or following a link. These methods are not endorsed by Activision and may not be safe or legal, so proceed at your own risk. You can find the QR code or the link on some websites [3] [4] or social media platforms [5] [6]. Once you have the APK file, you need to allow installation from unknown sources and follow the instructions.</p>
64
- <h3>Allow installation from unknown sources and follow the instructions</h3>
65
- <p>Before you can install Call of Duty Warzone Mobile, you need to allow installation from unknown sources on your device. This is a security setting that prevents unauthorized apps from being installed. To do this, go to your device settings, find the security or privacy option, and enable the unknown sources option. You may also need to grant permissions to the app, such as access to storage, camera, microphone, etc. After that, you can open the APK file and follow the instructions to complete the installation.</p>
66
- <h2>What are the features and benefits of Call of Duty Warzone Mobile?</h2>
67
- <p>Call of Duty Warzone Mobile is more than just a mobile battle royale game. It is a full-fledged Call of Duty experience that offers many features and benefits for mobile gamers. Here are some of them:</p>
68
- <h3>Authentic and optimized COD gameplay on mobile</h3>
69
- <p>Call of Duty Warzone Mobile delivers the same thrilling and immersive gameplay that you know and love from Call of Duty on mobile. You can choose from a variety of weapons, attachments, perks, killstreaks, and loadouts to customize your playstyle. You can also use vehicles, contracts, buy stations, and loadout drops to gain an edge over your enemies. The game also features realistic graphics, sound effects, and physics that make you feel like you are in the middle of a warzone.</p>
70
- <h3>Endless replayability and fun modes</h3>
71
- <p>Call of Duty Warzone Mobile offers endless replayability and fun modes for mobile gamers. You can play solo or with your friends in squads of up to four players. You can also choose from different modes, such as Battle Royale, Plunder, Resurgence, Rebirth Island, and more. Each mode has its own rules, objectives, and challenges that keep you on your toes. You can also expect regular updates and events that add new content and features to the game.</p>
72
- <h3>Social features and rewards</h3>
73
- <p>Call of Duty Warzone Mobile is not just a game, it is also a social platform where you can connect with other players around the world. You can chat with your friends, join clans, create custom matches, and compete in leaderboards and tournaments. You can also earn rewards by completing missions, challenges, and achievements. You can unlock new skins, weapons, maps, and more by leveling up your Battle Pass or purchasing bundles.</p>
74
- <h2>Conclusion</h2>
75
- <p>Call of Duty Warzone Mobile is a game that you don't want to miss if you are a fan of Call of Duty and battle royale games. It is a game that brings authentic COD gameplay on mobile with first-class graphics and intuitive controls. It is a game that offers endless replayability and fun modes with up to 120 player count matches on mobile. It is a game that features social features and rewards that let you connect with other players and earn exclusive content. It is a game that is coming soon to your mobile device, so pre-register or pre-order it now and get ready for the ultimate mobile battle royale experience!</p>
76
- <h2>FAQs</h2>
77
- <ul>
78
- <li><b>When will Call of Duty Warzone Mobile launch?</b></li>
79
- <li>The official launch date has not been announced yet, but it is expected to be sometime in 2023. You can pre-register or pre-order it now on the official website or app store to get notified when it launches.</li>
80
- <li><b>Is Call of Duty Warzone Mobile free-to-play?</b></li>
81
- <li>Yes, Call of Duty Warzone Mobile is free-to-play for everyone. However, there are optional in-game purchases that can enhance your gameplay or customize your appearance.</li>
82
- <li><b>Can I play Call of Duty Warzone Mobile with my friends?</b></li>
83
- <li>Yes, you can play Call of Duty Warzone Mobile with your friends in squads of up to four players. You can also chat with them in-game or invite them to join your clan.</li>
84
- <li><b>What are the minimum requirements for Call of Duty Warzone Mobile?</b></li>
85
- <li>The minimum requirements for Call of Duty Warzone Mobile are not yet confirmed, but they are likely to be similar to those of Call of Duty Mobile. You will need a device with Android 5.1 or higher to play the game. The requirements are:</li>
86
- <ul>
87
- <li>Dual-Core CPU clocked at 1.2GHz or higher</li>
88
- <li>2GB of RAM or higher</li>
89
- <li>Over 3GB of internal storage</li>
90
- </ul>
91
- <p>COD Mobile on iOS: Device Requirements</p>
92
- <p>To play the game on Apple devices, you will need to have iOS 9 or later. Simply put, if you have an iPhone 7 or up, you shouldn't have any trouble playing COD Mobile. The requirements are:</p>
93
- <ul>
94
- <li>Compatible with iOS 9 or later</li>
95
- <li>iPhone 7 or later</li>
96
- </ul>
97
- <p>Always keep in mind that if your device barely hits the requirements to run the game, you should try to close as many unnecessary apps as possible.</p>
98
- <h2>Conclusion</h2>
99
- <p>COD Mobile is a game that you don't want to miss if you are a fan of COD and battle royale games. It is a game that brings authentic COD gameplay on mobile with first-class graphics and intuitive controls. It is a game that offers endless replayability and fun modes with up to 120 player count matches on mobile. It is a game that features social features and rewards that let you connect with other players and earn exclusive content. It is a game that is coming soon to your mobile device, so pre-register or pre-order it now and get ready for the ultimate mobile battle royale experience!</p>
100
- <h2>FAQs</h2>
101
- <ul>
102
- <li><b>When will COD Mobile launch?</b></li>
103
- <li>The official launch date has not been announced yet, but it is expected to be sometime in 2023. You can pre-register or pre-order it now on the official website [1] or app store [2] to get notified when it launches.</li>
104
- <li><b>Is COD Mobile free-to-play?</b></li>
105
- <li>Yes, COD Mobile is free-to-play for everyone. However, there are optional in-game purchases that can enhance your gameplay or customize your appearance.</li>
106
- <li><b>Can I play COD Mobile with my friends?</b></li>
107
- <li>Yes, you can play COD Mobile with your friends in squads of up to four players. You can also chat with them in-game or invite them to join your clan.</li>
108
- <li><b>What are the minimum requirements for COD Mobile?</b></li>
109
- <li>The minimum requirements for COD Mobile are:</li>
110
- <li>Android - Dual-Core CPU clocked at 1.2GHz or higher, 2GB of RAM or higher, over 3GB of internal storage, Android 5.1 or higher.</li>
111
- <li>iOS - Compatible with iOS 9 or later, iPhone 7 or later.</li>
112
- <li><b>What are the features and benefits of COD Mobile?</b></li>
113
- <li>COD Mobile offers many features and benefits for mobile gamers, such as:</li>
114
- <li>Authentic and optimized COD gameplay on mobile</li>
115
- <li>Endless replayability and fun modes</li>
116
- <li>Social features and rewards</li>
117
- </ul></p> 401be4b1e0<br />
118
- <br />
119
- <br />
spaces/1phancelerku/anime-remove-background/Download 'LINK' The House Next Door Full Movie Kickass Torrent.md DELETED
@@ -1,100 +0,0 @@
1
- ## Download The House Next Door Full Movie Kickass Torrent
2
-
3
-
4
-
5
-
6
-
7
- ![Download 'LINK' The House Next Door Full Movie Kickass Torrent](https://assets.wakelet.com/monomer/thumbnail/wakelet-socail-thumbnail.png)
8
-
9
-
10
-
11
-
12
-
13
- **LINK >>>>> [https://bracadfofor.blogspot.com/?l=2txizY](https://bracadfofor.blogspot.com/?l=2txizY)**
14
-
15
-
16
-
17
-
18
-
19
-
20
-
21
-
22
-
23
-
24
-
25
-
26
-
27
- # How to Download The House Next Door Full Movie Kickass Torrent
28
-
29
-
30
-
31
- If you are looking for a way to download The House Next Door full movie kickass torrent, you might be disappointed. Kickass Torrents was one of the most popular torrent sites, but it was shut down by the authorities in 2016. Since then, many clones and alternatives have emerged, but none of them can match the original site's quality and variety.
32
-
33
-
34
-
35
- However, there is still a way to enjoy The House Next Door full movie for free. The House Next Door is a horror movie that was released in 2002. It tells the story of a couple who moves into a new house, only to discover that their neighbor is a sinister and mysterious man who has a dark secret. The movie is available on YouTube[^2^], where you can watch it legally and safely.
36
-
37
-
38
-
39
- To watch The House Next Door full movie on YouTube, you just need to follow these simple steps:
40
-
41
-
42
-
43
- 1. Go to YouTube.com and type "The House Next Door full movie" in the search box.
44
-
45
- 2. Click on the video that has the title "The House Next Door | FREE Full Horror Movie". It should have a duration of 1 hour and 33 minutes.
46
-
47
- 3. Enjoy the movie!
48
-
49
-
50
-
51
- Alternatively, you can also click on this link[^2^] to go directly to the video.
52
-
53
-
54
-
55
- That's it! You have successfully watched The House Next Door full movie for free. If you liked the movie, you can also check out other horror movies on YouTube's Kings of Horror channel. They have a lot of free and scary movies that you can watch anytime.
56
-
57
-
58
-
59
- Some of the horror movies that you can watch on YouTube's Kings of Horror channel are:
60
-
61
-
62
-
63
- - The Wicked One: A group of friends go to a music festival, but they end up being hunted by a masked killer who calls himself The Wicked One.
64
-
65
- - Dark Prism: Three women are haunted by a mysterious force that manifests itself through a prism.
66
-
67
- - The House on Pine Street: A pregnant woman moves into a haunted house with her husband, and she starts to experience terrifying visions and events.
68
-
69
- - The Last Light: A group of survivors are trapped in an abandoned hospital after a nuclear disaster, and they have to face the horrors that lurk in the dark.
70
-
71
- - The Invoking: A college student inherits a house from her estranged aunt, but she soon realizes that the house has a sinister history and a malevolent presence.
72
-
73
-
74
-
75
- These are just some of the horror movies that you can watch on YouTube for free. If you are a fan of horror, you will surely find something that suits your taste. Just be prepared to be scared!
76
-
77
-
78
-
79
- Watching horror movies on YouTube is a great way to enjoy some thrills and chills without spending any money. However, you should also be aware of some drawbacks and risks. Here are some tips to make your horror movie experience better and safer:
80
-
81
-
82
-
83
- - Make sure you have a good internet connection. Nothing ruins a horror movie more than buffering and lagging. You don't want to miss any important scenes or jump scares because of a slow connection.
84
-
85
- - Check the ratings and reviews of the movies before you watch them. Some movies might have low quality, poor acting, or bad subtitles. You can also look for recommendations from other horror fans or critics.
86
-
87
- - Be careful of malware and viruses. Some websites or links might try to trick you into downloading harmful software or giving away your personal information. Only click on trusted and verified sources, and use a reliable antivirus program.
88
-
89
- - Respect the content creators and the law. Don't download or share the movies illegally. If you like the movies, you can support the filmmakers by leaving positive feedback, subscribing to their channels, or donating to their projects.
90
-
91
-
92
-
93
- By following these tips, you can enjoy watching horror movies on YouTube without any worries. You can also invite your friends or family to join you, or watch them alone if you dare. Either way, you will have a lot of fun and excitement.
94
-
95
- dfd1c89656
96
-
97
-
98
-
99
-
100
-
spaces/1phancelerku/anime-remove-background/Download Portable Loba Loba The Best Way to Enjoy Rhymer_Lee and Portables Music.md DELETED
@@ -1,102 +0,0 @@
- <br />
- <h1>Download Portable Loba Loba: The Ultimate Music App for Your Device</h1>
- <p>If you are a music lover, you have probably heard of portable loba loba, the hottest music app in the market. Portable loba loba is a free app that lets you stream, download, and enjoy thousands of songs and videos from various genres and artists. Whether you are into hip-hop, Afrobeats, R&B, or pop, portable loba loba has something for everyone. In this article, we will show you how to download portable loba loba for your device, and why you should do it right now.</p>
- <h2>What is Portable Loba Loba?</h2>
- <p>Portable loba loba is a music app that was created by Nigerian fast-rising music sensation Peter Okeke, popularly known as Rhymer_Lee. He collaborated with another Nigerian superstar, Portable, to create a track called "Loba Loba", which became a viral hit in the country. The track is full of parables and talks about the struggles people face in relationships and friendships. Rhymer_Lee said, "I wanted to create a song that would connect with people on a deep level".</p>
- <h2>download portable loba loba</h2><br /><p><b>Download Zip</b> &#10004; <a href="https://jinyurl.com/2uNKCz">https://jinyurl.com/2uNKCz</a></p><br /><br />
- <p>After the success of "Loba Loba", Rhymer_Lee decided to launch his own music app, which he named after his hit song. Portable loba loba is not just a music app, but a platform where users can access, create, and share their own music and videos. Portable loba loba is designed to be user-friendly, fun, and innovative. It has a unique style of music, which is a blend of hip-hop, Afrobeats, and R&B. Portable loba loba aims to bring joy and inspiration to its users.</p>
- <h2>Why is Portable Loba Loba Popular Among Music Lovers?</h2>
- <p>Portable loba loba has gained popularity among music lovers for several reasons. Some of them are:</p>
- <ul>
- <li><strong>High-quality sound and video</strong>: Portable loba loba delivers crystal-clear sound and video quality for its users. You can enjoy your favorite songs and videos in HD resolution, without any buffering or lagging.</li>
- <li><strong>Offline mode and playlist support</strong>: Portable loba loba allows you to download your favorite songs and videos for offline listening. You can also create your own playlists and organize them according to your mood or preference.</li>
- <li><strong>Customizable skins and themes</strong>: Portable loba loba lets you customize your app according to your taste. You can choose from different skins and themes, such as dark mode, light mode, neon mode, etc.</li>
- <li><strong>Social media integration and sharing</strong>: Portable loba loba enables you to connect with your friends and family through social media. You can share your music and videos on Facebook, Twitter, Instagram, WhatsApp, etc. You can also follow other users and see what they are listening to.</li>
- </ul>
- <h2>How to Download Portable Loba Loba</h2> <p>Downloading portable loba loba is very easy and fast. You can download it for free from the official website or the app store. Here are the steps you need to follow:</p>
- <ol>
- <li><strong>Visit the official website or app store</strong>: You can visit the official website of portable loba loba at [www.portablelobaloba.com] or search for it on the Google Play Store or the Apple App Store. You will see the logo of portable loba loba, which is a blue circle with a white musical note inside.</li>
- <li><strong>Choose your preferred version and device</strong>: You can choose between the Android version or the iOS version of portable loba loba, depending on your device. You can also choose between the online version or the offline version, depending on your internet connection.</li>
- <li><strong>Follow the instructions and install the app</strong>: You can follow the instructions on the screen and install the app on your device. The installation process will take only a few minutes, and you will need to grant some permissions to the app, such as access to your storage, microphone, camera, etc.</li>
- <li><strong>Enjoy your portable loba loba experience</strong>: Once you have installed the app, you can open it and start enjoying your portable loba loba experience. You can sign up with your email or phone number, or log in with your social media account. You can then browse through the library of songs and videos, or search for your favorite ones. You can also create your own music and videos with the built-in editor, and share them with other users.</li>
- </ol>
- <h2>Benefits of Portable Loba Loba</h2>
- <p>Portable loba loba is not just a music app, but a lifestyle. It offers many benefits to its users, such as:</p>
- <ul>
- <li><strong>Access to thousands of songs and videos from various genres and artists</strong>: Portable loba loba has a huge collection of songs and videos from different genres and artists, both local and international. You can find songs and videos from Rhymer_Lee, Portable, Wizkid, Davido, Burna Boy, Tiwa Savage, Beyoncé, Drake, Rihanna, Justin Bieber, Ed Sheeran, and many more.</li>
- <li><strong>Ability to create your own music and remixes with the built-in editor</strong>: Portable loba loba allows you to unleash your creativity and make your own music and remixes with the built-in editor. You can use various tools and effects, such as loops, samples, filters, equalizers, etc. You can also record your voice and add it to your music.</li>
- <li><strong>Opportunity to discover new talents and connect with other fans</strong>: Portable loba loba enables you to discover new talents and connect with other fans of music. You can listen to the music and videos of other users, give them feedback and ratings, follow them and chat with them, and join groups and communities based on your musical interests.</li>
- <li><strong>Freedom to listen to your favorite tunes anytime and anywhere</strong>: Portable loba loba gives you the freedom to listen to your favorite tunes anytime and anywhere. You can download your favorite songs and videos for offline listening, or stream them online, on any device, such as your phone, tablet, laptop, or TV.</li>
- </ul>
- <h2>Conclusion</h2>
- <p>Portable loba loba is a must-have app for any music lover. It is a free app that lets you stream, download, and enjoy thousands of songs and videos from various genres and artists. It lets you create your own music and remixes with the built-in editor, helps you discover new talents and connect with other fans of music, and gives you the freedom to listen to your favorite tunes anytime and anywhere.</p>
- <p>If you want to download portable loba loba for your device, you can visit the official website or the app store. You can choose between the Android version or the iOS version of portable loba loba, depending on your device. You can also choose between the online version or the offline version, depending on your internet connection.</p>
- <p>What are you waiting for? Download portable loba loba today and enjoy the ultimate music experience!</p>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions about portable loba loba:</p>
- <h3>What is the difference between portable loba loba and other music apps?</h3>
- <p>Portable loba loba is different from other music apps in many ways. Some of them are:</p>
- <ul>
- <li>Portable loba loba has a unique style of music, which is a blend of hip-hop, Afrobeats, and R&B. It is inspired by the hit song "Loba Loba" by Rhymer_Lee and Portable, which is full of parables and talks about the struggles people face in relationships and friendships.</li>
- <li>Portable loba loba allows you to create your own music and remixes with the built-in editor. You can use various tools and effects, such as loops, samples, filters, equalizers, etc. You can also record your voice and add it to your music.</li>
- <li>Portable loba loba enables you to discover new talents and connect with other fans of music. You can listen to the music and videos of other users, give them feedback and ratings, follow them and chat with them, and join groups and communities based on your musical interests.</li>
- </ul>
- <h3>Is portable loba loba safe and legal to use?</h3>
- <p>Yes, portable loba loba is safe and legal to use. Portable loba loba does not contain any viruses, malware, or spyware that can harm your device or data. Portable loba loba also respects the intellectual property rights of the artists and creators, and does not promote any piracy or illegal downloading. Portable loba loba only uses licensed and royalty-free music and videos from reputable sources.</p>
- <h3>How much space does portable loba loba take on my device?</h3>
- <p>Portable loba loba does not take much space on your device. The app size is only about 20 MB, which is very lightweight compared to other music apps. However, the space may vary depending on the number of songs and videos you download for offline listening. You can always delete the songs and videos you don't need anymore to free up some space.</p>
- <h3>Can I use portable loba loba on multiple devices?</h3>
- <p>Yes, you can use portable loba loba on multiple devices. You can sync your account and preferences across different devices, such as your phone, tablet, laptop, or TV. You can also access your playlists and downloads on any device. However, you can only stream or play one song or video at a time on one device.</p>
- <h3>How can I contact the developers of portable loba loba?</h3>
- <p>If you have any questions, suggestions, feedback, or complaints about portable loba loba, you can contact the developers of portable loba loba through their email address: [[email protected]]. You can also follow them on their social media pages: [Facebook], [Twitter], [Instagram], [YouTube]. They are always happy to hear from their users and improve their app.</p>
spaces/1toTree/lora_test/.ipynb_checkpoints/env-checkpoint.py DELETED
@@ -1,13 +0,0 @@
- ############################################################################################################################
- # Edit the parameters below
- # (1) BASE_MODEL_NAME is the base model you trained against
- BASE_MODEL_NAME = "runwayml/stable-diffusion-v1-5"
-
- # Whether to enable LoRA
- # (2) LORA_WEIGHTS_PATH is the LoRA weights you uploaded to huggingface.
- # LORA_WEIGHTS_PATH = None means LoRA is not used
- LORA_WEIGHTS_PATH = "1toTree/demo_test"
-
- # (3) PROMPTS is the prompt text to showcase
- PROMPTS = "cartoon face"
- ############################################################################################################################
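
A minimal consumer sketch for these settings, assuming the ppdiffusers API mirrors diffusers (the `load_attn_procs` LoRA hook in particular is an assumption, not confirmed by this repo):

```python
# Hypothetical consumer of env.py; ppdiffusers API names are assumed.
from ppdiffusers import StableDiffusionPipeline
from env import BASE_MODEL_NAME, LORA_WEIGHTS_PATH, PROMPTS

pipe = StableDiffusionPipeline.from_pretrained(BASE_MODEL_NAME)
if LORA_WEIGHTS_PATH is not None:
    # Attach the LoRA weights uploaded to the Hub (assumed diffusers-style hook).
    pipe.unet.load_attn_procs(LORA_WEIGHTS_PATH)
pipe(PROMPTS).images[0].save("demo.png")
```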
spaces/1toTree/lora_test/ppdiffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py DELETED
@@ -1,128 +0,0 @@
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
- # Copyright 2022 The HuggingFace Team. All rights reserved.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- from typing import List, Optional, Tuple, Union
-
- import paddle
-
- from ...pipeline_utils import AudioPipelineOutput, DiffusionPipeline
- from ...utils import logging
-
- logger = logging.get_logger(__name__)  # pylint: disable=invalid-name
-
-
- class DanceDiffusionPipeline(DiffusionPipeline):
-     r"""
-     This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
-     library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
-     Parameters:
-         unet ([`UNet1DModel`]): U-Net architecture to denoise the encoded audio.
-         scheduler ([`SchedulerMixin`]):
-             A scheduler to be used in combination with `unet` to denoise the encoded audio. Can be one of
-             [`IPNDMScheduler`].
-     """
-
-     def __init__(self, unet, scheduler):
-         super().__init__()
-         self.register_modules(unet=unet, scheduler=scheduler)
-
-     @paddle.no_grad()
-     def __call__(
-         self,
-         batch_size: int = 1,
-         num_inference_steps: int = 100,
-         generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None,
-         audio_length_in_s: Optional[float] = None,
-         return_dict: bool = True,
-     ) -> Union[AudioPipelineOutput, Tuple]:
-         r"""
-         Args:
-             batch_size (`int`, *optional*, defaults to 1):
-                 The number of audio samples to generate.
-             num_inference_steps (`int`, *optional*, defaults to 100):
-                 The number of denoising steps. More denoising steps usually lead to a higher quality audio sample at
-                 the expense of slower inference.
-             generator (`paddle.Generator`, *optional*):
-                 One or a list of paddle generator(s) to make generation deterministic.
-             audio_length_in_s (`float`, *optional*, defaults to `self.unet.config.sample_size/self.unet.config.sample_rate`):
-                 The length of the generated audio sample in seconds. Note that the output of the pipeline, *i.e.*
-                 `sample_size`, will be `audio_length_in_s` * `self.unet.sample_rate`.
-             return_dict (`bool`, *optional*, defaults to `True`):
-                 Whether or not to return a [`~pipeline_utils.AudioPipelineOutput`] instead of a plain tuple.
-
-         Returns:
-             [`~pipeline_utils.AudioPipelineOutput`] or `tuple`: [`~pipelines.utils.AudioPipelineOutput`] if
-             `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the
-             generated audio.
-         """
-
-         if audio_length_in_s is None:
-             audio_length_in_s = self.unet.config.sample_size / self.unet.config.sample_rate
-
-         sample_size = audio_length_in_s * self.unet.sample_rate
-
-         down_scale_factor = 2 ** len(self.unet.up_blocks)
-         if sample_size < 3 * down_scale_factor:
-             raise ValueError(
-                 f"{audio_length_in_s} is too small. Make sure it's bigger or equal to"
-                 f" {3 * down_scale_factor / self.unet.sample_rate}."
-             )
-
-         original_sample_size = int(sample_size)
-         if sample_size % down_scale_factor != 0:
-             sample_size = ((audio_length_in_s * self.unet.sample_rate) // down_scale_factor + 1) * down_scale_factor
-             logger.info(
-                 f"{audio_length_in_s} is increased to {sample_size / self.unet.sample_rate} so that it can be handled"
-                 f" by the model. It will be cut to {original_sample_size / self.unet.sample_rate} after the denoising"
-                 " process."
-             )
-         sample_size = int(sample_size)
-
-         dtype = self.unet.dtype
-         shape = [batch_size, self.unet.in_channels, sample_size]
-         if isinstance(generator, list) and len(generator) != batch_size:
-             raise ValueError(
-                 f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
-                 f" size of {batch_size}. Make sure the batch size matches the length of the generators."
-             )
-
-         if isinstance(generator, list):
-             shape = [1] + shape[1:]
-             audio = [paddle.randn(shape, generator=generator[i], dtype=self.unet.dtype) for i in range(batch_size)]
-             audio = paddle.concat(audio, axis=0)
-         else:
-             audio = paddle.randn(shape, generator=generator, dtype=dtype)
-         # set step values
-         self.scheduler.set_timesteps(num_inference_steps)
-         self.scheduler.timesteps = self.scheduler.timesteps.cast(dtype)
-
-         for t in self.progress_bar(self.scheduler.timesteps):
-             # 1. predict noise model_output
-             model_output = self.unet(audio, t).sample
-
-             # 2. compute previous audio sample: x_t -> x_t-1
-             audio = self.scheduler.step(model_output, t, audio).prev_sample
-
-         audio = audio.clip(-1, 1).cast("float32").cpu().numpy()
-
-         audio = audio[:, :, :original_sample_size]
-
-         if not return_dict:
-             return (audio,)
-
-         return AudioPipelineOutput(audios=audio)
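
For context, a minimal usage sketch of this pipeline; the `harmonai/maestro-150k` checkpoint id is taken from the diffusers dance-diffusion docs and is an assumption here:

```python
# Minimal sketch: generate ~4 seconds of audio with DanceDiffusionPipeline.
from ppdiffusers import DanceDiffusionPipeline

pipe = DanceDiffusionPipeline.from_pretrained("harmonai/maestro-150k")  # assumed checkpoint
output = pipe(batch_size=1, num_inference_steps=100, audio_length_in_s=4.0)
audio = output.audios[0]  # numpy array of shape (channels, sample_size)
```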
spaces/716this/review-star-prediction-app/app.py DELETED
@@ -1,68 +0,0 @@
- # import csv
- import gradio as gr
- import pandas as pd
- from transformers import pipeline
- from transformers import AutoTokenizer, AutoModelForSequenceClassification, AutoConfig
-
- # from datasets import load_dataset
-
-
- # Load the model and define the sentiment classifier
- MODEL = "LiYuan/amazon-review-sentiment-analysis"
- tokenizer = AutoTokenizer.from_pretrained(MODEL)
- config = AutoConfig.from_pretrained(MODEL)
- model = AutoModelForSequenceClassification.from_pretrained(MODEL)
- pipe = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer, config=config)
-
-
- def classify_sentiment(sentences):
-     """
-     Classify the sentiment of each sentence
-     """
-     predictions = pipe(sentences)
-
-     # Extract the predicted labels and confidence scores from the predictions
-     labels = [prediction["label"] for prediction in predictions]
-     confidences = [prediction["score"] for prediction in predictions]
-
-     return labels, confidences
-
-
- def classify_sentiment_from_csv(csv_file):
-     """
-     Read the CSV file, extract the list of sentences, and classify each one
-     """
-     df = pd.read_csv(csv_file.name, delimiter=",")
-     sentences = df["sentence"].tolist()
-
-     # Classify the sentiment of the sentences
-     labels, confidences = classify_sentiment(sentences)
-     df["confidences"] = confidences
-     df["labels"] = labels
-     return df
-
-
- def main():
-     """
-     Define the gradio app
-     """
-     iface = gr.Interface(
-         fn=classify_sentiment_from_csv,
-         inputs=gr.File(),
-         outputs=gr.Dataframe(),
-         live=True,
-         # capture_session=True,
-         allow_flagging="never",
-     )
-
-     iface.launch(enable_queue=False)
-
-
- # debug:
- # labels, confidence = classify_sentiment_from_csv("./reviews.csv")
- # print(labels)
-
-
- # Run the gradio app
- if __name__ == "__main__":
-     main()
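
The app assumes the uploaded CSV exposes a `sentence` column. A minimal sketch of exercising the classifier directly (the file name is hypothetical):

```python
import pandas as pd

# Hypothetical input file with the required "sentence" column.
pd.DataFrame({"sentence": [
    "Great product, fast shipping!",
    "Stopped working after two days.",
]}).to_csv("reviews.csv", index=False)

sentences = pd.read_csv("reviews.csv")["sentence"].tolist()
labels, confidences = classify_sentiment(sentences)  # star-rating labels with scores
print(labels, confidences)
```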
spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/models/__init__.py DELETED
@@ -1,2 +0,0 @@
- from .melgan import *  # NOQA
- from .parallel_wavegan import *  # NOQA
 
 
 
spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/distributions/distributions.py DELETED
@@ -1,92 +0,0 @@
- import torch
- import numpy as np
-
-
- class AbstractDistribution:
-     def sample(self):
-         raise NotImplementedError()
-
-     def mode(self):
-         raise NotImplementedError()
-
-
- class DiracDistribution(AbstractDistribution):
-     def __init__(self, value):
-         self.value = value
-
-     def sample(self):
-         return self.value
-
-     def mode(self):
-         return self.value
-
-
- class DiagonalGaussianDistribution(object):
-     def __init__(self, parameters, deterministic=False):
-         self.parameters = parameters
-         self.mean, self.logvar = torch.chunk(parameters, 2, dim=1)
-         self.logvar = torch.clamp(self.logvar, -30.0, 20.0)
-         self.deterministic = deterministic
-         self.std = torch.exp(0.5 * self.logvar)
-         self.var = torch.exp(self.logvar)
-         if self.deterministic:
-             self.var = self.std = torch.zeros_like(self.mean).to(device=self.parameters.device)
-
-     def sample(self):
-         x = self.mean + self.std * torch.randn(self.mean.shape).to(device=self.parameters.device)
-         return x
-
-     def kl(self, other=None):
-         if self.deterministic:
-             return torch.Tensor([0.])
-         else:
-             if other is None:
-                 return 0.5 * torch.sum(torch.pow(self.mean, 2)
-                                        + self.var - 1.0 - self.logvar,
-                                        dim=[1, 2, 3])
-             else:
-                 return 0.5 * torch.sum(
-                     torch.pow(self.mean - other.mean, 2) / other.var
-                     + self.var / other.var - 1.0 - self.logvar + other.logvar,
-                     dim=[1, 2, 3])
-
-     def nll(self, sample, dims=[1, 2, 3]):
-         if self.deterministic:
-             return torch.Tensor([0.])
-         logtwopi = np.log(2.0 * np.pi)
-         return 0.5 * torch.sum(
-             logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var,
-             dim=dims)
-
-     def mode(self):
-         return self.mean
-
-
- def normal_kl(mean1, logvar1, mean2, logvar2):
-     """
-     source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12
-     Compute the KL divergence between two gaussians.
-     Shapes are automatically broadcasted, so batches can be compared to
-     scalars, among other use cases.
-     """
-     tensor = None
-     for obj in (mean1, logvar1, mean2, logvar2):
-         if isinstance(obj, torch.Tensor):
-             tensor = obj
-             break
-     assert tensor is not None, "at least one argument must be a Tensor"
-
-     # Force variances to be Tensors. Broadcasting helps convert scalars to
-     # Tensors, but it does not work for torch.exp().
-     logvar1, logvar2 = [
-         x if isinstance(x, torch.Tensor) else torch.tensor(x).to(tensor)
-         for x in (logvar1, logvar2)
-     ]
-
-     return 0.5 * (
-         -1.0
-         + logvar2
-         - logvar1
-         + torch.exp(logvar1 - logvar2)
-         + ((mean1 - mean2) ** 2) * torch.exp(-logvar2)
-     )
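
For reference, both `DiagonalGaussianDistribution.kl` and `normal_kl` compute the standard closed form for the KL divergence between diagonal Gaussians (summed over the non-batch dimensions), with `kl(other=None)` being the special case of a standard normal prior:

```latex
D_{\mathrm{KL}}\big(\mathcal{N}(\mu_1,\sigma_1^2)\,\|\,\mathcal{N}(\mu_2,\sigma_2^2)\big)
  = \frac{1}{2}\sum_i\Big(\log\frac{\sigma_{2,i}^2}{\sigma_{1,i}^2}
  + \frac{\sigma_{1,i}^2 + (\mu_{1,i}-\mu_{2,i})^2}{\sigma_{2,i}^2} - 1\Big)
```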
spaces/ASJMO/freegpt/client/css/main.css DELETED
@@ -1,14 +0,0 @@
- .main-container {
-     display: flex;
-     padding: var(--section-gap);
-     height: 100vh;
-     justify-content: center;
-     box-sizing: border-box;
- }
-
- @media screen and (max-width: 360px) {
-     .main-container {
-         padding: 0px;
-         height: 90vh;
-     }
- }
spaces/Abhaykoul/Wikipedia/app.py DELETED
@@ -1,122 +0,0 @@
- import streamlit as st
- import requests
- import google.generativeai as palm
-
- # Set the page title and icon
- st.set_page_config(page_title="Wikipedia Microbot and AI Mode")
-
- # Wikipedia Microbot Constants
- WIKIPEDIA_API_URL = "https://en.wikipedia.org/w/api.php"
-
- # AI Mode Constants
- palm.configure(api_key="AIzaSyCnrcQWTEedcGuEsJ9Uhbj_ZS9B2nozMiU")
- defaults = {
-     'model': 'models/text-bison-001',
-     'temperature': 1,
-     'candidate_count': 1,
-     'top_k': 40,
-     'top_p': 0.95,
-     'max_output_tokens': 1024,
-     'stop_sequences': [],
-     'safety_settings': [
-         {"category": "HARM_CATEGORY_DEROGATORY", "threshold": 4},
-         {"category": "HARM_CATEGORY_TOXICITY", "threshold": 4},
-         {"category": "HARM_CATEGORY_VIOLENCE", "threshold": 4},
-         {"category": "HARM_CATEGORY_SEXUAL", "threshold": 4},
-         {"category": "HARM_CATEGORY_MEDICAL", "threshold": 4},
-         {"category": "HARM_CATEGORY_DANGEROUS", "threshold": 4},
-     ]
- }
-
- # Custom Instruction
- custom_instruction = """
- I am a small system from HelpingAI, made by Abhay Koul, who is 16 years old and lives in KP Colony, Vessu, Anantnag. Abhay Koul has been working on AI since Jan 2023 and is currently working on more AI and AI-related projects; for more information about his projects and HelpingAI, join the Discord community. The main function of HelpingAI is to make learning and education easier and more convenient for students. My purpose is to simplify your learning journey by providing personalized assistance, innovative teaching methods, and tailored resources to meet your unique needs. I am here to make your educational experience more enjoyable and effective. Feel free to ask me any questions or let me know how I can assist you in your learning adventure and in many more things from your life.
- """
-
- # Create tabs for Wikipedia Microbot and AI Mode
- selected_mode = st.radio("Select Mode", ["Wikipedia Microbot", "AI Mode"])
-
- if selected_mode == "Wikipedia Microbot":
-     # Wikipedia Microbot Code
-     st.title("Wikipedia Microbot")
-     st.markdown("Explore Wikipedia with Ease")
-
-     # Sidebar for user options
-     st.sidebar.header("Options")
-
-     # User input and search button
-     query = st.sidebar.text_input("Enter a Query", help="E.g., 'Python programming'")
-     search_button = st.sidebar.button("Search")
-
-     # Container for main content
-     main_container = st.container()
-
-     if search_button:
-         if query:
-             try:
-                 # Search Wikipedia for the query
-                 params = {
-                     "action": "query",
-                     "format": "json",
-                     "prop": "extracts|info|pageviews",
-                     "exintro": True,
-                     "explaintext": True,
-                     "exsectionformat": "plain",
-                     "titles": query,
-                     "utf8": 1,
-                     "formatversion": 2,
-                     "pvipdays": 7,
-                 }
-
-                 response = requests.get(WIKIPEDIA_API_URL, params=params)
-
-                 if response.status_code == 200:
-                     data = response.json()
-
-                     if "error" in data:
-                         st.sidebar.error(f"Error: {data['error']['info']}")
-                     else:
-                         page = data["query"]["pages"][0]
-
-                         # Display page title
-                         st.title(page['title'])
-
-                         # Display page views statistics. The API returns a dict keyed
-                         # by date, so sum the daily counts rather than index by query.
-                         daily_views = page.get("pageviews", {}) or {}
-                         views = sum(v for v in daily_views.values() if v) or "Data not available"
-                         st.info(f"Page Views (Last 7 days): {views}")
-
-                         # Display summary
-                         st.write(page.get("extract", "No summary available."))
-
-                 else:
-                     st.sidebar.error("Error: Unable to retrieve data from Wikipedia. Please try again later.")
-             except Exception as e:
-                 st.sidebar.error(f"Error: {e}")
-
- elif selected_mode == "AI Mode":
-     # AI Mode Code
-     st.title("AI Mode")
-     st.markdown("Interact with an AI powered by Abhay Koul")
-
-     user_input = st.text_area('You:', height=100, help="Type your message here")
-
-     if st.button('Submit', key='ai_button'):
-         with st.spinner("Thinking..."):
-             if user_input.lower() in ['quit', 'exit', 'bye']:
-                 st.write("Goodbye! Have a great day!")
-             else:
-                 # Keep a chat history in session state
-                 session_state = st.session_state.get(user_input, [])
-                 session_state.append({"user": user_input})
-                 st.session_state[user_input] = session_state
-
-                 # Prepare conversation history
-                 conversation_history = "\n".join(["You: " + item["user"] for item in session_state])
-
-                 # Construct the prompt with conversation history
-                 prompt = f"""{custom_instruction}
- Your conversation history:\n{conversation_history}
- Your AI-generated response:"""
-
-                 response = palm.generate_text(**defaults, prompt=prompt)
-                 st.write(response.result)
spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/gpt4.py DELETED
@@ -1,56 +0,0 @@
- """
- freeGPT's gpt4 module
- """
-
- from uuid import uuid4
- from re import findall
- from curl_cffi.requests import get, RequestsError
-
-
- class Completion:
-     """
-     This class provides methods for generating completions based on prompts.
-     """
-
-     async def create(self, prompt):
-         """
-         Generate a completion based on the provided prompt.
-
-         Args:
-             prompt (str): The input prompt to generate a completion from.
-
-         Returns:
-             str: The generated completion as a text string.
-
-         Raises:
-             RequestsError: If the response does not contain the expected "youChatToken".
-         """
-         resp = get(
-             "https://you.com/api/streamingSearch",
-             headers={
-                 "cache-control": "no-cache",
-                 "referer": "https://you.com/search?q=gpt4&tbm=youchat",
-                 "cookie": f"safesearch_guest=Off; uuid_guest={str(uuid4())}",
-             },
-             params={
-                 "q": prompt,
-                 "page": 1,
-                 "count": 10,
-                 "safeSearch": "Off",
-                 "onShoppingPage": False,
-                 "mkt": "",
-                 "responseFilter": "WebPages,Translations,TimeZone,Computation,RelatedSearches",
-                 "domain": "youchat",
-                 "queryTraceId": str(uuid4()),
-                 "chat": [],
-             },
-             impersonate="chrome107",
-         )
-         if "youChatToken" not in resp.text:
-             raise RequestsError("Unable to fetch response.")
-         return (
-             "".join(findall(r"{\"youChatToken\": \"(.*?)\"}", resp.text))
-             .replace("\\n", "\n")
-             .replace("\\\\", "\\")
-             .replace('\\"', '"')
-         )
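
Since `Completion.create` is a coroutine wrapping a blocking HTTP call, it has to be awaited; a minimal sketch (network access to you.com assumed):

```python
import asyncio

async def main():
    text = await Completion().create("Summarize attention in one sentence.")
    print(text)

asyncio.run(main())
```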
spaces/AchyuthGamer/OpenGPT/g4f/Provider/unfinished/__init__.py DELETED
@@ -1,3 +0,0 @@
- from .MikuChat import MikuChat
- from .PerplexityAi import PerplexityAi
- from .Komo import Komo
 
 
 
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fullwindowrectangle/FullWindowRectangle.js DELETED
@@ -1,2 +0,0 @@
- import FullWindowRectangle from '../../../plugins/fullwindowrectangle.js';
- export default FullWindowRectangle;
 
 
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/defaultcallbacks/MovePanelCallbacks.js DELETED
@@ -1,12 +0,0 @@
- var GetCallback = function (duration, ease) {
-     return function (child, key, sides, reset) {
-         if (key === 'panel') {
-             sides.moveChild(child, ((reset) ? 0 : duration), ease);
-         }
-     }
- }
-
- export default {
-     show: GetCallback,
-     hide: GetCallback
- }
spaces/Alpaca233/SadTalker/src/face3d/data/base_dataset.py DELETED
@@ -1,125 +0,0 @@
- """This module implements an abstract base class (ABC) 'BaseDataset' for datasets.
-
- It also includes common transformation functions (e.g., get_transform, __scale_width), which can be later used in subclasses.
- """
- import random
- import numpy as np
- import torch.utils.data as data
- from PIL import Image
- import torchvision.transforms as transforms
- from abc import ABC, abstractmethod
-
-
- class BaseDataset(data.Dataset, ABC):
-     """This class is an abstract base class (ABC) for datasets.
-
-     To create a subclass, you need to implement the following four functions:
-     -- <__init__>: initialize the class, first call BaseDataset.__init__(self, opt).
-     -- <__len__>: return the size of dataset.
-     -- <__getitem__>: get a data point.
-     -- <modify_commandline_options>: (optionally) add dataset-specific options and set default options.
-     """
-
-     def __init__(self, opt):
-         """Initialize the class; save the options in the class
-
-         Parameters:
-             opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions
-         """
-         self.opt = opt
-         # self.root = opt.dataroot
-         self.current_epoch = 0
-
-     @staticmethod
-     def modify_commandline_options(parser, is_train):
-         """Add new dataset-specific options, and rewrite default values for existing options.
-
-         Parameters:
-             parser -- original option parser
-             is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options.
-
-         Returns:
-             the modified parser.
-         """
-         return parser
-
-     @abstractmethod
-     def __len__(self):
-         """Return the total number of images in the dataset."""
-         return 0
-
-     @abstractmethod
-     def __getitem__(self, index):
-         """Return a data point and its metadata information.
-
-         Parameters:
-             index -- a random integer for data indexing
-
-         Returns:
-             a dictionary of data with their names. It usually contains the data itself and its metadata information.
-         """
-         pass
-
-
- def get_transform(grayscale=False):
-     transform_list = []
-     if grayscale:
-         transform_list.append(transforms.Grayscale(1))
-     transform_list += [transforms.ToTensor()]
-     return transforms.Compose(transform_list)
-
-
- def get_affine_mat(opt, size):
-     shift_x, shift_y, scale, rot_angle, flip = 0., 0., 1., 0., False
-     rot_rad = 0.  # keep rotation defined even when 'rot' is not in opt.preprocess
-     w, h = size
-
-     if 'shift' in opt.preprocess:
-         shift_pixs = int(opt.shift_pixs)
-         shift_x = random.randint(-shift_pixs, shift_pixs)
-         shift_y = random.randint(-shift_pixs, shift_pixs)
-     if 'scale' in opt.preprocess:
-         scale = 1 + opt.scale_delta * (2 * random.random() - 1)
-     if 'rot' in opt.preprocess:
-         rot_angle = opt.rot_angle * (2 * random.random() - 1)
-         rot_rad = -rot_angle * np.pi / 180
-     if 'flip' in opt.preprocess:
-         flip = random.random() > 0.5
-
-     shift_to_origin = np.array([1, 0, -w//2, 0, 1, -h//2, 0, 0, 1]).reshape([3, 3])
-     flip_mat = np.array([-1 if flip else 1, 0, 0, 0, 1, 0, 0, 0, 1]).reshape([3, 3])
-     shift_mat = np.array([1, 0, shift_x, 0, 1, shift_y, 0, 0, 1]).reshape([3, 3])
-     rot_mat = np.array([np.cos(rot_rad), np.sin(rot_rad), 0, -np.sin(rot_rad), np.cos(rot_rad), 0, 0, 0, 1]).reshape([3, 3])
-     scale_mat = np.array([scale, 0, 0, 0, scale, 0, 0, 0, 1]).reshape([3, 3])
-     shift_to_center = np.array([1, 0, w//2, 0, 1, h//2, 0, 0, 1]).reshape([3, 3])
-
-     affine = shift_to_center @ scale_mat @ rot_mat @ shift_mat @ flip_mat @ shift_to_origin
-     affine_inv = np.linalg.inv(affine)
-     return affine, affine_inv, flip
-
-
- def apply_img_affine(img, affine_inv, method=Image.BICUBIC):
-     return img.transform(img.size, Image.AFFINE, data=affine_inv.flatten()[:6], resample=method)
-
-
- def apply_lm_affine(landmark, affine, flip, size):
-     _, h = size
-     lm = landmark.copy()
-     lm[:, 1] = h - 1 - lm[:, 1]
-     lm = np.concatenate((lm, np.ones([lm.shape[0], 1])), -1)
-     lm = lm @ np.transpose(affine)
-     lm[:, :2] = lm[:, :2] / lm[:, 2:]
-     lm = lm[:, :2]
-     lm[:, 1] = h - 1 - lm[:, 1]
-     if flip:
-         lm_ = lm.copy()
-         # swap symmetric 68-point landmark indices when the image is mirrored
-         lm_[:17] = lm[16::-1]
-         lm_[17:22] = lm[26:21:-1]
-         lm_[22:27] = lm[21:16:-1]
-         lm_[31:36] = lm[35:30:-1]
-         lm_[36:40] = lm[45:41:-1]
-         lm_[40:42] = lm[47:45:-1]
-         lm_[42:46] = lm[39:35:-1]
-         lm_[46:48] = lm[41:39:-1]
-         lm_[48:55] = lm[54:47:-1]
-         lm_[55:60] = lm[59:54:-1]
-         lm_[60:65] = lm[64:59:-1]
-         lm_[65:68] = lm[67:64:-1]
-         lm = lm_
-     return lm
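
A minimal sketch of the affine helpers above; `opt` is stubbed with only the attributes `get_affine_mat` actually reads:

```python
from types import SimpleNamespace
from PIL import Image

# Stub options exposing only the fields get_affine_mat reads.
opt = SimpleNamespace(preprocess=['shift', 'scale', 'rot', 'flip'],
                      shift_pixs=10, scale_delta=0.1, rot_angle=10)
affine, affine_inv, flip = get_affine_mat(opt, (256, 256))

img = Image.new('RGB', (256, 256))
warped = apply_img_affine(img, affine_inv)  # PIL applies the inverse mapping
```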
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_ddpm_parallel.py DELETED
@@ -1,216 +0,0 @@
- # Copyright 2023 ParaDiGMS authors and The HuggingFace Team. All rights reserved.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- import torch
-
- from diffusers import DDPMParallelScheduler
-
- from .test_schedulers import SchedulerCommonTest
-
-
- class DDPMParallelSchedulerTest(SchedulerCommonTest):
-     scheduler_classes = (DDPMParallelScheduler,)
-
-     def get_scheduler_config(self, **kwargs):
-         config = {
-             "num_train_timesteps": 1000,
-             "beta_start": 0.0001,
-             "beta_end": 0.02,
-             "beta_schedule": "linear",
-             "variance_type": "fixed_small",
-             "clip_sample": True,
-         }
-
-         config.update(**kwargs)
-         return config
-
-     def test_timesteps(self):
-         for timesteps in [1, 5, 100, 1000]:
-             self.check_over_configs(num_train_timesteps=timesteps)
-
-     def test_betas(self):
-         for beta_start, beta_end in zip([0.0001, 0.001, 0.01, 0.1], [0.002, 0.02, 0.2, 2]):
-             self.check_over_configs(beta_start=beta_start, beta_end=beta_end)
-
-     def test_schedules(self):
-         for schedule in ["linear", "squaredcos_cap_v2"]:
-             self.check_over_configs(beta_schedule=schedule)
-
-     def test_variance_type(self):
-         for variance in ["fixed_small", "fixed_large", "other"]:
-             self.check_over_configs(variance_type=variance)
-
-     def test_clip_sample(self):
-         for clip_sample in [True, False]:
-             self.check_over_configs(clip_sample=clip_sample)
-
-     def test_thresholding(self):
-         self.check_over_configs(thresholding=False)
-         for threshold in [0.5, 1.0, 2.0]:
-             for prediction_type in ["epsilon", "sample", "v_prediction"]:
-                 self.check_over_configs(
-                     thresholding=True,
-                     prediction_type=prediction_type,
-                     sample_max_value=threshold,
-                 )
-
-     def test_prediction_type(self):
-         for prediction_type in ["epsilon", "sample", "v_prediction"]:
-             self.check_over_configs(prediction_type=prediction_type)
-
-     def test_time_indices(self):
-         for t in [0, 500, 999]:
-             self.check_over_forward(time_step=t)
-
-     def test_variance(self):
-         scheduler_class = self.scheduler_classes[0]
-         scheduler_config = self.get_scheduler_config()
-         scheduler = scheduler_class(**scheduler_config)
-
-         assert torch.sum(torch.abs(scheduler._get_variance(0) - 0.0)) < 1e-5
-         assert torch.sum(torch.abs(scheduler._get_variance(487) - 0.00979)) < 1e-5
-         assert torch.sum(torch.abs(scheduler._get_variance(999) - 0.02)) < 1e-5
-
-     def test_batch_step_no_noise(self):
-         scheduler_class = self.scheduler_classes[0]
-         scheduler_config = self.get_scheduler_config()
-         scheduler = scheduler_class(**scheduler_config)
-
-         num_trained_timesteps = len(scheduler)
-
-         model = self.dummy_model()
-         sample1 = self.dummy_sample_deter
-         sample2 = self.dummy_sample_deter + 0.1
-         sample3 = self.dummy_sample_deter - 0.1
-
-         per_sample_batch = sample1.shape[0]
-         samples = torch.stack([sample1, sample2, sample3], dim=0)
-         timesteps = torch.arange(num_trained_timesteps)[0:3, None].repeat(1, per_sample_batch)
-
-         residual = model(samples.flatten(0, 1), timesteps.flatten(0, 1))
-         pred_prev_sample = scheduler.batch_step_no_noise(residual, timesteps.flatten(0, 1), samples.flatten(0, 1))
-
-         result_sum = torch.sum(torch.abs(pred_prev_sample))
-         result_mean = torch.mean(torch.abs(pred_prev_sample))
-
-         assert abs(result_sum.item() - 1153.1833) < 1e-2
-         assert abs(result_mean.item() - 0.5005) < 1e-3
-
-     def test_full_loop_no_noise(self):
-         scheduler_class = self.scheduler_classes[0]
-         scheduler_config = self.get_scheduler_config()
-         scheduler = scheduler_class(**scheduler_config)
-
-         num_trained_timesteps = len(scheduler)
-
-         model = self.dummy_model()
-         sample = self.dummy_sample_deter
-         generator = torch.manual_seed(0)
-
-         for t in reversed(range(num_trained_timesteps)):
-             # 1. predict noise residual
-             residual = model(sample, t)
-
-             # 2. predict previous mean of sample x_t-1
-             pred_prev_sample = scheduler.step(residual, t, sample, generator=generator).prev_sample
-
-             sample = pred_prev_sample
-
-         result_sum = torch.sum(torch.abs(sample))
-         result_mean = torch.mean(torch.abs(sample))
-
-         assert abs(result_sum.item() - 258.9606) < 1e-2
-         assert abs(result_mean.item() - 0.3372) < 1e-3
-
-     def test_full_loop_with_v_prediction(self):
-         scheduler_class = self.scheduler_classes[0]
-         scheduler_config = self.get_scheduler_config(prediction_type="v_prediction")
-         scheduler = scheduler_class(**scheduler_config)
-
-         num_trained_timesteps = len(scheduler)
-
-         model = self.dummy_model()
-         sample = self.dummy_sample_deter
-         generator = torch.manual_seed(0)
-
-         for t in reversed(range(num_trained_timesteps)):
-             # 1. predict noise residual
-             residual = model(sample, t)
-
-             # 2. predict previous mean of sample x_t-1
-             pred_prev_sample = scheduler.step(residual, t, sample, generator=generator).prev_sample
-
-             sample = pred_prev_sample
-
-         result_sum = torch.sum(torch.abs(sample))
-         result_mean = torch.mean(torch.abs(sample))
-
-         assert abs(result_sum.item() - 202.0296) < 1e-2
-         assert abs(result_mean.item() - 0.2631) < 1e-3
-
-     def test_custom_timesteps(self):
-         scheduler_class = self.scheduler_classes[0]
-         scheduler_config = self.get_scheduler_config()
-         scheduler = scheduler_class(**scheduler_config)
-
-         timesteps = [100, 87, 50, 1, 0]
-
-         scheduler.set_timesteps(timesteps=timesteps)
-
-         scheduler_timesteps = scheduler.timesteps
-
-         for i, timestep in enumerate(scheduler_timesteps):
-             if i == len(timesteps) - 1:
-                 expected_prev_t = -1
-             else:
-                 expected_prev_t = timesteps[i + 1]
-
-             prev_t = scheduler.previous_timestep(timestep)
-             prev_t = prev_t.item()
-
-             self.assertEqual(prev_t, expected_prev_t)
-
-     def test_custom_timesteps_increasing_order(self):
-         scheduler_class = self.scheduler_classes[0]
-         scheduler_config = self.get_scheduler_config()
-         scheduler = scheduler_class(**scheduler_config)
-
-         timesteps = [100, 87, 50, 51, 0]
-
-         with self.assertRaises(ValueError, msg="`custom_timesteps` must be in descending order."):
-             scheduler.set_timesteps(timesteps=timesteps)
-
-     def test_custom_timesteps_passing_both_num_inference_steps_and_timesteps(self):
-         scheduler_class = self.scheduler_classes[0]
-         scheduler_config = self.get_scheduler_config()
-         scheduler = scheduler_class(**scheduler_config)
-
-         timesteps = [100, 87, 50, 1, 0]
-         num_inference_steps = len(timesteps)
-
-         with self.assertRaises(ValueError, msg="Can only pass one of `num_inference_steps` or `custom_timesteps`."):
-             scheduler.set_timesteps(num_inference_steps=num_inference_steps, timesteps=timesteps)
-
-     def test_custom_timesteps_too_large(self):
-         scheduler_class = self.scheduler_classes[0]
-         scheduler_config = self.get_scheduler_config()
-         scheduler = scheduler_class(**scheduler_config)
-
-         timesteps = [scheduler.config.num_train_timesteps]
-
-         with self.assertRaises(
-             ValueError,
-             msg=f"`timesteps` must start before `self.config.train_timesteps`: {scheduler.config.num_train_timesteps}",
-         ):
-             scheduler.set_timesteps(timesteps=timesteps)
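
Outside the test harness, the custom-timesteps behavior exercised above looks like this (a minimal sketch against the public diffusers API):

```python
from diffusers import DDPMParallelScheduler

sched = DDPMParallelScheduler(num_train_timesteps=1000)
sched.set_timesteps(timesteps=[100, 87, 50, 1, 0])  # must be strictly descending
print(sched.timesteps)  # tensor([100,  87,  50,   1,   0])
```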
spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_rpn_r50_caffe_fpn_1x_coco.py DELETED
@@ -1,58 +0,0 @@
- _base_ = '../rpn/rpn_r50_caffe_fpn_1x_coco.py'
- model = dict(
-     rpn_head=dict(
-         _delete_=True,
-         type='GARPNHead',
-         in_channels=256,
-         feat_channels=256,
-         approx_anchor_generator=dict(
-             type='AnchorGenerator',
-             octave_base_scale=8,
-             scales_per_octave=3,
-             ratios=[0.5, 1.0, 2.0],
-             strides=[4, 8, 16, 32, 64]),
-         square_anchor_generator=dict(
-             type='AnchorGenerator',
-             ratios=[1.0],
-             scales=[8],
-             strides=[4, 8, 16, 32, 64]),
-         anchor_coder=dict(
-             type='DeltaXYWHBBoxCoder',
-             target_means=[.0, .0, .0, .0],
-             target_stds=[0.07, 0.07, 0.14, 0.14]),
-         bbox_coder=dict(
-             type='DeltaXYWHBBoxCoder',
-             target_means=[.0, .0, .0, .0],
-             target_stds=[0.07, 0.07, 0.11, 0.11]),
-         loc_filter_thr=0.01,
-         loss_loc=dict(
-             type='FocalLoss',
-             use_sigmoid=True,
-             gamma=2.0,
-             alpha=0.25,
-             loss_weight=1.0),
-         loss_shape=dict(type='BoundedIoULoss', beta=0.2, loss_weight=1.0),
-         loss_cls=dict(
-             type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
-         loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
-     # model training and testing settings
-     train_cfg=dict(
-         rpn=dict(
-             ga_assigner=dict(
-                 type='ApproxMaxIoUAssigner',
-                 pos_iou_thr=0.7,
-                 neg_iou_thr=0.3,
-                 min_pos_iou=0.3,
-                 ignore_iof_thr=-1),
-             ga_sampler=dict(
-                 type='RandomSampler',
-                 num=256,
-                 pos_fraction=0.5,
-                 neg_pos_ub=-1,
-                 add_gt_as_proposals=False),
-             allowed_border=-1,
-             center_ratio=0.2,
-             ignore_ratio=0.5)),
-     test_cfg=dict(rpn=dict(nms_post=1000)))
- optimizer_config = dict(
-     _delete_=True, grad_clip=dict(max_norm=35, norm_type=2))
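
A hedged sketch of consuming this config with the MMDetection 2.x APIs (the path is assumed relative to an mmdetection checkout):

```python
from mmcv import Config
from mmdet.models import build_detector

cfg = Config.fromfile('configs/guided_anchoring/ga_rpn_r50_caffe_fpn_1x_coco.py')
model = build_detector(cfg.model)  # train_cfg/test_cfg already live inside cfg.model here
```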
spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/trident_resnet.py DELETED
@@ -1,292 +0,0 @@
- import torch
- import torch.nn as nn
- import torch.nn.functional as F
- import torch.utils.checkpoint as cp
- from mmcv.cnn import build_conv_layer, build_norm_layer, kaiming_init
- from torch.nn.modules.utils import _pair
-
- from mmdet.models.backbones.resnet import Bottleneck, ResNet
- from mmdet.models.builder import BACKBONES
-
-
- class TridentConv(nn.Module):
-     """Trident Convolution Module.
-
-     Args:
-         in_channels (int): Number of channels in input.
-         out_channels (int): Number of channels in output.
-         kernel_size (int): Size of convolution kernel.
-         stride (int, optional): Convolution stride. Default: 1.
-         trident_dilations (tuple[int, int, int], optional): Dilations of
-             different trident branch. Default: (1, 2, 3).
-         test_branch_idx (int, optional): In inference, all 3 branches will
-             be used if `test_branch_idx==-1`, otherwise only branch with
-             index `test_branch_idx` will be used. Default: 1.
-         bias (bool, optional): Whether to use bias in convolution or not.
-             Default: False.
-     """
-
-     def __init__(self,
-                  in_channels,
-                  out_channels,
-                  kernel_size,
-                  stride=1,
-                  trident_dilations=(1, 2, 3),
-                  test_branch_idx=1,
-                  bias=False):
-         super(TridentConv, self).__init__()
-         self.num_branch = len(trident_dilations)
-         self.with_bias = bias
-         self.test_branch_idx = test_branch_idx
-         self.stride = _pair(stride)
-         self.kernel_size = _pair(kernel_size)
-         self.paddings = _pair(trident_dilations)
-         self.dilations = trident_dilations
-         self.in_channels = in_channels
-         self.out_channels = out_channels
-
-         self.weight = nn.Parameter(
-             torch.Tensor(out_channels, in_channels, *self.kernel_size))
-         if bias:
-             self.bias = nn.Parameter(torch.Tensor(out_channels))
-         else:
-             self.bias = None
-         self.init_weights()
-
-     def init_weights(self):
-         kaiming_init(self, distribution='uniform', mode='fan_in')
-
-     def extra_repr(self):
-         tmpstr = f'in_channels={self.in_channels}'
-         tmpstr += f', out_channels={self.out_channels}'
-         tmpstr += f', kernel_size={self.kernel_size}'
-         tmpstr += f', num_branch={self.num_branch}'
-         tmpstr += f', test_branch_idx={self.test_branch_idx}'
-         tmpstr += f', stride={self.stride}'
-         tmpstr += f', paddings={self.paddings}'
-         tmpstr += f', dilations={self.dilations}'
-         tmpstr += f', bias={self.bias}'
-         return tmpstr
-
-     def forward(self, inputs):
-         if self.training or self.test_branch_idx == -1:
-             outputs = [
-                 F.conv2d(input, self.weight, self.bias, self.stride, padding,
-                          dilation) for input, dilation, padding in zip(
-                              inputs, self.dilations, self.paddings)
-             ]
-         else:
-             assert len(inputs) == 1
-             outputs = [
-                 F.conv2d(inputs[0], self.weight, self.bias, self.stride,
-                          self.paddings[self.test_branch_idx],
-                          self.dilations[self.test_branch_idx])
-             ]
-
-         return outputs
-
-
- # Since TridentNet is defined over ResNet50 and ResNet101, here we
- # only support TridentBottleneckBlock.
- class TridentBottleneck(Bottleneck):
-     """BottleBlock for TridentResNet.
-
-     Args:
-         trident_dilations (tuple[int, int, int]): Dilations of different
-             trident branch.
-         test_branch_idx (int): In inference, all 3 branches will be used
-             if `test_branch_idx==-1`, otherwise only branch with index
-             `test_branch_idx` will be used.
-         concat_output (bool): Whether to concat the output list to a Tensor.
-             `True` only in the last Block.
-     """
-
-     def __init__(self, trident_dilations, test_branch_idx, concat_output,
-                  **kwargs):
-
-         super(TridentBottleneck, self).__init__(**kwargs)
-         self.trident_dilations = trident_dilations
-         self.num_branch = len(trident_dilations)
-         self.concat_output = concat_output
-         self.test_branch_idx = test_branch_idx
-         self.conv2 = TridentConv(
-             self.planes,
-             self.planes,
-             kernel_size=3,
-             stride=self.conv2_stride,
-             bias=False,
-             trident_dilations=self.trident_dilations,
-             test_branch_idx=test_branch_idx)
-
-     def forward(self, x):
-
-         def _inner_forward(x):
-             num_branch = (
-                 self.num_branch
-                 if self.training or self.test_branch_idx == -1 else 1)
-             identity = x
-             if not isinstance(x, list):
-                 x = (x, ) * num_branch
-                 identity = x
-                 if self.downsample is not None:
-                     identity = [self.downsample(b) for b in x]
-
-             out = [self.conv1(b) for b in x]
-             out = [self.norm1(b) for b in out]
-             out = [self.relu(b) for b in out]
-
-             if self.with_plugins:
-                 for k in range(len(out)):
-                     out[k] = self.forward_plugin(out[k],
-                                                  self.after_conv1_plugin_names)
-
-             out = self.conv2(out)
-             out = [self.norm2(b) for b in out]
-             out = [self.relu(b) for b in out]
-             if self.with_plugins:
-                 for k in range(len(out)):
-                     out[k] = self.forward_plugin(out[k],
-                                                  self.after_conv2_plugin_names)
-
-             out = [self.conv3(b) for b in out]
-             out = [self.norm3(b) for b in out]
-
-             if self.with_plugins:
-                 for k in range(len(out)):
-                     out[k] = self.forward_plugin(out[k],
-                                                  self.after_conv3_plugin_names)
-
-             out = [
-                 out_b + identity_b for out_b, identity_b in zip(out, identity)
-             ]
-             return out
-
-         if self.with_cp and x.requires_grad:
-             out = cp.checkpoint(_inner_forward, x)
-         else:
-             out = _inner_forward(x)
-
-         out = [self.relu(b) for b in out]
-         if self.concat_output:
-             out = torch.cat(out, dim=0)
-         return out
-
-
- def make_trident_res_layer(block,
-                            inplanes,
-                            planes,
-                            num_blocks,
-                            stride=1,
-                            trident_dilations=(1, 2, 3),
-                            style='pytorch',
-                            with_cp=False,
-                            conv_cfg=None,
-                            norm_cfg=dict(type='BN'),
-                            dcn=None,
-                            plugins=None,
-                            test_branch_idx=-1):
-     """Build Trident Res Layers."""
-
-     downsample = None
-     if stride != 1 or inplanes != planes * block.expansion:
-         downsample = []
-         conv_stride = stride
-         downsample.extend([
-             build_conv_layer(
-                 conv_cfg,
-                 inplanes,
-                 planes * block.expansion,
-                 kernel_size=1,
-                 stride=conv_stride,
-                 bias=False),
-             build_norm_layer(norm_cfg, planes * block.expansion)[1]
-         ])
-         downsample = nn.Sequential(*downsample)
-
-     layers = []
-     for i in range(num_blocks):
-         layers.append(
-             block(
-                 inplanes=inplanes,
-                 planes=planes,
-                 stride=stride if i == 0 else 1,
-                 trident_dilations=trident_dilations,
-                 downsample=downsample if i == 0 else None,
-                 style=style,
-                 with_cp=with_cp,
-                 conv_cfg=conv_cfg,
-                 norm_cfg=norm_cfg,
-                 dcn=dcn,
-                 plugins=plugins,
-                 test_branch_idx=test_branch_idx,
-                 concat_output=True if i == num_blocks - 1 else False))
-         inplanes = planes * block.expansion
-     return nn.Sequential(*layers)
-
-
- @BACKBONES.register_module()
- class TridentResNet(ResNet):
-     """The stem layer, stage 1 and stage 2 in Trident ResNet are identical to
-     ResNet, while in stage 3, Trident BottleBlock is utilized to replace the
-     normal BottleBlock to yield trident output. Different branch shares the
-     convolution weight but uses different dilations to achieve multi-scale
-     output.
-
-                                / stage3(b0) \
-     x - stem - stage1 - stage2 - stage3(b1) - output
-                                \ stage3(b2) /
-
-     Args:
-         depth (int): Depth of resnet, from {50, 101, 152}.
-         num_branch (int): Number of branches in TridentNet.
-         test_branch_idx (int): In inference, all 3 branches will be used
-             if `test_branch_idx==-1`, otherwise only branch with index
-             `test_branch_idx` will be used.
-         trident_dilations (tuple[int]): Dilations of different trident branch.
-             len(trident_dilations) should be equal to num_branch.
-     """  # noqa
-
-     def __init__(self, depth, num_branch, test_branch_idx, trident_dilations,
-                  **kwargs):
-
-         assert num_branch == len(trident_dilations)
-         assert depth in (50, 101, 152)
-         super(TridentResNet, self).__init__(depth, **kwargs)
-         assert self.num_stages == 3
-         self.test_branch_idx = test_branch_idx
-         self.num_branch = num_branch
-
-         last_stage_idx = self.num_stages - 1
-         stride = self.strides[last_stage_idx]
-         dilation = trident_dilations
-         dcn = self.dcn if self.stage_with_dcn[last_stage_idx] else None
-         if self.plugins is not None:
-             stage_plugins = self.make_stage_plugins(self.plugins,
-                                                     last_stage_idx)
-         else:
-             stage_plugins = None
-         planes = self.base_channels * 2**last_stage_idx
-         res_layer = make_trident_res_layer(
-             TridentBottleneck,
-             inplanes=(self.block.expansion * self.base_channels *
-                       2**(last_stage_idx - 1)),
-             planes=planes,
-             num_blocks=self.stage_blocks[last_stage_idx],
-             stride=stride,
-             trident_dilations=dilation,
-             style=self.style,
-             with_cp=self.with_cp,
-             conv_cfg=self.conv_cfg,
-             norm_cfg=self.norm_cfg,
-             dcn=dcn,
-             plugins=stage_plugins,
-             test_branch_idx=self.test_branch_idx)
-
-         layer_name = f'layer{last_stage_idx + 1}'
-
-         self.__setattr__(layer_name, res_layer)
-         self.res_layers.pop(last_stage_idx)
-         self.res_layers.insert(last_stage_idx, layer_name)
-
-         self._freeze_stages()
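
For orientation, the backbone is typically instantiated with three-stage ResNet keyword arguments, mirroring the upstream TridentNet config (values here are illustrative):

```python
import torch

backbone = TridentResNet(
    depth=50,
    num_branch=3,
    test_branch_idx=1,
    trident_dilations=(1, 2, 3),
    num_stages=3,          # TridentResNet asserts exactly 3 stages
    strides=(1, 2, 2),
    dilations=(1, 1, 1),
    out_indices=(2,),
    style='caffe')
feats = backbone(torch.randn(1, 3, 224, 224))
# The last stage concatenates the branch outputs along the batch dim,
# so feats[0] carries num_branch feature maps per input image.
```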
spaces/Armaliltril/qbee/README.md DELETED
@@ -1,15 +0,0 @@
- ---
- title: Qbee
- emoji: 📉
- colorFrom: purple
- colorTo: orange
- python_version: 3.10.6
- sdk: gradio
- sdk_version: 3.9.1
- app_file: app.py
- pinned: true
- license: mit
- tags: [ode, quadratization]
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/ArtGAN/Diffusion-API/diffusion_webui/diffusion_models/base_controlnet_pipeline.py DELETED
@@ -1,31 +0,0 @@
- class ControlnetPipeline:
-     def __init__(self):
-         self.pipe = None
-
-     def load_model(self, stable_model_path: str, controlnet_model_path: str):
-         raise NotImplementedError()
-
-     def load_image(self, image_path: str):
-         raise NotImplementedError()
-
-     def controlnet_preprocces(self, read_image: str):
-         raise NotImplementedError()
-
-     def generate_image(
-         self,
-         image_path: str,
-         stable_model_path: str,
-         controlnet_model_path: str,
-         prompt: str,
-         negative_prompt: str,
-         num_images_per_prompt: int,
-         guidance_scale: int,
-         num_inference_step: int,
-         controlnet_conditioning_scale: int,
-         scheduler: str,
-         seed_generator: int,
-     ):
-         raise NotImplementedError()
-
-     def web_interface():
-         raise NotImplementedError()
spaces/Arthur678/vits-uma-genshin-honkai/attentions.py DELETED
@@ -1,300 +0,0 @@
- import math
- import torch
- from torch import nn
- from torch.nn import functional as F
-
- import commons
- from modules import LayerNorm
-
-
- class Encoder(nn.Module):
-   def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
-     super().__init__()
-     self.hidden_channels = hidden_channels
-     self.filter_channels = filter_channels
-     self.n_heads = n_heads
-     self.n_layers = n_layers
-     self.kernel_size = kernel_size
-     self.p_dropout = p_dropout
-     self.window_size = window_size
-
-     self.drop = nn.Dropout(p_dropout)
-     self.attn_layers = nn.ModuleList()
-     self.norm_layers_1 = nn.ModuleList()
-     self.ffn_layers = nn.ModuleList()
-     self.norm_layers_2 = nn.ModuleList()
-     for i in range(self.n_layers):
-       self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
-       self.norm_layers_1.append(LayerNorm(hidden_channels))
-       self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
-       self.norm_layers_2.append(LayerNorm(hidden_channels))
-
-   def forward(self, x, x_mask):
-     attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
-     x = x * x_mask
-     for i in range(self.n_layers):
-       y = self.attn_layers[i](x, x, attn_mask)
-       y = self.drop(y)
-       x = self.norm_layers_1[i](x + y)
-
-       y = self.ffn_layers[i](x, x_mask)
-       y = self.drop(y)
-       x = self.norm_layers_2[i](x + y)
-     x = x * x_mask
-     return x
-
-
- class Decoder(nn.Module):
-   def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
-     super().__init__()
-     self.hidden_channels = hidden_channels
-     self.filter_channels = filter_channels
-     self.n_heads = n_heads
-     self.n_layers = n_layers
-     self.kernel_size = kernel_size
-     self.p_dropout = p_dropout
-     self.proximal_bias = proximal_bias
-     self.proximal_init = proximal_init
-
-     self.drop = nn.Dropout(p_dropout)
-     self.self_attn_layers = nn.ModuleList()
-     self.norm_layers_0 = nn.ModuleList()
-     self.encdec_attn_layers = nn.ModuleList()
-     self.norm_layers_1 = nn.ModuleList()
-     self.ffn_layers = nn.ModuleList()
-     self.norm_layers_2 = nn.ModuleList()
-     for i in range(self.n_layers):
-       self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
-       self.norm_layers_0.append(LayerNorm(hidden_channels))
-       self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
-       self.norm_layers_1.append(LayerNorm(hidden_channels))
-       self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
-       self.norm_layers_2.append(LayerNorm(hidden_channels))
-
-   def forward(self, x, x_mask, h, h_mask):
-     """
-     x: decoder input
-     h: encoder output
-     """
-     self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
-     encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
-     x = x * x_mask
-     for i in range(self.n_layers):
-       y = self.self_attn_layers[i](x, x, self_attn_mask)
-       y = self.drop(y)
-       x = self.norm_layers_0[i](x + y)
-
-       y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
-       y = self.drop(y)
-       x = self.norm_layers_1[i](x + y)
-
-       y = self.ffn_layers[i](x, x_mask)
-       y = self.drop(y)
-       x = self.norm_layers_2[i](x + y)
-     x = x * x_mask
-     return x
-
-
- class MultiHeadAttention(nn.Module):
-   def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
-     super().__init__()
-     assert channels % n_heads == 0
-
-     self.channels = channels
-     self.out_channels = out_channels
-     self.n_heads = n_heads
-     self.p_dropout = p_dropout
-     self.window_size = window_size
-     self.heads_share = heads_share
-     self.block_length = block_length
-     self.proximal_bias = proximal_bias
-     self.proximal_init = proximal_init
-     self.attn = None
-
-     self.k_channels = channels // n_heads
-     self.conv_q = nn.Conv1d(channels, channels, 1)
-     self.conv_k = nn.Conv1d(channels, channels, 1)
-     self.conv_v = nn.Conv1d(channels, channels, 1)
-     self.conv_o = nn.Conv1d(channels, out_channels, 1)
-     self.drop = nn.Dropout(p_dropout)
-
-     if window_size is not None:
-       n_heads_rel = 1 if heads_share else n_heads
-       rel_stddev = self.k_channels**-0.5
-       self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-       self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
-     nn.init.xavier_uniform_(self.conv_q.weight)
-     nn.init.xavier_uniform_(self.conv_k.weight)
-     nn.init.xavier_uniform_(self.conv_v.weight)
-     if proximal_init:
-       with torch.no_grad():
-         self.conv_k.weight.copy_(self.conv_q.weight)
-         self.conv_k.bias.copy_(self.conv_q.bias)
-
-   def forward(self, x, c, attn_mask=None):
-     q = self.conv_q(x)
-     k = self.conv_k(c)
-     v = self.conv_v(c)
-
-     x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
-     x = self.conv_o(x)
-     return x
-
-   def attention(self, query, key, value, mask=None):
-     # reshape [b, d, t] -> [b, n_h, t, d_k]
-     b, d, t_s, t_t = (*key.size(), query.size(2))
-     query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
-     key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-     value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
-     scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
-     if self.window_size is not None:
-       assert t_s == t_t, "Relative attention is only available for self-attention."
-       key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
-       rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
-       scores_local = self._relative_position_to_absolute_position(rel_logits)
-       scores = scores + scores_local
-     if self.proximal_bias:
-       assert t_s == t_t, "Proximal bias is only available for self-attention."
-       scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
-     if mask is not None:
-       scores = scores.masked_fill(mask == 0, -1e4)
-       if self.block_length is not None:
-         assert t_s == t_t, "Local attention is only available for self-attention."
-         block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
-         scores = scores.masked_fill(block_mask == 0, -1e4)
-     p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
-     p_attn = self.drop(p_attn)
-     output = torch.matmul(p_attn, value)
-     if self.window_size is not None:
-       relative_weights = self._absolute_position_to_relative_position(p_attn)
-       value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
-       output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
-     output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
-     return output, p_attn
-
-   def _matmul_with_relative_values(self, x, y):
-     """
-     x: [b, h, l, m]
-     y: [h or 1, m, d]
-     ret: [b, h, l, d]
-     """
-     ret = torch.matmul(x, y.unsqueeze(0))
-     return ret
-
-   def _matmul_with_relative_keys(self, x, y):
-     """
-     x: [b, h, l, d]
-     y: [h or 1, m, d]
-     ret: [b, h, l, m]
-     """
-     ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
-     return ret
-
-   def _get_relative_embeddings(self, relative_embeddings, length):
-     max_relative_position = 2 * self.window_size + 1
-     # Pad first before slice to avoid using cond ops.
-     pad_length = max(length - (self.window_size + 1), 0)
-     slice_start_position = max((self.window_size + 1) - length, 0)
-     slice_end_position = slice_start_position + 2 * length - 1
-     if pad_length > 0:
-       padded_relative_embeddings = F.pad(
-           relative_embeddings,
-           commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
-     else:
-       padded_relative_embeddings = relative_embeddings
-     used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
-     return used_relative_embeddings
-
-   def _relative_position_to_absolute_position(self, x):
-     """
-     x: [b, h, l, 2*l-1]
-     ret: [b, h, l, l]
-     """
-     batch, heads, length, _ = x.size()
-     # Concat columns of pad to shift from relative to absolute indexing.
-     x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
-     # Concat extra elements so to add up to shape (len+1, 2*len-1).
-     x_flat = x.view([batch, heads, length * 2 * length])
-     x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
-     # Reshape and slice out the padded elements.
-     x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
-     return x_final
-
-   def _absolute_position_to_relative_position(self, x):
-     """
-     x: [b, h, l, l]
-     ret: [b, h, l, 2*l-1]
-     """
-     batch, heads, length, _ = x.size()
-     # padd along column
-     x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
-     x_flat = x.view([batch, heads, length**2 + length*(length -1)])
-     # add 0's in the beginning that will skew the elements after reshape
-     x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
-     x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
-     return x_final
-
-   def _attention_bias_proximal(self, length):
-     """Bias for self-attention to encourage attention to close positions.
-     Args:
-       length: an integer scalar.
-     Returns:
-       a Tensor with shape [1, 1, length, length]
-     """
-     r = torch.arange(length, dtype=torch.float32)
-     diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
-     return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
- class FFN(nn.Module):
-   def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
-     super().__init__()
-     self.in_channels = in_channels
-     self.out_channels = out_channels
-     self.filter_channels = filter_channels
-     self.kernel_size = kernel_size
-     self.p_dropout = p_dropout
-     self.activation = activation
-     self.causal = causal
-
-     if causal:
-       self.padding = self._causal_padding
-     else:
-       self.padding = self._same_padding
-
-     self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
-     self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
-     self.drop = nn.Dropout(p_dropout)
-
-   def forward(self, x, x_mask):
-     x = self.conv_1(self.padding(x * x_mask))
-     if self.activation == "gelu":
-       x = x * torch.sigmoid(1.702 * x)
-     else:
-       x = torch.relu(x)
-     x = self.drop(x)
-     x = self.conv_2(self.padding(x * x_mask))
-     return x * x_mask
-
-   def _causal_padding(self, x):
-     if self.kernel_size == 1:
-       return x
-     pad_l = self.kernel_size - 1
-     pad_r = 0
-     padding = [[0, 0], [0, 0], [pad_l, pad_r]]
-     x = F.pad(x, commons.convert_pad_shape(padding))
-     return x
-
-   def _same_padding(self, x):
-     if self.kernel_size == 1:
-       return x
-     pad_l = (self.kernel_size - 1) // 2
-     pad_r = self.kernel_size // 2
-     padding = [[0, 0], [0, 0], [pad_l, pad_r]]
-     x = F.pad(x, commons.convert_pad_shape(padding))
-     return x
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/langgreekmodel.py DELETED
The diff for this file is too large to render. See raw diff
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/importlib_resources/_itertools.py DELETED
@@ -1,35 +0,0 @@
- from itertools import filterfalse
-
- from typing import (
-     Callable,
-     Iterable,
-     Iterator,
-     Optional,
-     Set,
-     TypeVar,
-     Union,
- )
-
- # Type and type variable definitions
- _T = TypeVar('_T')
- _U = TypeVar('_U')
-
-
- def unique_everseen(
-     iterable: Iterable[_T], key: Optional[Callable[[_T], _U]] = None
- ) -> Iterator[_T]:
-     "List unique elements, preserving order. Remember all elements ever seen."
-     # unique_everseen('AAAABBBCCDAABBB') --> A B C D
-     # unique_everseen('ABBCcAD', str.lower) --> A B C D
-     seen: Set[Union[_T, _U]] = set()
-     seen_add = seen.add
-     if key is None:
-         for element in filterfalse(seen.__contains__, iterable):
-             seen_add(element)
-             yield element
-     else:
-         for element in iterable:
-             k = key(element)
-             if k not in seen:
-                 seen_add(k)
-                 yield element
spaces/BAAI/AltDiffusion-m9/js/index.js DELETED
@@ -1,186 +0,0 @@
- window.SD = (() => {
-   /*
-    * Painterro is made a field of the SD global object
-    * To provide convinience when using w() method in css_and_js.py
-    */
-   class PainterroClass {
-     static isOpen = false;
-     static async init ({ x, toId }) {
-       console.log(x)
-
-       const originalImage = x[2] === 'Mask' ? x[1]?.image : x[0];
-
-       if (window.Painterro === undefined) {
-         try {
-           await this.load();
-         } catch (e) {
-           SDClass.error(e);
-
-           return this.fallback(originalImage);
-         }
-       }
-
-       if (this.isOpen) {
-         return this.fallback(originalImage);
-       }
-       this.isOpen = true;
-
-       let resolveResult;
-       const paintClient = Painterro({
-         hiddenTools: ['arrow'],
-         onHide: () => {
-           resolveResult?.(null);
-         },
-         saveHandler: (image, done) => {
-           const data = image.asDataURL();
-
-           // ensures stable performance even
-           // when the editor is in interactive mode
-           SD.clearImageInput(SD.el.get(`#${toId}`));
-
-           resolveResult(data);
-
-           done(true);
-           paintClient.hide();
-         },
-       });
-
-       const result = await new Promise((resolve) => {
-         resolveResult = resolve;
-         paintClient.show(originalImage);
-       });
-       this.isOpen = false;
-
-       return result ? this.success(result) : this.fallback(originalImage);
-     }
-     static success (result) { return [result, { image: result, mask: result }] };
-     static fallback (image) { return [image, { image: image, mask: image }] };
-     static load () {
-       return new Promise((resolve, reject) => {
-         const scriptId = '__painterro-script';
-         if (document.getElementById(scriptId)) {
-           reject(new Error('Tried to load painterro script, but script tag already exists.'));
-           return;
-         }
-
-         const styleId = '__painterro-css-override';
-         if (!document.getElementById(styleId)) {
-           /* Ensure Painterro window is always on top */
-           const style = document.createElement('style');
-           style.id = styleId;
-           style.setAttribute('type', 'text/css');
-           style.appendChild(document.createTextNode(`
-             .ptro-holder-wrapper {
-                 z-index: 100;
-             }
-           `));
-           document.head.appendChild(style);
-         }
-
-         const script = document.createElement('script');
-         script.id = scriptId;
-         script.src = 'https://unpkg.com/[email protected]/build/painterro.min.js';
-         script.onload = () => resolve(true);
-         script.onerror = (e) => {
-           // remove self on error to enable reattempting load
-           document.head.removeChild(script);
-           reject(e);
-         };
-         document.head.appendChild(script);
-       });
-     }
-   }
-
-   /*
-    * Turns out caching elements doesn't actually work in gradio
-    * As elements in tabs might get recreated
-    */
-   class ElementCache {
-     #el;
-     constructor () {
-       this.root = document.querySelector('gradio-app').shadowRoot;
-     }
-     get (selector) {
-       return this.root.querySelector(selector);
-     }
-   }
-
-   /*
-    * The main helper class to incapsulate functions
-    * that change gradio ui functionality
-    */
-   class SDClass {
-     el = new ElementCache();
-     Painterro = PainterroClass;
-     moveImageFromGallery ({ x, fromId, toId }) {
-       x = x[0];
-       if (!Array.isArray(x) || x.length === 0) return;
-
-       this.clearImageInput(this.el.get(`#${toId}`));
-
-       const i = this.#getGallerySelectedIndex(this.el.get(`#${fromId}`));
-
-       return [x[i].replace('data:;','data:image/png;')];
-     }
-     async copyImageFromGalleryToClipboard ({ x, fromId }) {
-       x = x[0];
-       if (!Array.isArray(x) || x.length === 0) return;
-
-       const i = this.#getGallerySelectedIndex(this.el.get(`#${fromId}`));
-
-       const data = x[i];
-       const blob = await (await fetch(data.replace('data:;','data:image/png;'))).blob();
-       const item = new ClipboardItem({'image/png': blob});
-
-       await this.copyToClipboard([item]);
-     }
-     clickFirstVisibleButton({ rowId }) {
-       const generateButtons = this.el.get(`#${rowId}`).querySelectorAll('.gr-button-primary');
-
-       if (!generateButtons) return;
-
-       for (let i = 0, arr = [...generateButtons]; i < arr.length; i++) {
-         const cs = window.getComputedStyle(arr[i]);
-
-         if (cs.display !== 'none' && cs.visibility !== 'hidden') {
-           console.log(arr[i]);
-
-           arr[i].click();
-           break;
-         }
-       }
-     }
-     async gradioInputToClipboard ({ x }) { return this.copyToClipboard(x[0]); }
-     async copyToClipboard (value) {
-       if (!value || typeof value === 'boolean') return;
-       try {
-         if (Array.isArray(value) &&
-             value.length &&
-             value[0] instanceof ClipboardItem) {
-           await navigator.clipboard.write(value);
-         } else {
-           await navigator.clipboard.writeText(value);
-         }
-       } catch (e) {
-         SDClass.error(e);
-       }
-     }
-     static error (e) {
-       console.error(e);
-       if (typeof e === 'string') {
-         alert(e);
-       } else if(typeof e === 'object' && Object.hasOwn(e, 'message')) {
-         alert(e.message);
-       }
-     }
-     clearImageInput (imageEditor) {
-       imageEditor?.querySelector('.modify-upload button:last-child')?.click();
-     }
-     #getGallerySelectedIndex (gallery) {
-       const selected = gallery.querySelector(`.\\!ring-2`);
-       return selected ? [...selected.parentNode.children].indexOf(selected) : 0;
-     }
-   }
-
-   return new SDClass();
- })();
spaces/Benson/text-generation/Examples/Br Style Download 2022.md DELETED
@@ -1,64 +0,0 @@
-
- <h1>Estilo BR Download 2022: La guía definitiva</h1>
- <p>Si eres un entusiasta de las carreras en Brasil, probablemente hayas oído hablar de Estilo BR, el juego de carreras de arrastre definitivo para dispositivos Android. Con 43 vehículos diferentes para elegir, todos brasileños, desde los más clásicos hasta los más modernos, puede experimentar la emoción de las carreras de alta velocidad contra competidores de todo el mundo, incluyendo motocicletas, camiones y remolques. En esta guía, te mostraremos qué es Estilo BR, cómo descargarlo e instalarlo en tu dispositivo, cómo jugarlo y disfrutar de la experiencia de carreras de arrastre, y por qué deberías jugarlo. Si usted es un veterano experimentado o un nuevo jugador, Estilo BR tiene algo para todos. Descarga ahora y únete a la revolución de las carreras de aceleración en Brasil, escuchando tu música favorita mientras juegas. </p>
- <h2>br style download 2022</h2><br /><p><b><b>Download</b> &#10026;&#10026;&#10026; <a href="https://bltlly.com/2v6LlI">https://bltlly.com/2v6LlI</a></b></p><br /><br />
- <h2>¿Qué es Estilo BR? </h2>
- <p>Estilo BR es un juego de carreras de drag desarrollado por RF Entertainment, un estudio de juegos indie brasileño. Fue lanzado en 2019 y desde entonces ha ganado más de 5 millones de descargas y 4.2 estrellas en Google Play. Es uno de los juegos de carreras más populares en Brasil, ofreciendo una experiencia de carreras sin igual que refleja el estilo y la cultura de las carreras callejeras en el país. </p>
- <h3>Una breve introducción al juego y sus características</h3>
- <p>En Estilo BR, puedes elegir entre 43 vehículos diferentes, todos brasileños, desde los más clásicos hasta los más modernos. Puede personalizar sus vehículos con una amplia variedad de mejoras estéticas y de rendimiento, desde trabajos de pintura personalizados hasta modificaciones del motor. También puede ajustar su motor para optimizar su potencia y velocidad. Puedes participar en carreras multijugador globales con hasta 500 jugadores, tanto en una sala global de mundo abierto como en salas privadas creadas para jugar con amigos. Puedes competir contra pilotos de diferentes países y mostrar tus habilidades en la pista. También puede reproducir música desde su teléfono celular mientras juega, dándole la posibilidad de escuchar sus canciones favoritas mientras corre. </p>
- <p>Para descargar e instalar Estilo BR en tu dispositivo Android, debes seguir estos sencillos pasos:</p>
- <ol>
- <li>Ir a Google Play Store o Aptoide en su dispositivo y buscar Estilo BR.</li>
- <li>Seleccione el juego de los resultados de búsqueda y toque en Instalar.</li>
- <li>Espera a que el juego se descargue e instale en tu dispositivo. </li>
- <li>Una vez completada la instalación, toque en Abrir para iniciar el juego. </li>
- <li>Disfruta jugando Estilo BR! </li>
- </ol>
- <h3>Cómo jugar Estilo BR y disfrutar de la experiencia de carreras de arrastre</h3>
- <p>Para jugar Estilo BR y disfrutar de la experiencia de carreras de arrastre, es necesario seguir estos sencillos consejos:</p>
- <ul>
- <li>Seleccione un vehículo que se adapte a su estilo y preferencia. Puedes elegir entre 43 vehículos diferentes, todos brasileños, desde los más clásicos hasta los más modernos. </li>
- <li>Personaliza tu vehículo con una amplia variedad de mejoras estéticas y de rendimiento. Puede cambiar el color, ruedas, calcomanías, alerones, escapes, luces, parachoques, campanas, techos, ventanas, espejos, puertas, troncos, interiores, asientos, volantes, salpicaderos, indicadores, pedales, palancas de cambios, radios, altavoces, pegatinas, placas de matrícula, etc. También puede modificar su motor, transmisión, suspensión, frenos, neumáticos, nitro, turbocompresor, sobrealimentador, intercooler, colector de admisión, colector de escape, inyector de combustible, bujía, etc.</li>
- <li>Ajusta tu motor para optimizar tu potencia y velocidad . Puede ajustar la relación aire/combustible, el tiempo de ignición, la presión de impulso, la relación de engranajes, la presión de los neumáticos, etc. También puede utilizar la prueba del dinamómetro para medir su potencia, par y aceleración. </li>
- <li>Participa en carreras multijugador globales con hasta 500 jugadores. Puedes unirte a la sala global del mundo abierto o crear tu propia sala privada para jugar con tus amigos. También puedes chatear con otros jugadores y enviar emojis. </li>
- <li>Reproduce música desde tu teléfono celular mientras juegas. Puedes acceder a tu biblioteca de música desde el juego y escuchar tus canciones favoritas mientras corres. También puede ajustar el volumen y saltar canciones. </li>
- </ul>
- <h2>¿Por qué debería jugar Estilo BR? </h2>
- <p>Estilo BR no es solo otro juego de carreras. Es un juego que refleja el estilo y la cultura de las carreras callejeras en Brasil, ofreciendo una experiencia de carreras sin igual que te hará sentir como si fueras parte de la escena. Estos son algunos de los beneficios y desafíos de jugar Estilo BR:</p>
- <p></p>
- <h3>Los beneficios de jugar Estilo BR</h3>
- <p>Jugar Estilo BR tiene muchos beneficios, como:</p>
- <h4>Física y gráficos realistas</h4>
- <p>Estilo BR tiene física realista y gráficos que te harán sentir como si estuvieras conduciendo un vehículo real. El juego simula los efectos de gravedad, fricción, arrastre, inercia, tracción, distribución de peso, suspensión, aerodinámica, etc. sobre el rendimiento y el comportamiento de su vehículo. El juego también cuenta con impresionantes gráficos que te sumergirán en el entorno y la atmósfera de cada pista. El juego cuenta con ciclos de día y noche, efectos meteorológicos, iluminación dinámica y sombras, efectos de humo y fuego, etc.</p>
- <h4>Vehículos diversos y personalizables</h4>
- <p>Estilo BR tiene 43 vehículos diferentes para elegir, todos brasileños, desde los más clásicos hasta los más modernos. Puede personalizar sus vehículos con una amplia variedad de mejoras estéticas y de rendimiento, desde trabajos de pintura personalizados hasta modificaciones del motor. También puede ajustar su motor para optimizar su potencia y velocidad. Puedes crear tu propio estilo único y expresar tu personalidad a través de tu vehículo. </p>
- <h4>Modo multijugador global y reproductor de música</h4>
-
- <h3>Los retos de jugar Estilo BR</h3>
- <p>Jugar Estilo BR también tiene algunos desafíos, como:</p>
- <h4>Competir contra conductores cualificados de diferentes países</h4>
- <p>Estilo BR es un juego competitivo que requiere habilidad y estrategia para ganar carreras. Se enfrentará a conductores de diferentes países que tienen diferentes niveles de experiencia y habilidad. Tendrás que adaptarte a sus estilos y tácticas de conducción y usar tus propias habilidades y estrategias para vencerlos. También tendrá que lidiar con problemas de retardo y conexión que pueden afectar su rendimiento. </p>
- <h4>Gestión de su combustible y dinero</h4>
- <p>Estilo BR es un juego realista que requiere que administres tu combustible y dinero sabiamente. Tendrá que repostar su vehículo con regularidad para evitar quedarse sin gasolina durante las carreras. También tendrás que ganar dinero ganando carreras o viendo anuncios para comprar vehículos nuevos o actualizar los existentes. Usted tendrá que equilibrar sus hábitos de gasto y ahorro y planificar con antelación para futuras compras. </p>
- <h4>Actualizar su rendimiento y ajustar su motor</h4>
- <p>Estilo BR es un juego complejo que requiere que actualices tu rendimiento y afines tu motor cuidadosamente. Tendrá que elegir entre una amplia variedad de mejoras estéticas y de rendimiento que afectarán la apariencia y el rendimiento de su vehículo. También tendrá que ajustar su motor para optimizar su potencia y velocidad mediante el ajuste de varios parámetros, tales como relación aire/ combustible, tiempo de ignición, aumentar la presión, relación de engranajes, presión de los neumáticos, etc. Tendrá que experimentar con diferentes combinaciones y configuraciones hasta que encuentre la configuración óptima para cada vehículo y pista. </p>
- <h2>Conclusión</h2>
-
- <h3>Preguntas frecuentes</h3>
- <p>Estas son algunas de las preguntas más frecuentes sobre Estilo BR:</p>
- <ol>
- <li>¿Cómo puedo obtener más dinero en Estilo BR? </li>
- <p>Puedes ganar más dinero en Estilo BR ganando carreras o viendo anuncios. También puedes comprar dinero con dinero real a través de compras en la aplicación. </p>
- <li>¿Cómo puedo obtener más vehículos en Estilo BR? </li>
- <p>Puedes conseguir más vehículos en Estilo BR comprándolos con dinero. También puedes desbloquear algunos vehículos completando ciertos logros o eventos. </p>
- <li>¿Cómo puedo chatear con otros jugadores en Estilo BR? </li>
- <p>Puedes chatear con otros jugadores en Estilo BR tocando el icono de chat en la esquina superior derecha de la pantalla. También puedes enviar emojis tocando el icono de emoji en la esquina inferior izquierda de la pantalla. </p>
- <li>¿Cómo puedo reproducir música desde mi teléfono celular mientras se reproduce Estilo BR? </li>
- <p>Puede reproducir música desde su teléfono celular mientras reproduce Estilo BR tocando el icono de música en la esquina superior izquierda de la pantalla. Puedes acceder a tu biblioteca de música desde el juego y escuchar tus canciones favoritas mientras corres. También puedes ajustar el volumen y saltar canciones. </p>
- <li>¿Cómo puedo contactar a los desarrolladores de Estilo BR? </li>
- <p>Puede ponerse en contacto con los desarrolladores de Estilo BR enviando un correo electrónico a [email protected] o siguiéndolos en sus cuentas de redes sociales, como Facebook, Instagram, YouTube, etc.</p>
- </ol></p> 64aa2da5cf<br />
- <br />
- <br />
spaces/BernardoOlisan/vqganclip/taming-transformers/taming/data/ade20k.py DELETED
@@ -1,124 +0,0 @@
- import os
- import numpy as np
- import cv2
- import albumentations
- from PIL import Image
- from torch.utils.data import Dataset
-
- from taming.data.sflckr import SegmentationBase # for examples included in repo
-
-
- class Examples(SegmentationBase):
-     def __init__(self, size=256, random_crop=False, interpolation="bicubic"):
-         super().__init__(data_csv="data/ade20k_examples.txt",
-                          data_root="data/ade20k_images",
-                          segmentation_root="data/ade20k_segmentations",
-                          size=size, random_crop=random_crop,
-                          interpolation=interpolation,
-                          n_labels=151, shift_segmentation=False)
-
-
- # With semantic map and scene label
- class ADE20kBase(Dataset):
-     def __init__(self, config=None, size=None, random_crop=False, interpolation="bicubic", crop_size=None):
-         self.split = self.get_split()
-         self.n_labels = 151 # unknown + 150
-         self.data_csv = {"train": "data/ade20k_train.txt",
-                          "validation": "data/ade20k_test.txt"}[self.split]
-         self.data_root = "data/ade20k_root"
-         with open(os.path.join(self.data_root, "sceneCategories.txt"), "r") as f:
-             self.scene_categories = f.read().splitlines()
-         self.scene_categories = dict(line.split() for line in self.scene_categories)
-         with open(self.data_csv, "r") as f:
-             self.image_paths = f.read().splitlines()
-         self._length = len(self.image_paths)
-         self.labels = {
-             "relative_file_path_": [l for l in self.image_paths],
-             "file_path_": [os.path.join(self.data_root, "images", l)
-                            for l in self.image_paths],
-             "relative_segmentation_path_": [l.replace(".jpg", ".png")
-                                             for l in self.image_paths],
-             "segmentation_path_": [os.path.join(self.data_root, "annotations",
-                                                 l.replace(".jpg", ".png"))
-                                    for l in self.image_paths],
-             "scene_category": [self.scene_categories[l.split("/")[1].replace(".jpg", "")]
-                                for l in self.image_paths],
-         }
-
-         size = None if size is not None and size<=0 else size
-         self.size = size
-         if crop_size is None:
-             self.crop_size = size if size is not None else None
-         else:
-             self.crop_size = crop_size
-         if self.size is not None:
-             self.interpolation = interpolation
-             self.interpolation = {
-                 "nearest": cv2.INTER_NEAREST,
-                 "bilinear": cv2.INTER_LINEAR,
-                 "bicubic": cv2.INTER_CUBIC,
-                 "area": cv2.INTER_AREA,
-                 "lanczos": cv2.INTER_LANCZOS4}[self.interpolation]
-             self.image_rescaler = albumentations.SmallestMaxSize(max_size=self.size,
-                                                                  interpolation=self.interpolation)
-             self.segmentation_rescaler = albumentations.SmallestMaxSize(max_size=self.size,
-                                                                         interpolation=cv2.INTER_NEAREST)
-
-         if crop_size is not None:
-             self.center_crop = not random_crop
-             if self.center_crop:
-                 self.cropper = albumentations.CenterCrop(height=self.crop_size, width=self.crop_size)
-             else:
-                 self.cropper = albumentations.RandomCrop(height=self.crop_size, width=self.crop_size)
-             self.preprocessor = self.cropper
-
-     def __len__(self):
-         return self._length
-
-     def __getitem__(self, i):
-         example = dict((k, self.labels[k][i]) for k in self.labels)
-         image = Image.open(example["file_path_"])
-         if not image.mode == "RGB":
-             image = image.convert("RGB")
-         image = np.array(image).astype(np.uint8)
-         if self.size is not None:
-             image = self.image_rescaler(image=image)["image"]
-         segmentation = Image.open(example["segmentation_path_"])
-         segmentation = np.array(segmentation).astype(np.uint8)
-         if self.size is not None:
-             segmentation = self.segmentation_rescaler(image=segmentation)["image"]
-         if self.size is not None:
-             processed = self.preprocessor(image=image, mask=segmentation)
-         else:
-             processed = {"image": image, "mask": segmentation}
-         example["image"] = (processed["image"]/127.5 - 1.0).astype(np.float32)
-         segmentation = processed["mask"]
-         onehot = np.eye(self.n_labels)[segmentation]
-         example["segmentation"] = onehot
-         return example
-
-
- class ADE20kTrain(ADE20kBase):
-     # default to random_crop=True
-     def __init__(self, config=None, size=None, random_crop=True, interpolation="bicubic", crop_size=None):
-         super().__init__(config=config, size=size, random_crop=random_crop,
-                          interpolation=interpolation, crop_size=crop_size)
-
-     def get_split(self):
-         return "train"
-
-
- class ADE20kValidation(ADE20kBase):
-     def get_split(self):
-         return "validation"
-
-
- if __name__ == "__main__":
-     dset = ADE20kValidation()
-     ex = dset[0]
-     for k in ["image", "scene_category", "segmentation"]:
-         print(type(ex[k]))
-         try:
-             print(ex[k].shape)
-         except:
-             print(ex[k])
spaces/BernardoOlisan/vqganclip/taming-transformers/taming/util.py DELETED
@@ -1,157 +0,0 @@
- import os, hashlib
- import requests
- from tqdm import tqdm
-
- URL_MAP = {
-     "vgg_lpips": "https://heibox.uni-heidelberg.de/f/607503859c864bc1b30b/?dl=1"
- }
-
- CKPT_MAP = {
-     "vgg_lpips": "vgg.pth"
- }
-
- MD5_MAP = {
-     "vgg_lpips": "d507d7349b931f0638a25a48a722f98a"
- }
-
-
- def download(url, local_path, chunk_size=1024):
-     os.makedirs(os.path.split(local_path)[0], exist_ok=True)
-     with requests.get(url, stream=True) as r:
-         total_size = int(r.headers.get("content-length", 0))
-         with tqdm(total=total_size, unit="B", unit_scale=True) as pbar:
-             with open(local_path, "wb") as f:
-                 for data in r.iter_content(chunk_size=chunk_size):
-                     if data:
-                         f.write(data)
-                         pbar.update(chunk_size)
-
-
- def md5_hash(path):
-     with open(path, "rb") as f:
-         content = f.read()
-     return hashlib.md5(content).hexdigest()
-
-
- def get_ckpt_path(name, root, check=False):
-     assert name in URL_MAP
-     path = os.path.join(root, CKPT_MAP[name])
-     if not os.path.exists(path) or (check and not md5_hash(path) == MD5_MAP[name]):
-         print("Downloading {} model from {} to {}".format(name, URL_MAP[name], path))
-         download(URL_MAP[name], path)
-         md5 = md5_hash(path)
-         assert md5 == MD5_MAP[name], md5
-     return path
-
-
- class KeyNotFoundError(Exception):
-     def __init__(self, cause, keys=None, visited=None):
-         self.cause = cause
-         self.keys = keys
-         self.visited = visited
-         messages = list()
-         if keys is not None:
-             messages.append("Key not found: {}".format(keys))
-         if visited is not None:
-             messages.append("Visited: {}".format(visited))
-         messages.append("Cause:\n{}".format(cause))
-         message = "\n".join(messages)
-         super().__init__(message)
-
-
- def retrieve(
-     list_or_dict, key, splitval="/", default=None, expand=True, pass_success=False
- ):
-     """Given a nested list or dict return the desired value at key expanding
-     callable nodes if necessary and :attr:`expand` is ``True``. The expansion
-     is done in-place.
-
-     Parameters
-     ----------
-         list_or_dict : list or dict
-             Possibly nested list or dictionary.
-         key : str
-             key/to/value, path like string describing all keys necessary to
-             consider to get to the desired value. List indices can also be
-             passed here.
-         splitval : str
-             String that defines the delimiter between keys of the
-             different depth levels in `key`.
-         default : obj
-             Value returned if :attr:`key` is not found.
-         expand : bool
-             Whether to expand callable nodes on the path or not.
-
-     Returns
-     -------
-         The desired value or if :attr:`default` is not ``None`` and the
-         :attr:`key` is not found returns ``default``.
-
-     Raises
-     ------
-         Exception if ``key`` not in ``list_or_dict`` and :attr:`default` is
-         ``None``.
-     """
-
-     keys = key.split(splitval)
-
-     success = True
-     try:
-         visited = []
-         parent = None
-         last_key = None
-         for key in keys:
-             if callable(list_or_dict):
-                 if not expand:
-                     raise KeyNotFoundError(
-                         ValueError(
-                             "Trying to get past callable node with expand=False."
-                         ),
-                         keys=keys,
-                         visited=visited,
-                     )
-                 list_or_dict = list_or_dict()
-                 parent[last_key] = list_or_dict
-
-             last_key = key
-             parent = list_or_dict
-
-             try:
-                 if isinstance(list_or_dict, dict):
-                     list_or_dict = list_or_dict[key]
-                 else:
-                     list_or_dict = list_or_dict[int(key)]
-             except (KeyError, IndexError, ValueError) as e:
-                 raise KeyNotFoundError(e, keys=keys, visited=visited)
-
-             visited += [key]
-         # final expansion of retrieved value
-         if expand and callable(list_or_dict):
-             list_or_dict = list_or_dict()
-             parent[last_key] = list_or_dict
-     except KeyNotFoundError as e:
-         if default is None:
-             raise e
-         else:
-             list_or_dict = default
-             success = False
-
-     if not pass_success:
-         return list_or_dict
-     else:
-         return list_or_dict, success
-
-
- if __name__ == "__main__":
-     config = {"keya": "a",
-               "keyb": "b",
-               "keyc":
-                 {"cc1": 1,
-                  "cc2": 2,
-                 }
-               }
-     from omegaconf import OmegaConf
-     config = OmegaConf.create(config)
-     print(config)
-     retrieve(config, "keya")
-
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/subprocess.py DELETED
@@ -1,260 +0,0 @@
- import logging
- import os
- import shlex
- import subprocess
- from typing import (
-     TYPE_CHECKING,
-     Any,
-     Callable,
-     Iterable,
-     List,
-     Mapping,
-     Optional,
-     Union,
- )
-
- from pip._vendor.rich.markup import escape
-
- from pip._internal.cli.spinners import SpinnerInterface, open_spinner
- from pip._internal.exceptions import InstallationSubprocessError
- from pip._internal.utils.logging import VERBOSE, subprocess_logger
- from pip._internal.utils.misc import HiddenText
-
- if TYPE_CHECKING:
-     # Literal was introduced in Python 3.8.
-     #
-     # TODO: Remove `if TYPE_CHECKING` when dropping support for Python 3.7.
-     from typing import Literal
-
- CommandArgs = List[Union[str, HiddenText]]
-
-
- def make_command(*args: Union[str, HiddenText, CommandArgs]) -> CommandArgs:
-     """
-     Create a CommandArgs object.
-     """
-     command_args: CommandArgs = []
-     for arg in args:
-         # Check for list instead of CommandArgs since CommandArgs is
-         # only known during type-checking.
-         if isinstance(arg, list):
-             command_args.extend(arg)
-         else:
-             # Otherwise, arg is str or HiddenText.
-             command_args.append(arg)
-
-     return command_args
-
-
- def format_command_args(args: Union[List[str], CommandArgs]) -> str:
-     """
-     Format command arguments for display.
-     """
-     # For HiddenText arguments, display the redacted form by calling str().
-     # Also, we don't apply str() to arguments that aren't HiddenText since
-     # this can trigger a UnicodeDecodeError in Python 2 if the argument
-     # has type unicode and includes a non-ascii character. (The type
-     # checker doesn't ensure the annotations are correct in all cases.)
-     return " ".join(
-         shlex.quote(str(arg)) if isinstance(arg, HiddenText) else shlex.quote(arg)
-         for arg in args
-     )
-
-
- def reveal_command_args(args: Union[List[str], CommandArgs]) -> List[str]:
-     """
-     Return the arguments in their raw, unredacted form.
-     """
-     return [arg.secret if isinstance(arg, HiddenText) else arg for arg in args]
-
-
- def call_subprocess(
-     cmd: Union[List[str], CommandArgs],
-     show_stdout: bool = False,
-     cwd: Optional[str] = None,
-     on_returncode: 'Literal["raise", "warn", "ignore"]' = "raise",
-     extra_ok_returncodes: Optional[Iterable[int]] = None,
-     extra_environ: Optional[Mapping[str, Any]] = None,
-     unset_environ: Optional[Iterable[str]] = None,
-     spinner: Optional[SpinnerInterface] = None,
-     log_failed_cmd: Optional[bool] = True,
-     stdout_only: Optional[bool] = False,
-     *,
-     command_desc: str,
- ) -> str:
-     """
-     Args:
-       show_stdout: if true, use INFO to log the subprocess's stderr and
-         stdout streams. Otherwise, use DEBUG. Defaults to False.
-       extra_ok_returncodes: an iterable of integer return codes that are
-         acceptable, in addition to 0. Defaults to None, which means [].
-       unset_environ: an iterable of environment variable names to unset
-         prior to calling subprocess.Popen().
-       log_failed_cmd: if false, failed commands are not logged, only raised.
-       stdout_only: if true, return only stdout, else return both. When true,
-         logging of both stdout and stderr occurs when the subprocess has
-         terminated, else logging occurs as subprocess output is produced.
-     """
-     if extra_ok_returncodes is None:
-         extra_ok_returncodes = []
-     if unset_environ is None:
-         unset_environ = []
-     # Most places in pip use show_stdout=False. What this means is--
-     #
-     # - We connect the child's output (combined stderr and stdout) to a
-     #   single pipe, which we read.
-     # - We log this output to stderr at DEBUG level as it is received.
-     # - If DEBUG logging isn't enabled (e.g. if --verbose logging wasn't
-     #   requested), then we show a spinner so the user can still see the
-     #   subprocess is in progress.
-     # - If the subprocess exits with an error, we log the output to stderr
-     #   at ERROR level if it hasn't already been displayed to the console
-     #   (e.g. if --verbose logging wasn't enabled).  This way we don't log
-     #   the output to the console twice.
-     #
-     # If show_stdout=True, then the above is still done, but with DEBUG
-     # replaced by INFO.
-     if show_stdout:
-         # Then log the subprocess output at INFO level.
-         log_subprocess: Callable[..., None] = subprocess_logger.info
-         used_level = logging.INFO
-     else:
-         # Then log the subprocess output using VERBOSE. This also ensures
-         # it will be logged to the log file (aka user_log), if enabled.
-         log_subprocess = subprocess_logger.verbose
-         used_level = VERBOSE
-
-     # Whether the subprocess will be visible in the console.
-     showing_subprocess = subprocess_logger.getEffectiveLevel() <= used_level
-
-     # Only use the spinner if we're not showing the subprocess output
-     # and we have a spinner.
-     use_spinner = not showing_subprocess and spinner is not None
-
-     log_subprocess("Running command %s", command_desc)
-     env = os.environ.copy()
-     if extra_environ:
-         env.update(extra_environ)
-     for name in unset_environ:
-         env.pop(name, None)
-     try:
-         proc = subprocess.Popen(
-             # Convert HiddenText objects to the underlying str.
-             reveal_command_args(cmd),
-             stdin=subprocess.PIPE,
-             stdout=subprocess.PIPE,
-             stderr=subprocess.STDOUT if not stdout_only else subprocess.PIPE,
-             cwd=cwd,
-             env=env,
-             errors="backslashreplace",
-         )
-     except Exception as exc:
-         if log_failed_cmd:
-             subprocess_logger.critical(
-                 "Error %s while executing command %s",
-                 exc,
-                 command_desc,
-             )
-         raise
-     all_output = []
-     if not stdout_only:
-         assert proc.stdout
-         assert proc.stdin
-         proc.stdin.close()
-         # In this mode, stdout and stderr are in the same pipe.
-         while True:
-             line: str = proc.stdout.readline()
-             if not line:
-                 break
-             line = line.rstrip()
-             all_output.append(line + "\n")
-
-             # Show the line immediately.
-             log_subprocess(line)
-             # Update the spinner.
-             if use_spinner:
-                 assert spinner
-                 spinner.spin()
-         try:
-             proc.wait()
-         finally:
-             if proc.stdout:
-                 proc.stdout.close()
-         output = "".join(all_output)
-     else:
-         # In this mode, stdout and stderr are in different pipes.
-         # We must use communicate() which is the only safe way to read both.
-         out, err = proc.communicate()
-         # log line by line to preserve pip log indenting
-         for out_line in out.splitlines():
-             log_subprocess(out_line)
-         all_output.append(out)
-         for err_line in err.splitlines():
-             log_subprocess(err_line)
-         all_output.append(err)
-         output = out
-
-     proc_had_error = proc.returncode and proc.returncode not in extra_ok_returncodes
-     if use_spinner:
-         assert spinner
-         if proc_had_error:
-             spinner.finish("error")
-         else:
-             spinner.finish("done")
-     if proc_had_error:
-         if on_returncode == "raise":
-             error = InstallationSubprocessError(
-                 command_description=command_desc,
-                 exit_code=proc.returncode,
-                 output_lines=all_output if not showing_subprocess else None,
-             )
-             if log_failed_cmd:
-                 subprocess_logger.error("[present-rich] %s", error)
-                 subprocess_logger.verbose(
-                     "[bold magenta]full command[/]: [blue]%s[/]",
-                     escape(format_command_args(cmd)),
-                     extra={"markup": True},
-                 )
-                 subprocess_logger.verbose(
-                     "[bold magenta]cwd[/]: %s",
-                     escape(cwd or "[inherit]"),
-                     extra={"markup": True},
-                 )
-
-             raise error
-         elif on_returncode == "warn":
-             subprocess_logger.warning(
-                 'Command "%s" had error code %s in %s',
-                 command_desc,
-                 proc.returncode,
-                 cwd,
-             )
-         elif on_returncode == "ignore":
-             pass
-         else:
-             raise ValueError(f"Invalid value: on_returncode={on_returncode!r}")
-     return output
-
-
- def runner_with_spinner_message(message: str) -> Callable[..., None]:
-     """Provide a subprocess_runner that shows a spinner message.
-
-     Intended for use with for BuildBackendHookCaller. Thus, the runner has
-     an API that matches what's expected by BuildBackendHookCaller.subprocess_runner.
-     """
-
-     def runner(
-         cmd: List[str],
-         cwd: Optional[str] = None,
-         extra_environ: Optional[Mapping[str, Any]] = None,
-     ) -> None:
-         with open_spinner(message) as spinner:
-             call_subprocess(
-                 cmd,
-                 command_desc=message,
-                 cwd=cwd,
-                 extra_environ=extra_environ,
-                 spinner=spinner,
-             )
-
-     return runner
spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/importlib_resources/readers.py DELETED
@@ -1,122 +0,0 @@
- import collections
- import pathlib
- import operator
-
- from . import abc
-
- from ._itertools import unique_everseen
- from ._compat import ZipPath
-
-
- def remove_duplicates(items):
-     return iter(collections.OrderedDict.fromkeys(items))
-
-
- class FileReader(abc.TraversableResources):
-     def __init__(self, loader):
-         self.path = pathlib.Path(loader.path).parent
-
-     def resource_path(self, resource):
-         """
-         Return the file system path to prevent
-         `resources.path()` from creating a temporary
-         copy.
-         """
-         return str(self.path.joinpath(resource))
-
-     def files(self):
-         return self.path
-
-
- class ZipReader(abc.TraversableResources):
-     def __init__(self, loader, module):
-         _, _, name = module.rpartition('.')
-         self.prefix = loader.prefix.replace('\\', '/') + name + '/'
-         self.archive = loader.archive
-
-     def open_resource(self, resource):
-         try:
-             return super().open_resource(resource)
-         except KeyError as exc:
-             raise FileNotFoundError(exc.args[0])
-
-     def is_resource(self, path):
-         # workaround for `zipfile.Path.is_file` returning true
-         # for non-existent paths.
-         target = self.files().joinpath(path)
-         return target.is_file() and target.exists()
-
-     def files(self):
-         return ZipPath(self.archive, self.prefix)
-
-
- class MultiplexedPath(abc.Traversable):
-     """
-     Given a series of Traversable objects, implement a merged
-     version of the interface across all objects. Useful for
-     namespace packages which may be multihomed at a single
-     name.
-     """
-
-     def __init__(self, *paths):
-         self._paths = list(map(pathlib.Path, remove_duplicates(paths)))
-         if not self._paths:
-             message = 'MultiplexedPath must contain at least one path'
-             raise FileNotFoundError(message)
-         if not all(path.is_dir() for path in self._paths):
-             raise NotADirectoryError('MultiplexedPath only supports directories')
-
-     def iterdir(self):
-         files = (file for path in self._paths for file in path.iterdir())
-         return unique_everseen(files, key=operator.attrgetter('name'))
-
-     def read_bytes(self):
-         raise FileNotFoundError(f'{self} is not a file')
-
-     def read_text(self, *args, **kwargs):
-         raise FileNotFoundError(f'{self} is not a file')
-
-     def is_dir(self):
-         return True
-
-     def is_file(self):
-         return False
-
-     def joinpath(self, child):
-         # first try to find child in current paths
-         for file in self.iterdir():
-             if file.name == child:
-                 return file
-         # if it does not exist, construct it with the first path
-         return self._paths[0] / child
-
-     __truediv__ = joinpath
-
-     def open(self, *args, **kwargs):
-         raise FileNotFoundError(f'{self} is not a file')
-
-     @property
-     def name(self):
-         return self._paths[0].name
-
-     def __repr__(self):
-         paths = ', '.join(f"'{path}'" for path in self._paths)
-         return f'MultiplexedPath({paths})'
-
-
- class NamespaceReader(abc.TraversableResources):
-     def __init__(self, namespace_path):
-         if 'NamespacePath' not in str(namespace_path):
-             raise ValueError('Invalid path')
-         self.path = MultiplexedPath(*list(namespace_path))
-
-     def resource_path(self, resource):
-         """
-         Return the file system path to prevent
-         `resources.path()` from creating a temporary
-         copy.
-         """
-         return str(self.path.joinpath(resource))
-
-     def files(self):
-         return self.path
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/importlib_resources/_adapters.py DELETED
@@ -1,170 +0,0 @@
- from contextlib import suppress
- from io import TextIOWrapper
-
- from . import abc
-
-
- class SpecLoaderAdapter:
-     """
-     Adapt a package spec to adapt the underlying loader.
-     """
-
-     def __init__(self, spec, adapter=lambda spec: spec.loader):
-         self.spec = spec
-         self.loader = adapter(spec)
-
-     def __getattr__(self, name):
-         return getattr(self.spec, name)
-
-
- class TraversableResourcesLoader:
-     """
-     Adapt a loader to provide TraversableResources.
-     """
-
-     def __init__(self, spec):
-         self.spec = spec
-
-     def get_resource_reader(self, name):
-         return CompatibilityFiles(self.spec)._native()
-
-
- def _io_wrapper(file, mode='r', *args, **kwargs):
-     if mode == 'r':
-         return TextIOWrapper(file, *args, **kwargs)
-     elif mode == 'rb':
-         return file
-     raise ValueError(
-         "Invalid mode value '{}', only 'r' and 'rb' are supported".format(mode)
-     )
-
-
- class CompatibilityFiles:
-     """
-     Adapter for an existing or non-existent resource reader
-     to provide a compatibility .files().
-     """
-
-     class SpecPath(abc.Traversable):
-         """
-         Path tied to a module spec.
-         Can be read and exposes the resource reader children.
-         """
-
-         def __init__(self, spec, reader):
-             self._spec = spec
-             self._reader = reader
-
-         def iterdir(self):
-             if not self._reader:
-                 return iter(())
-             return iter(
-                 CompatibilityFiles.ChildPath(self._reader, path)
-                 for path in self._reader.contents()
-             )
-
-         def is_file(self):
-             return False
-
-         is_dir = is_file
-
-         def joinpath(self, other):
-             if not self._reader:
-                 return CompatibilityFiles.OrphanPath(other)
-             return CompatibilityFiles.ChildPath(self._reader, other)
-
-         @property
-         def name(self):
-             return self._spec.name
-
-         def open(self, mode='r', *args, **kwargs):
-             return _io_wrapper(self._reader.open_resource(None), mode, *args, **kwargs)
-
-     class ChildPath(abc.Traversable):
-         """
-         Path tied to a resource reader child.
-         Can be read but doesn't expose any meaningful children.
-         """
-
-         def __init__(self, reader, name):
-             self._reader = reader
-             self._name = name
-
-         def iterdir(self):
-             return iter(())
-
-         def is_file(self):
-             return self._reader.is_resource(self.name)
-
-         def is_dir(self):
-             return not self.is_file()
-
-         def joinpath(self, other):
-             return CompatibilityFiles.OrphanPath(self.name, other)
-
-         @property
-         def name(self):
-             return self._name
-
-         def open(self, mode='r', *args, **kwargs):
-             return _io_wrapper(
-                 self._reader.open_resource(self.name), mode, *args, **kwargs
-             )
-
-     class OrphanPath(abc.Traversable):
-         """
-         Orphan path, not tied to a module spec or resource reader.
-         Can't be read and doesn't expose any meaningful children.
-         """
-
-         def __init__(self, *path_parts):
-             if len(path_parts) < 1:
-                 raise ValueError('Need at least one path part to construct a path')
-             self._path = path_parts
-
-         def iterdir(self):
-             return iter(())
-
-         def is_file(self):
-             return False
-
-         is_dir = is_file
-
-         def joinpath(self, other):
-             return CompatibilityFiles.OrphanPath(*self._path, other)
-
-         @property
-         def name(self):
-             return self._path[-1]
-
-         def open(self, mode='r', *args, **kwargs):
-             raise FileNotFoundError("Can't open orphan path")
-
-     def __init__(self, spec):
-         self.spec = spec
-
-     @property
-     def _reader(self):
-         with suppress(AttributeError):
-             return self.spec.loader.get_resource_reader(self.spec.name)
-
-     def _native(self):
-         """
-         Return the native reader if it supports files().
-         """
-         reader = self._reader
-         return reader if hasattr(reader, 'files') else self
-
-     def __getattr__(self, attr):
-         return getattr(self._reader, attr)
-
-     def files(self):
-         return CompatibilityFiles.SpecPath(self.spec, self._reader)
-
-
- def wrap_spec(package):
-     """
-     Construct a package spec with traversable compatibility
-     on the spec/loader/reader.
-     """
-     return SpecLoaderAdapter(package.__spec__, TraversableResourcesLoader)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/BreetheRun/stabilityai-stable-diffusion-xl-base-1.0/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: Stabilityai Stable Diffusion Xl Base 1.0
- emoji: 💻
- colorFrom: pink
- colorTo: purple
- sdk: gradio
- sdk_version: 3.47.1
- app_file: app.py
- pinned: false
- license: unknown
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

spaces/CVPR/LIVE/pybind11/tests/test_builtin_casters.cpp DELETED
@@ -1,192 +0,0 @@
- /*
-     tests/test_builtin_casters.cpp -- Casters available without any additional headers
-
-     Copyright (c) 2017 Wenzel Jakob <[email protected]>
-
-     All rights reserved. Use of this source code is governed by a
-     BSD-style license that can be found in the LICENSE file.
- */
-
- #include "pybind11_tests.h"
- #include <pybind11/complex.h>
-
- #if defined(_MSC_VER)
- #  pragma warning(push)
- #  pragma warning(disable: 4127) // warning C4127: Conditional expression is constant
- #endif
-
- TEST_SUBMODULE(builtin_casters, m) {
-     // test_simple_string
-     m.def("string_roundtrip", [](const char *s) { return s; });
-
-     // test_unicode_conversion
-     // Some test characters in utf16 and utf32 encodings. The last one (the 𝐀) contains a null byte
-     char32_t a32 = 0x61 /*a*/, z32 = 0x7a /*z*/, ib32 = 0x203d /*‽*/, cake32 = 0x1f382 /*🎂*/, mathbfA32 = 0x1d400 /*𝐀*/;
-     char16_t b16 = 0x62 /*b*/, z16 = 0x7a, ib16 = 0x203d, cake16_1 = 0xd83c, cake16_2 = 0xdf82, mathbfA16_1 = 0xd835, mathbfA16_2 = 0xdc00;
-     std::wstring wstr;
-     wstr.push_back(0x61); // a
-     wstr.push_back(0x2e18); // ⸘
-     if (sizeof(wchar_t) == 2) { wstr.push_back(mathbfA16_1); wstr.push_back(mathbfA16_2); } // 𝐀, utf16
-     else { wstr.push_back((wchar_t) mathbfA32); } // 𝐀, utf32
-     wstr.push_back(0x7a); // z
-
-     m.def("good_utf8_string", []() { return std::string((const char*)u8"Say utf8\u203d \U0001f382 \U0001d400"); }); // Say utf8‽ 🎂 𝐀
-     m.def("good_utf16_string", [=]() { return std::u16string({ b16, ib16, cake16_1, cake16_2, mathbfA16_1, mathbfA16_2, z16 }); }); // b‽🎂𝐀z
-     m.def("good_utf32_string", [=]() { return std::u32string({ a32, mathbfA32, cake32, ib32, z32 }); }); // a𝐀🎂‽z
-     m.def("good_wchar_string", [=]() { return wstr; }); // a‽𝐀z
-     m.def("bad_utf8_string", []() { return std::string("abc\xd0" "def"); });
-     m.def("bad_utf16_string", [=]() { return std::u16string({ b16, char16_t(0xd800), z16 }); });
-     // Under Python 2.7, invalid unicode UTF-32 characters don't appear to trigger UnicodeDecodeError
-     if (PY_MAJOR_VERSION >= 3)
-         m.def("bad_utf32_string", [=]() { return std::u32string({ a32, char32_t(0xd800), z32 }); });
-     if (PY_MAJOR_VERSION >= 3 || sizeof(wchar_t) == 2)
-         m.def("bad_wchar_string", [=]() { return std::wstring({ wchar_t(0x61), wchar_t(0xd800) }); });
-     m.def("u8_Z", []() -> char { return 'Z'; });
-     m.def("u8_eacute", []() -> char { return '\xe9'; });
-     m.def("u16_ibang", [=]() -> char16_t { return ib16; });
-     m.def("u32_mathbfA", [=]() -> char32_t { return mathbfA32; });
-     m.def("wchar_heart", []() -> wchar_t { return 0x2665; });
-
-     // test_single_char_arguments
-     m.attr("wchar_size") = py::cast(sizeof(wchar_t));
-     m.def("ord_char", [](char c) -> int { return static_cast<unsigned char>(c); });
-     m.def("ord_char_lv", [](char &c) -> int { return static_cast<unsigned char>(c); });
-     m.def("ord_char16", [](char16_t c) -> uint16_t { return c; });
-     m.def("ord_char16_lv", [](char16_t &c) -> uint16_t { return c; });
-     m.def("ord_char32", [](char32_t c) -> uint32_t { return c; });
-     m.def("ord_wchar", [](wchar_t c) -> int { return c; });
-
-     // test_bytes_to_string
-     m.def("strlen", [](char *s) { return strlen(s); });
-     m.def("string_length", [](std::string s) { return s.length(); });
-
- #ifdef PYBIND11_HAS_U8STRING
-     m.attr("has_u8string") = true;
-     m.def("good_utf8_u8string", []() { return std::u8string(u8"Say utf8\u203d \U0001f382 \U0001d400"); }); // Say utf8‽ 🎂 𝐀
-     m.def("bad_utf8_u8string", []() { return std::u8string((const char8_t*)"abc\xd0" "def"); });
-
-     m.def("u8_char8_Z", []() -> char8_t { return u8'Z'; });
-
-     // test_single_char_arguments
-     m.def("ord_char8", [](char8_t c) -> int { return static_cast<unsigned char>(c); });
-     m.def("ord_char8_lv", [](char8_t &c) -> int { return static_cast<unsigned char>(c); });
- #endif
-
-     // test_string_view
- #ifdef PYBIND11_HAS_STRING_VIEW
-     m.attr("has_string_view") = true;
-     m.def("string_view_print", [](std::string_view s) { py::print(s, s.size()); });
-     m.def("string_view16_print", [](std::u16string_view s) { py::print(s, s.size()); });
-     m.def("string_view32_print", [](std::u32string_view s) { py::print(s, s.size()); });
-     m.def("string_view_chars", [](std::string_view s) { py::list l; for (auto c : s) l.append((std::uint8_t) c); return l; });
-     m.def("string_view16_chars", [](std::u16string_view s) { py::list l; for (auto c : s) l.append((int) c); return l; });
-     m.def("string_view32_chars", [](std::u32string_view s) { py::list l; for (auto c : s) l.append((int) c); return l; });
-     m.def("string_view_return", []() { return std::string_view((const char*)u8"utf8 secret \U0001f382"); });
-     m.def("string_view16_return", []() { return std::u16string_view(u"utf16 secret \U0001f382"); });
-     m.def("string_view32_return", []() { return std::u32string_view(U"utf32 secret \U0001f382"); });
-
- # ifdef PYBIND11_HAS_U8STRING
-     m.def("string_view8_print", [](std::u8string_view s) { py::print(s, s.size()); });
-     m.def("string_view8_chars", [](std::u8string_view s) { py::list l; for (auto c : s) l.append((std::uint8_t) c); return l; });
-     m.def("string_view8_return", []() { return std::u8string_view(u8"utf8 secret \U0001f382"); });
- # endif
- #endif
-
-     // test_integer_casting
-     m.def("i32_str", [](std::int32_t v) { return std::to_string(v); });
-     m.def("u32_str", [](std::uint32_t v) { return std::to_string(v); });
-     m.def("i64_str", [](std::int64_t v) { return std::to_string(v); });
-     m.def("u64_str", [](std::uint64_t v) { return std::to_string(v); });
-
-     // test_tuple
-     m.def("pair_passthrough", [](std::pair<bool, std::string> input) {
-         return std::make_pair(input.second, input.first);
-     }, "Return a pair in reversed order");
-     m.def("tuple_passthrough", [](std::tuple<bool, std::string, int> input) {
-         return std::make_tuple(std::get<2>(input), std::get<1>(input), std::get<0>(input));
-     }, "Return a triple in reversed order");
-     m.def("empty_tuple", []() { return std::tuple<>(); });
-     static std::pair<RValueCaster, RValueCaster> lvpair;
-     static std::tuple<RValueCaster, RValueCaster, RValueCaster> lvtuple;
-     static std::pair<RValueCaster, std::tuple<RValueCaster, std::pair<RValueCaster, RValueCaster>>> lvnested;
-     m.def("rvalue_pair", []() { return std::make_pair(RValueCaster{}, RValueCaster{}); });
-     m.def("lvalue_pair", []() -> const decltype(lvpair) & { return lvpair; });
-     m.def("rvalue_tuple", []() { return std::make_tuple(RValueCaster{}, RValueCaster{}, RValueCaster{}); });
-     m.def("lvalue_tuple", []() -> const decltype(lvtuple) & { return lvtuple; });
-     m.def("rvalue_nested", []() {
-         return std::make_pair(RValueCaster{}, std::make_tuple(RValueCaster{}, std::make_pair(RValueCaster{}, RValueCaster{}))); });
-     m.def("lvalue_nested", []() -> const decltype(lvnested) & { return lvnested; });
-
-     static std::pair<int, std::string> int_string_pair{2, "items"};
-     m.def("int_string_pair", []() { return &int_string_pair; });
-
-     // test_builtins_cast_return_none
-     m.def("return_none_string", []() -> std::string * { return nullptr; });
-     m.def("return_none_char", []() -> const char * { return nullptr; });
-     m.def("return_none_bool", []() -> bool * { return nullptr; });
-     m.def("return_none_int", []() -> int * { return nullptr; });
-     m.def("return_none_float", []() -> float * { return nullptr; });
-     m.def("return_none_pair", []() -> std::pair<int,int> * { return nullptr; });
-
-     // test_none_deferred
-     m.def("defer_none_cstring", [](char *) { return false; });
-     m.def("defer_none_cstring", [](py::none) { return true; });
-     m.def("defer_none_custom", [](UserType *) { return false; });
-     m.def("defer_none_custom", [](py::none) { return true; });
-     m.def("nodefer_none_void", [](void *) { return true; });
-     m.def("nodefer_none_void", [](py::none) { return false; });
-
-     // test_void_caster
-     m.def("load_nullptr_t", [](std::nullptr_t) {}); // not useful, but it should still compile
-     m.def("cast_nullptr_t", []() { return std::nullptr_t{}; });
-
-     // test_bool_caster
-     m.def("bool_passthrough", [](bool arg) { return arg; });
-     m.def("bool_passthrough_noconvert", [](bool arg) { return arg; }, py::arg().noconvert());
-
-     // test_reference_wrapper
-     m.def("refwrap_builtin", [](std::reference_wrapper<int> p) { return 10 * p.get(); });
-     m.def("refwrap_usertype", [](std::reference_wrapper<UserType> p) { return p.get().value(); });
-     // Not currently supported (std::pair caster has return-by-value cast operator);
-     // triggers static_assert failure.
-     //m.def("refwrap_pair", [](std::reference_wrapper<std::pair<int, int>>) { });
-
-     m.def("refwrap_list", [](bool copy) {
-         static IncType x1(1), x2(2);
-         py::list l;
-         for (auto &f : {std::ref(x1), std::ref(x2)}) {
-             l.append(py::cast(f, copy ? py::return_value_policy::copy
-                                       : py::return_value_policy::reference));
-         }
-         return l;
-     }, "copy"_a);
-
-     m.def("refwrap_iiw", [](const IncType &w) { return w.value(); });
-     m.def("refwrap_call_iiw", [](IncType &w, py::function f) {
-         py::list l;
-         l.append(f(std::ref(w)));
-         l.append(f(std::cref(w)));
-         IncType x(w.value());
-         l.append(f(std::ref(x)));
-         IncType y(w.value());
-         auto r3 = std::ref(y);
-         l.append(f(r3));
-         return l;
-     });
-
-     // test_complex
-     m.def("complex_cast", [](float x) { return "{}"_s.format(x); });
-     m.def("complex_cast", [](std::complex<float> x) { return "({}, {})"_s.format(x.real(), x.imag()); });
-
-     // test int vs. long (Python 2)
-     m.def("int_cast", []() {return (int) 42;});
-     m.def("long_cast", []() {return (long) 42;});
-     m.def("longlong_cast", []() {return ULLONG_MAX;});
-
-     /// test void* cast operator
-     m.def("test_void_caster", []() -> bool {
-         void *v = (void *) 0xabcd;
-         py::object o = py::cast(v);
-         return py::cast<void *>(o) == v;
-     });
- }

spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/tabulate.h DELETED
@@ -1,22 +0,0 @@
- /*
-  * Copyright 2008-2013 NVIDIA Corporation
-  *
-  * Licensed under the Apache License, Version 2.0 (the "License");
-  * you may not use this file except in compliance with the License.
-  * You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
- #pragma once
-
- #include <thrust/detail/config.h>
-
- // this system has no special version of this algorithm
-

spaces/CVPR/monoscene_lite/monoscene/.ipynb_checkpoints/config-checkpoint.py DELETED
@@ -1,34 +0,0 @@
- from transformers import PretrainedConfig
- from typing import List
-
-
- class MonoSceneConfig(PretrainedConfig):
-
-     def __init__(
-         self,
-         block_type="bottleneck",
-         layers: List[int] = [3, 4, 6, 3],
-         num_classes: int = 1000,
-         input_channels: int = 3,
-         cardinality: int = 1,
-         base_width: int = 64,
-         stem_width: int = 64,
-         stem_type: str = "",
-         avg_down: bool = False,
-         **kwargs,
-     ):
-         self.block_type = block_type
-         self.layers = layers
-         self.num_classes = num_classes
-         self.input_channels = input_channels
-         self.cardinality = cardinality
-         self.base_width = base_width
-         self.stem_width = stem_width
-         self.stem_type = stem_type
-         self.avg_down = avg_down
-         super().__init__(**kwargs)
-
-
-
-
-

spaces/CarlDennis/Lovelive-VITS-JPZH/attentions.py DELETED
@@ -1,300 +0,0 @@
- import math
- import torch
- from torch import nn
- from torch.nn import functional as F
-
- import commons
- from modules import LayerNorm
-
-
- class Encoder(nn.Module):
-   def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
-     super().__init__()
-     self.hidden_channels = hidden_channels
-     self.filter_channels = filter_channels
-     self.n_heads = n_heads
-     self.n_layers = n_layers
-     self.kernel_size = kernel_size
-     self.p_dropout = p_dropout
-     self.window_size = window_size
-
-     self.drop = nn.Dropout(p_dropout)
-     self.attn_layers = nn.ModuleList()
-     self.norm_layers_1 = nn.ModuleList()
-     self.ffn_layers = nn.ModuleList()
-     self.norm_layers_2 = nn.ModuleList()
-     for i in range(self.n_layers):
-       self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
-       self.norm_layers_1.append(LayerNorm(hidden_channels))
-       self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
-       self.norm_layers_2.append(LayerNorm(hidden_channels))
-
-   def forward(self, x, x_mask):
-     attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
-     x = x * x_mask
-     for i in range(self.n_layers):
-       y = self.attn_layers[i](x, x, attn_mask)
-       y = self.drop(y)
-       x = self.norm_layers_1[i](x + y)
-
-       y = self.ffn_layers[i](x, x_mask)
-       y = self.drop(y)
-       x = self.norm_layers_2[i](x + y)
-     x = x * x_mask
-     return x
-
-
- class Decoder(nn.Module):
-   def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
-     super().__init__()
-     self.hidden_channels = hidden_channels
-     self.filter_channels = filter_channels
-     self.n_heads = n_heads
-     self.n_layers = n_layers
-     self.kernel_size = kernel_size
-     self.p_dropout = p_dropout
-     self.proximal_bias = proximal_bias
-     self.proximal_init = proximal_init
-
-     self.drop = nn.Dropout(p_dropout)
-     self.self_attn_layers = nn.ModuleList()
-     self.norm_layers_0 = nn.ModuleList()
-     self.encdec_attn_layers = nn.ModuleList()
-     self.norm_layers_1 = nn.ModuleList()
-     self.ffn_layers = nn.ModuleList()
-     self.norm_layers_2 = nn.ModuleList()
-     for i in range(self.n_layers):
-       self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
-       self.norm_layers_0.append(LayerNorm(hidden_channels))
-       self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
-       self.norm_layers_1.append(LayerNorm(hidden_channels))
-       self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
-       self.norm_layers_2.append(LayerNorm(hidden_channels))
-
-   def forward(self, x, x_mask, h, h_mask):
-     """
-     x: decoder input
-     h: encoder output
-     """
-     self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
-     encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
-     x = x * x_mask
-     for i in range(self.n_layers):
-       y = self.self_attn_layers[i](x, x, self_attn_mask)
-       y = self.drop(y)
-       x = self.norm_layers_0[i](x + y)
-
-       y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
-       y = self.drop(y)
-       x = self.norm_layers_1[i](x + y)
-
-       y = self.ffn_layers[i](x, x_mask)
-       y = self.drop(y)
-       x = self.norm_layers_2[i](x + y)
-     x = x * x_mask
-     return x
-
-
- class MultiHeadAttention(nn.Module):
-   def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
-     super().__init__()
-     assert channels % n_heads == 0
-
-     self.channels = channels
-     self.out_channels = out_channels
-     self.n_heads = n_heads
-     self.p_dropout = p_dropout
-     self.window_size = window_size
-     self.heads_share = heads_share
-     self.block_length = block_length
-     self.proximal_bias = proximal_bias
-     self.proximal_init = proximal_init
-     self.attn = None
-
-     self.k_channels = channels // n_heads
-     self.conv_q = nn.Conv1d(channels, channels, 1)
-     self.conv_k = nn.Conv1d(channels, channels, 1)
-     self.conv_v = nn.Conv1d(channels, channels, 1)
-     self.conv_o = nn.Conv1d(channels, out_channels, 1)
-     self.drop = nn.Dropout(p_dropout)
-
-     if window_size is not None:
-       n_heads_rel = 1 if heads_share else n_heads
-       rel_stddev = self.k_channels**-0.5
-       self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-       self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
-     nn.init.xavier_uniform_(self.conv_q.weight)
-     nn.init.xavier_uniform_(self.conv_k.weight)
-     nn.init.xavier_uniform_(self.conv_v.weight)
-     if proximal_init:
-       with torch.no_grad():
-         self.conv_k.weight.copy_(self.conv_q.weight)
-         self.conv_k.bias.copy_(self.conv_q.bias)
-
-   def forward(self, x, c, attn_mask=None):
-     q = self.conv_q(x)
-     k = self.conv_k(c)
-     v = self.conv_v(c)
-
-     x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
-     x = self.conv_o(x)
-     return x
-
-   def attention(self, query, key, value, mask=None):
-     # reshape [b, d, t] -> [b, n_h, t, d_k]
-     b, d, t_s, t_t = (*key.size(), query.size(2))
-     query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
-     key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-     value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
-     scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
-     if self.window_size is not None:
-       assert t_s == t_t, "Relative attention is only available for self-attention."
-       key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
-       rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
-       scores_local = self._relative_position_to_absolute_position(rel_logits)
-       scores = scores + scores_local
-     if self.proximal_bias:
-       assert t_s == t_t, "Proximal bias is only available for self-attention."
-       scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
-     if mask is not None:
-       scores = scores.masked_fill(mask == 0, -1e4)
-       if self.block_length is not None:
-         assert t_s == t_t, "Local attention is only available for self-attention."
-         block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
-         scores = scores.masked_fill(block_mask == 0, -1e4)
-     p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
-     p_attn = self.drop(p_attn)
-     output = torch.matmul(p_attn, value)
-     if self.window_size is not None:
-       relative_weights = self._absolute_position_to_relative_position(p_attn)
-       value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
-       output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
-     output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
-     return output, p_attn
-
-   def _matmul_with_relative_values(self, x, y):
-     """
-     x: [b, h, l, m]
-     y: [h or 1, m, d]
-     ret: [b, h, l, d]
-     """
-     ret = torch.matmul(x, y.unsqueeze(0))
-     return ret
-
-   def _matmul_with_relative_keys(self, x, y):
-     """
-     x: [b, h, l, d]
-     y: [h or 1, m, d]
-     ret: [b, h, l, m]
-     """
-     ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
-     return ret
-
-   def _get_relative_embeddings(self, relative_embeddings, length):
-     max_relative_position = 2 * self.window_size + 1
-     # Pad first before slice to avoid using cond ops.
-     pad_length = max(length - (self.window_size + 1), 0)
-     slice_start_position = max((self.window_size + 1) - length, 0)
-     slice_end_position = slice_start_position + 2 * length - 1
-     if pad_length > 0:
-       padded_relative_embeddings = F.pad(
-           relative_embeddings,
-           commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
-     else:
-       padded_relative_embeddings = relative_embeddings
-     used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
-     return used_relative_embeddings
-
-   def _relative_position_to_absolute_position(self, x):
-     """
-     x: [b, h, l, 2*l-1]
-     ret: [b, h, l, l]
-     """
-     batch, heads, length, _ = x.size()
-     # Concat columns of pad to shift from relative to absolute indexing.
-     x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
-     # Concat extra elements so to add up to shape (len+1, 2*len-1).
-     x_flat = x.view([batch, heads, length * 2 * length])
-     x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
-     # Reshape and slice out the padded elements.
-     x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
-     return x_final
-
-   def _absolute_position_to_relative_position(self, x):
-     """
-     x: [b, h, l, l]
-     ret: [b, h, l, 2*l-1]
-     """
-     batch, heads, length, _ = x.size()
-     # padd along column
-     x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
-     x_flat = x.view([batch, heads, length**2 + length*(length -1)])
-     # add 0's in the beginning that will skew the elements after reshape
-     x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
-     x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
-     return x_final
-
-   def _attention_bias_proximal(self, length):
-     """Bias for self-attention to encourage attention to close positions.
-     Args:
-       length: an integer scalar.
-     Returns:
-       a Tensor with shape [1, 1, length, length]
-     """
-     r = torch.arange(length, dtype=torch.float32)
-     diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
-     return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
- class FFN(nn.Module):
-   def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
-     super().__init__()
-     self.in_channels = in_channels
-     self.out_channels = out_channels
-     self.filter_channels = filter_channels
-     self.kernel_size = kernel_size
-     self.p_dropout = p_dropout
-     self.activation = activation
-     self.causal = causal
-
-     if causal:
-       self.padding = self._causal_padding
-     else:
-       self.padding = self._same_padding
-
-     self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
-     self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
-     self.drop = nn.Dropout(p_dropout)
-
-   def forward(self, x, x_mask):
-     x = self.conv_1(self.padding(x * x_mask))
-     if self.activation == "gelu":
-       x = x * torch.sigmoid(1.702 * x)
-     else:
-       x = torch.relu(x)
-     x = self.drop(x)
-     x = self.conv_2(self.padding(x * x_mask))
-     return x * x_mask
-
-   def _causal_padding(self, x):
-     if self.kernel_size == 1:
-       return x
-     pad_l = self.kernel_size - 1
-     pad_r = 0
-     padding = [[0, 0], [0, 0], [pad_l, pad_r]]
-     x = F.pad(x, commons.convert_pad_shape(padding))
-     return x
-
-   def _same_padding(self, x):
-     if self.kernel_size == 1:
-       return x
-     pad_l = (self.kernel_size - 1) // 2
-     pad_r = self.kernel_size // 2
-     padding = [[0, 0], [0, 0], [pad_l, pad_r]]
-     x = F.pad(x, commons.convert_pad_shape(padding))
-     return x

spaces/DEEMOSTECH/ChatAvatar/static/js/main.21f66c0f.js DELETED
The diff for this file is too large to render.

spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/BmpImagePlugin.py DELETED
@@ -1,471 +0,0 @@
- #
- # The Python Imaging Library.
- # $Id$
- #
- # BMP file handler
- #
- # Windows (and OS/2) native bitmap storage format.
- #
- # history:
- # 1995-09-01 fl Created
- # 1996-04-30 fl Added save
- # 1997-08-27 fl Fixed save of 1-bit images
- # 1998-03-06 fl Load P images as L where possible
- # 1998-07-03 fl Load P images as 1 where possible
- # 1998-12-29 fl Handle small palettes
- # 2002-12-30 fl Fixed load of 1-bit palette images
- # 2003-04-21 fl Fixed load of 1-bit monochrome images
- # 2003-04-23 fl Added limited support for BI_BITFIELDS compression
- #
- # Copyright (c) 1997-2003 by Secret Labs AB
- # Copyright (c) 1995-2003 by Fredrik Lundh
- #
- # See the README file for information on usage and redistribution.
- #
-
-
- import os
-
- from . import Image, ImageFile, ImagePalette
- from ._binary import i16le as i16
- from ._binary import i32le as i32
- from ._binary import o8
- from ._binary import o16le as o16
- from ._binary import o32le as o32
-
- #
- # --------------------------------------------------------------------
- # Read BMP file
-
- BIT2MODE = {
-     # bits => mode, rawmode
-     1: ("P", "P;1"),
-     4: ("P", "P;4"),
-     8: ("P", "P"),
-     16: ("RGB", "BGR;15"),
-     24: ("RGB", "BGR"),
-     32: ("RGB", "BGRX"),
- }
-
-
- def _accept(prefix):
-     return prefix[:2] == b"BM"
-
-
- def _dib_accept(prefix):
-     return i32(prefix) in [12, 40, 64, 108, 124]
-
-
- # =============================================================================
- # Image plugin for the Windows BMP format.
- # =============================================================================
- class BmpImageFile(ImageFile.ImageFile):
-     """Image plugin for the Windows Bitmap format (BMP)"""
-
-     # ------------------------------------------------------------- Description
-     format_description = "Windows Bitmap"
-     format = "BMP"
-
-     # -------------------------------------------------- BMP Compression values
-     COMPRESSIONS = {"RAW": 0, "RLE8": 1, "RLE4": 2, "BITFIELDS": 3, "JPEG": 4, "PNG": 5}
-     for k, v in COMPRESSIONS.items():
-         vars()[k] = v
-
-     def _bitmap(self, header=0, offset=0):
-         """Read relevant info about the BMP"""
-         read, seek = self.fp.read, self.fp.seek
-         if header:
-             seek(header)
-         # read bmp header size @offset 14 (this is part of the header size)
-         file_info = {"header_size": i32(read(4)), "direction": -1}
-
-         # -------------------- If requested, read header at a specific position
-         # read the rest of the bmp header, without its size
-         header_data = ImageFile._safe_read(self.fp, file_info["header_size"] - 4)
-
-         # -------------------------------------------------- IBM OS/2 Bitmap v1
-         # ----- This format has different offsets because of width/height types
-         if file_info["header_size"] == 12:
-             file_info["width"] = i16(header_data, 0)
-             file_info["height"] = i16(header_data, 2)
-             file_info["planes"] = i16(header_data, 4)
-             file_info["bits"] = i16(header_data, 6)
-             file_info["compression"] = self.RAW
-             file_info["palette_padding"] = 3
-
-         # --------------------------------------------- Windows Bitmap v2 to v5
-         # v3, OS/2 v2, v4, v5
-         elif file_info["header_size"] in (40, 64, 108, 124):
-             file_info["y_flip"] = header_data[7] == 0xFF
-             file_info["direction"] = 1 if file_info["y_flip"] else -1
-             file_info["width"] = i32(header_data, 0)
-             file_info["height"] = (
-                 i32(header_data, 4)
-                 if not file_info["y_flip"]
-                 else 2**32 - i32(header_data, 4)
-             )
-             file_info["planes"] = i16(header_data, 8)
-             file_info["bits"] = i16(header_data, 10)
-             file_info["compression"] = i32(header_data, 12)
-             # byte size of pixel data
-             file_info["data_size"] = i32(header_data, 16)
-             file_info["pixels_per_meter"] = (
-                 i32(header_data, 20),
-                 i32(header_data, 24),
-             )
-             file_info["colors"] = i32(header_data, 28)
-             file_info["palette_padding"] = 4
-             self.info["dpi"] = tuple(x / 39.3701 for x in file_info["pixels_per_meter"])
-             if file_info["compression"] == self.BITFIELDS:
-                 if len(header_data) >= 52:
-                     for idx, mask in enumerate(
-                         ["r_mask", "g_mask", "b_mask", "a_mask"]
-                     ):
-                         file_info[mask] = i32(header_data, 36 + idx * 4)
-                 else:
-                     # 40 byte headers only have the three components in the
-                     # bitfields masks, ref:
-                     # https://msdn.microsoft.com/en-us/library/windows/desktop/dd183376(v=vs.85).aspx
-                     # See also
-                     # https://github.com/python-pillow/Pillow/issues/1293
-                     # There is a 4th component in the RGBQuad, in the alpha
-                     # location, but it is listed as a reserved component,
-                     # and it is not generally an alpha channel
-                     file_info["a_mask"] = 0x0
-                     for mask in ["r_mask", "g_mask", "b_mask"]:
-                         file_info[mask] = i32(read(4))
-                 file_info["rgb_mask"] = (
-                     file_info["r_mask"],
-                     file_info["g_mask"],
-                     file_info["b_mask"],
-                 )
-                 file_info["rgba_mask"] = (
-                     file_info["r_mask"],
-                     file_info["g_mask"],
-                     file_info["b_mask"],
-                     file_info["a_mask"],
-                 )
-         else:
-             msg = f"Unsupported BMP header type ({file_info['header_size']})"
-             raise OSError(msg)
-
-         # ------------------ Special case : header is reported 40, which
-         # ---------------------- is shorter than real size for bpp >= 16
-         self._size = file_info["width"], file_info["height"]
-
-         # ------- If color count was not found in the header, compute from bits
-         file_info["colors"] = (
-             file_info["colors"]
-             if file_info.get("colors", 0)
-             else (1 << file_info["bits"])
-         )
-         if offset == 14 + file_info["header_size"] and file_info["bits"] <= 8:
-             offset += 4 * file_info["colors"]
-
-         # ---------------------- Check bit depth for unusual unsupported values
-         self.mode, raw_mode = BIT2MODE.get(file_info["bits"], (None, None))
-         if self.mode is None:
-             msg = f"Unsupported BMP pixel depth ({file_info['bits']})"
-             raise OSError(msg)
-
-         # ---------------- Process BMP with Bitfields compression (not palette)
-         decoder_name = "raw"
-         if file_info["compression"] == self.BITFIELDS:
-             SUPPORTED = {
-                 32: [
-                     (0xFF0000, 0xFF00, 0xFF, 0x0),
-                     (0xFF000000, 0xFF0000, 0xFF00, 0x0),
-                     (0xFF000000, 0xFF0000, 0xFF00, 0xFF),
-                     (0xFF, 0xFF00, 0xFF0000, 0xFF000000),
-                     (0xFF0000, 0xFF00, 0xFF, 0xFF000000),
-                     (0x0, 0x0, 0x0, 0x0),
-                 ],
-                 24: [(0xFF0000, 0xFF00, 0xFF)],
-                 16: [(0xF800, 0x7E0, 0x1F), (0x7C00, 0x3E0, 0x1F)],
-             }
-             MASK_MODES = {
-                 (32, (0xFF0000, 0xFF00, 0xFF, 0x0)): "BGRX",
-                 (32, (0xFF000000, 0xFF0000, 0xFF00, 0x0)): "XBGR",
-                 (32, (0xFF000000, 0xFF0000, 0xFF00, 0xFF)): "ABGR",
-                 (32, (0xFF, 0xFF00, 0xFF0000, 0xFF000000)): "RGBA",
-                 (32, (0xFF0000, 0xFF00, 0xFF, 0xFF000000)): "BGRA",
-                 (32, (0x0, 0x0, 0x0, 0x0)): "BGRA",
-                 (24, (0xFF0000, 0xFF00, 0xFF)): "BGR",
-                 (16, (0xF800, 0x7E0, 0x1F)): "BGR;16",
-                 (16, (0x7C00, 0x3E0, 0x1F)): "BGR;15",
-             }
-             if file_info["bits"] in SUPPORTED:
-                 if (
-                     file_info["bits"] == 32
-                     and file_info["rgba_mask"] in SUPPORTED[file_info["bits"]]
-                 ):
-                     raw_mode = MASK_MODES[(file_info["bits"], file_info["rgba_mask"])]
-                     self.mode = "RGBA" if "A" in raw_mode else self.mode
-                 elif (
-                     file_info["bits"] in (24, 16)
-                     and file_info["rgb_mask"] in SUPPORTED[file_info["bits"]]
-                 ):
-                     raw_mode = MASK_MODES[(file_info["bits"], file_info["rgb_mask"])]
-                 else:
-                     msg = "Unsupported BMP bitfields layout"
-                     raise OSError(msg)
-             else:
-                 msg = "Unsupported BMP bitfields layout"
-                 raise OSError(msg)
-         elif file_info["compression"] == self.RAW:
-             if file_info["bits"] == 32 and header == 22:  # 32-bit .cur offset
-                 raw_mode, self.mode = "BGRA", "RGBA"
-         elif file_info["compression"] in (self.RLE8, self.RLE4):
-             decoder_name = "bmp_rle"
-         else:
-             msg = f"Unsupported BMP compression ({file_info['compression']})"
-             raise OSError(msg)
-
-         # --------------- Once the header is processed, process the palette/LUT
-         if self.mode == "P":  # Paletted for 1, 4 and 8 bit images
-             # ---------------------------------------------------- 1-bit images
-             if not (0 < file_info["colors"] <= 65536):
-                 msg = f"Unsupported BMP Palette size ({file_info['colors']})"
-                 raise OSError(msg)
-             else:
-                 padding = file_info["palette_padding"]
-                 palette = read(padding * file_info["colors"])
-                 greyscale = True
-                 indices = (
-                     (0, 255)
-                     if file_info["colors"] == 2
-                     else list(range(file_info["colors"]))
-                 )
-
-                 # ----------------- Check if greyscale and ignore palette if so
-                 for ind, val in enumerate(indices):
-                     rgb = palette[ind * padding : ind * padding + 3]
-                     if rgb != o8(val) * 3:
-                         greyscale = False
-
-                 # ------- If all colors are grey, white or black, ditch palette
-                 if greyscale:
-                     self.mode = "1" if file_info["colors"] == 2 else "L"
-                     raw_mode = self.mode
-                 else:
-                     self.mode = "P"
-                     self.palette = ImagePalette.raw(
-                         "BGRX" if padding == 4 else "BGR", palette
-                     )
-
-         # ---------------------------- Finally set the tile data for the plugin
-         self.info["compression"] = file_info["compression"]
-         args = [raw_mode]
-         if decoder_name == "bmp_rle":
-             args.append(file_info["compression"] == self.RLE4)
-         else:
-             args.append(((file_info["width"] * file_info["bits"] + 31) >> 3) & (~3))
-         args.append(file_info["direction"])
-         self.tile = [
-             (
-                 decoder_name,
-                 (0, 0, file_info["width"], file_info["height"]),
-                 offset or self.fp.tell(),
-                 tuple(args),
-             )
-         ]
-
-     def _open(self):
-         """Open file, check magic number and read header"""
-         # read 14 bytes: magic number, filesize, reserved, header final offset
-         head_data = self.fp.read(14)
-         # choke if the file does not have the required magic bytes
-         if not _accept(head_data):
-             msg = "Not a BMP file"
-             raise SyntaxError(msg)
-         # read the start position of the BMP image data (u32)
-         offset = i32(head_data, 10)
-         # load bitmap information (offset=raster info)
-         self._bitmap(offset=offset)
-
-
- class BmpRleDecoder(ImageFile.PyDecoder):
-     _pulls_fd = True
-
-     def decode(self, buffer):
-         rle4 = self.args[1]
-         data = bytearray()
-         x = 0
-         while len(data) < self.state.xsize * self.state.ysize:
-             pixels = self.fd.read(1)
-             byte = self.fd.read(1)
-             if not pixels or not byte:
-                 break
-             num_pixels = pixels[0]
-             if num_pixels:
-                 # encoded mode
-                 if x + num_pixels > self.state.xsize:
-                     # Too much data for row
-                     num_pixels = max(0, self.state.xsize - x)
-                 if rle4:
-                     first_pixel = o8(byte[0] >> 4)
-                     second_pixel = o8(byte[0] & 0x0F)
-                     for index in range(num_pixels):
-                         if index % 2 == 0:
-                             data += first_pixel
-                         else:
-                             data += second_pixel
-                 else:
-                     data += byte * num_pixels
-                 x += num_pixels
-             else:
-                 if byte[0] == 0:
-                     # end of line
-                     while len(data) % self.state.xsize != 0:
-                         data += b"\x00"
-                     x = 0
-                 elif byte[0] == 1:
-                     # end of bitmap
-                     break
-                 elif byte[0] == 2:
-                     # delta
-                     bytes_read = self.fd.read(2)
-                     if len(bytes_read) < 2:
-                         break
-                     right, up = self.fd.read(2)
-                     data += b"\x00" * (right + up * self.state.xsize)
-                     x = len(data) % self.state.xsize
-                 else:
-                     # absolute mode
-                     if rle4:
-                         # 2 pixels per byte
-                         byte_count = byte[0] // 2
-                         bytes_read = self.fd.read(byte_count)
-                         for byte_read in bytes_read:
-                             data += o8(byte_read >> 4)
-                             data += o8(byte_read & 0x0F)
-                     else:
-                         byte_count = byte[0]
-                         bytes_read = self.fd.read(byte_count)
-                         data += bytes_read
-                     if len(bytes_read) < byte_count:
-                         break
-                     x += byte[0]
-
-                     # align to 16-bit word boundary
-                     if self.fd.tell() % 2 != 0:
-                         self.fd.seek(1, os.SEEK_CUR)
-         rawmode = "L" if self.mode == "L" else "P"
-         self.set_as_raw(bytes(data), (rawmode, 0, self.args[-1]))
-         return -1, 0
-
-
- # =============================================================================
- # Image plugin for the DIB format (BMP alias)
- # =============================================================================
- class DibImageFile(BmpImageFile):
-     format = "DIB"
-     format_description = "Windows Bitmap"
-
-     def _open(self):
-         self._bitmap()
-
-
- #
- # --------------------------------------------------------------------
- # Write BMP file
-
-
- SAVE = {
-     "1": ("1", 1, 2),
-     "L": ("L", 8, 256),
-     "P": ("P", 8, 256),
-     "RGB": ("BGR", 24, 0),
-     "RGBA": ("BGRA", 32, 0),
- }
-
-
- def _dib_save(im, fp, filename):
-     _save(im, fp, filename, False)
-
-
- def _save(im, fp, filename, bitmap_header=True):
-     try:
-         rawmode, bits, colors = SAVE[im.mode]
-     except KeyError as e:
-         msg = f"cannot write mode {im.mode} as BMP"
-         raise OSError(msg) from e
-
-     info = im.encoderinfo
-
-     dpi = info.get("dpi", (96, 96))
-
-     # 1 meter == 39.3701 inches
-     ppm = tuple(map(lambda x: int(x * 39.3701 + 0.5), dpi))
-
-     stride = ((im.size[0] * bits + 7) // 8 + 3) & (~3)
-     header = 40  # or 64 for OS/2 version 2
-     image = stride * im.size[1]
-
-     if im.mode == "1":
-         palette = b"".join(o8(i) * 4 for i in (0, 255))
-     elif im.mode == "L":
-         palette = b"".join(o8(i) * 4 for i in range(256))
-     elif im.mode == "P":
-         palette = im.im.getpalette("RGB", "BGRX")
-         colors = len(palette) // 4
-     else:
-         palette = None
-
-     # bitmap header
-     if bitmap_header:
-         offset = 14 + header + colors * 4
-         file_size = offset + image
-         if file_size > 2**32 - 1:
-             msg = "File size is too large for the BMP format"
-             raise ValueError(msg)
-         fp.write(
-             b"BM"  # file type (magic)
-             + o32(file_size)  # file size
-             + o32(0)  # reserved
-             + o32(offset)  # image data offset
-         )
-
-     # bitmap info header
-     fp.write(
-         o32(header)  # info header size
-         + o32(im.size[0])  # width
-         + o32(im.size[1])  # height
-         + o16(1)  # planes
-         + o16(bits)  # depth
-         + o32(0)  # compression (0=uncompressed)
-         + o32(image)  # size of bitmap
-         + o32(ppm[0])  # resolution
-         + o32(ppm[1])  # resolution
-         + o32(colors)  # colors used
-         + o32(colors)  # colors important
-     )
-
-     fp.write(b"\0" * (header - 40))  # padding (for OS/2 format)
-
-     if palette:
-         fp.write(palette)
-
-     ImageFile._save(im, fp, [("raw", (0, 0) + im.size, 0, (rawmode, stride, -1))])
-
-
- #
- # --------------------------------------------------------------------
- # Registry
-
-
- Image.register_open(BmpImageFile.format, BmpImageFile, _accept)
- Image.register_save(BmpImageFile.format, _save)
-
- Image.register_extension(BmpImageFile.format, ".bmp")
-
- Image.register_mime(BmpImageFile.format, "image/bmp")
-
- Image.register_decoder("bmp_rle", BmpRleDecoder)
-
- Image.register_open(DibImageFile.format, DibImageFile, _dib_accept)
- Image.register_save(DibImageFile.format, _dib_save)
-
- Image.register_extension(DibImageFile.format, ".dib")
-
- Image.register_mime(DibImageFile.format, "image/bmp")

spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/WebPImagePlugin.py DELETED
@@ -1,366 +0,0 @@
1
- from io import BytesIO
2
-
3
- from . import Image, ImageFile
4
-
5
- try:
6
- from . import _webp
7
-
8
- SUPPORTED = True
9
- except ImportError:
10
- SUPPORTED = False
11
-
12
-
13
- _VALID_WEBP_MODES = {"RGBX": True, "RGBA": True, "RGB": True}
14
-
15
- _VALID_WEBP_LEGACY_MODES = {"RGB": True, "RGBA": True}
16
-
17
- _VP8_MODES_BY_IDENTIFIER = {
18
- b"VP8 ": "RGB",
19
- b"VP8X": "RGBA",
20
- b"VP8L": "RGBA", # lossless
21
- }
22
-
23
-
24
- def _accept(prefix):
25
- is_riff_file_format = prefix[:4] == b"RIFF"
26
- is_webp_file = prefix[8:12] == b"WEBP"
27
- is_valid_vp8_mode = prefix[12:16] in _VP8_MODES_BY_IDENTIFIER
28
-
29
- if is_riff_file_format and is_webp_file and is_valid_vp8_mode:
30
- if not SUPPORTED:
31
- return (
32
- "image file could not be identified because WEBP support not installed"
33
- )
34
- return True
35
-
36
-
37
- class WebPImageFile(ImageFile.ImageFile):
38
- format = "WEBP"
39
- format_description = "WebP image"
40
- __loaded = 0
41
- __logical_frame = 0
42
-
43
- def _open(self):
44
- if not _webp.HAVE_WEBPANIM:
45
- # Legacy mode
46
- data, width, height, self.mode, icc_profile, exif = _webp.WebPDecode(
47
- self.fp.read()
48
- )
49
- if icc_profile:
50
- self.info["icc_profile"] = icc_profile
51
- if exif:
52
- self.info["exif"] = exif
53
- self._size = width, height
54
- self.fp = BytesIO(data)
55
- self.tile = [("raw", (0, 0) + self.size, 0, self.mode)]
56
- self.n_frames = 1
57
- self.is_animated = False
58
- return
59
-
60
- # Use the newer AnimDecoder API to parse the (possibly) animated file,
61
- # and access muxed chunks like ICC/EXIF/XMP.
62
- self._decoder = _webp.WebPAnimDecoder(self.fp.read())
63
-
64
- # Get info from decoder
65
- width, height, loop_count, bgcolor, frame_count, mode = self._decoder.get_info()
66
- self._size = width, height
67
- self.info["loop"] = loop_count
68
- bg_a, bg_r, bg_g, bg_b = (
69
- (bgcolor >> 24) & 0xFF,
70
- (bgcolor >> 16) & 0xFF,
71
- (bgcolor >> 8) & 0xFF,
72
- bgcolor & 0xFF,
73
- )
74
- self.info["background"] = (bg_r, bg_g, bg_b, bg_a)
75
- self.n_frames = frame_count
76
- self.is_animated = self.n_frames > 1
77
- self.mode = "RGB" if mode == "RGBX" else mode
78
- self.rawmode = mode
79
- self.tile = []
80
-
81
- # Attempt to read ICC / EXIF / XMP chunks from file
82
- icc_profile = self._decoder.get_chunk("ICCP")
83
- exif = self._decoder.get_chunk("EXIF")
84
- xmp = self._decoder.get_chunk("XMP ")
85
- if icc_profile:
86
- self.info["icc_profile"] = icc_profile
87
- if exif:
88
- self.info["exif"] = exif
89
- if xmp:
90
- self.info["xmp"] = xmp
91
-
92
- # Initialize seek state
93
- self._reset(reset=False)
94
-
95
- def _getexif(self):
96
- if "exif" not in self.info:
97
- return None
98
- return self.getexif()._get_merged_dict()
99
-
100
- def getxmp(self):
101
- """
102
- Returns a dictionary containing the XMP tags.
103
- Requires defusedxml to be installed.
104
-
105
- :returns: XMP tags in a dictionary.
106
- """
107
- return self._getxmp(self.info["xmp"]) if "xmp" in self.info else {}
108
-
109
- def seek(self, frame):
110
- if not self._seek_check(frame):
111
- return
112
-
113
- # Set logical frame to requested position
114
- self.__logical_frame = frame
115
-
116
- def _reset(self, reset=True):
117
- if reset:
118
- self._decoder.reset()
119
- self.__physical_frame = 0
120
- self.__loaded = -1
121
- self.__timestamp = 0
122
-
123
- def _get_next(self):
124
- # Get next frame
125
- ret = self._decoder.get_next()
126
- self.__physical_frame += 1
127
-
128
- # Check if an error occurred
129
- if ret is None:
130
- self._reset() # Reset just to be safe
131
- self.seek(0)
132
- msg = "failed to decode next frame in WebP file"
133
- raise EOFError(msg)
134
-
135
- # Compute duration
136
- data, timestamp = ret
137
- duration = timestamp - self.__timestamp
138
- self.__timestamp = timestamp
139
-
140
- # libwebp gives frame end, adjust to start of frame
141
- timestamp -= duration
142
- return data, timestamp, duration
143
-
144
- def _seek(self, frame):
145
- if self.__physical_frame == frame:
146
- return # Nothing to do
147
- if frame < self.__physical_frame:
148
- self._reset() # Rewind to beginning
149
- while self.__physical_frame < frame:
150
- self._get_next() # Advance to the requested frame
151
-
152
- def load(self):
153
- if _webp.HAVE_WEBPANIM:
154
- if self.__loaded != self.__logical_frame:
155
- self._seek(self.__logical_frame)
156
-
157
- # We need to load the image data for this frame
158
- data, timestamp, duration = self._get_next()
159
- self.info["timestamp"] = timestamp
160
- self.info["duration"] = duration
161
- self.__loaded = self.__logical_frame
162
-
163
- # Set tile
164
- if self.fp and self._exclusive_fp:
165
- self.fp.close()
166
- self.fp = BytesIO(data)
167
- self.tile = [("raw", (0, 0) + self.size, 0, self.rawmode)]
168
-
169
- return super().load()
170
-
171
- def tell(self):
172
- if not _webp.HAVE_WEBPANIM:
173
- return super().tell()
174
-
175
- return self.__logical_frame
176
-
177
-
178
- def _save_all(im, fp, filename):
179
- encoderinfo = im.encoderinfo.copy()
180
- append_images = list(encoderinfo.get("append_images", []))
181
-
182
- # If total frame count is 1, then save using the legacy API, which
183
- # will preserve non-alpha modes
184
- total = 0
185
- for ims in [im] + append_images:
186
- total += getattr(ims, "n_frames", 1)
187
- if total == 1:
188
- _save(im, fp, filename)
189
- return
190
-
191
- background = (0, 0, 0, 0)
192
- if "background" in encoderinfo:
193
- background = encoderinfo["background"]
194
- elif "background" in im.info:
195
- background = im.info["background"]
196
- if isinstance(background, int):
197
- # GifImagePlugin stores a global color table index in
198
- # info["background"]. So it must be converted to an RGBA value
199
- palette = im.getpalette()
200
- if palette:
201
- r, g, b = palette[background * 3 : (background + 1) * 3]
202
- background = (r, g, b, 255)
203
- else:
204
- background = (background, background, background, 255)
205
-
206
- duration = im.encoderinfo.get("duration", im.info.get("duration", 0))
207
- loop = im.encoderinfo.get("loop", 0)
208
- minimize_size = im.encoderinfo.get("minimize_size", False)
209
- kmin = im.encoderinfo.get("kmin", None)
210
- kmax = im.encoderinfo.get("kmax", None)
211
- allow_mixed = im.encoderinfo.get("allow_mixed", False)
212
- verbose = False
213
- lossless = im.encoderinfo.get("lossless", False)
214
- quality = im.encoderinfo.get("quality", 80)
215
- method = im.encoderinfo.get("method", 0)
216
- icc_profile = im.encoderinfo.get("icc_profile") or ""
217
- exif = im.encoderinfo.get("exif", "")
218
- if isinstance(exif, Image.Exif):
219
- exif = exif.tobytes()
220
- xmp = im.encoderinfo.get("xmp", "")
221
    if allow_mixed:
        lossless = False

    # Sensible keyframe defaults are from gif2webp.c script
    if kmin is None:
        kmin = 9 if lossless else 3
    if kmax is None:
        kmax = 17 if lossless else 5

    # Validate background color
    if (
        not isinstance(background, (list, tuple))
        or len(background) != 4
        or not all(0 <= v < 256 for v in background)
    ):
        msg = f"Background color is not an RGBA tuple clamped to (0-255): {background}"
        raise OSError(msg)

    # Convert to packed uint
    bg_r, bg_g, bg_b, bg_a = background
    background = (bg_a << 24) | (bg_r << 16) | (bg_g << 8) | (bg_b << 0)

    # Setup the WebP animation encoder
    enc = _webp.WebPAnimEncoder(
        im.size[0],
        im.size[1],
        background,
        loop,
        minimize_size,
        kmin,
        kmax,
        allow_mixed,
        verbose,
    )

    # Add each frame
    frame_idx = 0
    timestamp = 0
    cur_idx = im.tell()
    try:
        for ims in [im] + append_images:
            # Get # of frames in this image
            nfr = getattr(ims, "n_frames", 1)

            for idx in range(nfr):
                ims.seek(idx)
                ims.load()

                # Make sure image mode is supported
                frame = ims
                rawmode = ims.mode
                if ims.mode not in _VALID_WEBP_MODES:
                    alpha = (
                        "A" in ims.mode
                        or "a" in ims.mode
                        or (ims.mode == "P" and "A" in ims.im.getpalettemode())
                    )
                    rawmode = "RGBA" if alpha else "RGB"
                    frame = ims.convert(rawmode)

                if rawmode == "RGB":
                    # For faster conversion, use RGBX
                    rawmode = "RGBX"

                # Append the frame to the animation encoder
                enc.add(
                    frame.tobytes("raw", rawmode),
                    round(timestamp),
                    frame.size[0],
                    frame.size[1],
                    rawmode,
                    lossless,
                    quality,
                    method,
                )

                # Update timestamp and frame index
                if isinstance(duration, (list, tuple)):
                    timestamp += duration[frame_idx]
                else:
                    timestamp += duration
                frame_idx += 1

    finally:
        im.seek(cur_idx)

    # Force encoder to flush frames
    enc.add(None, round(timestamp), 0, 0, "", lossless, quality, 0)

    # Get the final output from the encoder
    data = enc.assemble(icc_profile, exif, xmp)
    if data is None:
        msg = "cannot write file as WebP (encoder returned None)"
        raise OSError(msg)

    fp.write(data)


def _save(im, fp, filename):
    lossless = im.encoderinfo.get("lossless", False)
    quality = im.encoderinfo.get("quality", 80)
    icc_profile = im.encoderinfo.get("icc_profile") or ""
    exif = im.encoderinfo.get("exif", b"")
    if isinstance(exif, Image.Exif):
        exif = exif.tobytes()
    if exif.startswith(b"Exif\x00\x00"):
        exif = exif[6:]
    xmp = im.encoderinfo.get("xmp", "")
    method = im.encoderinfo.get("method", 4)
    exact = 1 if im.encoderinfo.get("exact") else 0

    if im.mode not in _VALID_WEBP_LEGACY_MODES:
        alpha = (
            "A" in im.mode
            or "a" in im.mode
            or (im.mode == "P" and "transparency" in im.info)
        )
        im = im.convert("RGBA" if alpha else "RGB")

    data = _webp.WebPEncode(
        im.tobytes(),
        im.size[0],
        im.size[1],
        lossless,
        float(quality),
        im.mode,
        icc_profile,
        method,
        exact,
        exif,
        xmp,
    )
    if data is None:
        msg = "cannot write file as WebP (encoder returned None)"
        raise OSError(msg)

    fp.write(data)


Image.register_open(WebPImageFile.format, WebPImageFile, _accept)
if SUPPORTED:
    Image.register_save(WebPImageFile.format, _save)
    if _webp.HAVE_WEBPANIM:
        Image.register_save_all(WebPImageFile.format, _save_all)
    Image.register_extension(WebPImageFile.format, ".webp")
    Image.register_mime(WebPImageFile.format, "image/webp")
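
A brief usage sketch of the two save paths registered above (a minimal example; the file names are illustrative):

from PIL import Image

# Still image: routed through _save().
im = Image.open("photo.png")
im.save("photo.webp", quality=80, method=4)

# Animation: routed through _save_all(); duration is per-frame milliseconds.
frames = [Image.open(f"frame{i}.png") for i in range(3)]
frames[0].save(
    "anim.webp",
    save_all=True,
    append_images=frames[1:],
    duration=100,
    loop=0,
    background=(0, 0, 0, 0),
)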
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/expr/__init__.py DELETED
@@ -1,19 +0,0 @@
"""Tools for creating transform & filter expressions with a python syntax"""
# ruff: noqa
from typing import Any

from .core import datum, Expression
from .funcs import *
from .consts import *
from ..vegalite.v5.schema.core import ExprRef as _ExprRef


class _ExprType:
    def __init__(self, expr):
        vars(self).update(expr)

    def __call__(self, expr, **kwargs):
        return _ExprRef(expr, **kwargs)


expr: Any = _ExprType(globals())
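
A minimal usage sketch for this module (assuming Altair v5; the dataset URL and field names are illustrative):

import altair as alt
from altair import datum, expr

# `datum` builds field references and `expr` exposes Vega expression
# functions (e.g. if_); `alt.expr(...)` itself builds an ExprRef.
chart = (
    alt.Chart("https://vega.github.io/vega-datasets/data/cars.json")
    .mark_point()
    .encode(x="Horsepower:Q", y="Miles_per_Gallon:Q")
    .transform_calculate(
        origin_label=expr.if_(datum.Origin == "USA", "domestic", "import")
    )
    .transform_filter(datum.Horsepower > 100)
)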
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/charset_normalizer/models.py DELETED
@@ -1,337 +0,0 @@
from encodings.aliases import aliases
from hashlib import sha256
from json import dumps
from typing import Any, Dict, Iterator, List, Optional, Tuple, Union

from .constant import TOO_BIG_SEQUENCE
from .utils import iana_name, is_multi_byte_encoding, unicode_range


class CharsetMatch:
    def __init__(
        self,
        payload: bytes,
        guessed_encoding: str,
        mean_mess_ratio: float,
        has_sig_or_bom: bool,
        languages: "CoherenceMatches",
        decoded_payload: Optional[str] = None,
    ):
        self._payload: bytes = payload

        self._encoding: str = guessed_encoding
        self._mean_mess_ratio: float = mean_mess_ratio
        self._languages: CoherenceMatches = languages
        self._has_sig_or_bom: bool = has_sig_or_bom
        self._unicode_ranges: Optional[List[str]] = None

        self._leaves: List[CharsetMatch] = []
        self._mean_coherence_ratio: float = 0.0

        self._output_payload: Optional[bytes] = None
        self._output_encoding: Optional[str] = None

        self._string: Optional[str] = decoded_payload

    def __eq__(self, other: object) -> bool:
        if not isinstance(other, CharsetMatch):
            raise TypeError(
                "__eq__ cannot be invoked on {} and {}.".format(
                    str(other.__class__), str(self.__class__)
                )
            )
        return self.encoding == other.encoding and self.fingerprint == other.fingerprint

    def __lt__(self, other: object) -> bool:
        """
        Implemented to make sorted available upon CharsetMatches items.
        """
        if not isinstance(other, CharsetMatch):
            raise ValueError

        chaos_difference: float = abs(self.chaos - other.chaos)
        coherence_difference: float = abs(self.coherence - other.coherence)

        # Below 1% difference --> Use Coherence
        if chaos_difference < 0.01 and coherence_difference > 0.02:
            # When having a tough decision, use the result that decoded as many multi-byte as possible.
            if chaos_difference == 0.0 and self.coherence == other.coherence:
                return self.multi_byte_usage > other.multi_byte_usage
            return self.coherence > other.coherence

        return self.chaos < other.chaos

    @property
    def multi_byte_usage(self) -> float:
        return 1.0 - len(str(self)) / len(self.raw)

    def __str__(self) -> str:
        # Lazy Str Loading
        if self._string is None:
            self._string = str(self._payload, self._encoding, "strict")
        return self._string

    def __repr__(self) -> str:
        return "<CharsetMatch '{}' bytes({})>".format(self.encoding, self.fingerprint)

    def add_submatch(self, other: "CharsetMatch") -> None:
        if not isinstance(other, CharsetMatch) or other == self:
            raise ValueError(
                "Unable to add instance <{}> as a submatch of a CharsetMatch".format(
                    other.__class__
                )
            )

        other._string = None  # Unload RAM usage; dirty trick.
        self._leaves.append(other)

    @property
    def encoding(self) -> str:
        return self._encoding

    @property
    def encoding_aliases(self) -> List[str]:
        """
        Encodings are known under many names; this can help when searching for IBM855 while it is listed as CP855.
        """
        also_known_as: List[str] = []
        for u, p in aliases.items():
            if self.encoding == u:
                also_known_as.append(p)
            elif self.encoding == p:
                also_known_as.append(u)
        return also_known_as

    @property
    def bom(self) -> bool:
        return self._has_sig_or_bom

    @property
    def byte_order_mark(self) -> bool:
        return self._has_sig_or_bom

    @property
    def languages(self) -> List[str]:
        """
        Return the complete list of possible languages found in the decoded sequence.
        Usually not very useful; the returned list may be empty even when the 'language' property returns something other than 'Unknown'.
        """
        return [e[0] for e in self._languages]

    @property
    def language(self) -> str:
        """
        Most probable language found in the decoded sequence. If none was detected or inferred, the property returns
        "Unknown".
        """
        if not self._languages:
            # Trying to infer the language based on the given encoding
            # It's either English or we should not commit ourselves in certain cases.
            if "ascii" in self.could_be_from_charset:
                return "English"

            # doing it there to avoid circular import
            from charset_normalizer.cd import encoding_languages, mb_encoding_languages

            languages = (
                mb_encoding_languages(self.encoding)
                if is_multi_byte_encoding(self.encoding)
                else encoding_languages(self.encoding)
            )

            if len(languages) == 0 or "Latin Based" in languages:
                return "Unknown"

            return languages[0]

        return self._languages[0][0]

    @property
    def chaos(self) -> float:
        return self._mean_mess_ratio

    @property
    def coherence(self) -> float:
        if not self._languages:
            return 0.0
        return self._languages[0][1]

    @property
    def percent_chaos(self) -> float:
        return round(self.chaos * 100, ndigits=3)

    @property
    def percent_coherence(self) -> float:
        return round(self.coherence * 100, ndigits=3)

    @property
    def raw(self) -> bytes:
        """
        Original untouched bytes.
        """
        return self._payload

    @property
    def submatch(self) -> List["CharsetMatch"]:
        return self._leaves

    @property
    def has_submatch(self) -> bool:
        return len(self._leaves) > 0

    @property
    def alphabets(self) -> List[str]:
        if self._unicode_ranges is not None:
            return self._unicode_ranges
        # list detected ranges
        detected_ranges: List[Optional[str]] = [
            unicode_range(char) for char in str(self)
        ]
        # filter and sort
        self._unicode_ranges = sorted(list({r for r in detected_ranges if r}))
        return self._unicode_ranges

    @property
    def could_be_from_charset(self) -> List[str]:
        """
        The complete list of encodings that output the exact SAME str result and therefore could be the originating
        encoding.
        This list does include the encoding available in the 'encoding' property.
        """
        return [self._encoding] + [m.encoding for m in self._leaves]

    def output(self, encoding: str = "utf_8") -> bytes:
        """
        Method to get the re-encoded bytes payload using the given target encoding. Defaults to UTF-8.
        Characters that cannot be mapped are replaced by the encoder rather than raising an error.
        """
        if self._output_encoding is None or self._output_encoding != encoding:
            self._output_encoding = encoding
            self._output_payload = str(self).encode(encoding, "replace")

        return self._output_payload  # type: ignore

    @property
    def fingerprint(self) -> str:
        """
        Retrieve the unique SHA256 computed using the transformed (re-encoded) payload. Not the original one.
        """
        return sha256(self.output()).hexdigest()


class CharsetMatches:
    """
    Container with every CharsetMatch item, ordered by default from the most probable to the least probable.
    Acts like a list (iterable) but does not implement all related methods.
    """

    def __init__(self, results: Optional[List[CharsetMatch]] = None):
        self._results: List[CharsetMatch] = sorted(results) if results else []

    def __iter__(self) -> Iterator[CharsetMatch]:
        yield from self._results

    def __getitem__(self, item: Union[int, str]) -> CharsetMatch:
        """
        Retrieve a single item either by its position or by encoding name (an alias may be used here).
        Raises KeyError upon an invalid index or an encoding not present in the results.
        """
        if isinstance(item, int):
            return self._results[item]
        if isinstance(item, str):
            item = iana_name(item, False)
            for result in self._results:
                if item in result.could_be_from_charset:
                    return result
        raise KeyError

    def __len__(self) -> int:
        return len(self._results)

    def __bool__(self) -> bool:
        return len(self._results) > 0

    def append(self, item: CharsetMatch) -> None:
        """
        Insert a single match. It will be inserted so as to preserve the sort order;
        it may be attached as a submatch instead.
        """
        if not isinstance(item, CharsetMatch):
            raise ValueError(
                "Cannot append instance '{}' to CharsetMatches".format(
                    str(item.__class__)
                )
            )
        # We should disable the submatch factoring when the input file is too heavy (conserve RAM usage)
        if len(item.raw) <= TOO_BIG_SEQUENCE:
            for match in self._results:
                if match.fingerprint == item.fingerprint and match.chaos == item.chaos:
                    match.add_submatch(item)
                    return
        self._results.append(item)
        self._results = sorted(self._results)

    def best(self) -> Optional["CharsetMatch"]:
        """
        Simply return the first match. Strict equivalent to matches[0].
        """
        if not self._results:
            return None
        return self._results[0]

    def first(self) -> Optional["CharsetMatch"]:
        """
        Redundant method; calls best(). Kept for backward-compatibility reasons.
        """
        return self.best()


CoherenceMatch = Tuple[str, float]
CoherenceMatches = List[CoherenceMatch]


class CliDetectionResult:
    def __init__(
        self,
        path: str,
        encoding: Optional[str],
        encoding_aliases: List[str],
        alternative_encodings: List[str],
        language: str,
        alphabets: List[str],
        has_sig_or_bom: bool,
        chaos: float,
        coherence: float,
        unicode_path: Optional[str],
        is_preferred: bool,
    ):
        self.path: str = path
        self.unicode_path: Optional[str] = unicode_path
        self.encoding: Optional[str] = encoding
        self.encoding_aliases: List[str] = encoding_aliases
        self.alternative_encodings: List[str] = alternative_encodings
        self.language: str = language
        self.alphabets: List[str] = alphabets
        self.has_sig_or_bom: bool = has_sig_or_bom
        self.chaos: float = chaos
        self.coherence: float = coherence
        self.is_preferred: bool = is_preferred

    @property
    def __dict__(self) -> Dict[str, Any]:  # type: ignore
        return {
            "path": self.path,
            "encoding": self.encoding,
            "encoding_aliases": self.encoding_aliases,
            "alternative_encodings": self.alternative_encodings,
            "language": self.language,
            "alphabets": self.alphabets,
            "has_sig_or_bom": self.has_sig_or_bom,
            "chaos": self.chaos,
            "coherence": self.coherence,
            "unicode_path": self.unicode_path,
            "is_preferred": self.is_preferred,
        }

    def to_json(self) -> str:
        return dumps(self.__dict__, ensure_ascii=True, indent=4)
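
A short usage sketch of these models through the library's public entry point (a minimal example, assuming charset_normalizer 3.x; the payload and detected values are illustrative):

from charset_normalizer import from_bytes

# from_bytes() returns a CharsetMatches container; best() picks the most
# probable CharsetMatch (lowest chaos, highest coherence).
matches = from_bytes("Всеки човек има право на образование.".encode("cp1251"))
best = matches.best()
if best is not None:
    print(best.encoding)     # e.g. "cp1251"
    print(best.language)     # e.g. "Bulgarian"
    print(str(best))         # the decoded text
    print(best.fingerprint)  # SHA256 of the UTF-8 re-encoded payload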
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/dateutil/parser/__init__.py DELETED
@@ -1,61 +0,0 @@
# -*- coding: utf-8 -*-
from ._parser import parse, parser, parserinfo, ParserError
from ._parser import DEFAULTPARSER, DEFAULTTZPARSER
from ._parser import UnknownTimezoneWarning

from ._parser import __doc__

from .isoparser import isoparser, isoparse

__all__ = ['parse', 'parser', 'parserinfo',
           'isoparse', 'isoparser',
           'ParserError',
           'UnknownTimezoneWarning']


###
# Deprecate portions of the private interface so that downstream code that
# is improperly relying on it is given *some* notice.


def __deprecated_private_func(f):
    from functools import wraps
    import warnings

    msg = ('{name} is a private function and may break without warning; '
           'it will be moved and/or renamed in future versions.')
    msg = msg.format(name=f.__name__)

    @wraps(f)
    def deprecated_func(*args, **kwargs):
        warnings.warn(msg, DeprecationWarning)
        return f(*args, **kwargs)

    return deprecated_func

def __deprecate_private_class(c):
    import warnings

    msg = ('{name} is a private class and may break without warning; '
           'it will be moved and/or renamed in future versions.')
    msg = msg.format(name=c.__name__)

    class private_class(c):
        __doc__ = c.__doc__

        def __init__(self, *args, **kwargs):
            warnings.warn(msg, DeprecationWarning)
            super(private_class, self).__init__(*args, **kwargs)

    private_class.__name__ = c.__name__

    return private_class


from ._parser import _timelex, _resultbase
from ._parser import _tzparser, _parsetz

_timelex = __deprecate_private_class(_timelex)
_tzparser = __deprecate_private_class(_tzparser)
_resultbase = __deprecate_private_class(_resultbase)
_parsetz = __deprecated_private_func(_parsetz)
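
A quick usage sketch of the public API re-exported above:

from dateutil.parser import parse, isoparse

# Heuristic parsing of many common date formats:
dt = parse("Sat Oct 11 17:13:46 UTC 2003")

# Strict ISO-8601 parsing (faster; no format guessing):
iso = isoparse("2003-10-11T17:13:46+00:00")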
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/sbixStrike.py DELETED
@@ -1,177 +0,0 @@
from fontTools.misc import sstruct
from fontTools.misc.textTools import safeEval
from .sbixGlyph import Glyph
import struct

sbixStrikeHeaderFormat = """
    >
    ppem:            H    # The PPEM for which this strike was designed (e.g., 9,
                          # 12, 24)
    resolution:      H    # The screen resolution (in dpi) for which this strike
                          # was designed (e.g., 72)
"""

sbixGlyphDataOffsetFormat = """
    >
    glyphDataOffset: L    # Offset from the beginning of the strike data record
                          # to data for the individual glyph
"""

sbixStrikeHeaderFormatSize = sstruct.calcsize(sbixStrikeHeaderFormat)
sbixGlyphDataOffsetFormatSize = sstruct.calcsize(sbixGlyphDataOffsetFormat)


class Strike(object):
    def __init__(self, rawdata=None, ppem=0, resolution=72):
        self.data = rawdata
        self.ppem = ppem
        self.resolution = resolution
        self.glyphs = {}

    def decompile(self, ttFont):
        if self.data is None:
            from fontTools import ttLib

            raise ttLib.TTLibError
        if len(self.data) < sbixStrikeHeaderFormatSize:
            from fontTools import ttLib

            raise ttLib.TTLibError(
                "Strike header too short: Expected %x, got %x."
                % (sbixStrikeHeaderFormatSize, len(self.data))
            )

        # read Strike header from raw data
        sstruct.unpack(
            sbixStrikeHeaderFormat, self.data[:sbixStrikeHeaderFormatSize], self
        )

        # calculate number of glyphs
        (firstGlyphDataOffset,) = struct.unpack(
            ">L",
            self.data[
                sbixStrikeHeaderFormatSize : sbixStrikeHeaderFormatSize
                + sbixGlyphDataOffsetFormatSize
            ],
        )
        self.numGlyphs = (
            firstGlyphDataOffset - sbixStrikeHeaderFormatSize
        ) // sbixGlyphDataOffsetFormatSize - 1
        # ^ -1 because there's one more offset than glyphs

        # build offset list for single glyph data offsets
        self.glyphDataOffsets = []
        for i in range(
            self.numGlyphs + 1
        ):  # + 1 because there's one more offset than glyphs
            start = i * sbixGlyphDataOffsetFormatSize + sbixStrikeHeaderFormatSize
            (current_offset,) = struct.unpack(
                ">L", self.data[start : start + sbixGlyphDataOffsetFormatSize]
            )
            self.glyphDataOffsets.append(current_offset)

        # iterate through offset list and slice raw data into glyph data records
        for i in range(self.numGlyphs):
            current_glyph = Glyph(
                rawdata=self.data[
                    self.glyphDataOffsets[i] : self.glyphDataOffsets[i + 1]
                ],
                gid=i,
            )
            current_glyph.decompile(ttFont)
            self.glyphs[current_glyph.glyphName] = current_glyph
        del self.glyphDataOffsets
        del self.numGlyphs
        del self.data

    def compile(self, ttFont):
        self.glyphDataOffsets = b""
        self.bitmapData = b""

        glyphOrder = ttFont.getGlyphOrder()

        # first glyph starts right after the header
        currentGlyphDataOffset = (
            sbixStrikeHeaderFormatSize
            + sbixGlyphDataOffsetFormatSize * (len(glyphOrder) + 1)
        )
        for glyphName in glyphOrder:
            if glyphName in self.glyphs:
                # we have glyph data for this glyph
                current_glyph = self.glyphs[glyphName]
            else:
                # must add empty glyph data record for this glyph
                current_glyph = Glyph(glyphName=glyphName)
            current_glyph.compile(ttFont)
            current_glyph.glyphDataOffset = currentGlyphDataOffset
            self.bitmapData += current_glyph.rawdata
            currentGlyphDataOffset += len(current_glyph.rawdata)
            self.glyphDataOffsets += sstruct.pack(
                sbixGlyphDataOffsetFormat, current_glyph
            )

        # add last "offset", really the end address of the last glyph data record
        dummy = Glyph()
        dummy.glyphDataOffset = currentGlyphDataOffset
        self.glyphDataOffsets += sstruct.pack(sbixGlyphDataOffsetFormat, dummy)

        # pack header
        self.data = sstruct.pack(sbixStrikeHeaderFormat, self)
        # add offsets and image data after header
        self.data += self.glyphDataOffsets + self.bitmapData

    def toXML(self, xmlWriter, ttFont):
        xmlWriter.begintag("strike")
        xmlWriter.newline()
        xmlWriter.simpletag("ppem", value=self.ppem)
        xmlWriter.newline()
        xmlWriter.simpletag("resolution", value=self.resolution)
        xmlWriter.newline()
        glyphOrder = ttFont.getGlyphOrder()
        for i in range(len(glyphOrder)):
            if glyphOrder[i] in self.glyphs:
                self.glyphs[glyphOrder[i]].toXML(xmlWriter, ttFont)
                # TODO: what if there are more glyph data records than (glyf table) glyphs?
        xmlWriter.endtag("strike")
        xmlWriter.newline()

    def fromXML(self, name, attrs, content, ttFont):
        if name in ["ppem", "resolution"]:
            setattr(self, name, safeEval(attrs["value"]))
        elif name == "glyph":
            if "graphicType" in attrs:
                myFormat = safeEval("'''" + attrs["graphicType"] + "'''")
            else:
                myFormat = None
            if "glyphname" in attrs:
                myGlyphName = safeEval("'''" + attrs["glyphname"] + "'''")
            elif "name" in attrs:
                myGlyphName = safeEval("'''" + attrs["name"] + "'''")
            else:
                from fontTools import ttLib

                raise ttLib.TTLibError("Glyph must have a glyph name.")
            if "originOffsetX" in attrs:
                myOffsetX = safeEval(attrs["originOffsetX"])
            else:
                myOffsetX = 0
            if "originOffsetY" in attrs:
                myOffsetY = safeEval(attrs["originOffsetY"])
            else:
                myOffsetY = 0
            current_glyph = Glyph(
                glyphName=myGlyphName,
                graphicType=myFormat,
                originOffsetX=myOffsetX,
                originOffsetY=myOffsetY,
            )
            for element in content:
                if isinstance(element, tuple):
                    name, attrs, content = element
                    current_glyph.fromXML(name, attrs, content, ttFont)
                    current_glyph.compile(ttFont)
            self.glyphs[current_glyph.glyphName] = current_glyph
        else:
            from fontTools import ttLib

            raise ttLib.TTLibError("can't handle '%s' element" % name)
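
A small sketch of how these strikes are typically reached from a font (a hedged example; "ColorEmoji.ttf" is a placeholder path for a font that actually contains an sbix table):

from fontTools.ttLib import TTFont

font = TTFont("ColorEmoji.ttf")
sbix = font["sbix"]

# The sbix table keeps a dict of Strike objects keyed by PPEM size;
# each Strike maps glyph names to Glyph records.
for ppem, strike in sbix.strikes.items():
    print(ppem, strike.resolution, len(strike.glyphs))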
 
spaces/Daniton/Midjourney-Disney/README.md DELETED
@@ -1,12 +0,0 @@
---
title: Midjourney Disney
emoji: 📈
colorFrom: green
colorTo: green
sdk: gradio
sdk_version: 3.21.0
app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Davis/twitter_scraper/README.md DELETED
@@ -1,46 +0,0 @@
---
title: Twitter_scraper
emoji: 🐠
colorFrom: purple
colorTo: red
sdk: streamlit
app_file: app.py
pinned: false
license: mit
---

# Configuration

`title`: _string_
Display title for the Space

`emoji`: _string_
Space emoji (emoji-only character allowed)

`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)

`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)

`sdk`: _string_
Can be either `gradio`, `streamlit`, or `static`

`sdk_version`: _string_
Only applicable for `streamlit` SDK.
See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.

`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
Path is relative to the root of the repository.

`models`: _List[string]_
HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space.
Will be parsed automatically from your code if not specified here.

`datasets`: _List[string]_
HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space.
Will be parsed automatically from your code if not specified here.

`pinned`: _boolean_
Whether the Space stays on top of your list.