parquet-converter committed
Commit 3d7e344 · 1 Parent(s): e242d9d

Update parquet files (step 32 of 296)

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Asoftech Automation Crack Serial 11 Pros and Cons of the Software.md +0 -127
  2. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cut the cable and stream live TV with these awesome apps.md +0 -166
  3. spaces/1phancelerku/anime-remove-background/Among Us Imposter Hack APK A Free and Easy Way to Be the Imposter in Every Game.md +0 -130
  4. spaces/1phancelerku/anime-remove-background/Download and Stream Brazil Zonal LP 2019 ft. Tawinji - The Best Afrobeat Song of 2022.md +0 -108
  5. spaces/2023Liu2023/bingo/src/components/ui/icons.tsx +0 -504
  6. spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/dataset.py +0 -183
  7. spaces/AB-TW/team-ai/agents/tools/smart_domain/db_entity_repository.py +0 -101
  8. spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/layers_123812KB .py +0 -118
  9. spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/model_param_init.py +0 -69
  10. spaces/AIConsultant/MusicGen/audiocraft/modules/diffusion_schedule.py +0 -272
  11. spaces/AIFILMS/generate_human_motion/VQ-Trans/dataset/prepare/download_smpl.sh +0 -13
  12. spaces/AIFILMS/generate_human_motion/pyrender/tests/unit/test_nodes.py +0 -124
  13. spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/audio.py +0 -179
  14. spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/nar_tts_modules.py +0 -138
  15. spaces/AIGuardians/SummarizeWikipediaDocument/app.py +0 -58
  16. spaces/AchyuthGamer/OpenGPT/g4f/Provider/ChatgptDuo.py +0 -57
  17. spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/midas_net.py +0 -76
  18. spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/describer/prisoner.py +0 -49
  19. spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/cleaners.py +0 -146
  20. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/consistency_models/__init__.py +0 -1
  21. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_zero.py +0 -645
  22. spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_fpn_iou_1x_coco.py +0 -6
  23. spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco.py +0 -4
  24. spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v3/README.md +0 -28
  25. spaces/Andy1621/uniformer_image_segmentation/configs/resnest/deeplabv3_s101-d8_512x512_160k_ade20k.py +0 -9
  26. spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/html_readable_style.css +0 -33
  27. spaces/Apex-X/Tm/roop/ui.py +0 -231
  28. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/codingstatemachine.py +0 -90
  29. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tomli/_re.py +0 -107
  30. spaces/AutoBG/Auto-BoardGame/README.md +0 -11
  31. spaces/AutoLLM/ArxivDigest/app.py +0 -196
  32. spaces/AutoLLM/AutoAgents/.github/README.md +0 -1
  33. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/lvis_v0_5_categories.py +0 -0
  34. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/engine/launch.py +0 -126
  35. spaces/Azai8915/ChubVenusTest/Dockerfile +0 -21
  36. spaces/BartPoint/VoiceChange/infer_pack/modules/F0Predictor/PMF0Predictor.py +0 -97
  37. spaces/Beasto/Image_Colorizer_Pix2Pix/app.py +0 -35
  38. spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/tz/tz.py +0 -1849
  39. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/network/xmlrpc.py +0 -60
  40. spaces/CVPR/LIVE/thrust/thrust/detail/complex/cproj.h +0 -71
  41. spaces/CVPR/LIVE/thrust/thrust/iterator/discard_iterator.h +0 -175
  42. spaces/CVPR/LIVE/thrust/thrust/iterator/iterator_facade.h +0 -543
  43. spaces/CVPR/LIVE/thrust/thrust/mr/memory_resource.h +0 -217
  44. spaces/CVPR/Text2Human/Text2Human/train_parsing_token.py +0 -122
  45. spaces/CVPR/WALT/mmdet/models/dense_heads/transformer_head.py +0 -654
  46. spaces/CVPR/WALT/mmdet/models/detectors/atss.py +0 -17
  47. spaces/CVPR/WALT/mmdet/utils/util_mixins.py +0 -104
  48. spaces/Campfireman/whisper_lab2/app.py +0 -119
  49. spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/manage.py +0 -43
  50. spaces/CofAI/optor/index.html +0 -53
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Asoftech Automation Crack Serial 11 Pros and Cons of the Software.md DELETED
@@ -1,127 +0,0 @@
1
-
2
- <h1>Asoftech Automation Crack Serial 11: What You Need to Know</h1>
3
- <p>If you are looking for a way to automate repetitive and tedious tasks on your computer, you might have heard of <strong>Asoftech Automation</strong>, a software tool that can help you create and run automation scripts with ease. But what if you don't want to pay for the software's license? You might be tempted to use a <strong>crack serial number</strong> that can unlock the full features of Asoftech Automation without any cost. However, before you do that, you should know what a crack serial number is, how it works, and what are the potential consequences of using it. In this article, we will explain everything you need to know about <strong>Asoftech Automation Crack Serial 11</strong>, including how to download, install, and use it.</p>
4
- <h2>asoftech automation crack serial 11</h2><br /><p><b><b>Download File</b> &#8250;&#8250;&#8250;&#8250;&#8250; <a href="https://byltly.com/2uKvJL">https://byltly.com/2uKvJL</a></b></p><br /><br />
5
- <h2>What is Asoftech Automation?</h2>
6
- <p>Asoftech Automation is a software tool that allows you to automate any combination of tasks on your computer. You can use it to record mouse movements and clicks, keyboard keystrokes, and other computer activities, and then replay them as many times as you want. You can also edit and customize your automation scripts with variables, loops, conditions, and other commands. With Asoftech Automation, you can save time and effort by automating tasks such as:</p>
7
- <ul>
8
- <li>Web browsing and data entry</li>
9
- <li>File backup and synchronization</li>
10
- <li>Software testing and debugging</li>
11
- <li>Game playing and cheating</li>
12
- <li>And much more</li>
13
- </ul>
14
- <p>Asoftech Automation has many features and benefits that make it a powerful and user-friendly automation tool. Some of them are:</p>
15
- <ul>
16
- <li>It has an intuitive interface that lets you create automation scripts with simple drag-and-drop actions.</li>
17
- <li>It supports multiple monitors and resolutions, so you can automate tasks on different screens.</li>
18
- <li>It has a built-in scheduler that lets you run automation scripts at specific times or intervals.</li>
19
- <li>It has a stealth mode that hides the software from the taskbar and tray icon, so you can run automation scripts in the background.</li>
20
- <li>It has a password protection feature that prevents unauthorized access to your automation scripts.</li>
21
- </ul>
22
- <h2>What is a crack serial number?</h2>
23
- <p>A crack serial number is a code that bypasses the software's registration and activation process. Normally, when you buy a software product, you need to enter a serial number or a license key that verifies your purchase and unlocks the full features of the software. However, some people use illegal methods to generate or obtain fake serial numbers or license keys that can trick the software into thinking that it is registered and activated. These fake codes are called crack serial numbers or cracks.</p>
24
- <p>A crack serial number can be obtained from various sources on the internet, such as websites, forums, torrents, or peer-to-peer networks. However, using a crack serial number has many risks and disadvantages that outweigh any perceived benefits. Some of them are:</p>
25
- <ul>
26
- <li>It is illegal and unethical to use a crack serial number. You are violating the software's terms of service and infringing its intellectual property rights. You could face legal actions or penalties from the software developer or owner.</li>
27
- <li>It is unsafe and unreliable to use a crack serial number. You could expose your computer to viruses, malware, spyware, or other harmful programs that could damage your system or steal your personal information. You could also experience errors, crashes, or performance issues with the software or your computer.</li>
28
- <li>It is unfair and disrespectful to use a crack serial number. You are depriving the software developer or owner of their rightful income and recognition for their hard work and creativity. You are also hurting other legitimate users who pay for the software's license.</li>
29
- </ul>
30
- <h2>How to download and install Asoftech Automation Crack Serial 11</h2>
31
- <p>If you still want to download and install Asoftech Automation Crack Serial 11 despite knowing its risks and disadvantages, here are the sources and steps for doing so:</p>
32
- <table border="1">
33
- <tr><th>Source</th><th>Steps</th></tr>
34
- <tr><td></td><td><ol><li>Go to https://mokuchinyu.tistory.com/34</li><li>Click on the "Download" button at the bottom of the page.</li><li>Extract the zip file to your desired location.</li><li>Run the "Asoftech.Automation.Crack.Serial.11.exe" file as administrator.</li><li>Follow the instructions on the screen.</li></ol></td></tr>
35
- <tr><td></td><td><ol><li>Go to https://selsoft.net/cracked/asoftech-automation-242-/99508.html</li><li>Click on one of the "Download Link" buttons at the bottom of the page.</li><li>Select one of the available servers to download from.</li><li>Extract the zip file to your desired location.</li><li>Run the "Asoftech.Automation.Crack.Serial.11.exe" file as administrator.</li><li>Follow the instructions on the screen.</li></ol></td></tr>
36
- <tr><td></td><td><ol><li>Go to https://new.c.mi.com/my/post/470635/Asoftech_Automation_Crack_Serial_11_CRACKED</li><li>Click on one of the "Download" buttons at the bottom of the page.</li><li>Select one of the available servers to download from.</li><li>Extract the zip file to your desired location.</li><li>Run the "Asoftech.Automation.Crack.Serial.11.exe" file as administrator.</li><li>Follow the instructions on the screen.</li></ol></td></tr>
37
- <tr><td></td><td><ol><li>Go to https://dreamlandit.com/wp-content/uploads/2022/10/immgard.pdf</li><li>Click on one of the "Download" buttons at the bottom of the page.</li><li>Select one of the available servers to download from.</li><li>Extract the zip file to your desired location.</li><li>Run the "Asoftech.Automation.Crack.Serial.11.exe" file as administrator.</li><li>Follow the instructions on the screen.</li></ol></td></tr>
38
- <tr><td></td><td><ol><li>Go to https://sway.office.com/NZfBouy5VcpopbaF</li><li>Click on one of the "Download" buttons at the bottom of the page.</li><li>Select one of the available servers to download from.</li><li>Extract the zip file to your desired location.</li><li>Run the "Asoftech.Automation.Crack.Serial.11.exe" file as administrator.</li><li>Follow the instructions on the screen.</li></ol></td></tr>
39
- </table>
40
- <h2>How to use Asoftech Automation Crack Serial 11</h2>
41
- <p>After you have downloaded and installed Asoftech Automation Crack Serial 11, you can start using it to automate tasks on your computer. Here are the basic functions and operations of Asoftech Automation:</p>
42
- <ul>
43
- <li>To create a new automation script, click on the "New" button on the toolbar or select "File > New" from the menu.</li>
44
- <li>To record an automation script, click on the "Record" button on the toolbar or press "Ctrl + R" on your keyboard. Then, perform the actions that you want to automate on your computer. When you are done, click on the "Stop" button on the toolbar or press "Ctrl + S" on your keyboard.</li>
45
- <li>To edit an automation script, double-click on the script name in the left panel or select "Edit > Edit Script" from the menu. You can modify the recorded actions by changing their parameters, adding or deleting commands, inserting variables, loops, conditions, and other functions.</li>
46
- <li>To run an automation script, select the script name in the left panel and click on the "Run" button on the toolbar or press "F5" on your keyboard. You can also set a schedule for running an automation script by clicking on the "Schedule" button on the toolbar or selecting "Tools > Schedule Task" from the menu.</li>
47
- <li>To save an automation script, click on the "Save" button on the toolbar or select "File > Save" from the menu. You can also export an automation script as an executable file by clicking on the "Export" button on the toolbar or selecting "File > Export as EXE" from the menu.</li>
48
- </ul>
49
- <p>Here are some tips and tricks for creating and running automation scripts with Asoftech Automation:</p>
50
- <ul>
51
- <li>To pause or resume an automation script while it is running, press "Pause/Break" on your keyboard.</li>
52
- <li>To stop an automation script while it is running, press "Esc" on your keyboard.</li>
53
- <li>To hide or show Asoftech Automation while it is running, press "Ctrl + Alt + H" on your keyboard.</li>
54
- <li>To add comments to your automation script, use "//" at the beginning of a line.</li>
55
- <li>To debug your automation script, use "Print Screen" command to capture screenshots of your computer screen during the execution of the script.</li>
56
- </ul>
57
- <h2>Conclusion</h2>
58
- <p>Asoftech Automation Crack Serial 11 is a software tool that can help you automate tasks on your computer without paying for its license. However, using a crack serial number is illegal, unsafe, and unfair. You could face legal actions or penalties from the software developer or owner, expose your computer to viruses or malware, and deprive the software developer or owner of their income and recognition. Therefore, we do not recommend using Asoftech Automation Crack Serial 11. Instead, we suggest you to buy a legitimate license for Asoftech Automation from its official website: https://www.asoftech.com/auto-clicker/</p>
59
- <p>asoftech automation 11 full version crack download<br />
60
- how to get asoftech automation crack serial key for free<br />
61
- asoftech automation 11 license code generator<br />
62
- asoftech automation crack serial 11 torrent<br />
63
- asoftech automation 11 activation key patch<br />
64
- asoftech automation crack serial 11 review<br />
65
- asoftech automation 11 registration code crack<br />
66
- asoftech automation crack serial 11 alternative<br />
67
- asoftech automation 11 keygen crack<br />
68
- asoftech automation crack serial 11 tutorial<br />
69
- asoftech automation 11 cracked software download<br />
70
- asoftech automation crack serial 11 features<br />
71
- asoftech automation 11 serial number crack<br />
72
- asoftech automation crack serial 11 comparison<br />
73
- asoftech automation 11 crack file download<br />
74
- asoftech automation crack serial 11 benefits<br />
75
- asoftech automation 11 unlock code crack<br />
76
- asoftech automation crack serial 11 pros and cons<br />
77
- asoftech automation 11 crack free download full version<br />
78
- asoftech automation crack serial 11 system requirements<br />
79
- asoftech automation 11 product key crack<br />
80
- asoftech automation crack serial 11 price<br />
81
- asoftech automation 11 crack latest version download<br />
82
- asoftech automation crack serial 11 support<br />
83
- asoftech automation 11 crack update download<br />
84
- asoftech automation crack serial 11 testimonials<br />
85
- asoftech automation 11 license key crack<br />
86
- asoftech automation crack serial 11 discount<br />
87
- asoftech automation 11 cracked apk download<br />
88
- asoftech automation crack serial 11 demo<br />
89
- asoftech automation 11 activation code crack<br />
90
- asoftech automation crack serial 11 faq<br />
91
- asoftech automation 11 key code crack<br />
92
- asoftech automation crack serial 11 guide<br />
93
- asoftech automation 11 cracked version download<br />
94
- asoftech automation crack serial 11 ratings<br />
95
- asoftech automation 11 registration key crack<br />
96
- asoftech automation crack serial 11 coupon code<br />
97
- asoftech automation 11 cracked app download<br />
98
- asoftech automation crack serial 11 manual<br />
99
- asoftech automation 11 license number crack<br />
100
- asoftech automation crack serial 11 warranty<br />
101
- asoftech automation 11 cracked software free download<br />
102
- asoftech automation crack serial 11 feedbacks<br />
103
- asoftech automation 11 activation key free download with crack <br />
104
- asoftech automation crack serial number for version 11 <br />
105
- how to install and use asoftech automation with crack and serial key <br />
106
- best sites to download cracked version of asoftech automation <br />
107
- how to fix errors and bugs in cracked version of asoftecth automaton</p>
108
- <h2>FAQs</h2>
109
- <h3>Q1: Is Asoftech Automation safe to use?</h3>
110
- <p>A1: Asoftech Automation is safe to use if you buy a legitimate license from its official website. However, if you use a crack serial number to activate Asoftech Automation, you could expose your computer to viruses or malware that could harm your system or steal your personal information.</p>
111
- <h3>Q2: Is Asoftech Automation legal to use?</h3>
112
- <p>A2: Asoftech Automation is legal to use if you buy a legitimate license from its official website. However, if you use a crack serial number to activate Asoftech Automation, you are violating the software's terms of service and infringing its intellectual property rights. You could face legal actions or penalties from the software developer or owner.</p>
113
- <h3>Q3: How can I get a legitimate license for Asoftech Automation?</h3>
114
- <p>A3: You can get a legitimate license for Asoftech Automation by visiting its official website: https://www.asoftech.com/auto-clicker/ and clicking on the "Buy Now" button. You can choose between a single-user license ($39.95) or a multi-user license ($99.95). You can pay with PayPal or credit card. After you complete your payment, you will receive an email with your license key and download link.</p>
115
- <h3>Q4: What are the alternatives to Asoftech Automation?</h3>
116
- <p>A4: There are many other software tools that can help you automate tasks on your computer. Some of them are:</p>
117
- <ul>
118
- <li>AutoHotkey: A free and open-source scripting language that can create macros and hotkeys for Windows applications. https://www.autohotkey.com/</li>
119
- <li>AutoIt: A freeware scripting language that can simulate keystrokes, mouse movements, and window interactions. https://www.autoitscript.com/site/autoit/</li>
120
- <li>Macro Recorder: A simple and easy-to-use software that can record and replay mouse and keyboard actions. https://www.macrorecorder.com/</li>
121
- <li>WinAutomation: A professional and powerful software that can automate desktop and web applications with visual scripting and drag-and-drop actions. https://www.winautomation.com/</li>
122
- </ul>
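If all you need is scripted mouse-and-keyboard automation, the free tools listed above replace a cracked copy outright, and the same idea takes only a few lines of ordinary Python. Here is a minimal sketch using the pyautogui library (assumed installed via `pip install pyautogui`); the screen coordinates and typed text are hypothetical placeholders, not anything taken from Asoftech Automation:

```python
import pyautogui

pyautogui.PAUSE = 0.5      # pause half a second between simulated actions
pyautogui.FAILSAFE = True  # moving the mouse to the top-left corner aborts

def fill_form_once():
    # Click where the (hypothetical) input field sits on screen.
    pyautogui.click(x=400, y=300)
    # Type a value, with a short delay between keystrokes.
    pyautogui.write("example entry", interval=0.05)
    # Tab to the next field, then submit.
    pyautogui.press("tab")
    pyautogui.press("enter")

# Replay the routine a fixed number of times, like a recorded script.
for _ in range(3):
    fill_form_once()
```

This loop-and-replay structure is essentially what recorder tools such as Asoftech Automation or AutoHotkey generate for you behind the scenes.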
123
- <h3>Q5: How can I contact Asoftech for support?</h3>
124
- <p>A5: You can contact Asoftech for support by visiting their website: https://www.asoftech.com/support.html and filling out their online form. You can also email them at [email protected] or call them at +1-800-928-0387.</p>
125
- </p> 0a6ba089eb<br />
126
- <br />
127
- <br />
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cut the cable and stream live TV with these awesome apps.md DELETED
@@ -1,166 +0,0 @@
1
-
2
- <h1>How to Watch Live TV on Your Smartphone with These Apps</h1>
3
- <p>Do you love watching live TV but hate paying for cable or satellite? Do you want to watch your favorite shows, sports, news, and movies anytime, anywhere? If you answered yes to these questions, then you might want to try out some of these live TV apps that let you stream live TV channels on your smartphone.</p>
4
- <p>Live TV apps are apps that allow you to watch live TV over the internet without a cable or satellite subscription. They offer a variety of channels from different genres and categories, such as entertainment, lifestyle, sports, news, kids, etc. Some of them also offer on-demand content, cloud DVR, multiple accounts, and other features that enhance your viewing experience.</p>
5
- <h2>live tv app download</h2><br /><p><b><b>DOWNLOAD</b> &raquo; <a href="https://urlin.us/2uSYyv">https://urlin.us/2uSYyv</a></b></p><br /><br />
6
- <p>Some of the benefits of watching live TV on your smartphone are:</p>
7
- <ul>
8
- <li>Convenience: You can watch live TV wherever you have an internet connection. You don't need a TV set or a remote control. You can also switch between channels easily with a swipe or a tap.</li>
9
- <li>Variety: You can choose from hundreds of channels from different networks and providers. You can also customize your channel lineup according to your preferences and interests.</li>
10
- <li>Affordability: You can save money by paying only for what you want to watch. You don't have to pay for expensive cable or satellite packages that include channels you never watch. You can also cancel anytime without any contracts or fees.</li>
11
- </ul>
12
- <p>In this article, we will review four of the best live TV apps that you can download on your smartphone. We will compare their features, benefits, pricing, and availability. We will also provide a table that shows a side-by-side comparison of the four live TV apps based on key criteria such as number of channels, DVR storage, simultaneous streams, etc. We will also give a recommendation based on our personal preference or experience with any of these apps.</p>
13
- <h2>YouTube TV: The Best Overall Live TV App</h2>
14
- <p>YouTube TV is one of the most popular and well-rounded live TV apps that you can download on your smartphone. It offers cable-free live TV from over 85 networks, including ABC, CBS, FOX, NBC, ESPN, CNN, HGTV, Disney Channel, and more. You can also access YouTube Originals and YouTube videos with your subscription.</p>
15
- <p>Some of the features and benefits of YouTube TV are:</p>
16
- <ul>
17
- <li>Cloud DVR: You can record unlimited shows and movies and store them for up to nine months. You can also fast-forward through ads on recorded content.</li>
18
- <li>Multiple accounts: You can create up to six accounts per household and each account gets its own DVR library and personalized recommendations.</li>
19
- <li>No contracts: You can cancel or pause your subscription anytime without any fees or penalties.</li>
20
- </ul>
21
- <p>Here is a screenshot of the YouTube TV app interface:</p>
22
- <img src="https://www.androidpolice.com/wp-content/uploads/2020/06/youtube-tv-new-ui-1.png" alt="YouTube TV app interface" width="300" height="600">
23
- <p>The pricing and availability of YouTube TV are:</p>
24
- <ul>
25
- <li>Pricing: YouTube TV costs $64.99 per month and you can get a free trial for 14 days. You can also add premium channels like HBO Max, Showtime, Starz, and more for an extra fee.</li>
26
- <li>Availability: YouTube TV is available nationwide in the US and you can watch it on your smartphone, tablet, computer, smart TV, streaming device, or game console. You can also cast it to your TV using Chromecast or AirPlay.</li>
27
- </ul>
28
- <h2>FuboTV: The Best Live TV App for Sports and Spanish-Language Channels</h2>
29
- <p>FuboTV is another great live TV app that you can download on your smartphone. It offers over 100 networks, including 40+ sports channels like NFL Network, NBA TV, MLB Network, beIN Sports, and more. It also has a large selection of Spanish-language channels like Univision, Telemundo, Galavision, and more.</p>
30
- <p>Some of the features and benefits of FuboTV are:</p>
31
- <p>live tv app download for android<br />
32
- live tv app download for pc<br />
33
- live tv app download apk<br />
34
- live tv app download free<br />
35
- live tv app download for iphone<br />
36
- live tv app download for windows 10<br />
37
- live tv app download for laptop<br />
38
- live tv app download for smart tv<br />
39
- live tv app download for firestick<br />
40
- live tv app download for ios<br />
41
- live tv app download india<br />
42
- live tv app download hd<br />
43
- live tv app download 2021<br />
44
- live tv app download jio<br />
45
- live tv app download airtel<br />
46
- live tv app download sony liv<br />
47
- live tv app download zee5<br />
48
- live tv app download hotstar<br />
49
- live tv app download voot<br />
50
- live tv app download mx player<br />
51
- live tv app download youtube tv<br />
52
- live tv app download hulu<br />
53
- live tv app download sling tv<br />
54
- live tv app download fubo tv<br />
55
- live tv app download philo<br />
56
- live tv app download oreo tv<br />
57
- live tv app download thop tv<br />
58
- live tv app download netflix<br />
59
- live tv app download disney plus<br />
60
- live tv app download prime video<br />
61
- live tv app download kodi<br />
62
- live tv app download plex<br />
63
- live tv app download xfinity stream<br />
64
- live tv app download spectrum tv<br />
65
- live tv app download directv now<br />
66
- live tv app download at&t watchtv<br />
67
- live tv app download locast<br />
68
- live tv app download pluto tv<br />
69
- live tv app download tubi tv<br />
70
- live tv app download crackle<br />
71
- live tv app download ustvnow<br />
72
- live tv app download redbox free live tv<br />
73
- live tv app download peacock streaming service <br />
74
- live tv app download paramount plus <br />
75
- live tv app download discovery plus <br />
76
- live nettv apk free android application <br />
77
- ustv apk free android application <br />
78
- tvcatchup apk free android application <br />
79
- aos apk free android application</p>
80
- <ul>
81
- <li>Sports and premium add-ons: You can customize your channel lineup with various add-ons like Sports Plus, Fubo Extra, Latino Plus, International Sports Plus, and more. You can also add premium channels like AMC Premiere, Showtime, Starz, and more for an extra fee.</li>
82
- <li>4K streaming: You can watch select events and channels in 4K resolution with compatible devices and internet speed.</li>
83
- <li>Family sharing: You can create up to six profiles per account and each profile gets its own DVR library and personalized recommendations. You can also stream on up to three devices at the same time.</li>
84
- </ul>
85
- <p>Here is a screenshot of the FuboTV app interface:</p>
86
- <img src="https://www.androidpolice.com/wp-content/uploads/2019/10/fubotv-android-tv-1.jpg" alt="FuboTV app interface" width="300" height="600">
87
- <p>The pricing and availability of FuboTV are:</p>
88
- <ul>
89
- <li>Pricing: FuboTV costs $64.99 per month for the base plan and you can get a free trial for seven days. You can also choose from other plans like Family ($69.99 per month), Elite ($79.99 per month), or Latino Quarterly ($33 per month).</li>
90
- <li>Availability: FuboTV is available in the US, Canada, and Spain and you can watch it on your smartphone, tablet, computer, smart TV, streaming device, or game console. You can also cast it to your TV using Chromecast or AirPlay.</li>
91
- </ul>
92
- <h2>Sling TV: The Most Affordable Live TV App with a Good Lineup</h2> <p>Sling TV is another live TV app that you can download on your smartphone. It offers customizable packages that let you choose the channels you want to watch. It has two base plans: Sling Orange and Sling Blue, each with a different channel lineup. You can also combine both plans or add extra channels with various add-ons.</p>
93
- <p>Some of the features and benefits of Sling TV are:</p>
94
- <ul>
95
- <li>Customizable packages: You can choose from over 50 channels with Sling Orange or Sling Blue, or get both for more variety. You can also add extra channels with add-ons like Sports Extra, Comedy Extra, Kids Extra, and more.</li>
96
- <li>Cloud DVR: You can record up to 50 hours of shows and movies with the base plans or upgrade to 200 hours with the Cloud DVR Plus add-on. You can also fast-forward through ads on recorded content.</li>
97
- <li>On-demand content: You can access thousands of movies and shows on demand with your subscription. You can also rent or buy new releases from the Sling TV app.</li>
98
- <li>Watch parties: You can watch live TV with up to three friends at the same time with the watch party feature. You can also chat and react with them in real-time.</li>
99
- </ul>
100
- <p>Here is a screenshot of the Sling TV app interface:</p>
101
- <img src="https://www.androidpolice.com/wp-content/uploads/2020/07/sling-tv-android-tv-1.jpg" alt="Sling TV app interface" width="300" height="600">
102
- <p>The pricing and availability of Sling TV are:</p>
103
- <ul>
104
- <li>Pricing: Sling TV costs $35 per month for either Sling Orange or Sling Blue, or $50 per month for both. You can also get a free trial for three days. You can also add extra channels or features with various add-ons for an extra fee.</li>
105
- <li>Availability: Sling TV is available in the US and Puerto Rico and you can watch it on your smartphone, tablet, computer, smart TV, streaming device, or game console. You can also cast it to your TV using Chromecast or AirPlay.</li>
106
- </ul>
107
- <h2>Philo TV: The Cheapest Live TV App for Entertainment and Lifestyle Channels</h2>
108
- <p>Philo TV is the cheapest live TV app that you can download on your smartphone. It offers 61 channels from various genres such as entertainment, lifestyle, comedy, reality, news, and more. Some of the channels include A&E, AMC, BET, Comedy Central, Discovery, Food Network, Hallmark Channel, MTV, Nickelodeon, TLC, and more.</p>
109
- <p>Some of the features and benefits of Philo TV are:</p>
110
- <ul>
111
- <li>Unlimited DVR: You can record as many shows and movies as you want and store them for up to 30 days. You can also fast-forward through ads on recorded content.</li>
112
- <li>Multiple streams: You can stream on up to three devices at the same time with one account.</li>
113
- <li>Add-on options: You can add premium channels like EPIX and STARZ for an extra fee. You can also access on-demand content from some of the channels with your subscription.</li>
114
- </ul>
115
- <p>Here is a screenshot of the Philo TV app interface:</p>
116
- <img src="https://www.androidpolice.com/wp-content/uploads/2018/11/philo-android-tv-1.jpg" alt="Philo TV app interface" width="300" height="600">
117
- <p>The pricing and availability of Philo TV are:</p>
118
- <ul>
119
- <li>Pricing: Philo TV costs $25 per month and you can get a free trial for seven days. You can also add premium channels like EPIX and STARZ for an extra fee.</li>
120
- <li>Availability: Philo TV is available in the US and you can watch it on your smartphone, tablet, computer, smart TV, streaming device, or game console. You can also cast it to your TV using Chromecast or AirPlay.</li>
121
- </ul>
122
- <h2>Conclusion</h2>
123
- <p>In this article, we have reviewed four of the best live TV apps that you can download on your smartphone. We have compared their features, benefits, pricing, and availability. We have also provided a table that shows a side-by-side comparison of the four live TV apps based on key criteria such as number of channels, DVR storage, simultaneous streams, etc.</p>
124
- <table border="1">
125
- <tr><th>Live TV App</th><th>Number of Channels</th><th>DVR Storage</th><th>Simultaneous Streams</th><th>Monthly Cost</th></tr>
126
- <tr><td>YouTube TV</td><td>85+</td><td>Unlimited (9 months)</td><td>3</td><td>$64.99</td></tr>
127
- <tr><td>FuboTV</td><td>100+</td><td>250 hours</td><td>3</td><td>$64.99</td></tr>
128
- <tr><td>Sling TV</td><td>50+</td><td>50 hours (200 hours with add-on)</td><td>1 (Sling Orange) or 3 (Sling Blue)</td><td>$35 (Sling Orange or Sling Blue) or $50 (both)</td></tr>
129
- <tr><td>Philo TV</td><td>61</td><td>Unlimited (30 days)</td><td>3</td><td>$25</td></tr>
130
- </table>
131
- <p>Based on our comparison, we can say that each live TV app has its own strengths and weaknesses. There is no one-size-fits-all solution for everyone. The best live TV app for you depends on your preferences, budget, and viewing habits.</p>
132
- <p>However, if we had to give a recommendation, we would say that YouTube TV is the best overall live TV app for most people. It has a good balance of features, benefits, pricing, and availability. It offers a wide range of channels from different genres and categories, including local and national networks. It also has a generous cloud DVR, multiple accounts, and no contracts. It is available nationwide in the US and supports most devices. It also has a free trial option that lets you try it out before you commit.</p>
133
- <p>Of course, you can also try out the other live TV apps and see which one suits you better. You can take advantage of their free trial options and compare them yourself. You might find that one of them meets your needs better than YouTube TV.</p>
134
- <p>The bottom line is that watching live TV on your smartphone is possible and convenient with these live TV apps. You can enjoy your favorite shows, sports, news, and movies anytime, anywhere without paying for cable or satellite. You can also save money by paying only for what you want to watch and canceling anytime without any fees or penalties.</p>
135
- <p>We hope that this article has helped you learn more about the best live TV apps that you can download on your smartphone. We also hope that you have found the best live TV app for you or at least have a better idea of what to look for. Happy streaming!</p>
136
- <h2>Frequently Asked Questions</h2>
137
- <p>Here are some of the frequently asked questions about live TV apps:</p>
138
- <h3>What is the difference between live TV apps and streaming services?</h3>
139
- <p>Live TV apps are apps that allow you to watch live TV over the internet without a cable or satellite subscription. They offer a variety of channels from different genres and categories, such as entertainment, lifestyle, sports, news, kids, etc. Some of them also offer on-demand content, cloud DVR, multiple accounts, and other features that enhance your viewing experience.</p>
140
- <p>Streaming services are apps that allow you to watch on-demand content over the internet without a cable or satellite subscription. They offer a library of movies, shows, documentaries, originals, and more that you can watch at your own pace. Some of them also offer live TV channels as an add-on option.</p>
141
- <h3>How much internet speed do I need to watch live TV on my smartphone?</h3>
142
- <p>The internet speed that you need to watch live TV on your smartphone depends on the quality of the video that you want to watch. Generally speaking, the higher the quality, the more bandwidth you need. Here are some of the recommended internet speeds for different video qualities:</p>
143
- <ul>
144
- <li>Standard definition (SD): 3 Mbps</li>
145
- <li>High definition (HD): 5 Mbps</li>
146
- <li>Ultra high definition (UHD) or 4K: 25 Mbps</li>
147
- </ul>
148
- <p>You can check your internet speed using online tools like Speedtest or Fast.com. You can also contact your internet service provider (ISP) if you have any issues with your internet speed or connection.</p>
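To make those thresholds concrete, here is a small sketch in plain Python that maps a measured download speed to the highest tier in the list above; the speed value is whatever your Speedtest or Fast.com run reports, and the tier names and minimums come from this article, not from any standard:

```python
# Recommended minimum download speeds from the list above, in Mbps.
QUALITY_THRESHOLDS = [
    ("UHD/4K", 25.0),
    ("HD", 5.0),
    ("SD", 3.0),
]

def highest_quality(measured_mbps: float) -> str:
    """Return the best quality tier a connection can sustain."""
    for name, minimum in QUALITY_THRESHOLDS:
        if measured_mbps >= minimum:
            return name
    return "below SD minimum (expect buffering)"

print(highest_quality(12.0))  # a 12 Mbps connection -> "HD"
```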
149
- <h3>Can I watch live TV on my smartphone when I travel?</h3>
150
- <p>The answer to this question depends on the live TV app that you use and the location that you travel to. Some live TV apps are available only in certain countries or regions and may not work when you travel outside of those areas. Some live TV apps may also have geo-restrictions or blackouts on some channels or content depending on your location.</p>
151
- <p>To avoid any issues when you travel, you should check the availability and terms of service of the live TV app that you use before you travel. You should also check the local laws and regulations regarding streaming content over the internet in the country or region that you travel to.</p>
152
- <h3>How can I watch live TV on my smartphone without using too much data?</h3> <p>Watching live TV on your smartphone can use a lot of data, especially if you watch in high quality or for a long time. To avoid using too much data, you can do the following:</p>
153
- <ul>
154
- <li>Use a Wi-Fi connection whenever possible. This way, you can save your mobile data for other purposes. You can also use public Wi-Fi networks, but make sure they are secure and reliable.</li>
155
- <li>Lower the video quality of the live TV app. Most live TV apps allow you to adjust the video quality settings according to your preference and network speed. You can choose a lower quality option, such as SD or HD, instead of UHD or 4K. This will reduce the amount of data that you use while watching live TV.</li>
156
- <li>Limit the amount of time that you watch live TV. You can also set a reminder or an alarm to remind you to stop watching after a certain period of time. This will help you avoid binge-watching and using too much data.</li>
157
- </ul>
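The second tip is easy to quantify, since streaming data usage is roughly bitrate times viewing time. A back-of-the-envelope sketch using the speeds from the previous FAQ (real usage varies with each app's codec and compression):

```python
def gb_per_hour(bitrate_mbps: float) -> float:
    # Mbps -> megabits per hour -> gigabytes per hour (8 bits/byte, 1000 MB/GB).
    return bitrate_mbps * 3600 / 8 / 1000

for label, mbps in [("SD", 3), ("HD", 5), ("UHD/4K", 25)]:
    print(f"{label}: ~{gb_per_hour(mbps):.2f} GB per hour")
# SD: ~1.35, HD: ~2.25, UHD/4K: ~11.25 GB per hour
```

So dropping from 4K to HD cuts usage by roughly a factor of five, which is why the quality setting is the single most effective lever.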
158
- <h3>What are some of the drawbacks or challenges of watching live TV on my smartphone?</h3>
159
- <p>Watching live TV on your smartphone can be a convenient and enjoyable way to watch your favorite shows, sports, news, and movies. However, it can also have some drawbacks or challenges, such as:</p>
160
- <ul>
161
- <li>Battery drain: Watching live TV on your smartphone can consume a lot of battery power, especially if you watch in high quality or for a long time. You may need to charge your smartphone more often or carry a portable charger with you.</li>
162
- <li>Screen size: Watching live TV on your smartphone can be less immersive and satisfying than watching on a larger screen, such as a TV or a computer monitor. You may miss some details or have difficulty reading subtitles or captions.</li>
163
- <li>Buffering or lagging: Watching live TV on your smartphone can be affected by your internet speed and connection. You may experience buffering, lagging, freezing, or skipping of the video if your internet is slow or unstable. You may also have issues with the audio and video synchronization.</li>
164
- </ul></p> 197e85843d<br />
165
- <br />
166
- <br />
spaces/1phancelerku/anime-remove-background/Among Us Imposter Hack APK A Free and Easy Way to Be the Imposter in Every Game.md DELETED
@@ -1,130 +0,0 @@
1
- <br />
2
- <h1>Among Us Imposter Hack Apk: Is It Worth It?</h1>
3
- <p>Among Us is a popular online multiplayer game that has taken the internet by storm. In this game, you can either be a crewmate or an impostor on a spaceship or a base. As a crewmate, your goal is to complete tasks and find the impostor. As an impostor, your goal is to kill the crewmates and sabotage their mission.</p>
4
- <p>But what if you want to always be the impostor and have more fun in the game? That's where the imposter hack apk comes in. This is a modified version of the game that allows you to always be the impostor, as well as use various cheats and mods to make the game easier or more interesting. But is it worth using this hack? What are the pros and cons of it? And are there any alternatives to it? In this article, we will answer these questions and more.</p>
5
- <h2>among us imposter hack apk</h2><br /><p><b><b>DOWNLOAD</b> &#10042; <a href="https://jinyurl.com/2uNQ8Q">https://jinyurl.com/2uNQ8Q</a></b></p><br /><br />
6
- <h2>Pros and Cons of Using Imposter Hack Apk</h2>
7
- <p>The imposter hack apk can be tempting for many players who want to experience the thrill of being the impostor every time they play. However, before you download and install this hack, you should be aware of the advantages and disadvantages of using it.</p>
8
- <h3>Pros</h3>
9
- <ul>
10
- <li>You can always be the impostor, which can be more fun and challenging than being a crewmate.</li>
11
- <li>You can use various cheats and mods to enhance your gameplay, such as speed hack, wallhack, kill cooldown bypass, invisibility, etc.</li>
12
- <li>You can troll other players and make them rage quit or accuse each other.</li>
13
- </ul>
14
- <h3>Cons</h3>
15
- <ul>
16
- <li>You risk getting banned from the game if you are caught using the hack by the developers or other players.</li>
17
- <li>You may infect your device with virus or malware if you download the hack from an untrusted source.</li>
18
- <li>You may ruin the game for other players who want to play fairly and legitimately.</li>
19
- </ul>
20
- <h2>Alternatives to Imposter Hack Apk</h2>
21
- <p>If you are looking for a way to play as impostor without using the hack apk, there are some alternatives that you can try. Here are some of them:</p>
22
- <h3>Use Legit Mods from GitHub or Other Sources</h3>
23
- <p>There are some legit mods for Among Us that you can download from GitHub or other sources. These mods are created by fans of the game who want to add new features or modes to it. For example, there are mods that add roles like sheriff, doctor, jester, etc. to the game. There are also mods that change the map, graphics, sounds, etc. of the game. These mods are usually safe and compatible with the original game, as long as you follow the instructions on how to install them.</p>
24
- <h3>Play with Friends Who Agree to Use Mods</h3>
25
- <p>If you want to use mods with other players, you should make sure that they agree to use them as well. This way, you can avoid getting reported or banned for using mods. You can also have more fun and variety in your games. You can create a private lobby with your friends and use a mod menu to select which mods you want to use. You can also join public lobbies that use mods by looking for codes on Discord or Reddit.</p>
26
- <h3>Practice as Impostor in Freeplay Mode</h3>
27
- <p>If you want to improve your skills as impostor without using any hacks or mods, you can practice in the Freeplay mode. This mode allows you to play as impostor on any map with dummy crewmates. You can kill, vent, sabotage, and lie as much as you want without any consequences. You can also customize the game settings to make it easier or harder for yourself. This mode is a great way to learn the map layout, vent locations, task locations, etc. You can also practice your deception and persuasion skills by talking to yourself or recording your gameplay.</p>
28
- <h2>Conclusion</h2>
29
- <p>Among Us is a fun and exciting game that can be enjoyed by anyone who likes social deduction and deception games. However, some players may want to always be the impostor and use hacks or mods to achieve that. While this may seem like a good idea at first, it can also have some drawbacks and risks. Therefore, before you decide to use the imposter hack apk, you should weigh the pros and cons of it and consider the alternatives to it. You may find that playing as impostor without hacks or mods can be more rewarding and satisfying in the long run.</p>
30
- <p>among us always imposter mod apk download<br />
31
- among us hack apk imposter every time<br />
32
- among us mod apk imposter menu<br />
33
- among us imposter hack apk latest version<br />
34
- among us imposter hack apk no ads<br />
35
- among us imposter hack apk for pc<br />
36
- among us imposter hack apk android<br />
37
- among us imposter hack apk ios<br />
38
- among us imposter hack apk 2021<br />
39
- among us imposter hack apk free download<br />
40
- among us imposter hack apk unlimited skins<br />
41
- among us imposter hack apk online<br />
42
- among us imposter hack apk no ban<br />
43
- among us imposter hack apk 2020<br />
44
- among us imposter hack apk no root<br />
45
- among us imposter hack apk mediafıre<br />
46
- among us imposter hack apk reddit<br />
47
- among us imposter hack apk revdl<br />
48
- among us imposter hack apk happymod<br />
49
- among us imposter hack apk mod menu<br />
50
- among us imposter hack apk god mode<br />
51
- among us imposter hack apk anti ban<br />
52
- among us imposter hack apk all unlocked<br />
53
- among us imposter hack apk see imposters<br />
54
- among us imposter hack apk unlimited money<br />
55
- among us imposter hack apk mega mod<br />
56
- among us imposter hack apk premium unlocked<br />
57
- among us imposter hack apk no verification<br />
58
- among us imposter hack apk invisible mode<br />
59
- among us imposter hack apk kill cooldown<br />
60
- among us imposter hack apk vent as crewmate<br />
61
- among us imposter hack apk fake impostor<br />
62
- among us imposter hack apk always win<br />
63
- among us imposter hack apk wallhack<br />
64
- among us imposter hack apk radar mod<br />
65
- among us imposter hack apk speed mod<br />
66
- among us imposter hack apk voice chat mod<br />
67
- among us imposter hack apk pet mod<br />
68
- among us imposter hack apk zoom mod<br />
69
- among us imposter hack apk color mod<br />
70
- among us imposter hack apk ghost mode mod<br />
71
- among us imposter hack apk chat mod <br />
72
- among us imposter hack apk role mod <br />
73
- among us imposter hack apk custom game mod <br />
74
- among us imposter hack apk map mod <br />
75
- among us imposter hack apk task mod <br />
76
- among us imposter hack apk vote mod <br />
77
- among us imposter hack apk sabotage mod</p>
78
- <p>Here are some tips on how to play as impostor without hacks or mods:</p>
79
- <ul>
80
- <li>Be observant and attentive to your surroundings and the other players.</li>
81
- <li>Be confident and convincing when you lie or accuse someone.</li>
82
- <li>Be strategic and creative when you kill or sabotage.</li>
83
- <li>Be adaptable and flexible when things don't go your way.</li>
84
- <li>Have fun and don't take the game too seriously.</li>
85
- </ul>
86
- <h2>FAQs</h2>
87
- <h3>How to Download and Install Imposter Hack Apk?</h3>
88
- <p>If you still want to try the imposter hack apk, here are the steps on how to download and install it:</p>
89
- <ol>
90
- <li>Find a reliable source that offers the imposter hack apk file. You can search on Google or YouTube for reviews or recommendations.</li>
91
- <li>Download the apk file to your device. Make sure you have enough storage space and a good internet connection.</li>
92
- <li>Enable the installation of unknown sources on your device. Go to Settings > Security > Unknown Sources and toggle it on.</li>
93
- <li>Locate the apk file on your device and tap on it to install it.</li>
94
- <li>Wait for the installation to finish and launch the game.</li>
95
- </ol>
96
- <h3>How to Use Imposter Hack Apk in Among Us?</h3>
97
- <p>Once you have installed the imposter hack apk, you can use it in Among Us by following these steps:</p>
98
- <ol>
99
- <li>Open the game and tap on the mod menu icon on the top left corner of the screen.</li>
100
- <li>Select the cheats and mods that you want to use from the list. You can toggle them on or off as you wish.</li>
101
- <li>Join or create a lobby and start the game. You will always be the impostor and have access to the cheats and mods that you selected.</li>
102
- <li>Enjoy the game and try not to get caught or banned.</li>
103
- </ol>
104
- <h3>How to Avoid Getting Banned for Using Imposter Hack Apk?</h3>
105
- <p>If you use the imposter hack apk, you run the risk of getting banned from the game by the developers or other players. To avoid this, you should follow these tips:</p>
106
- <ul>
107
- <li>Don't use too many or obvious cheats and mods that will make other players suspicious or angry.</li>
108
- <li>Don't join public lobbies that have anti-cheat systems or strict rules against hacking or modding.</li>
109
- <li>Don't brag or boast about using the hack apk in chat or voice chat.</li>
110
- <li>Don't use the hack apk too often or for too long. Take breaks and play normally sometimes.</li>
111
- </ul>
112
- <h3>How to Remove Imposter Hack Apk from Your Device?</h3>
113
- <p>If you want to uninstall the imposter hack apk from your device, you can do so by following these steps:</p>
114
- <ol>
115
- <li>Go to Settings > Apps > Among Us and tap on Uninstall.</li>
116
- <li>Confirm your action and wait for the uninstallation to finish.</li>
117
- <li>Delete any leftover files or folders related to the hack apk from your device storage.</li>
118
- <li>Restart your device and check if the game is completely removed.</li>
119
- </ol>
120
- <h3>How to Report Someone Who is Using Imposter Hack Apk?</h3>
121
- <p>If you encounter someone who is using the imposter hack apk in Among Us, you can report them by following these steps:</p>
122
- <ol>
123
- <li>Gather evidence of their hacking or modding such as screenshots, videos, chat logs, etc. that show their cheating or modding behavior.</li>
124
- <li>Go to the game settings and tap on the report button next to their name.</li>
125
- <li>Select the reason for your report and attach your evidence if possible.</li>
126
- <li>Submit your report and wait for the developers to review it and take action.</li>
127
- </ol>
128
- <p>I hope this article has helped you understand more about the imposter hack apk and how to use it or avoid it in Among Us. If you have any questions or feedback, please leave a comment below. Thank you for reading and have a great day!</p> 197e85843d<br />
129
- <br />
130
- <br />
spaces/1phancelerku/anime-remove-background/Download and Stream Brazil Zonal LP 2019 ft. Tawinji - The Best Afrobeat Song of 2022.md DELETED
@@ -1,108 +0,0 @@
1
-
2
- <h1>Download Brazil Zonal LP 2019 MP3: A Guide for Music Lovers</h1>
3
- <p>If you are a fan of Afrobeat music, you might have heard of Brazil Zonal LP 2019, a popular album by NBM, a group of Nigerian artists. This album features 31 minutes of energetic and catchy songs that showcase the diversity and richness of African culture. In this article, we will show you how to download Brazil Zonal LP 2019 MP3 for free and legally, and how to enjoy it to the fullest.</p>
4
- <h2>What is Brazil Zonal LP 2019?</h2>
5
- <p>Brazil Zonal LP 2019 is an album by NBM, which stands for Neo Black Movement of Africa. NBM is a group of Nigerian musicians who are also members of a social movement that promotes African unity, solidarity, and liberation. The group was founded in 1977 at the University of Benin in Nigeria, and has since grown into a global network of chapters and zones.</p>
6
- <h2>download brazil zonal lp 2019 mp3</h2><br /><p><b><b>Download File</b> &#9913; <a href="https://jinyurl.com/2uNNLS">https://jinyurl.com/2uNNLS</a></b></p><br /><br />
7
- <p>The album was released in 2020, and consists of one track that lasts for 31 minutes. The track is a compilation of various songs that were performed by NBM members at their Brazil Zone Jollification event in 2019. The songs are in different languages, such as English, Yoruba, Igbo, Hausa, and Portuguese, and feature elements of Afrobeat, reggae, highlife, juju, and funk. The songs are upbeat, lively, and inspiring, and convey messages of freedom, justice, brotherhood, and love.</p>
8
- <h2>Why You Should Download Brazil Zonal LP 2019 MP3?</h2>
9
- <p>There are many reasons why you should download Brazil Zonal LP 2019 MP3. Here are some of them:</p>
10
- <ul>
11
- <li><b>Quality:</b> By downloading the album as an MP3 file, you can enjoy it with high sound quality and clarity. You can also choose the bitrate that suits your preference and device.</li>
12
- <li><b>Convenience:</b> By downloading the album as an MP3 file, you can listen to it anytime and anywhere you want. You don't need an internet connection or a streaming service subscription to play it. You can also transfer it to any device that supports MP3 playback, such as your smartphone, tablet, laptop, or MP3 player.</li>
13
- <li><b>Legality:</b> By downloading the album from legal sources, you can support the artists and respect their rights. You can also avoid any potential risks or penalties that might come from using illegal or pirated sources.</li>
14
- </ul>
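To put the bitrate choice in the Quality point into numbers, here is a quick size estimate for this 31-minute album, using size ≈ bitrate × duration (plain Python; real files add a little header and tag overhead):

```python
def mp3_size_mb(bitrate_kbps: int, minutes: float) -> float:
    # kbps -> kilobytes per second -> megabytes over the full duration.
    return bitrate_kbps / 8 * minutes * 60 / 1000

for kbps in (128, 192, 320):
    print(f"{kbps} kbps: ~{mp3_size_mb(kbps, 31):.1f} MB")
# 128 kbps: ~29.8 MB, 192 kbps: ~44.6 MB, 320 kbps: ~74.4 MB
```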
15
- <h2>How to Download Brazil Zonal LP 2019 MP3?</h2>
16
- <p>There are several ways to download Brazil Zonal LP 2019 MP3 for free and legally. Here are some of the most common and reliable methods:</p>
17
- <h3>Download from YouTube</h3>
18
- <p>One of the easiest ways to download the album is from YouTube, where you can find the official video of the album uploaded by NBM Brazil Zone. To download the album from YouTube, you need to use a YouTube to MP3 converter, which is a tool that can convert any YouTube video into an MP3 file. There are many YouTube to MP3 converters available online, such as YTMP3, 4K Video Downloader, and Online Video Converter. Here are the steps to download the album from YouTube using YTMP3:</p>
19
- <ol>
20
- <li>Go to the YouTube video of the album and copy its URL.</li>
21
- <li>Go to YTMP3 website and paste the URL in the input box.</li>
22
- <li>Select MP3 as the output format and click Convert.</li>
23
- <li>Wait for the conversion to finish and click Download.</li>
24
- <li>Save the MP3 file to your device and enjoy.</li>
25
- </ol>
26
- <h3>Download from SoundCloud</h3>
27
- <p>Another way to download the album is from SoundCloud, where you can find the official track of the album uploaded by NBM Brazil Zone. To download the album from SoundCloud, you need to use a SoundCloud to MP3 converter, which is a tool that can convert any SoundCloud track into an MP3 file. There are many SoundCloud to MP3 converters available online, such as SCDL, SoundCloud Downloader, and KlickAud. Here are the steps to download the album from SoundCloud using SCDL:</p>
28
- <ol>
29
- <li>Go to the SoundCloud track of the album and copy its URL.</li>
30
- <li>Go to SCDL website and paste the URL in the input box.</li>
31
- <li>Click Download and wait for the process to complete.</li>
32
- <li>Save the MP3 file to your device and enjoy.</li>
33
- </ol>
34
- <h3>Download from Other Websites</h3>
35
- <p>A third way to download the album is from other websites that offer free music downloads. These websites usually have a large collection of songs and albums that you can download without any registration or payment. However, you need to be careful when using these websites, as some of them might contain viruses, malware, or illegal content. Some of the websites that you can try are Mp3Juices, Free Music Archive, and Jamendo. Here are the steps to download the album from Mp3Juices:</p>
36
- <ol>
37
- <li>Go to Mp3Juices website and type Brazil Zonal LP 2019 in the search box.</li>
38
- <li>Select the result that matches the album and click Download.</li>
39
- <li>Choose a server and wait for the download to start.</li>
40
- <li>Save the MP3 file to your device and enjoy.</li>
41
- </ol>
42
- <h2>How to Enjoy Brazil Zonal LP 2019 MP3?</h2>
43
- <p>Now that you have downloaded Brazil Zonal LP 2019 MP3, you might wonder how to enjoy it to the fullest. Here are some tips on how to listen to and appreciate the album:</p>
85
- <h3>Use a Good MP3 Player</h3>
86
- <p>To play the album with high quality and features, you need a good MP3 player that can support different formats, bitrates, and playlists. You can use either a dedicated MP3 player device or an app on your smartphone or tablet. Some of the best MP3 players that you can use are VLC Media Player, Winamp, and Poweramp. These players have advanced settings that allow you to adjust the volume, equalizer, bass, treble, and other aspects of the sound. They also have features that let you create playlists, shuffle songs, repeat songs, and more.</p>
87
- <h3>Use a Good Headphone or Speaker</h3>
88
- <p>To experience the album at its best, you need headphones or a speaker that deliver clear, balanced, and immersive sound. You can choose wired or wireless gear, depending on your preference and budget. Well-regarded headphones include the Sony WH-1000XM4, Bose QuietComfort 35 II, and Sennheiser HD 650. The Sony and Bose models offer active noise canceling that blocks out external noise, along with comfortable designs and long battery life; the HD 650 is a wired, open-back design prized for its natural, detailed sound rather than for isolation. Well-regarded speakers include the JBL Flip 5, Sonos One, and Bose SoundLink Revolve. The JBL and Bose are compact, water-resistant portables with wireless connectivity (the SoundLink Revolve adds 360-degree sound), while the Sonos One is a mains-powered smart speaker with room-filling sound.</p>
89
- <h3>Learn More About the Album and the Artists</h3>
90
- <p>To appreciate the album more, you can learn more about the album and the artists behind it. You can find more information and background about the album and the artists on their official websites, social media pages, and online articles. You can also watch their interviews, documentaries, and live performances on YouTube and other platforms. By learning more about the album and the artists, you can understand their vision, inspiration, and message better. You can also discover more of their songs and albums that you might like.</p>
91
- <h2>Conclusion</h2>
92
- <p>Brazil Zonal LP 2019 is a great album for music lovers who enjoy Afrobeat music and African culture. It is an album that showcases the talent, diversity, and spirit of NBM, a group of Nigerian artists and activists. In this article, we have shown you how to download Brazil Zonal LP 2019 MP3 for free and legally, and how to enjoy it to the fullest. We hope that you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. And if you liked this article, please share it with your friends and family who might be interested in this topic. Thank you for reading and happy listening!</p>
93
- <h2>FAQs</h2>
94
- <p>Here are some of the frequently asked questions and answers about Brazil Zonal LP 2019 MP3:</p>
95
- <ul>
96
- <li><b>Q: What is NBM?</b></li>
97
- <li>A: NBM stands for Neo Black Movement of Africa, a group of Nigerian musicians who are also members of a social movement that promotes African unity, solidarity, and liberation.</li>
98
- <li><b>Q: What is the genre of Brazil Zonal LP 2019?</b></li>
99
- <li>A: The genre of Brazil Zonal LP 2019 is Afrobeat, which is a fusion of African music, jazz, funk, soul, and rock.</li>
100
- <li><b>Q: How long is Brazil Zonal LP 2019?</b></li>
101
- <li>A: Brazil Zonal LP 2019 is 31 minutes long.</li>
102
- <li><b>Q: How many songs are in Brazil Zonal LP 2019?</b></li>
103
- <li>A: Brazil Zonal LP 2019 consists of one track that is a compilation of various songs.</li>
104
- <li><b>Q: Where can I find more music by NBM?</b></li>
105
- <li>A: You can find more music by NBM on their official website, YouTube channel, SoundCloud page, and other streaming platforms.</li>
106
- </ul>

spaces/2023Liu2023/bingo/src/components/ui/icons.tsx DELETED
@@ -1,504 +0,0 @@
1
- 'use client'
2
-
3
- import * as React from 'react'
4
-
5
- import { cn } from '@/lib/utils'
6
-
7
- function IconNextChat({
8
- className,
9
- inverted,
10
- ...props
11
- }: React.ComponentProps<'svg'> & { inverted?: boolean }) {
12
- const id = React.useId()
13
-
14
- return (
15
- <svg
16
- viewBox="0 0 17 17"
17
- fill="none"
18
- xmlns="http://www.w3.org/2000/svg"
19
- className={cn('h-4 w-4', className)}
20
- {...props}
21
- >
22
- <defs>
23
- <linearGradient
24
- id={`gradient-${id}-1`}
25
- x1="10.6889"
26
- y1="10.3556"
27
- x2="13.8445"
28
- y2="14.2667"
29
- gradientUnits="userSpaceOnUse"
30
- >
31
- <stop stopColor={inverted ? 'white' : 'black'} />
32
- <stop
33
- offset={1}
34
- stopColor={inverted ? 'white' : 'black'}
35
- stopOpacity={0}
36
- />
37
- </linearGradient>
38
- <linearGradient
39
- id={`gradient-${id}-2`}
40
- x1="11.7555"
41
- y1="4.8"
42
- x2="11.7376"
43
- y2="9.50002"
44
- gradientUnits="userSpaceOnUse"
45
- >
46
- <stop stopColor={inverted ? 'white' : 'black'} />
47
- <stop
48
- offset={1}
49
- stopColor={inverted ? 'white' : 'black'}
50
- stopOpacity={0}
51
- />
52
- </linearGradient>
53
- </defs>
54
- <path
55
- d="M1 16L2.58314 11.2506C1.83084 9.74642 1.63835 8.02363 2.04013 6.39052C2.4419 4.75741 3.41171 3.32057 4.776 2.33712C6.1403 1.35367 7.81003 0.887808 9.4864 1.02289C11.1628 1.15798 12.7364 1.8852 13.9256 3.07442C15.1148 4.26363 15.842 5.83723 15.9771 7.5136C16.1122 9.18997 15.6463 10.8597 14.6629 12.224C13.6794 13.5883 12.2426 14.5581 10.6095 14.9599C8.97637 15.3616 7.25358 15.1692 5.74942 14.4169L1 16Z"
56
- fill={inverted ? 'black' : 'white'}
57
- stroke={inverted ? 'black' : 'white'}
58
- strokeWidth={2}
59
- strokeLinecap="round"
60
- strokeLinejoin="round"
61
- />
62
- <mask
63
- id="mask0_91_2047"
64
- style={{ maskType: 'alpha' }}
65
- maskUnits="userSpaceOnUse"
66
- x={1}
67
- y={0}
68
- width={16}
69
- height={16}
70
- >
71
- <circle cx={9} cy={8} r={8} fill={inverted ? 'black' : 'white'} />
72
- </mask>
73
- <g mask="url(#mask0_91_2047)">
74
- <circle cx={9} cy={8} r={8} fill={inverted ? 'black' : 'white'} />
75
- <path
76
- d="M14.2896 14.0018L7.146 4.8H5.80005V11.1973H6.87681V6.16743L13.4444 14.6529C13.7407 14.4545 14.0231 14.2369 14.2896 14.0018Z"
77
- fill={`url(#gradient-${id}-1)`}
78
- />
79
- <rect
80
- x="11.2222"
81
- y="4.8"
82
- width="1.06667"
83
- height="6.4"
84
- fill={`url(#gradient-${id}-2)`}
85
- />
86
- </g>
87
- </svg>
88
- )
89
- }
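-
- // Hypothetical usage: <IconNextChat className="h-6 w-6" inverted />
- // Every icon below follows the same pattern: merge a className via cn() and spread any extra SVG props.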
90
-
91
- function IconOpenAI({ className, ...props }: React.ComponentProps<'svg'>) {
92
- return (
93
- <svg
94
- fill="currentColor"
95
- viewBox="0 0 24 24"
96
- role="img"
97
- xmlns="http://www.w3.org/2000/svg"
98
- className={cn('h-4 w-4', className)}
99
- {...props}
100
- >
101
- <title>OpenAI icon</title>
102
- <path d="M22.2819 9.8211a5.9847 5.9847 0 0 0-.5157-4.9108 6.0462 6.0462 0 0 0-6.5098-2.9A6.0651 6.0651 0 0 0 4.9807 4.1818a5.9847 5.9847 0 0 0-3.9977 2.9 6.0462 6.0462 0 0 0 .7427 7.0966 5.98 5.98 0 0 0 .511 4.9107 6.051 6.051 0 0 0 6.5146 2.9001A5.9847 5.9847 0 0 0 13.2599 24a6.0557 6.0557 0 0 0 5.7718-4.2058 5.9894 5.9894 0 0 0 3.9977-2.9001 6.0557 6.0557 0 0 0-.7475-7.0729zm-9.022 12.6081a4.4755 4.4755 0 0 1-2.8764-1.0408l.1419-.0804 4.7783-2.7582a.7948.7948 0 0 0 .3927-.6813v-6.7369l2.02 1.1686a.071.071 0 0 1 .038.052v5.5826a4.504 4.504 0 0 1-4.4945 4.4944zm-9.6607-4.1254a4.4708 4.4708 0 0 1-.5346-3.0137l.142.0852 4.783 2.7582a.7712.7712 0 0 0 .7806 0l5.8428-3.3685v2.3324a.0804.0804 0 0 1-.0332.0615L9.74 19.9502a4.4992 4.4992 0 0 1-6.1408-1.6464zM2.3408 7.8956a4.485 4.485 0 0 1 2.3655-1.9728V11.6a.7664.7664 0 0 0 .3879.6765l5.8144 3.3543-2.0201 1.1685a.0757.0757 0 0 1-.071 0l-4.8303-2.7865A4.504 4.504 0 0 1 2.3408 7.872zm16.5963 3.8558L13.1038 8.364 15.1192 7.2a.0757.0757 0 0 1 .071 0l4.8303 2.7913a4.4944 4.4944 0 0 1-.6765 8.1042v-5.6772a.79.79 0 0 0-.407-.667zm2.0107-3.0231l-.142-.0852-4.7735-2.7818a.7759.7759 0 0 0-.7854 0L9.409 9.2297V6.8974a.0662.0662 0 0 1 .0284-.0615l4.8303-2.7866a4.4992 4.4992 0 0 1 6.6802 4.66zM8.3065 12.863l-2.02-1.1638a.0804.0804 0 0 1-.038-.0567V6.0742a4.4992 4.4992 0 0 1 7.3757-3.4537l-.142.0805L8.704 5.459a.7948.7948 0 0 0-.3927.6813zm1.0976-2.3654l2.602-1.4998 2.6069 1.4998v2.9994l-2.5974 1.4997-2.6067-1.4997Z" />
103
- </svg>
104
- )
105
- }
106
-
107
- function IconGitHub({ className, ...props }: React.ComponentProps<'svg'>) {
108
- return (
109
- <svg
110
- role="img"
111
- viewBox="0 0 24 24"
112
- xmlns="http://www.w3.org/2000/svg"
113
- fill="currentColor"
114
- className={cn('h-4 w-4', className)}
115
- {...props}
116
- >
117
- <title>GitHub</title>
118
- <path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12" />
119
- </svg>
120
- )
121
- }
122
-
123
- function IconSeparator({ className, ...props }: React.ComponentProps<'svg'>) {
124
- return (
125
- <svg
126
- fill="none"
127
- shapeRendering="geometricPrecision"
128
- stroke="currentColor"
129
- strokeLinecap="round"
130
- strokeLinejoin="round"
131
- strokeWidth="1"
132
- viewBox="0 0 24 24"
133
- aria-hidden="true"
134
- className={cn('h-4 w-4', className)}
135
- {...props}
136
- >
137
- <path d="M16.88 3.549L7.12 20.451"></path>
138
- </svg>
139
- )
140
- }
141
-
142
- function IconArrowDown({ className, ...props }: React.ComponentProps<'svg'>) {
143
- return (
144
- <svg
145
- xmlns="http://www.w3.org/2000/svg"
146
- viewBox="0 0 256 256"
147
- fill="currentColor"
148
- className={cn('h-4 w-4', className)}
149
- {...props}
150
- >
151
- <path d="m205.66 149.66-72 72a8 8 0 0 1-11.32 0l-72-72a8 8 0 0 1 11.32-11.32L120 196.69V40a8 8 0 0 1 16 0v156.69l58.34-58.35a8 8 0 0 1 11.32 11.32Z" />
152
- </svg>
153
- )
154
- }
155
-
156
- function IconArrowRight({ className, ...props }: React.ComponentProps<'svg'>) {
157
- return (
158
- <svg
159
- xmlns="http://www.w3.org/2000/svg"
160
- viewBox="0 0 256 256"
161
- fill="currentColor"
162
- className={cn('h-4 w-4', className)}
163
- {...props}
164
- >
165
- <path d="m221.66 133.66-72 72a8 8 0 0 1-11.32-11.32L196.69 136H40a8 8 0 0 1 0-16h156.69l-58.35-58.34a8 8 0 0 1 11.32-11.32l72 72a8 8 0 0 1 0 11.32Z" />
166
- </svg>
167
- )
168
- }
169
-
170
- function IconUser({ className, ...props }: React.ComponentProps<'svg'>) {
171
- return (
172
- <svg
173
- xmlns="http://www.w3.org/2000/svg"
174
- viewBox="0 0 256 256"
175
- fill="currentColor"
176
- className={cn('h-4 w-4', className)}
177
- {...props}
178
- >
179
- <path d="M230.92 212c-15.23-26.33-38.7-45.21-66.09-54.16a72 72 0 1 0-73.66 0c-27.39 8.94-50.86 27.82-66.09 54.16a8 8 0 1 0 13.85 8c18.84-32.56 52.14-52 89.07-52s70.23 19.44 89.07 52a8 8 0 1 0 13.85-8ZM72 96a56 56 0 1 1 56 56 56.06 56.06 0 0 1-56-56Z" />
180
- </svg>
181
- )
182
- }
183
-
184
- function IconPlus({ className, ...props }: React.ComponentProps<'svg'>) {
185
- return (
186
- <svg
187
- xmlns="http://www.w3.org/2000/svg"
188
- viewBox="0 0 256 256"
189
- fill="currentColor"
190
- className={cn('h-4 w-4', className)}
191
- {...props}
192
- >
193
- <path d="M224 128a8 8 0 0 1-8 8h-80v80a8 8 0 0 1-16 0v-80H40a8 8 0 0 1 0-16h80V40a8 8 0 0 1 16 0v80h80a8 8 0 0 1 8 8Z" />
194
- </svg>
195
- )
196
- }
197
-
198
- function IconArrowElbow({ className, ...props }: React.ComponentProps<'svg'>) {
199
- return (
200
- <svg
201
- xmlns="http://www.w3.org/2000/svg"
202
- viewBox="0 0 256 256"
203
- fill="currentColor"
204
- className={cn('h-4 w-4', className)}
205
- {...props}
206
- >
207
- <path d="M200 32v144a8 8 0 0 1-8 8H67.31l34.35 34.34a8 8 0 0 1-11.32 11.32l-48-48a8 8 0 0 1 0-11.32l48-48a8 8 0 0 1 11.32 11.32L67.31 168H184V32a8 8 0 0 1 16 0Z" />
208
- </svg>
209
- )
210
- }
211
-
212
- function IconSpinner({ className, ...props }: React.ComponentProps<'svg'>) {
213
- return (
214
- <svg
215
- xmlns="http://www.w3.org/2000/svg"
216
- viewBox="0 0 256 256"
217
- fill="currentColor"
218
- className={cn('h-4 w-4 animate-spin', className)}
219
- {...props}
220
- >
221
- <path d="M232 128a104 104 0 0 1-208 0c0-41 23.81-78.36 60.66-95.27a8 8 0 0 1 6.68 14.54C60.15 61.59 40 93.27 40 128a88 88 0 0 0 176 0c0-34.73-20.15-66.41-51.34-80.73a8 8 0 0 1 6.68-14.54C208.19 49.64 232 87 232 128Z" />
222
- </svg>
223
- )
224
- }
225
-
226
- function IconMessage({ className, ...props }: React.ComponentProps<'svg'>) {
227
- return (
228
- <svg
229
- xmlns="http://www.w3.org/2000/svg"
230
- viewBox="0 0 256 256"
231
- fill="currentColor"
232
- className={cn('h-4 w-4', className)}
233
- {...props}
234
- >
235
- <path d="M216 48H40a16 16 0 0 0-16 16v160a15.84 15.84 0 0 0 9.25 14.5A16.05 16.05 0 0 0 40 240a15.89 15.89 0 0 0 10.25-3.78.69.69 0 0 0 .13-.11L82.5 208H216a16 16 0 0 0 16-16V64a16 16 0 0 0-16-16ZM40 224Zm176-32H82.5a16 16 0 0 0-10.3 3.75l-.12.11L40 224V64h176Z" />
236
- </svg>
237
- )
238
- }
239
-
240
- function IconTrash({ className, ...props }: React.ComponentProps<'svg'>) {
241
- return (
242
- <svg
243
- xmlns="http://www.w3.org/2000/svg"
244
- viewBox="0 0 256 256"
245
- fill="currentColor"
246
- className={cn('h-4 w-4', className)}
247
- {...props}
248
- >
249
- <path d="M216 48h-40v-8a24 24 0 0 0-24-24h-48a24 24 0 0 0-24 24v8H40a8 8 0 0 0 0 16h8v144a16 16 0 0 0 16 16h128a16 16 0 0 0 16-16V64h8a8 8 0 0 0 0-16ZM96 40a8 8 0 0 1 8-8h48a8 8 0 0 1 8 8v8H96Zm96 168H64V64h128Zm-80-104v64a8 8 0 0 1-16 0v-64a8 8 0 0 1 16 0Zm48 0v64a8 8 0 0 1-16 0v-64a8 8 0 0 1 16 0Z" />
250
- </svg>
251
- )
252
- }
253
-
254
- function IconMore({ className, ...props }: React.ComponentProps<'svg'>) {
255
- return (
256
- <svg
257
- viewBox="0 0 24 24"
258
- xmlns="http://www.w3.org/2000/svg"
259
- fill="currentColor"
260
- className={cn('h-4 w-4', className)}
261
- {...props}
262
- >
263
- <path d="M7.75 12C7.75 12.9665 6.9665 13.75 6 13.75C5.0335 13.75 4.25 12.9665 4.25 12C4.25 11.0335 5.0335 10.25 6 10.25C6.9665 10.25 7.75 11.0335 7.75 12ZM13.75 12C13.75 12.9665 12.9665 13.75 12 13.75C11.0335 13.75 10.25 12.9665 10.25 12C10.25 11.0335 11.0335 10.25 12 10.25C12.9665 10.25 13.75 11.0335 13.75 12ZM18 13.75C18.9665 13.75 19.75 12.9665 19.75 12C19.75 11.0335 18.9665 10.25 18 10.25C17.0335 10.25 16.25 11.0335 16.25 12C16.25 12.9665 17.0335 13.75 18 13.75Z"></path>
264
- </svg>
265
- )
266
- }
267
-
268
- function IconRefresh({ className, ...props }: React.ComponentProps<'svg'>) {
269
- return (
270
- <svg
271
- xmlns="http://www.w3.org/2000/svg"
272
- viewBox="0 0 256 256"
273
- fill="currentColor"
274
- className={cn('h-4 w-4', className)}
275
- {...props}
276
- >
277
- <path d="M197.67 186.37a8 8 0 0 1 0 11.29C196.58 198.73 170.82 224 128 224c-37.39 0-64.53-22.4-80-39.85V208a8 8 0 0 1-16 0v-48a8 8 0 0 1 8-8h48a8 8 0 0 1 0 16H55.44C67.76 183.35 93 208 128 208c36 0 58.14-21.46 58.36-21.68a8 8 0 0 1 11.31.05ZM216 40a8 8 0 0 0-8 8v23.85C192.53 54.4 165.39 32 128 32c-42.82 0-68.58 25.27-69.66 26.34a8 8 0 0 0 11.3 11.34C69.86 69.46 92 48 128 48c35 0 60.24 24.65 72.56 40H168a8 8 0 0 0 0 16h48a8 8 0 0 0 8-8V48a8 8 0 0 0-8-8Z" />
278
- </svg>
279
- )
280
- }
281
-
282
- function IconStop({ className, ...props }: React.ComponentProps<'svg'>) {
283
- return (
284
- <svg
285
- xmlns="http://www.w3.org/2000/svg"
286
- viewBox="0 0 256 256"
287
- fill="currentColor"
288
- className={cn('h-4 w-4', className)}
289
- {...props}
290
- >
291
- <path d="M128 24a104 104 0 1 0 104 104A104.11 104.11 0 0 0 128 24Zm0 192a88 88 0 1 1 88-88 88.1 88.1 0 0 1-88 88Zm24-120h-48a8 8 0 0 0-8 8v48a8 8 0 0 0 8 8h48a8 8 0 0 0 8-8v-48a8 8 0 0 0-8-8Zm-8 48h-32v-32h32Z" />
292
- </svg>
293
- )
294
- }
295
-
296
- function IconSidebar({ className, ...props }: React.ComponentProps<'svg'>) {
297
- return (
298
- <svg
299
- xmlns="http://www.w3.org/2000/svg"
300
- viewBox="0 0 256 256"
301
- fill="currentColor"
302
- className={cn('h-4 w-4', className)}
303
- {...props}
304
- >
305
- <path d="M216 40H40a16 16 0 0 0-16 16v144a16 16 0 0 0 16 16h176a16 16 0 0 0 16-16V56a16 16 0 0 0-16-16ZM40 56h40v144H40Zm176 144H96V56h120v144Z" />
306
- </svg>
307
- )
308
- }
309
-
310
- function IconMoon({ className, ...props }: React.ComponentProps<'svg'>) {
311
- return (
312
- <svg
313
- xmlns="http://www.w3.org/2000/svg"
314
- viewBox="0 0 256 256"
315
- fill="currentColor"
316
- className={cn('h-4 w-4', className)}
317
- {...props}
318
- >
319
- <path d="M233.54 142.23a8 8 0 0 0-8-2 88.08 88.08 0 0 1-109.8-109.8 8 8 0 0 0-10-10 104.84 104.84 0 0 0-52.91 37A104 104 0 0 0 136 224a103.09 103.09 0 0 0 62.52-20.88 104.84 104.84 0 0 0 37-52.91 8 8 0 0 0-1.98-7.98Zm-44.64 48.11A88 88 0 0 1 65.66 67.11a89 89 0 0 1 31.4-26A106 106 0 0 0 96 56a104.11 104.11 0 0 0 104 104 106 106 0 0 0 14.92-1.06 89 89 0 0 1-26.02 31.4Z" />
320
- </svg>
321
- )
322
- }
323
-
324
- function IconSun({ className, ...props }: React.ComponentProps<'svg'>) {
325
- return (
326
- <svg
327
- xmlns="http://www.w3.org/2000/svg"
328
- viewBox="0 0 256 256"
329
- fill="currentColor"
330
- className={cn('h-4 w-4', className)}
331
- {...props}
332
- >
333
- <path d="M120 40V16a8 8 0 0 1 16 0v24a8 8 0 0 1-16 0Zm72 88a64 64 0 1 1-64-64 64.07 64.07 0 0 1 64 64Zm-16 0a48 48 0 1 0-48 48 48.05 48.05 0 0 0 48-48ZM58.34 69.66a8 8 0 0 0 11.32-11.32l-16-16a8 8 0 0 0-11.32 11.32Zm0 116.68-16 16a8 8 0 0 0 11.32 11.32l16-16a8 8 0 0 0-11.32-11.32ZM192 72a8 8 0 0 0 5.66-2.34l16-16a8 8 0 0 0-11.32-11.32l-16 16A8 8 0 0 0 192 72Zm5.66 114.34a8 8 0 0 0-11.32 11.32l16 16a8 8 0 0 0 11.32-11.32ZM48 128a8 8 0 0 0-8-8H16a8 8 0 0 0 0 16h24a8 8 0 0 0 8-8Zm80 80a8 8 0 0 0-8 8v24a8 8 0 0 0 16 0v-24a8 8 0 0 0-8-8Zm112-88h-24a8 8 0 0 0 0 16h24a8 8 0 0 0 0-16Z" />
334
- </svg>
335
- )
336
- }
337
-
338
- function IconCopy({ className, ...props }: React.ComponentProps<'svg'>) {
339
- return (
340
- <svg
341
- xmlns="http://www.w3.org/2000/svg"
342
- viewBox="0 0 256 256"
343
- fill="currentColor"
344
- className={cn('h-4 w-4', className)}
345
- {...props}
346
- >
347
- <path d="M216 32H88a8 8 0 0 0-8 8v40H40a8 8 0 0 0-8 8v128a8 8 0 0 0 8 8h128a8 8 0 0 0 8-8v-40h40a8 8 0 0 0 8-8V40a8 8 0 0 0-8-8Zm-56 176H48V96h112Zm48-48h-32V88a8 8 0 0 0-8-8H96V48h112Z" />
348
- </svg>
349
- )
350
- }
351
-
352
- function IconCheck({ className, ...props }: React.ComponentProps<'svg'>) {
353
- return (
354
- <svg
355
- xmlns="http://www.w3.org/2000/svg"
356
- viewBox="0 0 256 256"
357
- fill="currentColor"
358
- className={cn('h-4 w-4', className)}
359
- {...props}
360
- >
361
- <path d="m229.66 77.66-128 128a8 8 0 0 1-11.32 0l-56-56a8 8 0 0 1 11.32-11.32L96 188.69 218.34 66.34a8 8 0 0 1 11.32 11.32Z" />
362
- </svg>
363
- )
364
- }
365
-
366
- function IconDownload({ className, ...props }: React.ComponentProps<'svg'>) {
367
- return (
368
- <svg
369
- xmlns="http://www.w3.org/2000/svg"
370
- viewBox="0 0 256 256"
371
- fill="currentColor"
372
- className={cn('h-4 w-4', className)}
373
- {...props}
374
- >
375
- <path d="M224 152v56a16 16 0 0 1-16 16H48a16 16 0 0 1-16-16v-56a8 8 0 0 1 16 0v56h160v-56a8 8 0 0 1 16 0Zm-101.66 5.66a8 8 0 0 0 11.32 0l40-40a8 8 0 0 0-11.32-11.32L136 132.69V40a8 8 0 0 0-16 0v92.69l-26.34-26.35a8 8 0 0 0-11.32 11.32Z" />
376
- </svg>
377
- )
378
- }
379
-
380
- function IconClose({ className, ...props }: React.ComponentProps<'svg'>) {
381
- return (
382
- <svg
383
- xmlns="http://www.w3.org/2000/svg"
384
- viewBox="0 0 256 256"
385
- fill="currentColor"
386
- className={cn('h-4 w-4', className)}
387
- {...props}
388
- >
389
- <path d="M205.66 194.34a8 8 0 0 1-11.32 11.32L128 139.31l-66.34 66.35a8 8 0 0 1-11.32-11.32L116.69 128 50.34 61.66a8 8 0 0 1 11.32-11.32L128 116.69l66.34-66.35a8 8 0 0 1 11.32 11.32L139.31 128Z" />
390
- </svg>
391
- )
392
- }
393
-
394
- function IconEdit({ className, ...props }: React.ComponentProps<'svg'>) {
395
- return (
396
- <svg
397
- xmlns="http://www.w3.org/2000/svg"
398
- fill="none"
399
- viewBox="0 0 24 24"
400
- strokeWidth={1.5}
401
- stroke="currentColor"
402
- className={cn('h-4 w-4', className)}
403
- {...props}
404
- >
405
- <path
406
- strokeLinecap="round"
407
- strokeLinejoin="round"
408
- d="M16.862 4.487l1.687-1.688a1.875 1.875 0 112.652 2.652L10.582 16.07a4.5 4.5 0 01-1.897 1.13L6 18l.8-2.685a4.5 4.5 0 011.13-1.897l8.932-8.931zm0 0L19.5 7.125M18 14v4.75A2.25 2.25 0 0115.75 21H5.25A2.25 2.25 0 013 18.75V8.25A2.25 2.25 0 015.25 6H10"
409
- />
410
- </svg>
411
- )
412
- }
413
-
414
- function IconShare({ className, ...props }: React.ComponentProps<'svg'>) {
415
- return (
416
- <svg
417
- xmlns="http://www.w3.org/2000/svg"
418
- fill="currentColor"
419
- className={cn('h-4 w-4', className)}
420
- viewBox="0 0 256 256"
421
- {...props}
422
- >
423
- <path d="m237.66 106.35-80-80A8 8 0 0 0 144 32v40.35c-25.94 2.22-54.59 14.92-78.16 34.91-28.38 24.08-46.05 55.11-49.76 87.37a12 12 0 0 0 20.68 9.58c11-11.71 50.14-48.74 107.24-52V192a8 8 0 0 0 13.66 5.65l80-80a8 8 0 0 0 0-11.3ZM160 172.69V144a8 8 0 0 0-8-8c-28.08 0-55.43 7.33-81.29 21.8a196.17 196.17 0 0 0-36.57 26.52c5.8-23.84 20.42-46.51 42.05-64.86C99.41 99.77 127.75 88 152 88a8 8 0 0 0 8-8V51.32L220.69 112Z" />
424
- </svg>
425
- )
426
- }
427
-
428
- function IconUsers({ className, ...props }: React.ComponentProps<'svg'>) {
429
- return (
430
- <svg
431
- xmlns="http://www.w3.org/2000/svg"
432
- fill="currentColor"
433
- className={cn('h-4 w-4', className)}
434
- viewBox="0 0 256 256"
435
- {...props}
436
- >
437
- <path d="M117.25 157.92a60 60 0 1 0-66.5 0 95.83 95.83 0 0 0-47.22 37.71 8 8 0 1 0 13.4 8.74 80 80 0 0 1 134.14 0 8 8 0 0 0 13.4-8.74 95.83 95.83 0 0 0-47.22-37.71ZM40 108a44 44 0 1 1 44 44 44.05 44.05 0 0 1-44-44Zm210.14 98.7a8 8 0 0 1-11.07-2.33A79.83 79.83 0 0 0 172 168a8 8 0 0 1 0-16 44 44 0 1 0-16.34-84.87 8 8 0 1 1-5.94-14.85 60 60 0 0 1 55.53 105.64 95.83 95.83 0 0 1 47.22 37.71 8 8 0 0 1-2.33 11.07Z" />
438
- </svg>
439
- )
440
- }
441
-
442
- function IconExternalLink({
443
- className,
444
- ...props
445
- }: React.ComponentProps<'svg'>) {
446
- return (
447
- <svg
448
- xmlns="http://www.w3.org/2000/svg"
449
- fill="currentColor"
450
- className={cn('h-4 w-4', className)}
451
- viewBox="0 0 256 256"
452
- {...props}
453
- >
454
- <path d="M224 104a8 8 0 0 1-16 0V59.32l-66.33 66.34a8 8 0 0 1-11.32-11.32L196.68 48H152a8 8 0 0 1 0-16h64a8 8 0 0 1 8 8Zm-40 24a8 8 0 0 0-8 8v72H48V80h72a8 8 0 0 0 0-16H48a16 16 0 0 0-16 16v128a16 16 0 0 0 16 16h128a16 16 0 0 0 16-16v-72a8 8 0 0 0-8-8Z" />
455
- </svg>
456
- )
457
- }
458
-
459
- function IconChevronUpDown({
460
- className,
461
- ...props
462
- }: React.ComponentProps<'svg'>) {
463
- return (
464
- <svg
465
- xmlns="http://www.w3.org/2000/svg"
466
- fill="currentColor"
467
- className={cn('h-4 w-4', className)}
468
- viewBox="0 0 256 256"
469
- {...props}
470
- >
471
- <path d="M181.66 170.34a8 8 0 0 1 0 11.32l-48 48a8 8 0 0 1-11.32 0l-48-48a8 8 0 0 1 11.32-11.32L128 212.69l42.34-42.35a8 8 0 0 1 11.32 0Zm-96-84.68L128 43.31l42.34 42.35a8 8 0 0 0 11.32-11.32l-48-48a8 8 0 0 0-11.32 0l-48 48a8 8 0 0 0 11.32 11.32Z" />
472
- </svg>
473
- )
474
- }
475
-
476
- export {
477
- IconEdit,
478
- IconNextChat,
479
- IconOpenAI,
480
- IconGitHub,
481
- IconSeparator,
482
- IconArrowDown,
483
- IconArrowRight,
484
- IconUser,
485
- IconPlus,
486
- IconArrowElbow,
487
- IconSpinner,
488
- IconMessage,
489
- IconTrash,
490
- IconMore,
491
- IconRefresh,
492
- IconStop,
493
- IconSidebar,
494
- IconMoon,
495
- IconSun,
496
- IconCopy,
497
- IconCheck,
498
- IconDownload,
499
- IconClose,
500
- IconShare,
501
- IconUsers,
502
- IconExternalLink,
503
- IconChevronUpDown
504
- }

spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/dataset.py DELETED
@@ -1,183 +0,0 @@
1
- import os
2
- import random
3
-
4
- import numpy as np
5
- import torch
6
- import torch.utils.data
7
- from tqdm import tqdm
8
-
9
- from . import spec_utils
10
-
11
-
12
- class VocalRemoverValidationSet(torch.utils.data.Dataset):
13
- def __init__(self, patch_list):
14
- self.patch_list = patch_list
15
-
16
- def __len__(self):
17
- return len(self.patch_list)
18
-
19
- def __getitem__(self, idx):
20
- path = self.patch_list[idx]
21
- data = np.load(path)
22
-
23
- X, y = data["X"], data["y"]
24
-
25
- X_mag = np.abs(X)
26
- y_mag = np.abs(y)
27
-
28
- return X_mag, y_mag
29
-
30
-
31
- def make_pair(mix_dir, inst_dir):
32
- input_exts = [".wav", ".m4a", ".mp3", ".mp4", ".flac"]
33
-
34
- X_list = sorted(
35
- [
36
- os.path.join(mix_dir, fname)
37
- for fname in os.listdir(mix_dir)
38
- if os.path.splitext(fname)[1] in input_exts
39
- ]
40
- )
41
- y_list = sorted(
42
- [
43
- os.path.join(inst_dir, fname)
44
- for fname in os.listdir(inst_dir)
45
- if os.path.splitext(fname)[1] in input_exts
46
- ]
47
- )
48
-
49
- filelist = list(zip(X_list, y_list))
50
-
51
- return filelist
52
-
53
-
54
- def train_val_split(dataset_dir, split_mode, val_rate, val_filelist):
55
- if split_mode == "random":
56
- filelist = make_pair(
57
- os.path.join(dataset_dir, "mixtures"),
58
- os.path.join(dataset_dir, "instruments"),
59
- )
60
-
61
- random.shuffle(filelist)
62
-
63
- if len(val_filelist) == 0:
64
- val_size = int(len(filelist) * val_rate)
65
- train_filelist = filelist[:-val_size]
66
- val_filelist = filelist[-val_size:]
67
- else:
68
- train_filelist = [
69
- pair for pair in filelist if list(pair) not in val_filelist
70
- ]
71
- elif split_mode == "subdirs":
72
- if len(val_filelist) != 0:
73
- raise ValueError(
74
- "The `val_filelist` option is not available in `subdirs` mode"
75
- )
76
-
77
- train_filelist = make_pair(
78
- os.path.join(dataset_dir, "training/mixtures"),
79
- os.path.join(dataset_dir, "training/instruments"),
80
- )
81
-
82
- val_filelist = make_pair(
83
- os.path.join(dataset_dir, "validation/mixtures"),
84
- os.path.join(dataset_dir, "validation/instruments"),
85
- )
86
- else:
- raise ValueError(f"Unknown split_mode: {split_mode}")
-
87
- return train_filelist, val_filelist
88
-
89
-
90
- def augment(X, y, reduction_rate, reduction_mask, mixup_rate, mixup_alpha):
91
- perm = np.random.permutation(len(X))
92
- for i, idx in enumerate(tqdm(perm)):
93
- if np.random.uniform() < reduction_rate:
94
- y[idx] = spec_utils.reduce_vocal_aggressively(
95
- X[idx], y[idx], reduction_mask
96
- )
97
-
98
- if np.random.uniform() < 0.5:
99
- # swap channel
100
- X[idx] = X[idx, ::-1]
101
- y[idx] = y[idx, ::-1]
102
- if np.random.uniform() < 0.02:
103
- # mono
104
- X[idx] = X[idx].mean(axis=0, keepdims=True)
105
- y[idx] = y[idx].mean(axis=0, keepdims=True)
106
- if np.random.uniform() < 0.02:
107
- # inst
108
- X[idx] = y[idx]
109
-
110
- if np.random.uniform() < mixup_rate and i < len(perm) - 1:
111
- lam = np.random.beta(mixup_alpha, mixup_alpha)
112
- X[idx] = lam * X[idx] + (1 - lam) * X[perm[i + 1]]
113
- y[idx] = lam * y[idx] + (1 - lam) * y[perm[i + 1]]
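- # mixup (Zhang et al., 2018): blend neighbouring samples with a convex
- # combination, lam ~ Beta(mixup_alpha, mixup_alpha).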
114
-
115
- return X, y
116
-
117
-
118
- def make_padding(width, cropsize, offset):
119
- left = offset
120
- roi_size = cropsize - left * 2
121
- if roi_size == 0:
122
- roi_size = cropsize
123
- right = roi_size - (width % roi_size) + left
124
-
125
- return left, right, roi_size
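- # Worked example (hypothetical sizes): make_padding(width=1000, cropsize=256, offset=16)
- # gives left = 16, roi_size = 256 - 2 * 16 = 224,
- # and right = 224 - (1000 % 224) + 16 = 224 - 104 + 16 = 136.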
126
-
127
-
128
- def make_training_set(filelist, cropsize, patches, sr, hop_length, n_fft, offset):
129
- len_dataset = patches * len(filelist)
130
-
131
- X_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64)
132
- y_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64)
133
-
134
- for i, (X_path, y_path) in enumerate(tqdm(filelist)):
135
- X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft)
136
- coef = np.max([np.abs(X).max(), np.abs(y).max()])
137
- X, y = X / coef, y / coef
138
-
139
- l, r, roi_size = make_padding(X.shape[2], cropsize, offset)
140
- X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant")
141
- y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant")
142
-
143
- starts = np.random.randint(0, X_pad.shape[2] - cropsize, patches)
144
- ends = starts + cropsize
145
- for j in range(patches):
146
- idx = i * patches + j
147
- X_dataset[idx] = X_pad[:, :, starts[j] : ends[j]]
148
- y_dataset[idx] = y_pad[:, :, starts[j] : ends[j]]
149
-
150
- return X_dataset, y_dataset
151
-
152
-
153
- def make_validation_set(filelist, cropsize, sr, hop_length, n_fft, offset):
154
- patch_list = []
155
- patch_dir = "cs{}_sr{}_hl{}_nf{}_of{}".format(
156
- cropsize, sr, hop_length, n_fft, offset
157
- )
158
- os.makedirs(patch_dir, exist_ok=True)
159
-
160
- for i, (X_path, y_path) in enumerate(tqdm(filelist)):
161
- basename = os.path.splitext(os.path.basename(X_path))[0]
162
-
163
- X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft)
164
- coef = np.max([np.abs(X).max(), np.abs(y).max()])
165
- X, y = X / coef, y / coef
166
-
167
- l, r, roi_size = make_padding(X.shape[2], cropsize, offset)
168
- X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant")
169
- y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant")
170
-
171
- len_dataset = int(np.ceil(X.shape[2] / roi_size))
172
- for j in range(len_dataset):
173
- outpath = os.path.join(patch_dir, "{}_p{}.npz".format(basename, j))
174
- start = j * roi_size
175
- if not os.path.exists(outpath):
176
- np.savez(
177
- outpath,
178
- X=X_pad[:, :, start : start + cropsize],
179
- y=y_pad[:, :, start : start + cropsize],
180
- )
181
- patch_list.append(outpath)
182
-
183
- return VocalRemoverValidationSet(patch_list)

spaces/AB-TW/team-ai/agents/tools/smart_domain/db_entity_repository.py DELETED
@@ -1,101 +0,0 @@
1
- from langchain.prompts import PromptTemplate
2
- from agents.tools.smart_domain.common import getPrefix
3
- from langchain.chains import LLMChain
4
- from langchain.agents import tool
5
- from models import llm
6
-
7
- db_entity_tech_stack = """Java17、reactor、lombok、Junit5、reactor test、Mockito、 Spring Data Reactive Couchbase、Couchbase"""
8
-
9
- db_entity_architecture = """
10
- * DbEntity: This component is used to define the data structure that is saved to the DB.
11
- ---example code:
12
- @Document
13
- public class FeatureDb {{
14
- @Version
15
- private long version;
16
-
17
- @Id
18
- @GeneratedValue(strategy = GenerationStrategy.UNIQUE)
19
- private String id;
20
-
21
- private String featureKey;
22
-
23
- private Feature.FeatureDescription description;
24
- }}
25
- ---end of example code
26
- * Repository: This component is used to define the interface for accessing the DB.
27
- ---example code:
28
- public interface FeatureDbRepository extends ReactiveCrudRepository<FeatureDb, String> {{
29
- Mono<FeatureDb> findByFeatureKey(String featureKey);
30
- }}
31
- ---end of example code
32
- """
33
-
34
- db_entity_test_strategy = """For the DbEntity and Repository, we can write component tests that exercise the actual database operations; the test class should extend RepositoryTestBase to use the Testcontainers-backed setup.
35
- ---example code:
36
- class FeatureDbRepositoryTest extends RepositoryTestBase {{
37
- @Autowired
38
- FeatureDbRepository repository;
39
-
40
- @BeforeEach
41
- void setUp() {{
42
- repository.deleteAll().block();
43
- }}
44
-
45
- @AfterEach
46
- void tearDown() {{
47
- repository.deleteAll().block();
48
- }}
49
-
50
- @Test
51
- void should_save_Feature_success() {{
52
- var featureKey = "featureKey1";
53
- repository.save(FeatureTestUtil.createFeatureDb(featureKey))
54
- .as(StepVerifier::create)
55
- .expectNextCount(1)
56
- .verifyComplete();
57
- }}
58
-
59
- @Test
60
- void should_add_same_featureKey_fail() {{
61
- var featureKey = "featureKey1";
62
- repository.save(FeatureTestUtil.createFeatureDb(featureKey)).block();
63
-
64
- repository.save(FeatureTestUtil.createFeatureDb(featureKey))
65
- .as(StepVerifier::create)
66
- .expectError()
67
- .verify();
68
- }}
69
- }}
70
- ---end of example code
71
- """
72
-
73
- db_entity_task = """Your task is to generate the DbEntity and Repository tests and product code."""
74
-
75
- DB_ENTITY = getPrefix(db_entity_task, db_entity_tech_stack, db_entity_architecture, db_entity_test_strategy) + """
76
-
77
- Use the following format:
78
- request: the request that you need to fulfill
79
-
80
- Entity:
81
- ```
82
- the Entity code that you write to fulfill the request, follow TechStack and Architecture
83
- ```
84
-
85
- Test:
86
- ```
87
- the test code that you write to fulfill the request, follow TechStack Architecture and TestStrategy
88
- ```
89
-
90
- request: {input}"""
91
-
92
- DB_ENTITY_PROMPT = PromptTemplate(input_variables=["input"], template=DB_ENTITY,)
93
-
94
- db_entity_Repository_chain = LLMChain(llm = llm(temperature=0.1), prompt=DB_ENTITY_PROMPT)
95
-
96
-
97
- @tool("Generate DBEntity and Repository Code", return_direct=True)
98
- def dbEntityRepositoryCodeGenerator(input: str) -> str:
99
- '''useful for when you need to generate DBEntity and Repository code'''
100
- response = db_entity_Repository_chain.run(input)
101
- return response
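- # Hypothetical usage (the request string is illustrative, not from this repo):
- # code = dbEntityRepositoryCodeGenerator.run("Generate a Product entity with name and price fields")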
spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/layers_123812KB .py DELETED
@@ -1,118 +0,0 @@
1
- import torch
2
- from torch import nn
3
- import torch.nn.functional as F
4
-
5
- from uvr5_pack.lib_v5 import spec_utils
6
-
7
-
8
- class Conv2DBNActiv(nn.Module):
9
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
10
- super(Conv2DBNActiv, self).__init__()
11
- self.conv = nn.Sequential(
12
- nn.Conv2d(
13
- nin,
14
- nout,
15
- kernel_size=ksize,
16
- stride=stride,
17
- padding=pad,
18
- dilation=dilation,
19
- bias=False,
20
- ),
21
- nn.BatchNorm2d(nout),
22
- activ(),
23
- )
24
-
25
- def __call__(self, x):
26
- return self.conv(x)
27
-
28
-
29
- class SeperableConv2DBNActiv(nn.Module):
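- # Depthwise-separable convolution: a per-channel spatial conv (groups=nin)
- # followed by a 1x1 pointwise conv, then BatchNorm and an activation.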
30
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
31
- super(SeperableConv2DBNActiv, self).__init__()
32
- self.conv = nn.Sequential(
33
- nn.Conv2d(
34
- nin,
35
- nin,
36
- kernel_size=ksize,
37
- stride=stride,
38
- padding=pad,
39
- dilation=dilation,
40
- groups=nin,
41
- bias=False,
42
- ),
43
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
44
- nn.BatchNorm2d(nout),
45
- activ(),
46
- )
47
-
48
- def __call__(self, x):
49
- return self.conv(x)
50
-
51
-
52
- class Encoder(nn.Module):
53
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
54
- super(Encoder, self).__init__()
55
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
56
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
57
-
58
- def __call__(self, x):
59
- skip = self.conv1(x)
60
- h = self.conv2(skip)
61
-
62
- return h, skip
63
-
64
-
65
- class Decoder(nn.Module):
66
- def __init__(
67
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
68
- ):
69
- super(Decoder, self).__init__()
70
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
71
- self.dropout = nn.Dropout2d(0.1) if dropout else None
72
-
73
- def __call__(self, x, skip=None):
74
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
75
- if skip is not None:
76
- skip = spec_utils.crop_center(skip, x)
77
- x = torch.cat([x, skip], dim=1)
78
- h = self.conv(x)
79
-
80
- if self.dropout is not None:
81
- h = self.dropout(h)
82
-
83
- return h
84
-
85
-
86
- class ASPPModule(nn.Module):
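- # Atrous Spatial Pyramid Pooling (as in DeepLab): parallel branches (image-level
- # pooling, a 1x1 conv, and three dilated separable convs) concatenated and fused
- # by a 1x1 bottleneck conv.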
87
- def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU):
88
- super(ASPPModule, self).__init__()
89
- self.conv1 = nn.Sequential(
90
- nn.AdaptiveAvgPool2d((1, None)),
91
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
92
- )
93
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
94
- self.conv3 = SeperableConv2DBNActiv(
95
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
96
- )
97
- self.conv4 = SeperableConv2DBNActiv(
98
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
99
- )
100
- self.conv5 = SeperableConv2DBNActiv(
101
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
102
- )
103
- self.bottleneck = nn.Sequential(
104
- Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
105
- )
106
-
107
- def forward(self, x):
108
- _, _, h, w = x.size()
109
- feat1 = F.interpolate(
110
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
111
- )
112
- feat2 = self.conv2(x)
113
- feat3 = self.conv3(x)
114
- feat4 = self.conv4(x)
115
- feat5 = self.conv5(x)
116
- out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1)
117
- bottle = self.bottleneck(out)
118
- return bottle

spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/model_param_init.py DELETED
@@ -1,69 +0,0 @@
1
- import json
2
- import os
3
- import pathlib
4
-
5
- default_param = {}
6
- default_param["bins"] = 768
7
- default_param["unstable_bins"] = 9 # training only
8
- default_param["reduction_bins"] = 762 # training only
9
- default_param["sr"] = 44100
10
- default_param["pre_filter_start"] = 757
11
- default_param["pre_filter_stop"] = 768
12
- default_param["band"] = {}
13
-
14
-
15
- default_param["band"][1] = {
16
- "sr": 11025,
17
- "hl": 128,
18
- "n_fft": 960,
19
- "crop_start": 0,
20
- "crop_stop": 245,
21
- "lpf_start": 61, # inference only
22
- "res_type": "polyphase",
23
- }
24
-
25
- default_param["band"][2] = {
26
- "sr": 44100,
27
- "hl": 512,
28
- "n_fft": 1536,
29
- "crop_start": 24,
30
- "crop_stop": 547,
31
- "hpf_start": 81, # inference only
32
- "res_type": "sinc_best",
33
- }
34
-
35
-
36
- def int_keys(d):
37
- r = {}
38
- for k, v in d:
39
- if k.isdigit():
40
- k = int(k)
41
- r[k] = v
42
- return r
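- # Example: int_keys([("1", {"hl": 128}), ("sr", 44100)]) -> {1: {"hl": 128}, "sr": 44100};
- # digit-only JSON keys (the band indices) become ints.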
43
-
44
-
45
- class ModelParameters(object):
46
- def __init__(self, config_path=""):
47
- if ".pth" == pathlib.Path(config_path).suffix:
48
- import zipfile
49
-
50
- with zipfile.ZipFile(config_path, "r") as zip:
51
- self.param = json.loads(
52
- zip.read("param.json"), object_pairs_hook=int_keys
53
- )
54
- elif ".json" == pathlib.Path(config_path).suffix:
55
- with open(config_path, "r") as f:
56
- self.param = json.loads(f.read(), object_pairs_hook=int_keys)
57
- else:
58
- self.param = default_param
59
-
60
- for k in [
61
- "mid_side",
62
- "mid_side_b",
63
- "mid_side_b2",
64
- "stereo_w",
65
- "stereo_n",
66
- "reverse",
67
- ]:
68
- if k not in self.param:
69
- self.param[k] = False

spaces/AIConsultant/MusicGen/audiocraft/modules/diffusion_schedule.py DELETED
@@ -1,272 +0,0 @@
1
- # Copyright (c) Meta Platforms, Inc. and affiliates.
2
- # All rights reserved.
3
- #
4
- # This source code is licensed under the license found in the
5
- # LICENSE file in the root directory of this source tree.
6
-
7
- """
8
- Functions for Noise Schedule, defines diffusion process, reverse process and data processor.
9
- """
10
-
11
- from collections import namedtuple
12
- import random
13
- import typing as tp
14
- import julius
15
- import torch
16
-
17
- TrainingItem = namedtuple("TrainingItem", "noisy noise step")
18
-
19
-
20
- def betas_from_alpha_bar(alpha_bar):
21
- alphas = torch.cat([torch.Tensor([alpha_bar[0]]), alpha_bar[1:]/alpha_bar[:-1]])
22
- return 1 - alphas
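- # Inverse of the schedule's cumulative product: if alpha_bar = (1 - betas).cumprod(0),
- # then betas_from_alpha_bar(alpha_bar) recovers betas, since
- # alpha_bar[t] / alpha_bar[t - 1] = 1 - beta_t.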
23
-
24
-
25
- class SampleProcessor(torch.nn.Module):
26
- def project_sample(self, x: torch.Tensor):
27
- """Project the original sample to the 'space' where the diffusion will happen."""
28
- return x
29
-
30
- def return_sample(self, z: torch.Tensor):
31
- """Project back from diffusion space to the actual sample space."""
32
- return z
33
-
34
-
35
- class MultiBandProcessor(SampleProcessor):
36
- """
37
- MultiBand sample processor. The input audio is splitted across
38
- frequency bands evenly distributed in mel-scale.
39
-
40
- Each band will be rescaled to match the power distribution
41
- of Gaussian noise in that band, using online metrics
42
- computed on the first few samples.
43
-
44
- Args:
45
- n_bands (int): Number of mel-bands to split the signal over.
46
- sample_rate (int): Sample rate of the audio.
47
- num_samples (int): Number of samples to use to fit the rescaling
48
- for each band. The processor won't be stable
49
- until it has seen that many samples.
50
- power_std (float or list/tensor): The rescaling factor computed to match the
51
- power of Gaussian noise in each band is taken to
52
- that power, i.e. `1.` means full correction of the energy
53
- in each band, and values less than `1` means only partial
54
- correction. Can be used to balance the relative importance
55
- of low vs. high freq in typical audio signals.
56
- """
57
- def __init__(self, n_bands: int = 8, sample_rate: float = 24_000,
58
- num_samples: int = 10_000, power_std: tp.Union[float, tp.List[float], torch.Tensor] = 1.):
59
- super().__init__()
60
- self.n_bands = n_bands
61
- self.split_bands = julius.SplitBands(sample_rate, n_bands=n_bands)
62
- self.num_samples = num_samples
63
- self.power_std = power_std
64
- if isinstance(power_std, list):
65
- assert len(power_std) == n_bands
66
- power_std = torch.tensor(power_std)
67
- self.register_buffer('counts', torch.zeros(1))
68
- self.register_buffer('sum_x', torch.zeros(n_bands))
69
- self.register_buffer('sum_x2', torch.zeros(n_bands))
70
- self.register_buffer('sum_target_x2', torch.zeros(n_bands))
71
- self.counts: torch.Tensor
72
- self.sum_x: torch.Tensor
73
- self.sum_x2: torch.Tensor
74
- self.sum_target_x2: torch.Tensor
75
-
76
- @property
77
- def mean(self):
78
- mean = self.sum_x / self.counts
79
- return mean
80
-
81
- @property
82
- def std(self):
83
- std = (self.sum_x2 / self.counts - self.mean**2).clamp(min=0).sqrt()
84
- return std
85
-
86
- @property
87
- def target_std(self):
88
- target_std = self.sum_target_x2 / self.counts
89
- return target_std
90
-
91
- def project_sample(self, x: torch.Tensor):
92
- assert x.dim() == 3
93
- bands = self.split_bands(x)
94
- if self.counts.item() < self.num_samples:
95
- ref_bands = self.split_bands(torch.randn_like(x))
96
- self.counts += len(x)
97
- self.sum_x += bands.mean(dim=(2, 3)).sum(dim=1)
98
- self.sum_x2 += bands.pow(2).mean(dim=(2, 3)).sum(dim=1)
99
- self.sum_target_x2 += ref_bands.pow(2).mean(dim=(2, 3)).sum(dim=1)
100
- rescale = (self.target_std / self.std.clamp(min=1e-12)) ** self.power_std # same output size
101
- bands = (bands - self.mean.view(-1, 1, 1, 1)) * rescale.view(-1, 1, 1, 1)
102
- return bands.sum(dim=0)
103
-
104
- def return_sample(self, x: torch.Tensor):
105
- assert x.dim() == 3
106
- bands = self.split_bands(x)
107
- rescale = (self.std / self.target_std) ** self.power_std
108
- bands = bands * rescale.view(-1, 1, 1, 1) + self.mean.view(-1, 1, 1, 1)
109
- return bands.sum(dim=0)
110
-
111
-
112
- class NoiseSchedule:
113
- """Noise schedule for diffusion.
114
-
115
- Args:
116
- beta_t0 (float): Variance of the first diffusion step.
117
- beta_t1 (float): Variance of the last diffusion step.
118
- beta_exp (float): Power schedule exponent
119
- num_steps (int): Number of diffusion steps.
120
- variance (str): Choice of the sigma value for the denoising step. Choices: "beta", "beta_tilde" or "none".
121
- clip (float): Clipping value for the denoising steps.
122
- rescale (float): Rescaling value to avoid vanishing signals; unused by default (i.e. 1.).
123
- repartition (str): Shape of the schedule; only the power schedule is supported.
124
- sample_processor (SampleProcessor): Module that normalizes data to better match the Gaussian distribution.
125
- noise_scale (float): Scaling factor for the noise.
126
- """
127
- def __init__(self, beta_t0: float = 1e-4, beta_t1: float = 0.02, num_steps: int = 1000, variance: str = 'beta',
128
- clip: float = 5., rescale: float = 1., device='cuda', beta_exp: float = 1,
129
- repartition: str = "power", alpha_sigmoid: dict = {}, n_bands: tp.Optional[int] = None,
130
- sample_processor: SampleProcessor = SampleProcessor(), noise_scale: float = 1.0, **kwargs):
131
-
132
- self.beta_t0 = beta_t0
133
- self.beta_t1 = beta_t1
134
- self.variance = variance
135
- self.num_steps = num_steps
136
- self.clip = clip
137
- self.sample_processor = sample_processor
138
- self.rescale = rescale
139
- self.n_bands = n_bands
140
- self.noise_scale = noise_scale
141
- assert n_bands is None
142
- if repartition == "power":
143
- self.betas = torch.linspace(beta_t0 ** (1 / beta_exp), beta_t1 ** (1 / beta_exp), num_steps,
144
- device=device, dtype=torch.float) ** beta_exp
145
- else:
146
- raise RuntimeError('Not implemented')
147
- self.rng = random.Random(1234)
148
-
149
- def get_beta(self, step: tp.Union[int, torch.Tensor]):
150
- if self.n_bands is None:
151
- return self.betas[step]
152
- else:
153
- return self.betas[:, step] # [n_bands, len(step)]
154
-
155
- def get_initial_noise(self, x: torch.Tensor):
156
- if self.n_bands is None:
157
- return torch.randn_like(x)
158
- return torch.randn((x.size(0), self.n_bands, x.size(2)))
159
-
160
- def get_alpha_bar(self, step: tp.Optional[tp.Union[int, torch.Tensor]] = None) -> torch.Tensor:
161
- """Return 'alpha_bar', either for a given step, or as a tensor with its value for each step."""
162
- if step is None:
163
- return (1 - self.betas).cumprod(dim=-1)  # works for single and multi bands
164
- if type(step) is int:
165
- return (1 - self.betas[:step + 1]).prod()
166
- else:
167
- return (1 - self.betas).cumprod(dim=0)[step].view(-1, 1, 1)
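- # In DDPM notation, alpha_bar_t = prod_{s<=t}(1 - beta_s): the fraction of the
- # original signal variance that survives after t noising steps.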
168
-
169
- def get_training_item(self, x: torch.Tensor, tensor_step: bool = False) -> TrainingItem:
170
- """Create a noisy data item for diffusion model training:
171
-
172
- Args:
173
- x (torch.Tensor): clean audio data torch.tensor(bs, 1, T)
174
- tensor_step (bool): If tensor_step is False, a single step t is sampled and
175
- the whole batch is diffused to that same step (t is an int).
176
- If tensor_step is True, t is a tensor of size (x.size(0),) and
177
- every element of the batch is diffused to an independently sampled step.
178
- """
179
- step: tp.Union[int, torch.Tensor]
180
- if tensor_step:
181
- bs = x.size(0)
182
- step = torch.randint(0, self.num_steps, size=(bs,), device=x.device)
183
- else:
184
- step = self.rng.randrange(self.num_steps)
185
- alpha_bar = self.get_alpha_bar(step) # [batch_size, n_bands, 1]
186
-
187
- x = self.sample_processor.project_sample(x)
188
- noise = torch.randn_like(x)
189
- noisy = (alpha_bar.sqrt() / self.rescale) * x + (1 - alpha_bar).sqrt() * noise * self.noise_scale
190
- return TrainingItem(noisy, noise, step)
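- # Standard DDPM forward process q(x_t | x_0):
- # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,
- # up to the extra `rescale` and `noise_scale` factors this class adds.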
191
-
192
- def generate(self, model: torch.nn.Module, initial: tp.Optional[torch.Tensor] = None,
193
- condition: tp.Optional[torch.Tensor] = None, return_list: bool = False):
194
- """Full ddpm reverse process.
195
-
196
- Args:
197
- model (nn.Module): Diffusion model.
198
- initial (tensor): Initial Noise.
199
- condition (tensor): Input conditioning tensor (e.g. an EnCodec compressed representation).
200
- return_list (bool): Whether to return the whole process or only the sampled point.
201
- """
202
- alpha_bar = self.get_alpha_bar(step=self.num_steps - 1)
203
- current = initial
204
- iterates = [initial]
205
- for step in range(self.num_steps)[::-1]:
206
- with torch.no_grad():
207
- estimate = model(current, step, condition=condition).sample
208
- alpha = 1 - self.betas[step]
209
- previous = (current - (1 - alpha) / (1 - alpha_bar).sqrt() * estimate) / alpha.sqrt()
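- # Standard DDPM posterior mean, using beta_t = 1 - alpha_t:
- # mu_t = (x_t - beta_t / sqrt(1 - alpha_bar_t) * eps_hat) / sqrt(alpha_t)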
210
- previous_alpha_bar = self.get_alpha_bar(step=step - 1)
211
- if step == 0:
212
- sigma2 = 0
213
- elif self.variance == 'beta':
214
- sigma2 = 1 - alpha
215
- elif self.variance == 'beta_tilde':
216
- sigma2 = (1 - previous_alpha_bar) / (1 - alpha_bar) * (1 - alpha)
217
- elif self.variance == 'none':
218
- sigma2 = 0
219
- else:
220
- raise ValueError(f'Invalid variance type {self.variance}')
221
-
222
- if sigma2 > 0:
223
- previous += sigma2**0.5 * torch.randn_like(previous) * self.noise_scale
224
- if self.clip:
225
- previous = previous.clamp(-self.clip, self.clip)
226
- current = previous
227
- alpha_bar = previous_alpha_bar
228
- if step == 0:
229
- previous *= self.rescale
230
- if return_list:
231
- iterates.append(previous.cpu())
232
-
233
- if return_list:
234
- return iterates
235
- else:
236
- return self.sample_processor.return_sample(previous)
237
-
238
- def generate_subsampled(self, model: torch.nn.Module, initial: torch.Tensor, step_list: tp.Optional[list] = None,
239
- condition: tp.Optional[torch.Tensor] = None, return_list: bool = False):
240
- """Reverse process that only goes through Markov chain states in step_list."""
241
- if step_list is None:
242
- step_list = list(range(1000))[::-50] + [0]
243
- alpha_bar = self.get_alpha_bar(step=self.num_steps - 1)
244
- alpha_bars_subsampled = (1 - self.betas).cumprod(dim=0)[list(reversed(step_list))].cpu()
245
- betas_subsampled = betas_from_alpha_bar(alpha_bars_subsampled)
246
- current = initial * self.noise_scale
247
- iterates = [current]
248
- for idx, step in enumerate(step_list[:-1]):
249
- with torch.no_grad():
250
- estimate = model(current, step, condition=condition).sample * self.noise_scale
251
- alpha = 1 - betas_subsampled[-1 - idx]
252
- previous = (current - (1 - alpha) / (1 - alpha_bar).sqrt() * estimate) / alpha.sqrt()
253
- previous_alpha_bar = self.get_alpha_bar(step_list[idx + 1])
254
- if step == step_list[-2]:
255
- sigma2 = 0
256
- previous_alpha_bar = torch.tensor(1.0)
257
- else:
258
- sigma2 = (1 - previous_alpha_bar) / (1 - alpha_bar) * (1 - alpha)
259
- if sigma2 > 0:
260
- previous += sigma2**0.5 * torch.randn_like(previous) * self.noise_scale
261
- if self.clip:
262
- previous = previous.clamp(-self.clip, self.clip)
263
- current = previous
264
- alpha_bar = previous_alpha_bar
265
- if step == 0:
266
- previous *= self.rescale
267
- if return_list:
268
- iterates.append(previous.cpu())
269
- if return_list:
270
- return iterates
271
- else:
272
- return self.sample_processor.return_sample(previous)

spaces/AIFILMS/generate_human_motion/VQ-Trans/dataset/prepare/download_smpl.sh DELETED
@@ -1,13 +0,0 @@
1
-
2
- mkdir -p body_models
3
- cd body_models/
4
-
5
- echo -e "The smpl files will be stored in the 'body_models/smpl/' folder\n"
6
- gdown 1INYlGA76ak_cKGzvpOV2Pe6RkYTlXTW2
7
- rm -rf smpl
8
-
9
- unzip smpl.zip
10
- echo -e "Cleaning\n"
11
- rm smpl.zip
12
-
13
- echo -e "Downloading done!"
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/AIFILMS/generate_human_motion/pyrender/tests/unit/test_nodes.py DELETED
@@ -1,124 +0,0 @@
1
- import numpy as np
2
- import pytest
3
- from trimesh import transformations
4
-
5
- from pyrender import (DirectionalLight, PerspectiveCamera, Mesh, Node)
6
-
7
-
8
- def test_nodes():
9
-
10
- x = Node()
11
- assert x.name is None
12
- assert x.camera is None
13
- assert x.children == []
14
- assert x.skin is None
15
- assert np.allclose(x.matrix, np.eye(4))
16
- assert x.mesh is None
17
- assert np.allclose(x.rotation, [0,0,0,1])
18
- assert np.allclose(x.scale, np.ones(3))
19
- assert np.allclose(x.translation, np.zeros(3))
20
- assert x.weights is None
21
- assert x.light is None
22
-
23
- x.name = 'node'
24
-
25
- # Test node light/camera/mesh tests
26
- c = PerspectiveCamera(yfov=2.0)
27
- m = Mesh([])
28
- d = DirectionalLight()
29
- x.camera = c
30
- assert x.camera == c
31
- with pytest.raises(TypeError):
32
- x.camera = m
33
- x.camera = d
34
- x.camera = None
35
- x.mesh = m
36
- assert x.mesh == m
37
- with pytest.raises(TypeError):
38
- x.mesh = c
39
- x.mesh = d
40
- x.light = d
41
- assert x.light == d
42
- with pytest.raises(TypeError):
43
- x.light = m
44
- x.light = c
45
-
46
- # Test transformations getters/setters/etc...
47
- # Set up test values
48
- x = np.array([1.0, 0.0, 0.0])
49
- y = np.array([0.0, 1.0, 0.0])
50
- t = np.array([1.0, 2.0, 3.0])
51
- s = np.array([0.5, 2.0, 1.0])
52
-
53
- Mx = transformations.rotation_matrix(np.pi / 2.0, x)
54
- qx = np.roll(transformations.quaternion_about_axis(np.pi / 2.0, x), -1)
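- # trimesh's transformations module returns quaternions as (w, x, y, z);
- # np.roll(..., -1) reorders them to the (x, y, z, w) convention pyrender's Node expects.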
55
- Mxt = Mx.copy()
56
- Mxt[:3,3] = t
57
- S = np.eye(4)
58
- S[:3,:3] = np.diag(s)
59
- Mxts = Mxt.dot(S)
60
-
61
- My = transformations.rotation_matrix(np.pi / 2.0, y)
62
- qy = np.roll(transformations.quaternion_about_axis(np.pi / 2.0, y), -1)
63
- Myt = My.copy()
64
- Myt[:3,3] = t
65
-
66
- x = Node(matrix=Mx)
67
- assert np.allclose(x.matrix, Mx)
68
- assert np.allclose(x.rotation, qx)
69
- assert np.allclose(x.translation, np.zeros(3))
70
- assert np.allclose(x.scale, np.ones(3))
71
-
72
- x.matrix = My
73
- assert np.allclose(x.matrix, My)
74
- assert np.allclose(x.rotation, qy)
75
- assert np.allclose(x.translation, np.zeros(3))
76
- assert np.allclose(x.scale, np.ones(3))
77
- x.translation = t
78
- assert np.allclose(x.matrix, Myt)
79
- assert np.allclose(x.rotation, qy)
80
- x.rotation = qx
81
- assert np.allclose(x.matrix, Mxt)
82
- x.scale = s
83
- assert np.allclose(x.matrix, Mxts)
84
-
85
- x = Node(matrix=Mxt)
86
- assert np.allclose(x.matrix, Mxt)
87
- assert np.allclose(x.rotation, qx)
88
- assert np.allclose(x.translation, t)
89
- assert np.allclose(x.scale, np.ones(3))
90
-
91
- x = Node(matrix=Mxts)
92
- assert np.allclose(x.matrix, Mxts)
93
- assert np.allclose(x.rotation, qx)
94
- assert np.allclose(x.translation, t)
95
- assert np.allclose(x.scale, s)
96
-
97
- # Individual element getters
98
- x.scale[0] = 0
99
- assert np.allclose(x.scale[0], 0)
100
-
101
- x.translation[0] = 0
102
- assert np.allclose(x.translation[0], 0)
103
-
104
- x.matrix = np.eye(4)
105
- x.matrix[0,0] = 500
106
- assert x.matrix[0,0] == 1.0
107
-
108
- # Failures
109
- with pytest.raises(ValueError):
110
- x.matrix = 5 * np.eye(4)
111
- with pytest.raises(ValueError):
112
- x.matrix = np.eye(5)
113
- with pytest.raises(ValueError):
114
- x.matrix = np.eye(4).dot([5,1,1,1])
115
- with pytest.raises(ValueError):
116
- x.rotation = np.array([1,2])
117
- with pytest.raises(ValueError):
118
- x.rotation = np.array([1,2,3])
119
- with pytest.raises(ValueError):
120
- x.rotation = np.array([1,2,3,4])
121
- with pytest.raises(ValueError):
122
- x.translation = np.array([1,2,3,4])
123
- with pytest.raises(ValueError):
124
- x.scale = np.array([1,2,3,4])
 
spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/audio.py DELETED
@@ -1,179 +0,0 @@
- import torch
- import torch.nn as nn
- import torch.nn.functional as F
- from torchlibrosa.stft import Spectrogram, LogmelFilterBank
- 
- def get_audio_encoder(name: str):
-     if name == "Cnn14":
-         return Cnn14
-     else:
-         raise Exception('The audio encoder name {} is incorrect or not supported'.format(name))
- 
- 
- class ConvBlock(nn.Module):
-     def __init__(self, in_channels, out_channels):
- 
-         super(ConvBlock, self).__init__()
- 
-         self.conv1 = nn.Conv2d(in_channels=in_channels,
-                                out_channels=out_channels,
-                                kernel_size=(3, 3), stride=(1, 1),
-                                padding=(1, 1), bias=False)
- 
-         self.conv2 = nn.Conv2d(in_channels=out_channels,
-                                out_channels=out_channels,
-                                kernel_size=(3, 3), stride=(1, 1),
-                                padding=(1, 1), bias=False)
- 
-         self.bn1 = nn.BatchNorm2d(out_channels)
-         self.bn2 = nn.BatchNorm2d(out_channels)
- 
- 
-     def forward(self, input, pool_size=(2, 2), pool_type='avg'):
- 
-         x = input
-         x = F.relu_(self.bn1(self.conv1(x)))
-         x = F.relu_(self.bn2(self.conv2(x)))
-         if pool_type == 'max':
-             x = F.max_pool2d(x, kernel_size=pool_size)
-         elif pool_type == 'avg':
-             x = F.avg_pool2d(x, kernel_size=pool_size)
-         elif pool_type == 'avg+max':
-             x1 = F.avg_pool2d(x, kernel_size=pool_size)
-             x2 = F.max_pool2d(x, kernel_size=pool_size)
-             x = x1 + x2
-         else:
-             raise Exception('Incorrect argument!')
- 
-         return x
- 
- 
- class ConvBlock5x5(nn.Module):
-     def __init__(self, in_channels, out_channels):
- 
-         super(ConvBlock5x5, self).__init__()
- 
-         self.conv1 = nn.Conv2d(in_channels=in_channels,
-                                out_channels=out_channels,
-                                kernel_size=(5, 5), stride=(1, 1),
-                                padding=(2, 2), bias=False)
- 
-         self.bn1 = nn.BatchNorm2d(out_channels)
- 
- 
-     def forward(self, input, pool_size=(2, 2), pool_type='avg'):
- 
-         x = input
-         x = F.relu_(self.bn1(self.conv1(x)))
-         if pool_type == 'max':
-             x = F.max_pool2d(x, kernel_size=pool_size)
-         elif pool_type == 'avg':
-             x = F.avg_pool2d(x, kernel_size=pool_size)
-         elif pool_type == 'avg+max':
-             x1 = F.avg_pool2d(x, kernel_size=pool_size)
-             x2 = F.max_pool2d(x, kernel_size=pool_size)
-             x = x1 + x2
-         else:
-             raise Exception('Incorrect argument!')
- 
-         return x
- 
- 
- class AttBlock(nn.Module):
-     def __init__(self, n_in, n_out, activation='linear', temperature=1.):
-         super(AttBlock, self).__init__()
- 
-         self.activation = activation
-         self.temperature = temperature
-         self.att = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True)
-         self.cla = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True)
- 
-         self.bn_att = nn.BatchNorm1d(n_out)
- 
-     def forward(self, x):
-         # x: (n_samples, n_in, n_time)
-         norm_att = torch.softmax(torch.clamp(self.att(x), -10, 10), dim=-1)
-         cla = self.nonlinear_transform(self.cla(x))
-         x = torch.sum(norm_att * cla, dim=2)
-         return x, norm_att, cla
- 
-     def nonlinear_transform(self, x):
-         if self.activation == 'linear':
-             return x
-         elif self.activation == 'sigmoid':
-             return torch.sigmoid(x)
- 
- 
- class Cnn14(nn.Module):
-     def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
-                  fmax, classes_num, out_emb):
- 
-         super(Cnn14, self).__init__()
- 
-         window = 'hann'
-         center = True
-         pad_mode = 'reflect'
-         ref = 1.0
-         amin = 1e-10
-         top_db = None
- 
-         # Spectrogram extractor
-         self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
-             win_length=window_size, window=window, center=center, pad_mode=pad_mode,
-             freeze_parameters=True)
- 
-         # Logmel feature extractor
-         self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
-             n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
-             freeze_parameters=True)
- 
-         self.bn0 = nn.BatchNorm2d(64)
- 
-         self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
-         self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
-         self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
-         self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
-         self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
-         self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048)
- 
-         # out_emb is 2048 for best Cnn14
-         self.fc1 = nn.Linear(2048, out_emb, bias=True)
-         self.fc_audioset = nn.Linear(out_emb, classes_num, bias=True)
- 
-     def forward(self, input, mixup_lambda=None):
-         """
-         Input: (batch_size, data_length)
-         """
- 
-         x = self.spectrogram_extractor(input)  # (batch_size, 1, time_steps, freq_bins)
-         x = self.logmel_extractor(x)  # (batch_size, 1, time_steps, mel_bins)
- 
-         x = x.transpose(1, 3)
-         x = self.bn0(x)
-         x = x.transpose(1, 3)
- 
-         x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
-         x = F.dropout(x, p=0.2, training=self.training)
-         x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
-         x = F.dropout(x, p=0.2, training=self.training)
-         x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
-         x = F.dropout(x, p=0.2, training=self.training)
-         x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
-         x = F.dropout(x, p=0.2, training=self.training)
-         x = self.conv_block5(x, pool_size=(2, 2), pool_type='avg')
-         x = F.dropout(x, p=0.2, training=self.training)
-         x = self.conv_block6(x, pool_size=(1, 1), pool_type='avg')
-         x = F.dropout(x, p=0.2, training=self.training)
-         x = torch.mean(x, dim=3)
- 
-         (x1, _) = torch.max(x, dim=2)
-         x2 = torch.mean(x, dim=2)
-         x = x1 + x2
-         x = F.dropout(x, p=0.5, training=self.training)
-         x = F.relu_(self.fc1(x))
-         embedding = F.dropout(x, p=0.5, training=self.training)
-         clipwise_output = torch.sigmoid(self.fc_audioset(x))
- 
-         output_dict = {'clipwise_output': clipwise_output, 'embedding': embedding}
- 
-         return output_dict
 
spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/nar_tts_modules.py DELETED
@@ -1,138 +0,0 @@
- import torch
- from torch import nn
- 
- from text_to_speech.modules.commons.layers import LayerNorm
- import torch.nn.functional as F
- 
- class DurationPredictor(torch.nn.Module):
-     def __init__(self, idim, n_layers=2, n_chans=384, kernel_size=3, dropout_rate=0.1, offset=1.0):
-         super(DurationPredictor, self).__init__()
-         self.offset = offset
-         self.conv = torch.nn.ModuleList()
-         self.kernel_size = kernel_size
-         for idx in range(n_layers):
-             in_chans = idim if idx == 0 else n_chans
-             self.conv += [torch.nn.Sequential(
-                 torch.nn.Conv1d(in_chans, n_chans, kernel_size, stride=1, padding=kernel_size // 2),
-                 torch.nn.ReLU(),
-                 LayerNorm(n_chans, dim=1),
-                 torch.nn.Dropout(dropout_rate)
-             )]
-         self.linear = nn.Sequential(torch.nn.Linear(n_chans, 1), nn.Softplus())
- 
-     def forward(self, x, x_padding=None):
-         x = x.transpose(1, -1)  # (B, idim, Tmax)
-         for f in self.conv:
-             x = f(x)  # (B, C, Tmax)
-             if x_padding is not None:
-                 x = x * (1 - x_padding.float())[:, None, :]
- 
-         x = self.linear(x.transpose(1, -1))  # [B, T, C]
-         if x_padding is not None:
-             x = x * (1 - x_padding.float())[:, :, None]  # (B, T, C)
-         x = x[..., 0]  # (B, Tmax)
-         return x
- 
- 
- class SyntaDurationPredictor(torch.nn.Module):
-     def __init__(self, idim, n_layers=2, n_chans=384, kernel_size=3, dropout_rate=0.1, offset=1.0):
-         super(SyntaDurationPredictor, self).__init__()
-         from text_to_speech.modules.tts.syntaspeech.syntactic_graph_encoder import GraphAuxEnc
-         self.graph_encoder = GraphAuxEnc(in_dim=idim, hid_dim=idim, out_dim=idim)
-         self.offset = offset
-         self.conv = torch.nn.ModuleList()
-         self.kernel_size = kernel_size
-         for idx in range(n_layers):
-             in_chans = idim if idx == 0 else n_chans
-             self.conv += [torch.nn.Sequential(
-                 torch.nn.Conv1d(in_chans, n_chans, kernel_size, stride=1, padding=kernel_size // 2),
-                 torch.nn.ReLU(),
-                 LayerNorm(n_chans, dim=1),
-                 torch.nn.Dropout(dropout_rate)
-             )]
-         self.linear = nn.Sequential(torch.nn.Linear(n_chans, 1), nn.Softplus())
- 
-     def forward(self, x, x_padding=None, ph2word=None, graph_lst=None, etypes_lst=None):
-         x = x.transpose(1, -1)  # (B, idim, Tmax)
-         assert ph2word is not None and graph_lst is not None and etypes_lst is not None
-         x_graph = self.graph_encoder(graph_lst, x, ph2word, etypes_lst)
-         x = x + x_graph * 1.
- 
-         for f in self.conv:
-             x = f(x)  # (B, C, Tmax)
-             if x_padding is not None:
-                 x = x * (1 - x_padding.float())[:, None, :]
- 
-         x = self.linear(x.transpose(1, -1))  # [B, T, C]
-         if x_padding is not None:
-             x = x * (1 - x_padding.float())[:, :, None]  # (B, T, C)
-         x = x[..., 0]  # (B, Tmax)
-         return x
- 
- 
- class LengthRegulator(torch.nn.Module):
-     def __init__(self, pad_value=0.0):
-         super(LengthRegulator, self).__init__()
-         self.pad_value = pad_value
- 
-     def forward(self, dur, dur_padding=None, alpha=1.0):
-         """
-         Example (no batch dim version):
-             1. dur = [2,2,3]
-             2. token_idx = [[1],[2],[3]], dur_cumsum = [2,4,7], dur_cumsum_prev = [0,2,4]
-             3. token_mask = [[1,1,0,0,0,0,0],
-                              [0,0,1,1,0,0,0],
-                              [0,0,0,0,1,1,1]]
-             4. token_idx * token_mask = [[1,1,0,0,0,0,0],
-                                          [0,0,2,2,0,0,0],
-                                          [0,0,0,0,3,3,3]]
-             5. (token_idx * token_mask).sum(0) = [1,1,2,2,3,3,3]
- 
-         :param dur: Batch of durations of each frame (B, T_txt)
-         :param dur_padding: Batch of padding of each frame (B, T_txt)
-         :param alpha: duration rescale coefficient
-         :return:
-             mel2ph (B, T_speech)
-         """
-         assert alpha > 0
-         dur = torch.round(dur.float() * alpha).long()
-         if dur_padding is not None:
-             dur = dur * (1 - dur_padding.long())
-         token_idx = torch.arange(1, dur.shape[1] + 1)[None, :, None].to(dur.device)
-         dur_cumsum = torch.cumsum(dur, 1)
-         dur_cumsum_prev = F.pad(dur_cumsum, [1, -1], mode='constant', value=0)
- 
-         pos_idx = torch.arange(dur.sum(-1).max())[None, None].to(dur.device)
-         token_mask = (pos_idx >= dur_cumsum_prev[:, :, None]) & (pos_idx < dur_cumsum[:, :, None])
-         mel2token = (token_idx * token_mask.long()).sum(1)
-         return mel2token
- 
- 
- class PitchPredictor(torch.nn.Module):
-     def __init__(self, idim, n_layers=5, n_chans=384, odim=2, kernel_size=5, dropout_rate=0.1):
-         super(PitchPredictor, self).__init__()
-         self.conv = torch.nn.ModuleList()
-         self.kernel_size = kernel_size
-         for idx in range(n_layers):
-             in_chans = idim if idx == 0 else n_chans
-             self.conv += [torch.nn.Sequential(
-                 torch.nn.Conv1d(in_chans, n_chans, kernel_size, padding=kernel_size // 2),
-                 torch.nn.ReLU(),
-                 LayerNorm(n_chans, dim=1),
-                 torch.nn.Dropout(dropout_rate)
-             )]
-         self.linear = torch.nn.Linear(n_chans, odim)
- 
-     def forward(self, x):
-         """
-         :param x: [B, T, H]
-         :return: [B, T, H]
-         """
-         x = x.transpose(1, -1)  # (B, idim, Tmax)
-         for f in self.conv:
-             x = f(x)  # (B, C, Tmax)
-         x = self.linear(x.transpose(1, -1))  # (B, Tmax, H)
-         return x
- 
- 
- class EnergyPredictor(PitchPredictor):
-     pass
 
spaces/AIGuardians/SummarizeWikipediaDocument/app.py DELETED
@@ -1,58 +0,0 @@
- import gradio as gr
- import wikipedia
- from transformers import pipeline
- import os
- 
- # Setting to use the 0th GPU
- os.environ["CUDA_VISIBLE_DEVICES"] = "0"
- 
- 
- def summarize(text):
-     # Uses the default summarization model (a distilled bart-cnn checkpoint)
-     summarizer = pipeline("summarization")
- 
-     # To use the t5-base model for summarization:
-     # summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base", framework="tf")
- 
-     summary_text = summarizer(text, max_length=100, min_length=5, do_sample=False)[0]['summary_text']
-     print(f'Length of initial text: {len(text)}')
-     print(f'Length of summary: {len(summary_text)}')
-     print(summary_text)
-     return summary_text
- 
- 
- def greet(name):
-     return "Hello " + name.orig_name + "!"
- 
- 
- def get_ocr():
-     return ''
- 
- 
- def search_wiki(text):
-     return wikipedia.search(text)
- 
- 
- def get_wiki(search_term):
-     # text = wikipedia.summary(search_term)
-     orig_text_len = len(search_term)
-     text = summarize(search_term)
-     sum_length = len(text)
-     return [text, orig_text_len, sum_length]
- 
- 
- # def inference(file):
- #     get_ocr()
- #     model = AutoModelForSeq2SeqLM.from_pretrained("sgugger/my-awesome-model")
- 
- out_sum_text = gr.Textbox(label='Summarized Text', lines=15)
- out_orig_test_len = gr.Number(label='Original Text Length')
- out_sum_text_len = gr.Number(label='Summarized Text Length')
- 
- iface = gr.Interface(fn=get_wiki,
-                      inputs=gr.Textbox(lines=50, placeholder="Paste article here....", label='Article to Summarize'),
-                      outputs=[out_sum_text, out_orig_test_len, out_sum_text_len],
-                      title='Article Summary',
-                      description='Paste in an article and it will be summarized.'
-                      )
- iface.launch()  # To create a public link, set `share=True` in `launch()`.
 
spaces/AchyuthGamer/OpenGPT/g4f/Provider/ChatgptDuo.py DELETED
@@ -1,57 +0,0 @@
- from __future__ import annotations
- 
- from curl_cffi.requests import AsyncSession
- from .base_provider import AsyncProvider, format_prompt
- 
- 
- class ChatgptDuo(AsyncProvider):
-     url = "https://chatgptduo.com"
-     supports_gpt_35_turbo = True
-     working = True
- 
-     @classmethod
-     async def create_async(
-         cls,
-         model: str,
-         messages: list[dict[str, str]],
-         proxy: str = None,
-         timeout: int = 30,
-         **kwargs
-     ) -> str:
-         async with AsyncSession(
-             impersonate="chrome107",
-             proxies={"https": proxy},
-             timeout=timeout
-         ) as session:
-             prompt = format_prompt(messages)
-             data = {
-                 "prompt": prompt,
-                 "search": prompt,
-                 "purpose": "ask",
-             }
-             response = await session.post(f"{cls.url}/", data=data)
-             response.raise_for_status()
-             data = response.json()
- 
-             cls._sources = [{
-                 "title": source["title"],
-                 "url": source["link"],
-                 "snippet": source["snippet"]
-             } for source in data["results"]]
- 
-             return data["answer"]
- 
-     @classmethod
-     def get_sources(cls):
-         return cls._sources
- 
-     @classmethod
-     @property
-     def params(cls):
-         params = [
-             ("model", "str"),
-             ("messages", "list[dict[str, str]]"),
-             ("stream", "bool"),
-         ]
-         param = ", ".join([": ".join(p) for p in params])
-         return f"g4f.provider.{cls.__name__} supports: ({param})"
 
spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/midas_net.py DELETED
@@ -1,76 +0,0 @@
- """MidasNet: Network for monocular depth estimation trained by mixing several datasets.
- This file contains code that is adapted from
- https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py
- """
- import torch
- import torch.nn as nn
- 
- from .base_model import BaseModel
- from .blocks import FeatureFusionBlock, Interpolate, _make_encoder
- 
- 
- class MidasNet(BaseModel):
-     """Network for monocular depth estimation.
-     """
- 
-     def __init__(self, path=None, features=256, non_negative=True):
-         """Init.
- 
-         Args:
-             path (str, optional): Path to saved model. Defaults to None.
-             features (int, optional): Number of features. Defaults to 256.
-             backbone (str, optional): Backbone network for encoder. Defaults to resnet50
-         """
-         print("Loading weights: ", path)
- 
-         super(MidasNet, self).__init__()
- 
-         use_pretrained = False if path is None else True
- 
-         self.pretrained, self.scratch = _make_encoder(backbone="resnext101_wsl", features=features, use_pretrained=use_pretrained)
- 
-         self.scratch.refinenet4 = FeatureFusionBlock(features)
-         self.scratch.refinenet3 = FeatureFusionBlock(features)
-         self.scratch.refinenet2 = FeatureFusionBlock(features)
-         self.scratch.refinenet1 = FeatureFusionBlock(features)
- 
-         self.scratch.output_conv = nn.Sequential(
-             nn.Conv2d(features, 128, kernel_size=3, stride=1, padding=1),
-             Interpolate(scale_factor=2, mode="bilinear"),
-             nn.Conv2d(128, 32, kernel_size=3, stride=1, padding=1),
-             nn.ReLU(True),
-             nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0),
-             nn.ReLU(True) if non_negative else nn.Identity(),
-         )
- 
-         if path:
-             self.load(path)
- 
-     def forward(self, x):
-         """Forward pass.
- 
-         Args:
-             x (tensor): input data (image)
- 
-         Returns:
-             tensor: depth
-         """
- 
-         layer_1 = self.pretrained.layer1(x)
-         layer_2 = self.pretrained.layer2(layer_1)
-         layer_3 = self.pretrained.layer3(layer_2)
-         layer_4 = self.pretrained.layer4(layer_3)
- 
-         layer_1_rn = self.scratch.layer1_rn(layer_1)
-         layer_2_rn = self.scratch.layer2_rn(layer_2)
-         layer_3_rn = self.scratch.layer3_rn(layer_3)
-         layer_4_rn = self.scratch.layer4_rn(layer_4)
- 
-         path_4 = self.scratch.refinenet4(layer_4_rn)
-         path_3 = self.scratch.refinenet3(path_4, layer_3_rn)
-         path_2 = self.scratch.refinenet2(path_3, layer_2_rn)
-         path_1 = self.scratch.refinenet1(path_2, layer_1_rn)
- 
-         out = self.scratch.output_conv(path_1)
- 
-         return torch.squeeze(out, dim=1)
 
spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/describer/prisoner.py DELETED
@@ -1,49 +0,0 @@
- from __future__ import annotations
- 
- from typing import TYPE_CHECKING, Any, List
- 
- from . import describer_registry as DescriberRegistry
- from .base import BaseDescriber
- 
- if TYPE_CHECKING:
-     from agentverse.environments import BaseEnvironment
- 
- 
- @DescriberRegistry.register("prisoner")
- class PrisonerDescriber(BaseDescriber):
-     switch_func = {
-         "Both Suspects": "Suspect2",
-         "Suspect1": "Suspect2",
-         "Suspect2": "Suspect1",
-     }
-     receiver: str = "Both Suspects"
- 
-     def get_env_description(self, environment: BaseEnvironment) -> List[str]:
-         if environment.cnt_turn == 0:
-             environment.agents[0].set_receiver({"all"})
-             environment.agents[1].set_receiver({"Police", "Suspect1"})
-             environment.agents[2].set_receiver({"Police", "Suspect2"})
- 
-         # Only the police agent has to choose whether to talk to Suspect1 or Suspect2
-         description = []
-         for i, agent in enumerate(environment.agents):
-             if i == 0:
-                 # police -> suspect1 -> police -> suspect2
-                 if environment.cnt_turn % 2 == 1:
-                     description.append("")
-                     continue
- 
-                 # The police chooses which suspect to talk to
-                 description.append(f"You are now talking to {self.receiver}")
- 
-                 receiver = "all" if self.receiver == "Both Suspects" else self.receiver
-                 self.receiver = self.switch_func[self.receiver]
-                 agent.set_receiver({receiver})
- 
-             else:
-                 description.append("")
- 
-         return description
- 
-     def reset(self) -> None:
-         pass
 
spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/cleaners.py DELETED
@@ -1,146 +0,0 @@
- import re
- 
- 
- def japanese_cleaners(text):
-     from text.japanese import japanese_to_romaji_with_accent
-     text = japanese_to_romaji_with_accent(text)
-     text = re.sub(r'([A-Za-z])$', r'\1.', text)
-     return text
- 
- 
- def japanese_cleaners2(text):
-     return japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…')
- 
- 
- def korean_cleaners(text):
-     '''Pipeline for Korean text'''
-     from text.korean import latin_to_hangul, number_to_hangul, divide_hangul
-     text = latin_to_hangul(text)
-     text = number_to_hangul(text)
-     text = divide_hangul(text)
-     text = re.sub(r'([\u3131-\u3163])$', r'\1.', text)
-     return text
- 
- 
- def chinese_cleaners(text):
-     '''Pipeline for Chinese text'''
-     from text.mandarin import number_to_chinese, chinese_to_bopomofo, latin_to_bopomofo
-     text = number_to_chinese(text)
-     text = chinese_to_bopomofo(text)
-     text = latin_to_bopomofo(text)
-     text = re.sub(r'([ˉˊˇˋ˙])$', r'\1。', text)
-     return text
- 
- 
- def zh_ja_mixture_cleaners(text):
-     from text.mandarin import chinese_to_romaji
-     from text.japanese import japanese_to_romaji_with_accent
-     text = re.sub(r'\[ZH\](.*?)\[ZH\]',
-                   lambda x: chinese_to_romaji(x.group(1))+' ', text)
-     text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_romaji_with_accent(
-         x.group(1)).replace('ts', 'ʦ').replace('u', 'ɯ').replace('...', '…')+' ', text)
-     text = re.sub(r'\s+$', '', text)
-     text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
-     return text
- 
- 
- def sanskrit_cleaners(text):
-     text = text.replace('॥', '।').replace('ॐ', 'ओम्')
-     if text[-1] != '।':
-         text += ' ।'
-     return text
- 
- 
- def cjks_cleaners(text):
-     from text.mandarin import chinese_to_lazy_ipa
-     from text.japanese import japanese_to_ipa
-     from text.korean import korean_to_lazy_ipa
-     from text.sanskrit import devanagari_to_ipa
-     from text.english import english_to_lazy_ipa
-     text = re.sub(r'\[ZH\](.*?)\[ZH\]',
-                   lambda x: chinese_to_lazy_ipa(x.group(1))+' ', text)
-     text = re.sub(r'\[JA\](.*?)\[JA\]',
-                   lambda x: japanese_to_ipa(x.group(1))+' ', text)
-     text = re.sub(r'\[KO\](.*?)\[KO\]',
-                   lambda x: korean_to_lazy_ipa(x.group(1))+' ', text)
-     text = re.sub(r'\[SA\](.*?)\[SA\]',
-                   lambda x: devanagari_to_ipa(x.group(1))+' ', text)
-     text = re.sub(r'\[EN\](.*?)\[EN\]',
-                   lambda x: english_to_lazy_ipa(x.group(1))+' ', text)
-     text = re.sub(r'\s+$', '', text)
-     text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
-     return text
- 
- 
- def cjke_cleaners(text):
-     from text.mandarin import chinese_to_lazy_ipa
-     from text.japanese import japanese_to_ipa
-     from text.korean import korean_to_ipa
-     from text.english import english_to_ipa2
-     text = re.sub(r'\[ZH\](.*?)\[ZH\]', lambda x: chinese_to_lazy_ipa(x.group(1)).replace(
-         'ʧ', 'tʃ').replace('ʦ', 'ts').replace('ɥan', 'ɥæn')+' ', text)
-     text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_ipa(x.group(1)).replace('ʧ', 'tʃ').replace(
-         'ʦ', 'ts').replace('ɥan', 'ɥæn').replace('ʥ', 'dz')+' ', text)
-     text = re.sub(r'\[KO\](.*?)\[KO\]',
-                   lambda x: korean_to_ipa(x.group(1))+' ', text)
-     text = re.sub(r'\[EN\](.*?)\[EN\]', lambda x: english_to_ipa2(x.group(1)).replace('ɑ', 'a').replace(
-         'ɔ', 'o').replace('ɛ', 'e').replace('ɪ', 'i').replace('ʊ', 'u')+' ', text)
-     text = re.sub(r'\s+$', '', text)
-     text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
-     return text
- 
- 
- def cjke_cleaners2(text):
-     from text.mandarin import chinese_to_ipa
-     from text.japanese import japanese_to_ipa2
-     from text.korean import korean_to_ipa
-     from text.english import english_to_ipa2
-     text = re.sub(r'\[ZH\](.*?)\[ZH\]',
-                   lambda x: chinese_to_ipa(x.group(1))+' ', text)
-     text = re.sub(r'\[JA\](.*?)\[JA\]',
-                   lambda x: japanese_to_ipa2(x.group(1))+' ', text)
-     text = re.sub(r'\[KO\](.*?)\[KO\]',
-                   lambda x: korean_to_ipa(x.group(1))+' ', text)
-     text = re.sub(r'\[EN\](.*?)\[EN\]',
-                   lambda x: english_to_ipa2(x.group(1))+' ', text)
-     text = re.sub(r'\s+$', '', text)
-     text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
-     return text
- 
- 
- def thai_cleaners(text):
-     from text.thai import num_to_thai, latin_to_thai
-     text = num_to_thai(text)
-     text = latin_to_thai(text)
-     return text
- 
- 
- def shanghainese_cleaners(text):
-     from text.shanghainese import shanghainese_to_ipa
-     text = shanghainese_to_ipa(text)
-     text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
-     return text
- 
- 
- def chinese_dialect_cleaners(text):
-     from text.mandarin import chinese_to_ipa2
-     from text.japanese import japanese_to_ipa3
-     from text.shanghainese import shanghainese_to_ipa
-     from text.cantonese import cantonese_to_ipa
-     from text.english import english_to_lazy_ipa2
-     from text.ngu_dialect import ngu_dialect_to_ipa
-     text = re.sub(r'\[ZH\](.*?)\[ZH\]',
-                   lambda x: chinese_to_ipa2(x.group(1))+' ', text)
-     text = re.sub(r'\[JA\](.*?)\[JA\]',
-                   lambda x: japanese_to_ipa3(x.group(1)).replace('Q', 'ʔ')+' ', text)
-     text = re.sub(r'\[SH\](.*?)\[SH\]', lambda x: shanghainese_to_ipa(x.group(1)).replace('1', '˥˧').replace('5',
-                   '˧˧˦').replace('6', '˩˩˧').replace('7', '˥').replace('8', '˩˨').replace('ᴀ', 'ɐ').replace('ᴇ', 'e')+' ', text)
-     text = re.sub(r'\[GD\](.*?)\[GD\]',
-                   lambda x: cantonese_to_ipa(x.group(1))+' ', text)
-     text = re.sub(r'\[EN\](.*?)\[EN\]',
-                   lambda x: english_to_lazy_ipa2(x.group(1))+' ', text)
-     text = re.sub(r'\[([A-Z]{2})\](.*?)\[\1\]', lambda x: ngu_dialect_to_ipa(x.group(2), x.group(
-         1)).replace('ʣ', 'dz').replace('ʥ', 'dʑ').replace('ʦ', 'ts').replace('ʨ', 'tɕ')+' ', text)
-     text = re.sub(r'\s+$', '', text)
-     text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
-     return text
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/consistency_models/__init__.py DELETED
@@ -1 +0,0 @@
- from .pipeline_consistency_models import ConsistencyModelPipeline
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_zero.py DELETED
@@ -1,645 +0,0 @@
- import copy
- from dataclasses import dataclass
- from typing import Callable, List, Optional, Union
- 
- import numpy as np
- import PIL
- import torch
- import torch.nn.functional as F
- from torch.nn.functional import grid_sample
- from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
- 
- from diffusers.models import AutoencoderKL, UNet2DConditionModel
- from diffusers.pipelines.stable_diffusion import StableDiffusionPipeline, StableDiffusionSafetyChecker
- from diffusers.schedulers import KarrasDiffusionSchedulers
- from diffusers.utils import BaseOutput
- 
- 
- def rearrange_0(tensor, f):
-     F, C, H, W = tensor.size()
-     tensor = torch.permute(torch.reshape(tensor, (F // f, f, C, H, W)), (0, 2, 1, 3, 4))
-     return tensor
- 
- 
- def rearrange_1(tensor):
-     B, C, F, H, W = tensor.size()
-     return torch.reshape(torch.permute(tensor, (0, 2, 1, 3, 4)), (B * F, C, H, W))
- 
- 
- def rearrange_3(tensor, f):
-     F, D, C = tensor.size()
-     return torch.reshape(tensor, (F // f, f, D, C))
- 
- 
- def rearrange_4(tensor):
-     B, F, D, C = tensor.size()
-     return torch.reshape(tensor, (B * F, D, C))
- 
- 
- class CrossFrameAttnProcessor:
-     """
-     Cross frame attention processor. Each frame attends the first frame.
- 
-     Args:
-         batch_size: The number that represents actual batch size, other than the frames.
-             For example, calling unet with a single prompt and num_images_per_prompt=1, batch_size should be equal to
-             2, due to classifier-free guidance.
-     """
- 
-     def __init__(self, batch_size=2):
-         self.batch_size = batch_size
- 
-     def __call__(self, attn, hidden_states, encoder_hidden_states=None, attention_mask=None):
-         batch_size, sequence_length, _ = hidden_states.shape
-         attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size)
-         query = attn.to_q(hidden_states)
- 
-         is_cross_attention = encoder_hidden_states is not None
-         if encoder_hidden_states is None:
-             encoder_hidden_states = hidden_states
-         elif attn.norm_cross:
-             encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states)
- 
-         key = attn.to_k(encoder_hidden_states)
-         value = attn.to_v(encoder_hidden_states)
- 
-         # Cross Frame Attention
-         if not is_cross_attention:
-             video_length = key.size()[0] // self.batch_size
-             first_frame_index = [0] * video_length
- 
-             # rearrange keys to have batch and frames in the 1st and 2nd dims respectively
-             key = rearrange_3(key, video_length)
-             key = key[:, first_frame_index]
-             # rearrange values to have batch and frames in the 1st and 2nd dims respectively
-             value = rearrange_3(value, video_length)
-             value = value[:, first_frame_index]
- 
-             # rearrange back to original shape
-             key = rearrange_4(key)
-             value = rearrange_4(value)
- 
-         query = attn.head_to_batch_dim(query)
-         key = attn.head_to_batch_dim(key)
-         value = attn.head_to_batch_dim(value)
- 
-         attention_probs = attn.get_attention_scores(query, key, attention_mask)
-         hidden_states = torch.bmm(attention_probs, value)
-         hidden_states = attn.batch_to_head_dim(hidden_states)
- 
-         # linear proj
-         hidden_states = attn.to_out[0](hidden_states)
-         # dropout
-         hidden_states = attn.to_out[1](hidden_states)
- 
-         return hidden_states
- 
- 
- class CrossFrameAttnProcessor2_0:
-     """
-     Cross frame attention processor with scaled_dot_product attention of Pytorch 2.0.
- 
-     Args:
-         batch_size: The number that represents actual batch size, other than the frames.
-             For example, calling unet with a single prompt and num_images_per_prompt=1, batch_size should be equal to
-             2, due to classifier-free guidance.
-     """
- 
-     def __init__(self, batch_size=2):
-         if not hasattr(F, "scaled_dot_product_attention"):
-             raise ImportError("AttnProcessor2_0 requires PyTorch 2.0, to use it, please upgrade PyTorch to 2.0.")
-         self.batch_size = batch_size
- 
-     def __call__(self, attn, hidden_states, encoder_hidden_states=None, attention_mask=None):
-         batch_size, sequence_length, _ = (
-             hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape
-         )
-         inner_dim = hidden_states.shape[-1]
- 
-         if attention_mask is not None:
-             attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size)
-             # scaled_dot_product_attention expects attention_mask shape to be
-             # (batch, heads, source_length, target_length)
-             attention_mask = attention_mask.view(batch_size, attn.heads, -1, attention_mask.shape[-1])
- 
-         query = attn.to_q(hidden_states)
- 
-         is_cross_attention = encoder_hidden_states is not None
-         if encoder_hidden_states is None:
-             encoder_hidden_states = hidden_states
-         elif attn.norm_cross:
-             encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states)
- 
-         key = attn.to_k(encoder_hidden_states)
-         value = attn.to_v(encoder_hidden_states)
- 
-         # Cross Frame Attention
-         if not is_cross_attention:
-             video_length = key.size()[0] // self.batch_size
-             first_frame_index = [0] * video_length
- 
-             # rearrange keys to have batch and frames in the 1st and 2nd dims respectively
-             key = rearrange_3(key, video_length)
-             key = key[:, first_frame_index]
-             # rearrange values to have batch and frames in the 1st and 2nd dims respectively
-             value = rearrange_3(value, video_length)
-             value = value[:, first_frame_index]
- 
-             # rearrange back to original shape
-             key = rearrange_4(key)
-             value = rearrange_4(value)
- 
-         head_dim = inner_dim // attn.heads
-         query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
-         key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
-         value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
- 
-         # the output of sdp = (batch, num_heads, seq_len, head_dim)
-         # TODO: add support for attn.scale when we move to Torch 2.1
-         hidden_states = F.scaled_dot_product_attention(
-             query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
-         )
- 
-         hidden_states = hidden_states.transpose(1, 2).reshape(batch_size, -1, attn.heads * head_dim)
-         hidden_states = hidden_states.to(query.dtype)
- 
-         # linear proj
-         hidden_states = attn.to_out[0](hidden_states)
-         # dropout
-         hidden_states = attn.to_out[1](hidden_states)
-         return hidden_states
- 
- 
- @dataclass
- class TextToVideoPipelineOutput(BaseOutput):
-     r"""
-     Output class for zero-shot text-to-video pipeline.
- 
-     Args:
-         images (`[List[PIL.Image.Image]`, `np.ndarray`]):
-             List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
-             num_channels)`.
-         nsfw_content_detected (`[List[bool]]`):
-             List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
-             `None` if safety checking could not be performed.
-     """
-     images: Union[List[PIL.Image.Image], np.ndarray]
-     nsfw_content_detected: Optional[List[bool]]
- 
- 
- def coords_grid(batch, ht, wd, device):
-     # Adapted from https://github.com/princeton-vl/RAFT/blob/master/core/utils/utils.py
-     coords = torch.meshgrid(torch.arange(ht, device=device), torch.arange(wd, device=device))
-     coords = torch.stack(coords[::-1], dim=0).float()
-     return coords[None].repeat(batch, 1, 1, 1)
- 
- 
- def warp_single_latent(latent, reference_flow):
-     """
-     Warp latent of a single frame with given flow
- 
-     Args:
-         latent: latent code of a single frame
-         reference_flow: flow which to warp the latent with
- 
-     Returns:
-         warped: warped latent
-     """
-     _, _, H, W = reference_flow.size()
-     _, _, h, w = latent.size()
-     coords0 = coords_grid(1, H, W, device=latent.device).to(latent.dtype)
- 
-     coords_t0 = coords0 + reference_flow
-     coords_t0[:, 0] /= W
-     coords_t0[:, 1] /= H
- 
-     coords_t0 = coords_t0 * 2.0 - 1.0
-     coords_t0 = F.interpolate(coords_t0, size=(h, w), mode="bilinear")
-     coords_t0 = torch.permute(coords_t0, (0, 2, 3, 1))
- 
-     warped = grid_sample(latent, coords_t0, mode="nearest", padding_mode="reflection")
-     return warped
- 
- 
- def create_motion_field(motion_field_strength_x, motion_field_strength_y, frame_ids, device, dtype):
-     """
-     Create translation motion field
- 
-     Args:
-         motion_field_strength_x: motion strength along x-axis
-         motion_field_strength_y: motion strength along y-axis
-         frame_ids: indexes of the frames the latents of which are being processed.
-             This is needed when we perform chunk-by-chunk inference
-         device: device
-         dtype: dtype
- 
-     Returns:
- 
-     """
-     seq_length = len(frame_ids)
-     reference_flow = torch.zeros((seq_length, 2, 512, 512), device=device, dtype=dtype)
-     for fr_idx in range(seq_length):
-         reference_flow[fr_idx, 0, :, :] = motion_field_strength_x * (frame_ids[fr_idx])
-         reference_flow[fr_idx, 1, :, :] = motion_field_strength_y * (frame_ids[fr_idx])
-     return reference_flow
- 
- 
- def create_motion_field_and_warp_latents(motion_field_strength_x, motion_field_strength_y, frame_ids, latents):
-     """
-     Creates translation motion and warps the latents accordingly
- 
-     Args:
-         motion_field_strength_x: motion strength along x-axis
-         motion_field_strength_y: motion strength along y-axis
-         frame_ids: indexes of the frames the latents of which are being processed.
-             This is needed when we perform chunk-by-chunk inference
-         latents: latent codes of frames
- 
-     Returns:
-         warped_latents: warped latents
-     """
-     motion_field = create_motion_field(
-         motion_field_strength_x=motion_field_strength_x,
-         motion_field_strength_y=motion_field_strength_y,
-         frame_ids=frame_ids,
-         device=latents.device,
-         dtype=latents.dtype,
-     )
-     warped_latents = latents.clone().detach()
-     for i in range(len(warped_latents)):
-         warped_latents[i] = warp_single_latent(latents[i][None], motion_field[i][None])
-     return warped_latents
- 
- 
- class TextToVideoZeroPipeline(StableDiffusionPipeline):
-     r"""
-     Pipeline for zero-shot text-to-video generation using Stable Diffusion.
- 
-     This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
-     implemented for all pipelines (downloading, saving, running on a particular device, etc.).
- 
-     Args:
-         vae ([`AutoencoderKL`]):
-             Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
-         text_encoder ([`CLIPTextModel`]):
-             Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
-         tokenizer (`CLIPTokenizer`):
-             A [`~transformers.CLIPTokenizer`] to tokenize text.
-         unet ([`UNet2DConditionModel`]):
-             A [`UNet3DConditionModel`] to denoise the encoded video latents.
-         scheduler ([`SchedulerMixin`]):
-             A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
-             [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
-         safety_checker ([`StableDiffusionSafetyChecker`]):
-             Classification module that estimates whether generated images could be considered offensive or harmful.
-             Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details
-             about a model's potential harms.
-         feature_extractor ([`CLIPImageProcessor`]):
-             A [`CLIPImageProcessor`] to extract features from generated images; used as inputs to the `safety_checker`.
-     """
- 
-     def __init__(
-         self,
-         vae: AutoencoderKL,
-         text_encoder: CLIPTextModel,
-         tokenizer: CLIPTokenizer,
-         unet: UNet2DConditionModel,
-         scheduler: KarrasDiffusionSchedulers,
-         safety_checker: StableDiffusionSafetyChecker,
-         feature_extractor: CLIPImageProcessor,
-         requires_safety_checker: bool = True,
-     ):
-         super().__init__(
-             vae, text_encoder, tokenizer, unet, scheduler, safety_checker, feature_extractor, requires_safety_checker
-         )
-         processor = (
-             CrossFrameAttnProcessor2_0(batch_size=2)
-             if hasattr(F, "scaled_dot_product_attention")
-             else CrossFrameAttnProcessor(batch_size=2)
-         )
-         self.unet.set_attn_processor(processor)
- 
-     def forward_loop(self, x_t0, t0, t1, generator):
-         """
-         Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance.
- 
-         Args:
-             x_t0:
-                 Latent code at time t0.
-             t0:
-                 Timestep at t0.
-             t1:
-                 Timestep at t1.
-             generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
-                 A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
-                 generation deterministic.
- 
-         Returns:
-             x_t1:
-                 Forward process applied to x_t0 from time t0 to t1.
-         """
-         eps = torch.randn(x_t0.size(), generator=generator, dtype=x_t0.dtype, device=x_t0.device)
-         alpha_vec = torch.prod(self.scheduler.alphas[t0:t1])
-         x_t1 = torch.sqrt(alpha_vec) * x_t0 + torch.sqrt(1 - alpha_vec) * eps
-         return x_t1
- 
-     def backward_loop(
-         self,
-         latents,
-         timesteps,
-         prompt_embeds,
-         guidance_scale,
-         callback,
-         callback_steps,
-         num_warmup_steps,
-         extra_step_kwargs,
-         cross_attention_kwargs=None,
-     ):
-         """
-         Perform backward process given list of time steps.
- 
-         Args:
-             latents:
-                 Latents at time timesteps[0].
-             timesteps:
-                 Time steps along which to perform backward process.
-             prompt_embeds:
-                 Pre-generated text embeddings.
-             guidance_scale:
-                 A higher guidance scale value encourages the model to generate images closely linked to the text
-                 `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
-             callback (`Callable`, *optional*):
-                 A function that calls every `callback_steps` steps during inference. The function is called with the
-                 following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
-             callback_steps (`int`, *optional*, defaults to 1):
-                 The frequency at which the `callback` function is called. If not specified, the callback is called at
-                 every step.
-             extra_step_kwargs:
-                 Extra_step_kwargs.
-             cross_attention_kwargs:
-                 A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in
-                 [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
-             num_warmup_steps:
-                 number of warmup steps.
- 
-         Returns:
-             latents:
-                 Latents of backward process output at time timesteps[-1].
-         """
-         do_classifier_free_guidance = guidance_scale > 1.0
-         num_steps = (len(timesteps) - num_warmup_steps) // self.scheduler.order
-         with self.progress_bar(total=num_steps) as progress_bar:
-             for i, t in enumerate(timesteps):
-                 # expand the latents if we are doing classifier free guidance
-                 latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
-                 latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
- 
-                 # predict the noise residual
-                 noise_pred = self.unet(
-                     latent_model_input,
-                     t,
-                     encoder_hidden_states=prompt_embeds,
-                     cross_attention_kwargs=cross_attention_kwargs,
-                 ).sample
- 
-                 # perform guidance
-                 if do_classifier_free_guidance:
-                     noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
-                     noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
- 
-                 # compute the previous noisy sample x_t -> x_t-1
-                 latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
- 
-                 # call the callback, if provided
-                 if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
-                     progress_bar.update()
-                     if callback is not None and i % callback_steps == 0:
-                         callback(i, t, latents)
-         return latents.clone().detach()
- 
-     @torch.no_grad()
-     def __call__(
-         self,
-         prompt: Union[str, List[str]],
-         video_length: Optional[int] = 8,
-         height: Optional[int] = None,
-         width: Optional[int] = None,
-         num_inference_steps: int = 50,
-         guidance_scale: float = 7.5,
-         negative_prompt: Optional[Union[str, List[str]]] = None,
-         num_videos_per_prompt: Optional[int] = 1,
-         eta: float = 0.0,
-         generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
-         latents: Optional[torch.FloatTensor] = None,
-         motion_field_strength_x: float = 12,
-         motion_field_strength_y: float = 12,
-         output_type: Optional[str] = "tensor",
-         return_dict: bool = True,
-         callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
-         callback_steps: Optional[int] = 1,
-         t0: int = 44,
-         t1: int = 47,
-         frame_ids: Optional[List[int]] = None,
-     ):
-         """
-         The call function to the pipeline for generation.
- 
-         Args:
-             prompt (`str` or `List[str]`, *optional*):
-                 The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
-             video_length (`int`, *optional*, defaults to 8):
-                 The number of generated video frames.
-             height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
-                 The height in pixels of the generated image.
-             width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
-                 The width in pixels of the generated image.
-             num_inference_steps (`int`, *optional*, defaults to 50):
-                 The number of denoising steps. More denoising steps usually lead to a higher quality image at the
-                 expense of slower inference.
-             guidance_scale (`float`, *optional*, defaults to 7.5):
-                 A higher guidance scale value encourages the model to generate images closely linked to the text
-                 `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
-             negative_prompt (`str` or `List[str]`, *optional*):
-                 The prompt or prompts to guide what to not include in video generation. If not defined, you need to
-                 pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
-             num_videos_per_prompt (`int`, *optional*, defaults to 1):
-                 The number of videos to generate per prompt.
-             eta (`float`, *optional*, defaults to 0.0):
-                 Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
-                 to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
-             generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
-                 A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
-                 generation deterministic.
-             latents (`torch.FloatTensor`, *optional*):
-                 Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
-                 generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                 tensor is generated by sampling using the supplied random `generator`.
-             output_type (`str`, *optional*, defaults to `"tensor"`):
-                 The output format of the generated video. Choose between `"latent"` and `"numpy"`.
-             return_dict (`bool`, *optional*, defaults to `True`):
-                 Whether or not to return a
-                 [`~pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput`] instead of
-                 a plain tuple.
-             callback (`Callable`, *optional*):
-                 A function that calls every `callback_steps` steps during inference. The function is called with the
-                 following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
-             callback_steps (`int`, *optional*, defaults to 1):
-                 The frequency at which the `callback` function is called. If not specified, the callback is called at
-                 every step.
-             motion_field_strength_x (`float`, *optional*, defaults to 12):
-                 Strength of motion in generated video along x-axis. See the [paper](https://arxiv.org/abs/2303.13439),
-                 Sect. 3.3.1.
-             motion_field_strength_y (`float`, *optional*, defaults to 12):
-                 Strength of motion in generated video along y-axis. See the [paper](https://arxiv.org/abs/2303.13439),
-                 Sect. 3.3.1.
-             t0 (`int`, *optional*, defaults to 44):
-                 Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the
-                 [paper](https://arxiv.org/abs/2303.13439), Sect. 3.3.1.
-             t1 (`int`, *optional*, defaults to 47):
-                 Timestep t1. Should be in the range [t0 + 1, num_inference_steps - 1]. See the
-                 [paper](https://arxiv.org/abs/2303.13439), Sect. 3.3.1.
-             frame_ids (`List[int]`, *optional*):
-                 Indexes of the frames that are being generated. This is used when generating longer videos
-                 chunk-by-chunk.
- 
-         Returns:
-             [`~pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput`]:
-                 The output contains a `ndarray` of the generated video, when `output_type` != `"latent"`, otherwise a
-                 latent code of generated videos and a list of `bool`s indicating whether the corresponding generated
-                 video contains "not-safe-for-work" (nsfw) content.
-         """
-         assert video_length > 0
-         if frame_ids is None:
-             frame_ids = list(range(video_length))
-         assert len(frame_ids) == video_length
- 
-         assert num_videos_per_prompt == 1
- 
-         if isinstance(prompt, str):
-             prompt = [prompt]
-         if isinstance(negative_prompt, str):
-             negative_prompt = [negative_prompt]
- 
-         # Default height and width to unet
-         height = height or self.unet.config.sample_size * self.vae_scale_factor
-         width = width or self.unet.config.sample_size * self.vae_scale_factor
- 
-         # Check inputs. Raise error if not correct
-         self.check_inputs(prompt, height, width, callback_steps)
- 
-         # Define call parameters
-         batch_size = 1 if isinstance(prompt, str) else len(prompt)
-         device = self._execution_device
-         # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
-         # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
-         # corresponds to doing no classifier free guidance.
-         do_classifier_free_guidance = guidance_scale > 1.0
- 
-         # Encode input prompt
-         prompt_embeds = self._encode_prompt(
-             prompt, device, num_videos_per_prompt, do_classifier_free_guidance, negative_prompt
-         )
- 
-         # Prepare timesteps
-         self.scheduler.set_timesteps(num_inference_steps, device=device)
-         timesteps = self.scheduler.timesteps
- 
-         # Prepare latent variables
-         num_channels_latents = self.unet.config.in_channels
-         latents = self.prepare_latents(
-             batch_size * num_videos_per_prompt,
-             num_channels_latents,
-             height,
-             width,
-             prompt_embeds.dtype,
-             device,
-             generator,
-             latents,
-         )
-         # Prepare extra step kwargs.
-         extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-         num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
- 
-         # Perform the first backward process up to time T_1
-         x_1_t1 = self.backward_loop(
-             timesteps=timesteps[: -t1 - 1],
-             prompt_embeds=prompt_embeds,
-             latents=latents,
-             guidance_scale=guidance_scale,
-             callback=callback,
-             callback_steps=callback_steps,
-             extra_step_kwargs=extra_step_kwargs,
-             num_warmup_steps=num_warmup_steps,
-         )
-         scheduler_copy = copy.deepcopy(self.scheduler)
- 
-         # Perform the second backward process up to time T_0
-         x_1_t0 = self.backward_loop(
-             timesteps=timesteps[-t1 - 1 : -t0 - 1],
-             prompt_embeds=prompt_embeds,
-             latents=x_1_t1,
-             guidance_scale=guidance_scale,
-             callback=callback,
-             callback_steps=callback_steps,
-             extra_step_kwargs=extra_step_kwargs,
-             num_warmup_steps=0,
-         )
- 
-         # Propagate first frame latents at time T_0 to remaining frames
-         x_2k_t0 = x_1_t0.repeat(video_length - 1, 1, 1, 1)
- 
-         # Add motion in latents at time T_0
-         x_2k_t0 = create_motion_field_and_warp_latents(
-             motion_field_strength_x=motion_field_strength_x,
-             motion_field_strength_y=motion_field_strength_y,
-             latents=x_2k_t0,
-             frame_ids=frame_ids[1:],
-         )
- 
-         # Perform forward process up to time T_1
-         x_2k_t1 = self.forward_loop(
-             x_t0=x_2k_t0,
-             t0=timesteps[-t0 - 1].item(),
-             t1=timesteps[-t1 - 1].item(),
-             generator=generator,
-         )
- 
-         # Perform backward process from time T_1 to 0
-         x_1k_t1 = torch.cat([x_1_t1, x_2k_t1])
-         b, l, d = prompt_embeds.size()
-         prompt_embeds = prompt_embeds[:, None].repeat(1, video_length, 1, 1).reshape(b * video_length, l, d)
- 
-         self.scheduler = scheduler_copy
-         x_1k_0 = self.backward_loop(
-             timesteps=timesteps[-t1 - 1 :],
-             prompt_embeds=prompt_embeds,
-             latents=x_1k_t1,
-             guidance_scale=guidance_scale,
-             callback=callback,
-             callback_steps=callback_steps,
-             extra_step_kwargs=extra_step_kwargs,
-             num_warmup_steps=0,
-         )
-         latents = x_1k_0
- 
-         # manually for max memory savings
-         if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
-             self.unet.to("cpu")
-             torch.cuda.empty_cache()
- 
-         if output_type == "latent":
-             image = latents
-             has_nsfw_concept = None
-         else:
-             image = self.decode_latents(latents)
-             # Run safety checker
-             image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
- 
-         # Offload last model to CPU
-         if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
-             self.final_offload_hook.offload()
- 
-         if not return_dict:
-             return (image, has_nsfw_concept)
- 
-         return TextToVideoPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
 
spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_fpn_iou_1x_coco.py DELETED
@@ -1,6 +0,0 @@
- _base_ = './faster_rcnn_r50_fpn_1x_coco.py'
- model = dict(
-     roi_head=dict(
-         bbox_head=dict(
-             reg_decoded_bbox=True,
-             loss_bbox=dict(type='IoULoss', loss_weight=10.0))))
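With `reg_decoded_bbox=True`, the head regresses decoded (x1, y1, x2, y2) boxes and scores them with `IoULoss` instead of a Smooth-L1 on box deltas. A minimal sketch of an IoU loss in that spirit, as an illustration of the idea rather than mmdetection's exact implementation (mmdet's default mode is `1 - IoU` rather than the `-log(IoU)` used here):

import torch

def iou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # pred/target: (N, 4) boxes as (x1, y1, x2, y2)
    lt = torch.max(pred[:, :2], target[:, :2])  # top-left of intersection
    rb = torch.min(pred[:, 2:], target[:, 2:])  # bottom-right of intersection
    wh = (rb - lt).clamp(min=0)
    overlap = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    ious = overlap / (area_p + area_t - overlap + eps)
    return -ious.clamp(min=eps).log().mean()  # -log(IoU); weighted 10.0 in the config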
 
spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco.py DELETED
@@ -1,4 +0,0 @@
- _base_ = './mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco.py'
- # learning policy
- lr_config = dict(step=[28, 34])
- runner = dict(type='EpochBasedRunner', max_epochs=36)
 
spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v3/README.md DELETED
@@ -1,28 +0,0 @@
- # Searching for MobileNetV3
-
- ## Introduction
-
- <!-- [ALGORITHM] -->
-
- ```latex
- @inproceedings{Howard_2019_ICCV,
-   title={Searching for MobileNetV3},
-   author={Howard, Andrew and Sandler, Mark and Chu, Grace and Chen, Liang-Chieh and Chen, Bo and Tan, Mingxing and Wang, Weijun and Zhu, Yukun and Pang, Ruoming and Vasudevan, Vijay and Le, Quoc V. and Adam, Hartwig},
-   booktitle={The IEEE International Conference on Computer Vision (ICCV)},
-   pages={1314-1324},
-   month={October},
-   year={2019},
-   doi={10.1109/ICCV.2019.00140}
- }
- ```
-
- ## Results and models
-
- ### Cityscapes
-
- | Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
- | ------ | ------------------ | --------- | ------: | -------: | -------------- | ----: | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
- | LRASPP | M-V3-D8 | 512x1024 | 320000 | 8.9 | 15.22 | 69.54 | 70.89 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v3/lraspp_m-v3-d8_512x1024_320k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v3/lraspp_m-v3-d8_512x1024_320k_cityscapes/lraspp_m-v3-d8_512x1024_320k_cityscapes_20201224_220337-cfe8fb07.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v3/lraspp_m-v3-d8_512x1024_320k_cityscapes/lraspp_m-v3-d8_512x1024_320k_cityscapes-20201224_220337.log.json) |
- | LRASPP | M-V3-D8 (scratch) | 512x1024 | 320000 | 8.9 | 14.77 | 67.87 | 69.78 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v3/lraspp_m-v3-d8_scratch_512x1024_320k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v3/lraspp_m-v3-d8_scratch_512x1024_320k_cityscapes/lraspp_m-v3-d8_scratch_512x1024_320k_cityscapes_20201224_220337-9f29cd72.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v3/lraspp_m-v3-d8_scratch_512x1024_320k_cityscapes/lraspp_m-v3-d8_scratch_512x1024_320k_cityscapes-20201224_220337.log.json) |
- | LRASPP | M-V3s-D8 | 512x1024 | 320000 | 5.3 | 23.64 | 64.11 | 66.42 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v3/lraspp_m-v3s-d8_512x1024_320k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v3/lraspp_m-v3s-d8_512x1024_320k_cityscapes/lraspp_m-v3s-d8_512x1024_320k_cityscapes_20201224_223935-61565b34.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v3/lraspp_m-v3s-d8_512x1024_320k_cityscapes/lraspp_m-v3s-d8_512x1024_320k_cityscapes-20201224_223935.log.json) |
- | LRASPP | M-V3s-D8 (scratch) | 512x1024 | 320000 | 5.3 | 24.50 | 62.74 | 65.01 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v3/lraspp_m-v3s-d8_scratch_512x1024_320k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v3/lraspp_m-v3s-d8_scratch_512x1024_320k_cityscapes/lraspp_m-v3s-d8_scratch_512x1024_320k_cityscapes_20201224_223935-03daeabb.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v3/lraspp_m-v3s-d8_scratch_512x1024_320k_cityscapes/lraspp_m-v3s-d8_scratch_512x1024_320k_cityscapes-20201224_223935.log.json) |
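A hedged sketch of running one of the checkpoints listed above through the MMSegmentation inference API (mmseg 0.x style, matching this repo; the config path comes from the table and the image path is a placeholder):

from mmseg.apis import init_segmentor, inference_segmentor

config = 'configs/mobilenet_v3/lraspp_m-v3-d8_512x1024_320k_cityscapes.py'
checkpoint = 'lraspp_m-v3-d8_512x1024_320k_cityscapes_20201224_220337-cfe8fb07.pth'

model = init_segmentor(config, checkpoint, device='cuda:0')
result = inference_segmentor(model, 'demo.png')  # per-pixel class ids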
 
spaces/Andy1621/uniformer_image_segmentation/configs/resnest/deeplabv3_s101-d8_512x512_160k_ade20k.py DELETED
@@ -1,9 +0,0 @@
- _base_ = '../deeplabv3/deeplabv3_r101-d8_512x512_160k_ade20k.py'
- model = dict(
-     pretrained='open-mmlab://resnest101',
-     backbone=dict(
-         type='ResNeSt',
-         stem_channels=128,
-         radix=2,
-         reduction_factor=4,
-         avg_down_stride=True))
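Configs like this one only override fields on top of their `_base_` file; everything else (decode head, dataset, schedule) is inherited. A hedged sketch of inspecting the merged result with mmcv (`Config.fromfile` is mmcv's standard loader; the printed values assume the base DeepLabV3 config is unmodified):

from mmcv import Config

cfg = Config.fromfile('configs/resnest/deeplabv3_s101-d8_512x512_160k_ade20k.py')
print(cfg.model.backbone.type)  # 'ResNeSt', overridden here
print(cfg.model.pretrained)     # 'open-mmlab://resnest101'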
 
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/html_readable_style.css DELETED
@@ -1,33 +0,0 @@
- .container {
-   max-width: 600px;
-   margin-left: auto;
-   margin-right: auto;
-   background-color: rgb(31, 41, 55);
-   padding: 3em;
-   word-break: break-word;
-   overflow-wrap: anywhere;
-   color: #efefef !important;
- }
-
- .container p, .container li {
-   font-size: 16px !important;
-   color: #efefef !important;
-   margin-bottom: 22px;
-   line-height: 1.4 !important;
- }
-
- .container li > p {
-   display: inline !important;
- }
-
- .container code {
-   overflow-x: auto;
- }
-
- .container :not(pre) > code {
-   white-space: normal !important;
- }
-
- .container .hoverable {
-   font-size: 14px;
- }
 
spaces/Apex-X/Tm/roop/ui.py DELETED
@@ -1,231 +0,0 @@
- import os
- import webbrowser
- import customtkinter as ctk
- from typing import Callable, Tuple
- import cv2
- from PIL import Image, ImageOps
-
- import roop.globals
- import roop.metadata
- from roop.face_analyser import get_one_face
- from roop.capturer import get_video_frame, get_video_frame_total
- from roop.predicter import predict_frame
- from roop.processors.frame.core import get_frame_processors_modules
- from roop.utilities import is_image, is_video, resolve_relative_path
-
- ROOT = None
- ROOT_HEIGHT = 700
- ROOT_WIDTH = 600
-
- PREVIEW = None
- PREVIEW_MAX_HEIGHT = 700
- PREVIEW_MAX_WIDTH = 1200
-
- RECENT_DIRECTORY_SOURCE = None
- RECENT_DIRECTORY_TARGET = None
- RECENT_DIRECTORY_OUTPUT = None
-
- preview_label = None
- preview_slider = None
- source_label = None
- target_label = None
- status_label = None
-
-
- def init(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
-     global ROOT, PREVIEW
-
-     ROOT = create_root(start, destroy)
-     PREVIEW = create_preview(ROOT)
-
-     return ROOT
-
-
- def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
-     global source_label, target_label, status_label
-
-     ctk.deactivate_automatic_dpi_awareness()
-     ctk.set_appearance_mode('system')
-     ctk.set_default_color_theme(resolve_relative_path('ui.json'))
-
-     root = ctk.CTk()
-     root.minsize(ROOT_WIDTH, ROOT_HEIGHT)
-     root.title(f'{roop.metadata.name} {roop.metadata.version}')
-     root.configure()
-     root.protocol('WM_DELETE_WINDOW', lambda: destroy())
-
-     source_label = ctk.CTkLabel(root, text=None)
-     source_label.place(relx=0.1, rely=0.1, relwidth=0.3, relheight=0.25)
-
-     target_label = ctk.CTkLabel(root, text=None)
-     target_label.place(relx=0.6, rely=0.1, relwidth=0.3, relheight=0.25)
-
-     source_button = ctk.CTkButton(root, text='Select a face', cursor='hand2', command=lambda: select_source_path())
-     source_button.place(relx=0.1, rely=0.4, relwidth=0.3, relheight=0.1)
-
-     target_button = ctk.CTkButton(root, text='Select a target', cursor='hand2', command=lambda: select_target_path())
-     target_button.place(relx=0.6, rely=0.4, relwidth=0.3, relheight=0.1)
-
-     keep_fps_value = ctk.BooleanVar(value=roop.globals.keep_fps)
-     keep_fps_checkbox = ctk.CTkSwitch(root, text='Keep fps', variable=keep_fps_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_fps', not roop.globals.keep_fps))
-     keep_fps_checkbox.place(relx=0.1, rely=0.6)
-
-     keep_frames_value = ctk.BooleanVar(value=roop.globals.keep_frames)
-     keep_frames_switch = ctk.CTkSwitch(root, text='Keep frames', variable=keep_frames_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_frames', keep_frames_value.get()))
-     keep_frames_switch.place(relx=0.1, rely=0.65)
-
-     keep_audio_value = ctk.BooleanVar(value=roop.globals.keep_audio)
-     keep_audio_switch = ctk.CTkSwitch(root, text='Keep audio', variable=keep_audio_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_audio', keep_audio_value.get()))
-     keep_audio_switch.place(relx=0.6, rely=0.6)
-
-     many_faces_value = ctk.BooleanVar(value=roop.globals.many_faces)
-     many_faces_switch = ctk.CTkSwitch(root, text='Many faces', variable=many_faces_value, cursor='hand2', command=lambda: setattr(roop.globals, 'many_faces', many_faces_value.get()))
-     many_faces_switch.place(relx=0.6, rely=0.65)
-
-     start_button = ctk.CTkButton(root, text='Start', cursor='hand2', command=lambda: select_output_path(start))
-     start_button.place(relx=0.15, rely=0.75, relwidth=0.2, relheight=0.05)
-
-     stop_button = ctk.CTkButton(root, text='Destroy', cursor='hand2', command=lambda: destroy())
-     stop_button.place(relx=0.4, rely=0.75, relwidth=0.2, relheight=0.05)
-
-     preview_button = ctk.CTkButton(root, text='Preview', cursor='hand2', command=lambda: toggle_preview())
-     preview_button.place(relx=0.65, rely=0.75, relwidth=0.2, relheight=0.05)
-
-     status_label = ctk.CTkLabel(root, text=None, justify='center')
-     status_label.place(relx=0.1, rely=0.9, relwidth=0.8)
-
-     donate_label = ctk.CTkLabel(root, text='^_^ Donate to project ^_^', justify='center', cursor='hand2')
-     donate_label.place(relx=0.1, rely=0.95, relwidth=0.8)
-     donate_label.configure(text_color=ctk.ThemeManager.theme.get('RoopDonate').get('text_color'))
-     donate_label.bind('<Button>', lambda event: webbrowser.open('https://github.com/sponsors/s0md3v'))
-
-     return root
-
-
- def create_preview(parent: ctk.CTkToplevel) -> ctk.CTkToplevel:
-     global preview_label, preview_slider
-
-     preview = ctk.CTkToplevel(parent)
-     preview.withdraw()
-     preview.title('Preview')
-     preview.configure()
-     preview.protocol('WM_DELETE_WINDOW', lambda: toggle_preview())
-     preview.resizable(width=False, height=False)
-
-     preview_label = ctk.CTkLabel(preview, text=None)
-     preview_label.pack(fill='both', expand=True)
-
-     preview_slider = ctk.CTkSlider(preview, from_=0, to=0, command=lambda frame_value: update_preview(frame_value))
-
-     return preview
-
-
- def update_status(text: str) -> None:
-     status_label.configure(text=text)
-     ROOT.update()
-
-
- def select_source_path() -> None:
-     global RECENT_DIRECTORY_SOURCE
-
-     PREVIEW.withdraw()
-     source_path = ctk.filedialog.askopenfilename(title='select a source image', initialdir=RECENT_DIRECTORY_SOURCE)
-     if is_image(source_path):
-         roop.globals.source_path = source_path
-         RECENT_DIRECTORY_SOURCE = os.path.dirname(roop.globals.source_path)
-         image = render_image_preview(roop.globals.source_path, (200, 200))
-         source_label.configure(image=image)
-     else:
-         roop.globals.source_path = None
-         source_label.configure(image=None)
-
-
- def select_target_path() -> None:
-     global RECENT_DIRECTORY_TARGET
-
-     PREVIEW.withdraw()
-     target_path = ctk.filedialog.askopenfilename(title='select a target image or video', initialdir=RECENT_DIRECTORY_TARGET)
-     if is_image(target_path):
-         roop.globals.target_path = target_path
-         RECENT_DIRECTORY_TARGET = os.path.dirname(roop.globals.target_path)
-         image = render_image_preview(roop.globals.target_path, (200, 200))
-         target_label.configure(image=image)
-     elif is_video(target_path):
-         roop.globals.target_path = target_path
-         RECENT_DIRECTORY_TARGET = os.path.dirname(roop.globals.target_path)
-         video_frame = render_video_preview(target_path, (200, 200))
-         target_label.configure(image=video_frame)
-     else:
-         roop.globals.target_path = None
-         target_label.configure(image=None)
-
-
- def select_output_path(start: Callable[[], None]) -> None:
-     global RECENT_DIRECTORY_OUTPUT
-
-     if is_image(roop.globals.target_path):
-         output_path = ctk.filedialog.asksaveasfilename(title='save image output file', defaultextension='.png', initialfile='output.png', initialdir=RECENT_DIRECTORY_OUTPUT)
-     elif is_video(roop.globals.target_path):
-         output_path = ctk.filedialog.asksaveasfilename(title='save video output file', defaultextension='.mp4', initialfile='output.mp4', initialdir=RECENT_DIRECTORY_OUTPUT)
-     else:
-         output_path = None
-     if output_path:
-         roop.globals.output_path = output_path
-         RECENT_DIRECTORY_OUTPUT = os.path.dirname(roop.globals.output_path)
-         start()
-
-
- def render_image_preview(image_path: str, size: Tuple[int, int]) -> ctk.CTkImage:
-     image = Image.open(image_path)
-     if size:
-         image = ImageOps.fit(image, size, Image.LANCZOS)
-     return ctk.CTkImage(image, size=image.size)
-
-
- def render_video_preview(video_path: str, size: Tuple[int, int], frame_number: int = 0) -> ctk.CTkImage:
-     capture = cv2.VideoCapture(video_path)
-     if frame_number:
-         capture.set(cv2.CAP_PROP_POS_FRAMES, frame_number)
-     has_frame, frame = capture.read()
-     capture.release()  # release the handle before returning so it is not leaked
-     if has_frame:
-         image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
-         if size:
-             image = ImageOps.fit(image, size, Image.LANCZOS)
-         return ctk.CTkImage(image, size=image.size)
-     cv2.destroyAllWindows()
-
-
- def toggle_preview() -> None:
-     if PREVIEW.state() == 'normal':
-         PREVIEW.withdraw()
-     elif roop.globals.source_path and roop.globals.target_path:
-         init_preview()
-         update_preview()
-         PREVIEW.deiconify()
-
-
- def init_preview() -> None:
-     if is_image(roop.globals.target_path):
-         preview_slider.pack_forget()
-     if is_video(roop.globals.target_path):
-         video_frame_total = get_video_frame_total(roop.globals.target_path)
-         preview_slider.configure(to=video_frame_total)
-         preview_slider.pack(fill='x')
-         preview_slider.set(0)
-
-
- def update_preview(frame_number: int = 0) -> None:
-     if roop.globals.source_path and roop.globals.target_path:
-         temp_frame = get_video_frame(roop.globals.target_path, frame_number)
-         if predict_frame(temp_frame):
-             quit()
-         for frame_processor in get_frame_processors_modules(roop.globals.frame_processors):
-             temp_frame = frame_processor.process_frame(
-                 get_one_face(cv2.imread(roop.globals.source_path)),
-                 temp_frame
-             )
-         image = Image.fromarray(cv2.cvtColor(temp_frame, cv2.COLOR_BGR2RGB))
-         image = ImageOps.contain(image, (PREVIEW_MAX_WIDTH, PREVIEW_MAX_HEIGHT), Image.LANCZOS)
-         image = ctk.CTkImage(image, size=image.size)
-         preview_label.configure(image=image)
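This module exposes `init(start, destroy)` and expects the caller to supply both callbacks; in roop they come from the core module. A hedged wiring sketch with placeholder callbacks (the `start`/`destroy` bodies here are illustrative, not roop's actual entry points):

from roop import ui

def start() -> None:
    print('start processing')  # placeholder for the real processing entry point

def destroy() -> None:
    window.quit()  # placeholder teardown

window = ui.init(start, destroy)
window.mainloop()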
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/codingstatemachine.py DELETED
@@ -1,90 +0,0 @@
- ######################## BEGIN LICENSE BLOCK ########################
- # The Original Code is mozilla.org code.
- #
- # The Initial Developer of the Original Code is
- # Netscape Communications Corporation.
- # Portions created by the Initial Developer are Copyright (C) 1998
- # the Initial Developer. All Rights Reserved.
- #
- # Contributor(s):
- #   Mark Pilgrim - port to Python
- #
- # This library is free software; you can redistribute it and/or
- # modify it under the terms of the GNU Lesser General Public
- # License as published by the Free Software Foundation; either
- # version 2.1 of the License, or (at your option) any later version.
- #
- # This library is distributed in the hope that it will be useful,
- # but WITHOUT ANY WARRANTY; without even the implied warranty of
- # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- # Lesser General Public License for more details.
- #
- # You should have received a copy of the GNU Lesser General Public
- # License along with this library; if not, write to the Free Software
- # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
- # 02110-1301  USA
- ######################### END LICENSE BLOCK #########################
-
- import logging
-
- from .codingstatemachinedict import CodingStateMachineDict
- from .enums import MachineState
-
-
- class CodingStateMachine:
-     """
-     A state machine to verify a byte sequence for a particular encoding. For
-     each byte the detector receives, it will feed that byte to every active
-     state machine available, one byte at a time. The state machine changes its
-     state based on its previous state and the byte it receives. There are 3
-     states in a state machine that are of interest to an auto-detector:
-
-     START state: This is the state to start with, or a legal byte sequence
-       (i.e. a valid code point) for a character has been identified.
-
-     ME state: This indicates that the state machine identified a byte sequence
-       that is specific to the charset it is designed for and that
-       there is no other possible encoding which can contain this byte
-       sequence. This will lead to an immediate positive answer for
-       the detector.
-
-     ERROR state: This indicates the state machine identified an illegal byte
-       sequence for that encoding. This will lead to an immediate
-       negative answer for this encoding. Detector will exclude this
-       encoding from consideration from here on.
-     """
-
-     def __init__(self, sm: CodingStateMachineDict) -> None:
-         self._model = sm
-         self._curr_byte_pos = 0
-         self._curr_char_len = 0
-         self._curr_state = MachineState.START
-         self.active = True
-         self.logger = logging.getLogger(__name__)
-         self.reset()
-
-     def reset(self) -> None:
-         self._curr_state = MachineState.START
-
-     def next_state(self, c: int) -> int:
-         # for each byte we get its class
-         # if it is the first byte, we also get the byte length
-         byte_class = self._model["class_table"][c]
-         if self._curr_state == MachineState.START:
-             self._curr_byte_pos = 0
-             self._curr_char_len = self._model["char_len_table"][byte_class]
-         # from the byte's class and the state_table, we get its next state
-         curr_state = self._curr_state * self._model["class_factor"] + byte_class
-         self._curr_state = self._model["state_table"][curr_state]
-         self._curr_byte_pos += 1
-         return self._curr_state
-
-     def get_current_charlen(self) -> int:
-         return self._curr_char_len
-
-     def get_coding_state_machine(self) -> str:
-         return self._model["name"]
-
-     @property
-     def language(self) -> str:
-         return self._model["language"]
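A short sketch of feeding bytes through the machine, using one of chardet's bundled models (`UTF8_SM_MODEL` lives in `chardet.mbcssm`; iterating a `bytes` object yields the `int` byte values `next_state` expects):

from pip._vendor.chardet.codingstatemachine import CodingStateMachine
from pip._vendor.chardet.enums import MachineState
from pip._vendor.chardet.mbcssm import UTF8_SM_MODEL

sm = CodingStateMachine(UTF8_SM_MODEL)
for byte in "héllo".encode("utf-8"):
    if sm.next_state(byte) == MachineState.ERROR:
        print("not valid UTF-8")
        break
else:
    print("byte sequence is plausible UTF-8")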
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tomli/_re.py DELETED
@@ -1,107 +0,0 @@
- # SPDX-License-Identifier: MIT
- # SPDX-FileCopyrightText: 2021 Taneli Hukkinen
- # Licensed to PSF under a Contributor Agreement.
-
- from __future__ import annotations
-
- from datetime import date, datetime, time, timedelta, timezone, tzinfo
- from functools import lru_cache
- import re
- from typing import Any
-
- from ._types import ParseFloat
-
- # E.g.
- # - 00:32:00.999999
- # - 00:32:00
- _TIME_RE_STR = r"([01][0-9]|2[0-3]):([0-5][0-9]):([0-5][0-9])(?:\.([0-9]{1,6})[0-9]*)?"
-
- RE_NUMBER = re.compile(
-     r"""
- 0
- (?:
-     x[0-9A-Fa-f](?:_?[0-9A-Fa-f])*   # hex
-     |
-     b[01](?:_?[01])*                 # bin
-     |
-     o[0-7](?:_?[0-7])*               # oct
- )
- |
- [+-]?(?:0|[1-9](?:_?[0-9])*)         # dec, integer part
- (?P<floatpart>
-     (?:\.[0-9](?:_?[0-9])*)?         # optional fractional part
-     (?:[eE][+-]?[0-9](?:_?[0-9])*)?  # optional exponent part
- )
- """,
-     flags=re.VERBOSE,
- )
- RE_LOCALTIME = re.compile(_TIME_RE_STR)
- RE_DATETIME = re.compile(
-     rf"""
- ([0-9]{{4}})-(0[1-9]|1[0-2])-(0[1-9]|[12][0-9]|3[01])  # date, e.g. 1988-10-27
- (?:
-     [Tt ]
-     {_TIME_RE_STR}
-     (?:([Zz])|([+-])([01][0-9]|2[0-3]):([0-5][0-9]))?   # optional time offset
- )?
- """,
-     flags=re.VERBOSE,
- )
-
-
- def match_to_datetime(match: re.Match) -> datetime | date:
-     """Convert a `RE_DATETIME` match to `datetime.datetime` or `datetime.date`.
-
-     Raises ValueError if the match does not correspond to a valid date
-     or datetime.
-     """
-     (
-         year_str,
-         month_str,
-         day_str,
-         hour_str,
-         minute_str,
-         sec_str,
-         micros_str,
-         zulu_time,
-         offset_sign_str,
-         offset_hour_str,
-         offset_minute_str,
-     ) = match.groups()
-     year, month, day = int(year_str), int(month_str), int(day_str)
-     if hour_str is None:
-         return date(year, month, day)
-     hour, minute, sec = int(hour_str), int(minute_str), int(sec_str)
-     micros = int(micros_str.ljust(6, "0")) if micros_str else 0
-     if offset_sign_str:
-         tz: tzinfo | None = cached_tz(
-             offset_hour_str, offset_minute_str, offset_sign_str
-         )
-     elif zulu_time:
-         tz = timezone.utc
-     else:  # local date-time
-         tz = None
-     return datetime(year, month, day, hour, minute, sec, micros, tzinfo=tz)
-
-
- @lru_cache(maxsize=None)
- def cached_tz(hour_str: str, minute_str: str, sign_str: str) -> timezone:
-     sign = 1 if sign_str == "+" else -1
-     return timezone(
-         timedelta(
-             hours=sign * int(hour_str),
-             minutes=sign * int(minute_str),
-         )
-     )
-
-
- def match_to_localtime(match: re.Match) -> time:
-     hour_str, minute_str, sec_str, micros_str = match.groups()
-     micros = int(micros_str.ljust(6, "0")) if micros_str else 0
-     return time(int(hour_str), int(minute_str), int(sec_str), micros)
-
-
- def match_to_number(match: re.Match, parse_float: ParseFloat) -> Any:
-     if match.group("floatpart"):
-         return parse_float(match.group())
-     return int(match.group(), 0)
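A short usage sketch of the matchers above, as the tomli parser consumes them internally (`RE_DATETIME` must match from the start of the string):

from pip._vendor.tomli._re import RE_DATETIME, RE_NUMBER, match_to_datetime, match_to_number

m = RE_DATETIME.match("1988-10-27T12:30:00.123+02:00")
print(match_to_datetime(m))  # 1988-10-27 12:30:00.123000+02:00

n = RE_NUMBER.match("1_000.5e3")
print(match_to_number(n, float))  # 1000500.0, floatpart triggers parse_float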
 
spaces/AutoBG/Auto-BoardGame/README.md DELETED
@@ -1,11 +0,0 @@
- ---
- title: Auto BoardGame
- emoji: 🎲
- colorFrom: indigo
- colorTo: indigo
- sdk: streamlit
- sdk_version: 1.19.0
- app_file: Home.py
- pinned: false
- license: cc-by-nc-sa-2.0
- ---
 
spaces/AutoLLM/ArxivDigest/app.py DELETED
@@ -1,196 +0,0 @@
- import gradio as gr
- from download_new_papers import get_papers
- import utils
- from relevancy import generate_relevance_score, process_subject_fields
- from sendgrid.helpers.mail import Mail, Email, To, Content
- import sendgrid
- import os
- import openai
-
- topics = {
-     "Physics": "",
-     "Mathematics": "math",
-     "Computer Science": "cs",
-     "Quantitative Biology": "q-bio",
-     "Quantitative Finance": "q-fin",
-     "Statistics": "stat",
-     "Electrical Engineering and Systems Science": "eess",
-     "Economics": "econ"
- }
-
- physics_topics = {
-     "Astrophysics": "astro-ph",
-     "Condensed Matter": "cond-mat",
-     "General Relativity and Quantum Cosmology": "gr-qc",
-     "High Energy Physics - Experiment": "hep-ex",
-     "High Energy Physics - Lattice": "hep-lat",
-     "High Energy Physics - Phenomenology": "hep-ph",
-     "High Energy Physics - Theory": "hep-th",
-     "Mathematical Physics": "math-ph",
-     "Nonlinear Sciences": "nlin",
-     "Nuclear Experiment": "nucl-ex",
-     "Nuclear Theory": "nucl-th",
-     "Physics": "physics",
-     "Quantum Physics": "quant-ph"
- }
-
- categories_map = {
-     "Astrophysics": ["Astrophysics of Galaxies", "Cosmology and Nongalactic Astrophysics", "Earth and Planetary Astrophysics", "High Energy Astrophysical Phenomena", "Instrumentation and Methods for Astrophysics", "Solar and Stellar Astrophysics"],
-     "Condensed Matter": ["Disordered Systems and Neural Networks", "Materials Science", "Mesoscale and Nanoscale Physics", "Other Condensed Matter", "Quantum Gases", "Soft Condensed Matter", "Statistical Mechanics", "Strongly Correlated Electrons", "Superconductivity"],
-     "General Relativity and Quantum Cosmology": ["None"],
-     "High Energy Physics - Experiment": ["None"],
-     "High Energy Physics - Lattice": ["None"],
-     "High Energy Physics - Phenomenology": ["None"],
-     "High Energy Physics - Theory": ["None"],
-     "Mathematical Physics": ["None"],
-     "Nonlinear Sciences": ["Adaptation and Self-Organizing Systems", "Cellular Automata and Lattice Gases", "Chaotic Dynamics", "Exactly Solvable and Integrable Systems", "Pattern Formation and Solitons"],
-     "Nuclear Experiment": ["None"],
-     "Nuclear Theory": ["None"],
-     "Physics": ["Accelerator Physics", "Applied Physics", "Atmospheric and Oceanic Physics", "Atomic and Molecular Clusters", "Atomic Physics", "Biological Physics", "Chemical Physics", "Classical Physics", "Computational Physics", "Data Analysis, Statistics and Probability", "Fluid Dynamics", "General Physics", "Geophysics", "History and Philosophy of Physics", "Instrumentation and Detectors", "Medical Physics", "Optics", "Physics and Society", "Physics Education", "Plasma Physics", "Popular Physics", "Space Physics"],
-     "Quantum Physics": ["None"],
-     "Mathematics": ["Algebraic Geometry", "Algebraic Topology", "Analysis of PDEs", "Category Theory", "Classical Analysis and ODEs", "Combinatorics", "Commutative Algebra", "Complex Variables", "Differential Geometry", "Dynamical Systems", "Functional Analysis", "General Mathematics", "General Topology", "Geometric Topology", "Group Theory", "History and Overview", "Information Theory", "K-Theory and Homology", "Logic", "Mathematical Physics", "Metric Geometry", "Number Theory", "Numerical Analysis", "Operator Algebras", "Optimization and Control", "Probability", "Quantum Algebra", "Representation Theory", "Rings and Algebras", "Spectral Theory", "Statistics Theory", "Symplectic Geometry"],
-     "Computer Science": ["Artificial Intelligence", "Computation and Language", "Computational Complexity", "Computational Engineering, Finance, and Science", "Computational Geometry", "Computer Science and Game Theory", "Computer Vision and Pattern Recognition", "Computers and Society", "Cryptography and Security", "Data Structures and Algorithms", "Databases", "Digital Libraries", "Discrete Mathematics", "Distributed, Parallel, and Cluster Computing", "Emerging Technologies", "Formal Languages and Automata Theory", "General Literature", "Graphics", "Hardware Architecture", "Human-Computer Interaction", "Information Retrieval", "Information Theory", "Logic in Computer Science", "Machine Learning", "Mathematical Software", "Multiagent Systems", "Multimedia", "Networking and Internet Architecture", "Neural and Evolutionary Computing", "Numerical Analysis", "Operating Systems", "Other Computer Science", "Performance", "Programming Languages", "Robotics", "Social and Information Networks", "Software Engineering", "Sound", "Symbolic Computation", "Systems and Control"],
-     "Quantitative Biology": ["Biomolecules", "Cell Behavior", "Genomics", "Molecular Networks", "Neurons and Cognition", "Other Quantitative Biology", "Populations and Evolution", "Quantitative Methods", "Subcellular Processes", "Tissues and Organs"],
-     "Quantitative Finance": ["Computational Finance", "Economics", "General Finance", "Mathematical Finance", "Portfolio Management", "Pricing of Securities", "Risk Management", "Statistical Finance", "Trading and Market Microstructure"],
-     "Statistics": ["Applications", "Computation", "Machine Learning", "Methodology", "Other Statistics", "Statistics Theory"],
-     "Electrical Engineering and Systems Science": ["Audio and Speech Processing", "Image and Video Processing", "Signal Processing", "Systems and Control"],
-     "Economics": ["Econometrics", "General Economics", "Theoretical Economics"]
- }
-
-
- def sample(email, topic, physics_topic, categories, interest):
-     if not topic:
-         raise gr.Error("You must choose a topic.")
-     if topic == "Physics":
-         if isinstance(physics_topic, list):
-             raise gr.Error("You must choose a physics topic.")
-         topic = physics_topic
-         abbr = physics_topics[topic]
-     else:
-         abbr = topics[topic]
-     if categories:
-         papers = get_papers(abbr)
-         papers = [
-             t for t in papers
-             if bool(set(process_subject_fields(t['subjects'])) & set(categories))][:4]
-     else:
-         papers = get_papers(abbr, limit=4)
-     if interest:
-         if not openai.api_key:
-             raise gr.Error("Set your OpenAI API key on the left first")
-         relevancy, _ = generate_relevance_score(
-             papers,
-             query={"interest": interest},
-             threshold_score=0,
-             num_paper_in_prompt=4)
-         return "\n\n".join([paper["summarized_text"] for paper in relevancy])
-     else:
-         return "\n\n".join(f"Title: {paper['title']}\nAuthors: {paper['authors']}" for paper in papers)
-
-
- def change_subsubject(subject, physics_subject):
-     if subject != "Physics":
-         return gr.Dropdown.update(choices=categories_map[subject], value=[], visible=True)
-     else:
-         if physics_subject and not isinstance(physics_subject, list):
-             return gr.Dropdown.update(choices=categories_map[physics_subject], value=[], visible=True)
-         else:
-             return gr.Dropdown.update(choices=[], value=[], visible=False)
-
-
- def change_physics(subject):
-     if subject != "Physics":
-         return gr.Dropdown.update(visible=False, value=[])
-     else:
-         return gr.Dropdown.update(physics_topics, visible=True)
-
-
- def test(email, topic, physics_topic, categories, interest, key):
-     if not email:
-         raise gr.Error("Set your email")
-     if not key:
-         raise gr.Error("Set your SendGrid key")
-     if topic == "Physics":
-         if isinstance(physics_topic, list):
-             raise gr.Error("You must choose a physics topic.")
-         topic = physics_topic
-         abbr = physics_topics[topic]
-     else:
-         abbr = topics[topic]
-     if categories:
-         papers = get_papers(abbr)
-         papers = [
-             t for t in papers
-             if bool(set(process_subject_fields(t['subjects'])) & set(categories))][:4]
-     else:
-         papers = get_papers(abbr, limit=4)
-     if interest:
-         if not openai.api_key:
-             raise gr.Error("Set your OpenAI API key on the left first")
-         relevancy, hallucination = generate_relevance_score(
-             papers,
-             query={"interest": interest},
-             threshold_score=7,
-             num_paper_in_prompt=8)
-         body = "<br><br>".join([f'Title: <a href="{paper["main_page"]}">{paper["title"]}</a><br>Authors: {paper["authors"]}<br>Score: {paper["Relevancy score"]}<br>Reason: {paper["Reasons for match"]}' for paper in relevancy])
-         if hallucination:
-             body = "Warning: the model hallucinated some papers. We have tried to remove them, but the scores may not be accurate.<br><br>" + body
-     else:
-         body = "<br><br>".join([f'Title: <a href="{paper["main_page"]}">{paper["title"]}</a><br>Authors: {paper["authors"]}' for paper in papers])
-     sg = sendgrid.SendGridAPIClient(api_key=key)
-     from_email = Email(email)
-     to_email = To(email)
-     subject = "arXiv digest"
-     content = Content("text/html", body)
-     mail = Mail(from_email, to_email, subject, content)
-     mail_json = mail.get()
-
-     # Send an HTTP POST request to /mail/send
-     response = sg.client.mail.send.post(request_body=mail_json)
-     if response.status_code >= 200 and response.status_code <= 300:
-         return "Success!"
-     else:
-         return f"Failure: ({response.status_code})"
-
-
- def register_openai_token(token):
-     openai.api_key = token
-
- with gr.Blocks() as demo:
-     with gr.Row():
-         with gr.Column(scale=1):
-             token = gr.Textbox(label="OpenAI API Key", type="password")
-             subject = gr.Radio(
-                 list(topics.keys()), label="Topic"
-             )
-             physics_subject = gr.Dropdown(physics_topics, value=[], multiselect=False, label="Physics category", visible=False, info="")
-             subsubject = gr.Dropdown(
-                 [], value=[], multiselect=True, label="Subtopic", info="Optional. Leaving it empty will use all subtopics.", visible=False)
-             subject.change(fn=change_physics, inputs=[subject], outputs=physics_subject)
-             subject.change(fn=change_subsubject, inputs=[subject, physics_subject], outputs=subsubject)
-             physics_subject.change(fn=change_subsubject, inputs=[subject, physics_subject], outputs=subsubject)
-
-             interest = gr.Textbox(label="A natural language description of what you are interested in. We will generate relevancy scores (1-10) and explanations for the papers in the selected topics according to this statement.", info="Press shift-enter or click the button below to update.", lines=7)
-             sample_btn = gr.Button("Generate Digest")
-             sample_output = gr.Textbox(label="Sample relevancy results for your configuration.", info="For runtime purposes, this is only done on a small subset of recent papers in the topic you have selected. Papers will not be filtered by relevancy, only sorted on a scale of 1-10. Selecting more relevant subtopics will help return more relevant results.")
-         with gr.Column(scale=0.40):
-             with gr.Box():
-                 title = gr.Markdown(
-                     """
-                     # Email Setup, Optional
-                     Send an email to the address below using the configuration on the left. Requires a SendGrid token. These values are not needed to use the left side of the page.
-                     Additionally, this email will use the entire list of papers for a topic, rather than a small subset. Generating the email can take on the order of 10 minutes for large topics.
-
-                     To create a scheduled job for this, see our [Github Repository](https://github.com/AutoLLM/ArxivDigest)
-                     """,
-                     interactive=False, show_label=False)
-                 email = gr.Textbox(label="Email address", type="email", placeholder="")
-                 sendgrid_token = gr.Textbox(label="SendGrid API Key", type="password")
-                 with gr.Row():
-                     test_btn = gr.Button("Send email")
-                     output = gr.Textbox(show_label=False, placeholder="email status")
-                 test_btn.click(fn=test, inputs=[email, subject, physics_subject, subsubject, interest, sendgrid_token], outputs=output)
-     token.change(fn=register_openai_token, inputs=[token])
-     sample_btn.click(fn=sample, inputs=[email, subject, physics_subject, subsubject, interest], outputs=sample_output)
-     subject.change(fn=sample, inputs=[email, subject, physics_subject, subsubject, interest], outputs=sample_output)
-     physics_subject.change(fn=sample, inputs=[email, subject, physics_subject, subsubject, interest], outputs=sample_output)
-     subsubject.change(fn=sample, inputs=[email, subject, physics_subject, subsubject, interest], outputs=sample_output)
-     interest.submit(fn=sample, inputs=[email, subject, physics_subject, subsubject, interest], outputs=sample_output)
-
- demo.launch(show_api=False)
 
spaces/AutoLLM/AutoAgents/.github/README.md DELETED
@@ -1 +0,0 @@
- ../README-main.md
 
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/lvis_v0_5_categories.py DELETED
The diff for this file is too large to render. See raw diff
 
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/engine/launch.py DELETED
@@ -1,126 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates.
- import logging
- from datetime import timedelta
- import torch
- import torch.distributed as dist
- import torch.multiprocessing as mp
-
- from detectron2.utils import comm
-
- __all__ = ["DEFAULT_TIMEOUT", "launch"]
-
- DEFAULT_TIMEOUT = timedelta(minutes=30)
-
-
- def _find_free_port():
-     import socket
-
-     sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-     # Binding to port 0 will cause the OS to find an available port for us
-     sock.bind(("", 0))
-     port = sock.getsockname()[1]
-     sock.close()
-     # NOTE: there is still a chance the port could be taken by other processes.
-     return port
-
-
- def launch(
-     main_func,
-     num_gpus_per_machine,
-     num_machines=1,
-     machine_rank=0,
-     dist_url=None,
-     args=(),
-     timeout=DEFAULT_TIMEOUT,
- ):
-     """
-     Launch multi-gpu or distributed training.
-     This function must be called on all machines involved in the training.
-     It will spawn child processes (defined by ``num_gpus_per_machine``) on each machine.
-
-     Args:
-         main_func: a function that will be called by `main_func(*args)`
-         num_gpus_per_machine (int): number of GPUs per machine
-         num_machines (int): the total number of machines
-         machine_rank (int): the rank of this machine
-         dist_url (str): url to connect to for distributed jobs, including protocol
-             e.g. "tcp://127.0.0.1:8686".
-             Can be set to "auto" to automatically select a free port on localhost
-         timeout (timedelta): timeout of the distributed workers
-         args (tuple): arguments passed to main_func
-     """
-     world_size = num_machines * num_gpus_per_machine
-     if world_size > 1:
-         # https://github.com/pytorch/pytorch/pull/14391
-         # TODO prctl in spawned processes
-
-         if dist_url == "auto":
-             assert num_machines == 1, "dist_url=auto not supported in multi-machine jobs."
-             port = _find_free_port()
-             dist_url = f"tcp://127.0.0.1:{port}"
-         if num_machines > 1 and dist_url.startswith("file://"):
-             logger = logging.getLogger(__name__)
-             logger.warning(
-                 "file:// is not a reliable init_method in multi-machine jobs. Prefer tcp://"
-             )
-
-         mp.spawn(
-             _distributed_worker,
-             nprocs=num_gpus_per_machine,
-             args=(
-                 main_func,
-                 world_size,
-                 num_gpus_per_machine,
-                 machine_rank,
-                 dist_url,
-                 args,
-                 timeout,
-             ),
-             daemon=False,
-         )
-     else:
-         main_func(*args)
-
-
- def _distributed_worker(
-     local_rank,
-     main_func,
-     world_size,
-     num_gpus_per_machine,
-     machine_rank,
-     dist_url,
-     args,
-     timeout=DEFAULT_TIMEOUT,
- ):
-     assert torch.cuda.is_available(), "cuda is not available. Please check your installation."
-     global_rank = machine_rank * num_gpus_per_machine + local_rank
-     try:
-         dist.init_process_group(
-             backend="NCCL",
-             init_method=dist_url,
-             world_size=world_size,
-             rank=global_rank,
-             timeout=timeout,
-         )
-     except Exception as e:
-         logger = logging.getLogger(__name__)
-         logger.error("Process group URL: {}".format(dist_url))
-         raise e
-
-     # Setup the local process group (which contains ranks within the same machine)
-     assert comm._LOCAL_PROCESS_GROUP is None
-     num_machines = world_size // num_gpus_per_machine
-     for i in range(num_machines):
-         ranks_on_i = list(range(i * num_gpus_per_machine, (i + 1) * num_gpus_per_machine))
-         pg = dist.new_group(ranks_on_i)
-         if i == machine_rank:
-             comm._LOCAL_PROCESS_GROUP = pg
-
-     assert num_gpus_per_machine <= torch.cuda.device_count()
-     torch.cuda.set_device(local_rank)
-
-     # synchronize is needed here to prevent a possible timeout after calling init_process_group
-     # See: https://github.com/facebookresearch/maskrcnn-benchmark/issues/172
-     comm.synchronize()
-
-     main_func(*args)
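A hedged sketch of calling this launcher from a training script (`main` and its args are placeholders; with `dist_url="auto"` the free-port lookup above is used, so it is restricted to single-machine jobs):

from detectron2.engine import launch

def main(cfg_path: str) -> None:
    print("training with", cfg_path)  # placeholder for the real training loop

if __name__ == "__main__":
    launch(
        main,
        num_gpus_per_machine=2,
        num_machines=1,
        machine_rank=0,
        dist_url="auto",
        args=("configs/my_config.yaml",),
    )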
 
spaces/Azai8915/ChubVenusTest/Dockerfile DELETED
@@ -1,21 +0,0 @@
- FROM node:18-bullseye-slim
-
- RUN apt-get update && \
-     apt-get install -y git
-
- RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-
- WORKDIR /app
-
- RUN npm install
-
- COPY Dockerfile greeting.md* .env* ./
-
- RUN npm run build
-
- EXPOSE 7860
-
- ENV NODE_ENV=production
-
- CMD [ "npm", "start" ]
 
spaces/BartPoint/VoiceChange/infer_pack/modules/F0Predictor/PMF0Predictor.py DELETED
@@ -1,97 +0,0 @@
- from infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
- import parselmouth
- import numpy as np
-
-
- class PMF0Predictor(F0Predictor):
-     def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
-         self.hop_length = hop_length
-         self.f0_min = f0_min
-         self.f0_max = f0_max
-         self.sampling_rate = sampling_rate
-
-     def interpolate_f0(self, f0):
-         """
-         Interpolate the F0 contour across unvoiced frames.
-         """
-
-         data = np.reshape(f0, (f0.size, 1))
-
-         vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
-         vuv_vector[data > 0.0] = 1.0
-         vuv_vector[data <= 0.0] = 0.0
-
-         ip_data = data
-
-         frame_number = data.size
-         last_value = 0.0
-         for i in range(frame_number):
-             if data[i] <= 0.0:
-                 j = i + 1
-                 for j in range(i + 1, frame_number):
-                     if data[j] > 0.0:
-                         break
-                 if j < frame_number - 1:
-                     if last_value > 0.0:
-                         step = (data[j] - data[i - 1]) / float(j - i)
-                         for k in range(i, j):
-                             ip_data[k] = data[i - 1] + step * (k - i + 1)
-                     else:
-                         for k in range(i, j):
-                             ip_data[k] = data[j]
-                 else:
-                     for k in range(i, frame_number):
-                         ip_data[k] = last_value
-             else:
-                 ip_data[i] = data[i]  # this copy may be unnecessary
-                 last_value = data[i]
-
-         return ip_data[:, 0], vuv_vector[:, 0]
-
-     def compute_f0(self, wav, p_len=None):
-         x = wav
-         if p_len is None:
-             p_len = x.shape[0] // self.hop_length
-         else:
-             assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
-         time_step = self.hop_length / self.sampling_rate * 1000
-         f0 = (
-             parselmouth.Sound(x, self.sampling_rate)
-             .to_pitch_ac(
-                 time_step=time_step / 1000,
-                 voicing_threshold=0.6,
-                 pitch_floor=self.f0_min,
-                 pitch_ceiling=self.f0_max,
-             )
-             .selected_array["frequency"]
-         )
-
-         pad_size = (p_len - len(f0) + 1) // 2
-         if pad_size > 0 or p_len - len(f0) - pad_size > 0:
-             f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
-         f0, uv = self.interpolate_f0(f0)
-         return f0
-
-     def compute_f0_uv(self, wav, p_len=None):
-         x = wav
-         if p_len is None:
-             p_len = x.shape[0] // self.hop_length
-         else:
-             assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
-         time_step = self.hop_length / self.sampling_rate * 1000
-         f0 = (
-             parselmouth.Sound(x, self.sampling_rate)
-             .to_pitch_ac(
-                 time_step=time_step / 1000,
-                 voicing_threshold=0.6,
-                 pitch_floor=self.f0_min,
-                 pitch_ceiling=self.f0_max,
-             )
-             .selected_array["frequency"]
-         )
-
-         pad_size = (p_len - len(f0) + 1) // 2
-         if pad_size > 0 or p_len - len(f0) - pad_size > 0:
-             f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
-         f0, uv = self.interpolate_f0(f0)
-         return f0, uv
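A hedged usage sketch of the predictor above (librosa is used only to load audio here and is an assumption; any 1-D float waveform at the stated sampling rate works):

import librosa
from infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor

wav, sr = librosa.load("voice.wav", sr=44100)  # placeholder file
predictor = PMF0Predictor(hop_length=512, sampling_rate=sr)
f0, uv = predictor.compute_f0_uv(wav)          # one F0 value and voicing flag per hop
print(f0.shape, uv.shape)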
 
spaces/Beasto/Image_Colorizer_Pix2Pix/app.py DELETED
@@ -1,35 +0,0 @@
- import streamlit as st
- import tensorflow as tf
- import numpy as np
- from PIL import Image
- import tensorflow_addons as tfa
- import cv2
- import tensorflow as tf
- from tensorflow.keras.utils import custom_object_scope
-
- # Define a function to create the InstanceNormalization layer
- def create_in():
-     return tfa.layers.InstanceNormalization()
-
-
- def model_out(model_path, img):
-     with custom_object_scope({'InstanceNormalization': create_in}):
-         model = tf.keras.models.load_model(model_path)
-         img = (img - 127.5) / 127.5
-         img = np.expand_dims(img, 0)
-         pred = model.predict(img)
-         pred = np.asarray(pred)
-         return pred[0]
-
- st.title("GrayScale to Colorized Image Pix2Pix")
- day_inp = st.file_uploader("Grayscale image input")
-
- if day_inp is not None:
-     file_bytes = day_inp.read()
-     img = cv2.imdecode(np.frombuffer(file_bytes, np.uint8), cv2.IMREAD_GRAYSCALE)
-     img = cv2.resize(img, (256, 256))
-     img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
-     img = np.array(img)
-     pred = model_out('colorizer.h5', img)
-     st.image(img, caption="Uploaded Image")
-     st.image(((pred + 1) * 127.5).astype(np.uint8), caption="Generated Colorized Painting")
 
spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/tz/tz.py DELETED
@@ -1,1849 +0,0 @@
- # -*- coding: utf-8 -*-
- """
- This module offers timezone implementations subclassing the abstract
- :py:class:`datetime.tzinfo` type. There are classes to handle tzfile format
- files (usually found in :file:`/etc/localtime`, :file:`/usr/share/zoneinfo`,
- etc.), the TZ environment string (in all known formats), given ranges (with
- help from relative deltas), the local machine timezone, fixed offset
- timezones, and the UTC timezone.
- """
- import datetime
- import struct
- import time
- import sys
- import os
- import bisect
- import weakref
- from collections import OrderedDict
-
- import six
- from six import string_types
- from six.moves import _thread
- from ._common import tzname_in_python2, _tzinfo
- from ._common import tzrangebase, enfold
- from ._common import _validate_fromutc_inputs
-
- from ._factories import _TzSingleton, _TzOffsetFactory
- from ._factories import _TzStrFactory
- try:
-     from .win import tzwin, tzwinlocal
- except ImportError:
-     tzwin = tzwinlocal = None
-
- # For warning about rounding tzinfo
- from warnings import warn
-
- ZERO = datetime.timedelta(0)
- EPOCH = datetime.datetime.utcfromtimestamp(0)
- EPOCHORDINAL = EPOCH.toordinal()
-
-
- @six.add_metaclass(_TzSingleton)
- class tzutc(datetime.tzinfo):
-     """
-     This is a tzinfo object that represents the UTC time zone.
-
-     **Examples:**
-
-     .. doctest::
-
-         >>> from datetime import *
-         >>> from dateutil.tz import *
-
-         >>> datetime.now()
-         datetime.datetime(2003, 9, 27, 9, 40, 1, 521290)
-
-         >>> datetime.now(tzutc())
-         datetime.datetime(2003, 9, 27, 12, 40, 12, 156379, tzinfo=tzutc())
-
-         >>> datetime.now(tzutc()).tzname()
-         'UTC'
-
-     .. versionchanged:: 2.7.0
-         ``tzutc()`` is now a singleton, so the result of ``tzutc()`` will
-         always return the same object.
-
-         .. doctest::
-
-             >>> from dateutil.tz import tzutc, UTC
-             >>> tzutc() is tzutc()
-             True
-             >>> tzutc() is UTC
-             True
-     """
-     def utcoffset(self, dt):
-         return ZERO
-
-     def dst(self, dt):
-         return ZERO
-
-     @tzname_in_python2
-     def tzname(self, dt):
-         return "UTC"
-
-     def is_ambiguous(self, dt):
-         """
-         Whether or not the "wall time" of a given datetime is ambiguous in this
-         zone.
-
-         :param dt:
-             A :py:class:`datetime.datetime`, naive or time zone aware.
-
-         :return:
-             Returns ``True`` if ambiguous, ``False`` otherwise.
-
-         .. versionadded:: 2.6.0
-         """
-         return False
-
-     @_validate_fromutc_inputs
-     def fromutc(self, dt):
-         """
-         Fast track version of fromutc() returns the original ``dt`` object for
-         any valid :py:class:`datetime.datetime` object.
-         """
-         return dt
-
-     def __eq__(self, other):
-         if not isinstance(other, (tzutc, tzoffset)):
-             return NotImplemented
-
-         return (isinstance(other, tzutc) or
-                 (isinstance(other, tzoffset) and other._offset == ZERO))
-
-     __hash__ = None
-
-     def __ne__(self, other):
-         return not (self == other)
-
-     def __repr__(self):
-         return "%s()" % self.__class__.__name__
-
-     __reduce__ = object.__reduce__
-
-
- #: Convenience constant providing a :class:`tzutc()` instance
- #:
- #: .. versionadded:: 2.7.0
- UTC = tzutc()
-
-
- @six.add_metaclass(_TzOffsetFactory)
- class tzoffset(datetime.tzinfo):
-     """
-     A simple class for representing a fixed offset from UTC.
-
-     :param name:
-         The timezone name, to be returned when ``tzname()`` is called.
-     :param offset:
-         The time zone offset in seconds, or (since version 2.6.0, represented
-         as a :py:class:`datetime.timedelta` object).
-     """
-     def __init__(self, name, offset):
-         self._name = name
-
-         try:
-             # Allow a timedelta
-             offset = offset.total_seconds()
-         except (TypeError, AttributeError):
-             pass
-
-         self._offset = datetime.timedelta(seconds=_get_supported_offset(offset))
-
-     def utcoffset(self, dt):
-         return self._offset
-
-     def dst(self, dt):
-         return ZERO
-
-     @tzname_in_python2
-     def tzname(self, dt):
-         return self._name
-
-     @_validate_fromutc_inputs
-     def fromutc(self, dt):
-         return dt + self._offset
-
-     def is_ambiguous(self, dt):
-         """
-         Whether or not the "wall time" of a given datetime is ambiguous in this
-         zone.
-
-         :param dt:
-             A :py:class:`datetime.datetime`, naive or time zone aware.
-         :return:
-             Returns ``True`` if ambiguous, ``False`` otherwise.
-
-         .. versionadded:: 2.6.0
-         """
-         return False
-
-     def __eq__(self, other):
-         if not isinstance(other, tzoffset):
-             return NotImplemented
-
-         return self._offset == other._offset
-
-     __hash__ = None
-
-     def __ne__(self, other):
-         return not (self == other)
-
-     def __repr__(self):
-         return "%s(%s, %s)" % (self.__class__.__name__,
-                                repr(self._name),
-                                int(self._offset.total_seconds()))
-
-     __reduce__ = object.__reduce__
-
-
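The two classes above cover the fixed-offset cases; a short doctest-style sketch of both (this is dateutil's real public API):

from datetime import datetime
from dateutil.tz import tzoffset, tzutc

EST = tzoffset("EST", -5 * 3600)           # name plus offset in seconds
dt = datetime(2016, 1, 3, 12, tzinfo=EST)
print(dt.isoformat())                      # 2016-01-03T12:00:00-05:00
print(dt.astimezone(tzutc()))              # 2016-01-03 17:00:00+00:00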
201
- class tzlocal(_tzinfo):
202
- """
203
- A :class:`tzinfo` subclass built around the ``time`` timezone functions.
204
- """
205
- def __init__(self):
206
- super(tzlocal, self).__init__()
207
-
208
- self._std_offset = datetime.timedelta(seconds=-time.timezone)
209
- if time.daylight:
210
- self._dst_offset = datetime.timedelta(seconds=-time.altzone)
211
- else:
212
- self._dst_offset = self._std_offset
213
-
214
- self._dst_saved = self._dst_offset - self._std_offset
215
- self._hasdst = bool(self._dst_saved)
216
- self._tznames = tuple(time.tzname)
217
-
218
- def utcoffset(self, dt):
219
- if dt is None and self._hasdst:
220
- return None
221
-
222
- if self._isdst(dt):
223
- return self._dst_offset
224
- else:
225
- return self._std_offset
226
-
227
- def dst(self, dt):
228
- if dt is None and self._hasdst:
229
- return None
230
-
231
- if self._isdst(dt):
232
- return self._dst_offset - self._std_offset
233
- else:
234
- return ZERO
235
-
236
- @tzname_in_python2
237
- def tzname(self, dt):
238
- return self._tznames[self._isdst(dt)]
239
-
240
- def is_ambiguous(self, dt):
241
- """
242
- Whether or not the "wall time" of a given datetime is ambiguous in this
243
- zone.
244
-
245
- :param dt:
246
- A :py:class:`datetime.datetime`, naive or time zone aware.
247
-
248
-
249
- :return:
250
- Returns ``True`` if ambiguous, ``False`` otherwise.
251
-
252
- .. versionadded:: 2.6.0
253
- """
254
- naive_dst = self._naive_is_dst(dt)
255
- return (not naive_dst and
256
- (naive_dst != self._naive_is_dst(dt - self._dst_saved)))
257
-
258
- def _naive_is_dst(self, dt):
259
- timestamp = _datetime_to_timestamp(dt)
260
- return time.localtime(timestamp + time.timezone).tm_isdst
261
-
262
- def _isdst(self, dt, fold_naive=True):
263
- # We can't use mktime here. It is unstable when deciding if
264
- # the hour near to a change is DST or not.
265
- #
266
- # timestamp = time.mktime((dt.year, dt.month, dt.day, dt.hour,
267
- # dt.minute, dt.second, dt.weekday(), 0, -1))
268
- # return time.localtime(timestamp).tm_isdst
269
- #
270
- # The code above yields the following result:
271
- #
272
- # >>> import tz, datetime
273
- # >>> t = tz.tzlocal()
274
- # >>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname()
275
- # 'BRDT'
276
- # >>> datetime.datetime(2003,2,16,0,tzinfo=t).tzname()
277
- # 'BRST'
278
- # >>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname()
279
- # 'BRST'
280
- # >>> datetime.datetime(2003,2,15,22,tzinfo=t).tzname()
281
- # 'BRDT'
282
- # >>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname()
283
- # 'BRDT'
284
- #
285
- # Here is a more stable implementation:
286
- #
287
- if not self._hasdst:
288
- return False
289
-
290
- # Check for ambiguous times:
291
- dstval = self._naive_is_dst(dt)
292
- fold = getattr(dt, 'fold', None)
293
-
294
- if self.is_ambiguous(dt):
295
- if fold is not None:
296
- return not self._fold(dt)
297
- else:
298
- return True
299
-
300
- return dstval
301
-
302
- def __eq__(self, other):
303
- if isinstance(other, tzlocal):
304
- return (self._std_offset == other._std_offset and
305
- self._dst_offset == other._dst_offset)
306
- elif isinstance(other, tzutc):
307
- return (not self._hasdst and
308
- self._tznames[0] in {'UTC', 'GMT'} and
309
- self._std_offset == ZERO)
310
- elif isinstance(other, tzoffset):
311
- return (not self._hasdst and
312
- self._tznames[0] == other._name and
313
- self._std_offset == other._offset)
314
- else:
315
- return NotImplemented
316
-
317
- __hash__ = None
318
-
319
- def __ne__(self, other):
320
- return not (self == other)
321
-
322
- def __repr__(self):
323
- return "%s()" % self.__class__.__name__
324
-
325
- __reduce__ = object.__reduce__
326
-
327
-
328
- class _ttinfo(object):
329
- __slots__ = ["offset", "delta", "isdst", "abbr",
330
- "isstd", "isgmt", "dstoffset"]
331
-
332
- def __init__(self):
333
- for attr in self.__slots__:
334
- setattr(self, attr, None)
335
-
336
- def __repr__(self):
337
- l = []
338
- for attr in self.__slots__:
339
- value = getattr(self, attr)
340
- if value is not None:
341
- l.append("%s=%s" % (attr, repr(value)))
342
- return "%s(%s)" % (self.__class__.__name__, ", ".join(l))
343
-
344
- def __eq__(self, other):
345
- if not isinstance(other, _ttinfo):
346
- return NotImplemented
347
-
348
- return (self.offset == other.offset and
349
- self.delta == other.delta and
350
- self.isdst == other.isdst and
351
- self.abbr == other.abbr and
352
- self.isstd == other.isstd and
353
- self.isgmt == other.isgmt and
354
- self.dstoffset == other.dstoffset)
355
-
356
- __hash__ = None
357
-
358
- def __ne__(self, other):
359
- return not (self == other)
360
-
361
- def __getstate__(self):
362
- state = {}
363
- for name in self.__slots__:
364
- state[name] = getattr(self, name, None)
365
- return state
366
-
367
- def __setstate__(self, state):
368
- for name in self.__slots__:
369
- if name in state:
370
- setattr(self, name, state[name])
371
-
372
-
373
- class _tzfile(object):
374
- """
375
- Lightweight class for holding the relevant transition and time zone
376
- information read from binary tzfiles.
377
- """
378
- attrs = ['trans_list', 'trans_list_utc', 'trans_idx', 'ttinfo_list',
379
- 'ttinfo_std', 'ttinfo_dst', 'ttinfo_before', 'ttinfo_first']
380
-
381
- def __init__(self, **kwargs):
382
- for attr in self.attrs:
383
- setattr(self, attr, kwargs.get(attr, None))
384
-
385
-
386
- class tzfile(_tzinfo):
387
- """
388
- This is a ``tzinfo`` subclass that allows one to use the ``tzfile(5)``
389
- format timezone files to extract current and historical zone information.
390
-
391
- :param fileobj:
392
- This can be an opened file stream or a file name that the time zone
393
- information can be read from.
394
-
395
- :param filename:
396
- This is an optional parameter specifying the source of the time zone
397
- information in the event that ``fileobj`` is a file object. If omitted
398
- and ``fileobj`` is a file stream, this parameter will be set either to
399
- ``fileobj``'s ``name`` attribute or to ``repr(fileobj)``.
400
-
401
- See `Sources for Time Zone and Daylight Saving Time Data
402
- <https://data.iana.org/time-zones/tz-link.html>`_ for more information.
403
- Time zone files can be compiled from the `IANA Time Zone database files
404
- <https://www.iana.org/time-zones>`_ with the `zic time zone compiler
405
- <https://www.freebsd.org/cgi/man.cgi?query=zic&sektion=8>`_
406
-
407
- .. note::
408
-
409
- Only construct a ``tzfile`` directly if you have a specific timezone
410
- file on disk that you want to read into a Python ``tzinfo`` object.
411
- If you want to get a ``tzfile`` representing a specific IANA zone
412
- (e.g. ``'America/New_York'``), you should call
413
- :func:`dateutil.tz.gettz` with the zone identifier.
414
-
415
-
416
- **Examples:**
417
-
418
- Using the US Eastern time zone as an example, we can see that a ``tzfile``
419
- provides time zone information for the standard Daylight Saving offsets:
420
-
421
- .. testsetup:: tzfile
422
-
423
- from dateutil.tz import gettz
424
- from datetime import datetime
425
-
426
- .. doctest:: tzfile
427
-
428
- >>> NYC = gettz('America/New_York')
429
- >>> NYC
430
- tzfile('/usr/share/zoneinfo/America/New_York')
431
-
432
- >>> print(datetime(2016, 1, 3, tzinfo=NYC)) # EST
433
- 2016-01-03 00:00:00-05:00
434
-
435
- >>> print(datetime(2016, 7, 7, tzinfo=NYC)) # EDT
436
- 2016-07-07 00:00:00-04:00
437
-
438
-
439
- The ``tzfile`` structure contains a full history of the time zone,
440
- so historical dates will also have the right offsets. For example, before
441
- the adoption of the UTC standards, New York used local solar mean time:
442
-
443
- .. doctest:: tzfile
444
-
445
- >>> print(datetime(1901, 4, 12, tzinfo=NYC)) # LMT
446
- 1901-04-12 00:00:00-04:56
447
-
448
- And during World War II, New York was on "Eastern War Time", which was a
449
- state of permanent daylight saving time:
450
-
451
- .. doctest:: tzfile
452
-
453
- >>> print(datetime(1944, 2, 7, tzinfo=NYC)) # EWT
454
- 1944-02-07 00:00:00-04:00
455
-
456
- """
457
-
458
- def __init__(self, fileobj, filename=None):
459
- super(tzfile, self).__init__()
460
-
461
- file_opened_here = False
462
- if isinstance(fileobj, string_types):
463
- self._filename = fileobj
464
- fileobj = open(fileobj, 'rb')
465
- file_opened_here = True
466
- elif filename is not None:
467
- self._filename = filename
468
- elif hasattr(fileobj, "name"):
469
- self._filename = fileobj.name
470
- else:
471
- self._filename = repr(fileobj)
472
-
473
- if fileobj is not None:
474
- if not file_opened_here:
475
- fileobj = _nullcontext(fileobj)
476
-
477
- with fileobj as file_stream:
478
- tzobj = self._read_tzfile(file_stream)
479
-
480
- self._set_tzdata(tzobj)
481
-
482
- def _set_tzdata(self, tzobj):
483
- """ Set the time zone data of this object from a _tzfile object """
484
- # Copy the relevant attributes over as private attributes
485
- for attr in _tzfile.attrs:
486
- setattr(self, '_' + attr, getattr(tzobj, attr))
487
-
488
- def _read_tzfile(self, fileobj):
489
- out = _tzfile()
490
-
491
- # From tzfile(5):
492
- #
493
- # The time zone information files used by tzset(3)
494
- # begin with the magic characters "TZif" to identify
495
- # them as time zone information files, followed by
496
- # sixteen bytes reserved for future use, followed by
497
- # six four-byte values of type long, written in a
498
- # ``standard'' byte order (the high-order byte
499
- # of the value is written first).
500
- if fileobj.read(4).decode() != "TZif":
501
- raise ValueError("magic not found")
502
-
503
- fileobj.read(16)
504
-
505
- (
506
- # The number of UTC/local indicators stored in the file.
507
- ttisgmtcnt,
508
-
509
- # The number of standard/wall indicators stored in the file.
510
- ttisstdcnt,
511
-
512
- # The number of leap seconds for which data is
513
- # stored in the file.
514
- leapcnt,
515
-
516
- # The number of "transition times" for which data
517
- # is stored in the file.
518
- timecnt,
519
-
520
- # The number of "local time types" for which data
521
- # is stored in the file (must not be zero).
522
- typecnt,
523
-
524
- # The number of characters of "time zone
525
- # abbreviation strings" stored in the file.
526
- charcnt,
527
-
528
- ) = struct.unpack(">6l", fileobj.read(24))
529
-
530
- # The above header is followed by tzh_timecnt four-byte
531
- # values of type long, sorted in ascending order.
532
- # These values are written in ``standard'' byte order.
533
- # Each is used as a transition time (as returned by
534
- # time(2)) at which the rules for computing local time
535
- # change.
536
-
537
- if timecnt:
538
- out.trans_list_utc = list(struct.unpack(">%dl" % timecnt,
539
- fileobj.read(timecnt*4)))
540
- else:
541
- out.trans_list_utc = []
542
-
543
- # Next come tzh_timecnt one-byte values of type unsigned
544
- # char; each one tells which of the different types of
545
- # ``local time'' types described in the file is associated
546
- # with the same-indexed transition time. These values
547
- # serve as indices into an array of ttinfo structures that
548
- # appears next in the file.
549
-
550
- if timecnt:
551
- out.trans_idx = struct.unpack(">%dB" % timecnt,
552
- fileobj.read(timecnt))
553
- else:
554
- out.trans_idx = []
555
-
556
- # Each ttinfo structure is written as a four-byte value
557
- # for tt_gmtoff of type long, in a standard byte
558
- # order, followed by a one-byte value for tt_isdst
559
- # and a one-byte value for tt_abbrind. In each
560
- # structure, tt_gmtoff gives the number of
561
- # seconds to be added to UTC, tt_isdst tells whether
562
- # tm_isdst should be set by localtime(3), and
563
- # tt_abbrind serves as an index into the array of
564
- # time zone abbreviation characters that follow the
565
- # ttinfo structure(s) in the file.
566
-
567
- ttinfo = []
568
-
569
- for i in range(typecnt):
570
- ttinfo.append(struct.unpack(">lbb", fileobj.read(6)))
571
-
572
- abbr = fileobj.read(charcnt).decode()
573
-
574
- # Then there are tzh_leapcnt pairs of four-byte
575
- # values, written in standard byte order; the
576
- # first value of each pair gives the time (as
577
- # returned by time(2)) at which a leap second
578
- # occurs; the second gives the total number of
579
- # leap seconds to be applied after the given time.
580
- # The pairs of values are sorted in ascending order
581
- # by time.
582
-
583
- # Not used, for now (but seek to the correct file position)
584
- if leapcnt:
585
- fileobj.seek(leapcnt * 8, os.SEEK_CUR)
586
-
587
- # Then there are tzh_ttisstdcnt standard/wall
588
- # indicators, each stored as a one-byte value;
589
- # they tell whether the transition times associated
590
- # with local time types were specified as standard
591
- # time or wall clock time, and are used when
592
- # a time zone file is used in handling POSIX-style
593
- # time zone environment variables.
594
-
595
- if ttisstdcnt:
596
- isstd = struct.unpack(">%db" % ttisstdcnt,
597
- fileobj.read(ttisstdcnt))
598
-
599
- # Finally, there are tzh_ttisgmtcnt UTC/local
600
- # indicators, each stored as a one-byte value;
601
- # they tell whether the transition times associated
602
- # with local time types were specified as UTC or
603
- # local time, and are used when a time zone file
604
- # is used in handling POSIX-style time zone
605
- # environment variables.
606
-
607
- if ttisgmtcnt:
608
- isgmt = struct.unpack(">%db" % ttisgmtcnt,
609
- fileobj.read(ttisgmtcnt))
610
-
611
- # Build ttinfo list
612
- out.ttinfo_list = []
613
- for i in range(typecnt):
614
- gmtoff, isdst, abbrind = ttinfo[i]
615
- gmtoff = _get_supported_offset(gmtoff)
616
- tti = _ttinfo()
617
- tti.offset = gmtoff
618
- tti.dstoffset = datetime.timedelta(0)
619
- tti.delta = datetime.timedelta(seconds=gmtoff)
620
- tti.isdst = isdst
621
- tti.abbr = abbr[abbrind:abbr.find('\x00', abbrind)]
622
- tti.isstd = (ttisstdcnt > i and isstd[i] != 0)
623
- tti.isgmt = (ttisgmtcnt > i and isgmt[i] != 0)
624
- out.ttinfo_list.append(tti)
625
-
626
- # Replace ttinfo indexes with ttinfo objects.
627
- out.trans_idx = [out.ttinfo_list[idx] for idx in out.trans_idx]
628
-
629
- # Set standard, dst, and before ttinfos. before will be
630
- # used when a given time is before any transitions,
631
- # and will be set to the first non-dst ttinfo, or to
632
- # the first dst, if all of them are dst.
633
- out.ttinfo_std = None
634
- out.ttinfo_dst = None
635
- out.ttinfo_before = None
636
- if out.ttinfo_list:
637
- if not out.trans_list_utc:
638
- out.ttinfo_std = out.ttinfo_first = out.ttinfo_list[0]
639
- else:
640
- for i in range(timecnt-1, -1, -1):
641
- tti = out.trans_idx[i]
642
- if not out.ttinfo_std and not tti.isdst:
643
- out.ttinfo_std = tti
644
- elif not out.ttinfo_dst and tti.isdst:
645
- out.ttinfo_dst = tti
646
-
647
- if out.ttinfo_std and out.ttinfo_dst:
648
- break
649
- else:
650
- if out.ttinfo_dst and not out.ttinfo_std:
651
- out.ttinfo_std = out.ttinfo_dst
652
-
653
- for tti in out.ttinfo_list:
654
- if not tti.isdst:
655
- out.ttinfo_before = tti
656
- break
657
- else:
658
- out.ttinfo_before = out.ttinfo_list[0]
659
-
660
- # Now fix transition times to become relative to wall time.
661
- #
662
- # I'm not sure about this. In my tests, the tz source file
663
- # is setup to wall time, and in the binary file isstd and
664
- # isgmt are off, so it should be in wall time. OTOH, it's
665
- # always in gmt time. Let me know if you have comments
666
- # about this.
667
- lastdst = None
668
- lastoffset = None
669
- lastdstoffset = None
670
- lastbaseoffset = None
671
- out.trans_list = []
672
-
673
- for i, tti in enumerate(out.trans_idx):
674
- offset = tti.offset
675
- dstoffset = 0
676
-
677
- if lastdst is not None:
678
- if tti.isdst:
679
- if not lastdst:
680
- dstoffset = offset - lastoffset
681
-
682
- if not dstoffset and lastdstoffset:
683
- dstoffset = lastdstoffset
684
-
685
- tti.dstoffset = datetime.timedelta(seconds=dstoffset)
686
- lastdstoffset = dstoffset
687
-
688
- # If a time zone changes its base offset during a DST transition,
689
- # then you need to adjust by the previous base offset to get the
690
- # transition time in local time. Otherwise you use the current
691
- # base offset. Ideally, I would have some mathematical proof of
692
- # why this is true, but I haven't really thought about it enough.
693
- baseoffset = offset - dstoffset
694
- adjustment = baseoffset
695
- if (lastbaseoffset is not None and baseoffset != lastbaseoffset
696
- and tti.isdst != lastdst):
697
- # The base DST has changed
698
- adjustment = lastbaseoffset
699
-
700
- lastdst = tti.isdst
701
- lastoffset = offset
702
- lastbaseoffset = baseoffset
703
-
704
- out.trans_list.append(out.trans_list_utc[i] + adjustment)
705
-
706
- out.trans_idx = tuple(out.trans_idx)
707
- out.trans_list = tuple(out.trans_list)
708
- out.trans_list_utc = tuple(out.trans_list_utc)
709
-
710
- return out
711
-
712
- def _find_last_transition(self, dt, in_utc=False):
713
- # If there's no list, there are no transitions to find
714
- if not self._trans_list:
715
- return None
716
-
717
- timestamp = _datetime_to_timestamp(dt)
718
-
719
- # Find where the timestamp fits in the transition list - if the
720
- # timestamp is a transition time, it's part of the "after" period.
721
- trans_list = self._trans_list_utc if in_utc else self._trans_list
722
- idx = bisect.bisect_right(trans_list, timestamp)
723
-
724
- # We want to know when the previous transition was, so subtract off 1
725
- return idx - 1
726
-
727
- def _get_ttinfo(self, idx):
728
- # For no list or after the last transition, default to _ttinfo_std
729
- if idx is None or (idx + 1) >= len(self._trans_list):
730
- return self._ttinfo_std
731
-
732
- # If there is a list and the time is before it, return _ttinfo_before
733
- if idx < 0:
734
- return self._ttinfo_before
735
-
736
- return self._trans_idx[idx]
737
-
738
- def _find_ttinfo(self, dt):
739
- idx = self._resolve_ambiguous_time(dt)
740
-
741
- return self._get_ttinfo(idx)
742
-
743
- def fromutc(self, dt):
744
- """
745
- The ``tzfile`` implementation of :py:func:`datetime.tzinfo.fromutc`.
746
-
747
- :param dt:
748
- A :py:class:`datetime.datetime` object.
749
-
750
- :raises TypeError:
751
- Raised if ``dt`` is not a :py:class:`datetime.datetime` object.
752
-
753
- :raises ValueError:
754
- Raised if this is called with a ``dt`` which does not have this
755
- ``tzinfo`` attached.
756
-
757
- :return:
758
- Returns a :py:class:`datetime.datetime` object representing the
759
- wall time in ``self``'s time zone.
760
- """
761
- # These isinstance checks are in datetime.tzinfo, so we'll preserve
762
- # them, even if we don't care about duck typing.
763
- if not isinstance(dt, datetime.datetime):
764
- raise TypeError("fromutc() requires a datetime argument")
765
-
766
- if dt.tzinfo is not self:
767
- raise ValueError("dt.tzinfo is not self")
768
-
769
- # First treat UTC as wall time and get the transition we're in.
770
- idx = self._find_last_transition(dt, in_utc=True)
771
- tti = self._get_ttinfo(idx)
772
-
773
- dt_out = dt + datetime.timedelta(seconds=tti.offset)
774
-
775
- fold = self.is_ambiguous(dt_out, idx=idx)
776
-
777
- return enfold(dt_out, fold=int(fold))
778
-
779
- def is_ambiguous(self, dt, idx=None):
780
- """
781
- Whether or not the "wall time" of a given datetime is ambiguous in this
782
- zone.
783
-
784
- :param dt:
785
- A :py:class:`datetime.datetime`, naive or time zone aware.
786
-
787
-
788
- :return:
789
- Returns ``True`` if ambiguous, ``False`` otherwise.
790
-
791
- .. versionadded:: 2.6.0
792
- """
793
- if idx is None:
794
- idx = self._find_last_transition(dt)
795
-
796
- # Calculate the difference in offsets from current to previous
797
- timestamp = _datetime_to_timestamp(dt)
798
- tti = self._get_ttinfo(idx)
799
-
800
- if idx is None or idx <= 0:
801
- return False
802
-
803
- od = self._get_ttinfo(idx - 1).offset - tti.offset
804
- tt = self._trans_list[idx] # Transition time
805
-
806
- return timestamp < tt + od
807
-
808
- def _resolve_ambiguous_time(self, dt):
809
- idx = self._find_last_transition(dt)
810
-
811
- # If we have no transitions, return the index
812
- _fold = self._fold(dt)
813
- if idx is None or idx == 0:
814
- return idx
815
-
816
- # If it's ambiguous and we're in a fold, shift to a different index.
817
- idx_offset = int(not _fold and self.is_ambiguous(dt, idx))
818
-
819
- return idx - idx_offset
820
-
821
- def utcoffset(self, dt):
822
- if dt is None:
823
- return None
824
-
825
- if not self._ttinfo_std:
826
- return ZERO
827
-
828
- return self._find_ttinfo(dt).delta
829
-
830
- def dst(self, dt):
831
- if dt is None:
832
- return None
833
-
834
- if not self._ttinfo_dst:
835
- return ZERO
836
-
837
- tti = self._find_ttinfo(dt)
838
-
839
- if not tti.isdst:
840
- return ZERO
841
-
842
- # The documentation says that utcoffset()-dst() must
843
- # be constant for every dt.
844
- return tti.dstoffset
845
-
846
- @tzname_in_python2
847
- def tzname(self, dt):
848
- if not self._ttinfo_std or dt is None:
849
- return None
850
- return self._find_ttinfo(dt).abbr
851
-
852
- def __eq__(self, other):
853
- if not isinstance(other, tzfile):
854
- return NotImplemented
855
- return (self._trans_list == other._trans_list and
856
- self._trans_idx == other._trans_idx and
857
- self._ttinfo_list == other._ttinfo_list)
858
-
859
- __hash__ = None
860
-
861
- def __ne__(self, other):
862
- return not (self == other)
863
-
864
- def __repr__(self):
865
- return "%s(%s)" % (self.__class__.__name__, repr(self._filename))
866
-
867
- def __reduce__(self):
868
- return self.__reduce_ex__(None)
869
-
870
- def __reduce_ex__(self, protocol):
871
- return (self.__class__, (None, self._filename), self.__dict__)
872
-
873
-
874
- class tzrange(tzrangebase):
875
- """
876
- The ``tzrange`` object is a time zone specified by a set of offsets and
877
- abbreviations, equivalent to the way the ``TZ`` variable can be specified
878
- in POSIX-like systems, but using Python delta objects to specify DST
879
- start, end and offsets.
880
-
881
- :param stdabbr:
882
- The abbreviation for standard time (e.g. ``'EST'``).
883
-
884
- :param stdoffset:
885
- An integer or :class:`datetime.timedelta` object or equivalent
886
- specifying the base offset from UTC.
887
-
888
- If unspecified, +00:00 is used.
889
-
890
- :param dstabbr:
891
- The abbreviation for DST / "Summer" time (e.g. ``'EDT'``).
892
-
893
- If specified, with no other DST information, DST is assumed to occur
894
- and the default behavior of ``dstoffset``, ``start`` and ``end`` is
895
- used. If unspecified and no other DST information is specified, it
896
- is assumed that this zone has no DST.
897
-
898
- If this is unspecified and other DST information *is* specified,
899
- DST occurs in the zone but the time zone abbreviation is left
900
- unchanged.
901
-
902
- :param dstoffset:
903
- An integer or :class:`datetime.timedelta` object or equivalent
904
- specifying the UTC offset during DST. If unspecified and any other DST
905
- information is specified, it is assumed to be the STD offset +1 hour.
906
-
907
- :param start:
908
- A :class:`relativedelta.relativedelta` object or equivalent specifying
909
- the time and time of year that daylight savings time starts. To
910
- specify, for example, that DST starts at 2AM on the 2nd Sunday in
911
- March, pass:
912
-
913
- ``relativedelta(hours=2, month=3, day=1, weekday=SU(+2))``
914
-
915
- If unspecified and any other DST information is specified, the default
916
- value is 2 AM on the first Sunday in April.
917
-
918
- :param end:
919
- A :class:`relativedelta.relativedelta` object or equivalent
920
- representing the time and time of year that daylight savings time
921
- ends, with the same specification method as in ``start``. One note is
922
- that this should point to the first time in the *standard* zone, so if
923
- a transition occurs at 2AM in the DST zone and the clocks are set back
924
- 1 hour to 1AM, set the ``hours`` parameter to +1.
925
-
926
-
927
- **Examples:**
928
-
929
- .. testsetup:: tzrange
930
-
931
- from dateutil.tz import tzrange, tzstr
932
-
933
- .. doctest:: tzrange
934
-
935
- >>> tzstr('EST5EDT') == tzrange("EST", -18000, "EDT")
936
- True
937
-
938
- >>> from dateutil.relativedelta import *
939
- >>> range1 = tzrange("EST", -18000, "EDT")
940
- >>> range2 = tzrange("EST", -18000, "EDT", -14400,
941
- ... relativedelta(hours=+2, month=4, day=1,
942
- ... weekday=SU(+1)),
943
- ... relativedelta(hours=+1, month=10, day=31,
944
- ... weekday=SU(-1)))
945
- >>> tzstr('EST5EDT') == range1 == range2
946
- True
947
-
948
- """
949
- def __init__(self, stdabbr, stdoffset=None,
950
- dstabbr=None, dstoffset=None,
951
- start=None, end=None):
952
-
953
- global relativedelta
954
- from dateutil import relativedelta
955
-
956
- self._std_abbr = stdabbr
957
- self._dst_abbr = dstabbr
958
-
959
- try:
960
- stdoffset = stdoffset.total_seconds()
961
- except (TypeError, AttributeError):
962
- pass
963
-
964
- try:
965
- dstoffset = dstoffset.total_seconds()
966
- except (TypeError, AttributeError):
967
- pass
968
-
969
- if stdoffset is not None:
970
- self._std_offset = datetime.timedelta(seconds=stdoffset)
971
- else:
972
- self._std_offset = ZERO
973
-
974
- if dstoffset is not None:
975
- self._dst_offset = datetime.timedelta(seconds=dstoffset)
976
- elif dstabbr and stdoffset is not None:
977
- self._dst_offset = self._std_offset + datetime.timedelta(hours=+1)
978
- else:
979
- self._dst_offset = ZERO
980
-
981
- if dstabbr and start is None:
982
- self._start_delta = relativedelta.relativedelta(
983
- hours=+2, month=4, day=1, weekday=relativedelta.SU(+1))
984
- else:
985
- self._start_delta = start
986
-
987
- if dstabbr and end is None:
988
- self._end_delta = relativedelta.relativedelta(
989
- hours=+1, month=10, day=31, weekday=relativedelta.SU(-1))
990
- else:
991
- self._end_delta = end
992
-
993
- self._dst_base_offset_ = self._dst_offset - self._std_offset
994
- self.hasdst = bool(self._start_delta)
995
-
996
- def transitions(self, year):
997
- """
998
- For a given year, get the DST on and off transition times, expressed
999
- always on the standard time side. For zones with no transitions, this
1000
- function returns ``None``.
1001
-
1002
- :param year:
1003
- The year whose transitions you would like to query.
1004
-
1005
- :return:
1006
- Returns a :class:`tuple` of :class:`datetime.datetime` objects,
1007
- ``(dston, dstoff)`` for zones with an annual DST transition, or
1008
- ``None`` for fixed offset zones.
1009
- """
1010
- if not self.hasdst:
1011
- return None
1012
-
1013
- base_year = datetime.datetime(year, 1, 1)
1014
-
1015
- start = base_year + self._start_delta
1016
- end = base_year + self._end_delta
1017
-
1018
- return (start, end)
1019
-
1020
- def __eq__(self, other):
1021
- if not isinstance(other, tzrange):
1022
- return NotImplemented
1023
-
1024
- return (self._std_abbr == other._std_abbr and
1025
- self._dst_abbr == other._dst_abbr and
1026
- self._std_offset == other._std_offset and
1027
- self._dst_offset == other._dst_offset and
1028
- self._start_delta == other._start_delta and
1029
- self._end_delta == other._end_delta)
1030
-
1031
- @property
1032
- def _dst_base_offset(self):
1033
- return self._dst_base_offset_
1034
-
1035
-
1036
- @six.add_metaclass(_TzStrFactory)
1037
- class tzstr(tzrange):
1038
- """
1039
- ``tzstr`` objects are time zone objects specified by a time-zone string as
1040
- it would be passed to a ``TZ`` variable on POSIX-style systems (see
1041
- the `GNU C Library: TZ Variable`_ for more details).
1042
-
1043
- There is one notable exception, which is that POSIX-style time zones use an
1044
- inverted offset format, so normally ``GMT+3`` would be parsed as an offset
1045
- 3 hours *behind* GMT. The ``tzstr`` time zone object will parse this as an
1046
- offset 3 hours *ahead* of GMT. If you would like to maintain the POSIX
1047
- behavior, pass a ``True`` value to ``posix_offset``.
1048
-
1049
- The :class:`tzrange` object provides the same functionality, but is
1050
- specified using :class:`relativedelta.relativedelta` objects rather than
1051
- strings.
1052
-
1053
- :param s:
1054
- A time zone string in ``TZ`` variable format. This can be a
1055
- :class:`bytes` (2.x: :class:`str`), :class:`str` (2.x:
1056
- :class:`unicode`) or a stream emitting unicode characters
1057
- (e.g. :class:`StringIO`).
1058
-
1059
- :param posix_offset:
1060
- Optional. If set to ``True``, interpret strings such as ``GMT+3`` or
1061
- ``UTC+3`` as being 3 hours *behind* UTC rather than ahead, per the
1062
- POSIX standard.
1063
-
1064
- .. caution::
1065
-
1066
- Prior to version 2.7.0, this function also supported time zones
1067
- in the format:
1068
-
1069
- * ``EST5EDT,4,0,6,7200,10,0,26,7200,3600``
1070
- * ``EST5EDT,4,1,0,7200,10,-1,0,7200,3600``
1071
-
1072
- This format is non-standard and has been deprecated; this function
1073
- will raise a :class:`DeprecatedTZFormatWarning` until
1074
- support is removed in a future version.
1075
-
1076
- .. _`GNU C Library: TZ Variable`:
1077
- https://www.gnu.org/software/libc/manual/html_node/TZ-Variable.html
1078
- """
1079
- def __init__(self, s, posix_offset=False):
1080
- global parser
1081
- from dateutil.parser import _parser as parser
1082
-
1083
- self._s = s
1084
-
1085
- res = parser._parsetz(s)
1086
- if res is None or res.any_unused_tokens:
1087
- raise ValueError("unknown string format")
1088
-
1089
- # Here we break the compatibility with the TZ variable handling.
1090
- # GMT-3 actually *means* the timezone -3.
1091
- if res.stdabbr in ("GMT", "UTC") and not posix_offset:
1092
- res.stdoffset *= -1
1093
-
1094
- # We must initialize it first, since _delta() needs
1095
- # _std_offset and _dst_offset set. Use False in start/end
1096
- # to avoid building it two times.
1097
- tzrange.__init__(self, res.stdabbr, res.stdoffset,
1098
- res.dstabbr, res.dstoffset,
1099
- start=False, end=False)
1100
-
1101
- if not res.dstabbr:
1102
- self._start_delta = None
1103
- self._end_delta = None
1104
- else:
1105
- self._start_delta = self._delta(res.start)
1106
- if self._start_delta:
1107
- self._end_delta = self._delta(res.end, isend=1)
1108
-
1109
- self.hasdst = bool(self._start_delta)
1110
-
1111
- def _delta(self, x, isend=0):
1112
- from dateutil import relativedelta
1113
- kwargs = {}
1114
- if x.month is not None:
1115
- kwargs["month"] = x.month
1116
- if x.weekday is not None:
1117
- kwargs["weekday"] = relativedelta.weekday(x.weekday, x.week)
1118
- if x.week > 0:
1119
- kwargs["day"] = 1
1120
- else:
1121
- kwargs["day"] = 31
1122
- elif x.day:
1123
- kwargs["day"] = x.day
1124
- elif x.yday is not None:
1125
- kwargs["yearday"] = x.yday
1126
- elif x.jyday is not None:
1127
- kwargs["nlyearday"] = x.jyday
1128
- if not kwargs:
1129
- # Default is to start on the first Sunday of April, and end
1130
- # on the last Sunday of October.
1131
- if not isend:
1132
- kwargs["month"] = 4
1133
- kwargs["day"] = 1
1134
- kwargs["weekday"] = relativedelta.SU(+1)
1135
- else:
1136
- kwargs["month"] = 10
1137
- kwargs["day"] = 31
1138
- kwargs["weekday"] = relativedelta.SU(-1)
1139
- if x.time is not None:
1140
- kwargs["seconds"] = x.time
1141
- else:
1142
- # Default is 2AM.
1143
- kwargs["seconds"] = 7200
1144
- if isend:
1145
- # Convert to standard time, to follow the documented way
1146
- # of working with the extra hour. See the documentation
1147
- # of the tzinfo class.
1148
- delta = self._dst_offset - self._std_offset
1149
- kwargs["seconds"] -= delta.seconds + delta.days * 86400
1150
- return relativedelta.relativedelta(**kwargs)
1151
-
1152
- def __repr__(self):
1153
- return "%s(%s)" % (self.__class__.__name__, repr(self._s))
1154
-
1155
-
1156
- class _tzicalvtzcomp(object):
1157
- def __init__(self, tzoffsetfrom, tzoffsetto, isdst,
1158
- tzname=None, rrule=None):
1159
- self.tzoffsetfrom = datetime.timedelta(seconds=tzoffsetfrom)
1160
- self.tzoffsetto = datetime.timedelta(seconds=tzoffsetto)
1161
- self.tzoffsetdiff = self.tzoffsetto - self.tzoffsetfrom
1162
- self.isdst = isdst
1163
- self.tzname = tzname
1164
- self.rrule = rrule
1165
-
1166
-
1167
- class _tzicalvtz(_tzinfo):
1168
- def __init__(self, tzid, comps=[]):
1169
- super(_tzicalvtz, self).__init__()
1170
-
1171
- self._tzid = tzid
1172
- self._comps = comps
1173
- self._cachedate = []
1174
- self._cachecomp = []
1175
- self._cache_lock = _thread.allocate_lock()
1176
-
1177
- def _find_comp(self, dt):
1178
- if len(self._comps) == 1:
1179
- return self._comps[0]
1180
-
1181
- dt = dt.replace(tzinfo=None)
1182
-
1183
- try:
1184
- with self._cache_lock:
1185
- return self._cachecomp[self._cachedate.index(
1186
- (dt, self._fold(dt)))]
1187
- except ValueError:
1188
- pass
1189
-
1190
- lastcompdt = None
1191
- lastcomp = None
1192
-
1193
- for comp in self._comps:
1194
- compdt = self._find_compdt(comp, dt)
1195
-
1196
- if compdt and (not lastcompdt or lastcompdt < compdt):
1197
- lastcompdt = compdt
1198
- lastcomp = comp
1199
-
1200
- if not lastcomp:
1201
- # RFC says nothing about what to do when a given
1202
- # time is before the first onset date. We'll look for the
1203
- # first standard component, or the first component, if
1204
- # none is found.
1205
- for comp in self._comps:
1206
- if not comp.isdst:
1207
- lastcomp = comp
1208
- break
1209
- else:
1210
- lastcomp = self._comps[0]
1211
-
1212
- with self._cache_lock:
1213
- self._cachedate.insert(0, (dt, self._fold(dt)))
1214
- self._cachecomp.insert(0, lastcomp)
1215
-
1216
- if len(self._cachedate) > 10:
1217
- self._cachedate.pop()
1218
- self._cachecomp.pop()
1219
-
1220
- return lastcomp
1221
-
1222
- def _find_compdt(self, comp, dt):
1223
- if comp.tzoffsetdiff < ZERO and self._fold(dt):
1224
- dt -= comp.tzoffsetdiff
1225
-
1226
- compdt = comp.rrule.before(dt, inc=True)
1227
-
1228
- return compdt
1229
-
1230
- def utcoffset(self, dt):
1231
- if dt is None:
1232
- return None
1233
-
1234
- return self._find_comp(dt).tzoffsetto
1235
-
1236
- def dst(self, dt):
1237
- comp = self._find_comp(dt)
1238
- if comp.isdst:
1239
- return comp.tzoffsetdiff
1240
- else:
1241
- return ZERO
1242
-
1243
- @tzname_in_python2
1244
- def tzname(self, dt):
1245
- return self._find_comp(dt).tzname
1246
-
1247
- def __repr__(self):
1248
- return "<tzicalvtz %s>" % repr(self._tzid)
1249
-
1250
- __reduce__ = object.__reduce__
1251
-
1252
-
1253
- class tzical(object):
1254
- """
1255
- This object is designed to parse an iCalendar-style ``VTIMEZONE`` structure
1256
- as set out in `RFC 5545`_ Section 4.6.5 into one or more `tzinfo` objects.
1257
-
1258
- :param `fileobj`:
1259
- A file or stream in iCalendar format, which should be UTF-8 encoded
1260
- with CRLF endings.
1261
-
1262
- .. _`RFC 5545`: https://tools.ietf.org/html/rfc5545
1263
- """
1264
- def __init__(self, fileobj):
1265
- global rrule
1266
- from dateutil import rrule
1267
-
1268
- if isinstance(fileobj, string_types):
1269
- self._s = fileobj
1270
- # ical should be encoded in UTF-8 with CRLF
1271
- fileobj = open(fileobj, 'r')
1272
- else:
1273
- self._s = getattr(fileobj, 'name', repr(fileobj))
1274
- fileobj = _nullcontext(fileobj)
1275
-
1276
- self._vtz = {}
1277
-
1278
- with fileobj as fobj:
1279
- self._parse_rfc(fobj.read())
1280
-
1281
- def keys(self):
1282
- """
1283
- Retrieves the available time zones as a list.
1284
- """
1285
- return list(self._vtz.keys())
1286
-
1287
- def get(self, tzid=None):
1288
- """
1289
- Retrieve a :py:class:`datetime.tzinfo` object by its ``tzid``.
1290
-
1291
- :param tzid:
1292
- If there is exactly one time zone available, omitting ``tzid``
1293
- or passing :py:const:`None` value returns it. Otherwise a valid
1294
- key (which can be retrieved from :func:`keys`) is required.
1295
-
1296
- :raises ValueError:
1297
- Raised if ``tzid`` is not specified but there are either more
1298
- or fewer than 1 zone defined.
1299
-
1300
- :returns:
1301
- Returns either a :py:class:`datetime.tzinfo` object representing
1302
- the relevant time zone or :py:const:`None` if the ``tzid`` was
1303
- not found.
1304
- """
1305
- if tzid is None:
1306
- if len(self._vtz) == 0:
1307
- raise ValueError("no timezones defined")
1308
- elif len(self._vtz) > 1:
1309
- raise ValueError("more than one timezone available")
1310
- tzid = next(iter(self._vtz))
1311
-
1312
- return self._vtz.get(tzid)
1313
-
1314
- def _parse_offset(self, s):
1315
- s = s.strip()
1316
- if not s:
1317
- raise ValueError("empty offset")
1318
- if s[0] in ('+', '-'):
1319
- signal = (-1, +1)[s[0] == '+']
1320
- s = s[1:]
1321
- else:
1322
- signal = +1
1323
- if len(s) == 4:
1324
- return (int(s[:2]) * 3600 + int(s[2:]) * 60) * signal
1325
- elif len(s) == 6:
1326
- return (int(s[:2]) * 3600 + int(s[2:4]) * 60 + int(s[4:])) * signal
1327
- else:
1328
- raise ValueError("invalid offset: " + s)
1329
-
1330
- def _parse_rfc(self, s):
1331
- lines = s.splitlines()
1332
- if not lines:
1333
- raise ValueError("empty string")
1334
-
1335
- # Unfold
1336
- i = 0
1337
- while i < len(lines):
1338
- line = lines[i].rstrip()
1339
- if not line:
1340
- del lines[i]
1341
- elif i > 0 and line[0] == " ":
1342
- lines[i-1] += line[1:]
1343
- del lines[i]
1344
- else:
1345
- i += 1
1346
-
1347
- tzid = None
1348
- comps = []
1349
- invtz = False
1350
- comptype = None
1351
- for line in lines:
1352
- if not line:
1353
- continue
1354
- name, value = line.split(':', 1)
1355
- parms = name.split(';')
1356
- if not parms:
1357
- raise ValueError("empty property name")
1358
- name = parms[0].upper()
1359
- parms = parms[1:]
1360
- if invtz:
1361
- if name == "BEGIN":
1362
- if value in ("STANDARD", "DAYLIGHT"):
1363
- # Process component
1364
- pass
1365
- else:
1366
- raise ValueError("unknown component: "+value)
1367
- comptype = value
1368
- founddtstart = False
1369
- tzoffsetfrom = None
1370
- tzoffsetto = None
1371
- rrulelines = []
1372
- tzname = None
1373
- elif name == "END":
1374
- if value == "VTIMEZONE":
1375
- if comptype:
1376
- raise ValueError("component not closed: "+comptype)
1377
- if not tzid:
1378
- raise ValueError("mandatory TZID not found")
1379
- if not comps:
1380
- raise ValueError(
1381
- "at least one component is needed")
1382
- # Process vtimezone
1383
- self._vtz[tzid] = _tzicalvtz(tzid, comps)
1384
- invtz = False
1385
- elif value == comptype:
1386
- if not founddtstart:
1387
- raise ValueError("mandatory DTSTART not found")
1388
- if tzoffsetfrom is None:
1389
- raise ValueError(
1390
- "mandatory TZOFFSETFROM not found")
1391
- if tzoffsetto is None:
1392
- raise ValueError(
1393
- "mandatory TZOFFSETFROM not found")
1394
- # Process component
1395
- rr = None
1396
- if rrulelines:
1397
- rr = rrule.rrulestr("\n".join(rrulelines),
1398
- compatible=True,
1399
- ignoretz=True,
1400
- cache=True)
1401
- comp = _tzicalvtzcomp(tzoffsetfrom, tzoffsetto,
1402
- (comptype == "DAYLIGHT"),
1403
- tzname, rr)
1404
- comps.append(comp)
1405
- comptype = None
1406
- else:
1407
- raise ValueError("invalid component end: "+value)
1408
- elif comptype:
1409
- if name == "DTSTART":
1410
- # DTSTART in VTIMEZONE takes a subset of valid RRULE
1411
- # values under RFC 5545.
1412
- for parm in parms:
1413
- if parm != 'VALUE=DATE-TIME':
1414
- msg = ('Unsupported DTSTART param in ' +
1415
- 'VTIMEZONE: ' + parm)
1416
- raise ValueError(msg)
1417
- rrulelines.append(line)
1418
- founddtstart = True
1419
- elif name in ("RRULE", "RDATE", "EXRULE", "EXDATE"):
1420
- rrulelines.append(line)
1421
- elif name == "TZOFFSETFROM":
1422
- if parms:
1423
- raise ValueError(
1424
- "unsupported %s parm: %s " % (name, parms[0]))
1425
- tzoffsetfrom = self._parse_offset(value)
1426
- elif name == "TZOFFSETTO":
1427
- if parms:
1428
- raise ValueError(
1429
- "unsupported TZOFFSETTO parm: "+parms[0])
1430
- tzoffsetto = self._parse_offset(value)
1431
- elif name == "TZNAME":
1432
- if parms:
1433
- raise ValueError(
1434
- "unsupported TZNAME parm: "+parms[0])
1435
- tzname = value
1436
- elif name == "COMMENT":
1437
- pass
1438
- else:
1439
- raise ValueError("unsupported property: "+name)
1440
- else:
1441
- if name == "TZID":
1442
- if parms:
1443
- raise ValueError(
1444
- "unsupported TZID parm: "+parms[0])
1445
- tzid = value
1446
- elif name in ("TZURL", "LAST-MODIFIED", "COMMENT"):
1447
- pass
1448
- else:
1449
- raise ValueError("unsupported property: "+name)
1450
- elif name == "BEGIN" and value == "VTIMEZONE":
1451
- tzid = None
1452
- comps = []
1453
- invtz = True
1454
-
1455
- def __repr__(self):
1456
- return "%s(%s)" % (self.__class__.__name__, repr(self._s))
1457
-
1458
-
1459
- if sys.platform != "win32":
1460
- TZFILES = ["/etc/localtime", "localtime"]
1461
- TZPATHS = ["/usr/share/zoneinfo",
1462
- "/usr/lib/zoneinfo",
1463
- "/usr/share/lib/zoneinfo",
1464
- "/etc/zoneinfo"]
1465
- else:
1466
- TZFILES = []
1467
- TZPATHS = []
1468
-
1469
-
1470
- def __get_gettz():
1471
- tzlocal_classes = (tzlocal,)
1472
- if tzwinlocal is not None:
1473
- tzlocal_classes += (tzwinlocal,)
1474
-
1475
- class GettzFunc(object):
1476
- """
1477
- Retrieve a time zone object from a string representation
1478
-
1479
- This function is intended to retrieve the :py:class:`tzinfo` subclass
1480
- that best represents the time zone that would be used if a POSIX
1481
- `TZ variable`_ were set to the same value.
1482
-
1483
- If no argument or an empty string is passed to ``gettz``, local time
1484
- is returned:
1485
-
1486
- .. code-block:: python3
1487
-
1488
- >>> gettz()
1489
- tzfile('/etc/localtime')
1490
-
1491
- This function is also the preferred way to map IANA tz database keys
1492
- to :class:`tzfile` objects:
1493
-
1494
- .. code-block:: python3
1495
-
1496
- >>> gettz('Pacific/Kiritimati')
1497
- tzfile('/usr/share/zoneinfo/Pacific/Kiritimati')
1498
-
1499
- On Windows, the standard is extended to include the Windows-specific
1500
- zone names provided by the operating system:
1501
-
1502
- .. code-block:: python3
1503
-
1504
- >>> gettz('Egypt Standard Time')
1505
- tzwin('Egypt Standard Time')
1506
-
1507
- Passing a GNU ``TZ`` style string time zone specification returns a
1508
- :class:`tzstr` object:
1509
-
1510
- .. code-block:: python3
1511
-
1512
- >>> gettz('AEST-10AEDT-11,M10.1.0/2,M4.1.0/3')
1513
- tzstr('AEST-10AEDT-11,M10.1.0/2,M4.1.0/3')
1514
-
1515
- :param name:
1516
- A time zone name (IANA, or, on Windows, Windows keys), location of
1517
- a ``tzfile(5)`` zoneinfo file or ``TZ`` variable style time zone
1518
- specifier. An empty string, no argument or ``None`` is interpreted
1519
- as local time.
1520
-
1521
- :return:
1522
- Returns an instance of one of ``dateutil``'s :py:class:`tzinfo`
1523
- subclasses.
1524
-
1525
- .. versionchanged:: 2.7.0
1526
-
1527
- After version 2.7.0, any two calls to ``gettz`` using the same
1528
- input strings will return the same object:
1529
-
1530
- .. code-block:: python3
1531
-
1532
- >>> tz.gettz('America/Chicago') is tz.gettz('America/Chicago')
1533
- True
1534
-
1535
- In addition to improving performance, this ensures that
1536
- `"same zone" semantics`_ are used for datetimes in the same zone.
1537
-
1538
-
1539
- .. _`TZ variable`:
1540
- https://www.gnu.org/software/libc/manual/html_node/TZ-Variable.html
1541
-
1542
- .. _`"same zone" semantics`:
1543
- https://blog.ganssle.io/articles/2018/02/aware-datetime-arithmetic.html
1544
- """
1545
- def __init__(self):
1546
-
1547
- self.__instances = weakref.WeakValueDictionary()
1548
- self.__strong_cache_size = 8
1549
- self.__strong_cache = OrderedDict()
1550
- self._cache_lock = _thread.allocate_lock()
1551
-
1552
- def __call__(self, name=None):
1553
- with self._cache_lock:
1554
- rv = self.__instances.get(name, None)
1555
-
1556
- if rv is None:
1557
- rv = self.nocache(name=name)
1558
- if not (name is None
1559
- or isinstance(rv, tzlocal_classes)
1560
- or rv is None):
1561
- # tzlocal is slightly more complicated than the other
1562
- # time zone providers because it depends on environment
1563
- # at construction time, so don't cache that.
1564
- #
1565
- # We also cannot store weak references to None, so we
1566
- # will also not store that.
1567
- self.__instances[name] = rv
1568
- else:
1569
- # No need for strong caching, return immediately
1570
- return rv
1571
-
1572
- self.__strong_cache[name] = self.__strong_cache.pop(name, rv)
1573
-
1574
- if len(self.__strong_cache) > self.__strong_cache_size:
1575
- self.__strong_cache.popitem(last=False)
1576
-
1577
- return rv
1578
-
1579
- def set_cache_size(self, size):
1580
- with self._cache_lock:
1581
- self.__strong_cache_size = size
1582
- while len(self.__strong_cache) > size:
1583
- self.__strong_cache.popitem(last=False)
1584
-
1585
- def cache_clear(self):
1586
- with self._cache_lock:
1587
- self.__instances = weakref.WeakValueDictionary()
1588
- self.__strong_cache.clear()
1589
-
1590
- @staticmethod
1591
- def nocache(name=None):
1592
- """A non-cached version of gettz"""
1593
- tz = None
1594
- if not name:
1595
- try:
1596
- name = os.environ["TZ"]
1597
- except KeyError:
1598
- pass
1599
- if name is None or name in ("", ":"):
1600
- for filepath in TZFILES:
1601
- if not os.path.isabs(filepath):
1602
- filename = filepath
1603
- for path in TZPATHS:
1604
- filepath = os.path.join(path, filename)
1605
- if os.path.isfile(filepath):
1606
- break
1607
- else:
1608
- continue
1609
- if os.path.isfile(filepath):
1610
- try:
1611
- tz = tzfile(filepath)
1612
- break
1613
- except (IOError, OSError, ValueError):
1614
- pass
1615
- else:
1616
- tz = tzlocal()
1617
- else:
1618
- try:
1619
- if name.startswith(":"):
1620
- name = name[1:]
1621
- except TypeError as e:
1622
- if isinstance(name, bytes):
1623
- new_msg = "gettz argument should be str, not bytes"
1624
- six.raise_from(TypeError(new_msg), e)
1625
- else:
1626
- raise
1627
- if os.path.isabs(name):
1628
- if os.path.isfile(name):
1629
- tz = tzfile(name)
1630
- else:
1631
- tz = None
1632
- else:
1633
- for path in TZPATHS:
1634
- filepath = os.path.join(path, name)
1635
- if not os.path.isfile(filepath):
1636
- filepath = filepath.replace(' ', '_')
1637
- if not os.path.isfile(filepath):
1638
- continue
1639
- try:
1640
- tz = tzfile(filepath)
1641
- break
1642
- except (IOError, OSError, ValueError):
1643
- pass
1644
- else:
1645
- tz = None
1646
- if tzwin is not None:
1647
- try:
1648
- tz = tzwin(name)
1649
- except (WindowsError, UnicodeEncodeError):
1650
- # UnicodeEncodeError is for Python 2.7 compat
1651
- tz = None
1652
-
1653
- if not tz:
1654
- from dateutil.zoneinfo import get_zonefile_instance
1655
- tz = get_zonefile_instance().get(name)
1656
-
1657
- if not tz:
1658
- for c in name:
1659
- # name is not a tzstr unless it has at least
1660
- # one offset. For short values of "name", an
1661
- # explicit for loop seems to be the fastest way
1662
- # to determine if a string contains a digit.
1663
- if c in "0123456789":
1664
- try:
1665
- tz = tzstr(name)
1666
- except ValueError:
1667
- pass
1668
- break
1669
- else:
1670
- if name in ("GMT", "UTC"):
1671
- tz = UTC
1672
- elif name in time.tzname:
1673
- tz = tzlocal()
1674
- return tz
1675
-
1676
- return GettzFunc()
1677
-
1678
-
1679
- gettz = __get_gettz()
1680
- del __get_gettz
1681
-
1682
-
1683
- def datetime_exists(dt, tz=None):
1684
- """
1685
- Given a datetime and a time zone, determine whether or not a given datetime
1686
- would fall in a gap.
1687
-
1688
- :param dt:
1689
- A :class:`datetime.datetime` (whose time zone will be ignored if ``tz``
1690
- is provided.)
1691
-
1692
- :param tz:
1693
- A :class:`datetime.tzinfo` with support for the ``fold`` attribute. If
1694
- ``None`` or not provided, the datetime's own time zone will be used.
1695
-
1696
- :return:
1697
- Returns a boolean value indicating whether the "wall time" exists in
1698
- ``tz``.
1699
-
1700
- .. versionadded:: 2.7.0
1701
- """
1702
- if tz is None:
1703
- if dt.tzinfo is None:
1704
- raise ValueError('Datetime is naive and no time zone provided.')
1705
- tz = dt.tzinfo
1706
-
1707
- dt = dt.replace(tzinfo=None)
1708
-
1709
- # This is essentially a test of whether or not the datetime can survive
1710
- # a round trip to UTC.
1711
- dt_rt = dt.replace(tzinfo=tz).astimezone(UTC).astimezone(tz)
1712
- dt_rt = dt_rt.replace(tzinfo=None)
1713
-
1714
- return dt == dt_rt
1715
-
1716
-
1717
- def datetime_ambiguous(dt, tz=None):
1718
- """
1719
- Given a datetime and a time zone, determine whether or not a given datetime
1720
- is ambiguous (i.e if there are two times differentiated only by their DST
1721
- status).
1722
-
1723
- :param dt:
1724
- A :class:`datetime.datetime` (whose time zone will be ignored if ``tz``
1725
- is provided.)
1726
-
1727
- :param tz:
1728
- A :class:`datetime.tzinfo` with support for the ``fold`` attribute. If
1729
- ``None`` or not provided, the datetime's own time zone will be used.
1730
-
1731
- :return:
1732
- Returns a boolean value indicating whether the "wall time" is ambiguous in
1733
- ``tz``.
1734
-
1735
- .. versionadded:: 2.6.0
1736
- """
1737
- if tz is None:
1738
- if dt.tzinfo is None:
1739
- raise ValueError('Datetime is naive and no time zone provided.')
1740
-
1741
- tz = dt.tzinfo
1742
-
1743
- # If a time zone defines its own "is_ambiguous" function, we'll use that.
1744
- is_ambiguous_fn = getattr(tz, 'is_ambiguous', None)
1745
- if is_ambiguous_fn is not None:
1746
- try:
1747
- return tz.is_ambiguous(dt)
1748
- except Exception:
1749
- pass
1750
-
1751
- # If it doesn't come out and tell us it's ambiguous, we'll just check if
1752
- # the fold attribute has any effect on this particular date and time.
1753
- dt = dt.replace(tzinfo=tz)
1754
- wall_0 = enfold(dt, fold=0)
1755
- wall_1 = enfold(dt, fold=1)
1756
-
1757
- same_offset = wall_0.utcoffset() == wall_1.utcoffset()
1758
- same_dst = wall_0.dst() == wall_1.dst()
1759
-
1760
- return not (same_offset and same_dst)
1761
-
1762
-
1763
- def resolve_imaginary(dt):
1764
- """
1765
- Given a datetime that may be imaginary, return an existing datetime.
1766
-
1767
- This function assumes that an imaginary datetime represents what the
1768
- wall time would be in a zone had the offset transition not occurred, so
1769
- it will always fall forward by the transition's change in offset.
1770
-
1771
- .. doctest::
1772
-
1773
- >>> from dateutil import tz
1774
- >>> from datetime import datetime
1775
- >>> NYC = tz.gettz('America/New_York')
1776
- >>> print(tz.resolve_imaginary(datetime(2017, 3, 12, 2, 30, tzinfo=NYC)))
1777
- 2017-03-12 03:30:00-04:00
1778
-
1779
- >>> KIR = tz.gettz('Pacific/Kiritimati')
1780
- >>> print(tz.resolve_imaginary(datetime(1995, 1, 1, 12, 30, tzinfo=KIR)))
1781
- 1995-01-02 12:30:00+14:00
1782
-
1783
- As a note, :func:`datetime.astimezone` is guaranteed to produce a valid,
1784
- existing datetime, so a round-trip to and from UTC is sufficient to get
1785
- an extant datetime; however, this generally "falls back" to an earlier time
1786
- rather than falling forward to the STD side (though no guarantees are made
1787
- about this behavior).
1788
-
1789
- :param dt:
1790
- A :class:`datetime.datetime` which may or may not exist.
1791
-
1792
- :return:
1793
- Returns an existing :class:`datetime.datetime`. If ``dt`` was not
1794
- imaginary, the datetime returned is guaranteed to be the same object
1795
- passed to the function.
1796
-
1797
- .. versionadded:: 2.7.0
1798
- """
1799
- if dt.tzinfo is not None and not datetime_exists(dt):
1800
-
1801
- curr_offset = (dt + datetime.timedelta(hours=24)).utcoffset()
1802
- old_offset = (dt - datetime.timedelta(hours=24)).utcoffset()
1803
-
1804
- dt += curr_offset - old_offset
1805
-
1806
- return dt
1807
-
1808
-
1809
- def _datetime_to_timestamp(dt):
1810
- """
1811
- Convert a :class:`datetime.datetime` object to an epoch timestamp in
1812
- seconds since January 1, 1970, ignoring the time zone.
1813
- """
1814
- return (dt.replace(tzinfo=None) - EPOCH).total_seconds()
1815
-
1816
-
1817
- if sys.version_info >= (3, 6):
1818
- def _get_supported_offset(second_offset):
1819
- return second_offset
1820
- else:
1821
- def _get_supported_offset(second_offset):
1822
- # For Python pre-3.6, round the offset to full minutes, as
1823
- # Python's datetime doesn't accept sub-minute timezones. Check
1824
- # http://python.org/sf/1447945 or https://bugs.python.org/issue5288
1825
- # for some information.
1826
- old_offset = second_offset
1827
- calculated_offset = 60 * ((second_offset + 30) // 60)
1828
- return calculated_offset
1829
-
1830
-
1831
- try:
1832
- # Python 3.7 feature
1833
- from contextlib import nullcontext as _nullcontext
1834
- except ImportError:
1835
- class _nullcontext(object):
1836
- """
1837
- Class for wrapping contexts so that they are passed through in a
1838
- with statement.
1839
- """
1840
- def __init__(self, context):
1841
- self.context = context
1842
-
1843
- def __enter__(self):
1844
- return self.context
1845
-
1846
- def __exit__(*args, **kwargs):
1847
- pass
1848
-
1849
- # vim:ts=4:sw=4:et
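
The code removed above appears to be dateutil's ``tz`` module; the same helpers remain available from the upstream ``python-dateutil`` distribution. A minimal usage sketch of the public API documented in the deleted docstrings, assuming ``python-dateutil`` is installed (the zone key and dates are illustrative):

    from datetime import datetime
    from dateutil import tz

    NYC = tz.gettz('America/New_York')       # cached tzfile instance

    # 2017-11-05 01:30 occurs twice in New York (clocks fall back),
    # so the wall time is ambiguous.
    fold_dt = datetime(2017, 11, 5, 1, 30, tzinfo=NYC)
    print(tz.datetime_ambiguous(fold_dt))    # True

    # 2017-03-12 02:30 is skipped entirely (clocks spring forward),
    # so the wall time is imaginary; resolve_imaginary falls forward.
    gap_dt = datetime(2017, 3, 12, 2, 30, tzinfo=NYC)
    print(tz.datetime_exists(gap_dt))        # False
    print(tz.resolve_imaginary(gap_dt))      # 2017-03-12 03:30:00-04:00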
 
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/network/xmlrpc.py DELETED
@@ -1,60 +0,0 @@
1
- """xmlrpclib.Transport implementation
2
- """
3
-
4
- import logging
5
- import urllib.parse
6
- import xmlrpc.client
7
- from typing import TYPE_CHECKING, Tuple
8
-
9
- from pip._internal.exceptions import NetworkConnectionError
10
- from pip._internal.network.session import PipSession
11
- from pip._internal.network.utils import raise_for_status
12
-
13
- if TYPE_CHECKING:
14
- from xmlrpc.client import _HostType, _Marshallable
15
-
16
- logger = logging.getLogger(__name__)
17
-
18
-
19
- class PipXmlrpcTransport(xmlrpc.client.Transport):
20
- """Provide a `xmlrpclib.Transport` implementation via a `PipSession`
21
- object.
22
- """
23
-
24
- def __init__(
25
- self, index_url: str, session: PipSession, use_datetime: bool = False
26
- ) -> None:
27
- super().__init__(use_datetime)
28
- index_parts = urllib.parse.urlparse(index_url)
29
- self._scheme = index_parts.scheme
30
- self._session = session
31
-
32
- def request(
33
- self,
34
- host: "_HostType",
35
- handler: str,
36
- request_body: bytes,
37
- verbose: bool = False,
38
- ) -> Tuple["_Marshallable", ...]:
39
- assert isinstance(host, str)
40
- parts = (self._scheme, host, handler, None, None, None)
41
- url = urllib.parse.urlunparse(parts)
42
- try:
43
- headers = {"Content-Type": "text/xml"}
44
- response = self._session.post(
45
- url,
46
- data=request_body,
47
- headers=headers,
48
- stream=True,
49
- )
50
- raise_for_status(response)
51
- self.verbose = verbose
52
- return self.parse_response(response.raw)
53
- except NetworkConnectionError as exc:
54
- assert exc.response
55
- logger.critical(
56
- "HTTP error %s while getting %s",
57
- exc.response.status_code,
58
- url,
59
- )
60
- raise
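
The transport removed above historically backed ``pip search``, routing XML-RPC calls through pip's ``PipSession``. A hedged sketch of how such a transport plugs into the standard library client; note that ``PipSession`` and ``PipXmlrpcTransport`` are pip-internal APIs that may change between releases, and the index URL is illustrative:

    import xmlrpc.client

    from pip._internal.network.session import PipSession
    from pip._internal.network.xmlrpc import PipXmlrpcTransport

    index_url = "https://pypi.org/pypi"      # illustrative endpoint
    session = PipSession()
    transport = PipXmlrpcTransport(index_url, session)
    proxy = xmlrpc.client.ServerProxy(index_url, transport=transport)
    # Requests now inherit PipSession behavior (retries, proxies, auth);
    # HTTP errors are raised as NetworkConnectionError by raise_for_status.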
 
spaces/CVPR/LIVE/thrust/thrust/detail/complex/cproj.h DELETED
@@ -1,71 +0,0 @@
1
- /*
2
- * Copyright 2008-2013 NVIDIA Corporation
3
- * Copyright 2013 Filipe RNC Maia
4
- *
5
- * Licensed under the Apache License, Version 2.0 (the "License");
6
- * you may not use this file except in compliance with the License.
7
- * You may obtain a copy of the License at
8
- *
9
- * http://www.apache.org/licenses/LICENSE-2.0
10
- *
11
- * Unless required by applicable law or agreed to in writing, software
12
- * distributed under the License is distributed on an "AS IS" BASIS,
13
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
- * See the License for the specific language governing permissions and
15
- * limitations under the License.
16
- */
17
-
18
- #pragma once
19
-
20
- #include <thrust/complex.h>
21
- #include <thrust/detail/complex/math_private.h>
22
- #include <cmath>
23
-
24
- namespace thrust{
25
- namespace detail{
26
- namespace complex{
27
- __host__ __device__
28
- inline complex<float> cprojf(const complex<float>& z){
29
- if(!isinf(z.real()) && !isinf(z.imag())){
30
- return z;
31
- }else{
32
- // std::numeric_limits<T>::infinity() doesn't run on the GPU
33
- return complex<float>(infinity<float>(), copysignf(0.0, z.imag()));
34
- }
35
- }
36
-
37
- __host__ __device__
38
- inline complex<double> cproj(const complex<double>& z){
39
- if(!isinf(z.real()) && !isinf(z.imag())){
40
- return z;
41
- }else{
42
- // std::numeric_limits<T>::infinity() doesn't run on the GPU
43
- return complex<double>(infinity<double>(), copysign(0.0, z.imag()));
44
- }
45
- }
46
-
47
- }
48
-
49
- }
50
-
51
- template <typename T>
52
- __host__ __device__
53
- inline thrust::complex<T> proj(const thrust::complex<T>& z){
54
- return detail::complex::cproj(z);
55
- }
56
-
57
-
58
- template <>
59
- __host__ __device__
60
- inline thrust::complex<double> proj(const thrust::complex<double>& z){
61
- return detail::complex::cproj(z);
62
- }
63
-
64
- template <>
65
- __host__ __device__
66
- inline thrust::complex<float> proj(const thrust::complex<float>& z){
67
- return detail::complex::cprojf(z);
68
- }
69
-
70
- }
71
-
 
spaces/CVPR/LIVE/thrust/thrust/iterator/discard_iterator.h DELETED
@@ -1,175 +0,0 @@
- /*
-  *  Copyright 2008-2013 NVIDIA Corporation
-  *
-  *  Licensed under the Apache License, Version 2.0 (the "License");
-  *  you may not use this file except in compliance with the License.
-  *  You may obtain a copy of the License at
-  *
-  *      http://www.apache.org/licenses/LICENSE-2.0
-  *
-  *  Unless required by applicable law or agreed to in writing, software
-  *  distributed under the License is distributed on an "AS IS" BASIS,
-  *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  *  See the License for the specific language governing permissions and
-  *  limitations under the License.
-  */
-
-
- /*! \file thrust/iterator/discard_iterator.h
-  *  \brief An iterator which "discards" (ignores) values assigned to it upon dereference
-  */
-
- #pragma once
-
- #include <thrust/detail/config.h>
- #include <thrust/iterator/detail/discard_iterator_base.h>
- #include <thrust/iterator/iterator_facade.h>
-
- THRUST_DISABLE_MSVC_POSSIBLE_LOSS_OF_DATA_WARNING_BEGIN
-
- namespace thrust
- {
-
- /*! \addtogroup iterators
-  *  \{
-  */
-
- /*! \addtogroup fancyiterator Fancy Iterators
-  *  \ingroup iterators
-  *  \{
-  */
-
- /*! \p discard_iterator is an iterator which represents a special kind of pointer that
-  *  ignores values written to it upon dereference. This iterator is useful for ignoring
-  *  the output of certain algorithms without wasting memory capacity or bandwidth.
-  *  \p discard_iterator may also be used to count the size of an algorithm's output which
-  *  may not be known a priori.
-  *
-  *  The following code snippet demonstrates how to use \p discard_iterator to
-  *  ignore one of the output ranges of reduce_by_key
-  *
-  *  \code
-  *  #include <thrust/iterator/discard_iterator.h>
-  *  #include <thrust/reduce.h>
-  *  #include <thrust/device_vector.h>
-  *
-  *  int main()
-  *  {
-  *    thrust::device_vector<int> keys(7), values(7);
-  *
-  *    keys[0] = 1;
-  *    keys[1] = 3;
-  *    keys[2] = 3;
-  *    keys[3] = 3;
-  *    keys[4] = 2;
-  *    keys[5] = 2;
-  *    keys[6] = 1;
-  *
-  *    values[0] = 9;
-  *    values[1] = 8;
-  *    values[2] = 7;
-  *    values[3] = 6;
-  *    values[4] = 5;
-  *    values[5] = 4;
-  *    values[6] = 3;
-  *
-  *    thrust::device_vector<int> result(4);
-  *
-  *    // we are only interested in the reduced values
-  *    // use discard_iterator to ignore the output keys
-  *    thrust::reduce_by_key(keys.begin(), keys.end(),
-  *                          values.begin(),
-  *                          thrust::make_discard_iterator(),
-  *                          result.begin());
-  *
-  *    // result is now [9, 21, 9, 3]
-  *
-  *    return 0;
-  *  }
-  *  \endcode
-  *
-  *  \see make_discard_iterator
-  */
- template<typename System = use_default>
-   class discard_iterator
-     : public detail::discard_iterator_base<System>::type
- {
-     /*! \cond
-      */
-     friend class thrust::iterator_core_access;
-     typedef typename detail::discard_iterator_base<System>::type          super_t;
-     typedef typename detail::discard_iterator_base<System>::incrementable incrementable;
-     typedef typename detail::discard_iterator_base<System>::base_iterator base_iterator;
-
-   public:
-     typedef typename super_t::reference  reference;
-     typedef typename super_t::value_type value_type;
-
-     /*! \endcond
-      */
-
-     /*! Copy constructor copies from a source discard_iterator.
-      *
-      *  \p rhs The discard_iterator to copy.
-      */
-     __host__ __device__
-     discard_iterator(discard_iterator const &rhs)
-       : super_t(rhs.base()) {}
-
- #if THRUST_CPP_DIALECT >= 2011
-     discard_iterator & operator=(const discard_iterator &) = default;
- #endif
-
-     /*! This constructor receives an optional index specifying the position of this
-      *  \p discard_iterator in a range.
-      *
-      *  \p i The index of this \p discard_iterator in a range. Defaults to the
-      *       value returned by \c Incrementable's null constructor. For example,
-      *       when <tt>Incrementable == int</tt>, \c 0.
-      */
-     __host__ __device__
-     discard_iterator(incrementable const &i = incrementable())
-       : super_t(base_iterator(i)) {}
-
-     /*! \cond
-      */
-
-   private: // Core iterator interface
-     __host__ __device__
-     reference dereference() const
-     {
-       return m_element;
-     }
-
-     mutable value_type m_element;
-
-     /*! \endcond
-      */
- }; // end discard_iterator
-
-
- /*! \p make_discard_iterator creates a \p discard_iterator from an optional index parameter.
-  *
-  *  \param i The index of the returned \p discard_iterator within a range.
-  *           In the default case, the value of this parameter is \c 0.
-  *
-  *  \return A new \p discard_iterator with index as given by \p i.
-  *
-  *  \see discard_iterator
-  */
- inline __host__ __device__
- discard_iterator<> make_discard_iterator(discard_iterator<>::difference_type i = discard_iterator<>::difference_type(0))
- {
-   return discard_iterator<>(i);
- } // end make_discard_iterator()
-
- /*! \} // end fancyiterators
-  */
-
- /*! \} // end iterators
-  */
-
- } // end namespace thrust
-
- THRUST_DISABLE_MSVC_POSSIBLE_LOSS_OF_DATA_WARNING_END
-
 
spaces/CVPR/LIVE/thrust/thrust/iterator/iterator_facade.h DELETED
@@ -1,543 +0,0 @@
- /*
-  *  Copyright 2008-2013 NVIDIA Corporation
-  *
-  *  Licensed under the Apache License, Version 2.0 (the "License");
-  *  you may not use this file except in compliance with the License.
-  *  You may obtain a copy of the License at
-  *
-  *      http://www.apache.org/licenses/LICENSE-2.0
-  *
-  *  Unless required by applicable law or agreed to in writing, software
-  *  distributed under the License is distributed on an "AS IS" BASIS,
-  *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  *  See the License for the specific language governing permissions and
-  *  limitations under the License.
-  */
-
- /*! \file thrust/iterator/iterator_facade.h
-  *  \brief A class which exposes a public interface for iterators
-  */
-
- /*
-  * (C) Copyright David Abrahams 2002.
-  * (C) Copyright Jeremy Siek  2002.
-  * (C) Copyright Thomas Witt  2002.
-  *
-  * Distributed under the Boost Software License, Version 1.0.
-  * (See accompanying NOTICE file for the complete license)
-  *
-  * For more information, see http://www.boost.org
-  */
-
-
- #pragma once
-
- #include <thrust/detail/config.h>
- #include <thrust/detail/type_traits.h>
- #include <thrust/iterator/detail/iterator_facade_category.h>
- #include <thrust/iterator/detail/distance_from_result.h>
-
- namespace thrust
- {
-
- /*! \addtogroup iterators
-  *  \{
-  */
-
- /*! \addtogroup fancyiterator Fancy Iterators
-  *  \ingroup iterators
-  *  \{
-  */
-
-
- // This forward declaration is required for the friend declaration
- // in iterator_core_access
- template<typename Derived, typename Value, typename System, typename Traversal, typename Reference, typename Difference> class iterator_facade;
-
-
- /*! \p iterator_core_access is the class which user iterator types derived from \p thrust::iterator_adaptor
-  *  or \p thrust::iterator_facade must befriend to allow it to access their private interface.
-  */
- class iterator_core_access
- {
-     /*! \cond
-      */
-
-     // declare our friends
-     template<typename Derived, typename Value, typename System, typename Traversal, typename Reference, typename Difference> friend class iterator_facade;
-
-     // iterator comparisons are our friends
-     template <typename Derived1, typename Value1, typename System1, typename Traversal1, typename Reference1, typename Difference1,
-               typename Derived2, typename Value2, typename System2, typename Traversal2, typename Reference2, typename Difference2>
-     inline __host__ __device__
-     friend bool
-     operator ==(iterator_facade<Derived1,Value1,System1,Traversal1,Reference1,Difference1> const& lhs,
-                 iterator_facade<Derived2,Value2,System2,Traversal2,Reference2,Difference2> const& rhs);
-
-     template <typename Derived1, typename Value1, typename System1, typename Traversal1, typename Reference1, typename Difference1,
-               typename Derived2, typename Value2, typename System2, typename Traversal2, typename Reference2, typename Difference2>
-     inline __host__ __device__
-     friend bool
-     operator !=(iterator_facade<Derived1,Value1,System1,Traversal1,Reference1,Difference1> const& lhs,
-                 iterator_facade<Derived2,Value2,System2,Traversal2,Reference2,Difference2> const& rhs);
-
-     template <typename Derived1, typename Value1, typename System1, typename Traversal1, typename Reference1, typename Difference1,
-               typename Derived2, typename Value2, typename System2, typename Traversal2, typename Reference2, typename Difference2>
-     inline __host__ __device__
-     friend bool
-     operator <(iterator_facade<Derived1,Value1,System1,Traversal1,Reference1,Difference1> const& lhs,
-                iterator_facade<Derived2,Value2,System2,Traversal2,Reference2,Difference2> const& rhs);
-
-     template <typename Derived1, typename Value1, typename System1, typename Traversal1, typename Reference1, typename Difference1,
-               typename Derived2, typename Value2, typename System2, typename Traversal2, typename Reference2, typename Difference2>
-     inline __host__ __device__
-     friend bool
-     operator >(iterator_facade<Derived1,Value1,System1,Traversal1,Reference1,Difference1> const& lhs,
-                iterator_facade<Derived2,Value2,System2,Traversal2,Reference2,Difference2> const& rhs);
-
-     template <typename Derived1, typename Value1, typename System1, typename Traversal1, typename Reference1, typename Difference1,
-               typename Derived2, typename Value2, typename System2, typename Traversal2, typename Reference2, typename Difference2>
-     inline __host__ __device__
-     friend bool
-     operator <=(iterator_facade<Derived1,Value1,System1,Traversal1,Reference1,Difference1> const& lhs,
-                 iterator_facade<Derived2,Value2,System2,Traversal2,Reference2,Difference2> const& rhs);
-
-     template <typename Derived1, typename Value1, typename System1, typename Traversal1, typename Reference1, typename Difference1,
-               typename Derived2, typename Value2, typename System2, typename Traversal2, typename Reference2, typename Difference2>
-     inline __host__ __device__
-     friend bool
-     operator >=(iterator_facade<Derived1,Value1,System1,Traversal1,Reference1,Difference1> const& lhs,
-                 iterator_facade<Derived2,Value2,System2,Traversal2,Reference2,Difference2> const& rhs);
-
-     // iterator difference is our friend
-     template <typename Derived1, typename Value1, typename System1, typename Traversal1, typename Reference1, typename Difference1,
-               typename Derived2, typename Value2, typename System2, typename Traversal2, typename Reference2, typename Difference2>
-     inline __host__ __device__
-     friend
-       typename thrust::detail::distance_from_result<
-         iterator_facade<Derived1,Value1,System1,Traversal1,Reference1,Difference1>,
-         iterator_facade<Derived2,Value2,System2,Traversal2,Reference2,Difference2>
-       >::type
-     operator-(iterator_facade<Derived1,Value1,System1,Traversal1,Reference1,Difference1> const& lhs,
-               iterator_facade<Derived2,Value2,System2,Traversal2,Reference2,Difference2> const& rhs);
-
-     template<typename Facade>
-     __host__ __device__
-     static typename Facade::reference dereference(Facade const& f)
-     {
-       return f.dereference();
-     }
-
-     template<typename Facade>
-     __host__ __device__
-     static void increment(Facade& f)
-     {
-       f.increment();
-     }
-
-     template<typename Facade>
-     __host__ __device__
-     static void decrement(Facade& f)
-     {
-       f.decrement();
-     }
-
-     template <class Facade1, class Facade2>
-     __host__ __device__
-     static bool equal(Facade1 const& f1, Facade2 const& f2)
-     {
-       return f1.equal(f2);
-     }
-
-     // XXX TODO: Investigate whether we need both of these cases
-     //template <class Facade1, class Facade2>
-     //__host__ __device__
-     //static bool equal(Facade1 const& f1, Facade2 const& f2, mpl::true_)
-     //{
-     //  return f1.equal(f2);
-     //}
-
-     //template <class Facade1, class Facade2>
-     //__host__ __device__
-     //static bool equal(Facade1 const& f1, Facade2 const& f2, mpl::false_)
-     //{
-     //  return f2.equal(f1);
-     //}
-
-     template <class Facade>
-     __host__ __device__
-     static void advance(Facade& f, typename Facade::difference_type n)
-     {
-       f.advance(n);
-     }
-
-     // Facade2 is convertible to Facade1,
-     // so return Facade1's difference_type
-     template <class Facade1, class Facade2>
-     __host__ __device__
-     static typename Facade1::difference_type
-       distance_from(Facade1 const& f1, Facade2 const& f2, thrust::detail::true_type)
-     {
-       return -f1.distance_to(f2);
-     }
-
-     // Facade2 is not convertible to Facade1,
-     // so return Facade2's difference_type
-     template <class Facade1, class Facade2>
-     __host__ __device__
-     static typename Facade2::difference_type
-       distance_from(Facade1 const& f1, Facade2 const& f2, thrust::detail::false_type)
-     {
-       return f2.distance_to(f1);
-     }
-
-     template <class Facade1, class Facade2>
-     __host__ __device__
-     static typename thrust::detail::distance_from_result<Facade1,Facade2>::type
-       distance_from(Facade1 const& f1, Facade2 const& f2)
-     {
-       // dispatch the implementation of this method upon whether or not
-       // Facade2 is convertible to Facade1
-       return distance_from(f1, f2,
-           typename thrust::detail::is_convertible<Facade2,Facade1>::type());
-     }
-
-     //
-     // Curiously Recurring Template interface.
-     //
-     template <typename Derived, typename Value, typename System, typename Traversal, typename Reference, typename Difference>
-     __host__ __device__
-     static Derived& derived(iterator_facade<Derived,Value,System,Traversal,Reference,Difference>& facade)
-     {
-       return *static_cast<Derived*>(&facade);
-     }
-
-     template <typename Derived, typename Value, typename System, typename Traversal, typename Reference, typename Difference>
-     __host__ __device__
-     static Derived const& derived(iterator_facade<Derived,Value,System,Traversal,Reference,Difference> const& facade)
-     {
-       return *static_cast<Derived const*>(&facade);
-     }
-
-     /*! \endcond
-      */
- }; // end iterator_core_access
-
-
- /*! \p iterator_facade is a template which allows the programmer to define a novel iterator with a standards-conforming interface
-  *  which Thrust can use to reason about algorithm acceleration opportunities.
-  *
-  *  Because most of a standard iterator's interface is defined in terms of a small set of core primitives, \p iterator_facade
-  *  defines the non-primitive portion mechanically. In principle a novel iterator could explicitly provide the entire interface in
-  *  an ad hoc fashion but doing so might be tedious and prone to subtle errors.
-  *
-  *  Often \p iterator_facade is too primitive a tool to use for defining novel iterators. In these cases, \p iterator_adaptor
-  *  or a specific fancy iterator should be used instead.
-  *
-  *  \p iterator_facade's functionality is derived from and generally equivalent to \p boost::iterator_facade.
-  *  The exception is Thrust's addition of the template parameter \p System, which is necessary to allow Thrust
-  *  to dispatch an algorithm to one of several parallel backend systems. An additional exception is Thrust's omission
-  *  of the \c operator-> member function.
-  *
-  *  Interested users may refer to <tt>boost::iterator_facade</tt>'s documentation for usage examples.
-  *
-  *  \note \p iterator_facade's arithmetic operator free functions exist with the usual meanings but are omitted here for brevity.
-  */
- template<typename Derived,
-          typename Value,
-          typename System,
-          typename Traversal,
-          typename Reference,
-          typename Difference = std::ptrdiff_t>
-   class iterator_facade
- {
-   private:
-     /*! \cond
-      */
-
-     //
-     // Curiously Recurring Template interface.
-     //
-     __host__ __device__
-     Derived& derived()
-     {
-       return *static_cast<Derived*>(this);
-     }
-
-     __host__ __device__
-     Derived const& derived() const
-     {
-       return *static_cast<Derived const*>(this);
-     }
-     /*! \endcond
-      */
-
-   public:
-     /*! The type of element pointed to by \p iterator_facade.
-      */
-     typedef typename thrust::detail::remove_const<Value>::type value_type;
-
-     /*! The return type of \p iterator_facade::operator*().
-      */
-     typedef Reference                                           reference;
-
-     /*! The return type of \p iterator_facade's non-existent \c operator->()
-      *  member function. Unlike \c boost::iterator_facade, \p iterator_facade
-      *  disallows access to the \p value_type's members through expressions of the
-      *  form <tt>iter->member</tt>. \p pointer is defined to \c void to indicate
-      *  that these expressions are not allowed. This limitation may be relaxed in a
-      *  future version of Thrust.
-      */
-     typedef void                                                pointer;
-
-     /*! The type of expressions of the form <tt>x - y</tt> where <tt>x</tt> and <tt>y</tt>
-      *  are of type \p iterator_facade.
-      */
-     typedef Difference                                          difference_type;
-
-     /*! The type of iterator category of \p iterator_facade.
-      */
-     typedef typename thrust::detail::iterator_facade_category<
-       System, Traversal, Value, Reference
-     >::type                                                     iterator_category;
-
-     /*! \p operator*() dereferences this \p iterator_facade.
-      *  \return A reference to the element pointed to by this \p iterator_facade.
-      */
-     __host__ __device__
-     reference operator*() const
-     {
-       return iterator_core_access::dereference(this->derived());
-     }
-
-     // XXX unimplemented for now, consider implementing it later
-     //pointer operator->() const
-     //{
-     //  return;
-     //}
-
-     // XXX investigate whether or not we need to go to the lengths
-     //     boost does to determine the return type
-
-     /*! \p operator[] performs indexed dereference.
-      *  \return A reference to the element \p n distance away from this \p iterator_facade.
-      */
-     __host__ __device__
-     reference operator[](difference_type n) const
-     {
-       return *(this->derived() + n);
-     }
-
-     /*! \p operator++ pre-increments this \p iterator_facade to refer to the element in the next position.
-      *  \return <tt>*this</tt>
-      */
-     __host__ __device__
-     Derived& operator++()
-     {
-       iterator_core_access::increment(this->derived());
-       return this->derived();
-     }
-
-     /*! \p operator++ post-increments this \p iterator_facade and returns a new \p iterator_facade referring to the element in the next position.
-      *  \return A copy of <tt>*this</tt> before increment.
-      */
-     __host__ __device__
-     Derived  operator++(int)
-     {
-       Derived tmp(this->derived());
-       ++*this;
-       return tmp;
-     }
-
-     /*! \p operator-- pre-decrements this \p iterator_facade to refer to the element in the previous position.
-      *  \return <tt>*this</tt>
-      */
-     __host__ __device__
-     Derived& operator--()
-     {
-       iterator_core_access::decrement(this->derived());
-       return this->derived();
-     }
-
-     /*! \p operator-- post-decrements this \p iterator_facade and returns a new \p iterator_facade referring to the element in the previous position.
-      *  \return A copy of <tt>*this</tt> before decrement.
-      */
-     __host__ __device__
-     Derived  operator--(int)
-     {
-       Derived tmp(this->derived());
-       --*this;
-       return tmp;
-     }
-
-     /*! \p operator+= increments this \p iterator_facade to refer to an element a given distance after its current position.
-      *  \param n The quantity to increment.
-      *  \return <tt>*this</tt>
-      */
-     __host__ __device__
-     Derived& operator+=(difference_type n)
-     {
-       iterator_core_access::advance(this->derived(), n);
-       return this->derived();
-     }
-
-     /*! \p operator-= decrements this \p iterator_facade to refer to an element a given distance before its current position.
-      *  \param n The quantity to decrement.
-      *  \return <tt>*this</tt>
-      */
-     __host__ __device__
-     Derived& operator-=(difference_type n)
-     {
-       iterator_core_access::advance(this->derived(), -n);
-       return this->derived();
-     }
-
-     /*! \p operator- subtracts a given quantity from this \p iterator_facade and returns a new \p iterator_facade referring to the element at the given position before this \p iterator_facade.
-      *  \param n The quantity to decrement
-      *  \return An \p iterator_facade pointing \p n elements before this \p iterator_facade.
-      */
-     __host__ __device__
-     Derived  operator-(difference_type n) const
-     {
-       Derived result(this->derived());
-       return result -= n;
-     }
- }; // end iterator_facade
-
- /*! \cond
-  */
-
- // Comparison operators
- template <typename Derived1, typename Value1, typename System1, typename Traversal1, typename Reference1, typename Difference1,
-           typename Derived2, typename Value2, typename System2, typename Traversal2, typename Reference2, typename Difference2>
- inline __host__ __device__
- // XXX it might be nice to implement this at some point
- //typename enable_if_interoperable<Dr1,Dr2,bool>::type // exposition
- bool
- operator ==(iterator_facade<Derived1,Value1,System1,Traversal1,Reference1,Difference1> const& lhs,
-             iterator_facade<Derived2,Value2,System2,Traversal2,Reference2,Difference2> const& rhs)
- {
-   return iterator_core_access
-     ::equal(*static_cast<Derived1 const*>(&lhs),
-             *static_cast<Derived2 const*>(&rhs));
- }
-
- template <typename Derived1, typename Value1, typename System1, typename Traversal1, typename Reference1, typename Difference1,
-           typename Derived2, typename Value2, typename System2, typename Traversal2, typename Reference2, typename Difference2>
- inline __host__ __device__
- // XXX it might be nice to implement this at some point
- //typename enable_if_interoperable<Dr1,Dr2,bool>::type // exposition
- bool
- operator !=(iterator_facade<Derived1,Value1,System1,Traversal1,Reference1,Difference1> const& lhs,
-             iterator_facade<Derived2,Value2,System2,Traversal2,Reference2,Difference2> const& rhs)
- {
-   return !iterator_core_access
-     ::equal(*static_cast<Derived1 const*>(&lhs),
-             *static_cast<Derived2 const*>(&rhs));
- }
-
- template <typename Derived1, typename Value1, typename System1, typename Traversal1, typename Reference1, typename Difference1,
-           typename Derived2, typename Value2, typename System2, typename Traversal2, typename Reference2, typename Difference2>
- inline __host__ __device__
- // XXX it might be nice to implement this at some point
- //typename enable_if_interoperable<Dr1,Dr2,bool>::type // exposition
- bool
- operator <(iterator_facade<Derived1,Value1,System1,Traversal1,Reference1,Difference1> const& lhs,
-            iterator_facade<Derived2,Value2,System2,Traversal2,Reference2,Difference2> const& rhs)
- {
-   return 0 > iterator_core_access
-     ::distance_from(*static_cast<Derived1 const*>(&lhs),
-                     *static_cast<Derived2 const*>(&rhs));
- }
-
- template <typename Derived1, typename Value1, typename System1, typename Traversal1, typename Reference1, typename Difference1,
-           typename Derived2, typename Value2, typename System2, typename Traversal2, typename Reference2, typename Difference2>
- inline __host__ __device__
- // XXX it might be nice to implement this at some point
- //typename enable_if_interoperable<Dr1,Dr2,bool>::type // exposition
- bool
- operator >(iterator_facade<Derived1,Value1,System1,Traversal1,Reference1,Difference1> const& lhs,
-            iterator_facade<Derived2,Value2,System2,Traversal2,Reference2,Difference2> const& rhs)
- {
-   return 0 < iterator_core_access
-     ::distance_from(*static_cast<Derived1 const*>(&lhs),
-                     *static_cast<Derived2 const*>(&rhs));
- }
-
- template <typename Derived1, typename Value1, typename System1, typename Traversal1, typename Reference1, typename Difference1,
-           typename Derived2, typename Value2, typename System2, typename Traversal2, typename Reference2, typename Difference2>
- inline __host__ __device__
- // XXX it might be nice to implement this at some point
- //typename enable_if_interoperable<Dr1,Dr2,bool>::type // exposition
- bool
- operator <=(iterator_facade<Derived1,Value1,System1,Traversal1,Reference1,Difference1> const& lhs,
-             iterator_facade<Derived2,Value2,System2,Traversal2,Reference2,Difference2> const& rhs)
- {
-   return 0 >= iterator_core_access
-     ::distance_from(*static_cast<Derived1 const*>(&lhs),
-                     *static_cast<Derived2 const*>(&rhs));
- }
-
- template <typename Derived1, typename Value1, typename System1, typename Traversal1, typename Reference1, typename Difference1,
-           typename Derived2, typename Value2, typename System2, typename Traversal2, typename Reference2, typename Difference2>
- inline __host__ __device__
- // XXX it might be nice to implement this at some point
- //typename enable_if_interoperable<Dr1,Dr2,bool>::type // exposition
- bool
- operator >=(iterator_facade<Derived1,Value1,System1,Traversal1,Reference1,Difference1> const& lhs,
-             iterator_facade<Derived2,Value2,System2,Traversal2,Reference2,Difference2> const& rhs)
- {
-   return 0 <= iterator_core_access
-     ::distance_from(*static_cast<Derived1 const*>(&lhs),
-                     *static_cast<Derived2 const*>(&rhs));
- }
-
- // Iterator difference
- template <typename Derived1, typename Value1, typename System1, typename Traversal1, typename Reference1, typename Difference1,
-           typename Derived2, typename Value2, typename System2, typename Traversal2, typename Reference2, typename Difference2>
- inline __host__ __device__
-
- // divine the type this operator returns
- typename thrust::detail::distance_from_result<
-   iterator_facade<Derived1,Value1,System1,Traversal1,Reference1,Difference1>,
-   iterator_facade<Derived2,Value2,System2,Traversal2,Reference2,Difference2>
- >::type
-
- operator-(iterator_facade<Derived1,Value1,System1,Traversal1,Reference1,Difference1> const& lhs,
-           iterator_facade<Derived2,Value2,System2,Traversal2,Reference2,Difference2> const& rhs)
- {
-   return iterator_core_access
-     ::distance_from(*static_cast<Derived1 const*>(&lhs),
-                     *static_cast<Derived2 const*>(&rhs));
- }
-
- // Iterator addition
- template <typename Derived, typename Value, typename System, typename Traversal, typename Reference, typename Difference>
- inline __host__ __device__
- Derived operator+ (iterator_facade<Derived,Value,System,Traversal,Reference,Difference> const& i,
-                    typename Derived::difference_type n)
- {
-   Derived tmp(static_cast<Derived const&>(i));
-   return tmp += n;
- }
-
- template <typename Derived, typename Value, typename System, typename Traversal, typename Reference, typename Difference>
- inline __host__ __device__
- Derived operator+ (typename Derived::difference_type n,
-                    iterator_facade<Derived,Value,System,Traversal,Reference,Difference> const& i)
- {
-   Derived tmp(static_cast<Derived const&>(i));
-   return tmp += n;
- }
-
- /*! \endcond
-  */
-
- /*! \} // end fancyiterators
-  */
-
- /*! \} // end iterators
-  */
-
- } // end thrust
-
 
spaces/CVPR/LIVE/thrust/thrust/mr/memory_resource.h DELETED
@@ -1,217 +0,0 @@
- /*
-  *  Copyright 2018 NVIDIA Corporation
-  *
-  *  Licensed under the Apache License, Version 2.0 (the "License");
-  *  you may not use this file except in compliance with the License.
-  *  You may obtain a copy of the License at
-  *
-  *      http://www.apache.org/licenses/LICENSE-2.0
-  *
-  *  Unless required by applicable law or agreed to in writing, software
-  *  distributed under the License is distributed on an "AS IS" BASIS,
-  *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  *  See the License for the specific language governing permissions and
-  *  limitations under the License.
-  */
-
- /*! \file mr/memory_resource.h
-  *  \brief A base class for the memory resource system, similar to std::memory_resource,
-  *      and related utilities.
-  */
-
- #pragma once
-
- #include "detail/config.h"
- #ifdef THRUST_MR_STD_MR_HEADER
- #  include THRUST_MR_STD_MR_HEADER
- #endif
-
- namespace thrust
- {
- /*! \brief \p thrust::mr is the namespace containing system agnostic types and functions for \p memory_resource related functionalities.
-  */
- namespace mr
- {
-
- /** \addtogroup memory_resources Memory Resources
-  *  \ingroup memory_management_classes
-  *  \{
-  */
-
- /*! \p memory_resource is the base class for all other memory resources.
-  *
-  *  \tparam Pointer the pointer type that is allocated and deallocated by the memory resource
-  *      derived from this base class. If this is <tt>void *</tt>, this class derives from
-  *      <tt>std::pmr::memory_resource</tt>.
-  */
- template<typename Pointer = void *>
- class memory_resource
- {
- public:
-     /*! Alias for the template parameter.
-      */
-     typedef Pointer pointer;
-
-     /*! Virtual destructor, defaulted when possible.
-      */
-     virtual ~memory_resource() THRUST_DEFAULT
-
-     /*! Allocates memory of size at least \p bytes and alignment at least \p alignment.
-      *
-      *  \param bytes size, in bytes, that is requested from this allocation
-      *  \param alignment alignment that is requested from this allocation
-      *  \throws thrust::bad_alloc when no memory with requested size and alignment can be allocated.
-      *  \returns A pointer to void to the newly allocated memory.
-      */
-     THRUST_NODISCARD
-     pointer allocate(std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT)
-     {
-         return do_allocate(bytes, alignment);
-     }
-
-     /*! Deallocates memory pointed to by \p p.
-      *
-      *  \param p pointer to be deallocated
-      *  \param bytes the size of the allocation. This must be equivalent to the value of \p bytes that
-      *      was passed to the allocation function that returned \p p.
-      *  \param alignment the alignment of the allocation. This must be equivalent to the value of \p alignment
-      *      that was passed to the allocation function that returned \p p.
-      */
-     void deallocate(pointer p, std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT)
-     {
-         do_deallocate(p, bytes, alignment);
-     }
-
-     /*! Compares this resource to the other one. The default implementation uses identity comparison,
-      *  which is often the right thing to do and doesn't require RTTI involvement.
-      *
-      *  \param other the other resource to compare this resource to
-      *  \returns whether the two resources are equivalent.
-      */
-     __host__ __device__
-     bool is_equal(const memory_resource & other) const THRUST_NOEXCEPT
-     {
-         return do_is_equal(other);
-     }
-
-     /*! Allocates memory of size at least \p bytes and alignment at least \p alignment.
-      *
-      *  \param bytes size, in bytes, that is requested from this allocation
-      *  \param alignment alignment that is requested from this allocation
-      *  \throws thrust::bad_alloc when no memory with requested size and alignment can be allocated.
-      *  \returns A pointer to void to the newly allocated memory.
-      */
-     virtual pointer do_allocate(std::size_t bytes, std::size_t alignment) = 0;
-
-     /*! Deallocates memory pointed to by \p p.
-      *
-      *  \param p pointer to be deallocated
-      *  \param bytes the size of the allocation. This must be equivalent to the value of \p bytes that
-      *      was passed to the allocation function that returned \p p.
-      *  \param alignment the alignment of the allocation. This must be equivalent to the value of \p alignment
-      *      that was passed to the allocation function that returned \p p.
-      */
-     virtual void do_deallocate(pointer p, std::size_t bytes, std::size_t alignment) = 0;
-
-     /*! Compares this resource to the other one. The default implementation uses identity comparison,
-      *  which is often the right thing to do and doesn't require RTTI involvement.
-      *
-      *  \param other the other resource to compare this resource to
-      *  \returns whether the two resources are equivalent.
-      */
-     __host__ __device__
-     virtual bool do_is_equal(const memory_resource & other) const THRUST_NOEXCEPT
-     {
-         return this == &other;
-     }
- };
-
- template<>
- class memory_resource<void *>
- #ifdef THRUST_STD_MR_NS
-     : THRUST_STD_MR_NS::memory_resource
- #endif
- {
- public:
-     typedef void * pointer;
-
-     virtual ~memory_resource() THRUST_DEFAULT
-
-     THRUST_NODISCARD
-     pointer allocate(std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT)
-     {
-         return do_allocate(bytes, alignment);
-     }
-
-     void deallocate(pointer p, std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT)
-     {
-         do_deallocate(p, bytes, alignment);
-     }
-
-     __host__ __device__
-     bool is_equal(const memory_resource & other) const THRUST_NOEXCEPT
-     {
-         return do_is_equal(other);
-     }
-
-     virtual pointer do_allocate(std::size_t bytes, std::size_t alignment) = 0;
-     virtual void do_deallocate(pointer p, std::size_t bytes, std::size_t alignment) = 0;
-     __host__ __device__
-     virtual bool do_is_equal(const memory_resource & other) const THRUST_NOEXCEPT
-     {
-         return this == &other;
-     }
-
- #ifdef THRUST_STD_MR_NS
-     // the above do_is_equal is a different function than the one from the standard memory resource
-     // can't implement this reasonably without RTTI though; it's reasonable to assume false otherwise
-
-     virtual bool do_is_equal(const THRUST_STD_MR_NS::memory_resource & other) const noexcept override
-     {
- #  ifdef THRUST_HAS_DYNAMIC_CAST
-         auto mr_resource = dynamic_cast<memory_resource<> *>(&other);
-         return mr_resource && do_is_equal(*mr_resource);
- #  else
-         return this == &other;
- #  endif
-     }
- #endif
- };
-
- /*! Compares the memory resources for equality, first by identity, then by \p is_equal.
-  */
- template<typename Pointer>
- __host__ __device__
- bool operator==(const memory_resource<Pointer> & lhs, const memory_resource<Pointer> & rhs) THRUST_NOEXCEPT
- {
-     return &lhs == &rhs || lhs.is_equal(rhs);
- }
-
- /*! Compares the memory resources for inequality, first by identity, then by \p is_equal.
-  */
- template<typename Pointer>
- __host__ __device__
- bool operator!=(const memory_resource<Pointer> & lhs, const memory_resource<Pointer> & rhs) THRUST_NOEXCEPT
- {
-     return !(lhs == rhs);
- }
-
- /*! Returns a global instance of \p MR, created as a function local static variable.
-  *
-  *  \tparam MR type of a memory resource to get an instance from. Must be \p DefaultConstructible.
-  *  \returns a pointer to a global instance of \p MR.
-  */
- template<typename MR>
- __host__
- MR * get_global_resource()
- {
-     static MR resource;
-     return &resource;
- }
-
- /*! \}
-  */
-
- } // end mr
- } // end thrust
-
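(The deleted header's `operator==` read `rhs.is_equal(rhs)`, which compares a resource against itself; it is corrected to `lhs.is_equal(rhs)` above to match the documented "first by identity, then by is_equal" semantics.) A concrete resource only needs to override the two pure virtuals; `do_is_equal`'s inherited identity comparison is usually sufficient. A hedged sketch, with a made-up `new_delete_resource_sketch` name and the assumption that the requested alignment never exceeds `alignof(std::max_align_t)`:

#include <thrust/mr/memory_resource.h>
#include <cstddef>
#include <new>

// Illustrative only: forwards to global operator new/delete and ignores
// the alignment argument (assumed <= alignof(std::max_align_t)).
class new_delete_resource_sketch : public thrust::mr::memory_resource<>
{
    void * do_allocate(std::size_t bytes, std::size_t /*alignment*/) override
    {
        return ::operator new(bytes);  // throws std::bad_alloc on failure
    }

    void do_deallocate(void * p, std::size_t /*bytes*/, std::size_t /*alignment*/) override
    {
        ::operator delete(p);
    }
    // do_is_equal is inherited: identity comparison is correct for a
    // stateless resource like this one.
};

int main()
{
    // get_global_resource hands back a function-local static instance.
    thrust::mr::memory_resource<> * mr =
        thrust::mr::get_global_resource<new_delete_resource_sketch>();
    void * p = mr->allocate(256);
    mr->deallocate(p, 256);
    return 0;
}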
 
spaces/CVPR/Text2Human/Text2Human/train_parsing_token.py DELETED
@@ -1,122 +0,0 @@
- import argparse
- import logging
- import os
- import os.path as osp
- import random
- import time
-
- import torch
-
- from data.mask_dataset import MaskDataset
- from models import create_model
- from utils.logger import MessageLogger, get_root_logger, init_tb_logger
- from utils.options import dict2str, dict_to_nonedict, parse
- from utils.util import make_exp_dirs
-
-
- def main():
-     # options
-     parser = argparse.ArgumentParser()
-     parser.add_argument('-opt', type=str, help='Path to option YAML file.')
-     args = parser.parse_args()
-     opt = parse(args.opt, is_train=True)
-
-     # mkdir and loggers
-     make_exp_dirs(opt)
-     log_file = osp.join(opt['path']['log'], f"train_{opt['name']}.log")
-     logger = get_root_logger(
-         logger_name='base', log_level=logging.INFO, log_file=log_file)
-     logger.info(dict2str(opt))
-     # initialize tensorboard logger
-     tb_logger = None
-     if opt['use_tb_logger'] and 'debug' not in opt['name']:
-         tb_logger = init_tb_logger(log_dir='./tb_logger/' + opt['name'])
-
-     # convert to NoneDict, which returns None for missing keys
-     opt = dict_to_nonedict(opt)
-
-     # set up data loader
-     train_dataset = MaskDataset(
-         segm_dir=opt['segm_dir'], ann_dir=opt['train_ann_file'], xflip=True)
-     train_loader = torch.utils.data.DataLoader(
-         dataset=train_dataset,
-         batch_size=opt['batch_size'],
-         shuffle=True,
-         num_workers=opt['num_workers'],
-         persistent_workers=True,
-         drop_last=True)
-     logger.info(f'Number of train set: {len(train_dataset)}.')
-     opt['max_iters'] = opt['num_epochs'] * len(
-         train_dataset) // opt['batch_size']
-
-     val_dataset = MaskDataset(
-         segm_dir=opt['segm_dir'], ann_dir=opt['val_ann_file'])
-     val_loader = torch.utils.data.DataLoader(
-         dataset=val_dataset, batch_size=1, shuffle=False)
-     logger.info(f'Number of val set: {len(val_dataset)}.')
-
-     test_dataset = MaskDataset(
-         segm_dir=opt['segm_dir'], ann_dir=opt['test_ann_file'])
-     test_loader = torch.utils.data.DataLoader(
-         dataset=test_dataset, batch_size=1, shuffle=False)
-     logger.info(f'Number of test set: {len(test_dataset)}.')
-
-     current_iter = 0
-     best_epoch = None
-     best_loss = 100000
-
-     model = create_model(opt)
-
-     data_time, iter_time = 0, 0
-     current_iter = 0
-
-     # create message logger (formatted outputs)
-     msg_logger = MessageLogger(opt, current_iter, tb_logger)
-
-     for epoch in range(opt['num_epochs']):
-         lr = model.update_learning_rate(epoch)
-
-         for _, batch_data in enumerate(train_loader):
-             data_time = time.time() - data_time
-
-             current_iter += 1
-
-             model.optimize_parameters(batch_data, current_iter)
-
-             iter_time = time.time() - iter_time
-             if current_iter % opt['print_freq'] == 0:
-                 log_vars = {'epoch': epoch, 'iter': current_iter}
-                 log_vars.update({'lrs': [lr]})
-                 log_vars.update({'time': iter_time, 'data_time': data_time})
-                 log_vars.update(model.get_current_log())
-                 msg_logger(log_vars)
-
-             data_time = time.time()
-             iter_time = time.time()
-
-         if epoch % opt['val_freq'] == 0:
-             save_dir = f'{opt["path"]["visualization"]}/valset/epoch_{epoch:03d}'  # noqa
-             os.makedirs(save_dir, exist_ok=opt['debug'])
-             val_loss_total, _, _ = model.inference(val_loader, save_dir)
-
-             save_dir = f'{opt["path"]["visualization"]}/testset/epoch_{epoch:03d}'  # noqa
-             os.makedirs(save_dir, exist_ok=opt['debug'])
-             test_loss_total, _, _ = model.inference(test_loader, save_dir)
-
-             logger.info(f'Epoch: {epoch}, '
-                         f'val_loss_total: {val_loss_total}, '
-                         f'test_loss_total: {test_loss_total}.')
-
-             if test_loss_total < best_loss:
-                 best_epoch = epoch
-                 best_loss = test_loss_total
-
-                 logger.info(f'Best epoch: {best_epoch}, '
-                             f'Best test loss: {best_loss: .4f}.')
-
-             # save model
-             model.save_network(f'{opt["path"]["models"]}/epoch{epoch}.pth')
-
-
- if __name__ == '__main__':
-     main()
 
spaces/CVPR/WALT/mmdet/models/dense_heads/transformer_head.py DELETED
@@ -1,654 +0,0 @@
1
- import torch
2
- import torch.nn as nn
3
- import torch.nn.functional as F
4
- from mmcv.cnn import Conv2d, Linear, build_activation_layer
5
- from mmcv.runner import force_fp32
6
-
7
- from mmdet.core import (bbox_cxcywh_to_xyxy, bbox_xyxy_to_cxcywh,
8
- build_assigner, build_sampler, multi_apply,
9
- reduce_mean)
10
- from mmdet.models.utils import (FFN, build_positional_encoding,
11
- build_transformer)
12
- from ..builder import HEADS, build_loss
13
- from .anchor_free_head import AnchorFreeHead
14
-
15
-
16
- @HEADS.register_module()
17
- class TransformerHead(AnchorFreeHead):
18
- """Implements the DETR transformer head.
19
-
20
- See `paper: End-to-End Object Detection with Transformers
21
- <https://arxiv.org/pdf/2005.12872>`_ for details.
22
-
23
- Args:
24
- num_classes (int): Number of categories excluding the background.
25
- in_channels (int): Number of channels in the input feature map.
26
- num_fcs (int, optional): Number of fully-connected layers used in
27
- `FFN`, which is then used for the regression head. Default 2.
28
- transformer (dict, optional): Config for transformer.
29
- positional_encoding (dict, optional): Config for position encoding.
30
- loss_cls (dict, optional): Config of the classification loss.
31
- Default `CrossEntropyLoss`.
32
- loss_bbox (dict, optional): Config of the regression loss.
33
- Default `L1Loss`.
34
- loss_iou (dict, optional): Config of the regression iou loss.
35
- Default `GIoULoss`.
36
- tran_cfg (dict, optional): Training config of transformer head.
37
- test_cfg (dict, optional): Testing config of transformer head.
38
-
39
- Example:
40
- >>> import torch
41
- >>> self = TransformerHead(80, 2048)
42
- >>> x = torch.rand(1, 2048, 32, 32)
43
- >>> mask = torch.ones(1, 32, 32).to(x.dtype)
44
- >>> mask[:, :16, :15] = 0
45
- >>> all_cls_scores, all_bbox_preds = self(x, mask)
46
- """
47
-
48
- def __init__(self,
49
- num_classes,
50
- in_channels,
51
- num_fcs=2,
52
- transformer=dict(
53
- type='Transformer',
54
- embed_dims=256,
55
- num_heads=8,
56
- num_encoder_layers=6,
57
- num_decoder_layers=6,
58
- feedforward_channels=2048,
59
- dropout=0.1,
60
- act_cfg=dict(type='ReLU', inplace=True),
61
- norm_cfg=dict(type='LN'),
62
- num_fcs=2,
63
- pre_norm=False,
64
- return_intermediate_dec=True),
65
- positional_encoding=dict(
66
- type='SinePositionalEncoding',
67
- num_feats=128,
68
- normalize=True),
69
- loss_cls=dict(
70
- type='CrossEntropyLoss',
71
- bg_cls_weight=0.1,
72
- use_sigmoid=False,
73
- loss_weight=1.0,
74
- class_weight=1.0),
75
- loss_bbox=dict(type='L1Loss', loss_weight=5.0),
76
- loss_iou=dict(type='GIoULoss', loss_weight=2.0),
77
- train_cfg=dict(
78
- assigner=dict(
79
- type='HungarianAssigner',
80
- cls_cost=dict(type='ClassificationCost', weight=1.),
81
- reg_cost=dict(type='BBoxL1Cost', weight=5.0),
82
- iou_cost=dict(
83
- type='IoUCost', iou_mode='giou', weight=2.0))),
84
- test_cfg=dict(max_per_img=100),
85
- **kwargs):
86
- # NOTE here use `AnchorFreeHead` instead of `TransformerHead`,
87
- # since it brings inconvenience when the initialization of
88
- # `AnchorFreeHead` is called.
89
- super(AnchorFreeHead, self).__init__()
90
- use_sigmoid_cls = loss_cls.get('use_sigmoid', False)
91
- assert not use_sigmoid_cls, 'setting use_sigmoid_cls as True is ' \
92
- 'not supported in DETR, since background is needed for the ' \
93
- 'matching process.'
94
- assert 'embed_dims' in transformer \
95
- and 'num_feats' in positional_encoding
96
- num_feats = positional_encoding['num_feats']
97
- embed_dims = transformer['embed_dims']
98
- assert num_feats * 2 == embed_dims, 'embed_dims should' \
99
- f' be exactly 2 times of num_feats. Found {embed_dims}' \
100
- f' and {num_feats}.'
101
- assert test_cfg is not None and 'max_per_img' in test_cfg
102
-
103
- class_weight = loss_cls.get('class_weight', None)
104
- if class_weight is not None:
105
- assert isinstance(class_weight, float), 'Expected ' \
106
- 'class_weight to have type float. Found ' \
107
- f'{type(class_weight)}.'
108
- # NOTE following the official DETR rep0, bg_cls_weight means
109
- # relative classification weight of the no-object class.
110
- bg_cls_weight = loss_cls.get('bg_cls_weight', class_weight)
111
- assert isinstance(bg_cls_weight, float), 'Expected ' \
112
- 'bg_cls_weight to have type float. Found ' \
113
- f'{type(bg_cls_weight)}.'
114
- class_weight = torch.ones(num_classes + 1) * class_weight
115
- # set background class as the last indice
116
- class_weight[num_classes] = bg_cls_weight
117
- loss_cls.update({'class_weight': class_weight})
118
- if 'bg_cls_weight' in loss_cls:
119
- loss_cls.pop('bg_cls_weight')
120
- self.bg_cls_weight = bg_cls_weight
121
-
122
- if train_cfg:
123
- assert 'assigner' in train_cfg, 'assigner should be provided '\
124
- 'when train_cfg is set.'
125
- assigner = train_cfg['assigner']
126
- assert loss_cls['loss_weight'] == assigner['cls_cost']['weight'], \
127
- 'The classification weight for loss and matcher should be' \
128
- 'exactly the same.'
129
- assert loss_bbox['loss_weight'] == assigner['reg_cost'][
130
- 'weight'], 'The regression L1 weight for loss and matcher ' \
131
- 'should be exactly the same.'
132
- assert loss_iou['loss_weight'] == assigner['iou_cost']['weight'], \
133
- 'The regression iou weight for loss and matcher should be' \
134
- 'exactly the same.'
135
- self.assigner = build_assigner(assigner)
136
- # DETR sampling=False, so use PseudoSampler
137
- sampler_cfg = dict(type='PseudoSampler')
138
- self.sampler = build_sampler(sampler_cfg, context=self)
139
- self.num_classes = num_classes
140
- self.cls_out_channels = num_classes + 1
141
- self.in_channels = in_channels
142
- self.num_fcs = num_fcs
143
- self.train_cfg = train_cfg
144
- self.test_cfg = test_cfg
145
- self.use_sigmoid_cls = use_sigmoid_cls
146
- self.embed_dims = embed_dims
147
- self.num_query = test_cfg['max_per_img']
148
- self.fp16_enabled = False
149
- self.loss_cls = build_loss(loss_cls)
150
- self.loss_bbox = build_loss(loss_bbox)
151
- self.loss_iou = build_loss(loss_iou)
152
- self.act_cfg = transformer.get('act_cfg',
153
- dict(type='ReLU', inplace=True))
154
- self.activate = build_activation_layer(self.act_cfg)
155
- self.positional_encoding = build_positional_encoding(
156
- positional_encoding)
157
- self.transformer = build_transformer(transformer)
158
- self._init_layers()
159
-
160
- def _init_layers(self):
161
- """Initialize layers of the transformer head."""
162
- self.input_proj = Conv2d(
163
- self.in_channels, self.embed_dims, kernel_size=1)
164
- self.fc_cls = Linear(self.embed_dims, self.cls_out_channels)
165
- self.reg_ffn = FFN(
166
- self.embed_dims,
167
- self.embed_dims,
168
- self.num_fcs,
169
- self.act_cfg,
170
- dropout=0.0,
171
- add_residual=False)
172
- self.fc_reg = Linear(self.embed_dims, 4)
173
- self.query_embedding = nn.Embedding(self.num_query, self.embed_dims)
174
-
175
- def init_weights(self, distribution='uniform'):
176
- """Initialize weights of the transformer head."""
177
- # The initialization for transformer is important
178
- self.transformer.init_weights()
179
-
180
- def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
181
- missing_keys, unexpected_keys, error_msgs):
182
- """load checkpoints."""
183
- # NOTE here use `AnchorFreeHead` instead of `TransformerHead`,
184
- # since `AnchorFreeHead._load_from_state_dict` should not be
185
- # called here. Invoking the default `Module._load_from_state_dict`
186
- # is enough.
187
- super(AnchorFreeHead,
188
- self)._load_from_state_dict(state_dict, prefix, local_metadata,
189
- strict, missing_keys,
190
- unexpected_keys, error_msgs)
191
-
192
- def forward(self, feats, img_metas):
193
- """Forward function.
194
-
195
- Args:
196
- feats (tuple[Tensor]): Features from the upstream network, each is
197
- a 4D-tensor.
198
- img_metas (list[dict]): List of image information.
199
-
200
- Returns:
201
- tuple[list[Tensor], list[Tensor]]: Outputs for all scale levels.
202
-
203
- - all_cls_scores_list (list[Tensor]): Classification scores \
204
- for each scale level. Each is a 4D-tensor with shape \
205
- [nb_dec, bs, num_query, cls_out_channels]. Note \
206
- `cls_out_channels` should includes background.
207
- - all_bbox_preds_list (list[Tensor]): Sigmoid regression \
208
- outputs for each scale level. Each is a 4D-tensor with \
209
- normalized coordinate format (cx, cy, w, h) and shape \
210
- [nb_dec, bs, num_query, 4].
211
- """
212
- num_levels = len(feats)
213
- img_metas_list = [img_metas for _ in range(num_levels)]
214
- return multi_apply(self.forward_single, feats, img_metas_list)
215
-
216
- def forward_single(self, x, img_metas):
217
- """"Forward function for a single feature level.
218
-
219
- Args:
220
- x (Tensor): Input feature from backbone's single stage, shape
221
- [bs, c, h, w].
222
- img_metas (list[dict]): List of image information.
223
-
224
- Returns:
225
- all_cls_scores (Tensor): Outputs from the classification head,
226
- shape [nb_dec, bs, num_query, cls_out_channels]. Note
227
- cls_out_channels should includes background.
228
- all_bbox_preds (Tensor): Sigmoid outputs from the regression
229
- head with normalized coordinate format (cx, cy, w, h).
230
- Shape [nb_dec, bs, num_query, 4].
231
- """
232
- # construct binary masks which used for the transformer.
233
- # NOTE following the official DETR repo, non-zero values representing
234
- # ignored positions, while zero values means valid positions.
235
- batch_size = x.size(0)
236
- input_img_h, input_img_w = img_metas[0]['batch_input_shape']
237
- masks = x.new_ones((batch_size, input_img_h, input_img_w))
238
- for img_id in range(batch_size):
239
- img_h, img_w, _ = img_metas[img_id]['img_shape']
240
- masks[img_id, :img_h, :img_w] = 0
241
-
242
- x = self.input_proj(x)
243
- # interpolate masks to have the same spatial shape with x
244
- masks = F.interpolate(
245
- masks.unsqueeze(1), size=x.shape[-2:]).to(torch.bool).squeeze(1)
246
- # position encoding
247
- pos_embed = self.positional_encoding(masks) # [bs, embed_dim, h, w]
248
- # outs_dec: [nb_dec, bs, num_query, embed_dim]
249
- outs_dec, _ = self.transformer(x, masks, self.query_embedding.weight,
250
- pos_embed)
251
-
252
- all_cls_scores = self.fc_cls(outs_dec)
253
- all_bbox_preds = self.fc_reg(self.activate(
254
- self.reg_ffn(outs_dec))).sigmoid()
255
- return all_cls_scores, all_bbox_preds
256
-
257
- @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list'))
258
- def loss(self,
259
- all_cls_scores_list,
260
- all_bbox_preds_list,
261
- gt_bboxes_list,
262
- gt_labels_list,
263
- img_metas,
264
- gt_bboxes_ignore=None):
265
- """"Loss function.
266
-
267
- Only outputs from the last feature level are used for computing
268
- losses by default.
269
-
270
- Args:
271
- all_cls_scores_list (list[Tensor]): Classification outputs
272
- for each feature level. Each is a 4D-tensor with shape
273
- [nb_dec, bs, num_query, cls_out_channels].
274
- all_bbox_preds_list (list[Tensor]): Sigmoid regression
275
- outputs for each feature level. Each is a 4D-tensor with
276
- normalized coordinate format (cx, cy, w, h) and shape
277
- [nb_dec, bs, num_query, 4].
278
- gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image
279
- with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
280
- gt_labels_list (list[Tensor]): Ground truth class indices for each
281
- image with shape (num_gts, ).
282
- img_metas (list[dict]): List of image meta information.
283
- gt_bboxes_ignore (list[Tensor], optional): Bounding boxes
284
- which can be ignored for each image. Default None.
285
-
286
- Returns:
287
- dict[str, Tensor]: A dictionary of loss components.
288
- """
289
- # NOTE defaultly only the outputs from the last feature scale is used.
290
- all_cls_scores = all_cls_scores_list[-1]
291
- all_bbox_preds = all_bbox_preds_list[-1]
292
- assert gt_bboxes_ignore is None, \
293
- 'Only supports for gt_bboxes_ignore setting to None.'
294
-
295
- num_dec_layers = len(all_cls_scores)
296
- all_gt_bboxes_list = [gt_bboxes_list for _ in range(num_dec_layers)]
297
- all_gt_labels_list = [gt_labels_list for _ in range(num_dec_layers)]
298
- all_gt_bboxes_ignore_list = [
299
- gt_bboxes_ignore for _ in range(num_dec_layers)
300
- ]
301
- img_metas_list = [img_metas for _ in range(num_dec_layers)]
302
-
303
- losses_cls, losses_bbox, losses_iou = multi_apply(
304
- self.loss_single, all_cls_scores, all_bbox_preds,
305
- all_gt_bboxes_list, all_gt_labels_list, img_metas_list,
306
- all_gt_bboxes_ignore_list)
307
-
308
- loss_dict = dict()
309
- # loss from the last decoder layer
310
- loss_dict['loss_cls'] = losses_cls[-1]
311
- loss_dict['loss_bbox'] = losses_bbox[-1]
312
- loss_dict['loss_iou'] = losses_iou[-1]
313
- # loss from other decoder layers
314
- num_dec_layer = 0
315
- for loss_cls_i, loss_bbox_i, loss_iou_i in zip(losses_cls[:-1],
316
- losses_bbox[:-1],
317
- losses_iou[:-1]):
318
- loss_dict[f'd{num_dec_layer}.loss_cls'] = loss_cls_i
319
- loss_dict[f'd{num_dec_layer}.loss_bbox'] = loss_bbox_i
320
- loss_dict[f'd{num_dec_layer}.loss_iou'] = loss_iou_i
321
- num_dec_layer += 1
322
- return loss_dict
323
-
324
-     def loss_single(self,
-                     cls_scores,
-                     bbox_preds,
-                     gt_bboxes_list,
-                     gt_labels_list,
-                     img_metas,
-                     gt_bboxes_ignore_list=None):
-         """Loss function for outputs from a single decoder layer of a single
-         feature level.
-
-         Args:
-             cls_scores (Tensor): Box score logits from a single decoder layer
-                 for all images. Shape [bs, num_query, cls_out_channels].
-             bbox_preds (Tensor): Sigmoid outputs from a single decoder layer
-                 for all images, with normalized coordinate (cx, cy, w, h) and
-                 shape [bs, num_query, 4].
-             gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image
-                 with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
-             gt_labels_list (list[Tensor]): Ground truth class indices for each
-                 image with shape (num_gts, ).
-             img_metas (list[dict]): List of image meta information.
-             gt_bboxes_ignore_list (list[Tensor], optional): Bounding
-                 boxes which can be ignored for each image. Default None.
-
-         Returns:
-             tuple[Tensor]: Classification, regression L1 and IoU losses for
-                 outputs from a single decoder layer.
-         """
-         num_imgs = cls_scores.size(0)
-         cls_scores_list = [cls_scores[i] for i in range(num_imgs)]
-         bbox_preds_list = [bbox_preds[i] for i in range(num_imgs)]
-         cls_reg_targets = self.get_targets(cls_scores_list, bbox_preds_list,
-                                            gt_bboxes_list, gt_labels_list,
-                                            img_metas, gt_bboxes_ignore_list)
-         (labels_list, label_weights_list, bbox_targets_list,
-          bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets
-         labels = torch.cat(labels_list, 0)
-         label_weights = torch.cat(label_weights_list, 0)
-         bbox_targets = torch.cat(bbox_targets_list, 0)
-         bbox_weights = torch.cat(bbox_weights_list, 0)
-
-         # classification loss
-         cls_scores = cls_scores.reshape(-1, self.cls_out_channels)
-         # construct a weighted avg_factor to match the official DETR repo
-         cls_avg_factor = num_total_pos * 1.0 + \
-             num_total_neg * self.bg_cls_weight
-         loss_cls = self.loss_cls(
-             cls_scores, labels, label_weights, avg_factor=cls_avg_factor)
-
-         # Compute the average number of gt boxes across all GPUs, for
-         # normalization purposes
-         num_total_pos = loss_cls.new_tensor([num_total_pos])
-         num_total_pos = torch.clamp(reduce_mean(num_total_pos), min=1).item()
-
-         # construct factors used to rescale bboxes
-         factors = []
-         for img_meta, bbox_pred in zip(img_metas, bbox_preds):
-             img_h, img_w, _ = img_meta['img_shape']
-             factor = bbox_pred.new_tensor([img_w, img_h, img_w,
-                                            img_h]).unsqueeze(0).repeat(
-                                                bbox_pred.size(0), 1)
-             factors.append(factor)
-         factors = torch.cat(factors, 0)
-
-         # DETR regresses the relative position of boxes (cxcywh) in the
-         # image, thus the learning target is normalized by the image size.
-         # So here we need to re-scale them for calculating the IoU loss.
-         bbox_preds = bbox_preds.reshape(-1, 4)
-         bboxes = bbox_cxcywh_to_xyxy(bbox_preds) * factors
-         bboxes_gt = bbox_cxcywh_to_xyxy(bbox_targets) * factors
-
-         # regression IoU loss, GIoU loss by default
-         loss_iou = self.loss_iou(
-             bboxes, bboxes_gt, bbox_weights, avg_factor=num_total_pos)
-
-         # regression L1 loss
-         loss_bbox = self.loss_bbox(
-             bbox_preds, bbox_targets, bbox_weights, avg_factor=num_total_pos)
-         return loss_cls, loss_bbox, loss_iou
-
-     def get_targets(self,
-                     cls_scores_list,
-                     bbox_preds_list,
-                     gt_bboxes_list,
-                     gt_labels_list,
-                     img_metas,
-                     gt_bboxes_ignore_list=None):
-         """Compute regression and classification targets for a batch of
-         images.
-
-         Outputs from a single decoder layer of a single feature level are
-         used.
-
-         Args:
-             cls_scores_list (list[Tensor]): Box score logits from a single
-                 decoder layer for each image with shape [num_query,
-                 cls_out_channels].
-             bbox_preds_list (list[Tensor]): Sigmoid outputs from a single
-                 decoder layer for each image, with normalized coordinate
-                 (cx, cy, w, h) and shape [num_query, 4].
-             gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image
-                 with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
-             gt_labels_list (list[Tensor]): Ground truth class indices for each
-                 image with shape (num_gts, ).
-             img_metas (list[dict]): List of image meta information.
-             gt_bboxes_ignore_list (list[Tensor], optional): Bounding
-                 boxes which can be ignored for each image. Default None.
-
-         Returns:
-             tuple: a tuple containing the following targets.
-
-             - labels_list (list[Tensor]): Labels for all images.
-             - label_weights_list (list[Tensor]): Label weights for all \
-                 images.
-             - bbox_targets_list (list[Tensor]): BBox targets for all \
-                 images.
-             - bbox_weights_list (list[Tensor]): BBox weights for all \
-                 images.
-             - num_total_pos (int): Number of positive samples in all \
-                 images.
-             - num_total_neg (int): Number of negative samples in all \
-                 images.
-         """
-         assert gt_bboxes_ignore_list is None, \
-             'Only supports gt_bboxes_ignore being None.'
-         num_imgs = len(cls_scores_list)
-         gt_bboxes_ignore_list = [
-             gt_bboxes_ignore_list for _ in range(num_imgs)
-         ]
-
-         (labels_list, label_weights_list, bbox_targets_list,
-          bbox_weights_list, pos_inds_list, neg_inds_list) = multi_apply(
-              self._get_target_single, cls_scores_list, bbox_preds_list,
-              gt_bboxes_list, gt_labels_list, img_metas, gt_bboxes_ignore_list)
-         num_total_pos = sum((inds.numel() for inds in pos_inds_list))
-         num_total_neg = sum((inds.numel() for inds in neg_inds_list))
-         return (labels_list, label_weights_list, bbox_targets_list,
-                 bbox_weights_list, num_total_pos, num_total_neg)
-
-     def _get_target_single(self,
-                            cls_score,
-                            bbox_pred,
-                            gt_bboxes,
-                            gt_labels,
-                            img_meta,
-                            gt_bboxes_ignore=None):
-         """Compute regression and classification targets for one image.
-
-         Outputs from a single decoder layer of a single feature level are
-         used.
-
-         Args:
-             cls_score (Tensor): Box score logits from a single decoder layer
-                 for one image. Shape [num_query, cls_out_channels].
-             bbox_pred (Tensor): Sigmoid outputs from a single decoder layer
-                 for one image, with normalized coordinate (cx, cy, w, h) and
-                 shape [num_query, 4].
-             gt_bboxes (Tensor): Ground truth bboxes for one image with
-                 shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
-             gt_labels (Tensor): Ground truth class indices for one image
-                 with shape (num_gts, ).
-             img_meta (dict): Meta information for one image.
-             gt_bboxes_ignore (Tensor, optional): Bounding boxes
-                 which can be ignored. Default None.
-
-         Returns:
-             tuple[Tensor]: a tuple containing the following for one image.
-
-             - labels (Tensor): Labels of each image.
-             - label_weights (Tensor): Label weights of each image.
-             - bbox_targets (Tensor): BBox targets of each image.
-             - bbox_weights (Tensor): BBox weights of each image.
-             - pos_inds (Tensor): Sampled positive indices for each image.
-             - neg_inds (Tensor): Sampled negative indices for each image.
-         """
-
-         num_bboxes = bbox_pred.size(0)
-         # assigner and sampler
-         assign_result = self.assigner.assign(bbox_pred, cls_score, gt_bboxes,
-                                              gt_labels, img_meta,
-                                              gt_bboxes_ignore)
-         sampling_result = self.sampler.sample(assign_result, bbox_pred,
-                                               gt_bboxes)
-         pos_inds = sampling_result.pos_inds
-         neg_inds = sampling_result.neg_inds
-
-         # label targets
-         labels = gt_bboxes.new_full((num_bboxes, ),
-                                     self.num_classes,
-                                     dtype=torch.long)
-         labels[pos_inds] = gt_labels[sampling_result.pos_assigned_gt_inds]
-         label_weights = gt_bboxes.new_ones(num_bboxes)
-
-         # bbox targets
-         bbox_targets = torch.zeros_like(bbox_pred)
-         bbox_weights = torch.zeros_like(bbox_pred)
-         bbox_weights[pos_inds] = 1.0
-         img_h, img_w, _ = img_meta['img_shape']
-
-         # DETR regresses the relative position of boxes (cxcywh) in the
-         # image. Thus the learning target should be normalized by the image
-         # size, and the box format should be converted from the default
-         # x1y1x2y2 to cxcywh.
-         factor = bbox_pred.new_tensor([img_w, img_h, img_w,
-                                        img_h]).unsqueeze(0)
-         pos_gt_bboxes_normalized = sampling_result.pos_gt_bboxes / factor
-         pos_gt_bboxes_targets = bbox_xyxy_to_cxcywh(pos_gt_bboxes_normalized)
-         bbox_targets[pos_inds] = pos_gt_bboxes_targets
-         return (labels, label_weights, bbox_targets, bbox_weights, pos_inds,
-                 neg_inds)
-
-     # Overwritten because img_metas are needed as inputs for bbox_head.
-     def forward_train(self,
-                       x,
-                       img_metas,
-                       gt_bboxes,
-                       gt_labels=None,
-                       gt_bboxes_ignore=None,
-                       proposal_cfg=None,
-                       **kwargs):
-         """Forward function for training mode.
-
-         Args:
-             x (list[Tensor]): Features from backbone.
-             img_metas (list[dict]): Meta information of each image, e.g.,
-                 image size, scaling factor, etc.
-             gt_bboxes (Tensor): Ground truth bboxes of the image,
-                 shape (num_gts, 4).
-             gt_labels (Tensor): Ground truth labels of each box,
-                 shape (num_gts,).
-             gt_bboxes_ignore (Tensor): Ground truth bboxes to be
-                 ignored, shape (num_ignored_gts, 4).
-             proposal_cfg (mmcv.Config): Test / postprocessing configuration.
-                 If None, test_cfg would be used.
-
-         Returns:
-             dict[str, Tensor]: A dictionary of loss components.
-         """
-         assert proposal_cfg is None, '"proposal_cfg" must be None'
-         outs = self(x, img_metas)
-         if gt_labels is None:
-             loss_inputs = outs + (gt_bboxes, img_metas)
-         else:
-             loss_inputs = outs + (gt_bboxes, gt_labels, img_metas)
-         losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)
-         return losses
-
-     @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list'))
-     def get_bboxes(self,
-                    all_cls_scores_list,
-                    all_bbox_preds_list,
-                    img_metas,
-                    rescale=False):
-         """Transform network outputs for a batch into bbox predictions.
-
-         Args:
-             all_cls_scores_list (list[Tensor]): Classification outputs
-                 for each feature level. Each is a 4D-tensor with shape
-                 [nb_dec, bs, num_query, cls_out_channels].
-             all_bbox_preds_list (list[Tensor]): Sigmoid regression
-                 outputs for each feature level. Each is a 4D-tensor with
-                 normalized coordinate format (cx, cy, w, h) and shape
-                 [nb_dec, bs, num_query, 4].
-             img_metas (list[dict]): Meta information of each image.
-             rescale (bool, optional): If True, return boxes in original
-                 image space. Default False.
-
-         Returns:
-             list[list[Tensor, Tensor]]: Each item in result_list is a 2-tuple. \
-                 The first item is an (n, 5) tensor, where the first 4 columns \
-                 are bounding box positions (tl_x, tl_y, br_x, br_y) and the \
-                 5-th column is a score between 0 and 1. The second item is a \
-                 (n,) tensor where each item is the predicted class label of \
-                 the corresponding box.
-         """
-         # NOTE: by default only the outputs from the last feature level
-         # and the last decoder layer are used.
-         cls_scores = all_cls_scores_list[-1][-1]
-         bbox_preds = all_bbox_preds_list[-1][-1]
-
-         result_list = []
-         for img_id in range(len(img_metas)):
-             cls_score = cls_scores[img_id]
-             bbox_pred = bbox_preds[img_id]
-             img_shape = img_metas[img_id]['img_shape']
-             scale_factor = img_metas[img_id]['scale_factor']
-             proposals = self._get_bboxes_single(cls_score, bbox_pred,
-                                                 img_shape, scale_factor,
-                                                 rescale)
-             result_list.append(proposals)
-         return result_list
-
-     def _get_bboxes_single(self,
-                            cls_score,
-                            bbox_pred,
-                            img_shape,
-                            scale_factor,
-                            rescale=False):
-         """Transform outputs from the last decoder layer into bbox
-         predictions for each image.
-
-         Args:
-             cls_score (Tensor): Box score logits from the last decoder layer
-                 for each image. Shape [num_query, cls_out_channels].
-             bbox_pred (Tensor): Sigmoid outputs from the last decoder layer
-                 for each image, with coordinate format (cx, cy, w, h) and
-                 shape [num_query, 4].
-             img_shape (tuple[int]): Shape of the input image,
-                 (height, width, 3).
-             scale_factor (ndarray, optional): Scale factor of the image,
-                 arranged as (w_scale, h_scale, w_scale, h_scale).
-             rescale (bool, optional): If True, return boxes in original image
-                 space. Default False.
-
-         Returns:
-             tuple[Tensor]: Results of detected bboxes and labels.
-
-             - det_bboxes: Predicted bboxes with shape [num_query, 5], \
-                 where the first 4 columns are bounding box positions \
-                 (tl_x, tl_y, br_x, br_y) and the 5-th column is a score \
-                 between 0 and 1.
-             - det_labels: Predicted labels of the corresponding boxes with \
-                 shape [num_query].
-         """
-         assert len(cls_score) == len(bbox_pred)
-         # exclude background
-         scores, det_labels = F.softmax(cls_score, dim=-1)[..., :-1].max(-1)
-         det_bboxes = bbox_cxcywh_to_xyxy(bbox_pred)
-         det_bboxes[:, 0::2] = det_bboxes[:, 0::2] * img_shape[1]
-         det_bboxes[:, 1::2] = det_bboxes[:, 1::2] * img_shape[0]
-         det_bboxes[:, 0::2].clamp_(min=0, max=img_shape[1])
-         det_bboxes[:, 1::2].clamp_(min=0, max=img_shape[0])
-         if rescale:
-             det_bboxes /= det_bboxes.new_tensor(scale_factor)
-         det_bboxes = torch.cat((det_bboxes, scores.unsqueeze(1)), -1)
-         return det_bboxes, det_labels
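
For reference, both `loss_single` and `_get_bboxes_single` above rescale normalized (cx, cy, w, h) predictions into absolute (x1, y1, x2, y2) boxes with a per-image (w, h, w, h) factor. A minimal, self-contained sketch of that conversion (the `cxcywh_to_xyxy` helper below is an illustrative reimplementation, not the mmdet original):

import torch

def cxcywh_to_xyxy(boxes):
    # Split (cx, cy, w, h) and rebuild corner coordinates.
    cx, cy, w, h = boxes.unbind(-1)
    return torch.stack([cx - 0.5 * w, cy - 0.5 * h,
                        cx + 0.5 * w, cy + 0.5 * h], dim=-1)

# One normalized prediction for a 480 x 640 (h x w) image.
pred = torch.tensor([[0.5, 0.5, 0.2, 0.4]])        # (cx, cy, w, h) in [0, 1]
factor = torch.tensor([[640., 480., 640., 480.]])  # (w, h, w, h) rescale factor
abs_box = cxcywh_to_xyxy(pred) * factor
# -> tensor([[256., 144., 384., 336.]]), i.e. (x1, y1, x2, y2) in pixels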
spaces/CVPR/WALT/mmdet/models/detectors/atss.py DELETED
@@ -1,17 +0,0 @@
- from ..builder import DETECTORS
- from .single_stage import SingleStageDetector
-
-
- @DETECTORS.register_module()
- class ATSS(SingleStageDetector):
-     """Implementation of `ATSS <https://arxiv.org/abs/1912.02424>`_."""
-
-     def __init__(self,
-                  backbone,
-                  neck,
-                  bbox_head,
-                  train_cfg=None,
-                  test_cfg=None,
-                  pretrained=None):
-         super(ATSS, self).__init__(backbone, neck, bbox_head, train_cfg,
-                                    test_cfg, pretrained)
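
For context, `@DETECTORS.register_module()` makes the class constructible by name from a config dict. A minimal sketch of resolving the registered class through the same registry the file imports (assuming the standard mmcv `Registry` API that backs mmdet 2.x):

from mmdet.models.builder import DETECTORS  # the `..builder` module above

atss_cls = DETECTORS.get('ATSS')  # look up the registered class by name
assert atss_cls.__name__ == 'ATSS'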
spaces/CVPR/WALT/mmdet/utils/util_mixins.py DELETED
@@ -1,104 +0,0 @@
- """This module defines the :class:`NiceRepr` mixin class, which defines a
- ``__repr__`` and ``__str__`` method that only depend on a custom ``__nice__``
- method, which you must define. This means you only have to overload one
- function instead of two. Furthermore, if the object defines a ``__len__``
- method, then the ``__nice__`` method defaults to something sensible;
- otherwise it is treated as abstract and raises ``NotImplementedError``.
-
- To use, simply have your object inherit from :class:`NiceRepr`
- (multiple inheritance should be fine).
-
- This code was copied from the ubelt library: https://github.com/Erotemic/ubelt
-
- Example:
-     >>> # Objects that define __nice__ have a default __str__ and __repr__
-     >>> class Student(NiceRepr):
-     ...     def __init__(self, name):
-     ...         self.name = name
-     ...     def __nice__(self):
-     ...         return self.name
-     >>> s1 = Student('Alice')
-     >>> s2 = Student('Bob')
-     >>> print(f's1 = {s1}')
-     >>> print(f's2 = {s2}')
-     s1 = <Student(Alice)>
-     s2 = <Student(Bob)>
-
- Example:
-     >>> # Objects that define __len__ have a default __nice__
-     >>> class Group(NiceRepr):
-     ...     def __init__(self, data):
-     ...         self.data = data
-     ...     def __len__(self):
-     ...         return len(self.data)
-     >>> g = Group([1, 2, 3])
-     >>> print(f'g = {g}')
-     g = <Group(3)>
- """
- import warnings
-
-
- class NiceRepr(object):
-     """Inherit from this class and define ``__nice__`` to "nicely" print your
-     objects.
-
-     Defines ``__str__`` and ``__repr__`` in terms of the ``__nice__`` function.
-     Classes that inherit from :class:`NiceRepr` should redefine ``__nice__``.
-     If the inheriting class has a ``__len__`` method, then the default
-     ``__nice__`` method will return its length.
-
-     Example:
-         >>> class Foo(NiceRepr):
-         ...     def __nice__(self):
-         ...         return 'info'
-         >>> foo = Foo()
-         >>> assert str(foo) == '<Foo(info)>'
-         >>> assert repr(foo).startswith('<Foo(info) at ')
-
-     Example:
-         >>> class Bar(NiceRepr):
-         ...     pass
-         >>> bar = Bar()
-         >>> import pytest
-         >>> with pytest.warns(None) as record:
-         >>>     assert 'object at' in str(bar)
-         >>>     assert 'object at' in repr(bar)
-
-     Example:
-         >>> class Baz(NiceRepr):
-         ...     def __len__(self):
-         ...         return 5
-         >>> baz = Baz()
-         >>> assert str(baz) == '<Baz(5)>'
-     """
-
-     def __nice__(self):
-         """str: a "nice" summary string describing this module"""
-         if hasattr(self, '__len__'):
-             # It is a common pattern for objects to use __len__ in __nice__.
-             # As a convenience we define a default __nice__ for these objects.
-             return str(len(self))
-         else:
-             # In all other cases force the subclass to overload __nice__.
-             raise NotImplementedError(
-                 f'Define the __nice__ method for {self.__class__!r}')
-
-     def __repr__(self):
-         """str: the string of the module"""
-         try:
-             nice = self.__nice__()
-             classname = self.__class__.__name__
-             return f'<{classname}({nice}) at {hex(id(self))}>'
-         except NotImplementedError as ex:
-             warnings.warn(str(ex), category=RuntimeWarning)
-             return object.__repr__(self)
-
-     def __str__(self):
-         """str: the string of the module"""
-         try:
-             classname = self.__class__.__name__
-             nice = self.__nice__()
-             return f'<{classname}({nice})>'
-         except NotImplementedError as ex:
-             warnings.warn(str(ex), category=RuntimeWarning)
-             return object.__repr__(self)
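
A short usage sketch of the mixin above (the class and field names are illustrative): define `__nice__` once and both `str()` and `repr()` follow.

class Point(NiceRepr):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __nice__(self):
        return f'{self.x}, {self.y}'

p = Point(1, 2)
assert str(p) == '<Point(1, 2)>'
assert repr(p).startswith('<Point(1, 2) at ')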
spaces/Campfireman/whisper_lab2/app.py DELETED
@@ -1,119 +0,0 @@
- from transformers import pipeline
- import gradio as gr
- import moviepy.editor as mp
- from pytube import YouTube
- import math
- import validators
-
- pipe = pipeline(model="Campfireman/whisper-small-hi")  # change to "your-username/the-name-you-picked"
-
- segment_length = 25  # 25 s per segment
-
- def download_video(url):
-     print("Downloading...")
-     local_file = (
-         YouTube(url)
-         .streams.filter(progressive=True, file_extension="mp4")
-         .first()
-         .download()
-     )
-     print("Downloaded")
-     global my_clip
-     global original_wav
-     my_clip = mp.VideoFileClip(local_file)
-     my_clip.audio.write_audiofile("AUDIO_ORIGINAL.wav")
-     original_wav = mp.AudioFileClip("AUDIO_ORIGINAL.wav")
-     global audio_length
-     audio_length = original_wav.duration
-     print("Overall audio length: " + str(audio_length))
-     return local_file
-
- def validate_youtube(url):
-     # Create a YouTube object and check the video length.
-     try:
-         yt = YouTube(url)
-     except Exception:
-         print("The URL does not seem to be a valid YouTube video link")
-         return False
-     # yt.length returns the length of the video in seconds as an int
-     video_length = yt.length
-     if video_length > 600:
-         print("Your video is longer than 10 minutes")
-         return False
-     else:
-         print("Your video is no longer than 10 minutes")
-         return True
-
- def validate_url(url):
-     # True if the string is a well-formed URL.
-     return bool(validators.url(url))
-
- def audio_clipper(index, seg_total):
-     my_audio = "audio_out" + str(index) + ".wav"
-     audio_clipped_obj = mp.AudioFileClip.copy(original_wav)
-     print("Segment " + str(index) + ":")
-     # Clipping: cut away everything before and after this segment.
-     if index > 0:
-         print("Clipped: 0 ~ " + str(segment_length * index) + " sec")
-         audio_clipped_obj = mp.AudioFileClip.cutout(
-             audio_clipped_obj, 0, segment_length * index)
-     if index < seg_total - 1:
-         print("Clipped: " + str(segment_length * (index + 1)) + " ~ " + str(audio_length) + " sec")
-         audio_clipped_obj = mp.AudioFileClip.cutout(
-             audio_clipped_obj, segment_length * (index + 1), audio_length)
-
-     # Write out the temporary segment data
-     mp.AudioFileClip.write_audiofile(audio_clipped_obj, my_audio)
-
-     return my_audio
-
- def transcribe(video_url):
-     text = ""
-     if validate_url(video_url):
-         if not validate_youtube(video_url):
-             return "The URL does not point to a YouTube video or the video is too long. Check the errors in the log."
-         else:
-             download_video(video_url)
-     else:
-         return "Invalid URL. Please check the format of your link."
-
-     segment_count = math.ceil(audio_length / segment_length)
-     print("Total segments: " + str(segment_count))
-     if segment_count <= 0:
-         return "Corrupted video data! Invalid length of " + str(segment_count * segment_length) + " second(s)."
-     else:
-         for x in range(segment_count):
-             audio = audio_clipper(x, segment_count)
-             seg_text = pipe(audio, batch_size=512, truncation=True)["text"]
-             print("Segment text: ")
-             print(seg_text)
-             text = text + seg_text
-
-     return text
-
-
- def transcribe2(audio):
-     text = pipe(audio)["text"]
-     return text
-
-
- iface = gr.Interface(
-     fn=transcribe,
-     inputs=gr.Textbox(label="Enter the URL of the YouTube video clip here (without prefixes like http://):"),
-     outputs="text",
-     title="Whisper Small SE",
-     description="Swedish video transcription",
- )
-
-
- iface2 = gr.Interface(
-     fn=transcribe2,
-     inputs=gr.Audio(source="microphone", type="filepath"),
-     outputs="text",
-     title="Whisper Small Swedish",
-     description="Realtime demo for Swedish speech recognition using a fine-tuned Whisper small model.",
- )
-
- demo = gr.TabbedInterface([iface, iface2], ["Swedish YouTube Video to Text", "Swedish Audio to Text"])
-
- demo.launch()
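
The clipping above keeps segment i's audio in [i * segment_length, (i + 1) * segment_length) and lets the final segment run to the end of the clip. A minimal sketch of that boundary arithmetic (the `segment_bounds` helper is hypothetical, for illustration only):

import math

def segment_bounds(audio_length, segment_length=25):
    # (start, end) second ranges covering the whole clip.
    n = math.ceil(audio_length / segment_length)
    return [(i * segment_length, min((i + 1) * segment_length, audio_length))
            for i in range(n)]

assert segment_bounds(60) == [(0, 25), (25, 50), (50, 60)]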
spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/manage.py DELETED
@@ -1,43 +0,0 @@
- #!/usr/bin/env python
- """Django's command-line utility for administrative tasks."""
- import os
- import sys
- import random
- from torchvision.transforms import GaussianBlur
-
-
- # Define a custom transform for Gaussian blur
- def gaussian_blur(
-         x,
-         p=0.5,
-         kernel_size_min=3,
-         kernel_size_max=20,
-         sigma_min=0.1,
-         sigma_max=3):
-     if x.ndim == 4:
-         for i in range(x.shape[0]):
-             if random.random() < p:
-                 kernel_size = random.randrange(
-                     kernel_size_min,
-                     kernel_size_max + 1, 2)
-                 sigma = random.uniform(sigma_min, sigma_max)
-                 x[i] = GaussianBlur(kernel_size=kernel_size, sigma=sigma)(x[i])
-     return x
-
-
- def main():
-     """Run administrative tasks."""
-     os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
-     try:
-         from django.core.management import execute_from_command_line
-     except ImportError as exc:
-         raise ImportError(
-             "Couldn't import Django. Are you sure it's installed and "
-             "available on your PYTHONPATH environment variable? Did you "
-             "forget to activate a virtual environment?"
-         ) from exc
-     execute_from_command_line(sys.argv)
-
-
- if __name__ == '__main__':
-     main()
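
A quick usage sketch of the `gaussian_blur` transform above on a dummy NCHW batch (the shapes and `p=1.0` are chosen purely for illustration; it relies on the same torchvision import the file already uses):

import torch

batch = torch.rand(8, 3, 64, 64)       # NCHW batch with values in [0, 1]
blurred = gaussian_blur(batch, p=1.0)  # p=1.0 blurs every image in the batch
assert blurred.shape == batch.shape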
spaces/CofAI/optor/index.html DELETED
@@ -1,53 +0,0 @@
- <!DOCTYPE html>
- <html lang="en">
-   <head>
-     <meta charset="utf-8" />
-     <meta
-       name="viewport"
-       content="width=device-width, initial-scale=1, shrink-to-fit=no, maximum-scale=1"
-     />
-
-     <script>
-       window.__gradio_mode__ = "app";
-       window.gradio_config = {"version": "3.0.13", "mode": "blocks", "dev_mode": false, "components": [{"id": 1, "type": "markdown", "props": {"value": "<h1><center>OPTOR</center></h1>", "name": "markdown", "visible": true, "style": {}}}, {"id": 2, "type": "markdown", "props": {"value": "<p>OPTOR lets you generate high-quality images from a text query, based on SD, Midjourney and Dalle-2!</p>\n", "name": "markdown", "visible": true, "style": {}}}, {"id": 3, "type": "group", "props": {"type": "group", "visible": true, "style": {}}}, {"id": 4, "type": "box", "props": {"type": "box", "visible": true, "style": {}}}, {"id": 5, "type": "row", "props": {"type": "row", "visible": true, "style": {"equal_height": true, "mobile_collapse": false}}}, {"id": 6, "type": "textbox", "props": {"lines": 1, "max_lines": 1, "value": "", "label": "Enter your prompt", "show_label": false, "name": "textbox", "visible": true, "style": {"container": false}}}, {"id": 7, "type": "button", "props": {"value": "Run", "variant": "primary", "name": "button", "visible": true, "style": {}}}, {"id": 8, "type": "gallery", "props": {"label": "Generated images", "show_label": false, "name": "gallery", "visible": true, "style": {"grid": [3], "height": "auto"}}}, {"id": 9, "type": "markdown", "props": {"value": "<details>\n<summary>LICENSE</summary>\n<p style='line-height: normal; font-size: small'>\nAll rights reserved to CofAI. The technology is powered by the latest models and you get it for free. You can copy the repositories wherever you want. Thank you!</a>.\n</p>\n</details>", "name": "markdown", "visible": true, "style": {}}}, {"id": 10, "type": "markdown", "props": {"value": "<hr />\n<p style='text-align: center'>\nCreated by <a href=\"https://texton-optor1.hf.space\" target=\"_blank\">CofAI</a> et al. 2023\n<br/>\n<a href=\"https://texton-optor2.hf.space\" target=\"_blank\">Discord</a> | <a href=\"https://texton-eh.hf.space\" target=\"_blank\">Evgeniy Hristoforu</a>\n<p style='text-align: center'>Powered by CofAI.api <a href=\"https://texton-cof1.hf.space/trc/\" target=\"_blank\">CofAI.api Engine</a>\n</p>", "name": "markdown", "visible": true, "style": {}}}], "theme": "default", "css": ".container { max-width: 800px; margin: auto; }", "enable_queue": false, "layout": {"id": 0, "children": [{"id": 1}, {"id": 2}, {"id": 3, "children": [{"id": 4, "children": [{"id": 5, "children": [{"id": 6}, {"id": 7}]}]}, {"id": 8}]}, {"id": 9}, {"id": 10}]}, "dependencies": [{"targets": [7], "trigger": "click", "inputs": [6], "outputs": [8], "backend_fn": false, "js": "\n async (text) => {\n try {\n response = await fetch('https://bf.dallemini.ai/generate', {\n method: 'POST',\n headers: {\n 'Accept': 'application/json',\n 'Content-Type': 'application/json'\n },\n body: JSON.stringify({\n prompt: text\n })\n });\n response = await response.json()\n let imgs = response.images.map(r => \"data:image/png;base64,\" + r)\n return imgs\n } catch (e) {\n alert(\"Too much traffic, please try again.\")\n IMG = \"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAMgAAADICAMAAACahl6sAAAAOVBMVEXg4OB1dXXX19fd3d2EhIR9fX14eHjJycm2trbb29uurq6goKCZmZmIiIiBgYHNzc2np6e8vLySkpKXK8HrAAABuUlEQVR4nO3Z0bKCIBCAYQNFVCzr/R/2nHU6k8KpJi6wZf7vLu1id9gFhKYBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAb249h7pzr5jD29uhospnlfNo4L+boiLKYyZ0iblKYiu/iNER3PTquD9npPgbB98Za0/twH59JVasMtzXo1m+iHny7PrwpysSuebgxCtmOTlkma121l/TFZR2UqXxEebxEO/87QZlZ3inpeCPzVftkojUyJp2OWVgKy23qSsbg8evitBSXkUjHzYN9Is0oeWoYkkUKazsxRYlYKa6ldFSfs7K/8tsnUSLrXHAuG1SOXpp5t1LEiQxSe33ZqDJIC4TdkziRJkRN9J1CXFlpIj7J9RvNSd0kiUj1zSVjyiKr4X5yTRIx0kYlY8oinbzfFSaJWFlJSsaUpZpEqimttNkTOpo9nX4TOqbfdEFM6FgQpW7c8OofSrYo1Wwaq9nG1/NhVc2nbj2HD821kuOgeg7o3hyZBj1Hpo9D7M3K+HeIrSmPeq4Vfl3ruOhpnly9vdyEfa1KLkPF7nr66GAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAPjcD13rCcC3ILx/AAAAAElFTkSuQmCC\"\n return Array(9).fill(IMG)\n }\n }\n ", "status_tracker": null, "queue": null, "api_name": null}]};
-     </script>
-
-     <link rel="preconnect" href="https://fonts.googleapis.com" />
-     <link
-       rel="preconnect"
-       href="https://fonts.gstatic.com"
-       crossorigin="anonymous"
-     />
-     <link
-       href="https://fonts.googleapis.com/css?family=Source Sans Pro"
-       rel="stylesheet"
-     />
-     <link
-       href="https://fonts.googleapis.com/css?family=IBM Plex Mono"
-       rel="stylesheet"
-     />
-     <script src="https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.3.1/iframeResizer.contentWindow.min.js"></script>
-     <script type="module" crossorigin src="https://gradio.s3-us-west-2.amazonaws.com/3.0.9b12/assets/index.8eca4ae7.js"></script>
-     <link rel="stylesheet" href="https://gradio.s3-us-west-2.amazonaws.com/3.0.9b12/assets/index.cbea297d.css">
-     <style>
-       footer img {
-         display: none !important;
-       }
-     </style>
-   </head>
-
-   <body
-     style="
-       margin: 0;
-       padding: 0;
-       display: flex;
-       flex-direction: column;
-       flex-grow: 1;
-     "
-   >
-     <div
-       id="root"
-       style="display: flex; flex-direction: column; flex-grow: 1"
-     ></div>
-   </body>
- </html>
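
The inline handler embedded in the config above POSTs the prompt to the dallemini endpoint and base64-decodes the returned images. A rough Python equivalent of that request (same endpoint and payload shape as the embedded script; the prompt text and output filenames are illustrative, and the endpoint's availability is not guaranteed):

import base64
import requests

resp = requests.post(
    'https://bf.dallemini.ai/generate',
    json={'prompt': 'a watercolor fox'},
    headers={'Accept': 'application/json'})
for i, img_b64 in enumerate(resp.json()['images']):
    with open(f'out_{i}.png', 'wb') as f:
        f.write(base64.b64decode(img_b64))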