parquet-converter committed
Commit 168c640 · Parent: eb91bd5

Update parquet files (step 88 of 249)

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Christinaaguilerabacktobasicsbittorrent.md +0 -12
  2. spaces/1gistliPinn/ChatGPT4/Examples/Dbms Book Pdf By Prateek Bhatia.md +0 -6
  3. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Asphalt 8 The Ultimate Racing Game for Speed Lovers - Drive and Drift with Real Physics.md +0 -119
  4. spaces/1phancelerku/anime-remove-background/Aprenda a fazer pizzas incrveis com o Good Pizza Great Pizza Mod Apk Dinheiro Infinito.md +0 -112
  5. spaces/1phancelerku/anime-remove-background/Batyr Muhammedow - Lebabyma - The Song that Rocked the Armenian Music Scene - Mp3 Download.md +0 -142
  6. spaces/1phancelerku/anime-remove-background/Download Green Button Mod APK and Challenge Your Friends to Press the Button.md +0 -102
  7. spaces/1toTree/lora_test/ppdiffusers/configuration_utils.py +0 -591
  8. spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_k_dpm_2_discrete.py +0 -286
  9. spaces/1toTree/lora_test/ppdiffusers/utils/import_utils.py +0 -331
  10. spaces/801artistry/RVC801/infer/lib/uvr5_pack/lib_v5/model_param_init.py +0 -69
  11. spaces/AIWaves/Debate/src/agents/template.py +0 -111
  12. spaces/AP123/ai-avatars/convertosd.py +0 -226
  13. spaces/AUBADA-ALARABI/AraPoet/README.md +0 -14
  14. spaces/Abhilashvj/planogram-compliance/yolo_inference_util.py +0 -369
  15. spaces/AdamWEE80/VoiceTTS/README.md +0 -12
  16. spaces/AgentVerse/agentVerse/dataloader/gsm8k.py +0 -22
  17. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/PaddingMethods.js +0 -36
  18. spaces/Aitor/CVchat/app.py +0 -85
  19. spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/data_utils.py +0 -267
  20. spaces/AlekseyKorshuk/gai-project/modules/playground.py +0 -142
  21. spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/cantonese.py +0 -59
  22. spaces/Amrrs/image-caption-with-vit-gpt2/app.py +0 -78
  23. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/vae.py +0 -688
  24. spaces/Andy1621/UniFormerV2_mit_demo/mitv1_class_index.py +0 -341
  25. spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_fpn_giou_1x_coco.py +0 -6
  26. spaces/Andy1621/uniformer_image_detection/configs/libra_rcnn/README.md +0 -28
  27. spaces/Andy1621/uniformer_image_detection/configs/nas_fpn/retinanet_r50_nasfpn_crop640_50e_coco.py +0 -79
  28. spaces/Andy1621/uniformer_image_detection/mmdet/datasets/samplers/distributed_sampler.py +0 -39
  29. spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/yolo_neck.py +0 -136
  30. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/hrf.py +0 -27
  31. spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/encoders/modules.py +0 -213
  32. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/distributions/sdist.py +0 -150
  33. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/actions.py +0 -207
  34. spaces/Awiny/Image2Paragraph/models/grit_src/grit/modeling/text/file_utils.py +0 -256
  35. spaces/BIOML-SVM/SVM/README.md +0 -54
  36. spaces/Banbri/zcvzcv/Dockerfile +0 -65
  37. spaces/Banbri/zcvzcv/src/lib/pick.ts +0 -2
  38. spaces/Basil2k4/VPSnguyenmanh/src/create_user_and_fix_permissions.sh +0 -47
  39. spaces/Benson/text-generation/Examples/Banderas De Pases.md +0 -83
  40. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/packaging/markers.py +0 -304
  41. spaces/BridgeTower/bridgetower-video-search/app.py +0 -341
  42. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_anchor_generator.py +0 -122
  43. spaces/CVPR/LIVE/thrust/thrust/detail/event_error.h +0 -166
  44. spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/scalar/binary_search.h +0 -85
  45. spaces/Cpp4App/Cpp4App/CDM/detect_compo/lib_ip/ip_preprocessing.py +0 -69
  46. spaces/DEEMOSTECH/ChatAvatar/static/js/main.84e5ce89.js +0 -0
  47. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/filelock/_soft.py +0 -46
  48. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/colorLib/table_builder.py +0 -223
  49. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/bezierTools.py +0 -1474
  50. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/_version.py +0 -21
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Christinaaguilerabacktobasicsbittorrent.md DELETED
@@ -1,12 +0,0 @@
- <br />
- <h1>How to Download Christina Aguilera's Back to Basics Album via BitTorrent</h1>
- <p>Christina Aguilera is one of the most popular and talented singers of our time. Her fifth studio album, Back to Basics, was released in 2006 and received critical acclaim for its blend of retro soul, jazz, blues, and pop influences. The album features hit singles such as "Ain't No Other Man", "Hurt", "Candyman", and "Slow Down Baby".</p>
- <p>If you are a fan of Christina Aguilera and want to download her Back to Basics album for free, you can use BitTorrent, a peer-to-peer file sharing protocol that allows users to download and share large files over the internet. BitTorrent is legal, but downloading copyrighted content without permission is not. Therefore, you should only download files that are in the public domain or that you have the right to use.</p>
- <h2>christinaaguilerabacktobasicsbittorrent</h2><br /><p><b><b>DOWNLOAD</b> &#128504; <a href="https://byltly.com/2uKxz2">https://byltly.com/2uKxz2</a></b></p><br /><br />
- <p>To download Christina Aguilera's Back to Basics album via BitTorrent, you will need a BitTorrent client, such as qBittorrent, uTorrent, or Vuze. A BitTorrent client is a software that enables you to connect to other users who have the files you want and download them to your computer. You will also need a torrent file or a magnet link, which are small files that contain information about the files you want to download, such as their names, sizes, and locations.</p>
- <p>One of the sources where you can find a torrent file or a magnet link for Christina Aguilera's Back to Basics album is <a href="https://bt4g.org/magnet/0dc7e8b988fd2a3b25fabf64d1a44bc95f5e615c">this website[^1^]</a>. This website provides a magnet link for the album, which you can copy and paste into your BitTorrent client. Alternatively, you can click on the magnet link and your BitTorrent client will open automatically and start downloading the album.</p>
- <p>Another source where you can find a torrent file or a magnet link for Christina Aguilera's Back to Basics album is <a href="https://www.youtube.com/watch?v=M5xwpe_mhzQ">this YouTube video[^2^]</a>. This video contains the full album in audio format, along with the track listing and the release date. In the description of the video, you can find links to various streaming platforms where you can listen to or buy the album legally. However, if you scroll down to the comments section, you can also find some users who have posted torrent files or magnet links for the album. You can download these files or links and use them with your BitTorrent client.</p>
- <p>Before downloading any torrent file or magnet link from any source, you should always check its validity and safety. You can do this by reading the reviews and ratings of other users who have downloaded the same file or link. You should also scan the file or link with an antivirus software before opening it. Additionally, you should use a VPN (virtual private network) service to protect your privacy and security while downloading files via BitTorrent.</p>
- <p>Downloading Christina Aguilera's Back to Basics album via BitTorrent is a simple and fast way to enjoy her music for free. However, you should always respect the rights of the artists and creators who produce such amazing content. If you like Christina Aguilera's music, you should support her by buying her albums, attending her concerts, or following her on social media.</p> cec2833e83<br />
- <br />
- <br />

spaces/1gistliPinn/ChatGPT4/Examples/Dbms Book Pdf By Prateek Bhatia.md DELETED
@@ -1,6 +0,0 @@
- <h2>dbms book pdf by prateek bhatia</h2><br /><p><b><b>DOWNLOAD</b> &#187; <a href="https://imgfil.com/2uxXKT">https://imgfil.com/2uxXKT</a></b></p><br /><br />
- <br />
- [Books] Database Management System By Prateek Bhatia Pdf. Recognizing the ... Sep 22, 2019; 3 min read; Dbms Book Pdf By Prateek Bhatia. 4d29de3e1b<br />
- <br />
- <br />
- <p></p>

spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Asphalt 8 The Ultimate Racing Game for Speed Lovers - Drive and Drift with Real Physics.md DELETED
@@ -1,119 +0,0 @@
- <br />
- <h1>Asphalt 8 Racing Game - Drive, Drift at Real Speed Download</h1>
- <p>If you are looking for a thrilling and immersive racing game that will keep you on the edge of your seat, you should definitely check out Asphalt 8. This is one of the most popular and acclaimed racing games on mobile devices, with over 470 million players worldwide. In this article, we will tell you everything you need to know about Asphalt 8, including how to download and install it on your device, what features it offers, and some tips and tricks to help you become a better racer.</p>
- <h2>asphalt 8 racing game - drive drift at real speed download</h2><br /><p><b><b>Download File</b> &#9675;&#9675;&#9675; <a href="https://urlin.us/2uSVBn">https://urlin.us/2uSVBn</a></b></p><br /><br />
- <h2>Introduction</h2>
- <p>Asphalt 8 is an arcade racing game developed by Gameloft SE. It is part of the Asphalt franchise that started in 2004. Asphalt 8 was released in 2013 for iOS, Android, Windows Phone, Windows 10, Tizen, BlackBerry, tvOS, macOS, Nintendo Switch, Ouya, Fire OS. It has received several updates and expansions since then.</p>
- <p>Asphalt 8 lets you experience the thrill of driving over 300 high-performance cars and bikes from top licensed manufacturers like Ferrari, Lamborghini, Bugatti, Porsche, Ducati, and more. You can race across more than 75 tracks in different locations around the world, from the Nevada Desert to Tokyo streets. You can also compete with other players in real-time multiplayer mode or challenge yourself in various single-player modes.</p>
- <p>To download Asphalt 8 on your device, you need to follow these steps:</p>
- <ul>
- <li>Go to your device's app store (Google Play Store for Android, App Store for iOS, Microsoft Store for Windows) or visit [1](https://gameloft.com/game/asphalt-8) for other platforms.</li>
- <li>Search for "Asphalt 8" or "Asphalt 8 Racing Game" or use this link for Android devices.</li>
- <li>Tap on "Install" or "Get" button and wait for the download to finish.</li>
- <li>Launch the game and follow the instructions on the screen.</li>
- <li>Enjoy your racing adventure!</li>
- </ul>
- <h2>Features of Asphalt 8</h2>
- <h3>Licensed luxury cars and motorcycles</h3>
- <p>One of the main attractions of Asphalt 8 is its impressive collection of vehicles. You can choose from over 300 cars and bikes from some of the most prestigious brands in the world. Whether you prefer speed, power, design, or handling, you will find something that suits your taste. Some of the models available in the game are:</p>
- <p>asphalt 8 car racing game - apps on google play<br />
- asphalt 8 airborne - microsoft store<br />
- asphalt 8 racing game - drive drift at real speed for pc<br />
- asphalt 8 - car racing game gameloft se<br />
- asphalt 8 airborne - perform high-speed aerial stunts<br />
- asphalt 8 racing game - download free for android<br />
- asphalt 8 - car racing game apk mod<br />
- asphalt 8 airborne - online multiplayer racing experience<br />
- asphalt 8 racing game - best graphics and physics<br />
- asphalt 8 - car racing game review<br />
- asphalt 8 airborne - how to install on windows 10<br />
- asphalt 8 racing game - tips and tricks for beginners<br />
- asphalt 8 - car racing game cheats and hacks<br />
- asphalt 8 airborne - latest update and news<br />
- asphalt 8 racing game - top licensed cars and motorcycles<br />
- asphalt 8 - car racing game features and gameplay<br />
- asphalt 8 airborne - system requirements and compatibility<br />
- asphalt 8 racing game - customize and upgrade your rides<br />
- asphalt 8 - car racing game official website and support<br />
- asphalt 8 airborne - ratings and feedback from users<br />
- asphalt 8 racing game - best tracks and locations<br />
- asphalt 8 - car racing game discord and social media<br />
- asphalt 8 airborne - limited-time events and rewards<br />
- asphalt 8 racing game - massive content depth and challenges<br />
- asphalt 8 - car racing game trailer and screenshots<br />
- asphalt 8 airborne - faq and troubleshooting<br />
- asphalt 8 racing game - create your own racer avatar<br />
- asphalt 8 - car racing game alternatives and similar games<br />
- asphalt 8 airborne - achievements and leaderboards<br />
- asphalt 8 racing game - how to play with friends and family<br />
- asphalt 8 - car racing game blog and community<br />
- asphalt 8 airborne - in-game purchases and currency<br />
- asphalt 8 racing game - how to get free cars and bikes<br />
- asphalt 8 - car racing game videos and tutorials<br />
- asphalt 8 airborne - fun facts and trivia<br />
- asphalt 8 racing game - how to drift and boost your speed<br />
- asphalt 8 - car racing game soundtrack and music<br />
- asphalt 8 airborne - history and development of the game<br />
- asphalt 8 racing game - how to unlock special edition vehicles<br />
- asphalt 8 - car racing game comparison with other asphalt games<br />
- asphalt 8 airborne - pros and cons of the game<br />
- asphalt 8 racing game - how to contact the developers and report bugs<br />
- asphalt 8 - car racing game vr and mixed reality mode<br />
- asphalt 8 airborne - best cars and bikes for each track<br />
- asphalt 8 racing game - how to backup and restore your progress<br />
- asphalt 8 - car racing game controller support and settings<br />
- asphalt 8 airborne - secrets and easter eggs in the game<br />
- asphalt 8 racing game - how to join world series and tournaments</p> <ul>
- <li>Ferrari FXX Evoluzione, LaFerrari, Enzo Ferrari, 488 GTB, F40, F12berlinetta</li>
- <li>Lamborghini Veneno, Aventador LP 750-4 SV, Huracán Super Trofeo Evo, Centenario LP 770-4, Sesto Elemento, Egoista</li>
- <li>Bugatti Chiron, Veyron 16.4 Grand Sport Vitesse, Divo</li>
- <li>Porsche 918 Spyder with Weissach Package, 911 GT3 RS, Carrera GT, Taycan Turbo S</li>
- <li>Ducati SuperSport S, Monster 1200, XDiavel S, 1299 Panigale R Final Edition</li>
- </ul>
- <p>You can customize and upgrade your vehicles with various decals, colors, rims, and performance parts. You can also tune them to suit your driving style and preferences. You can unlock new vehicles by completing events, collections, or spending credits and tokens.</p>
- <h3>Stunning graphics and physics-based gameplay</h3>
- <p>Asphalt 8 is not just a racing game, it is also a visual spectacle. The game features stunning graphics and animations that make you feel like you are in a real race. The game uses a physics-based engine that simulates realistic car behavior and dynamics. You can see the details of the cars, the environments, the weather effects, and the damage effects.</p>
- <p>Asphalt 8 is also known for its high-speed aerial stunts and drifts. You can perform amazing jumps and flips by using ramps, barrels, bridges, and other obstacles. You can also drift on the asphalt to gain more speed and nitro. You can use the nitro to boost your speed and perform even more spectacular stunts. You can also activate the adrenaline mode to go faster than ever.</p>
- <h3>Endless stream of content and modes</h3>
- <p>Asphalt 8 never gets boring because it offers an endless stream of content and modes to keep you entertained. You can race across more than 75 tracks in different locations around the world, from the Nevada Desert to Tokyo streets. Each track has its own challenges, shortcuts, and secrets to discover.</p>
- <p>You can also compete with other players in real-time multiplayer mode or challenge yourself in various single-player modes. Some of the modes available in the game are:</p>
- <ul>
- <li>Career Mode: Complete over 1,000 events and become the best racer in the world.</li>
- <li>Mastery Mode: Master each vehicle and earn exclusive rewards.</li>
- <li>World Series: Race against up to seven other players online and climb the leaderboard.</li>
- <li>Events Mode: Participate in limited-time events and win special prizes.</li>
- <li>R&D Mode: Test-drive new vehicles and unlock them before anyone else.</li>
- <li>Enduro Double Down Mode: Survive as long as you can in a series of races with increasing difficulty.</li>
- <li>Tag Racing Mode: Switch between two vehicles during a race and use their strengths to your advantage.</li>
- <li>Infection Mode: Infect other racers with unlimited nitro or avoid being infected yourself.</li>
- <li>Knockdown Mode: Knock down as many opponents as you can or avoid being knocked down yourself.</li>
- <li>Gate Drift Mode: Drift through as many gates as you can to earn points.</li>
- </ul> <h2>Tips and tricks for Asphalt 8</h2>
- <p>If you want to improve your racing skills and enjoy Asphalt 8 more, you should follow these tips and tricks:</p>
- <h3>How to master the controls and settings</h3>
- <p>Asphalt 8 offers different control options for you to choose from. You can use tilt, touch, or tap to steer your vehicle. You can also customize the sensitivity, position, and size of the controls. You can also enable or disable auto-acceleration, auto-brake, and manual nitro.</p>
- <p>You should experiment with different control options and settings until you find the one that suits you best. You should also practice using nitro, boosters, and other power-ups effectively. Nitro can help you speed up, overtake, or escape from opponents. Boosters can give you extra advantages such as double credits, extra nitro, or tuning kits. Other power-ups such as shockwaves, magnets, or shields can help you deal with obstacles and enemies.</p>
- <h3>How to earn credits and tokens</h3>
- <p>Credits and tokens are the main currencies in Asphalt 8. You need them to buy new vehicles, upgrade them, or access special features. You can earn credits and tokens by completing races, events, collections, achievements, or watching ads. You can also buy them with real money if you want.</p>
- <p>You should spend your credits and tokens wisely on upgrades, decals, and special items. Upgrades can improve your vehicle's performance and stats. Decals can change your vehicle's appearance and give you extra bonuses. Special items such as pro kits or blueprints can unlock new vehicles or enhance them.</p>
- <h3>How to improve your racing skills and strategies</h3>
- <p>To become a better racer in Asphalt 8, you should learn how to choose the best car and bike for each track and mode. Different vehicles have different strengths and weaknesses in terms of speed, acceleration, handling, nitro efficiency, etc. You should also consider the terrain, weather, and layout of the track when choosing your vehicle.</p>
- <p>You should also learn how to avoid crashes, obstacles, and opponents' attacks. Crashes can slow you down or damage your vehicle. Obstacles such as traffic, barrels, rocks, etc. can block your way or make you lose control. Opponents' attacks such as missiles, EMPs, bumpers, etc. can hinder your progress or knock you down. You should use your skills and power-ups to dodge or counter these threats.</p>
- <h2>Conclusion</h2>
- <p>Asphalt 8 is a fantastic racing game that will keep you hooked for hours. It offers a wide range of vehicles, tracks, modes, and features that will satisfy any racing fan. It also has stunning graphics and physics-based gameplay that will make you feel like you are in a real race.</p>
- <p>If you are ready to experience the thrill of driving at real speed and performing amazing stunts and drifts on the asphalt, you should download Asphalt 8 today. You can find it on your device's app store or visit [1](https://gameloft.com/game/asphalt-8) for other platforms. You can also join the community of millions of players online and share your racing stories and tips.</p>
- <p>Don't wait any longer. Download Asphalt 8 now and start your racing adventure!</p>
- <h2>FAQs</h2>
- <ul>
- <li>Q: How do I unlock new vehicles in Asphalt 8?</li>
- <li>A: You can unlock new vehicles by completing events, collections, R&D mode, or spending credits and tokens. You can also get blueprints or pro kits to unlock or enhance certain vehicles.</li>
- <li>Q: How do I get more nitro in Asphalt 8?</li>
- <li>A: You can get more nitro by performing stunts such as jumps, flips, drifts, barrel rolls, etc. You can also collect nitro bottles on the track or use boosters or power-ups to increase your nitro capacity or efficiency.</li>
- <li>Q: How do I play with my friends in Asphalt 8?</li>
- <li>A: You can play with your friends in Asphalt 8 by inviting them to join a race in multiplayer mode or creating a private room with a custom code. You can also join a club or a team with your friends and compete with other clubs or teams.</li>
- <li>Q: How do I change the camera view in Asphalt 8?</li>
- <li>A: You can change the camera view in Asphalt 8 by tapping on the camera icon on the top right corner of the screen. You can choose from four different views: first-person (inside the car), third-person (behind the car), hood (on top of the car), or far (above the car).</li>
- <li>Q: How do I update Asphalt 8?</li> <li>A: You can update Asphalt 8 by going to your device's app store or visiting [1](https://gameloft.com/game/asphalt-8) for other platforms. You can also enable auto-update in your device's settings to get the latest version automatically.</li>
- </ul></p> 197e85843d<br />
- <br />
- <br />

spaces/1phancelerku/anime-remove-background/Aprenda a fazer pizzas incrveis com o Good Pizza Great Pizza Mod Apk Dinheiro Infinito.md DELETED
@@ -1,112 +0,0 @@
- <br />
- <h1>Good Pizza Great Pizza Mod APK Dinheiro Infinito: How to Download and Play</h1>
- <p>Do you love pizza? Do you dream of running your own pizza shop? Do you want to have unlimited money to buy all the toppings, upgrades, and decorations you want? If you answered yes to any of these questions, then you should try <strong>Good Pizza Great Pizza Mod APK Dinheiro Infinito</strong>, a modified version of the popular pizza-making game that gives you infinite money. In this article, we will tell you what this game is, how to download and install it, and how to play it.</p>
- <h2>good pizza great pizza mod apk dinheiro infinito</h2><br /><p><b><b>DOWNLOAD</b> >>> <a href="https://jinyurl.com/2uNLou">https://jinyurl.com/2uNLou</a></b></p><br /><br />
- <h2>What is Good Pizza Great Pizza?</h2>
- <p>Good Pizza Great Pizza is a fun and addictive game that lets you experience the joy and challenge of running your own pizza shop. You have to make pizzas for your customers, who have different preferences and requests. You have to use the right ingredients, cut the pizza correctly, and bake it for the right time. You also have to manage your money, buy new toppings and equipment, and compete with your rival pizza shop across the street.</p>
- <h3>A fun and addictive pizza-making game</h3>
- <p>The game has a simple but engaging gameplay that will keep you hooked for hours. You can use your finger to swipe, tap, drag, and drop the ingredients on the pizza dough. You can also use a knife to cut the pizza into slices, and a timer to control the baking. The game has over 100 different ingredients, including cheese, pepperoni, mushrooms, pineapple, anchovies, olives, and more. You can also unlock special toppings like bacon, ham, chicken, shrimp, and even chocolate.</p>
- <h3>A realistic and challenging simulation of running a pizza shop</h3>
- <p>The game is not just about making pizzas. It is also about running a business. You have to balance your income and expenses, pay rent and bills, buy new equipment and upgrades, and deal with unexpected events like power outages, robberies, or inspections. You also have to keep track of your inventory, order new supplies when needed, and avoid wasting food. The game has a realistic economy system that changes according to the day of the week, the weather, and the season.</p>
- <p>good pizza great pizza hack apk unlimited money<br />
- good pizza great pizza mod apk download latest version<br />
- good pizza great pizza cheat apk free download<br />
- good pizza great pizza mod apk android 1<br />
- good pizza great pizza unlimited money apk revdl<br />
- good pizza great pizza mod apk happymod<br />
- good pizza great pizza hack apk 2023<br />
- good pizza great pizza mod apk dinheiro infinito 2023<br />
- good pizza great pizza mod apk rexdl<br />
- good pizza great pizza mod apk unlimited toppings<br />
- good pizza great pizza hack apk no root<br />
- good pizza great pizza mod apk offline<br />
- good pizza great pizza cheat apk 2023<br />
- good pizza great pizza mod apk dinheiro infinito atualizado<br />
- good pizza great pizza mod apk unlimited everything<br />
- good pizza great pizza hack apk latest version<br />
- good pizza great pizza mod apk online<br />
- good pizza great pizza cheat apk unlimited money<br />
- good pizza great pizza mod apk dinheiro infinito download<br />
- good pizza great pizza mod apk all unlocked<br />
- good pizza great pizza hack apk android 1<br />
- good pizza great pizza mod apk new version<br />
- good pizza great pizza cheat apk download 2023<br />
- good pizza great pizza mod apk dinheiro infinito e diamantes<br />
- good pizza great pizza mod apk unlimited coins and gems<br />
- good pizza great pizza hack apk free download<br />
- good pizza great pizza mod apk old version<br />
- good pizza great pizza cheat apk android 1<br />
- good pizza great pizza mod apk dinheiro infinito e tudo desbloqueado<br />
- good pizza great pizza mod apk unlimited ingredients<br />
- good pizza great pizza hack apk online<br />
- good pizza great pizza mod apk premium<br />
- good pizza great pizza cheat apk latest version<br />
- good pizz</p>
- <h3>A colorful and quirky cast of customers and rivals</h3>
- <p>The game has over 80 different customers, each with their own personality, preferences, and dialogue. Some customers are easy to please, while others are very picky or weird. Some customers will tip you well, while others will try to scam you or complain. You have to listen carefully to their orders, read their expressions, and make them happy. You also have to deal with your rival pizza shop owner, who will try to sabotage you or steal your customers.</p>
- <h2>What is Good Pizza Great Pizza Mod APK Dinheiro Infinito?</h2>
- <p>Good Pizza Great Pizza Mod APK Dinheiro Infinito is a modified version of the original game that gives you unlimited money. This means that you can buy all the toppings, upgrades, and decorations you want without worrying about your budget. You can also skip the ads that sometimes interrupt the game. You can also cheat the game by making any pizza you want, regardless of the customer's order. You can have fun experimenting with different combinations and creations.</p>
- <h3>A modified version of the game that gives you unlimited money</h3>
- <p>With Good Pizza Great Pizza Mod APK Dinheiro Infinito, you will never run out of money. You can start the game with a huge amount of cash, and you will earn more every time you sell a pizza. You can spend your money on anything you want, without worrying about your budget. You can buy all the toppings, even the most expensive ones, and use them as much as you want. You can also buy all the upgrades, such as a bigger oven, a faster cutter, a better mixer, and more. You can also buy all the decorations, such as posters, plants, lights, and furniture, to make your shop look more attractive and cozy.</p>
- <h3>A way to unlock all the toppings, upgrades, and decorations</h3>
- <p>With Good Pizza Great Pizza Mod APK Dinheiro Infinito, you will not have to wait or work hard to unlock new items. You can access all the toppings, upgrades, and decorations from the start of the game. You can choose from over 100 different ingredients, including cheese, pepperoni, mushrooms, pineapple, anchovies, olives, and more. You can also unlock special toppings like bacon, ham, chicken, shrimp, and even chocolate. You can also get all the upgrades, such as a bigger oven, a faster cutter, a better mixer, and more. You can also get all the decorations, such as posters, plants, lights, and furniture, to make your shop look more attractive and cozy.</p>
- <h3>A cheat to make the game easier and more enjoyable</h3>
- <p>With Good Pizza Great Pizza Mod APK Dinheiro Infinito, you will not have to worry about satisfying your customers or beating your rivals. You can make any pizza you want, regardless of the customer's order. You can use any ingredients you want, in any quantity you want. You can also cut the pizza in any way you want, and bake it for as long as you want. You can also ignore the customer's feedback or complaints. You can also ignore your rival's challenges or taunts. You can just have fun making pizzas and enjoying your infinite money.</p>
- <h2>How to Download and Install Good Pizza Great Pizza Mod APK Dinheiro Infinito?</h2>
- <p>If you want to try Good Pizza Great Pizza Mod APK Dinheiro Infinito, you will need to download and install it on your device. Here are the steps you need to follow:</p>
- <h3>Step 1: Find a reliable source for the mod apk file</h3>
- <p>The first thing you need to do is to find a trustworthy website that offers the mod apk file for Good Pizza Great Pizza. There are many websites that claim to provide this file, but some of them may be fake or malicious. You need to be careful and avoid downloading anything that may harm your device or steal your data. To find a reliable source for the mod apk file, you can do some research online or ask for recommendations from other users who have tried it before.</p>
- <h3>Step 2: Enable unknown sources on your device settings</h3>
- <p>The next thing you need to do is to enable unknown sources on your device settings. This will allow you to install apps that are not from the official Google Play Store. To do this, you need to go to your device settings > security > unknown sources > enable. This may vary depending on your device model and Android version.</p>
- <h3>Step 3: Download and install the mod apk file</h3>
- <p>The third thing you need to do is to download and install the mod apk file on your device. To do this, you need to go to the website where you found the mod apk file and click on the download button. This may take some time depending on your internet speed and file size. Once the download is complete, you need to open the file manager app on your device and locate the mod apk file. Then you need to tap on it and follow the instructions on the screen to install it.</p>
- <h3>Step 4: Launch the game and enjoy your infinite money</h3>
- <p>The last thing you need to do is to launch the game and enjoy your infinite money. To do this, you need to find the game icon on your device home screen or app drawer and tap on it. The game will start and you will see that you have unlimited money to spend on anything you want. You can also see that all the toppings, upgrades, and decorations are unlocked and available for you to use. You can also see that you can make any pizza you want, regardless of the customer's order. You can have fun making pizzas and enjoying your infinite money.</p>
- <h2>How to Play Good Pizza Great Pizza Mod APK Dinheiro Infinito?</h2>
- <p>Now that you have downloaded and installed Good Pizza Great Pizza Mod APK Dinheiro Infinito, you may wonder how to play it. Here are some tips and tricks for making the best pizzas, satisfying your customers, and beating your rivals.</p>
- <h3>Tips and tricks for making the best pizzas</h3>
- <p>Even though you have unlimited money and toppings, you still want to make the best pizzas possible. Here are some tips and tricks for making the best pizzas:</p>
- <ul>
- <li>Use the right amount of sauce, cheese, and toppings. Too much or too little can ruin the taste and appearance of your pizza.</li>
- <li>Cut the pizza into even slices. Use the knife to cut the pizza into the number of slices that the customer requested. Try to make the slices as equal as possible.</li>
- <li>Bake the pizza for the right time. Use the timer to control the baking time of your pizza. Don't overcook or undercook your pizza, as this can affect the quality and flavor of your pizza.</li>
- <li>Experiment with different combinations and creations. You can use any ingredients you want, in any quantity you want. You can also cut the pizza in any way you want, and bake it for as long as you want. You can have fun making pizzas that look and taste amazing.</li>
- </ul>
- <h3>How to satisfy your customers and beat your rivals</h3>
- <p>Even though you can cheat the game by making any pizza you want, you may still want to satisfy your customers and beat your rivals. Here are some tips and tricks for satisfying your customers and beating your rivals:</p>
- <ul>
- <li>Listen carefully to your customers' orders. Your customers will tell you what kind of pizza they want, how many slices they want, and how they want it cut. You can also see their expressions and dialogue to get clues about their preferences and requests.</li>
- <li>Make the pizza according to their order. You can choose to make the pizza exactly as they ordered, or you can add some extra toppings or variations to surprise them. You can also ignore their order completely and make whatever pizza you want, but this may make them unhappy or angry.</li>
- <li>Deliver the pizza quickly and politely. You can use the speed button to make the pizza faster, or you can take your time and enjoy the process. You can also use the smile button to greet your customers warmly, or you can use the frown button to show your displeasure or sarcasm.</li>
- <li>Earn tips and ratings from your customers. Your customers will give you tips and ratings based on how well you made their pizza. The more they like your pizza, the more they will tip you and rate you highly. The less they like your pizza, the less they will tip you and rate you lowly.</li>
- <li>Compete with your rival pizza shop owner. Your rival pizza shop owner will try to sabotage you or steal your customers by offering better prices, faster service, or better quality. You have to prove that your pizza is better than his by making more money, getting more customers, and getting higher ratings.</li>
- </ul>
- <h3>How to customize your shop and attract more business</h3>
- <p>Even though you have unlimited money and decorations, you may still want to customize your shop and attract more business. Here are some tips and tricks for customizing your shop and attracting more business:</p>
- <ul>
- <li>Buy new equipment and upgrades. You can buy new equipment and upgrades to improve your pizza-making skills and efficiency. You can buy a bigger oven, a faster cutter, a better mixer, and more.</li>
- <li>Buy new decorations. You can buy new decorations to make your shop look more attractive and cozy. You can buy posters, plants, lights, furniture, and more.</li>
- <li>Change the theme of your shop. You can change the theme of your shop to suit different occasions or seasons. You can choose from themes like Halloween, Christmas, Valentine's Day, Summer, Winter, and more.</li>
- <li>Attract more customers with special offers or events. You can attract more customers with special offers or events that will make them want to try your pizza. You can offer discounts, free toppings, coupons, loyalty cards, contests, giveaways, and more.</li>
- </ul>
- <h2>Conclusion</h2>
- <p>In conclusion, Good Pizza Great Pizza Mod APK Dinheiro Infinito is a modified version of the popular pizza-making game that gives you infinite money. You can download and install it on your device, and enjoy making pizzas with unlimited money. You can also unlock all the toppings, upgrades, and decorations, and cheat the game by making any pizza you want. You can also customize your shop and attract more customers with special offers or events. Good Pizza Great Pizza Mod APK Dinheiro Infinito is a fun and easy way to play the game, but it may also take away some of the challenge and excitement of the original game. If you want to experience the real joy and challenge of running a pizza shop, you may want to try the original game instead.</p>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions about Good Pizza Great Pizza Mod APK Dinheiro Infinito:</p>
- <h3>Q: Is Good Pizza Great Pizza Mod APK Dinheiro Infinito safe to download and install?</h3>
- <p>A: Good Pizza Great Pizza Mod APK Dinheiro Infinito is not an official version of the game, and it may contain viruses or malware that can harm your device or steal your data. You should only download and install it from a reliable source, and at your own risk. You should also scan the file with an antivirus software before installing it.</p>
- <h3>Q: Is Good Pizza Great Pizza Mod APK Dinheiro Infinito compatible with my device?</h3>
- <p>A: Good Pizza Great Pizza Mod APK Dinheiro Infinito is compatible with most Android devices that have Android 4.1 or higher. However, some devices may not support the mod apk file, or may experience glitches or crashes while playing the game. You should check the compatibility of your device before downloading and installing the mod apk file.</p>
- <h3>Q: How can I update Good Pizza Great Pizza Mod APK Dinheiro Infinito?</h3>
- <p>A: Good Pizza Great Pizza Mod APK Dinheiro Infinito is not connected to the official Google Play Store, and it may not receive regular updates from the developers. You may have to manually check for updates from the website where you downloaded the mod apk file, or look for a newer version of the mod apk file online.</p>
- <h3>Q: How can I uninstall Good Pizza Great Pizza Mod APK Dinheiro Infinito?</h3>
- <p>A: If you want to uninstall Good Pizza Great Pizza Mod APK Dinheiro Infinito, you can do so by following these steps:</p>
- <ol>
- <li>Go to your device settings > apps > Good Pizza Great Pizza > uninstall.</li>
- <li>Delete the mod apk file from your device storage.</li>
- <li>Clear your device cache and data.</li>
- </ol>
- <h3>Q: Where can I find more information about Good Pizza Great Pizza Mod APK Dinheiro Infinito?</h3>
- <p>A: If you want to find more information about Good Pizza Great Pizza Mod APK Dinheiro Infinito, you can visit the website where you downloaded the mod apk file, or search online for reviews, videos, or forums about the game.</p> 401be4b1e0<br />
- <br />
- <br />

spaces/1phancelerku/anime-remove-background/Batyr Muhammedow - Lebabyma - The Song that Rocked the Armenian Music Scene - Mp3 Download.md DELETED
@@ -1,142 +0,0 @@
-
- <h1>Lebabyma Mp3 Skachat: How to Download and Enjoy Uzbek Music Online</h1>
- <p>If you are looking for a catchy and upbeat song that will make you want to dance, you might want to check out Lebabyma, a popular Uzbek song by Batyr Muhammedow. But how can you download Lebabyma Mp3 and enjoy it on your device? And what are some other ways to explore and appreciate Uzbek music online? In this article, we will answer these questions and more. Read on to find out how to download and enjoy Uzbek music online.</p>
- <h2>What is Lebabyma?</h2>
- <p>Lebabyma is a song by Batyr Muhammedow, a famous Uzbek singer and composer. The song was released in 2021 and quickly became a hit among Uzbek music fans. But what does Lebabyma mean and where did it come from?</p>
- <h2>lebabyma mp3 skachat</h2><br /><p><b><b>DOWNLOAD</b> ->>> <a href="https://jinyurl.com/2uNMPL">https://jinyurl.com/2uNMPL</a></b></p><br /><br />
- <h3>The meaning and origin of the word</h3>
- <p>Lebabyma is a word that combines two Uzbek words: leba (bread) and baby (baby). It is a term of endearment that means "my bread" or "my sweetie". It is similar to how English speakers might call their loved ones "honey" or "sugar". The word lebabyma was coined by Batyr Muhammedow himself, who said he wanted to create a unique and catchy word for his song.</p>
- <h3>The popularity and style of the song</h3>
- <p>Lebabyma is a song that blends traditional Uzbek elements with modern pop influences. It features a catchy chorus, upbeat tempo, and lively instrumentation. The song has a positive and romantic message, as the singer expresses his love and admiration for his partner. The song has been praised for its originality, creativity, and energy. It has also been widely shared on social media platforms such as TikTok, Instagram, and YouTube.</p>
- <h3>The artist and his background</h3>
- <p>Batyr Muhammedow is a well-known Uzbek singer, composer, and producer. He was born in 1988 in Turkmenistan, but moved to Uzbekistan when he was young. He started his musical career in 2009, when he participated in the TV show "Star Academy". Since then, he has released several albums and singles, such as "Seni Sevaman", "Yana Yana", and "Lebabyma". He is known for his versatile and innovative style, as he experiments with different genres, languages, and cultures. He is also an active supporter of social causes, such as environmental protection, animal rights, and education.</p>
- <h2>How to Download Lebabyma Mp3?</h2>
- <p>If you want to download Lebabyma Mp3 and listen to it offline, you might face some challenges. First of all, you need to consider the legal and ethical issues of downloading music. Second, you need to find reliable sources and platforms for downloading Lebabyma Mp3. Third, you need to follow the steps and tips for downloading Lebabyma Mp3. Let's look at each of these aspects in more detail.</p> <h3>The legal and ethical issues of downloading music</h3>
- <p>Before you download Lebabyma Mp3, you need to be aware of the legal and ethical issues of downloading music. Downloading music without the permission of the artist or the owner of the rights is considered illegal in many countries. It is also unethical, as it deprives the artist of their income and recognition. Therefore, you should always respect the intellectual property rights of the music creators and pay for their work. You can do this by buying their CDs, downloading their songs from authorized platforms, or streaming their music from licensed services.</p>
- <h3>The best sources and platforms for downloading Lebabyma Mp3</h3>
- <p>If you want to download Lebabyma Mp3 legally and ethically, you need to find the best sources and platforms for doing so. There are many websites and apps that offer free or cheap downloads of Lebabyma Mp3, but not all of them are trustworthy or safe. Some of them might contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some of them might also provide low-quality or incomplete files that can ruin your listening experience. Therefore, you should always use reputable and reliable sources and platforms for downloading Lebabyma Mp3. Some of the best ones are:</p>
- <ul>
- <li><a href="">iTunes</a>: This is one of the most popular and widely used platforms for downloading music. You can buy Lebabyma Mp3 for $0.99 and enjoy it on your iPhone, iPad, iPod, Mac, or PC.</li>
- <li><a href="">Amazon Music</a>: This is another well-known and trusted platform for downloading music. You can buy Lebabyma Mp3 for $0.99 and enjoy it on your Android, iOS, Windows, Mac, or web devices.</li>
- <li><a href="">Google Play Music</a>: This is a platform that allows you to download music from Google's online store. You can buy Lebabyma Mp3 for $0.99 and enjoy it on your Android, iOS, Windows, Mac, or web devices.</li>
- <li><a href="">Spotify</a>: This is a platform that allows you to stream music online or download it offline. You can listen to Lebabyma Mp3 for free with ads or pay $9.99 per month for a premium subscription that gives you ad-free and offline access to millions of songs.</li>
- <li><a href="">YouTube Music</a>: This is a platform that allows you to stream music online or download it offline. You can listen to Lebabyma Mp3 for free with ads or pay $9.99 per month for a premium subscription that gives you ad-free and offline access to millions of songs.</li>
- </ul>
- <h3>The steps and tips for downloading Lebabyma Mp3</h3>
- <p>Once you have chosen the source and platform for downloading Lebabyma Mp3, you need to follow the steps and tips for doing so. Here are some general guidelines that apply to most sources and platforms:</p>
- <p>lebabyma mp3 download free<br />
- lebabyma mp3 song by batyr muhammedow<br />
- lebabyma mp3 online listen<br />
- lebabyma mp3 320 kbps<br />
- lebabyma mp3 lyrics<br />
- lebabyma mp3 spotify<br />
- lebabyma mp3 youtube converter<br />
- lebabyma mp3 ringtone<br />
- lebabyma mp3 remix<br />
- lebabyma mp3 instrumental<br />
- lebabyma mp3 karaoke<br />
- lebabyma mp3 music video<br />
- lebabyma mp3 album<br />
- lebabyma mp3 single<br />
- lebabyma mp3 release date<br />
- lebabyma mp3 genre<br />
- lebabyma mp3 artist<br />
- lebabyma mp3 cover art<br />
- lebabyma mp3 playlist<br />
- lebabyma mp3 radio<br />
- lebabyma mp3 streaming service<br />
- lebabyma mp3 soundcloud<br />
- lebabyma mp3 apple music<br />
- lebabyma mp3 amazon music<br />
- lebabyma mp3 deezer<br />
- lebabyma mp3 tidal<br />
- lebabyma mp3 pandora<br />
- lebabyma mp3 iheartradio<br />
- lebabyma mp3 napster<br />
- lebabyma mp3 audiomack<br />
- lebabyma mp3 bandcamp<br />
- lebabyma mp3 reverbnation<br />
- lebabyma mp3 datpiff<br />
- lebabyma mp3 mixcloud<br />
- lebabyma mp3 soundclick<br />
- lebabyma mp3 last.fm<br />
- lebabyma mp3 shazam<br />
- lebabyma mp3 musixmatch<br />
- lebabyma mp3 genius lyrics<br />
- lebabyma mp3 azlyrics<br />
- lebabyma mp3 metrolyrics<br />
- lebabyma mp3 lyricstranslate.com <br />
- lebabyma mp3 songmeanings.com <br />
- lebabyma mp3 songfacts.com <br />
- lebabyma mp3 whosampled.com <br />
- lebabyma mp3 discogs.com <br />
- lebabyma mp3 allmusic.com <br />
- lebabyma mp3 rateyourmusic.com</p>
- <ol>
- <li>Make sure you have a stable internet connection and enough storage space on your device.</li>
- <li>Search for Lebabyma Mp3 on the source or platform of your choice.</li>
- <li>Select the song and click on the download or buy button.</li>
- <li>Enter your payment details if required and confirm your purchase.</li>
- <li>Wait for the download to complete and check if the file is successfully saved on your device.</li>
- <li>Enjoy listening to Lebabyma Mp3 on your device or transfer it to another device if you want.</li>
- </ol>
- <p>Some tips to enhance your downloading experience are:</p>
- <ul>
- <li>Compare the prices and quality of different sources and platforms before buying Lebabyma Mp3.</li>
- <li>Check the reviews and ratings of other users before downloading Lebabyma Mp3 from a source or platform.</li>
- <li>Use antivirus software and firewall to protect your device from potential threats when downloading Lebabyma Mp3.</li>
- <li>Backup your downloaded files on a cloud service or an external drive in case you lose them or delete them by mistake.</li>
- <li>Respect the rights of the artist and do not share or distribute Lebabyma Mp3 without their permission.</li>
- </ul>
- <h2>How to Enjoy Uzbek Music Online?</h2>
- <p>Downloading Lebabyma Mp3 is not the only way to enjoy Uzbek music online. There are many other ways to explore and appreciate Uzbek music online. You can learn about the benefits and challenges of listening to Uzbek music online, discover the genres and artists of Uzbek music, and find playlists and recommendations for Uzbek music lovers.</p>
- <h3>The benefits and challenges of listening to Uzbek music online</h3>
- <p>Listening to Uzbek music online can be a rewarding and enjoyable experience. You can benefit from listening to Uzbek music online in many ways, such as:</p>
- <ul>
- <li>You can learn about a different culture and language through music.</li>
- <li>You can discover new sounds and styles that can enrich your musical taste.</li>
- <li>You can connect with other Uzbek music fans and share your opinions and emotions.</li>
- <li>You can support Uzbek music artists and help them grow their audience and influence.</li>
- </ul>
- <p>However, listening to Uzbek music online can also pose some challenges, such as:</p>
- <ul>
- <li>You might face some difficulties in finding and accessing Uzbek music online, as it might not be available or popular on some platforms or regions.</li>
- <li>You might encounter some barriers in understanding and appreciating Uzbek music online, as it might have different meanings, contexts, and references that you are not familiar with.</li>
- <li>You might face some prejudices or stereotypes about Uzbek music online, as it might be misunderstood or misrepresented by some people or media.</li>
- <li>You might have some ethical dilemmas about listening to Uzbek music online, as it might involve some legal or moral issues that you need to consider.</li>
- </ul>
- <h3>The genres and artists of Uzbek music</h3>
- <p>Uzbek music is a diverse and rich musical tradition that reflects the history, culture, and identity of the Uzbek people. Uzbek music has various genres and styles that cater to different tastes and preferences. Some of the most popular and influential genres and artists of Uzbek music are:</p>
- <table>
- <tr><th>Genre</th><th>Description</th><th>Examples of Artists</th></tr>
- <tr><td>Maqom</td><td>A classical genre of Uzbek music that consists of complex melodic and rhythmic patterns. It is usually performed by a solo singer accompanied by traditional instruments such as the tanbur, the doira, and the nay.</td><td>Munojot Yo'lchiyeva, Shavkat Mirziyoyev, Abdurashid Khamidov</td></tr>
- <tr><td>Estrada</td><td>A modern genre of Uzbek pop music that incorporates elements of folk, jazz, rock, and disco. It is usually performed by a singer or a band with electronic instruments such as the keyboard, the guitar, and the drum machine.</td><td>Yulduz Usmonova, Sevara Nazarkhan, Rayhon Ganiyeva</td></tr>
- <tr><td>Rap</td><td>A contemporary genre of Uzbek hip hop music that involves spoken word delivery over rhythmic beats. It is usually performed by a rapper or a group of rappers with a DJ or a producer. It often addresses social and political issues in Uzbek society.</td><td>Ozodbek Nazarbekov, Shoxrux Mirzo, Ziyoda Qobilova</td></tr>
- <tr><td>Folk</td><td>A traditional genre of Uzbek music that reflects the regional and ethnic diversity of the country. It is usually performed by a solo singer or a group of singers with acoustic instruments such as the dutar, the surnay, and the chang.</td><td>Feruza Jumaniyozova, Matluba Ahmadova, Davron Ergashev</td></tr>
- <tr><td>Rock</td><td>A progressive genre of Uzbek music that combines elements of western rock with local influences. It is usually performed by a band with electric instruments such as the guitar, the bass, and the drums. It often experiments with different sounds and styles.</td><td>Yalla, Bolalar, Qishloq Ovozi</td></tr>
- </table>
- <h3>The playlists and recommendations for Uzbek music lovers</h3>
- <p>If you want to listen to more Uzbek music online, you might want to check out some playlists and recommendations for Uzbek music lovers. Here are some suggestions that can help you find and enjoy more Uzbek music online:</p>
- <ul>
- <li>Listen to <a href="">Uzbek Top 50</a>, a playlist that features the most popular and trending songs in Uzbekistan. You can find it on Spotify, Apple Music, YouTube Music, or Deezer.</li>
- <li>Listen to <a href="">Uzbek Classics</a>, a playlist that features the best and most influential songs in Uzbek history. You can find it on Spotify, Apple Music, YouTube Music, or Deezer.</li>
- <li>Listen to <a href="">Uzbek Discovery</a>, a playlist that features new and emerging artists in Uzbek music. You can find it on Spotify, Apple Music, YouTube Music, or Deezer.</li>
- <li>Listen to <a href="">Uzbek Radio</a>, an online radio station that broadcasts live Uzbek music from various genres and styles. You can find it on <a href="">UzbekRadio.net</a>.</li>
- <li>Listen to <a href="">Uzbek Podcasts</a>, a collection of podcasts that cover topics related to Uzbek music, culture, and society. You can find it on Spotify, Apple Podcasts, Google Podcasts, or Stitcher.</li>
- <li>Listen to <a href="">Uzbek Music Blogs</a>, a list of blogs that review, analyze, and recommend Uzbek music. You can find it on <a href="">Feedspot</a>.</li>
- </ul>
- <h2>Conclusion</h2>
- <p>Lebabyma Mp3 Skachat is a great way to enjoy one of the most popular and catchy songs in Uzbek music. However, it is not the only way to explore and appreciate Uzbek music online. You can also learn about the meaning and origin of Lebabyma, the artist and his background, the legal and ethical issues of downloading music, the best sources and platforms for downloading Lebabyma Mp3, the steps and tips for downloading Lebabyma Mp3, the benefits and challenges of listening to Uzbek music online, the genres and artists of Uzbek music, and the playlists and recommendations for Uzbek music lovers. By doing so, you can enrich your musical knowledge and experience, as well as support Uzbek music artists and culture. So what are you waiting for? Download Lebabyma Mp3 today and enjoy Uzbek music online!</p>
- <h2>FAQs</h2>
- <h4>What is the genre of Lebabyma?</h4>
- <p>Lebabyma is a genre of Uzbek pop music that blends traditional Uzbek elements with modern pop influences.</p>
- <h4>Who is Batyr Muhammedow?</h4>
- <p>Batyr Muhammedow is a famous Uzbek singer, composer, and producer. He is the creator of Lebabyma and other popular songs.</p>
- <h4>What are some other popular Uzbek songs?</h4>
- <p>Some other popular Uzbek songs are Seni Sevaman by Batyr Muhammedow, Yor-Yor by Yulduz Usmonova, Qalbim by Sevara Nazarkhan, and O'zbekiston by Ozodbek Nazarbekov.</p>
- <h4>How can I support Uzbek music artists?</h4>
- <p>You can support Uzbek music artists by buying their CDs, downloading their songs from authorized platforms, streaming their music from licensed services, sharing their music on social media, attending their concerts, and donating to their causes.</p>
- <h4>Where can I learn more about Uzbek culture and language?</h4>
- <p>You can learn more about Uzbek culture and language by visiting <a href="">Uzbekistan.travel</a>, a website that provides information and resources about Uzbekistan's history, geography, cuisine, art, literature, and more. You can also take online courses or watch videos on <a href="">UzbekClass.com</a>, a website that offers lessons and materials for learning Uzbek language.</p> 401be4b1e0<br />
 
spaces/1phancelerku/anime-remove-background/Download Green Button Mod APK and Challenge Your Friends to Press the Button.md DELETED
@@ -1,102 +0,0 @@
1
- <br />
2
- <h1>Download Green Button Mod APK: A Fun and Addictive Money Clicker Game</h1>
3
- <p>Do you love clicking games that let you earn virtual money by simply tapping on a button? If yes, then you should try Green Button Mod APK, a fun and addictive money clicker game that will keep you entertained for hours. In this game, you can tap on a green button to earn money, upgrade your buttons and boosters, customize your buttons with different colors and shapes, and compete with other players on the leaderboards and achievements. You can also enjoy the game without any ads or limitations with the modded version of the game.</p>
4
- <h2>What is Green Button Mod APK?</h2>
5
- <p>Green Button Mod APK is a modified version of the original Green Button: Press the Button game, which is a simulation game developed by Apkloli. The game is available for Android and iOS devices. The game is simple but addictive: you just have to tap on a green button to earn money. The more you tap, the more money you make. You can use the money to upgrade your buttons and boosters, which will increase your earnings per tap. You can also customize your buttons with different colors and shapes, such as red, blue, yellow, square, circle, star, etc. You can also unlock new buttons with special effects and bonuses.</p>
6
- <p><b>Download Zip</b> &#9734;&#9734;&#9734;&#9734;&#9734; <a href="https://jinyurl.com/2uNStN">https://jinyurl.com/2uNStN</a></p>
7
- <p>The modded version of the game gives you unlimited money to spend on upgrades and customizations. You can also enjoy the game without any ads or interruptions. You can also access all the features and levels of the game without any restrictions.</p>
8
- <h3>Features of Green Button Mod APK</h3>
9
- <h4>Unlimited money</h4>
10
- <p>With Green Button Mod APK, you can get unlimited money to spend on upgrades and customizations. You don't have to worry about running out of money or waiting for it to accumulate. You can buy any button or booster you want and make your game more fun and exciting.</p>
11
- <h4>No ads</h4>
12
- <p>Another benefit of Green Button Mod APK is that it removes all the ads from the game. You don't have to watch any annoying or intrusive ads that pop up on your screen or interrupt your gameplay. You can enjoy the game without any distractions or delays.</p>
13
- <h4>Customizable buttons</h4>
62
- <p>Green Button Mod APK also allows you to customize your buttons with different colors and shapes. You can choose from a variety of options, such as red, blue, yellow, square, circle, star, etc. You can also unlock new buttons with special effects and bonuses, such as fire, ice, lightning, rainbow, etc. You can make your buttons look more appealing and unique.</p>
63
- <h4>Leaderboards and achievements</h4>
64
- <p>Green Button Mod APK also lets you compete with other players on the leaderboards and achievements. You can see how you rank among other players in terms of money earned, taps made, buttons unlocked, etc. You can also complete various achievements and earn rewards and trophies. You can challenge your friends and other players to see who is the best money clicker.</p>
65
- <h2>How to download and install Green Button Mod APK?</h2>
66
- <h3>Steps to download and install Green Button Mod APK</h3>
67
- <p>If you want to download and install Green Button Mod APK on your Android device, you can follow these simple steps:</p>
68
- <ol>
69
- <li>Go to the download link above to download the Green Button Mod APK file on your device.</li>
70
- <li>Once the download is complete, go to your device settings and enable the installation of apps from unknown sources.</li>
71
- <li>Locate the downloaded file and tap on it to start the installation process.</li>
72
- <li>Follow the instructions on the screen and wait for the installation to finish.</li>
73
- <li>Launch the game and enjoy the unlimited money and no ads features.</li>
74
- </ol>
75
- <h3>Tips and tricks for playing Green Button Mod APK</h3>
76
- <p>If you want to make the most out of Green Button Mod APK, you can follow these tips and tricks:</p>
77
- <h4>Tap faster and smarter</h4>
78
- <p>The basic rule of the game is to tap on the green button as fast as you can to earn money. However, you can also tap smarter by using multiple fingers or tapping on different parts of the button. This will increase your tapping speed and efficiency, and help you earn more money in less time.</p>
79
- <h4>Upgrade your buttons and boosters</h4>
80
- <p>Another way to increase your earnings is to upgrade your buttons and boosters. You can use the money you earn to buy new buttons or improve the existing ones. You can also buy boosters that will multiply your earnings per tap, such as x2, x5, x10, etc. Upgrading your buttons and boosters will also unlock new levels and features in the game.</p>
81
- <h4>Use the offline mode</h4>
82
- <p>Green Button Mod APK also has an offline mode that allows you to earn money even when you are not playing the game. You can activate the offline mode by tapping on the airplane icon on the top right corner of the screen. This will enable a passive income that will accumulate over time. You can collect the money when you return to the game.</p>
83
- <h4>Challenge your friends and other players</h4>
84
- <p>Green Button Mod APK also has a social aspect that lets you challenge your friends and other players on the leaderboards and achievements. You can connect your game account with Facebook or Google Play and see how you rank among other players in terms of money earned, taps made, buttons unlocked, etc. You can also invite your friends to play the game and compare your scores. You can also earn rewards and bonuses for playing with friends.</p>
85
- <h2>Conclusion</h2>
86
- <p>Green Button Mod APK is a fun and addictive money clicker game that will keep you entertained for hours. You can tap on a green button to earn money, upgrade your buttons and boosters, customize your buttons with different colors and shapes, and compete with other players on the leaderboards and achievements. You can also enjoy the game without any ads or limitations with the modded version of the game. If you want to download and install Green Button Mod APK on your Android device, you can follow the steps mentioned above. You can also use the tips and tricks to make the most out of the game.</p>
87
- <h3>FAQs</h3>
88
- <p>Here are some frequently asked questions about Green Button Mod APK:</p>
89
- <ul>
90
- <li><b>Is Green Button Mod APK safe to download and install?</b><br>
91
- Yes, Green Button Mod APK is safe to download and install on your device. The modded version of the game does not contain any viruses or malware that could harm your device or data. However, you should always download the modded version from a trusted source, such as the download link provided above.</li>
92
- <li><b>Do I need to root my device to use Green Button Mod APK?</b><br>
93
- No, you do not need to root your device to use Green Button Mod APK. The modded version of the game works fine on both rooted and non-rooted devices. You just need to enable the installation of apps from unknown sources in your device settings.</li>
94
- <li><b>Can I play Green Button Mod APK online?</b><br>
95
- Yes, you can play Green Button Mod APK online with other players. You can connect your game account with Facebook or Google Play and see how you rank among other players on the leaderboards and achievements. You can also invite your friends to play the game and compare your scores.</li>
96
- <li><b>How can I get more money in Green Button Mod APK?</b><br>
97
- There are several ways to get more money in Green Button Mod APK. You can tap faster and smarter on the green button, upgrade your buttons and boosters, use the offline mode, challenge your friends and other players, etc. You can also use the unlimited money feature of the modded version of the game to buy any button or booster you want.</li>
98
- <li><b>What are some alternatives to Green Button Mod APK?</b><br>
99
- If you like money clicker games, you might also like some alternatives to Green Button Mod APK, such as Make It Rain: The Love of Money, Adventure Capitalist, and Cookie Clicker. These are some of the popular money clicker games that you can play online or on your mobile devices. They have similar gameplay mechanics as Green Button Mod APK, but with different themes and features. You can check them out and see which one suits your taste and preference.</li>
100
- </ul>
 
spaces/1toTree/lora_test/ppdiffusers/configuration_utils.py DELETED
@@ -1,591 +0,0 @@
1
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2
- # Copyright 2022 The HuggingFace Team. All rights reserved.
3
- # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
4
- #
5
- # Licensed under the Apache License, Version 2.0 (the "License");
6
- # you may not use this file except in compliance with the License.
7
- # You may obtain a copy of the License at
8
- #
9
- # http://www.apache.org/licenses/LICENSE-2.0
10
- #
11
- # Unless required by applicable law or agreed to in writing, software
12
- # distributed under the License is distributed on an "AS IS" BASIS,
13
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
- # See the License for the specific language governing permissions and
15
- # limitations under the License.
16
- """ ConfigMixin base class and utilities."""
17
- import functools
18
- import importlib
19
- import inspect
20
- import json
21
- import os
22
- import re
23
- import tempfile
24
- from collections import OrderedDict
25
- from typing import Any, Dict, Optional, Tuple, Union
26
-
27
- import numpy as np
28
- from huggingface_hub import (
29
- create_repo,
30
- get_hf_file_metadata,
31
- hf_hub_download,
32
- hf_hub_url,
33
- repo_type_and_id_from_hf_id,
34
- upload_folder,
35
- )
36
- from huggingface_hub.utils import EntryNotFoundError
37
- from requests import HTTPError
38
-
39
- from .download_utils import ppdiffusers_bos_download
40
- from .utils import (
41
- DOWNLOAD_SERVER,
42
- HF_CACHE,
43
- PPDIFFUSERS_CACHE,
44
- DummyObject,
45
- deprecate,
46
- logging,
47
- )
48
- from .version import VERSION as __version__
49
-
50
- logger = logging.get_logger(__name__)
51
-
52
- _re_configuration_file = re.compile(r"config\.(.*)\.json")
53
-
54
-
55
- class FrozenDict(OrderedDict):
56
- def __init__(self, *args, **kwargs):
57
- super().__init__(*args, **kwargs)
58
-
59
- for key, value in self.items():
60
- setattr(self, key, value)
61
-
62
- self.__frozen = True
63
-
64
- def __delitem__(self, *args, **kwargs):
65
- raise Exception(f"You cannot use ``__delitem__`` on a {self.__class__.__name__} instance.")
66
-
67
- def setdefault(self, *args, **kwargs):
68
- raise Exception(f"You cannot use ``setdefault`` on a {self.__class__.__name__} instance.")
69
-
70
- def pop(self, *args, **kwargs):
71
- raise Exception(f"You cannot use ``pop`` on a {self.__class__.__name__} instance.")
72
-
73
- def update(self, *args, **kwargs):
74
- raise Exception(f"You cannot use ``update`` on a {self.__class__.__name__} instance.")
75
-
76
- def __setattr__(self, name, value):
77
- if hasattr(self, "__frozen") and self.__frozen:
78
- raise Exception(f"You cannot use ``__setattr__`` on a {self.__class__.__name__} instance.")
79
- super().__setattr__(name, value)
80
-
81
- def __setitem__(self, name, value):
82
- if hasattr(self, "__frozen") and self.__frozen:
83
- raise Exception(f"You cannot use ``__setattr__`` on a {self.__class__.__name__} instance.")
84
- super().__setitem__(name, value)
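As a quick illustration of how this frozen container behaves in practice, here is a minimal sketch; the keys are hypothetical and only meant to show the mechanics:

```python
from ppdiffusers.configuration_utils import FrozenDict

# FrozenDict mirrors its keys as attributes and disables the usual dict mutators.
cfg = FrozenDict({"beta_start": 0.0001, "beta_end": 0.02})
print(cfg.beta_start)  # 0.0001 -- every key is also readable as an attribute
cfg.pop("beta_start")  # raises Exception: ``pop`` is blocked on FrozenDict
```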
85
-
86
-
87
- class ConfigMixin:
88
- r"""
89
- Base class for all configuration classes. Stores all configuration parameters under `self.config`. Also handles all
90
- methods for loading/downloading/saving classes inheriting from [`ConfigMixin`] with
91
- - [`~ConfigMixin.from_config`]
92
- - [`~ConfigMixin.save_config`]
93
-
94
- Class attributes:
95
- - **config_name** (`str`) -- A filename under which the config should stored when calling
96
- [`~ConfigMixin.save_config`] (should be overridden by parent class).
97
- - **ignore_for_config** (`List[str]`) -- A list of attributes that should not be saved in the config (should be
98
- overridden by subclass).
99
- - **has_compatibles** (`bool`) -- Whether the class has compatible classes (should be overridden by subclass).
100
- - **_deprecated_kwargs** (`List[str]`) -- Keyword arguments that are deprecated. Note that the init function
101
- should only have a `kwargs` argument if at least one argument is deprecated (should be overridden by
102
- subclass).
103
- """
104
- config_name = None
105
- ignore_for_config = []
106
- has_compatibles = False
107
- _deprecated_kwargs = []
108
-
109
- def register_to_config(self, **kwargs):
110
- if self.config_name is None:
111
- raise NotImplementedError(f"Make sure that {self.__class__} has defined a class name `config_name`")
112
-
113
- # Special case for `kwargs` used in deprecation warning added to schedulers
114
- # TODO: remove this when we remove the deprecation warning, and the `kwargs` argument,
115
- # or solve in a more general way.
116
- kwargs.pop("kwargs", None)
117
- for key, value in kwargs.items():
118
- try:
119
- setattr(self, key, value)
120
- except AttributeError as err:
121
- logger.error(f"Can't set {key} with value {value} for {self}")
122
- raise err
123
-
124
- if not hasattr(self, "_internal_dict"):
125
- internal_dict = kwargs
126
- else:
127
- previous_dict = dict(self._internal_dict)
128
- internal_dict = {**self._internal_dict, **kwargs}
129
- logger.debug(f"Updating config from {previous_dict} to {internal_dict}")
130
-
131
- self._internal_dict = FrozenDict(internal_dict)
132
-
133
- def save_config(self, save_directory: Union[str, os.PathLike], push_to_hub: bool = False, **kwargs):
134
- """
135
- Save a configuration object to the directory `save_directory`, so that it can be re-loaded using the
136
- [`~ConfigMixin.from_config`] class method.
137
-
138
- Args:
139
- save_directory (`str` or `os.PathLike`):
140
- Directory where the configuration JSON file will be saved (will be created if it does not exist).
141
- """
142
- if os.path.isfile(save_directory):
143
- raise AssertionError(f"Provided path ({save_directory}) should be a directory, not a file")
144
-
145
- os.makedirs(save_directory, exist_ok=True)
146
-
147
- # If we save using the predefined names, we can load using `from_config`
148
- output_config_file = os.path.join(save_directory, self.config_name)
149
-
150
- self.to_json_file(output_config_file)
151
- logger.info(f"Configuration saved in {output_config_file}")
152
-
153
- def save_to_hf_hub(
154
- self,
155
- repo_id: str,
156
- private: Optional[bool] = None,
157
- subfolder: Optional[str] = None,
158
- commit_message: Optional[str] = None,
159
- revision: Optional[str] = None,
160
- create_pr: bool = False,
161
- ):
162
- """
163
- Uploads all elements of this config to a new HuggingFace Hub repository.
164
- Args:
165
- repo_id (str): Repository name for your model/tokenizer in the Hub.
166
- private (bool, optional): Whether the model/tokenizer is set to private
167
- subfolder (str, optional): Push to a subfolder of the repo instead of the root
168
- commit_message (str, optional): The summary / title / first line of the generated commit. Defaults to: f"Upload {path_in_repo} with huggingface_hub"
169
- revision (str, optional): The git revision to commit from. Defaults to the head of the "main" branch.
170
- create_pr (boolean, optional): Whether or not to create a Pull Request with that commit. Defaults to False.
171
- If revision is not set, PR is opened against the "main" branch. If revision is set and is a branch, PR is opened against this branch.
172
- If revision is set and is not a branch name (example: a commit oid), a `RevisionNotFoundError` is returned by the server.
173
-
174
- Returns: The url of the commit of your model in the given repository.
175
- """
176
- repo_url = create_repo(repo_id, private=private, exist_ok=True)
177
-
178
- # Infer complete repo_id from repo_url
179
- # Can be different from the input `repo_id` if repo_owner was implicit
180
- _, repo_owner, repo_name = repo_type_and_id_from_hf_id(repo_url)
181
-
182
- repo_id = f"{repo_owner}/{repo_name}"
183
-
184
- # Check if README file already exist in repo
185
- try:
186
- get_hf_file_metadata(hf_hub_url(repo_id=repo_id, filename="README.md", revision=revision))
187
- has_readme = True
188
- except EntryNotFoundError:
189
- has_readme = False
190
-
191
- with tempfile.TemporaryDirectory() as root_dir:
192
- if subfolder is not None:
193
- save_dir = os.path.join(root_dir, subfolder)
194
- else:
195
- save_dir = root_dir
196
- # save config
197
- self.save_config(save_dir)
198
- # Add readme if does not exist
199
- logger.info("README.md not found, adding the default README.md")
200
- if not has_readme:
201
- with open(os.path.join(root_dir, "README.md"), "w") as f:
202
- f.write(f"---\nlibrary_name: ppdiffusers\n---\n# {repo_id}")
203
-
204
- # Upload model and return
205
- logger.info(f"Pushing to the {repo_id}. This might take a while")
206
- return upload_folder(
207
- repo_id=repo_id,
208
- repo_type="model",
209
- folder_path=root_dir,
210
- commit_message=commit_message,
211
- revision=revision,
212
- create_pr=create_pr,
213
- )
214
-
215
- @classmethod
216
- def from_config(cls, config: Union[FrozenDict, Dict[str, Any]] = None, return_unused_kwargs=False, **kwargs):
217
- r"""
218
- Instantiate a Python class from a config dictionary
219
-
220
- Parameters:
221
- config (`Dict[str, Any]`):
222
- A config dictionary from which the Python class will be instantiated. Make sure to only load
223
- configuration files of compatible classes.
224
- return_unused_kwargs (`bool`, *optional*, defaults to `False`):
225
- Whether kwargs that are not consumed by the Python class should be returned or not.
226
-
227
- kwargs (remaining dictionary of keyword arguments, *optional*):
228
- Can be used to update the configuration object (after it being loaded) and initiate the Python class.
229
- `**kwargs` will be directly passed to the underlying scheduler/model's `__init__` method and eventually
230
- overwrite same named arguments of `config`.
231
-
232
- Examples:
233
-
234
- ```python
235
- >>> from ppdiffusers import DDPMScheduler, DDIMScheduler, PNDMScheduler
236
-
237
- >>> # Download scheduler from BOS and cache.
238
- >>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32")
239
-
240
- >>> # Instantiate DDIM scheduler class with same config as DDPM
241
- >>> scheduler = DDIMScheduler.from_config(scheduler.config)
242
-
243
- >>> # Instantiate PNDM scheduler class with same config as DDPM
244
- >>> scheduler = PNDMScheduler.from_config(scheduler.config)
245
- ```
246
- """
247
- # <===== TO BE REMOVED WITH DEPRECATION
248
- # TODO(Patrick) - make sure to remove the following lines when config=="model_path" is deprecated
249
- if "pretrained_model_name_or_path" in kwargs:
250
- config = kwargs.pop("pretrained_model_name_or_path")
251
-
252
- if config is None:
253
- raise ValueError("Please make sure to provide a config as the first positional argument.")
254
- # ======>
255
-
256
- if not isinstance(config, dict):
257
- deprecation_message = "It is deprecated to pass a pretrained model name or path to `from_config`."
258
- if "Scheduler" in cls.__name__:
259
- deprecation_message += (
260
- f"If you were trying to load a scheduler, please use {cls}.from_pretrained(...) instead."
261
- " Otherwise, please make sure to pass a configuration dictionary instead. This functionality will"
262
- " be removed in v1.0.0."
263
- )
264
- elif "Model" in cls.__name__:
265
- deprecation_message += (
266
- f"If you were trying to load a model, please use {cls}.load_config(...) followed by"
267
- f" {cls}.from_config(...) instead. Otherwise, please make sure to pass a configuration dictionary"
268
- " instead. This functionality will be removed in v1.0.0."
269
- )
270
- deprecate("config-passed-as-path", "1.0.0", deprecation_message, standard_warn=False)
271
- config, kwargs = cls.load_config(pretrained_model_name_or_path=config, return_unused_kwargs=True, **kwargs)
272
-
273
- init_dict, unused_kwargs, hidden_dict = cls.extract_init_dict(config, **kwargs)
274
-
275
- # Allow dtype to be specified on initialization
276
- if "dtype" in unused_kwargs:
277
- # (TODO junnyu, donot use dtype)
278
- unused_kwargs.pop("dtype")
279
- # init_dict["dtype"] = unused_kwargs.pop("dtype")
280
-
281
- # add possible deprecated kwargs
282
- for deprecated_kwarg in cls._deprecated_kwargs:
283
- if deprecated_kwarg in unused_kwargs:
284
- init_dict[deprecated_kwarg] = unused_kwargs.pop(deprecated_kwarg)
285
-
286
- # Return model and optionally state and/or unused_kwargs
287
- model = cls(**init_dict)
288
-
289
- # make sure to also save config parameters that might be used for compatible classes
290
- model.register_to_config(**hidden_dict)
291
-
292
- # add hidden kwargs of compatible classes to unused_kwargs
293
- unused_kwargs = {**unused_kwargs, **hidden_dict}
294
-
295
- if return_unused_kwargs:
296
- return (model, unused_kwargs)
297
- else:
298
- return model
299
-
300
- @classmethod
301
- def get_config_dict(cls, *args, **kwargs):
302
- deprecation_message = (
303
- f" The function get_config_dict is deprecated. Please use {cls}.load_config instead. This function will be"
304
- " removed in version v1.0.0"
305
- )
306
- deprecate("get_config_dict", "1.0.0", deprecation_message, standard_warn=False)
307
- return cls.load_config(*args, **kwargs)
308
-
309
- @classmethod
310
- def load_config(
311
- cls, pretrained_model_name_or_path: Union[str, os.PathLike], return_unused_kwargs=False, **kwargs
312
- ) -> Tuple[Dict[str, Any], Dict[str, Any]]:
313
- r"""
314
- Instantiate a Python class from a config dictionary
315
-
316
- Parameters:
317
- pretrained_model_name_or_path (`str` or `os.PathLike`, *optional*):
318
- Can be either:
319
-
320
- - A string, the *model id* of a model repo on huggingface.co. Valid model ids should have an
321
- organization name, like `google/ddpm-celebahq-256`.
322
- - A path to a *directory* containing model weights saved using [`~ConfigMixin.save_config`], e.g.,
323
- `./my_model_directory/`.
324
-
325
- cache_dir (`Union[str, os.PathLike]`, *optional*):
326
- Path to a directory in which a downloaded pretrained model configuration should be cached if the
327
- standard cache should not be used.
328
- output_loading_info(`bool`, *optional*, defaults to `False`):
329
- Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
330
- subfolder (`str`, *optional*, defaults to `""`):
331
- In case the relevant files are located inside a subfolder of the model repo (either remote in
332
- huggingface.co or downloaded locally), you can specify the folder name here.
333
- from_hf_hub (bool, *optional*):
334
- Whether to load from Hugging Face Hub. Defaults to False
335
- """
336
- from_hf_hub = kwargs.pop("from_hf_hub", False)
337
- if from_hf_hub:
338
- cache_dir = kwargs.pop("cache_dir", HF_CACHE)
339
- else:
340
- cache_dir = kwargs.pop("cache_dir", PPDIFFUSERS_CACHE)
341
- subfolder = kwargs.pop("subfolder", None)
342
-
343
- pretrained_model_name_or_path = str(pretrained_model_name_or_path)
344
-
345
- if cls.config_name is None:
346
- raise ValueError(
347
- "`self.config_name` is not defined. Note that one should not load a config from "
348
- "`ConfigMixin`. Please make sure to define `config_name` in a class inheriting from `ConfigMixin`"
349
- )
350
-
351
- if os.path.isfile(pretrained_model_name_or_path):
352
- config_file = pretrained_model_name_or_path
353
- elif os.path.isdir(pretrained_model_name_or_path):
354
- if os.path.isfile(os.path.join(pretrained_model_name_or_path, cls.config_name)):
355
- # Load from a Paddle checkpoint
356
- config_file = os.path.join(pretrained_model_name_or_path, cls.config_name)
357
- elif subfolder is not None and os.path.isfile(
358
- os.path.join(pretrained_model_name_or_path, subfolder, cls.config_name)
359
- ):
360
- config_file = os.path.join(pretrained_model_name_or_path, subfolder, cls.config_name)
361
- else:
362
- raise EnvironmentError(
363
- f"Error no file named {cls.config_name} found in directory {pretrained_model_name_or_path}."
364
- )
365
- elif from_hf_hub:
366
- config_file = hf_hub_download(
367
- repo_id=pretrained_model_name_or_path,
368
- filename=cls.config_name,
369
- cache_dir=cache_dir,
370
- subfolder=subfolder,
371
- library_name="PPDiffusers",
372
- library_version=__version__,
373
- )
374
- else:
375
- try:
376
- config_file = ppdiffusers_bos_download(
377
- pretrained_model_name_or_path,
378
- filename=cls.config_name,
379
- subfolder=subfolder,
380
- cache_dir=cache_dir,
381
- )
382
- except HTTPError as err:
383
- raise EnvironmentError(
384
- "There was a specific connection error when trying to load"
385
- f" {pretrained_model_name_or_path}:\n{err}"
386
- )
387
- except ValueError:
388
- raise EnvironmentError(
389
- f"We couldn't connect to '{DOWNLOAD_SERVER}' to load this model, couldn't find it"
390
- f" in the cached files and it looks like {pretrained_model_name_or_path} is not the path to a"
391
- f" directory containing a {cls.config_name} file.\nCheckout your internet connection or see how to"
392
- " run the library in offline mode at"
393
- " 'https://huggingface.co/docs/diffusers/installation#offline-mode'."
394
- )
395
- except EnvironmentError:
396
- raise EnvironmentError(
397
- f"Can't load config for '{pretrained_model_name_or_path}'. If you were trying to load it from "
398
- "'https://huggingface.co/models', make sure you don't have a local directory with the same name. "
399
- f"Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory "
400
- f"containing a {cls.config_name} file"
401
- )
402
-
403
- try:
404
- # Load config dict
405
- config_dict = cls._dict_from_json_file(config_file)
406
- except (json.JSONDecodeError, UnicodeDecodeError):
407
- raise EnvironmentError(f"It looks like the config file at '{config_file}' is not a valid JSON file.")
408
-
409
- if return_unused_kwargs:
410
- return config_dict, kwargs
411
-
412
- return config_dict
413
-
414
- @staticmethod
415
- def _get_init_keys(cls):
416
- return set(dict(inspect.signature(cls.__init__).parameters).keys())
417
-
418
- @classmethod
419
- def extract_init_dict(cls, config_dict, **kwargs):
420
- # 0. Copy origin config dict
421
- original_dict = {k: v for k, v in config_dict.items()}
422
-
423
- # 1. Retrieve expected config attributes from __init__ signature
424
- expected_keys = cls._get_init_keys(cls)
425
- expected_keys.remove("self")
426
- # remove general kwargs if present in dict
427
- if "kwargs" in expected_keys:
428
- expected_keys.remove("kwargs")
429
-
430
- # 2. Remove attributes that cannot be expected from expected config attributes
431
- # remove keys to be ignored
432
- if len(cls.ignore_for_config) > 0:
433
- expected_keys = expected_keys - set(cls.ignore_for_config)
434
-
435
- # load ppdiffusers library to import compatible and original scheduler
436
- ppdiffusers_library = importlib.import_module(__name__.split(".")[0])
437
-
438
- if cls.has_compatibles:
439
- compatible_classes = [c for c in cls._get_compatibles() if not isinstance(c, DummyObject)]
440
- else:
441
- compatible_classes = []
442
-
443
- expected_keys_comp_cls = set()
444
- for c in compatible_classes:
445
- expected_keys_c = cls._get_init_keys(c)
446
- expected_keys_comp_cls = expected_keys_comp_cls.union(expected_keys_c)
447
- expected_keys_comp_cls = expected_keys_comp_cls - cls._get_init_keys(cls)
448
- config_dict = {k: v for k, v in config_dict.items() if k not in expected_keys_comp_cls}
449
-
450
- # remove attributes from orig class that cannot be expected
451
- orig_cls_name = config_dict.pop("_class_name", cls.__name__)
452
- if orig_cls_name != cls.__name__ and hasattr(ppdiffusers_library, orig_cls_name):
453
- orig_cls = getattr(ppdiffusers_library, orig_cls_name)
454
- unexpected_keys_from_orig = cls._get_init_keys(orig_cls) - expected_keys
455
- config_dict = {k: v for k, v in config_dict.items() if k not in unexpected_keys_from_orig}
456
-
457
- # remove private attributes
458
- config_dict = {k: v for k, v in config_dict.items() if not k.startswith("_")}
459
-
460
- # 3. Create keyword arguments that will be passed to __init__ from expected keyword arguments
461
- init_dict = {}
462
- for key in expected_keys:
463
- # if config param is passed to kwarg and is present in config dict
464
- # it should overwrite existing config dict key
465
- if key in kwargs and key in config_dict:
466
- config_dict[key] = kwargs.pop(key)
467
-
468
- if key in kwargs:
469
- # overwrite key
470
- init_dict[key] = kwargs.pop(key)
471
- elif key in config_dict:
472
- # use value from config dict
473
- init_dict[key] = config_dict.pop(key)
474
-
475
- # 4. Give nice warning if unexpected values have been passed
476
- if len(config_dict) > 0:
477
- logger.warning(
478
- f"The config attributes {config_dict} were passed to {cls.__name__}, "
479
- "but are not expected and will be ignored. Please verify your "
480
- f"{cls.config_name} configuration file."
481
- )
482
-
483
- # 5. Give nice info if config attributes are initiliazed to default because they have not been passed
484
- passed_keys = set(init_dict.keys())
485
- if len(expected_keys - passed_keys) > 0:
486
- logger.info(
487
- f"{expected_keys - passed_keys} was not found in config. Values will be initialized to default values."
488
- )
489
-
490
- # 6. Define unused keyword arguments
491
- unused_kwargs = {**config_dict, **kwargs}
492
-
493
- # 7. Define "hidden" config parameters that were saved for compatible classes
494
- hidden_config_dict = {k: v for k, v in original_dict.items() if k not in init_dict}
495
-
496
- return init_dict, unused_kwargs, hidden_config_dict
497
-
498
- @classmethod
499
- def _dict_from_json_file(cls, json_file: Union[str, os.PathLike]):
500
- with open(json_file, "r", encoding="utf-8") as reader:
501
- text = reader.read()
502
- return json.loads(text)
503
-
504
- def __repr__(self):
505
- return f"{self.__class__.__name__} {self.to_json_string()}"
506
-
507
- @property
508
- def config(self) -> Dict[str, Any]:
509
- """
510
- Returns the config of the class as a frozen dictionary
511
-
512
- Returns:
513
- `Dict[str, Any]`: Config of the class.
514
- """
515
- return self._internal_dict
516
-
517
- def to_json_string(self) -> str:
518
- """
519
- Serializes this instance to a JSON string.
520
-
521
- Returns:
522
- `str`: String containing all the attributes that make up this configuration instance in JSON format.
523
- """
524
- config_dict = self._internal_dict if hasattr(self, "_internal_dict") else {}
525
- config_dict["_class_name"] = self.__class__.__name__
526
- config_dict["_ppdiffusers_version"] = __version__
527
-
528
- def to_json_saveable(value):
529
- if isinstance(value, np.ndarray):
530
- value = value.tolist()
531
- return value
532
-
533
- config_dict = {k: to_json_saveable(v) for k, v in config_dict.items()}
534
- return json.dumps(config_dict, indent=2, sort_keys=True) + "\n"
535
-
536
- def to_json_file(self, json_file_path: Union[str, os.PathLike]):
537
- """
538
- Save this instance to a JSON file.
539
-
540
- Args:
541
- json_file_path (`str` or `os.PathLike`):
542
- Path to the JSON file in which this configuration instance's parameters will be saved.
543
- """
544
- with open(json_file_path, "w", encoding="utf-8") as writer:
545
- writer.write(self.to_json_string())
546
-
547
-
548
- def register_to_config(init):
549
- r"""
550
- Decorator to apply on the init of classes inheriting from [`ConfigMixin`] so that all the arguments are
551
- automatically sent to `self.register_for_config`. To ignore a specific argument accepted by the init but that
552
- shouldn't be registered in the config, use the `ignore_for_config` class variable
553
-
554
- Warning: Once decorated, all private arguments (beginning with an underscore) are trashed and not sent to the init!
555
- """
556
-
557
- @functools.wraps(init)
558
- def inner_init(self, *args, **kwargs):
559
- # Ignore private kwargs in the init.
560
- init_kwargs = {k: v for k, v in kwargs.items() if not k.startswith("_")}
561
- config_init_kwargs = {k: v for k, v in kwargs.items() if k.startswith("_")}
562
-
563
- if not isinstance(self, ConfigMixin):
564
- raise RuntimeError(
565
- f"`@register_for_config` was applied to {self.__class__.__name__} init method, but this class does "
566
- "not inherit from `ConfigMixin`."
567
- )
568
-
569
- ignore = getattr(self, "ignore_for_config", [])
570
- # Get positional arguments aligned with kwargs
571
- new_kwargs = {}
572
- signature = inspect.signature(init)
573
- parameters = {
574
- name: p.default for i, (name, p) in enumerate(signature.parameters.items()) if i > 0 and name not in ignore
575
- }
576
- for arg, name in zip(args, parameters.keys()):
577
- new_kwargs[name] = arg
578
-
579
- # Then add all kwargs
580
- new_kwargs.update(
581
- {
582
- k: init_kwargs.get(k, default)
583
- for k, default in parameters.items()
584
- if k not in ignore and k not in new_kwargs
585
- }
586
- )
587
- new_kwargs = {**config_init_kwargs, **new_kwargs}
588
- getattr(self, "register_to_config")(**new_kwargs)
589
- init(self, *args, **init_kwargs)
590
-
591
- return inner_init
 
 
spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_k_dpm_2_discrete.py DELETED
@@ -1,286 +0,0 @@
1
- # Copyright 2022 Katherine Crowson, The HuggingFace Team and hlky. All rights reserved.
2
- #
3
- # Licensed under the Apache License, Version 2.0 (the "License");
4
- # you may not use this file except in compliance with the License.
5
- # You may obtain a copy of the License at
6
- #
7
- # http://www.apache.org/licenses/LICENSE-2.0
8
- #
9
- # Unless required by applicable law or agreed to in writing, software
10
- # distributed under the License is distributed on an "AS IS" BASIS,
11
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
- # See the License for the specific language governing permissions and
13
- # limitations under the License.
14
-
15
- from typing import List, Optional, Tuple, Union
16
-
17
- import numpy as np
18
- import paddle
19
-
20
- from ..configuration_utils import ConfigMixin, register_to_config
21
- from ..utils import _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS
22
- from .scheduling_utils import SchedulerMixin, SchedulerOutput
23
-
24
-
25
- class KDPM2DiscreteScheduler(SchedulerMixin, ConfigMixin):
26
- """
27
- Scheduler created by @crowsonkb in [k_diffusion](https://github.com/crowsonkb/k-diffusion), see:
28
- https://github.com/crowsonkb/k-diffusion/blob/5b3af030dd83e0297272d861c19477735d0317ec/k_diffusion/sampling.py#L188
29
-
30
- Scheduler inspired by DPM-Solver-2 and Algorithm 2 from Karras et al. (2022).
31
-
32
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
33
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
34
- [`SchedulerMixin`] provides general loading and saving functionality via the [`~SchedulerMixin.save_pretrained`] and
35
- [`~SchedulerMixin.from_pretrained`] functions.
36
-
37
- Args:
38
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
39
- beta_start (`float`): the starting `beta` value of inference.
40
- beta_end (`float`): the final `beta` value.
41
- beta_schedule (`str`):
42
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
43
- `linear` or `scaled_linear`.
44
- trained_betas (`np.ndarray`, optional):
45
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
46
- prediction_type (`str`, default `epsilon`, optional):
47
- prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion
48
- process), `sample` (directly predicting the noisy sample) or `v_prediction` (see section 2.4
49
- https://imagen.research.google/video/paper.pdf)
50
- """
51
-
52
- _compatibles = _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy()
53
- order = 2
54
-
55
- @register_to_config
56
- def __init__(
57
- self,
58
- num_train_timesteps: int = 1000,
59
- beta_start: float = 0.00085, # sensible defaults
60
- beta_end: float = 0.012,
61
- beta_schedule: str = "linear",
62
- trained_betas: Optional[Union[np.ndarray, List[float]]] = None,
63
- prediction_type: str = "epsilon",
64
- ):
65
- if trained_betas is not None:
66
- self.betas = paddle.to_tensor(trained_betas, dtype="float32")
67
- elif beta_schedule == "linear":
68
- self.betas = paddle.linspace(beta_start, beta_end, num_train_timesteps, dtype="float32")
69
- elif beta_schedule == "scaled_linear":
70
- # this schedule is very specific to the latent diffusion model.
71
- self.betas = paddle.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype="float32") ** 2
72
- else:
73
- raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}")
74
-
75
- self.alphas = 1.0 - self.betas
76
- self.alphas_cumprod = paddle.cumprod(self.alphas, 0)
77
-
78
- # set all values
79
- self.set_timesteps(num_train_timesteps, num_train_timesteps)
80
-
81
- def index_for_timestep(self, timestep):
82
- indices = (self.timesteps == timestep).nonzero()
83
- if self.state_in_first_order:
84
- pos = -1
85
- else:
86
- pos = 0
87
- return indices[pos].item()
88
-
89
- def scale_model_input(
90
- self,
91
- sample: paddle.Tensor,
92
- timestep: Union[float, paddle.Tensor],
93
- ) -> paddle.Tensor:
94
- """
95
- Args:
96
- Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
97
- current timestep.
98
- sample (`paddle.Tensor`): input sample timestep (`int`, optional): current timestep
99
- Returns:
100
- `paddle.Tensor`: scaled input sample
101
- """
102
- step_index = self.index_for_timestep(timestep)
103
-
104
- if self.state_in_first_order:
105
- sigma = self.sigmas[step_index]
106
- else:
107
- sigma = self.sigmas_interpol[step_index]
108
-
109
- sample = sample / ((sigma**2 + 1) ** 0.5)
110
- return sample
111
-
112
- def set_timesteps(
113
- self,
114
- num_inference_steps: int,
115
- num_train_timesteps: Optional[int] = None,
116
- ):
117
- """
118
- Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
119
-
120
- Args:
121
- num_inference_steps (`int`):
122
- the number of diffusion steps used when generating samples with a pre-trained model.
123
- """
124
- self.num_inference_steps = num_inference_steps
125
-
126
- num_train_timesteps = num_train_timesteps or self.config.num_train_timesteps
127
-
128
- timesteps = np.linspace(0, num_train_timesteps - 1, num_inference_steps, dtype=np.float32)[::-1].copy()
129
-
130
- sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
131
- self.log_sigmas = paddle.to_tensor(np.log(sigmas), dtype="float32")
132
-
133
- sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas)
134
- sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32)
135
- sigmas = paddle.to_tensor(sigmas)
136
-
137
- # interpolate sigmas
138
- sigmas_interpol = sigmas.log().lerp(sigmas.roll(1).log(), 0.5).exp()
139
- # must set to 0.0
140
- sigmas_interpol[-1] = 0.0
141
-
142
- self.sigmas = paddle.concat([sigmas[:1], sigmas[1:].repeat_interleave(2), sigmas[-1:]])
143
- self.sigmas_interpol = paddle.concat(
144
- [sigmas_interpol[:1], sigmas_interpol[1:].repeat_interleave(2), sigmas_interpol[-1:]]
145
- )
146
-
147
- # standard deviation of the initial noise distribution
148
- self.init_noise_sigma = self.sigmas.max()
149
-
150
- timesteps = paddle.to_tensor(timesteps)
151
-
152
- # interpolate timesteps
153
- timesteps_interpol = self.sigma_to_t(sigmas_interpol)
154
- interleaved_timesteps = paddle.stack((timesteps_interpol[1:-1, None], timesteps[1:, None]), axis=-1).flatten()
155
- timesteps = paddle.concat([timesteps[:1], interleaved_timesteps])
156
-
157
- self.timesteps = timesteps
158
-
159
- self.sample = None
160
-
161
- def sigma_to_t(self, sigma):
162
- # get log sigma
163
- log_sigma = sigma.log()
164
-
165
- # get distribution
166
- dists = log_sigma - self.log_sigmas[:, None]
167
-
168
- # get sigmas range
169
- low_idx = (dists >= 0).cast("int64").cumsum(axis=0).argmax(axis=0).clip(max=self.log_sigmas.shape[0] - 2)
170
-
171
- high_idx = low_idx + 1
172
-
173
- low = self.log_sigmas[low_idx]
174
- high = self.log_sigmas[high_idx]
175
-
176
- # interpolate sigmas
177
- w = (low - log_sigma) / (low - high)
178
- w = w.clip(0, 1)
179
-
180
- # transform interpolation to time range
181
- t = (1 - w) * low_idx + w * high_idx
182
- t = t.reshape(sigma.shape)
183
- return t
184
-
185
- @property
186
- def state_in_first_order(self):
187
- return self.sample is None
188
-
189
- def step(
190
- self,
191
- model_output: Union[paddle.Tensor, np.ndarray],
192
- timestep: Union[float, paddle.Tensor],
193
- sample: Union[paddle.Tensor, np.ndarray],
194
- return_dict: bool = True,
195
- ) -> Union[SchedulerOutput, Tuple]:
196
- """
197
- Args:
198
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
199
- process from the learned model outputs (most often the predicted noise).
200
- model_output (`paddle.Tensor` or `np.ndarray`): direct output from learned diffusion model. timestep
201
- (`int`): current discrete timestep in the diffusion chain. sample (`paddle.Tensor` or `np.ndarray`):
202
- current instance of sample being created by diffusion process.
203
- return_dict (`bool`): option for returning tuple rather than SchedulerOutput class
204
- Returns:
205
- [`~schedulers.scheduling_utils.SchedulerOutput`] or `tuple`:
206
- [`~schedulers.scheduling_utils.SchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When
207
- returning a tuple, the first element is the sample tensor.
208
- """
209
- step_index = self.index_for_timestep(timestep)
210
-
211
- if self.state_in_first_order:
212
- sigma = self.sigmas[step_index]
213
- sigma_interpol = self.sigmas_interpol[step_index + 1]
214
- sigma_next = self.sigmas[step_index + 1]
215
- else:
216
- # 2nd order / KDPM2's method
217
- sigma = self.sigmas[step_index - 1]
218
- sigma_interpol = self.sigmas_interpol[step_index]
219
- sigma_next = self.sigmas[step_index]
220
-
221
- # currently only gamma=0 is supported. This usually works best anyways.
222
- # We can support gamma in the future but then need to scale the timestep before
223
- # passing it to the model which requires a change in API
224
- gamma = 0
225
- sigma_hat = sigma * (gamma + 1) # Note: sigma_hat == sigma for now
226
-
227
- # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise
228
- if self.config.prediction_type == "epsilon":
229
- sigma_input = sigma_hat if self.state_in_first_order else sigma_interpol
230
- pred_original_sample = sample - sigma_input * model_output
231
- elif self.config.prediction_type == "v_prediction":
232
- sigma_input = sigma_hat if self.state_in_first_order else sigma_interpol
233
- pred_original_sample = model_output * (-sigma_input / (sigma_input**2 + 1) ** 0.5) + (
234
- sample / (sigma_input**2 + 1)
235
- )
236
- else:
237
- raise ValueError(
238
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`"
239
- )
240
-
241
- if self.state_in_first_order:
242
- # 2. Convert to an ODE derivative for 1st order
243
- derivative = (sample - pred_original_sample) / sigma_hat
244
- # 3. delta timestep
245
- dt = sigma_interpol - sigma_hat
246
-
247
- # store for 2nd order step
248
- self.sample = sample
249
- else:
250
- # DPM-Solver-2
251
- # 2. Convert to an ODE derivative for 2nd order
252
- derivative = (sample - pred_original_sample) / sigma_interpol
253
-
254
- # 3. delta timestep
255
- dt = sigma_next - sigma_hat
256
-
257
- sample = self.sample
258
- self.sample = None
259
-
260
- prev_sample = sample + derivative * dt
261
-
262
- if not return_dict:
263
- return (prev_sample,)
264
-
265
- return SchedulerOutput(prev_sample=prev_sample)
266
-
267
- def add_noise(
268
- self,
269
- original_samples: paddle.Tensor,
270
- noise: paddle.Tensor,
271
- timesteps: paddle.Tensor,
272
- ) -> paddle.Tensor:
273
- # Make sure sigmas and timesteps have the same dtype as original_samples
274
- self.sigmas = self.sigmas.cast(original_samples.dtype)
275
-
276
- step_indices = [self.index_for_timestep(t) for t in timesteps]
277
-
278
- sigma = self.sigmas[step_indices].flatten()
279
- while len(sigma.shape) < len(original_samples.shape):
280
- sigma = sigma.unsqueeze(-1)
281
-
282
- noisy_samples = original_samples + noise * sigma
283
- return noisy_samples
284
-
285
- def __len__(self):
286
- return self.config.num_train_timesteps
 
spaces/1toTree/lora_test/ppdiffusers/utils/import_utils.py DELETED
@@ -1,331 +0,0 @@
1
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2
- # Copyright 2022 The HuggingFace Team. All rights reserved.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
- """
16
- Import utilities: Utilities related to imports and our lazy inits.
17
- """
18
- import importlib.util
19
- import operator as op
20
- import os
21
- import sys
22
- from collections import OrderedDict
23
- from typing import Union
24
-
25
- from packaging.version import Version, parse
26
-
27
- from . import logging
28
-
29
- # The package importlib_metadata is in a different place, depending on the python version.
30
- if sys.version_info < (3, 8):
31
- import importlib_metadata
32
- else:
33
- import importlib.metadata as importlib_metadata
34
-
35
- logger = logging.get_logger(__name__) # pylint: disable=invalid-name
36
-
37
- ENV_VARS_TRUE_VALUES = {"1", "ON", "YES", "TRUE"}
38
- ENV_VARS_TRUE_AND_AUTO_VALUES = ENV_VARS_TRUE_VALUES.union({"AUTO"})
39
-
40
- USE_PADDLE = os.environ.get("USE_PADDLE", "AUTO").upper()
41
-
42
- STR_OPERATION_TO_FUNC = {">": op.gt, ">=": op.ge, "==": op.eq, "!=": op.ne, "<=": op.le, "<": op.lt}
43
-
44
- _paddle_version = "N/A"
45
- if USE_PADDLE in ENV_VARS_TRUE_AND_AUTO_VALUES:
46
- _paddle_available = importlib.util.find_spec("paddle") is not None
47
- if _paddle_available:
48
- try:
49
- import paddle
50
-
51
- _paddle_version = paddle.__version__
52
- logger.info(f"Paddle version {_paddle_version} available.")
53
- except importlib_metadata.PackageNotFoundError:
54
- _paddle_available = False
55
- else:
56
- logger.info("Disabling Paddle because USE_PADDLE is not set.")
57
- _paddle_available = False
58
-
59
- _paddlenlp_available = importlib.util.find_spec("paddlenlp") is not None
60
- try:
61
- _paddlenlp_version = importlib_metadata.version("paddlenlp")
62
- logger.debug(f"Successfully imported paddlenlp version {_paddlenlp_version}")
63
- except importlib_metadata.PackageNotFoundError:
64
- _paddlenlp_available = False
65
-
66
- _inflect_available = importlib.util.find_spec("inflect") is not None
67
- try:
68
- _inflect_version = importlib_metadata.version("inflect")
69
- logger.debug(f"Successfully imported inflect version {_inflect_version}")
70
- except importlib_metadata.PackageNotFoundError:
71
- _inflect_available = False
72
-
73
- _unidecode_available = importlib.util.find_spec("unidecode") is not None
74
- try:
75
- _unidecode_version = importlib_metadata.version("unidecode")
76
- logger.debug(f"Successfully imported unidecode version {_unidecode_version}")
77
- except importlib_metadata.PackageNotFoundError:
78
- _unidecode_available = False
79
-
80
- _modelcards_available = importlib.util.find_spec("modelcards") is not None
81
- try:
82
- _modelcards_version = importlib_metadata.version("modelcards")
83
- logger.debug(f"Successfully imported modelcards version {_modelcards_version}")
84
- except importlib_metadata.PackageNotFoundError:
85
- _modelcards_available = False
86
-
87
- _onnxruntime_version = "N/A"
88
- _onnx_available = importlib.util.find_spec("onnxruntime") is not None
89
- if _onnx_available:
90
- candidates = (
91
- "onnxruntime",
92
- "onnxruntime-gpu",
93
- "onnxruntime-directml",
94
- "onnxruntime-openvino",
95
- "ort_nightly_directml",
96
- )
97
- _onnxruntime_version = None
98
- # For the metadata, we have to look for both onnxruntime and onnxruntime-gpu
99
- for pkg in candidates:
100
- try:
101
- _onnxruntime_version = importlib_metadata.version(pkg)
102
- break
103
- except importlib_metadata.PackageNotFoundError:
104
- pass
105
- _onnx_available = _onnxruntime_version is not None
106
- if _onnx_available:
107
- logger.debug(f"Successfully imported onnxruntime version {_onnxruntime_version}")
108
-
109
- _scipy_available = importlib.util.find_spec("scipy") is not None
110
- try:
111
- _scipy_version = importlib_metadata.version("scipy")
112
- logger.debug(f"Successfully imported scipy version {_scipy_version}")
113
- except importlib_metadata.PackageNotFoundError:
114
- _scipy_available = False
115
-
116
- _librosa_available = importlib.util.find_spec("librosa") is not None
117
- try:
118
- _librosa_version = importlib_metadata.version("librosa")
119
- logger.debug(f"Successfully imported librosa version {_librosa_version}")
120
- except importlib_metadata.PackageNotFoundError:
121
- _librosa_available = False
122
-
123
- _fastdeploy_available = importlib.util.find_spec("fastdeploy") is not None
124
- if _fastdeploy_available:
125
- candidates = ("fastdeploy_gpu_python", "fastdeploy_python")
126
- _fastdeploy_version = None
127
- # For the metadata, we have to look for both fastdeploy_python and fastdeploy_gpu_python
128
- for pkg in candidates:
129
- try:
130
- _fastdeploy_version = importlib_metadata.version(pkg)
131
- break
132
- except importlib_metadata.PackageNotFoundError:
133
- pass
134
- _fastdeploy_available = _fastdeploy_version is not None
135
- if _fastdeploy_available:
136
- logger.debug(f"Successfully imported fastdeploy version {_fastdeploy_version}")
137
-
138
-
139
- _k_diffusion_available = importlib.util.find_spec("k_diffusion") is not None
140
- try:
141
- _k_diffusion_version = importlib_metadata.version("k_diffusion")
142
- logger.debug(f"Successfully imported k-diffusion version {_k_diffusion_version}")
143
- except importlib_metadata.PackageNotFoundError:
144
- _k_diffusion_available = False
145
-
146
- _wandb_available = importlib.util.find_spec("wandb") is not None
147
- try:
148
- _wandb_version = importlib_metadata.version("wandb")
149
- logger.debug(f"Successfully imported wandb version {_wandb_version }")
150
- except importlib_metadata.PackageNotFoundError:
151
- _wandb_available = False
152
-
153
-
154
- def is_paddle_available():
155
- return _paddle_available
156
-
157
-
158
- def is_paddlenlp_available():
159
- return _paddlenlp_available
160
-
161
-
162
- def is_inflect_available():
163
- return _inflect_available
164
-
165
-
166
- def is_unidecode_available():
167
- return _unidecode_available
168
-
169
-
170
- def is_modelcards_available():
171
- return _modelcards_available
172
-
173
-
174
- def is_onnx_available():
175
- return _onnx_available
176
-
177
-
178
- def is_scipy_available():
179
- return _scipy_available
180
-
181
-
182
- def is_librosa_available():
183
- return _librosa_available
184
-
185
-
186
- def is_fastdeploy_available():
187
- return _fastdeploy_available
188
-
189
-
190
- def is_k_diffusion_available():
191
- return _k_diffusion_available
192
-
193
-
194
- def is_wandb_available():
195
- return _wandb_available
196
-
197
-
198
- # docstyle-ignore
199
- FASTDEPLOY_IMPORT_ERROR = """
200
- {0} requires the fastdeploy library but it was not found in your environment. You can install it with pip: `pip install
201
- fastdeploy-gpu-python -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html`
202
- """
203
-
204
- # docstyle-ignore
205
- INFLECT_IMPORT_ERROR = """
206
- {0} requires the inflect library but it was not found in your environment. You can install it with pip: `pip install
207
- inflect`
208
- """
209
-
210
- # docstyle-ignore
211
- PADDLE_IMPORT_ERROR = """
212
- {0} requires the Paddle library but it was not found in your environment. Checkout the instructions on the
213
- installation page: https://www.paddlepaddle.org.cn/install/quick and follow the ones that match your environment.
214
- """
215
-
216
- # docstyle-ignore
217
- LIBROSA_IMPORT_ERROR = """
218
- {0} requires the librosa library but it was not found in your environment. Checkout the instructions on the
219
- installation page: https://librosa.org/doc/latest/install.html and follow the ones that match your environment.
220
- """
221
-
222
- # docstyle-ignore
223
- ONNX_IMPORT_ERROR = """
224
- {0} requires the onnxruntime library but it was not found in your environment. You can install it with pip: `pip
225
- install onnxruntime`
226
- """
227
-
228
- # docstyle-ignore
229
- SCIPY_IMPORT_ERROR = """
230
- {0} requires the scipy library but it was not found in your environment. You can install it with pip: `pip install
231
- scipy`
232
- """
233
-
234
- # docstyle-ignore
235
- PADDLENLP_IMPORT_ERROR = """
236
- {0} requires the paddlenlp library but it was not found in your environment. You can install it with pip: `pip
237
- install paddlenlp`
238
- """
239
-
240
- # docstyle-ignore
241
- UNIDECODE_IMPORT_ERROR = """
242
- {0} requires the unidecode library but it was not found in your environment. You can install it with pip: `pip install
243
- Unidecode`
244
- """
245
-
246
- # docstyle-ignore
247
- K_DIFFUSION_IMPORT_ERROR = """
248
- {0} requires the k-diffusion library but it was not found in your environment. You can install it with pip: `pip
249
- install k-diffusion`
250
- """
251
-
252
- # docstyle-ignore
253
- WANDB_IMPORT_ERROR = """
254
- {0} requires the wandb library but it was not found in your environment. You can install it with pip: `pip
255
- install wandb`
256
- """
257
-
258
- BACKENDS_MAPPING = OrderedDict(
259
- [
260
- ("fastdeploy", (is_fastdeploy_available, FASTDEPLOY_IMPORT_ERROR)),
261
- ("inflect", (is_inflect_available, INFLECT_IMPORT_ERROR)),
262
- ("onnx", (is_onnx_available, ONNX_IMPORT_ERROR)),
263
- ("scipy", (is_scipy_available, SCIPY_IMPORT_ERROR)),
264
- ("paddle", (is_paddle_available, PADDLE_IMPORT_ERROR)),
265
- ("paddlenlp", (is_paddlenlp_available, PADDLENLP_IMPORT_ERROR)),
266
- ("unidecode", (is_unidecode_available, UNIDECODE_IMPORT_ERROR)),
267
- ("librosa", (is_librosa_available, LIBROSA_IMPORT_ERROR)),
268
- ("k_diffusion", (is_k_diffusion_available, K_DIFFUSION_IMPORT_ERROR)),
269
- ("wandb", (is_wandb_available, WANDB_IMPORT_ERROR)),
270
- ]
271
- )
272
-
273
-
274
- def requires_backends(obj, backends):
275
- if not isinstance(backends, (list, tuple)):
276
- backends = [backends]
277
-
278
- name = obj.__name__ if hasattr(obj, "__name__") else obj.__class__.__name__
279
- checks = (BACKENDS_MAPPING[backend] for backend in backends)
280
- failed = [msg.format(name) for available, msg in checks if not available()]
281
- if failed:
282
- raise ImportError("".join(failed))
283
-
284
-
285
- class DummyObject(type):
286
- """
287
- Metaclass for the dummy objects. Any class using this metaclass will raise the ImportError generated by
288
- `requires_backends` each time a user tries to access any attribute of that class.
289
- """
290
-
291
- def __getattr__(cls, key):
292
- if key.startswith("_"):
293
- return super().__getattr__(cls, key)
294
- requires_backends(cls, cls._backends)
295
-
296
-
297
- # This function was copied from: https://github.com/huggingface/accelerate/blob/874c4967d94badd24f893064cc3bef45f57cadf7/src/accelerate/utils/versions.py#L319
298
- def compare_versions(library_or_version: Union[str, Version], operation: str, requirement_version: str):
299
- """
300
- Compares a library version to some requirement using a given operation.
301
- Args:
302
- library_or_version (`str` or `packaging.version.Version`):
303
- A library name or a version to check.
304
- operation (`str`):
305
- A string representation of an operator, such as `">"` or `"<="`.
306
- requirement_version (`str`):
307
- The version to compare the library version against
308
- """
309
- if operation not in STR_OPERATION_TO_FUNC.keys():
310
- raise ValueError(f"`operation` must be one of {list(STR_OPERATION_TO_FUNC.keys())}, received {operation}")
311
- operation = STR_OPERATION_TO_FUNC[operation]
312
- if isinstance(library_or_version, str):
313
- library_or_version = parse(importlib_metadata.version(library_or_version))
314
- return operation(library_or_version, parse(requirement_version))
315
-
316
-
317
- # This function was copied from: https://github.com/huggingface/accelerate/blob/874c4967d94badd24f893064cc3bef45f57cadf7/src/accelerate/utils/versions.py#L338
318
- def is_paddle_version(operation: str, version: str):
319
- """
320
- Compares the current Paddle version to a given reference with an operation.
321
- Args:
322
- operation (`str`):
323
- A string representation of an operator, such as `">"` or `"<="`
324
- version (`str`):
325
- A string version of Paddle
326
- """
327
- return compare_versions(parse(_paddle_version), operation, version)
328
-
329
-
330
- class OptionalDependencyNotAvailable(BaseException):
331
- """An error indicating that an optional dependency of Diffusers was not found in the environment."""
spaces/801artistry/RVC801/infer/lib/uvr5_pack/lib_v5/model_param_init.py DELETED
@@ -1,69 +0,0 @@
1
- import json
2
- import os
3
- import pathlib
4
-
5
- default_param = {}
6
- default_param["bins"] = 768
7
- default_param["unstable_bins"] = 9 # training only
8
- default_param["reduction_bins"] = 762 # training only
9
- default_param["sr"] = 44100
10
- default_param["pre_filter_start"] = 757
11
- default_param["pre_filter_stop"] = 768
12
- default_param["band"] = {}
13
-
14
-
15
- default_param["band"][1] = {
16
- "sr": 11025,
17
- "hl": 128,
18
- "n_fft": 960,
19
- "crop_start": 0,
20
- "crop_stop": 245,
21
- "lpf_start": 61, # inference only
22
- "res_type": "polyphase",
23
- }
24
-
25
- default_param["band"][2] = {
26
- "sr": 44100,
27
- "hl": 512,
28
- "n_fft": 1536,
29
- "crop_start": 24,
30
- "crop_stop": 547,
31
- "hpf_start": 81, # inference only
32
- "res_type": "sinc_best",
33
- }
34
-
35
-
36
- def int_keys(d):
37
- r = {}
38
- for k, v in d:
39
- if k.isdigit():
40
- k = int(k)
41
- r[k] = v
42
- return r
43
-
44
-
45
- class ModelParameters(object):
46
- def __init__(self, config_path=""):
47
- if ".pth" == pathlib.Path(config_path).suffix:
48
- import zipfile
49
-
50
- with zipfile.ZipFile(config_path, "r") as zip:
51
- self.param = json.loads(
52
- zip.read("param.json"), object_pairs_hook=int_keys
53
- )
54
- elif ".json" == pathlib.Path(config_path).suffix:
55
- with open(config_path, "r") as f:
56
- self.param = json.loads(f.read(), object_pairs_hook=int_keys)
57
- else:
58
- self.param = default_param
59
-
60
- for k in [
61
- "mid_side",
62
- "mid_side_b",
63
- "mid_side_b2",
64
- "stereo_w",
65
- "stereo_n",
66
- "reverse",
67
- ]:
68
- if k not in self.param:
69
- self.param[k] = False
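
Note: int_keys is passed as json's object_pairs_hook, so it receives a list of (key, value) tuples rather than a dict; that is why it iterates d directly. A usage sketch (the config path "2band_44100.json" is hypothetical):

    # band indices come back as ints thanks to int_keys
    params = ModelParameters("2band_44100.json")
    print(params.param["band"][1]["sr"])

    # int_keys in isolation:
    print(int_keys([("1", {"sr": 11025}), ("sr", 44100)]))  # {1: {'sr': 11025}, 'sr': 44100}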
spaces/AIWaves/Debate/src/agents/template.py DELETED
@@ -1,111 +0,0 @@
1
- ## default: { "temperature": 0.3, "model": "gpt-3.5-turbo-16k-0613", "log_path": "logs/{your name}" }
2
- LLM = {
3
- "temperature": 0.0,
4
- "model": "gpt-3.5-turbo-16k-0613",
5
- "log_path": "logs/god"
6
- }
7
-
8
-
9
- Agents = {
10
- "Lilong" : {
11
- "style" : "professional",
12
- "roles" : {
13
- "company" : "coder",
14
- "state2" : "role2",
15
- },
16
- "name2" : {
17
- "style" : "professional",
18
- "roles" : {
19
- "company" : "coder",
20
- "state2" : "role2",
21
- },
22
- }
23
- }
24
- }
25
-
26
- # indispensable parameter: "controller_type"("order","random","rule")
27
- # default extract words: "end". You can choose not to fill in this parameter
28
- controller = {
29
- "controller_type": "order",
30
- "max_chat_nums" : 12,
31
- "judge_system_prompt": "",
32
- "judge_last_prompt": "",
33
- "judge_extract_words": "end",
34
- "call_system_prompt" : "",
35
- "call_last_prompt": "",
36
- "call_extract_words": ""
37
- }
38
-
39
- #
40
- Agent_state = {
41
- "role": {
42
- "LLM_type": "OpenAI",
43
- "LLM": LLM,
44
- "style": {
45
- "role": "Opening Advocate for the Affirmative",
46
- "style": "professional"
47
- },
48
- "task": {
49
- "task": ""
50
- },
51
- "rule": {
52
- "rule": ""
53
- }
54
- },
55
- }
56
-
57
-
58
- # indispensable parameter: "agent_states","controller"
59
- # "roles" determines the speaking order when the rule is order. If not set, it is the default order.
60
- # "begin_query" & "begin_role" determines the first speaker.It often determines the direction of the next speech. If you do not set it, it will default to the first agent.
61
- # "environment_prompt" : Responsible for setting the scene for the current environment
62
- State = {
63
- "controller": controller,
64
- "begin_role": "",
65
- "begin_query": "",
66
- "environment_prompt": "",
67
- "roles": ["role1","role2"],
68
- "LLM_type": "OpenAI",
69
- "LLM": LLM,
70
- "agent_state" : Agent_state,
71
- }
72
-
73
-
74
-
75
- States = {
76
- "end_state":{
77
- "agent_states":{}
78
- },
79
- "state1" : State
80
-
81
- }
82
-
83
-
84
- # default finish_state_name is "end_state"
85
- # "environment_type" : "competive" : different states not share the memory; "cooperative":diffrent states share the memory
86
- SOP = {
87
- "config" : {
88
- "API_KEY" : "Your key",
89
- "PROXY" : "Your PROXY",
90
- "MAX_CHAT_HISTORY" : "5",
91
- "User_Names" : "[\"alexander\"]"
92
- },
93
- "environment_type" : "competive",
94
- "LLM_type": "OpenAI",
95
- "LLM" :LLM,
96
- "root": "state1",
97
- "finish_state_name" : "end_state",
98
- "relations": {
99
- "state1": {
100
- "0": "state1",
101
- "1": "state2"
102
- },
103
- "state2":{
104
- "0":"state2",
105
- "1":"end_state"
106
- }
107
- },
108
- "agents": Agents,
109
- "states": States,
110
- }
111
-
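
Note: the "relations" table above is a transition function keyed by the controller's judge output. A minimal sketch of walking it (the judge outputs are made up for illustration):

    state = SOP["root"]                   # "state1"
    for judge_output in ["0", "1", "1"]:  # hypothetical controller decisions
        state = SOP["relations"][state][judge_output]
    print(state)                          # "end_state" == SOP["finish_state_name"]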
spaces/AP123/ai-avatars/convertosd.py DELETED
@@ -1,226 +0,0 @@
1
- # Script for converting a HF Diffusers saved pipeline to a Stable Diffusion checkpoint.
2
- # *Only* converts the UNet, VAE, and Text Encoder.
3
- # Does not convert optimizer state or any other thing.
4
- # Written by jachiam
5
-
6
- import argparse
7
- import os.path as osp
8
-
9
- import torch
10
- import gc
11
-
12
- # =================#
13
- # UNet Conversion #
14
- # =================#
15
-
16
- unet_conversion_map = [
17
- # (stable-diffusion, HF Diffusers)
18
- ("time_embed.0.weight", "time_embedding.linear_1.weight"),
19
- ("time_embed.0.bias", "time_embedding.linear_1.bias"),
20
- ("time_embed.2.weight", "time_embedding.linear_2.weight"),
21
- ("time_embed.2.bias", "time_embedding.linear_2.bias"),
22
- ("input_blocks.0.0.weight", "conv_in.weight"),
23
- ("input_blocks.0.0.bias", "conv_in.bias"),
24
- ("out.0.weight", "conv_norm_out.weight"),
25
- ("out.0.bias", "conv_norm_out.bias"),
26
- ("out.2.weight", "conv_out.weight"),
27
- ("out.2.bias", "conv_out.bias"),
28
- ]
29
-
30
- unet_conversion_map_resnet = [
31
- # (stable-diffusion, HF Diffusers)
32
- ("in_layers.0", "norm1"),
33
- ("in_layers.2", "conv1"),
34
- ("out_layers.0", "norm2"),
35
- ("out_layers.3", "conv2"),
36
- ("emb_layers.1", "time_emb_proj"),
37
- ("skip_connection", "conv_shortcut"),
38
- ]
39
-
40
- unet_conversion_map_layer = []
41
- # hardcoded number of downblocks and resnets/attentions...
42
- # would need smarter logic for other networks.
43
- for i in range(4):
44
- # loop over downblocks/upblocks
45
-
46
- for j in range(2):
47
- # loop over resnets/attentions for downblocks
48
- hf_down_res_prefix = f"down_blocks.{i}.resnets.{j}."
49
- sd_down_res_prefix = f"input_blocks.{3*i + j + 1}.0."
50
- unet_conversion_map_layer.append((sd_down_res_prefix, hf_down_res_prefix))
51
-
52
- if i < 3:
53
- # no attention layers in down_blocks.3
54
- hf_down_atn_prefix = f"down_blocks.{i}.attentions.{j}."
55
- sd_down_atn_prefix = f"input_blocks.{3*i + j + 1}.1."
56
- unet_conversion_map_layer.append((sd_down_atn_prefix, hf_down_atn_prefix))
57
-
58
- for j in range(3):
59
- # loop over resnets/attentions for upblocks
60
- hf_up_res_prefix = f"up_blocks.{i}.resnets.{j}."
61
- sd_up_res_prefix = f"output_blocks.{3*i + j}.0."
62
- unet_conversion_map_layer.append((sd_up_res_prefix, hf_up_res_prefix))
63
-
64
- if i > 0:
65
- # no attention layers in up_blocks.0
66
- hf_up_atn_prefix = f"up_blocks.{i}.attentions.{j}."
67
- sd_up_atn_prefix = f"output_blocks.{3*i + j}.1."
68
- unet_conversion_map_layer.append((sd_up_atn_prefix, hf_up_atn_prefix))
69
-
70
- if i < 3:
71
- # no downsample in down_blocks.3
72
- hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0.conv."
73
- sd_downsample_prefix = f"input_blocks.{3*(i+1)}.0.op."
74
- unet_conversion_map_layer.append((sd_downsample_prefix, hf_downsample_prefix))
75
-
76
- # no upsample in up_blocks.3
77
- hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0."
78
- sd_upsample_prefix = f"output_blocks.{3*i + 2}.{1 if i == 0 else 2}."
79
- unet_conversion_map_layer.append((sd_upsample_prefix, hf_upsample_prefix))
80
-
81
- hf_mid_atn_prefix = "mid_block.attentions.0."
82
- sd_mid_atn_prefix = "middle_block.1."
83
- unet_conversion_map_layer.append((sd_mid_atn_prefix, hf_mid_atn_prefix))
84
-
85
- for j in range(2):
86
- hf_mid_res_prefix = f"mid_block.resnets.{j}."
87
- sd_mid_res_prefix = f"middle_block.{2*j}."
88
- unet_conversion_map_layer.append((sd_mid_res_prefix, hf_mid_res_prefix))
89
-
90
-
91
- def convert_unet_state_dict(unet_state_dict):
92
- # buyer beware: this is a *brittle* function,
93
- # and correct output requires that all of these pieces interact in
94
- # the exact order in which I have arranged them.
95
- mapping = {k: k for k in unet_state_dict.keys()}
96
- for sd_name, hf_name in unet_conversion_map:
97
- mapping[hf_name] = sd_name
98
- for k, v in mapping.items():
99
- if "resnets" in k:
100
- for sd_part, hf_part in unet_conversion_map_resnet:
101
- v = v.replace(hf_part, sd_part)
102
- mapping[k] = v
103
- for k, v in mapping.items():
104
- for sd_part, hf_part in unet_conversion_map_layer:
105
- v = v.replace(hf_part, sd_part)
106
- mapping[k] = v
107
- new_state_dict = {v: unet_state_dict[k] for k, v in mapping.items()}
108
- return new_state_dict
109
-
110
-
111
- # ================#
112
- # VAE Conversion #
113
- # ================#
114
-
115
- vae_conversion_map = [
116
- # (stable-diffusion, HF Diffusers)
117
- ("nin_shortcut", "conv_shortcut"),
118
- ("norm_out", "conv_norm_out"),
119
- ("mid.attn_1.", "mid_block.attentions.0."),
120
- ]
121
-
122
- for i in range(4):
123
- # down_blocks have two resnets
124
- for j in range(2):
125
- hf_down_prefix = f"encoder.down_blocks.{i}.resnets.{j}."
126
- sd_down_prefix = f"encoder.down.{i}.block.{j}."
127
- vae_conversion_map.append((sd_down_prefix, hf_down_prefix))
128
-
129
- if i < 3:
130
- hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0."
131
- sd_downsample_prefix = f"down.{i}.downsample."
132
- vae_conversion_map.append((sd_downsample_prefix, hf_downsample_prefix))
133
-
134
- hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0."
135
- sd_upsample_prefix = f"up.{3-i}.upsample."
136
- vae_conversion_map.append((sd_upsample_prefix, hf_upsample_prefix))
137
-
138
- # up_blocks have three resnets
139
- # also, up blocks in hf are numbered in reverse from sd
140
- for j in range(3):
141
- hf_up_prefix = f"decoder.up_blocks.{i}.resnets.{j}."
142
- sd_up_prefix = f"decoder.up.{3-i}.block.{j}."
143
- vae_conversion_map.append((sd_up_prefix, hf_up_prefix))
144
-
145
- # this part accounts for mid blocks in both the encoder and the decoder
146
- for i in range(2):
147
- hf_mid_res_prefix = f"mid_block.resnets.{i}."
148
- sd_mid_res_prefix = f"mid.block_{i+1}."
149
- vae_conversion_map.append((sd_mid_res_prefix, hf_mid_res_prefix))
150
-
151
-
152
- vae_conversion_map_attn = [
153
- # (stable-diffusion, HF Diffusers)
154
- ("norm.", "group_norm."),
155
- ("q.", "query."),
156
- ("k.", "key."),
157
- ("v.", "value."),
158
- ("proj_out.", "proj_attn."),
159
- ]
160
-
161
-
162
- def reshape_weight_for_sd(w):
163
- # convert HF linear weights to SD conv2d weights
164
- return w.reshape(*w.shape, 1, 1)
165
-
166
-
167
- def convert_vae_state_dict(vae_state_dict):
168
- mapping = {k: k for k in vae_state_dict.keys()}
169
- for k, v in mapping.items():
170
- for sd_part, hf_part in vae_conversion_map:
171
- v = v.replace(hf_part, sd_part)
172
- mapping[k] = v
173
- for k, v in mapping.items():
174
- if "attentions" in k:
175
- for sd_part, hf_part in vae_conversion_map_attn:
176
- v = v.replace(hf_part, sd_part)
177
- mapping[k] = v
178
- new_state_dict = {v: vae_state_dict[k] for k, v in mapping.items()}
179
- weights_to_convert = ["q", "k", "v", "proj_out"]
180
- print("Converting to CKPT ...")
181
- for k, v in new_state_dict.items():
182
- for weight_name in weights_to_convert:
183
- if f"mid.attn_1.{weight_name}.weight" in k:
184
- new_state_dict[k] = reshape_weight_for_sd(v)
185
- return new_state_dict
186
-
187
-
188
- # =========================#
189
- # Text Encoder Conversion #
190
- # =========================#
191
- # pretty much a no-op
192
-
193
-
194
- def convert_text_enc_state_dict(text_enc_dict):
195
- return text_enc_dict
196
-
197
-
198
- def convert(model_path, checkpoint_path):
199
- unet_path = osp.join(model_path, "unet", "diffusion_pytorch_model.bin")
200
- vae_path = osp.join(model_path, "vae", "diffusion_pytorch_model.bin")
201
- text_enc_path = osp.join(model_path, "text_encoder", "pytorch_model.bin")
202
-
203
- # Convert the UNet model
204
- unet_state_dict = torch.load(unet_path, map_location='cpu')
205
- unet_state_dict = convert_unet_state_dict(unet_state_dict)
206
- unet_state_dict = {"model.diffusion_model." + k: v for k, v in unet_state_dict.items()}
207
-
208
- # Convert the VAE model
209
- vae_state_dict = torch.load(vae_path, map_location='cpu')
210
- vae_state_dict = convert_vae_state_dict(vae_state_dict)
211
- vae_state_dict = {"first_stage_model." + k: v for k, v in vae_state_dict.items()}
212
-
213
- # Convert the text encoder model
214
- text_enc_dict = torch.load(text_enc_path, map_location='cpu')
215
- text_enc_dict = convert_text_enc_state_dict(text_enc_dict)
216
- text_enc_dict = {"cond_stage_model.transformer." + k: v for k, v in text_enc_dict.items()}
217
-
218
- # Put together new checkpoint
219
- state_dict = {**unet_state_dict, **vae_state_dict, **text_enc_dict}
220
-
221
- state_dict = {k:v.half() for k,v in state_dict.items()}
222
- state_dict = {"state_dict": state_dict}
223
- torch.save(state_dict, checkpoint_path)
224
- del state_dict, text_enc_dict, vae_state_dict, unet_state_dict
225
- torch.cuda.empty_cache()
226
- gc.collect()
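
Note: the script imports argparse but this snippet defines no entry point. A typical wrapper would look like the following (the flag names are illustrative, not from this file):

    if __name__ == "__main__":
        parser = argparse.ArgumentParser(description="Convert a Diffusers pipeline to a .ckpt")
        parser.add_argument("--model_path", required=True, help="Diffusers pipeline directory")
        parser.add_argument("--checkpoint_path", required=True, help="output .ckpt path")
        args = parser.parse_args()
        convert(args.model_path, args.checkpoint_path)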
spaces/AUBADA-ALARABI/AraPoet/README.md DELETED
@@ -1,14 +0,0 @@
1
- ---
2
- title: AraPoet
3
- emoji: ✍️
4
- colorFrom: green
5
- colorTo: blue
6
- sdk: gradio
7
- sdk_version: 3.18.0
8
- app_file: app.py
9
- pinned: false
10
- license: gpl-3.0
11
- duplicated_from: Abdllh/AraPoet
12
- ---
13
-
14
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Abhilashvj/planogram-compliance/yolo_inference_util.py DELETED
@@ -1,369 +0,0 @@
1
- import argparse
2
- import sys
3
- from pathlib import Path
4
-
5
- import cv2
6
- import numpy as np
7
- import torch
8
- import torch.backends.cudnn as cudnn
9
-
10
- from models.experimental import attempt_load
11
- from utils.datasets import LoadImages, LoadStreams
12
- from utils.general import (
13
- apply_classifier,
14
- check_img_size,
15
- check_imshow,
16
- check_requirements,
17
- check_suffix,
18
- colorstr,
19
- increment_path,
20
- is_ascii,
21
- non_max_suppression,
22
- save_one_box,
23
- scale_coords,
24
- set_logging,
25
- strip_optimizer,
26
- xyxy2xywh,
27
- )
28
- from utils.plots import Annotator, colors
29
- from utils.torch_utils import load_classifier, select_device, time_sync
30
-
31
- # FILE = Path(__file__).resolve()
32
- # ROOT = FILE.parents[0] # YOLOv5 root directory
33
- # if str(ROOT) not in sys.path:
34
- # sys.path.append(str(ROOT)) # add ROOT to PATH
35
-
36
-
37
-
38
- @torch.no_grad()
39
- def run_yolo_v5(
40
- weights="yolov5s.pt", # model.pt path(s)
41
- source="data/images", # file/dir/URL/glob, 0 for webcam
42
- imgsz=640, # inference size (pixels)
43
- conf_thres=0.25, # confidence threshold
44
- iou_thres=0.45, # NMS IOU threshold
45
- max_det=1000, # maximum detections per image
46
- device="", # cuda device, i.e. 0 or 0,1,2,3 or cpu
47
- view_img=False, # show results
48
- save_txt=False, # save results to *.txt
49
- save_conf=False, # save confidences in --save-txt labels
50
- save_crop=False, # save cropped prediction boxes
51
- nosave=False, # do not save images/videos
52
- classes=None, # filter by class: --class 0, or --class 0 2 3
53
- agnostic_nms=False, # class-agnostic NMS
54
- augment=False, # augmented inference
55
- visualize=False, # visualize features
56
- update=False, # update all models
57
- project="runs/detect", # save results to project/name
58
- name="exp", # save results to project/name
59
- exist_ok=False, # existing project/name ok, do not increment
60
- line_thickness=3, # bounding box thickness (pixels)
61
- hide_labels=False, # hide labels
62
- hide_conf=False, # hide confidences
63
- half=False, # use FP16 half-precision inference
64
- ):
65
- save_img = not nosave and not source.endswith(
66
- ".txt"
67
- ) # save inference images
68
- webcam = (
69
- source.isnumeric()
70
- or source.endswith(".txt")
71
- or source.lower().startswith(
72
- ("rtsp://", "rtmp://", "http://", "https://")
73
- )
74
- )
75
-
76
- # Directories
77
- save_dir = increment_path(
78
- Path(project) / name, exist_ok=exist_ok
79
- ) # increment run
80
- (save_dir / "labels" if save_txt else save_dir).mkdir(
81
- parents=True, exist_ok=True
82
- ) # make dir
83
-
84
- # Initialize
85
- set_logging()
86
- device = select_device(device)
87
- half &= device.type != "cpu" # half precision only supported on CUDA
88
-
89
- # Load model
90
- w = weights[0] if isinstance(weights, list) else weights
91
- classify, suffix, suffixes = (
92
- False,
93
- Path(w).suffix.lower(),
94
- [".pt", ".onnx", ".tflite", ".pb", ""],
95
- )
96
- check_suffix(w, suffixes) # check weights have acceptable suffix
97
- pt, onnx, tflite, pb, saved_model = (
98
- suffix == x for x in suffixes
99
- ) # backend booleans
100
- stride, names = 64, [f"class{i}" for i in range(1000)] # assign defaults
101
- if pt:
102
- model = attempt_load(weights, map_location=device) # load FP32 model
103
- stride = int(model.stride.max()) # model stride
104
- names = (
105
- model.module.names if hasattr(model, "module") else model.names
106
- ) # get class names
107
- if half:
108
- model.half() # to FP16
109
- if classify: # second-stage classifier
110
- modelc = load_classifier(name="resnet50", n=2) # initialize
111
- modelc.load_state_dict(
112
- torch.load("resnet50.pt", map_location=device)["model"]
113
- )  # load_state_dict returns a result object, not the module, so chain from modelc itself
- modelc.to(device).eval()
114
- elif onnx:
115
- check_requirements(("onnx", "onnxruntime"))
116
- import onnxruntime
117
-
118
- session = onnxruntime.InferenceSession(w, None)
119
- else: # TensorFlow models
120
- check_requirements(("tensorflow>=2.4.1",))
121
- import tensorflow as tf
122
-
123
- if (
124
- pb
125
- ): # https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt
126
-
127
- def wrap_frozen_graph(gd, inputs, outputs):
128
- x = tf.compat.v1.wrap_function(
129
- lambda: tf.compat.v1.import_graph_def(gd, name=""), []
130
- ) # wrapped import
131
- return x.prune(
132
- tf.nest.map_structure(x.graph.as_graph_element, inputs),
133
- tf.nest.map_structure(x.graph.as_graph_element, outputs),
134
- )
135
-
136
- graph_def = tf.Graph().as_graph_def()
137
- graph_def.ParseFromString(open(w, "rb").read())
138
- frozen_func = wrap_frozen_graph(
139
- gd=graph_def, inputs="x:0", outputs="Identity:0"
140
- )
141
- elif saved_model:
142
- model = tf.keras.models.load_model(w)
143
- elif tflite:
144
- interpreter = tf.lite.Interpreter(
145
- model_path=w
146
- ) # load TFLite model
147
- interpreter.allocate_tensors() # allocate
148
- input_details = interpreter.get_input_details() # inputs
149
- output_details = interpreter.get_output_details() # outputs
150
- int8 = (
151
- input_details[0]["dtype"] == np.uint8
152
- ) # is TFLite quantized uint8 model
153
- imgsz = check_img_size(imgsz, s=stride) # check image size
154
- ascii = is_ascii(names) # names are ascii (use PIL for UTF-8)
155
-
156
- # Dataloader
157
- print("Loading data from the source", source)
158
- if webcam:
159
- view_img = check_imshow()
160
- cudnn.benchmark = (
161
- True # set True to speed up constant image size inference
162
- )
163
- dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt)
164
- bs = len(dataset) # batch_size
165
- else:
166
- dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt)
167
- bs = 1 # batch_size
168
- vid_path, vid_writer = [None] * bs, [None] * bs
169
-
170
- # Run inference
171
- if pt and device.type != "cpu":
172
- model(
173
- torch.zeros(1, 3, *imgsz)
174
- .to(device)
175
- .type_as(next(model.parameters()))
176
- ) # run once
177
- dt, seen = [0.0, 0.0, 0.0], 0
178
- results = []
179
- for path, img, im0s, vid_cap in dataset:
180
- t1 = time_sync()
181
- if onnx:
182
- img = img.astype("float32")
183
- else:
184
- img = torch.from_numpy(img).to(device)
185
- img = img.half() if half else img.float() # uint8 to fp16/32
186
- img = img / 255.0 # 0 - 255 to 0.0 - 1.0
187
- if len(img.shape) == 3:
188
- img = img[None] # expand for batch dim
189
- t2 = time_sync()
190
- dt[0] += t2 - t1
191
-
192
- # Inference
193
- if pt:
194
- visualize = (
195
- increment_path(save_dir / Path(path).stem, mkdir=True)
196
- if visualize
197
- else False
198
- )
199
- pred = model(img, augment=augment, visualize=visualize)[0]
200
- elif onnx:
201
- pred = torch.tensor(
202
- session.run(
203
- [session.get_outputs()[0].name],
204
- {session.get_inputs()[0].name: img},
205
- )
206
- )
207
- else: # tensorflow model (tflite, pb, saved_model)
208
- imn = img.permute(0, 2, 3, 1).cpu().numpy() # image in numpy
209
- if pb:
210
- pred = frozen_func(x=tf.constant(imn)).numpy()
211
- elif saved_model:
212
- pred = model(imn, training=False).numpy()
213
- elif tflite:
214
- if int8:
215
- scale, zero_point = input_details[0]["quantization"]
216
- imn = (imn / scale + zero_point).astype(
217
- np.uint8
218
- ) # de-scale
219
- interpreter.set_tensor(input_details[0]["index"], imn)
220
- interpreter.invoke()
221
- pred = interpreter.get_tensor(output_details[0]["index"])
222
- if int8:
223
- scale, zero_point = output_details[0]["quantization"]
224
- pred = (
225
- pred.astype(np.float32) - zero_point
226
- ) * scale # re-scale
227
- pred[..., 0] *= imgsz[1] # x
228
- pred[..., 1] *= imgsz[0] # y
229
- pred[..., 2] *= imgsz[1] # w
230
- pred[..., 3] *= imgsz[0] # h
231
- pred = torch.tensor(pred)
232
- t3 = time_sync()
233
- dt[1] += t3 - t2
234
-
235
- # NMS
236
- pred = non_max_suppression(
237
- pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det
238
- )
239
- dt[2] += time_sync() - t3
240
-
241
- # Second-stage classifier (optional)
242
- if classify:
243
- pred = apply_classifier(pred, modelc, img, im0s)
244
-
245
- # Process predictions
246
- for i, det in enumerate(pred): # per image
247
- seen += 1
248
- if webcam: # batch_size >= 1
249
- p, s, im0, frame = (
250
- path[i],
251
- f"{i}: ",
252
- im0s[i].copy(),
253
- dataset.count,
254
- )
255
- else:
256
- p, s, im0, frame = (
257
- path,
258
- "",
259
- im0s.copy(),
260
- getattr(dataset, "frame", 0),
261
- )
262
-
263
- p = Path(p) # to Path
264
- save_path = str(save_dir / p.name) # img.jpg
265
- txt_path = str(save_dir / "labels" / p.stem) + (
266
- "" if dataset.mode == "image" else f"_{frame}"
267
- ) # img.txt
268
- s += "%gx%g " % img.shape[2:] # print string
269
- gn = torch.tensor(im0.shape)[
270
- [1, 0, 1, 0]
271
- ] # normalization gain whwh
272
- imc = im0.copy() if save_crop else im0 # for save_crop
273
- annotator = Annotator(
274
- im0, line_width=line_thickness, pil=not ascii
275
- )
276
- if len(det):
277
- # Rescale boxes from img_size to im0 size
278
- det[:, :4] = scale_coords(
279
- img.shape[2:], det[:, :4], im0.shape
280
- ).round()
281
- results.append((im0, det))
282
- # Print results
283
- for c in det[:, -1].unique():
284
- n = (det[:, -1] == c).sum() # detections per class
285
- s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string
286
-
287
- # Write results
288
- for *xyxy, conf, cls in reversed(det):
289
- if save_txt: # Write to file
290
- xywh = (
291
- (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn)
292
- .view(-1)
293
- .tolist()
294
- ) # normalized xywh
295
- line = (
296
- (cls, *xywh, conf) if save_conf else (cls, *xywh)
297
- ) # label format
298
- with open(txt_path + ".txt", "a") as f:
299
- f.write(("%g " * len(line)).rstrip() % line + "\n")
300
-
301
- if save_img or save_crop or view_img: # Add bbox to image
302
- c = int(cls) # integer class
303
- label = (
304
- None
305
- if hide_labels
306
- else (
307
- names[c]
308
- if hide_conf
309
- else f"{names[c]} {conf:.2f}"
310
- )
311
- )
312
- annotator.box_label(xyxy, label, color=colors(c, True))
313
- if save_crop:
314
- save_one_box(
315
- xyxy,
316
- imc,
317
- file=save_dir
318
- / "crops"
319
- / names[c]
320
- / f"{p.stem}.jpg",
321
- BGR=True,
322
- )
323
- # Print time (inference-only)
324
- print(f"{s}Done. ({t3 - t2:.3f}s)")
325
-
326
- # Stream results
327
- im0 = annotator.result()
328
- if view_img:
329
- cv2.imshow(str(p), im0)
330
- cv2.waitKey(1) # 1 millisecond
331
-
332
- # Save results (image with detections)
333
- if save_img:
334
- if dataset.mode == "image":
335
- cv2.imwrite(save_path, im0)
336
- else: # 'video' or 'stream'
337
- if vid_path[i] != save_path: # new video
338
- vid_path[i] = save_path
339
- if isinstance(vid_writer[i], cv2.VideoWriter):
340
- vid_writer[
341
- i
342
- ].release() # release previous video writer
343
- if vid_cap: # video
344
- fps = vid_cap.get(cv2.CAP_PROP_FPS)
345
- w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
346
- h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
347
- else: # stream
348
- fps, w, h = 30, im0.shape[1], im0.shape[0]
349
- save_path += ".mp4"
350
- vid_writer[i] = cv2.VideoWriter(
351
- save_path,
352
- cv2.VideoWriter_fourcc(*"mp4v"),
353
- fps,
354
- (w, h),
355
- )
356
- vid_writer[i].write(im0)
357
-
358
- # Print results
359
- t = tuple(x / seen * 1e3 for x in dt) # speeds per image
360
- print(
361
- f"Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}"
362
- % t
363
- )
364
- return results
365
- # if save_txt or save_img:
366
- # s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
367
- # print(f"Results saved to {colorstr('bold', save_dir)}{s}")
368
- # if update:
369
- # strip_optimizer(weights) # update model (to fix SourceChangeWarning)
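
Note: run_yolo_v5 returns a list of (annotated image, detections) pairs, one per image with at least one detection. A hedged usage sketch ("yolov5s.pt" and "shelf.jpg" are assumed to exist locally):

    results = run_yolo_v5(weights="yolov5s.pt", source="shelf.jpg", conf_thres=0.4)
    for im0, det in results:
        # each row of det is (x1, y1, x2, y2, confidence, class) in im0 pixel coordinates
        print(det.shape[0], "detections")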
spaces/AdamWEE80/VoiceTTS/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: VoiceTTS
3
- emoji: 🐨
4
- colorFrom: pink
5
- colorTo: yellow
6
- sdk: gradio
7
- sdk_version: 3.24.1
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AgentVerse/agentVerse/dataloader/gsm8k.py DELETED
@@ -1,22 +0,0 @@
1
- from .dataloader import DataLoader
2
- from . import dataloader_registry
3
- import json
4
- import re
5
-
6
-
7
- @dataloader_registry.register("tasksolving/gsm8k")
8
- class GSM8KLoader(DataLoader):
9
- def __init__(self, path: str):
10
- self.answer_pat = re.compile(r"#### (-?\d+)")
11
- super().__init__(path)
12
-
13
- def load(self):
14
- with open(self.path) as f:
15
- for line in f:
16
- line = json.loads(line)
17
- self.examples.append(
18
- {
19
- "input": line["question"],
20
- "answer": line["answer"].split('#### ')[-1],
21
- }
22
- )
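
Note: self.answer_pat is compiled but unused in this snippet; load() extracts the final answer by splitting on "#### " instead. A sketch of the expected JSONL record:

    import json

    line = json.loads('{"question": "What is 2+2?", "answer": "2+2=4\\n#### 4"}')
    print(line["answer"].split("#### ")[-1])  # -> "4"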
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/PaddingMethods.js DELETED
@@ -1,36 +0,0 @@
1
- import { GetPadding, SetPadding } from '../../../plugins/utils/padding/PaddingMethods.js';
2
-
3
- export default {
4
- getInnerPadding(key) {
5
- return GetPadding(this.space, key);
6
- },
7
-
8
- setInnerPadding(key, value) {
9
- SetPadding(this.space, key, value);
10
- return this;
11
- },
12
-
13
- getOuterPadding(key) {
14
- return GetPadding(this.getSizerConfig(this).padding, key);
15
- },
16
-
17
- setOuterPadding(key, value) {
18
- SetPadding(this.getSizerConfig(this).padding, key, value);
19
- return this;
20
- },
21
-
22
- getChildOuterPadding(child, key) {
23
- if (typeof (child) === 'string') {
24
- child = this.getElement(child);
25
- }
26
- return GetPadding(this.getSizerConfig(child).padding, key);
27
- },
28
-
29
- setChildOuterPadding(child, key, value) {
30
- if (typeof (child) === 'string') {
31
- child = this.getElement(child);
32
- }
33
- SetPadding(this.getSizerConfig(child).padding, key, value);
34
- return this;
35
- },
36
- }
spaces/Aitor/CVchat/app.py DELETED
@@ -1,85 +0,0 @@
1
- import os
2
-
3
- import gradio as gr
4
- import requests
5
- from langchain.chains import RetrievalQA
6
- from langchain.document_loaders import PDFMinerLoader
7
- from langchain.indexes import VectorstoreIndexCreator
8
- from langchain.llms import OpenAI
9
-
10
-
11
- def set_openai_key(raw_key):
12
- # Check if the API is valid
13
- headers = {"Authorization": f"Bearer {raw_key}"}
14
- response = requests.get("https://api.openai.com/v1/engines", headers=headers)
15
- if response.status_code != 200:
16
- raise gr.Error("API key is not valid. Check the key and try again.")
17
-
18
- os.environ["OPENAI_API_KEY"] = raw_key
19
- return gr.File.update(interactive=True), gr.Button.update(interactive=True)
20
-
21
-
22
- def create_langchain(pdf_object):
23
- loader = PDFMinerLoader(pdf_object.name)
24
- index_creator = VectorstoreIndexCreator()
25
- docsearch = index_creator.from_loaders([loader])
26
- chain = RetrievalQA.from_chain_type(
27
- llm=OpenAI(),
28
- chain_type="stuff",
29
- retriever=docsearch.vectorstore.as_retriever(),
30
- input_key="question",
31
- verbose=True,
32
- return_source_documents=True,
33
- )
34
- return chain, gr.Button.update(interactive=True)
35
-
36
-
37
- def ask_question(chain, question_text):
38
- return chain({"question": question_text})["result"]
39
-
40
-
41
- with gr.Blocks() as demo:
42
- # Sate objects
43
- chain_state = gr.State()
44
-
45
- # Layout
46
- oai_token = gr.Textbox(
47
- label="OpenAI Token",
48
- placeholder="Lm-iIas452gaw3erGtPar26gERGSA5RVkFJQST23WEG524EWEl",
49
- )
50
-
51
- pdf_object = gr.File(
52
- label="Upload your CV in PDF format",
53
- file_count="single",
54
- type="file",
55
- interactive=False,
56
- )
57
- gr.Examples(
58
- examples=[
59
- os.path.join(os.path.abspath(""), "sample_data", "CV_AITOR_MIRA.pdf")
60
- ],
61
- inputs=pdf_object,
62
- label="Example CV",
63
- )
64
- create_chain_btn = gr.Button(value="Create CVchat", interactive=False)
65
-
66
- question_placeholder = """Enumerate the candidate's top 5 hard skills and rate them by importance from 0 to 5.
67
- Example:
68
- - Algebra 5/5"""
69
- question_box = gr.Textbox(label="Question", value=question_placeholder)
70
- qa_button = gr.Button(value="Submit question", interactive=False)
71
-
72
- # Actions
73
- oai_token.change(
74
- set_openai_key, inputs=oai_token, outputs=[pdf_object, create_chain_btn]
75
- )
76
- lchain = create_chain_btn.click(
77
- create_langchain, inputs=pdf_object, outputs=[chain_state, qa_button]
78
- )
79
- qa_button.click(
80
- ask_question,
81
- inputs=[chain_state, question_box],
82
- outputs=gr.Textbox(label="Answer"),
83
- )
84
-
85
- demo.launch(debug=True)
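
Note: the chain can also be exercised outside Gradio. A hedged sketch, assuming OPENAI_API_KEY is already set and a local "cv.pdf" exists (create_langchain only needs an object exposing a .name attribute):

    class _FileStub:
        name = "cv.pdf"

    chain, _ = create_langchain(_FileStub())
    print(ask_question(chain, "Summarize the candidate's experience in two sentences."))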
spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/data_utils.py DELETED
@@ -1,267 +0,0 @@
1
- import time
2
- import os
3
- import random
4
- import numpy as np
5
- import torch
6
- import torch.utils.data
7
- import torchaudio
8
-
9
- import commons
10
- from mel_processing import spectrogram_torch
11
- from utils import load_wav_to_torch, load_filepaths_and_text
12
- from text import text_to_sequence, cleaned_text_to_sequence
13
- """Multi speaker version"""
14
-
15
-
16
- class TextAudioSpeakerLoader(torch.utils.data.Dataset):
17
- """
18
- 1) loads audio, speaker_id, text pairs
19
- 2) normalizes text and converts them to sequences of integers
20
- 3) computes spectrograms from audio files.
21
- """
22
-
23
- def __init__(self, audiopaths_sid_text, hparams, symbols):
24
- self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text)
25
- self.text_cleaners = hparams.text_cleaners
26
- self.max_wav_value = hparams.max_wav_value
27
- self.sampling_rate = hparams.sampling_rate
28
- self.filter_length = hparams.filter_length
29
- self.hop_length = hparams.hop_length
30
- self.win_length = hparams.win_length
31
- self.sampling_rate = hparams.sampling_rate
32
-
33
- self.cleaned_text = getattr(hparams, "cleaned_text", False)
34
-
35
- self.add_blank = hparams.add_blank
36
- self.min_text_len = getattr(hparams, "min_text_len", 1)
37
- self.max_text_len = getattr(hparams, "max_text_len", 190)
38
- self.symbols = symbols
39
-
40
- random.seed(1234)
41
- random.shuffle(self.audiopaths_sid_text)
42
- self._filter()
43
-
44
- def _filter(self):
45
- """
46
- Filter text & store spec lengths
47
- """
48
- # Store spectrogram lengths for Bucketing
49
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
50
- # spec_length = wav_length // hop_length
51
-
52
- audiopaths_sid_text_new = []
53
- lengths = []
54
- for audiopath, sid, text in self.audiopaths_sid_text:
55
- # audiopath = "./user_voice/" + audiopath
56
-
57
- if self.min_text_len <= len(text) <= self.max_text_len:
58
- audiopaths_sid_text_new.append([audiopath, sid, text])
59
- lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length))
60
- self.audiopaths_sid_text = audiopaths_sid_text_new
61
- self.lengths = lengths
62
-
63
- def get_audio_text_speaker_pair(self, audiopath_sid_text):
64
- # separate filename, speaker_id and text
65
- audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2]
66
- text = self.get_text(text)
67
- spec, wav = self.get_audio(audiopath)
68
- sid = self.get_sid(sid)
69
- return (text, spec, wav, sid)
70
-
71
- def get_audio(self, filename):
72
- # audio, sampling_rate = load_wav_to_torch(filename)
73
- # if sampling_rate != self.sampling_rate:
74
- # raise ValueError("{} {} SR doesn't match target {} SR".format(
75
- # sampling_rate, self.sampling_rate))
76
- # audio_norm = audio / self.max_wav_value if audio.max() > 10 else audio
77
- # audio_norm = audio_norm.unsqueeze(0)
78
- audio_norm, sampling_rate = torchaudio.load(filename, frame_offset=0, num_frames=-1, normalize=True, channels_first=True)
79
- # spec_filename = filename.replace(".wav", ".spec.pt")
80
- # if os.path.exists(spec_filename):
81
- # spec = torch.load(spec_filename)
82
- # else:
83
- # try:
84
- spec = spectrogram_torch(audio_norm, self.filter_length,
85
- self.sampling_rate, self.hop_length, self.win_length,
86
- center=False)
87
- spec = spec.squeeze(0)
88
- # except NotImplementedError:
89
- # print("?")
90
- # spec = torch.squeeze(spec, 0)
91
- # torch.save(spec, spec_filename)
92
- return spec, audio_norm
93
-
94
- def get_text(self, text):
95
- if self.cleaned_text:
96
- text_norm = cleaned_text_to_sequence(text, self.symbols)
97
- else:
98
- text_norm = text_to_sequence(text, self.text_cleaners)
99
- if self.add_blank:
100
- text_norm = commons.intersperse(text_norm, 0)
101
- text_norm = torch.LongTensor(text_norm)
102
- return text_norm
103
-
104
- def get_sid(self, sid):
105
- sid = torch.LongTensor([int(sid)])
106
- return sid
107
-
108
- def __getitem__(self, index):
109
- return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index])
110
-
111
- def __len__(self):
112
- return len(self.audiopaths_sid_text)
113
-
114
-
115
- class TextAudioSpeakerCollate():
116
- """ Zero-pads model inputs and targets
117
- """
118
-
119
- def __init__(self, return_ids=False):
120
- self.return_ids = return_ids
121
-
122
- def __call__(self, batch):
123
- """Collate's training batch from normalized text, audio and speaker identities
124
- PARAMS
125
- ------
126
- batch: [text_normalized, spec_normalized, wav_normalized, sid]
127
- """
128
- # Right zero-pad all one-hot text sequences to max input length
129
- _, ids_sorted_decreasing = torch.sort(
130
- torch.LongTensor([x[1].size(1) for x in batch]),
131
- dim=0, descending=True)
132
-
133
- max_text_len = max([len(x[0]) for x in batch])
134
- max_spec_len = max([x[1].size(1) for x in batch])
135
- max_wav_len = max([x[2].size(1) for x in batch])
136
-
137
- text_lengths = torch.LongTensor(len(batch))
138
- spec_lengths = torch.LongTensor(len(batch))
139
- wav_lengths = torch.LongTensor(len(batch))
140
- sid = torch.LongTensor(len(batch))
141
-
142
- text_padded = torch.LongTensor(len(batch), max_text_len)
143
- spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len)
144
- wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
145
- text_padded.zero_()
146
- spec_padded.zero_()
147
- wav_padded.zero_()
148
- for i in range(len(ids_sorted_decreasing)):
149
- row = batch[ids_sorted_decreasing[i]]
150
-
151
- text = row[0]
152
- text_padded[i, :text.size(0)] = text
153
- text_lengths[i] = text.size(0)
154
-
155
- spec = row[1]
156
- spec_padded[i, :, :spec.size(1)] = spec
157
- spec_lengths[i] = spec.size(1)
158
-
159
- wav = row[2]
160
- wav_padded[i, :, :wav.size(1)] = wav
161
- wav_lengths[i] = wav.size(1)
162
-
163
- sid[i] = row[3]
164
-
165
- if self.return_ids:
166
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing
167
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid
168
-
169
-
170
- class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler):
171
- """
172
- Maintain similar input lengths in a batch.
173
- Length groups are specified by boundaries.
174
- Ex) boundaries = [b1, b2, b3] -> any batch is included in either {x | b1 < length(x) <= b2} or {x | b2 < length(x) <= b3}.
175
-
176
- It removes samples which are not included in the boundaries.
177
- Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 is discarded.
178
- """
179
-
180
- def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True):
181
- super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
182
- self.lengths = dataset.lengths
183
- self.batch_size = batch_size
184
- self.boundaries = boundaries
185
-
186
- self.buckets, self.num_samples_per_bucket = self._create_buckets()
187
- self.total_size = sum(self.num_samples_per_bucket)
188
- self.num_samples = self.total_size // self.num_replicas
189
-
190
- def _create_buckets(self):
191
- buckets = [[] for _ in range(len(self.boundaries) - 1)]
192
- for i in range(len(self.lengths)):
193
- length = self.lengths[i]
194
- idx_bucket = self._bisect(length)
195
- if idx_bucket != -1:
196
- buckets[idx_bucket].append(i)
197
-
198
- for i in range(len(buckets) - 1, 0, -1):
199
- if len(buckets[i]) == 0:
200
- buckets.pop(i)
201
- self.boundaries.pop(i + 1)
202
-
203
- num_samples_per_bucket = []
204
- for i in range(len(buckets)):
205
- len_bucket = len(buckets[i])
206
- total_batch_size = self.num_replicas * self.batch_size
207
- rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size
208
- num_samples_per_bucket.append(len_bucket + rem)
209
- return buckets, num_samples_per_bucket
210
-
211
- def __iter__(self):
212
- # deterministically shuffle based on epoch
213
- g = torch.Generator()
214
- g.manual_seed(self.epoch)
215
-
216
- indices = []
217
- if self.shuffle:
218
- for bucket in self.buckets:
219
- indices.append(torch.randperm(len(bucket), generator=g).tolist())
220
- else:
221
- for bucket in self.buckets:
222
- indices.append(list(range(len(bucket))))
223
-
224
- batches = []
225
- for i in range(len(self.buckets)):
226
- bucket = self.buckets[i]
227
- len_bucket = len(bucket)
228
- ids_bucket = indices[i]
229
- num_samples_bucket = self.num_samples_per_bucket[i]
230
-
231
- # add extra samples to make it evenly divisible
232
- rem = num_samples_bucket - len_bucket
233
- ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)]
234
-
235
- # subsample
236
- ids_bucket = ids_bucket[self.rank::self.num_replicas]
237
-
238
- # batching
239
- for j in range(len(ids_bucket) // self.batch_size):
240
- batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]]
241
- batches.append(batch)
242
-
243
- if self.shuffle:
244
- batch_ids = torch.randperm(len(batches), generator=g).tolist()
245
- batches = [batches[i] for i in batch_ids]
246
- self.batches = batches
247
-
248
- assert len(self.batches) * self.batch_size == self.num_samples
249
- return iter(self.batches)
250
-
251
- def _bisect(self, x, lo=0, hi=None):
252
- if hi is None:
253
- hi = len(self.boundaries) - 1
254
-
255
- if hi > lo:
256
- mid = (hi + lo) // 2
257
- if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]:
258
- return mid
259
- elif x <= self.boundaries[mid]:
260
- return self._bisect(x, lo, mid)
261
- else:
262
- return self._bisect(x, mid + 1, hi)
263
- else:
264
- return -1
265
-
266
- def __len__(self):
267
- return self.num_samples // self.batch_size
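
Note: these three classes are meant to be wired together. A hedged sketch following the usual VITS train.py conventions (hps, symbols, and the filelist path are assumptions here):

    from torch.utils.data import DataLoader

    train_dataset = TextAudioSpeakerLoader("filelists/train.txt", hps.data, symbols)
    train_sampler = DistributedBucketSampler(
        train_dataset, batch_size=16,
        boundaries=[32, 300, 400, 500, 600, 700, 800, 900, 1000],
        num_replicas=1, rank=0, shuffle=True)
    train_loader = DataLoader(train_dataset, num_workers=4, pin_memory=True,
                              collate_fn=TextAudioSpeakerCollate(),
                              batch_sampler=train_sampler)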
spaces/AlekseyKorshuk/gai-project/modules/playground.py DELETED
@@ -1,142 +0,0 @@
1
- from functools import partial
2
-
3
- import gradio as gr
4
-
5
- import config
6
- from modules import utils
7
- from modules import common
8
- from modules.models import GuanacoModel, ChaiBot
9
-
10
-
11
- def render_playground(demo):
12
- # set inital states
13
- bot_config = utils.get_bot_config(config.DEFAULT_BOT_NAME)
14
- bot_state = gr.State(bot_config)
15
- convo_state = common.get_convo_state(bot_config)
16
-
17
- # render widgets
18
- render_header()
19
- common.render_section_separator("Set up")
20
- model_tag = common.render_model_selector()
21
- bot_profile, bot_selector = common.render_bot_profile(bot_config)
22
- bot_config_text = common.render_bot_config(bot_config)
23
-
24
- common.render_section_separator("Chat")
25
- dialog = render_dialog(bot_config)
26
-
27
- # set default model state according to database
28
- model_state = common.get_model_state(config.DEFAULT_MODEL)
29
-
30
- # render submit buttons and parameter sliders
31
- msg, send, regenerate, clear = common.render_chat_buttons()
32
-
33
- # set callbacks
34
- bot_selector.change(
35
- _reload_bot,
36
- [bot_selector, bot_profile],
37
- [bot_profile, convo_state, dialog, bot_state, bot_config_text],
38
- queue=False
39
- )
40
-
41
- model_tag.change(
42
- _clear_chat,
43
- [dialog, bot_state],
44
- [dialog],
45
- queue=False
46
- )
47
- send.click(
48
- _respond,
49
- [msg, convo_state, dialog, model_state],
50
- [msg, dialog],
51
- queue=False
52
- )
53
- msg.submit(
54
- _respond,
55
- [msg, convo_state, dialog, model_state],
56
- [msg, dialog],
57
- queue=False
58
- )
59
- regenerate.click(
60
- _regenerate_response,
61
- [convo_state, dialog, model_state],
62
- [dialog],
63
- queue=False
64
- )
65
- clear.click(
66
- _clear_chat,
67
- [dialog, bot_state],
68
- [dialog],
69
- queue=False
70
- )
71
-
72
-
73
- def _update_model_parameter_slider(slider, params_state, label):
74
- params_state.update({label: slider})
75
- return params_state
76
-
77
-
78
- def render_header():
79
- gr.Markdown("""
80
- # Playground
81
- """)
82
-
83
-
84
- def render_dialog(bot_config):
85
- first_message = (None, bot_config["firstMessage"])
86
- dialog = gr.Chatbot([first_message])
87
- return dialog
88
-
89
-
90
- def _reload_bot(bot_selector, bot_profile):
91
- bot_selector = bot_selector or config.DEFAULT_BOT_NAME
92
- bot_config = utils.get_bot_config(bot_selector)
93
- bot_profile = utils.get_bot_picture_html(bot_config)
94
- convo_state = ChaiBot(bot_config)
95
- bot_config_text = f"# Memory\n{bot_config.get('memory', '')}\n# Prompt\n{bot_config.get('prompt', '')}"
96
- dialog_st = [(None, bot_config["firstMessage"])]
97
- return bot_profile, convo_state, dialog_st, bot_config, bot_config_text
98
-
99
-
100
- def _respond(user_message, chaibot, chat_history, model):
101
- chaibot.add_user_message(user_message)
102
- bot_response = model.generate_response(chaibot)
103
- chaibot.add_bot_message(bot_response)
104
- chat_history.append(
105
- (user_message, bot_response)
106
- )
107
- return "", chat_history
108
-
109
-
110
- def _clear_chat(chat_history, bot_state):
111
- chat_history = [(None, bot_state["firstMessage"])]
112
- return chat_history
113
-
114
-
115
- def _regenerate_response(chaibot, chat_history, model):
116
- chaibot.messages.pop()
117
- chat_history.pop()
118
- user_message = chaibot.messages[-1][-1]
119
- bot_response = model.generate_response(chaibot)
120
- chaibot.add_bot_message(bot_response)
121
- chat_history.append(
122
- (user_message, bot_response)
123
- )
124
- return chat_history
125
-
126
-
127
- def _get_model(model_tag):
128
- model = GuanacoModel(model_tag)
129
- return model
130
-
131
-
132
- def _parse_model_parameters_from_bot_id(model_tag):
133
- model = _get_model(model_tag)
134
- out = [
135
- model.config.generation_params["temperature"],
136
- model.config.generation_params["repetition_penalty"],
137
- model.config.generation_params["max_new_tokens"],
138
- model.config.generation_params["top_k"],
139
- model.config.generation_params["top_p"],
140
- model
141
- ]
142
- return out
spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/cantonese.py DELETED
@@ -1,59 +0,0 @@
1
- import re
2
- import cn2an
3
- import opencc
4
-
5
-
6
- converter = opencc.OpenCC('jyutjyu')
7
-
8
- # List of (Latin alphabet, ipa) pairs:
9
- _latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
10
- ('A', 'ei˥'),
11
- ('B', 'biː˥'),
12
- ('C', 'siː˥'),
13
- ('D', 'tiː˥'),
14
- ('E', 'iː˥'),
15
- ('F', 'e˥fuː˨˩'),
16
- ('G', 'tsiː˥'),
17
- ('H', 'ɪk̚˥tsʰyː˨˩'),
18
- ('I', 'ɐi˥'),
19
- ('J', 'tsei˥'),
20
- ('K', 'kʰei˥'),
21
- ('L', 'e˥llou˨˩'),
22
- ('M', 'ɛːm˥'),
23
- ('N', 'ɛːn˥'),
24
- ('O', 'ou˥'),
25
- ('P', 'pʰiː˥'),
26
- ('Q', 'kʰiːu˥'),
27
- ('R', 'aː˥lou˨˩'),
28
- ('S', 'ɛː˥siː˨˩'),
29
- ('T', 'tʰiː˥'),
30
- ('U', 'juː˥'),
31
- ('V', 'wiː˥'),
32
- ('W', 'tʊk̚˥piː˥juː˥'),
33
- ('X', 'ɪk̚˥siː˨˩'),
34
- ('Y', 'waːi˥'),
35
- ('Z', 'iː˨sɛːt̚˥')
36
- ]]
37
-
38
-
39
- def number_to_cantonese(text):
40
- return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text)
41
-
42
-
43
- def latin_to_ipa(text):
44
- for regex, replacement in _latin_to_ipa:
45
- text = re.sub(regex, replacement, text)
46
- return text
47
-
48
-
49
- def cantonese_to_ipa(text):
50
- text = number_to_cantonese(text.upper())
51
- text = converter.convert(text).replace('-','').replace('$',' ')
52
- text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text)
53
- text = re.sub(r'[、;:]', ',', text)
54
- text = re.sub(r'\s*,\s*', ', ', text)
55
- text = re.sub(r'\s*。\s*', '. ', text)
56
- text = re.sub(r'\s*?\s*', '? ', text)
57
- text = re.sub(r'\s*!\s*', '! ', text)
58
- text = re.sub(r'\s*$', '', text)
59
- return text
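
Note: a minimal usage sketch; the result depends on the "jyutjyu" OpenCC dictionary being available to opencc at runtime:

    print(cantonese_to_ipa("你好,世界!"))  # numbers are expanded and punctuation normalized first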
spaces/Amrrs/image-caption-with-vit-gpt2/app.py DELETED
@@ -1,78 +0,0 @@
- # -*- coding: utf-8 -*-
- """Image Captioning with ViT+GPT2
-
- Automatically generated by Colaboratory.
-
- Original file is located at
-     https://colab.research.google.com/drive/1P3O0gO5AUqSmM8rE9dxy2tXJ-9jkhxHz
- """
-
- #! pip install transformers -q
-
- #! pip install gradio -q
-
- from PIL import Image
- from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, PreTrainedTokenizerFast
- import requests
-
- model = VisionEncoderDecoderModel.from_pretrained("sachin/vit2distilgpt2")
-
- vit_feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
-
- tokenizer = PreTrainedTokenizerFast.from_pretrained("distilgpt2")
-
- # url = 'https://d2gp644kobdlm6.cloudfront.net/wp-content/uploads/2016/06/bigstock-Shocked-and-surprised-boy-on-t-113798588-300x212.jpg'
-
- # with Image.open(requests.get(url, stream=True).raw) as img:
- #     pixel_values = vit_feature_extractor(images=img, return_tensors="pt").pixel_values
-
- # encoder_outputs = model.generate(pixel_values.to('cpu'), num_beams=5)
-
- # generated_sentences = tokenizer.batch_decode(encoder_outputs, skip_special_tokens=True)
-
- # generated_sentences
-
- # naive text processing
- # generated_sentences[0].split('.')[0]
-
- # inference function
-
- def vit2distilgpt2(img):
-     pixel_values = vit_feature_extractor(images=img, return_tensors="pt").pixel_values
-     encoder_outputs = generated_ids = model.generate(pixel_values.to('cpu'), num_beams=5)
-     generated_sentences = tokenizer.batch_decode(encoder_outputs, skip_special_tokens=True)
-
-     return generated_sentences[0].split('.')[0]
-
- #!wget https://media.glamour.com/photos/5f171c4fd35176eaedb36823/master/w_2560%2Cc_limit/bike.jpg
-
- import gradio as gr
-
- inputs = [
-     gr.inputs.Image(type="pil", label="Original Image")
- ]
-
- outputs = [
-     gr.outputs.Textbox(label='Caption')
- ]
-
- title = "Image Captioning using ViT + GPT2"
- description = "ViT and GPT2 are used to generate Image Caption for the uploaded image. COCO Dataset was used for training. This image captioning model might have some biases that we couldn't figure during our stress testing, so if you find any bias (gender, race and so on) please use `Flag` button to flag the image with bias"
- article = " <a href='https://huggingface.co/sachin/vit2distilgpt2'>Model Repo on Hugging Face Model Hub</a>"
- examples = [
-     ["people-walking-street-pedestrian-crossing-traffic-light-city.jpeg"],
-     ["elonmusk.jpeg"]
- ]
-
- gr.Interface(
-     vit2distilgpt2,
-     inputs,
-     outputs,
-     title=title,
-     description=description,
-     article=article,
-     examples=examples,
-     theme="huggingface",
- ).launch(debug=True, enable_queue=True)
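The inference path above can be exercised outside Gradio. A minimal sketch, grounded in the deleted file's own calls and assuming the `sachin/vit2distilgpt2` checkpoint is still published on the Hub (`example.jpg` is a hypothetical local file):

```python
# Standalone caption generation with the same checkpoints the Space loaded.
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, PreTrainedTokenizerFast

model = VisionEncoderDecoderModel.from_pretrained("sachin/vit2distilgpt2")
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = PreTrainedTokenizerFast.from_pretrained("distilgpt2")

image = Image.open("example.jpg")  # hypothetical local image
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values, num_beams=5)
caption = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0].split(".")[0]
print(caption)
```
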
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/vae.py DELETED
@@ -1,688 +0,0 @@
- # Copyright 2023 The HuggingFace Team. All rights reserved.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- from dataclasses import dataclass
- from typing import Optional
-
- import numpy as np
- import torch
- import torch.nn as nn
-
- from ..utils import BaseOutput, is_torch_version, randn_tensor
- from .attention_processor import SpatialNorm
- from .unet_2d_blocks import UNetMidBlock2D, get_down_block, get_up_block
-
-
- @dataclass
- class DecoderOutput(BaseOutput):
-     """
-     Output of decoding method.
-
-     Args:
-         sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
-             The decoded output sample from the last layer of the model.
-     """
-
-     sample: torch.FloatTensor
-
-
- class Encoder(nn.Module):
-     def __init__(
-         self,
-         in_channels=3,
-         out_channels=3,
-         down_block_types=("DownEncoderBlock2D",),
-         block_out_channels=(64,),
-         layers_per_block=2,
-         norm_num_groups=32,
-         act_fn="silu",
-         double_z=True,
-     ):
-         super().__init__()
-         self.layers_per_block = layers_per_block
-
-         self.conv_in = torch.nn.Conv2d(
-             in_channels,
-             block_out_channels[0],
-             kernel_size=3,
-             stride=1,
-             padding=1,
-         )
-
-         self.mid_block = None
-         self.down_blocks = nn.ModuleList([])
-
-         # down
-         output_channel = block_out_channels[0]
-         for i, down_block_type in enumerate(down_block_types):
-             input_channel = output_channel
-             output_channel = block_out_channels[i]
-             is_final_block = i == len(block_out_channels) - 1
-
-             down_block = get_down_block(
-                 down_block_type,
-                 num_layers=self.layers_per_block,
-                 in_channels=input_channel,
-                 out_channels=output_channel,
-                 add_downsample=not is_final_block,
-                 resnet_eps=1e-6,
-                 downsample_padding=0,
-                 resnet_act_fn=act_fn,
-                 resnet_groups=norm_num_groups,
-                 attention_head_dim=output_channel,
-                 temb_channels=None,
-             )
-             self.down_blocks.append(down_block)
-
-         # mid
-         self.mid_block = UNetMidBlock2D(
-             in_channels=block_out_channels[-1],
-             resnet_eps=1e-6,
-             resnet_act_fn=act_fn,
-             output_scale_factor=1,
-             resnet_time_scale_shift="default",
-             attention_head_dim=block_out_channels[-1],
-             resnet_groups=norm_num_groups,
-             temb_channels=None,
-         )
-
-         # out
-         self.conv_norm_out = nn.GroupNorm(num_channels=block_out_channels[-1], num_groups=norm_num_groups, eps=1e-6)
-         self.conv_act = nn.SiLU()
-
-         conv_out_channels = 2 * out_channels if double_z else out_channels
-         self.conv_out = nn.Conv2d(block_out_channels[-1], conv_out_channels, 3, padding=1)
-
-         self.gradient_checkpointing = False
-
-     def forward(self, x):
-         sample = x
-         sample = self.conv_in(sample)
-
-         if self.training and self.gradient_checkpointing:
-
-             def create_custom_forward(module):
-                 def custom_forward(*inputs):
-                     return module(*inputs)
-
-                 return custom_forward
-
-             # down
-             if is_torch_version(">=", "1.11.0"):
-                 for down_block in self.down_blocks:
-                     sample = torch.utils.checkpoint.checkpoint(
-                         create_custom_forward(down_block), sample, use_reentrant=False
-                     )
-                 # middle
-                 sample = torch.utils.checkpoint.checkpoint(
-                     create_custom_forward(self.mid_block), sample, use_reentrant=False
-                 )
-             else:
-                 for down_block in self.down_blocks:
-                     sample = torch.utils.checkpoint.checkpoint(create_custom_forward(down_block), sample)
-                 # middle
-                 sample = torch.utils.checkpoint.checkpoint(create_custom_forward(self.mid_block), sample)
-
-         else:
-             # down
-             for down_block in self.down_blocks:
-                 sample = down_block(sample)
-
-             # middle
-             sample = self.mid_block(sample)
-
-         # post-process
-         sample = self.conv_norm_out(sample)
-         sample = self.conv_act(sample)
-         sample = self.conv_out(sample)
-
-         return sample
-
-
- class Decoder(nn.Module):
-     def __init__(
-         self,
-         in_channels=3,
-         out_channels=3,
-         up_block_types=("UpDecoderBlock2D",),
-         block_out_channels=(64,),
-         layers_per_block=2,
-         norm_num_groups=32,
-         act_fn="silu",
-         norm_type="group",  # group, spatial
-     ):
-         super().__init__()
-         self.layers_per_block = layers_per_block
-
-         self.conv_in = nn.Conv2d(
-             in_channels,
-             block_out_channels[-1],
-             kernel_size=3,
-             stride=1,
-             padding=1,
-         )
-
-         self.mid_block = None
-         self.up_blocks = nn.ModuleList([])
-
-         temb_channels = in_channels if norm_type == "spatial" else None
-
-         # mid
-         self.mid_block = UNetMidBlock2D(
-             in_channels=block_out_channels[-1],
-             resnet_eps=1e-6,
-             resnet_act_fn=act_fn,
-             output_scale_factor=1,
-             resnet_time_scale_shift="default" if norm_type == "group" else norm_type,
-             attention_head_dim=block_out_channels[-1],
-             resnet_groups=norm_num_groups,
-             temb_channels=temb_channels,
-         )
-
-         # up
-         reversed_block_out_channels = list(reversed(block_out_channels))
-         output_channel = reversed_block_out_channels[0]
-         for i, up_block_type in enumerate(up_block_types):
-             prev_output_channel = output_channel
-             output_channel = reversed_block_out_channels[i]
-
-             is_final_block = i == len(block_out_channels) - 1
-
-             up_block = get_up_block(
-                 up_block_type,
-                 num_layers=self.layers_per_block + 1,
-                 in_channels=prev_output_channel,
-                 out_channels=output_channel,
-                 prev_output_channel=None,
-                 add_upsample=not is_final_block,
-                 resnet_eps=1e-6,
-                 resnet_act_fn=act_fn,
-                 resnet_groups=norm_num_groups,
-                 attention_head_dim=output_channel,
-                 temb_channels=temb_channels,
-                 resnet_time_scale_shift=norm_type,
-             )
-             self.up_blocks.append(up_block)
-             prev_output_channel = output_channel
-
-         # out
-         if norm_type == "spatial":
-             self.conv_norm_out = SpatialNorm(block_out_channels[0], temb_channels)
-         else:
-             self.conv_norm_out = nn.GroupNorm(num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=1e-6)
-         self.conv_act = nn.SiLU()
-         self.conv_out = nn.Conv2d(block_out_channels[0], out_channels, 3, padding=1)
-
-         self.gradient_checkpointing = False
-
-     def forward(self, z, latent_embeds=None):
-         sample = z
-         sample = self.conv_in(sample)
-
-         upscale_dtype = next(iter(self.up_blocks.parameters())).dtype
-         if self.training and self.gradient_checkpointing:
-
-             def create_custom_forward(module):
-                 def custom_forward(*inputs):
-                     return module(*inputs)
-
-                 return custom_forward
-
-             if is_torch_version(">=", "1.11.0"):
-                 # middle
-                 sample = torch.utils.checkpoint.checkpoint(
-                     create_custom_forward(self.mid_block), sample, latent_embeds, use_reentrant=False
-                 )
-                 sample = sample.to(upscale_dtype)
-
-                 # up
-                 for up_block in self.up_blocks:
-                     sample = torch.utils.checkpoint.checkpoint(
-                         create_custom_forward(up_block), sample, latent_embeds, use_reentrant=False
-                     )
-             else:
-                 # middle
-                 sample = torch.utils.checkpoint.checkpoint(
-                     create_custom_forward(self.mid_block), sample, latent_embeds
-                 )
-                 sample = sample.to(upscale_dtype)
-
-                 # up
-                 for up_block in self.up_blocks:
-                     sample = torch.utils.checkpoint.checkpoint(create_custom_forward(up_block), sample, latent_embeds)
-         else:
-             # middle
-             sample = self.mid_block(sample, latent_embeds)
-             sample = sample.to(upscale_dtype)
-
-             # up
-             for up_block in self.up_blocks:
-                 sample = up_block(sample, latent_embeds)
-
-         # post-process
-         if latent_embeds is None:
-             sample = self.conv_norm_out(sample)
-         else:
-             sample = self.conv_norm_out(sample, latent_embeds)
-         sample = self.conv_act(sample)
-         sample = self.conv_out(sample)
-
-         return sample
-
-
- class UpSample(nn.Module):
-     def __init__(
-         self,
-         in_channels: int,
-         out_channels: int,
-     ) -> None:
-         super().__init__()
-         self.in_channels = in_channels
-         self.out_channels = out_channels
-         self.deconv = nn.ConvTranspose2d(in_channels, out_channels, kernel_size=4, stride=2, padding=1)
-
-     def forward(self, x: torch.FloatTensor) -> torch.FloatTensor:
-         x = torch.relu(x)
-         x = self.deconv(x)
-         return x
-
-
- class MaskConditionEncoder(nn.Module):
-     """
-     used in AsymmetricAutoencoderKL
-     """
-
-     def __init__(
-         self,
-         in_ch: int,
-         out_ch: int = 192,
-         res_ch: int = 768,
-         stride: int = 16,
-     ) -> None:
-         super().__init__()
-
-         channels = []
-         while stride > 1:
-             stride = stride // 2
-             in_ch_ = out_ch * 2
-             if out_ch > res_ch:
-                 out_ch = res_ch
-             if stride == 1:
-                 in_ch_ = res_ch
-             channels.append((in_ch_, out_ch))
-             out_ch *= 2
-
-         out_channels = []
-         for _in_ch, _out_ch in channels:
-             out_channels.append(_out_ch)
-         out_channels.append(channels[-1][0])
-
-         layers = []
-         in_ch_ = in_ch
-         for l in range(len(out_channels)):
-             out_ch_ = out_channels[l]
-             if l == 0 or l == 1:
-                 layers.append(nn.Conv2d(in_ch_, out_ch_, kernel_size=3, stride=1, padding=1))
-             else:
-                 layers.append(nn.Conv2d(in_ch_, out_ch_, kernel_size=4, stride=2, padding=1))
-             in_ch_ = out_ch_
-
-         self.layers = nn.Sequential(*layers)
-
-     def forward(self, x: torch.FloatTensor, mask=None) -> torch.FloatTensor:
-         out = {}
-         for l in range(len(self.layers)):
-             layer = self.layers[l]
-             x = layer(x)
-             out[str(tuple(x.shape))] = x
-             x = torch.relu(x)
-         return out
-
-
- class MaskConditionDecoder(nn.Module):
-     """The `MaskConditionDecoder` should be used in combination with [`AsymmetricAutoencoderKL`] to enhance the model's
-     decoder with a conditioner on the mask and masked image."""
-
-     def __init__(
-         self,
-         in_channels=3,
-         out_channels=3,
-         up_block_types=("UpDecoderBlock2D",),
-         block_out_channels=(64,),
-         layers_per_block=2,
-         norm_num_groups=32,
-         act_fn="silu",
-         norm_type="group",  # group, spatial
-     ):
-         super().__init__()
-         self.layers_per_block = layers_per_block
-
-         self.conv_in = nn.Conv2d(
-             in_channels,
-             block_out_channels[-1],
-             kernel_size=3,
-             stride=1,
-             padding=1,
-         )
-
-         self.mid_block = None
-         self.up_blocks = nn.ModuleList([])
-
-         temb_channels = in_channels if norm_type == "spatial" else None
-
-         # mid
-         self.mid_block = UNetMidBlock2D(
-             in_channels=block_out_channels[-1],
-             resnet_eps=1e-6,
-             resnet_act_fn=act_fn,
-             output_scale_factor=1,
-             resnet_time_scale_shift="default" if norm_type == "group" else norm_type,
-             attention_head_dim=block_out_channels[-1],
-             resnet_groups=norm_num_groups,
-             temb_channels=temb_channels,
-         )
-
-         # up
-         reversed_block_out_channels = list(reversed(block_out_channels))
-         output_channel = reversed_block_out_channels[0]
-         for i, up_block_type in enumerate(up_block_types):
-             prev_output_channel = output_channel
-             output_channel = reversed_block_out_channels[i]
-
-             is_final_block = i == len(block_out_channels) - 1
-
-             up_block = get_up_block(
-                 up_block_type,
-                 num_layers=self.layers_per_block + 1,
-                 in_channels=prev_output_channel,
-                 out_channels=output_channel,
-                 prev_output_channel=None,
-                 add_upsample=not is_final_block,
-                 resnet_eps=1e-6,
-                 resnet_act_fn=act_fn,
-                 resnet_groups=norm_num_groups,
-                 attention_head_dim=output_channel,
-                 temb_channels=temb_channels,
-                 resnet_time_scale_shift=norm_type,
-             )
-             self.up_blocks.append(up_block)
-             prev_output_channel = output_channel
-
-         # condition encoder
-         self.condition_encoder = MaskConditionEncoder(
-             in_ch=out_channels,
-             out_ch=block_out_channels[0],
-             res_ch=block_out_channels[-1],
-         )
-
-         # out
-         if norm_type == "spatial":
-             self.conv_norm_out = SpatialNorm(block_out_channels[0], temb_channels)
-         else:
-             self.conv_norm_out = nn.GroupNorm(num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=1e-6)
-         self.conv_act = nn.SiLU()
-         self.conv_out = nn.Conv2d(block_out_channels[0], out_channels, 3, padding=1)
-
-         self.gradient_checkpointing = False
-
-     def forward(self, z, image=None, mask=None, latent_embeds=None):
-         sample = z
-         sample = self.conv_in(sample)
-
-         upscale_dtype = next(iter(self.up_blocks.parameters())).dtype
-         if self.training and self.gradient_checkpointing:
-
-             def create_custom_forward(module):
-                 def custom_forward(*inputs):
-                     return module(*inputs)
-
-                 return custom_forward
-
-             if is_torch_version(">=", "1.11.0"):
-                 # middle
-                 sample = torch.utils.checkpoint.checkpoint(
-                     create_custom_forward(self.mid_block), sample, latent_embeds, use_reentrant=False
-                 )
-                 sample = sample.to(upscale_dtype)
-
-                 # condition encoder
-                 if image is not None and mask is not None:
-                     masked_image = (1 - mask) * image
-                     im_x = torch.utils.checkpoint.checkpoint(
-                         create_custom_forward(self.condition_encoder), masked_image, mask, use_reentrant=False
-                     )
-
-                 # up
-                 for up_block in self.up_blocks:
-                     if image is not None and mask is not None:
-                         sample_ = im_x[str(tuple(sample.shape))]
-                         mask_ = nn.functional.interpolate(mask, size=sample.shape[-2:], mode="nearest")
-                         sample = sample * mask_ + sample_ * (1 - mask_)
-                     sample = torch.utils.checkpoint.checkpoint(
-                         create_custom_forward(up_block), sample, latent_embeds, use_reentrant=False
-                     )
-                 if image is not None and mask is not None:
-                     sample = sample * mask + im_x[str(tuple(sample.shape))] * (1 - mask)
-             else:
-                 # middle
-                 sample = torch.utils.checkpoint.checkpoint(
-                     create_custom_forward(self.mid_block), sample, latent_embeds
-                 )
-                 sample = sample.to(upscale_dtype)
-
-                 # condition encoder
-                 if image is not None and mask is not None:
-                     masked_image = (1 - mask) * image
-                     im_x = torch.utils.checkpoint.checkpoint(
-                         create_custom_forward(self.condition_encoder), masked_image, mask
-                     )
-
-                 # up
-                 for up_block in self.up_blocks:
-                     if image is not None and mask is not None:
-                         sample_ = im_x[str(tuple(sample.shape))]
-                         mask_ = nn.functional.interpolate(mask, size=sample.shape[-2:], mode="nearest")
-                         sample = sample * mask_ + sample_ * (1 - mask_)
-                     sample = torch.utils.checkpoint.checkpoint(create_custom_forward(up_block), sample, latent_embeds)
-                 if image is not None and mask is not None:
-                     sample = sample * mask + im_x[str(tuple(sample.shape))] * (1 - mask)
-         else:
-             # middle
-             sample = self.mid_block(sample, latent_embeds)
-             sample = sample.to(upscale_dtype)
-
-             # condition encoder
-             if image is not None and mask is not None:
-                 masked_image = (1 - mask) * image
-                 im_x = self.condition_encoder(masked_image, mask)
-
-             # up
-             for up_block in self.up_blocks:
-                 if image is not None and mask is not None:
-                     sample_ = im_x[str(tuple(sample.shape))]
-                     mask_ = nn.functional.interpolate(mask, size=sample.shape[-2:], mode="nearest")
-                     sample = sample * mask_ + sample_ * (1 - mask_)
-                 sample = up_block(sample, latent_embeds)
-             if image is not None and mask is not None:
-                 sample = sample * mask + im_x[str(tuple(sample.shape))] * (1 - mask)
-
-         # post-process
-         if latent_embeds is None:
-             sample = self.conv_norm_out(sample)
-         else:
-             sample = self.conv_norm_out(sample, latent_embeds)
-         sample = self.conv_act(sample)
-         sample = self.conv_out(sample)
-
-         return sample
-
-
- class VectorQuantizer(nn.Module):
-     """
-     Improved version over VectorQuantizer, can be used as a drop-in replacement. Mostly avoids costly matrix
-     multiplications and allows for post-hoc remapping of indices.
-     """
-
-     # NOTE: due to a bug the beta term was applied to the wrong term. for
-     # backwards compatibility we use the buggy version by default, but you can
-     # specify legacy=False to fix it.
-     def __init__(
-         self, n_e, vq_embed_dim, beta, remap=None, unknown_index="random", sane_index_shape=False, legacy=True
-     ):
-         super().__init__()
-         self.n_e = n_e
-         self.vq_embed_dim = vq_embed_dim
-         self.beta = beta
-         self.legacy = legacy
-
-         self.embedding = nn.Embedding(self.n_e, self.vq_embed_dim)
-         self.embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e)
-
-         self.remap = remap
-         if self.remap is not None:
-             self.register_buffer("used", torch.tensor(np.load(self.remap)))
-             self.re_embed = self.used.shape[0]
-             self.unknown_index = unknown_index  # "random" or "extra" or integer
-             if self.unknown_index == "extra":
-                 self.unknown_index = self.re_embed
-                 self.re_embed = self.re_embed + 1
-             print(
-                 f"Remapping {self.n_e} indices to {self.re_embed} indices. "
-                 f"Using {self.unknown_index} for unknown indices."
-             )
-         else:
-             self.re_embed = n_e
-
-         self.sane_index_shape = sane_index_shape
-
-     def remap_to_used(self, inds):
-         ishape = inds.shape
-         assert len(ishape) > 1
-         inds = inds.reshape(ishape[0], -1)
-         used = self.used.to(inds)
-         match = (inds[:, :, None] == used[None, None, ...]).long()
-         new = match.argmax(-1)
-         unknown = match.sum(2) < 1
-         if self.unknown_index == "random":
-             new[unknown] = torch.randint(0, self.re_embed, size=new[unknown].shape).to(device=new.device)
-         else:
-             new[unknown] = self.unknown_index
-         return new.reshape(ishape)
-
-     def unmap_to_all(self, inds):
-         ishape = inds.shape
-         assert len(ishape) > 1
-         inds = inds.reshape(ishape[0], -1)
-         used = self.used.to(inds)
-         if self.re_embed > self.used.shape[0]:  # extra token
-             inds[inds >= self.used.shape[0]] = 0  # simply set to zero
-         back = torch.gather(used[None, :][inds.shape[0] * [0], :], 1, inds)
-         return back.reshape(ishape)
-
-     def forward(self, z):
-         # reshape z -> (batch, height, width, channel) and flatten
-         z = z.permute(0, 2, 3, 1).contiguous()
-         z_flattened = z.view(-1, self.vq_embed_dim)
-
-         # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z
-         min_encoding_indices = torch.argmin(torch.cdist(z_flattened, self.embedding.weight), dim=1)
-
-         z_q = self.embedding(min_encoding_indices).view(z.shape)
-         perplexity = None
-         min_encodings = None
-
-         # compute loss for embedding
-         if not self.legacy:
-             loss = self.beta * torch.mean((z_q.detach() - z) ** 2) + torch.mean((z_q - z.detach()) ** 2)
-         else:
-             loss = torch.mean((z_q.detach() - z) ** 2) + self.beta * torch.mean((z_q - z.detach()) ** 2)
-
-         # preserve gradients
-         z_q = z + (z_q - z).detach()
-
-         # reshape back to match original input shape
-         z_q = z_q.permute(0, 3, 1, 2).contiguous()
-
-         if self.remap is not None:
-             min_encoding_indices = min_encoding_indices.reshape(z.shape[0], -1)  # add batch axis
-             min_encoding_indices = self.remap_to_used(min_encoding_indices)
-             min_encoding_indices = min_encoding_indices.reshape(-1, 1)  # flatten
-
-         if self.sane_index_shape:
-             min_encoding_indices = min_encoding_indices.reshape(z_q.shape[0], z_q.shape[2], z_q.shape[3])
-
-         return z_q, loss, (perplexity, min_encodings, min_encoding_indices)
-
-     def get_codebook_entry(self, indices, shape):
-         # shape specifying (batch, height, width, channel)
-         if self.remap is not None:
-             indices = indices.reshape(shape[0], -1)  # add batch axis
-             indices = self.unmap_to_all(indices)
-             indices = indices.reshape(-1)  # flatten again
-
-         # get quantized latent vectors
-         z_q = self.embedding(indices)
-
-         if shape is not None:
-             z_q = z_q.view(shape)
-             # reshape back to match original input shape
-             z_q = z_q.permute(0, 3, 1, 2).contiguous()
-
-         return z_q
-
-
- class DiagonalGaussianDistribution(object):
-     def __init__(self, parameters, deterministic=False):
-         self.parameters = parameters
-         self.mean, self.logvar = torch.chunk(parameters, 2, dim=1)
-         self.logvar = torch.clamp(self.logvar, -30.0, 20.0)
-         self.deterministic = deterministic
-         self.std = torch.exp(0.5 * self.logvar)
-         self.var = torch.exp(self.logvar)
-         if self.deterministic:
-             self.var = self.std = torch.zeros_like(
-                 self.mean, device=self.parameters.device, dtype=self.parameters.dtype
-             )
-
-     def sample(self, generator: Optional[torch.Generator] = None) -> torch.FloatTensor:
-         # make sure sample is on the same device as the parameters and has same dtype
-         sample = randn_tensor(
-             self.mean.shape, generator=generator, device=self.parameters.device, dtype=self.parameters.dtype
-         )
-         x = self.mean + self.std * sample
-         return x
-
-     def kl(self, other=None):
-         if self.deterministic:
-             return torch.Tensor([0.0])
-         else:
-             if other is None:
-                 return 0.5 * torch.sum(torch.pow(self.mean, 2) + self.var - 1.0 - self.logvar, dim=[1, 2, 3])
-             else:
-                 return 0.5 * torch.sum(
-                     torch.pow(self.mean - other.mean, 2) / other.var
-                     + self.var / other.var
-                     - 1.0
-                     - self.logvar
-                     + other.logvar,
-                     dim=[1, 2, 3],
-                 )
-
-     def nll(self, sample, dims=[1, 2, 3]):
-         if self.deterministic:
-             return torch.Tensor([0.0])
-         logtwopi = np.log(2.0 * np.pi)
-         return 0.5 * torch.sum(logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var, dim=dims)
-
-     def mode(self):
-         return self.mean
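A rough round-trip sketch of how these classes fit together. The shapes assume the default single-block constructor arguments shown above (one down block that does not downsample, `double_z=True` doubling the output channels into mean and logvar halves); it is an illustration, not the full `AutoencoderKL` path, which additionally inserts quant/post-quant convolutions:

```python
# encode -> sample -> decode with the deleted classes
import torch
from diffusers.models.vae import Decoder, DiagonalGaussianDistribution, Encoder

enc = Encoder(in_channels=3, out_channels=4, double_z=True)
dec = Decoder(in_channels=4, out_channels=3)

x = torch.randn(1, 3, 64, 64)
moments = enc(x)                                   # (1, 8, 64, 64): mean and logvar
posterior = DiagonalGaussianDistribution(moments)  # chunks channels into two halves
recon = dec(posterior.sample())                    # back to (1, 3, 64, 64)
```
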
spaces/Andy1621/UniFormerV2_mit_demo/mitv1_class_index.py DELETED
@@ -1,341 +0,0 @@
- mitv1_classnames = {
-     "0": "adult+female+singing",
-     "1": "adult+female+speaking",
-     "2": "adult+male+singing",
-     "3": "adult+male+speaking",
-     "4": "aiming",
-     "5": "applauding",
-     "6": "arresting",
-     "7": "ascending",
-     "8": "asking",
-     "9": "assembling",
-     "10": "attacking",
-     "11": "autographing",
-     "12": "baking",
-     "13": "balancing",
-     "14": "baptizing",
-     "15": "barbecuing",
-     "16": "barking",
-     "17": "bathing",
-     "18": "bending",
-     "19": "bicycling",
-     "20": "biting",
-     "21": "blocking",
-     "22": "blowing",
-     "23": "boarding",
-     "24": "boating",
-     "25": "boiling",
-     "26": "bouncing",
-     "27": "bowing",
-     "28": "bowling",
-     "29": "boxing",
-     "30": "breaking",
-     "31": "brushing",
-     "32": "bubbling",
-     "33": "building",
-     "34": "bulldozing",
-     "35": "burning",
-     "36": "burying",
-     "37": "buttoning",
-     "38": "buying",
-     "39": "calling",
-     "40": "camping",
-     "41": "carrying",
-     "42": "carving",
-     "43": "catching",
-     "44": "celebrating",
-     "45": "chasing",
-     "46": "cheering",
-     "47": "cheerleading",
-     "48": "chewing",
-     "49": "child+singing",
-     "50": "child+speaking",
-     "51": "chopping",
-     "52": "clapping",
-     "53": "clawing",
-     "54": "cleaning",
-     "55": "clearing",
-     "56": "climbing",
-     "57": "clinging",
-     "58": "clipping",
-     "59": "closing",
-     "60": "coaching",
-     "61": "colliding",
-     "62": "combing",
-     "63": "combusting",
-     "64": "competing",
-     "65": "constructing",
-     "66": "cooking",
-     "67": "coughing",
-     "68": "covering",
-     "69": "cracking",
-     "70": "crafting",
-     "71": "cramming",
-     "72": "crashing",
-     "73": "crawling",
-     "74": "crouching",
-     "75": "crushing",
-     "76": "crying",
-     "77": "cuddling",
-     "78": "cutting",
-     "79": "dancing",
-     "80": "descending",
-     "81": "destroying",
-     "82": "digging",
-     "83": "dining",
-     "84": "dipping",
-     "85": "discussing",
-     "86": "diving",
-     "87": "dragging",
-     "88": "draining",
-     "89": "drawing",
-     "90": "drenching",
-     "91": "dressing",
-     "92": "drilling",
-     "93": "drinking",
-     "94": "dripping",
-     "95": "driving",
-     "96": "dropping",
-     "97": "drumming",
-     "98": "drying",
-     "99": "dunking",
-     "100": "dusting",
-     "101": "eating",
-     "102": "emptying",
-     "103": "entering",
-     "104": "erupting",
-     "105": "exercising",
-     "106": "exiting",
-     "107": "extinguishing",
-     "108": "falling",
-     "109": "feeding",
-     "110": "fencing",
-     "111": "fighting",
-     "112": "filling",
-     "113": "filming",
-     "114": "fishing",
-     "115": "flicking",
-     "116": "flipping",
-     "117": "floating",
-     "118": "flooding",
-     "119": "flowing",
-     "120": "flying",
-     "121": "folding",
-     "122": "frowning",
-     "123": "frying",
-     "124": "fueling",
-     "125": "gambling",
-     "126": "gardening",
-     "127": "giggling",
-     "128": "giving",
-     "129": "grilling",
-     "130": "grinning",
-     "131": "gripping",
-     "132": "grooming",
-     "133": "guarding",
-     "134": "hammering",
-     "135": "handcuffing",
-     "136": "handwriting",
-     "137": "hanging",
-     "138": "hiking",
-     "139": "hitchhiking",
-     "140": "hitting",
-     "141": "howling",
-     "142": "hugging",
-     "143": "hunting",
-     "144": "imitating",
-     "145": "inflating",
-     "146": "injecting",
-     "147": "instructing",
-     "148": "interviewing",
-     "149": "jogging",
-     "150": "joining",
-     "151": "juggling",
-     "152": "jumping",
-     "153": "kicking",
-     "154": "kissing",
-     "155": "kneeling",
-     "156": "knitting",
-     "157": "knocking",
-     "158": "landing",
-     "159": "laughing",
-     "160": "launching",
-     "161": "leaking",
-     "162": "leaning",
-     "163": "leaping",
-     "164": "lecturing",
-     "165": "licking",
-     "166": "lifting",
-     "167": "loading",
-     "168": "locking",
-     "169": "manicuring",
-     "170": "marching",
-     "171": "marrying",
-     "172": "massaging",
-     "173": "measuring",
-     "174": "mopping",
-     "175": "mowing",
-     "176": "officiating",
-     "177": "opening",
-     "178": "operating",
-     "179": "overflowing",
-     "180": "packaging",
-     "181": "packing",
-     "182": "painting",
-     "183": "parading",
-     "184": "paying",
-     "185": "pedaling",
-     "186": "peeling",
-     "187": "performing",
-     "188": "photographing",
-     "189": "picking",
-     "190": "piloting",
-     "191": "pitching",
-     "192": "placing",
-     "193": "planting",
-     "194": "playing",
-     "195": "playing+fun",
-     "196": "playing+music",
-     "197": "playing+sports",
-     "198": "playing+videogames",
-     "199": "plugging",
-     "200": "plunging",
-     "201": "pointing",
-     "202": "poking",
-     "203": "pouring",
-     "204": "praying",
-     "205": "preaching",
-     "206": "pressing",
-     "207": "protesting",
-     "208": "pulling",
-     "209": "punching",
-     "210": "punting",
-     "211": "pushing",
-     "212": "putting",
-     "213": "queuing",
-     "214": "racing",
-     "215": "rafting",
-     "216": "raining",
-     "217": "raising",
-     "218": "reaching",
-     "219": "reading",
-     "220": "removing",
-     "221": "repairing",
-     "222": "resting",
-     "223": "riding",
-     "224": "rinsing",
-     "225": "rising",
-     "226": "roaring",
-     "227": "rocking",
-     "228": "rolling",
-     "229": "rowing",
-     "230": "rubbing",
-     "231": "running",
-     "232": "sailing",
-     "233": "saluting",
-     "234": "sanding",
-     "235": "sawing",
-     "236": "scratching",
-     "237": "screwing",
-     "238": "scrubbing",
-     "239": "selling",
-     "240": "serving",
-     "241": "sewing",
-     "242": "shaking",
-     "243": "shaving",
-     "244": "shooting",
-     "245": "shopping",
-     "246": "shouting",
-     "247": "shoveling",
-     "248": "shredding",
-     "249": "shrugging",
-     "250": "signing",
-     "251": "singing",
-     "252": "sitting",
-     "253": "skating",
-     "254": "sketching",
-     "255": "skiing",
-     "256": "skipping",
-     "257": "slapping",
-     "258": "sleeping",
-     "259": "slicing",
-     "260": "sliding",
-     "261": "slipping",
-     "262": "smashing",
-     "263": "smelling",
-     "264": "smiling",
-     "265": "smoking",
-     "266": "snapping",
-     "267": "sneezing",
-     "268": "sniffing",
-     "269": "snowing",
-     "270": "snuggling",
-     "271": "socializing",
-     "272": "sowing",
-     "273": "speaking",
-     "274": "spilling",
-     "275": "spinning",
-     "276": "spitting",
-     "277": "splashing",
-     "278": "spraying",
-     "279": "spreading",
-     "280": "sprinkling",
-     "281": "sprinting",
-     "282": "squatting",
-     "283": "squinting",
-     "284": "stacking",
-     "285": "standing",
-     "286": "starting",
-     "287": "stealing",
-     "288": "steering",
-     "289": "stirring",
-     "290": "stitching",
-     "291": "stomping",
-     "292": "stopping",
-     "293": "storming",
-     "294": "stretching",
-     "295": "stroking",
-     "296": "studying",
-     "297": "submerging",
-     "298": "surfing",
-     "299": "sweeping",
-     "300": "swerving",
-     "301": "swimming",
-     "302": "swinging",
-     "303": "talking",
-     "304": "taping",
-     "305": "tapping",
-     "306": "tattooing",
-     "307": "teaching",
-     "308": "tearing",
-     "309": "telephoning",
-     "310": "throwing",
-     "311": "tickling",
-     "312": "towing",
-     "313": "trimming",
-     "314": "tripping",
-     "315": "tuning",
-     "316": "turning",
-     "317": "twisting",
-     "318": "tying",
-     "319": "typing",
-     "320": "unloading",
-     "321": "unpacking",
-     "322": "vacuuming",
-     "323": "waking",
-     "324": "walking",
-     "325": "washing",
-     "326": "watering",
-     "327": "waving",
-     "328": "waxing",
-     "329": "weeding",
-     "330": "welding",
-     "331": "wetting",
-     "332": "whistling",
-     "333": "winking",
-     "334": "working",
-     "335": "wrapping",
-     "336": "wrestling",
-     "337": "writing",
-     "338": "yawning"
- }
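Note that the keys are strings, not integers, so a predicted class index must be stringified before lookup:

```python
# Mapping a classifier's top prediction back to a Moments-in-Time label.
pred_idx = 79  # e.g. logits.argmax(-1).item()
print(mitv1_classnames[str(pred_idx)])  # -> "dancing"
```
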
spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_fpn_giou_1x_coco.py DELETED
@@ -1,6 +0,0 @@
- _base_ = './faster_rcnn_r50_fpn_1x_coco.py'
- model = dict(
-     roi_head=dict(
-         bbox_head=dict(
-             reg_decoded_bbox=True,
-             loss_bbox=dict(type='GIoULoss', loss_weight=10.0))))
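The config swaps the box regression loss for GIoU, with `reg_decoded_bbox=True` so the loss is computed on decoded boxes rather than deltas. As a reference for the quantity being optimized (a sketch of the idea, not mmdetection's `GIoULoss` implementation):

```python
import torch

def giou(boxes1: torch.Tensor, boxes2: torch.Tensor) -> torch.Tensor:
    """GIoU for aligned pairs of (x1, y1, x2, y2) boxes."""
    lt = torch.max(boxes1[:, :2], boxes2[:, :2])
    rb = torch.min(boxes1[:, 2:], boxes2[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1])
    area2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1])
    union = (area1 + area2 - inter).clamp(min=1e-6)
    iou = inter / union
    # smallest box enclosing both; GIoU penalizes its empty area
    enc_wh = (torch.max(boxes1[:, 2:], boxes2[:, 2:])
              - torch.min(boxes1[:, :2], boxes2[:, :2])).clamp(min=0)
    enclose = (enc_wh[:, 0] * enc_wh[:, 1]).clamp(min=1e-6)
    return iou - (enclose - union) / enclose
```

The training loss is then `1 - giou`, scaled here by `loss_weight=10.0`.
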
spaces/Andy1621/uniformer_image_detection/configs/libra_rcnn/README.md DELETED
@@ -1,28 +0,0 @@
- # Libra R-CNN: Towards Balanced Learning for Object Detection
-
- ## Introduction
-
- [ALGORITHM]
-
- We provide config files to reproduce the results in the CVPR 2019 paper [Libra R-CNN](https://arxiv.org/pdf/1904.02701.pdf).
-
- ```
- @inproceedings{pang2019libra,
-   title={Libra R-CNN: Towards Balanced Learning for Object Detection},
-   author={Pang, Jiangmiao and Chen, Kai and Shi, Jianping and Feng, Huajun and Ouyang, Wanli and Dahua Lin},
-   booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
-   year={2019}
- }
- ```
-
- ## Results and models
-
- The results on COCO 2017val are shown in the table below. (Results on test-dev are usually slightly higher than on val.)
-
- | Architecture | Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
- |:------------:|:---------------:|:-------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:|
- | Faster R-CNN | R-50-FPN | pytorch | 1x | 4.6 | 19.0 | 38.3 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/libra_rcnn/libra_faster_rcnn_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/libra_rcnn/libra_faster_rcnn_r50_fpn_1x_coco/libra_faster_rcnn_r50_fpn_1x_coco_20200130-3afee3a9.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/libra_rcnn/libra_faster_rcnn_r50_fpn_1x_coco/libra_faster_rcnn_r50_fpn_1x_coco_20200130_204655.log.json) |
- | Fast R-CNN | R-50-FPN | pytorch | 1x | | | | |
- | Faster R-CNN | R-101-FPN | pytorch | 1x | 6.5 | 14.4 | 40.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/libra_rcnn/libra_faster_rcnn_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/libra_rcnn/libra_faster_rcnn_r101_fpn_1x_coco/libra_faster_rcnn_r101_fpn_1x_coco_20200203-8dba6a5a.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/libra_rcnn/libra_faster_rcnn_r101_fpn_1x_coco/libra_faster_rcnn_r101_fpn_1x_coco_20200203_001405.log.json) |
- | Faster R-CNN | X-101-64x4d-FPN | pytorch | 1x | 10.8 | 8.5 | 42.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/libra_rcnn/libra_faster_rcnn_x101_64x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/libra_rcnn/libra_faster_rcnn_x101_64x4d_fpn_1x_coco/libra_faster_rcnn_x101_64x4d_fpn_1x_coco_20200315-3a7d0488.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/libra_rcnn/libra_faster_rcnn_x101_64x4d_fpn_1x_coco/libra_faster_rcnn_x101_64x4d_fpn_1x_coco_20200315_231625.log.json) |
- | RetinaNet | R-50-FPN | pytorch | 1x | 4.2 | 17.7 | 37.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/libra_rcnn/libra_retinanet_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/libra_rcnn/libra_retinanet_r50_fpn_1x_coco/libra_retinanet_r50_fpn_1x_coco_20200205-804d94ce.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/libra_rcnn/libra_retinanet_r50_fpn_1x_coco/libra_retinanet_r50_fpn_1x_coco_20200205_112757.log.json) |
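The configs and checkpoints in the table plug into mmdetection's documented high-level API. A sketch, assuming a standard mmdetection 2.x checkout and using the R-50 checkpoint URL from the table (`demo.jpg` is a hypothetical local image):

```python
# Run a released Libra R-CNN checkpoint for inference.
from mmdet.apis import inference_detector, init_detector

config = 'configs/libra_rcnn/libra_faster_rcnn_r50_fpn_1x_coco.py'
checkpoint = ('http://download.openmmlab.com/mmdetection/v2.0/libra_rcnn/'
              'libra_faster_rcnn_r50_fpn_1x_coco/'
              'libra_faster_rcnn_r50_fpn_1x_coco_20200130-3afee3a9.pth')
model = init_detector(config, checkpoint, device='cpu')
result = inference_detector(model, 'demo.jpg')  # per-class bbox arrays
```
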
spaces/Andy1621/uniformer_image_detection/configs/nas_fpn/retinanet_r50_nasfpn_crop640_50e_coco.py DELETED
@@ -1,79 +0,0 @@
- _base_ = [
-     '../_base_/models/retinanet_r50_fpn.py',
-     '../_base_/datasets/coco_detection.py', '../_base_/default_runtime.py'
- ]
- cudnn_benchmark = True
- # model settings
- norm_cfg = dict(type='BN', requires_grad=True)
- model = dict(
-     type='RetinaNet',
-     pretrained='torchvision://resnet50',
-     backbone=dict(
-         type='ResNet',
-         depth=50,
-         num_stages=4,
-         out_indices=(0, 1, 2, 3),
-         frozen_stages=1,
-         norm_cfg=norm_cfg,
-         norm_eval=False,
-         style='pytorch'),
-     neck=dict(type='NASFPN', stack_times=7, norm_cfg=norm_cfg),
-     bbox_head=dict(type='RetinaSepBNHead', num_ins=5, norm_cfg=norm_cfg),
-     # training and testing settings
-     train_cfg=dict(assigner=dict(neg_iou_thr=0.5)))
- # dataset settings
- img_norm_cfg = dict(
-     mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
- train_pipeline = [
-     dict(type='LoadImageFromFile'),
-     dict(type='LoadAnnotations', with_bbox=True),
-     dict(
-         type='Resize',
-         img_scale=(640, 640),
-         ratio_range=(0.8, 1.2),
-         keep_ratio=True),
-     dict(type='RandomCrop', crop_size=(640, 640)),
-     dict(type='RandomFlip', flip_ratio=0.5),
-     dict(type='Normalize', **img_norm_cfg),
-     dict(type='Pad', size=(640, 640)),
-     dict(type='DefaultFormatBundle'),
-     dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
- ]
- test_pipeline = [
-     dict(type='LoadImageFromFile'),
-     dict(
-         type='MultiScaleFlipAug',
-         img_scale=(640, 640),
-         flip=False,
-         transforms=[
-             dict(type='Resize', keep_ratio=True),
-             dict(type='RandomFlip'),
-             dict(type='Normalize', **img_norm_cfg),
-             dict(type='Pad', size_divisor=128),
-             dict(type='ImageToTensor', keys=['img']),
-             dict(type='Collect', keys=['img']),
-         ])
- ]
- data = dict(
-     samples_per_gpu=8,
-     workers_per_gpu=4,
-     train=dict(pipeline=train_pipeline),
-     val=dict(pipeline=test_pipeline),
-     test=dict(pipeline=test_pipeline))
- # optimizer
- optimizer = dict(
-     type='SGD',
-     lr=0.08,
-     momentum=0.9,
-     weight_decay=0.0001,
-     paramwise_cfg=dict(norm_decay_mult=0, bypass_duplicate=True))
- optimizer_config = dict(grad_clip=None)
- # learning policy
- lr_config = dict(
-     policy='step',
-     warmup='linear',
-     warmup_iters=1000,
-     warmup_ratio=0.1,
-     step=[30, 40])
- # runtime settings
- runner = dict(type='EpochBasedRunner', max_epochs=50)
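Because this file layers on top of `_base_` configs, the merged result is easiest to verify programmatically. A sketch using mmcv's config loader (mmcv 1.x, assuming an mmdetection checkout):

```python
# Inspect the fully merged config, including values inherited from _base_.
from mmcv import Config

cfg = Config.fromfile('configs/nas_fpn/retinanet_r50_nasfpn_crop640_50e_coco.py')
print(cfg.model.neck)        # NASFPN with stack_times=7
print(cfg.data.samples_per_gpu, cfg.runner.max_epochs)  # 8, 50
```
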
spaces/Andy1621/uniformer_image_detection/mmdet/datasets/samplers/distributed_sampler.py DELETED
@@ -1,39 +0,0 @@
- import math
-
- import torch
- from torch.utils.data import DistributedSampler as _DistributedSampler
-
-
- class DistributedSampler(_DistributedSampler):
-
-     def __init__(self,
-                  dataset,
-                  num_replicas=None,
-                  rank=None,
-                  shuffle=True,
-                  seed=0):
-         super().__init__(
-             dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
-         # for the compatibility from PyTorch 1.3+
-         self.seed = seed if seed is not None else 0
-
-     def __iter__(self):
-         # deterministically shuffle based on epoch
-         if self.shuffle:
-             g = torch.Generator()
-             g.manual_seed(self.epoch + self.seed)
-             indices = torch.randperm(len(self.dataset), generator=g).tolist()
-         else:
-             indices = torch.arange(len(self.dataset)).tolist()
-
-         # add extra samples to make it evenly divisible
-         # in case that indices is shorter than half of total_size
-         indices = (indices *
-                    math.ceil(self.total_size / len(indices)))[:self.total_size]
-         assert len(indices) == self.total_size
-
-         # subsample
-         indices = indices[self.rank:self.total_size:self.num_replicas]
-         assert len(indices) == self.num_samples
-
-         return iter(indices)
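A usage sketch: the sampler is wired into a `DataLoader`, and `set_epoch` (inherited from the base class) must be called once per epoch so the `epoch + seed` shuffle above changes deterministically. It assumes `torch.distributed.init_process_group` has already run, so rank and world size can be inferred, and `my_dataset` stands in for any map-style dataset:

```python
from torch.utils.data import DataLoader

sampler = DistributedSampler(my_dataset, shuffle=True, seed=0)  # my_dataset: hypothetical
loader = DataLoader(my_dataset, batch_size=2, sampler=sampler)
for epoch in range(3):
    sampler.set_epoch(epoch)  # re-seeds the shuffle for this epoch on every rank
    for batch in loader:
        pass
```
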
spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/yolo_neck.py DELETED
@@ -1,136 +0,0 @@
- # Copyright (c) 2019 Western Digital Corporation or its affiliates.
-
- import torch
- import torch.nn as nn
- import torch.nn.functional as F
- from mmcv.cnn import ConvModule
-
- from ..builder import NECKS
-
-
- class DetectionBlock(nn.Module):
-     """Detection block in YOLO neck.
-
-     Let out_channels = n, the DetectionBlock contains:
-     Six ConvLayers, 1 Conv2D Layer and 1 YoloLayer.
-     The first 6 ConvLayers are formed the following way:
-     1x1xn, 3x3x2n, 1x1xn, 3x3x2n, 1x1xn, 3x3x2n.
-     The Conv2D layer is 1x1x255.
-     Some block will have branch after the fifth ConvLayer.
-     The input channel is arbitrary (in_channels)
-
-     Args:
-         in_channels (int): The number of input channels.
-         out_channels (int): The number of output channels.
-         conv_cfg (dict): Config dict for convolution layer. Default: None.
-         norm_cfg (dict): Dictionary to construct and config norm layer.
-             Default: dict(type='BN', requires_grad=True)
-         act_cfg (dict): Config dict for activation layer.
-             Default: dict(type='LeakyReLU', negative_slope=0.1).
-     """
-
-     def __init__(self,
-                  in_channels,
-                  out_channels,
-                  conv_cfg=None,
-                  norm_cfg=dict(type='BN', requires_grad=True),
-                  act_cfg=dict(type='LeakyReLU', negative_slope=0.1)):
-         super(DetectionBlock, self).__init__()
-         double_out_channels = out_channels * 2
-
-         # shortcut
-         cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg)
-         self.conv1 = ConvModule(in_channels, out_channels, 1, **cfg)
-         self.conv2 = ConvModule(
-             out_channels, double_out_channels, 3, padding=1, **cfg)
-         self.conv3 = ConvModule(double_out_channels, out_channels, 1, **cfg)
-         self.conv4 = ConvModule(
-             out_channels, double_out_channels, 3, padding=1, **cfg)
-         self.conv5 = ConvModule(double_out_channels, out_channels, 1, **cfg)
-
-     def forward(self, x):
-         tmp = self.conv1(x)
-         tmp = self.conv2(tmp)
-         tmp = self.conv3(tmp)
-         tmp = self.conv4(tmp)
-         out = self.conv5(tmp)
-         return out
-
-
- @NECKS.register_module()
- class YOLOV3Neck(nn.Module):
-     """The neck of YOLOV3.
-
-     It can be treated as a simplified version of FPN. It
-     will take the result from Darknet backbone and do some upsampling and
-     concatenation. It will finally output the detection result.
-
-     Note:
-         The input feats should be from top to bottom.
-             i.e., from high-lvl to low-lvl
-         But YOLOV3Neck will process them in reversed order.
-             i.e., from bottom (high-lvl) to top (low-lvl)
-
-     Args:
-         num_scales (int): The number of scales / stages.
-         in_channels (int): The number of input channels.
-         out_channels (int): The number of output channels.
-         conv_cfg (dict): Config dict for convolution layer. Default: None.
-         norm_cfg (dict): Dictionary to construct and config norm layer.
-             Default: dict(type='BN', requires_grad=True)
-         act_cfg (dict): Config dict for activation layer.
-             Default: dict(type='LeakyReLU', negative_slope=0.1).
-     """
-
-     def __init__(self,
-                  num_scales,
-                  in_channels,
-                  out_channels,
-                  conv_cfg=None,
-                  norm_cfg=dict(type='BN', requires_grad=True),
-                  act_cfg=dict(type='LeakyReLU', negative_slope=0.1)):
-         super(YOLOV3Neck, self).__init__()
-         assert (num_scales == len(in_channels) == len(out_channels))
-         self.num_scales = num_scales
-         self.in_channels = in_channels
-         self.out_channels = out_channels
-
-         # shortcut
-         cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg)
-
-         # To support arbitrary scales, the code looks awful, but it works.
-         # Better solution is welcomed.
-         self.detect1 = DetectionBlock(in_channels[0], out_channels[0], **cfg)
-         for i in range(1, self.num_scales):
-             in_c, out_c = self.in_channels[i], self.out_channels[i]
-             self.add_module(f'conv{i}', ConvModule(in_c, out_c, 1, **cfg))
-             # in_c + out_c : High-lvl feats will be cat with low-lvl feats
-             self.add_module(f'detect{i+1}',
-                             DetectionBlock(in_c + out_c, out_c, **cfg))
-
-     def forward(self, feats):
-         assert len(feats) == self.num_scales
-
-         # processed from bottom (high-lvl) to top (low-lvl)
-         outs = []
-         out = self.detect1(feats[-1])
-         outs.append(out)
-
-         for i, x in enumerate(reversed(feats[:-1])):
-             conv = getattr(self, f'conv{i+1}')
-             tmp = conv(out)
-
-             # Cat with low-lvl feats
-             tmp = F.interpolate(tmp, scale_factor=2)
-             tmp = torch.cat((tmp, x), 1)
-
-             detect = getattr(self, f'detect{i+2}')
-             out = detect(tmp)
-             outs.append(out)
-
-         return tuple(outs)
-
-     def init_weights(self):
-         """Initialize the weights of module."""
-         # init is done in ConvModule
-         pass
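A shape sketch for the neck, using the channel widths of the standard Darknet-53 YOLOv3 setup (the tuple is ordered low-level first, so `feats[-1]` is the deepest map, matching the forward pass above):

```python
import torch

neck = YOLOV3Neck(
    num_scales=3,
    in_channels=[1024, 512, 256],
    out_channels=[512, 256, 128])
feats = (
    torch.randn(1, 256, 52, 52),   # low-level
    torch.randn(1, 512, 26, 26),
    torch.randn(1, 1024, 13, 13),  # high-level, processed first
)
outs = neck(feats)  # three maps: 512ch@13x13, 256ch@26x26, 128ch@52x52
```

Note that despite the docstring's `in_channels (int)`, both channel arguments are per-scale lists, as the `len()` assertion in `__init__` requires.
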
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/hrf.py DELETED
@@ -1,27 +0,0 @@
- import os.path as osp
-
- from .builder import DATASETS
- from .custom import CustomDataset
-
-
- @DATASETS.register_module()
- class HRFDataset(CustomDataset):
-     """HRF dataset.
-
-     In segmentation map annotation for HRF, 0 stands for background, which is
-     included in 2 categories. ``reduce_zero_label`` is fixed to False. The
-     ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to
-     '.png'.
-     """
-
-     CLASSES = ('background', 'vessel')
-
-     PALETTE = [[120, 120, 120], [6, 230, 230]]
-
-     def __init__(self, **kwargs):
-         super(HRFDataset, self).__init__(
-             img_suffix='.png',
-             seg_map_suffix='.png',
-             reduce_zero_label=False,
-             **kwargs)
-         assert osp.exists(self.img_dir)
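The `@DATASETS.register_module()` decorator makes the class addressable by name from configs. A sketch of the corresponding config fragment, with a hypothetical directory layout standing in for a real HRF download:

```python
# mmseg-style dataset config resolved to HRFDataset via the registry.
dataset = dict(
    type='HRFDataset',
    data_root='data/HRF',            # hypothetical paths
    img_dir='images/training',
    ann_dir='annotations/training',
    pipeline=train_pipeline)         # train_pipeline defined elsewhere in the config
```
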
spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/encoders/modules.py DELETED
@@ -1,213 +0,0 @@
1
- import torch
2
- import torch.nn as nn
3
- from torch.utils.checkpoint import checkpoint
4
-
5
- from transformers import T5Tokenizer, T5EncoderModel, CLIPTokenizer, CLIPTextModel
6
-
7
- import open_clip
8
- from ldm.util import default, count_params
9
-
10
-
11
- class AbstractEncoder(nn.Module):
12
- def __init__(self):
13
- super().__init__()
14
-
15
- def encode(self, *args, **kwargs):
16
- raise NotImplementedError
17
-
18
-
19
- class IdentityEncoder(AbstractEncoder):
20
-
21
- def encode(self, x):
22
- return x
23
-
24
-
25
- class ClassEmbedder(nn.Module):
26
- def __init__(self, embed_dim, n_classes=1000, key='class', ucg_rate=0.1):
27
- super().__init__()
28
- self.key = key
29
- self.embedding = nn.Embedding(n_classes, embed_dim)
30
- self.n_classes = n_classes
31
- self.ucg_rate = ucg_rate
32
-
33
- def forward(self, batch, key=None, disable_dropout=False):
34
- if key is None:
35
- key = self.key
36
- # this is for use in crossattn
37
- c = batch[key][:, None]
38
- if self.ucg_rate > 0. and not disable_dropout:
39
- mask = 1. - torch.bernoulli(torch.ones_like(c) * self.ucg_rate)
40
- c = mask * c + (1-mask) * torch.ones_like(c)*(self.n_classes-1)
41
- c = c.long()
42
- c = self.embedding(c)
43
- return c
44
-
45
- def get_unconditional_conditioning(self, bs, device="cuda"):
46
- uc_class = self.n_classes - 1 # 1000 classes --> 0 ... 999, one extra class for ucg (class 1000)
47
- uc = torch.ones((bs,), device=device) * uc_class
48
- uc = {self.key: uc}
49
- return uc
50
-
51
-
52
- def disabled_train(self, mode=True):
53
- """Overwrite model.train with this function to make sure train/eval mode
54
- does not change anymore."""
55
- return self
56
-
57
-
58
- class FrozenT5Embedder(AbstractEncoder):
59
- """Uses the T5 transformer encoder for text"""
60
- def __init__(self, version="google/t5-v1_1-large", device="cuda", max_length=77, freeze=True): # others are google/t5-v1_1-xl and google/t5-v1_1-xxl
61
- super().__init__()
62
- self.tokenizer = T5Tokenizer.from_pretrained(version)
63
- self.transformer = T5EncoderModel.from_pretrained(version)
64
- self.device = device
65
- self.max_length = max_length # TODO: typical value?
66
- if freeze:
67
- self.freeze()
68
-
69
- def freeze(self):
70
- self.transformer = self.transformer.eval()
71
- #self.train = disabled_train
72
- for param in self.parameters():
73
- param.requires_grad = False
74
-
75
- def forward(self, text):
76
- batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True,
77
- return_overflowing_tokens=False, padding="max_length", return_tensors="pt")
78
- tokens = batch_encoding["input_ids"].to(self.device)
79
- outputs = self.transformer(input_ids=tokens)
80
-
81
- z = outputs.last_hidden_state
82
- return z
83
-
84
- def encode(self, text):
85
- return self(text)
86
-
87
-
88
- class FrozenCLIPEmbedder(AbstractEncoder):
89
- """Uses the CLIP transformer encoder for text (from huggingface)"""
90
- LAYERS = [
91
- "last",
92
- "pooled",
93
- "hidden"
94
- ]
95
- def __init__(self, version="openai/clip-vit-large-patch14", device="cuda", max_length=77,
96
- freeze=True, layer="last", layer_idx=None): # clip-vit-base-patch32
97
- super().__init__()
98
- assert layer in self.LAYERS
99
- self.tokenizer = CLIPTokenizer.from_pretrained(version)
100
- self.transformer = CLIPTextModel.from_pretrained(version)
101
- self.device = device
102
- self.max_length = max_length
103
- if freeze:
104
- self.freeze()
105
-         self.layer = layer
-         self.layer_idx = layer_idx
-         if layer == "hidden":
-             assert layer_idx is not None
-             assert 0 <= abs(layer_idx) <= 12
-
-     def freeze(self):
-         self.transformer = self.transformer.eval()
-         #self.train = disabled_train
-         for param in self.parameters():
-             param.requires_grad = False
-
-     def forward(self, text):
-         batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True,
-                                         return_overflowing_tokens=False, padding="max_length", return_tensors="pt")
-         tokens = batch_encoding["input_ids"].to(self.device)
-         outputs = self.transformer(input_ids=tokens, output_hidden_states=self.layer == "hidden")
-         if self.layer == "last":
-             z = outputs.last_hidden_state
-         elif self.layer == "pooled":
-             z = outputs.pooler_output[:, None, :]
-         else:
-             z = outputs.hidden_states[self.layer_idx]
-         return z
-
-     def encode(self, text):
-         return self(text)
-
-
- class FrozenOpenCLIPEmbedder(AbstractEncoder):
-     """
-     Uses the OpenCLIP transformer encoder for text
-     """
-     LAYERS = [
-         #"pooled",
-         "last",
-         "penultimate"
-     ]
-     def __init__(self, arch="ViT-H-14", version="laion2b_s32b_b79k", device="cuda", max_length=77,
-                  freeze=True, layer="last"):
-         super().__init__()
-         assert layer in self.LAYERS
-         model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version)
-         del model.visual
-         self.model = model
-
-         self.device = device
-         self.max_length = max_length
-         if freeze:
-             self.freeze()
-         self.layer = layer
-         if self.layer == "last":
-             self.layer_idx = 0
-         elif self.layer == "penultimate":
-             self.layer_idx = 1
-         else:
-             raise NotImplementedError()
-
-     def freeze(self):
-         self.model = self.model.eval()
-         for param in self.parameters():
-             param.requires_grad = False
-
-     def forward(self, text):
-         tokens = open_clip.tokenize(text)
-         z = self.encode_with_transformer(tokens.to(self.device))
-         return z
-
-     def encode_with_transformer(self, text):
-         x = self.model.token_embedding(text)  # [batch_size, n_ctx, d_model]
-         x = x + self.model.positional_embedding
-         x = x.permute(1, 0, 2)  # NLD -> LND
-         x = self.text_transformer_forward(x, attn_mask=self.model.attn_mask)
-         x = x.permute(1, 0, 2)  # LND -> NLD
-         x = self.model.ln_final(x)
-         return x
-
-     def text_transformer_forward(self, x: torch.Tensor, attn_mask=None):
-         for i, r in enumerate(self.model.transformer.resblocks):
-             if i == len(self.model.transformer.resblocks) - self.layer_idx:
-                 break
-             if self.model.transformer.grad_checkpointing and not torch.jit.is_scripting():
-                 x = checkpoint(r, x, attn_mask)
-             else:
-                 x = r(x, attn_mask=attn_mask)
-         return x
-
-     def encode(self, text):
-         return self(text)
-
-
- class FrozenCLIPT5Encoder(AbstractEncoder):
-     def __init__(self, clip_version="openai/clip-vit-large-patch14", t5_version="google/t5-v1_1-xl", device="cuda",
-                  clip_max_length=77, t5_max_length=77):
-         super().__init__()
-         self.clip_encoder = FrozenCLIPEmbedder(clip_version, device, max_length=clip_max_length)
-         self.t5_encoder = FrozenT5Embedder(t5_version, device, max_length=t5_max_length)
-         print(f"{self.clip_encoder.__class__.__name__} has {count_params(self.clip_encoder) * 1.e-6:.2f} M parameters, "
-               f"{self.t5_encoder.__class__.__name__} comes with {count_params(self.t5_encoder) * 1.e-6:.2f} M params.")
-
-     def encode(self, text):
-         return self(text)
-
-     def forward(self, text):
-         clip_z = self.clip_encoder.encode(text)
-         t5_z = self.t5_encoder.encode(text)
-         return [clip_z, t5_z]
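For reference, a minimal usage sketch of the frozen OpenCLIP text encoder above (not part of the deleted file; the prompt string and the CPU device choice are illustrative assumptions, and the class relies on the module's own imports such as `open_clip` and `torch`):

```python
import torch

# Instantiate on CPU: the class builds the OpenCLIP model on CPU and only
# moves token tensors to self.device, so "cpu" keeps this sketch consistent.
encoder = FrozenOpenCLIPEmbedder(device="cpu", layer="penultimate")

with torch.no_grad():
    z = encoder.encode(["a photograph of an astronaut riding a horse"])

# Token-level features from the penultimate transformer block;
# for ViT-H-14 the expected shape is [1, 77, 1024].
print(z.shape)
```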
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/distributions/sdist.py DELETED
@@ -1,150 +0,0 @@
- import logging
- from typing import Iterable, Set, Tuple
-
- from pip._internal.build_env import BuildEnvironment
- from pip._internal.distributions.base import AbstractDistribution
- from pip._internal.exceptions import InstallationError
- from pip._internal.index.package_finder import PackageFinder
- from pip._internal.metadata import BaseDistribution
- from pip._internal.utils.subprocess import runner_with_spinner_message
-
- logger = logging.getLogger(__name__)
-
-
- class SourceDistribution(AbstractDistribution):
-     """Represents a source distribution.
-
-     The preparation step for these needs metadata for the packages to be
-     generated, either using PEP 517 or using the legacy `setup.py egg_info`.
-     """
-
-     def get_metadata_distribution(self) -> BaseDistribution:
-         return self.req.get_dist()
-
-     def prepare_distribution_metadata(
-         self,
-         finder: PackageFinder,
-         build_isolation: bool,
-         check_build_deps: bool,
-     ) -> None:
-         # Load pyproject.toml, to determine whether PEP 517 is to be used
-         self.req.load_pyproject_toml()
-
-         # Set up the build isolation, if this requirement should be isolated
-         should_isolate = self.req.use_pep517 and build_isolation
-         if should_isolate:
-             # Setup an isolated environment and install the build backend static
-             # requirements in it.
-             self._prepare_build_backend(finder)
-             # Check that if the requirement is editable, it either supports PEP 660 or
-             # has a setup.py or a setup.cfg. This cannot be done earlier because we need
-             # to setup the build backend to verify it supports build_editable, nor can
-             # it be done later, because we want to avoid installing build requirements
-             # needlessly. Doing it here also works around setuptools generating
-             # UNKNOWN.egg-info when running get_requires_for_build_wheel on a directory
-             # without setup.py nor setup.cfg.
-             self.req.isolated_editable_sanity_check()
-             # Install the dynamic build requirements.
-             self._install_build_reqs(finder)
-         # Check if the current environment provides build dependencies
-         should_check_deps = self.req.use_pep517 and check_build_deps
-         if should_check_deps:
-             pyproject_requires = self.req.pyproject_requires
-             assert pyproject_requires is not None
-             conflicting, missing = self.req.build_env.check_requirements(
-                 pyproject_requires
-             )
-             if conflicting:
-                 self._raise_conflicts("the backend dependencies", conflicting)
-             if missing:
-                 self._raise_missing_reqs(missing)
-         self.req.prepare_metadata()
-
-     def _prepare_build_backend(self, finder: PackageFinder) -> None:
-         # Isolate in a BuildEnvironment and install the build-time
-         # requirements.
-         pyproject_requires = self.req.pyproject_requires
-         assert pyproject_requires is not None
-
-         self.req.build_env = BuildEnvironment()
-         self.req.build_env.install_requirements(
-             finder, pyproject_requires, "overlay", kind="build dependencies"
-         )
-         conflicting, missing = self.req.build_env.check_requirements(
-             self.req.requirements_to_check
-         )
-         if conflicting:
-             self._raise_conflicts("PEP 517/518 supported requirements", conflicting)
-         if missing:
-             logger.warning(
-                 "Missing build requirements in pyproject.toml for %s.",
-                 self.req,
-             )
-             logger.warning(
-                 "The project does not specify a build backend, and "
-                 "pip cannot fall back to setuptools without %s.",
-                 " and ".join(map(repr, sorted(missing))),
-             )
-
-     def _get_build_requires_wheel(self) -> Iterable[str]:
-         with self.req.build_env:
-             runner = runner_with_spinner_message("Getting requirements to build wheel")
-             backend = self.req.pep517_backend
-             assert backend is not None
-             with backend.subprocess_runner(runner):
-                 return backend.get_requires_for_build_wheel()
-
-     def _get_build_requires_editable(self) -> Iterable[str]:
-         with self.req.build_env:
-             runner = runner_with_spinner_message(
-                 "Getting requirements to build editable"
-             )
-             backend = self.req.pep517_backend
-             assert backend is not None
-             with backend.subprocess_runner(runner):
-                 return backend.get_requires_for_build_editable()
-
-     def _install_build_reqs(self, finder: PackageFinder) -> None:
-         # Install any extra build dependencies that the backend requests.
-         # This must be done in a second pass, as the pyproject.toml
-         # dependencies must be installed before we can call the backend.
-         if (
-             self.req.editable
-             and self.req.permit_editable_wheels
-             and self.req.supports_pyproject_editable()
-         ):
-             build_reqs = self._get_build_requires_editable()
-         else:
-             build_reqs = self._get_build_requires_wheel()
-         conflicting, missing = self.req.build_env.check_requirements(build_reqs)
-         if conflicting:
-             self._raise_conflicts("the backend dependencies", conflicting)
-         self.req.build_env.install_requirements(
-             finder, missing, "normal", kind="backend dependencies"
-         )
-
-     def _raise_conflicts(
-         self, conflicting_with: str, conflicting_reqs: Set[Tuple[str, str]]
-     ) -> None:
-         format_string = (
-             "Some build dependencies for {requirement} "
-             "conflict with {conflicting_with}: {description}."
-         )
-         error_message = format_string.format(
-             requirement=self.req,
-             conflicting_with=conflicting_with,
-             description=", ".join(
-                 f"{installed} is incompatible with {wanted}"
-                 for installed, wanted in sorted(conflicting_reqs)
-             ),
-         )
-         raise InstallationError(error_message)
-
-     def _raise_missing_reqs(self, missing: Set[str]) -> None:
-         format_string = (
-             "Some build dependencies for {requirement} are missing: {missing}."
-         )
-         error_message = format_string.format(
-             requirement=self.req, missing=", ".join(map(repr, sorted(missing)))
-         )
-         raise InstallationError(error_message)
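The class above is pip-internal, but the PEP 517 hook flow it wraps can be sketched with the standalone `pyproject_hooks` package. A rough, hypothetical example (the source directory and backend name are assumptions; real projects declare their backend in `pyproject.toml`):

```python
from pyproject_hooks import BuildBackendHookCaller

# Ask the build backend for its dynamic build requirements, mirroring what
# _get_build_requires_wheel() does inside pip's isolated build environment.
hooks = BuildBackendHookCaller(".", build_backend="setuptools.build_meta")
print(hooks.get_requires_for_build_wheel())
```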
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/actions.py DELETED
@@ -1,207 +0,0 @@
- # actions.py
-
- from .exceptions import ParseException
- from .util import col
-
-
- class OnlyOnce:
-     """
-     Wrapper for parse actions, to ensure they are only called once.
-     """
-
-     def __init__(self, method_call):
-         from .core import _trim_arity
-
-         self.callable = _trim_arity(method_call)
-         self.called = False
-
-     def __call__(self, s, l, t):
-         if not self.called:
-             results = self.callable(s, l, t)
-             self.called = True
-             return results
-         raise ParseException(s, l, "OnlyOnce obj called multiple times w/out reset")
-
-     def reset(self):
-         """
-         Allow the associated parse action to be called once more.
-         """
-
-         self.called = False
-
-
- def match_only_at_col(n):
-     """
-     Helper method for defining parse actions that require matching at
-     a specific column in the input text.
-     """
-
-     def verify_col(strg, locn, toks):
-         if col(locn, strg) != n:
-             raise ParseException(strg, locn, "matched token not at column {}".format(n))
-
-     return verify_col
-
-
- def replace_with(repl_str):
-     """
-     Helper method for common parse actions that simply return
-     a literal value. Especially useful when used with
-     :class:`transform_string<ParserElement.transform_string>` ().
-
-     Example::
-
-         num = Word(nums).set_parse_action(lambda toks: int(toks[0]))
-         na = one_of("N/A NA").set_parse_action(replace_with(math.nan))
-         term = na | num
-
-         term[1, ...].parse_string("324 234 N/A 234")  # -> [324, 234, nan, 234]
-     """
-     return lambda s, l, t: [repl_str]
-
-
- def remove_quotes(s, l, t):
-     """
-     Helper parse action for removing quotation marks from parsed
-     quoted strings.
-
-     Example::
-
-         # by default, quotation marks are included in parsed results
-         quoted_string.parse_string("'Now is the Winter of our Discontent'")  # -> ["'Now is the Winter of our Discontent'"]
-
-         # use remove_quotes to strip quotation marks from parsed results
-         quoted_string.set_parse_action(remove_quotes)
-         quoted_string.parse_string("'Now is the Winter of our Discontent'")  # -> ["Now is the Winter of our Discontent"]
-     """
-     return t[0][1:-1]
-
-
- def with_attribute(*args, **attr_dict):
-     """
-     Helper to create a validating parse action to be used with start
-     tags created with :class:`make_xml_tags` or
-     :class:`make_html_tags`. Use ``with_attribute`` to qualify
-     a starting tag with a required attribute value, to avoid false
-     matches on common tags such as ``<TD>`` or ``<DIV>``.
-
-     Call ``with_attribute`` with a series of attribute names and
-     values. Specify the list of filter attributes names and values as:
-
-     - keyword arguments, as in ``(align="right")``, or
-     - as an explicit dict with ``**`` operator, when an attribute
-       name is also a Python reserved word, as in ``**{"class":"Customer", "align":"right"}``
-     - a list of name-value tuples, as in ``(("ns1:class", "Customer"), ("ns2:align", "right"))``
-
-     For attribute names with a namespace prefix, you must use the second
-     form. Attribute names are matched insensitive to upper/lower case.
-
-     If just testing for ``class`` (with or without a namespace), use
-     :class:`with_class`.
-
-     To verify that the attribute exists, but without specifying a value,
-     pass ``with_attribute.ANY_VALUE`` as the value.
-
-     Example::
-
-         html = '''
-             <div>
-             Some text
-             <div type="grid">1 4 0 1 0</div>
-             <div type="graph">1,3 2,3 1,1</div>
-             <div>this has no type</div>
-             </div>
-
-         '''
-         div,div_end = make_html_tags("div")
-
-         # only match div tag having a type attribute with value "grid"
-         div_grid = div().set_parse_action(with_attribute(type="grid"))
-         grid_expr = div_grid + SkipTo(div | div_end)("body")
-         for grid_header in grid_expr.search_string(html):
-             print(grid_header.body)
-
-         # construct a match with any div tag having a type attribute, regardless of the value
-         div_any_type = div().set_parse_action(with_attribute(type=with_attribute.ANY_VALUE))
-         div_expr = div_any_type + SkipTo(div | div_end)("body")
-         for div_header in div_expr.search_string(html):
-             print(div_header.body)
-
-     prints::
-
-         1 4 0 1 0
-
-         1 4 0 1 0
-         1,3 2,3 1,1
-     """
-     if args:
-         attrs = args[:]
-     else:
-         attrs = attr_dict.items()
-     attrs = [(k, v) for k, v in attrs]
-
-     def pa(s, l, tokens):
-         for attrName, attrValue in attrs:
-             if attrName not in tokens:
-                 raise ParseException(s, l, "no matching attribute " + attrName)
-             if attrValue != with_attribute.ANY_VALUE and tokens[attrName] != attrValue:
-                 raise ParseException(
-                     s,
-                     l,
-                     "attribute {!r} has value {!r}, must be {!r}".format(
-                         attrName, tokens[attrName], attrValue
-                     ),
-                 )
-
-     return pa
-
-
- with_attribute.ANY_VALUE = object()
-
-
- def with_class(classname, namespace=""):
-     """
-     Simplified version of :class:`with_attribute` when
-     matching on a div class - made difficult because ``class`` is
-     a reserved word in Python.
-
-     Example::
-
-         html = '''
-             <div>
-             Some text
-             <div class="grid">1 4 0 1 0</div>
-             <div class="graph">1,3 2,3 1,1</div>
-             <div>this &lt;div&gt; has no class</div>
-             </div>
-
-         '''
-         div,div_end = make_html_tags("div")
-         div_grid = div().set_parse_action(with_class("grid"))
-
-         grid_expr = div_grid + SkipTo(div | div_end)("body")
-         for grid_header in grid_expr.search_string(html):
-             print(grid_header.body)
-
-         div_any_type = div().set_parse_action(with_class(withAttribute.ANY_VALUE))
-         div_expr = div_any_type + SkipTo(div | div_end)("body")
-         for div_header in div_expr.search_string(html):
-             print(div_header.body)
-
-     prints::
-
-         1 4 0 1 0
-
-         1 4 0 1 0
-         1,3 2,3 1,1
-     """
-     classattr = "{}:class".format(namespace) if namespace else "class"
-     return with_attribute(**{classattr: classname})
-
-
- # pre-PEP8 compatibility symbols
- replaceWith = replace_with
- removeQuotes = remove_quotes
- withAttribute = with_attribute
- withClass = with_class
- matchOnlyAtCol = match_only_at_col
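A self-contained usage sketch for the parse actions above, adapted from the `replace_with` docstring (assumes pyparsing 3.x is installed):

```python
import math

from pyparsing import Word, nums, one_of, replace_with

num = Word(nums).set_parse_action(lambda toks: int(toks[0]))
na = one_of("N/A NA").set_parse_action(replace_with(math.nan))
term = na | num

# Parses each whitespace-separated token, replacing "N/A" with math.nan.
print(term[1, ...].parse_string("324 234 N/A 234"))  # -> [324, 234, nan, 234]
```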
spaces/Awiny/Image2Paragraph/models/grit_src/grit/modeling/text/file_utils.py DELETED
@@ -1,256 +0,0 @@
- # Utilities for working with the local dataset cache.
- # This file is adapted from the AllenNLP library at https://github.com/allenai/allennlp
- # Copyright by the AllenNLP authors.
-
- from __future__ import absolute_import, division, print_function, unicode_literals
-
- import sys
- import json
- import logging
- import os
- import shutil
- import tempfile
- import fnmatch
- from functools import wraps
- from hashlib import sha256
- from io import open
-
- import boto3
- import requests
- from botocore.exceptions import ClientError
- from tqdm import tqdm
-
- try:
-     from torch.hub import _get_torch_home
-     torch_cache_home = _get_torch_home()
- except ImportError:
-     torch_cache_home = os.path.expanduser(
-         os.getenv('TORCH_HOME', os.path.join(
-             os.getenv('XDG_CACHE_HOME', '~/.cache'), 'torch')))
- default_cache_path = os.path.join(torch_cache_home, 'pytorch_transformers')
-
- try:
-     from urllib.parse import urlparse
- except ImportError:
-     from urlparse import urlparse
-
- try:
-     from pathlib import Path
-     PYTORCH_PRETRAINED_BERT_CACHE = Path(
-         os.getenv('PYTORCH_PRETRAINED_BERT_CACHE', default_cache_path))
- except (AttributeError, ImportError):
-     PYTORCH_PRETRAINED_BERT_CACHE = os.getenv('PYTORCH_PRETRAINED_BERT_CACHE',
-                                               default_cache_path)
-
- logger = logging.getLogger(__name__)  # pylint: disable=invalid-name
-
-
- def url_to_filename(url, etag=None):
-     """
-     Convert `url` into a hashed filename in a repeatable way.
-     If `etag` is specified, append its hash to the url's, delimited
-     by a period.
-     """
-     url_bytes = url.encode('utf-8')
-     url_hash = sha256(url_bytes)
-     filename = url_hash.hexdigest()
-
-     if etag:
-         etag_bytes = etag.encode('utf-8')
-         etag_hash = sha256(etag_bytes)
-         filename += '.' + etag_hash.hexdigest()
-
-     return filename
-
-
- def filename_to_url(filename, cache_dir=None):
-     """
-     Return the url and etag (which may be ``None``) stored for `filename`.
-     Raise ``EnvironmentError`` if `filename` or its stored metadata do not exist.
-     """
-     if cache_dir is None:
-         cache_dir = PYTORCH_PRETRAINED_BERT_CACHE
-     if sys.version_info[0] == 3 and isinstance(cache_dir, Path):
-         cache_dir = str(cache_dir)
-
-     cache_path = os.path.join(cache_dir, filename)
-     if not os.path.exists(cache_path):
-         raise EnvironmentError("file {} not found".format(cache_path))
-
-     meta_path = cache_path + '.json'
-     if not os.path.exists(meta_path):
-         raise EnvironmentError("file {} not found".format(meta_path))
-
-     with open(meta_path, encoding="utf-8") as meta_file:
-         metadata = json.load(meta_file)
-     url = metadata['url']
-     etag = metadata['etag']
-
-     return url, etag
-
-
- def cached_path(url_or_filename, cache_dir=None):
-     """
-     Given something that might be a URL (or might be a local path),
-     determine which. If it's a URL, download the file and cache it, and
-     return the path to the cached file. If it's already a local path,
-     make sure the file exists and then return the path.
-     """
-     if cache_dir is None:
-         cache_dir = PYTORCH_PRETRAINED_BERT_CACHE
-     if sys.version_info[0] == 3 and isinstance(url_or_filename, Path):
-         url_or_filename = str(url_or_filename)
-     if sys.version_info[0] == 3 and isinstance(cache_dir, Path):
-         cache_dir = str(cache_dir)
-
-     parsed = urlparse(url_or_filename)
-
-     if parsed.scheme in ('http', 'https', 's3'):
-         # URL, so get it from the cache (downloading if necessary)
-         return get_from_cache(url_or_filename, cache_dir)
-     elif os.path.exists(url_or_filename):
-         # File, and it exists.
-         return url_or_filename
-     elif parsed.scheme == '':
-         # File, but it doesn't exist.
-         raise EnvironmentError("file {} not found".format(url_or_filename))
-     else:
-         # Something unknown
-         raise ValueError("unable to parse {} as a URL or as a local path".format(url_or_filename))
-
-
- def split_s3_path(url):
-     """Split a full s3 path into the bucket name and path."""
-     parsed = urlparse(url)
-     if not parsed.netloc or not parsed.path:
-         raise ValueError("bad s3 path {}".format(url))
-     bucket_name = parsed.netloc
-     s3_path = parsed.path
-     # Remove '/' at beginning of path.
-     if s3_path.startswith("/"):
-         s3_path = s3_path[1:]
-     return bucket_name, s3_path
-
-
- def s3_request(func):
-     """
-     Wrapper function for s3 requests in order to create more helpful error
-     messages.
-     """
-
-     @wraps(func)
-     def wrapper(url, *args, **kwargs):
-         try:
-             return func(url, *args, **kwargs)
-         except ClientError as exc:
-             if int(exc.response["Error"]["Code"]) == 404:
-                 raise EnvironmentError("file {} not found".format(url))
-             else:
-                 raise
-
-     return wrapper
-
-
- @s3_request
- def s3_etag(url):
-     """Check ETag on S3 object."""
-     s3_resource = boto3.resource("s3")
-     bucket_name, s3_path = split_s3_path(url)
-     s3_object = s3_resource.Object(bucket_name, s3_path)
-     return s3_object.e_tag
-
-
- @s3_request
- def s3_get(url, temp_file):
-     """Pull a file directly from S3."""
-     s3_resource = boto3.resource("s3")
-     bucket_name, s3_path = split_s3_path(url)
-     s3_resource.Bucket(bucket_name).download_fileobj(s3_path, temp_file)
-
-
- def http_get(url, temp_file):
-     req = requests.get(url, stream=True)
-     content_length = req.headers.get('Content-Length')
-     total = int(content_length) if content_length is not None else None
-     progress = tqdm(unit="B", total=total)
-     for chunk in req.iter_content(chunk_size=1024):
-         if chunk:  # filter out keep-alive new chunks
-             progress.update(len(chunk))
-             temp_file.write(chunk)
-     progress.close()
-
-
- def get_from_cache(url, cache_dir=None):
-     """
-     Given a URL, look for the corresponding dataset in the local cache.
-     If it's not there, download it. Then return the path to the cached file.
-     """
-     if cache_dir is None:
-         cache_dir = PYTORCH_PRETRAINED_BERT_CACHE
-     if sys.version_info[0] == 3 and isinstance(cache_dir, Path):
-         cache_dir = str(cache_dir)
-     if sys.version_info[0] == 2 and not isinstance(cache_dir, str):
-         cache_dir = str(cache_dir)
-
-     if not os.path.exists(cache_dir):
-         os.makedirs(cache_dir)
-
-     # Get eTag to add to filename, if it exists.
-     if url.startswith("s3://"):
-         etag = s3_etag(url)
-     else:
-         try:
-             response = requests.head(url, allow_redirects=True)
-             if response.status_code != 200:
-                 etag = None
-             else:
-                 etag = response.headers.get("ETag")
-         except EnvironmentError:
-             etag = None
-
-     if sys.version_info[0] == 2 and etag is not None:
-         etag = etag.decode('utf-8')
-     filename = url_to_filename(url, etag)
-
-     # get cache path to put the file
-     cache_path = os.path.join(cache_dir, filename)
-
-     # If we don't have a connection (etag is None) and can't identify the file
-     # try to get the last downloaded one
-     if not os.path.exists(cache_path) and etag is None:
-         matching_files = fnmatch.filter(os.listdir(cache_dir), filename + '.*')
-         matching_files = list(filter(lambda s: not s.endswith('.json'), matching_files))
-         if matching_files:
-             cache_path = os.path.join(cache_dir, matching_files[-1])
-
-     if not os.path.exists(cache_path):
-         # Download to temporary file, then copy to cache dir once finished.
-         # Otherwise you get corrupt cache entries if the download gets interrupted.
-         with tempfile.NamedTemporaryFile() as temp_file:
-             logger.info("%s not found in cache, downloading to %s", url, temp_file.name)
-
-             # GET file object
-             if url.startswith("s3://"):
-                 s3_get(url, temp_file)
-             else:
-                 http_get(url, temp_file)
-
-             # we are copying the file before closing it, so flush to avoid truncation
-             temp_file.flush()
-             # shutil.copyfileobj() starts at the current position, so go to the start
-             temp_file.seek(0)
-
-             logger.info("copying %s to cache at %s", temp_file.name, cache_path)
-             with open(cache_path, 'wb') as cache_file:
-                 shutil.copyfileobj(temp_file, cache_file)
-
-             logger.info("creating metadata file for %s", cache_path)
-             meta = {'url': url, 'etag': etag}
-             meta_path = cache_path + '.json'
-             with open(meta_path, 'w') as meta_file:
-                 output_string = json.dumps(meta)
-                 meta_file.write(output_string)
-
-             logger.info("removing temp file %s", temp_file.name)
-
-     return cache_path
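A short, hypothetical usage sketch for the caching helpers above (the URL and cache directory are placeholders, not values the project actually uses):

```python
# First call downloads to <cache_dir>/<sha256(url)>.<sha256(etag)> plus a
# .json metadata sidecar; later calls return the cached copy.
local_path = cached_path(
    "https://example.com/bert-base-uncased-vocab.txt",
    cache_dir="/tmp/pytorch_transformers_cache",
)
print(local_path)
```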
spaces/BIOML-SVM/SVM/README.md DELETED
@@ -1,54 +0,0 @@
- ---
- # https://huggingface.co/docs/hub/spaces-config-reference
- title: SVM
- emoji: 🧬
- colorFrom: green
- colorTo: green
- sdk: gradio
- app_file: app.py
- pinned: false
- models:
- - InstaDeepAI/nucleotide-transformer-500m-1000g
- - facebook/esmfold_v1
- - sentence-transformers/all-mpnet-base-v2
- python_version: 3.10.4
- license: mit
- ---
-
- # ProteinBind
-
- [![View on GitHub](https://img.shields.io/badge/-View%20on%20GitHub-000?style=flat&logo=github&logoColor=white&link=https://github.com/svm-ai/svm-hackathon)](https://github.com/svm-ai/svm-hackathon)
-
- ## ML-Driven Bioinformatics for Protein Mutation Analysis
-
- This repository contains the source code and resources for our bioinformatics project, which aims to identify how gene/protein mutations alter function and which mutations can be pathogenic. Our approach is ML-driven and uses a multimodal contrastive learning framework inspired by the ImageBind model from Meta AI.
-
- ## Project Goal
-
- Our goal is to develop a method that can predict the effect of sequence variation on the function of genes/proteins. This information is critical for understanding gene/protein function, designing new proteins, and aiding drug discovery. By modeling these effects, we can better select patients for clinical trials and modify existing drug-like molecules to treat previously untreated populations of the same disease with different mutations.
-
- ## Model Description
-
- Our model uses contrastive learning across several modalities, including amino acid (AA) sequences, Gene Ontology (GO) annotations, multiple sequence alignment (MSA), 3D structure, text annotations, and DNA sequences.
-
- We use the following encoders for each modality:
-
- - AA sequences: ESM v1/v2 by Meta AI
- - Text annotations: Sentence-BERT (SBERT)
- - 3D structure: ESMFold by Meta AI
- - DNA nucleotide sequence: Nucleotide-Transformer
- - MSA sequence: MSA-Transformer
-
- The NT-Xent loss function is used for contrastive learning; a sketch is given below.
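A minimal NT-Xent sketch for one pair of modality embeddings might look like the following; the batch pairing convention, temperature value, and L2 normalization are standard assumptions, not details taken from this repository:

```python
import torch
import torch.nn.functional as F

def nt_xent(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric NT-Xent over aligned pairs: z_a[i] should match z_b[i]."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature  # (N, N) cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # cross-entropy in both retrieval directions (a->b and b->a)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

loss = nt_xent(torch.randn(8, 256), torch.randn(8, 256))
```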
-
- ## Getting Started
-
- Clone the repository and install the necessary dependencies. Note that we assume you have already installed Git Large File Storage (Git LFS), as some files in this repository are tracked with Git LFS.
-
- ## Contributing
-
- Contributions are welcome! Please read the contributing guidelines before getting started.
-
- ## License
-
- This project is licensed under the terms of the MIT license.
spaces/Banbri/zcvzcv/Dockerfile DELETED
@@ -1,65 +0,0 @@
- FROM node:18-alpine AS base
-
- # Install dependencies only when needed
- FROM base AS deps
- # Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
- RUN apk add --no-cache libc6-compat
- WORKDIR /app
-
- # Install dependencies based on the preferred package manager
- COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
- RUN \
-   if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
-   elif [ -f package-lock.json ]; then npm ci; \
-   elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
-   else echo "Lockfile not found." && exit 1; \
-   fi
-
- # Uncomment the following lines if you want to use a secret at buildtime,
- # for example to access your private npm packages
- # RUN --mount=type=secret,id=HF_EXAMPLE_SECRET,mode=0444,required=true \
- #     $(cat /run/secrets/HF_EXAMPLE_SECRET)
-
- # Rebuild the source code only when needed
- FROM base AS builder
- WORKDIR /app
- COPY --from=deps /app/node_modules ./node_modules
- COPY . .
-
- # Next.js collects completely anonymous telemetry data about general usage.
- # Learn more here: https://nextjs.org/telemetry
- # Uncomment the following line in case you want to disable telemetry during the build.
- # ENV NEXT_TELEMETRY_DISABLED 1
-
- # RUN yarn build
-
- # If you use yarn, comment out this line and use the line above
- RUN npm run build
-
- # Production image, copy all the files and run next
- FROM base AS runner
- WORKDIR /app
-
- ENV NODE_ENV production
- # Uncomment the following line in case you want to disable telemetry during runtime.
- # ENV NEXT_TELEMETRY_DISABLED 1
-
- RUN addgroup --system --gid 1001 nodejs
- RUN adduser --system --uid 1001 nextjs
-
- COPY --from=builder /app/public ./public
-
- # Automatically leverage output traces to reduce image size
- # https://nextjs.org/docs/advanced-features/output-file-tracing
- COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
- COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
- COPY --from=builder --chown=nextjs:nodejs /app/.next/cache ./.next/cache
- # COPY --from=builder --chown=nextjs:nodejs /app/.next/cache/fetch-cache ./.next/cache/fetch-cache
-
- USER nextjs
-
- EXPOSE 3000
-
- ENV PORT 3000
-
- CMD ["node", "server.js"]
spaces/Banbri/zcvzcv/src/lib/pick.ts DELETED
@@ -1,2 +0,0 @@
-
- export const pick = (items: string[]) => items[Math.floor(Math.random() * items.length)]
spaces/Basil2k4/VPSnguyenmanh/src/create_user_and_fix_permissions.sh DELETED
@@ -1,47 +0,0 @@
- #!/bin/bash
- ## Creates an ordinary non-root VNC_USER and calls the script to fix the file permissions
-
- ### every exit != 0 fails the script
- set -e
- set -u
-
- UNAME=0
- UGROUP=0
-
- if [[ -n "${VNC_USER}" ]] ; then
-     case "$VNC_USER" in
-         root|0) UNAME=root; UGROUP=$UNAME;;      # exact match
-         root:*|0:*) UNAME=root; UGROUP=$UNAME;;  # match from the beginning
-         *:root|*:0) UNAME=root; UGROUP=$UNAME;;  # match at the end
-         *) UNAME=${VNC_USER/%:*/}; UGROUP=${VNC_USER/#*:/};;  # else case
-     esac
-
-     if [[ "$UGROUP" != "" && "$UGROUP" != "root" ]] ; then
-
-         ### Creates the group only if it does not exist yet
-         echo "Creating group $UGROUP if needed"
-         groupadd -f $UGROUP
-
-         ### Returns "0" if the user exists, or "1" otherwise
-         missing_user=$(id -u $UNAME > /dev/null 2>&1; echo $?)
-
-         if [[ $missing_user != 0 ]] ; then
-             echo "Creating non-root user \"$VNC_USER\"."
-             useradd --no-log-init --gid $UGROUP --home-dir $HOME --shell /bin/bash --password $VNC_PW $UNAME
-         fi
-     else
-         echo "Will not create root user \"$VNC_USER\"."
-     fi
- fi
-
- FIXING="Fixing permissions: "
-
- for var in "$@"
- do
-     echo "$FIXING $var"
-     find "$var"/ -name '*.sh' -exec chmod a+x {} +
-     find "$var"/ -name '*.desktop' -exec chmod a+x {} +
-
-     ### folder and its content belong to the group zero (recursively)
-     chgrp -R 0 "$var" && chmod -R -v a+rw "$var" && find "$var" -type d -exec chmod -v a+x {} +
- done
spaces/Benson/text-generation/Examples/Banderas De Pases.md DELETED
@@ -1,83 +0,0 @@
-
- <h1>UnlockGo Crack Download: Is It Worth It?</h1>
- <p>If you have ever forgotten your password, PIN, pattern, or Face ID on your iPhone or Android device, you know how frustrating it can be. You can lose access to your data, apps, contacts, photos, and more. You may also run into the iCloud Activation Lock or Google FRP lock, which prevents you from setting up your device after a factory reset.</p>
- <h2>country flags</h2><br /><p><b><b>Download File</b> &#9989; <a href="https://bltlly.com/2v6KJ1">https://bltlly.com/2v6KJ1</a></b></p><br /><br />
- <p>Fortunately, there is a solution that can help you bypass these locks and regain control of your device. It is called <strong>UnlockGo</strong>, a powerful tool that can remove various types of locks on iOS and Android devices without any password or data loss.</p>
- <p>But what if you do not want to pay for the full version of UnlockGo? You may be tempted to look for a <strong>crack</strong>, a modified version of the software that bypasses license verification and lets you use it for free. However, this is not a good idea, as using a crack comes with many risks and drawbacks. In this article, we explain why you should avoid using a crack and how to get the best value from UnlockGo.</p>
- <h2>What are the risks of using a crack?</h2>
- <p>A crack may seem like an easy way to save money, but it comes with many disadvantages and dangers. Here are some of them:</p>
- <ul>
- <li><strong>Device slowdown</strong>: The crack may contain malicious code that can infect your device and make it run slower or crash entirely.</li>
- <li><strong>Virus risk</strong>: Websites offering cracks may also contain viruses that can damage your device or steal your personal information.</li>
- <li><strong>Malware</strong>: The crack may also install malware on your device that can spy on your activity, show ads, or redirect you to unwanted websites.</li>
- <li><strong>Privacy violation</strong>: The crack may also access your data and send it to third parties without your consent.</li>
-
- <li><strong>No support</strong>: The crack may not work properly or may cause errors you cannot fix. You will not be able to contact UnlockGo customer support or get any refund.</li>
- <li><strong>Legal issues</strong>: The crack may violate UnlockGo's terms and conditions and infringe its intellectual property rights. You may face legal consequences if you are caught using a crack.</li>
- </ul>
- <h2>How to download UnlockGo from the official website?</h2>
- <p>The best way to download UnlockGo is from its official website: <a href="( 1 )">https://itoolab.com/unlock-iphone/</a>. This way, you can be sure you are getting the original, latest version of the software, which is safe and reliable. You can also enjoy the following benefits:</p>
- <p></p>
- <ul>
- <li><strong>Free trial</strong>: You can try UnlockGo for free before buying it. You can use it to scan your device and see whether it can unlock it.</li>
- <li><strong>Affordable price</strong>: You can buy UnlockGo for a reasonable price that is much cheaper than buying a new device or paying for a repair service. - <strong>Money-back guarantee</strong>: You can get a full refund within 30 days if you are not satisfied with UnlockGo or if it fails to unlock your device.</li>
- <li><strong>Lifetime updates</strong>: You can get free, unlimited updates for UnlockGo as long as you have a valid license.</li>
- <li><strong>24/7 support</strong>: You can contact UnlockGo customer support at any time by email or live chat. They will help you with any issue or question you may have.</li>
- </ul>
- <p>To download UnlockGo from the official website, follow these simple steps:</p>
- <ol>
- <li>Visit the website <a href=">https://itoolab.com/unlock-iphone/</a> and click the "Download" button for your operating system (Windows or Mac).</li>
- <li>Run the installer and follow the instructions to install UnlockGo on your computer.</li>
-
- <li>Choose the mode that fits your situation (Unlock Screen Passcode, Unlock Apple ID, Bypass MDM, or Bypass Screen Time).</li>
- <li>Click the "Start" button and follow the on-screen steps to unlock your device.</li>
- </ol>
- <h2>How to use UnlockGo to remove various locks on iOS and Android devices?</h2>
- <p>UnlockGo is a versatile tool that can remove different types of locks on iOS and Android devices. Here are some of UnlockGo's features and functions:</p>
- <table>
- <tr>
- <th>Feature</th>
- <th>Function</th>
- </tr>
- <tr>
- <td>Unlock Screen Passcode</td>
- <td>This feature can remove any screen lock on your iPhone or iPad, such as a passcode, PIN, pattern, Touch ID, or Face ID. It can also remove the iCloud Activation Lock or Google FRP lock that prevents you from setting up your device after a factory reset. This feature works for all iOS and Android devices and versions.</td>
- </tr>
- <tr>
- <td>Unlock Apple ID</td>
- <td>This feature can remove the Apple ID and iCloud account from your iPhone or iPad without a password. It can also disable Find My iPhone and erase all data associated with the Apple ID. This feature works for iOS devices running iOS 11.4 or earlier, or that have been jailbroken.</td>
- </tr>
- <tr>
- <td>Bypass MDM</td>
- <td>This feature can bypass the Mobile Device Management (MDM) lock that an organization or school uses to restrict your iPhone or iPad. It can also remove the MDM profile and settings from your device. This feature works for all iOS devices and versions.</td>
- </tr>
- <tr>
- <td>Bypass Screen Time</td>
- <td>This feature can bypass the Screen Time passcode that limits the use of apps and features on your iPhone or iPad. It can also remove the Screen Time settings and data from your device. This feature works for all iOS devices and versions.</td>
- </tr>
- </table>
-
- <h2>How to avoid the risks of using a crack and get the best value from UnlockGo?</h2>
- <p>As we have seen, using a crack is not worth it, as it exposes you to many risks and drawbacks. Instead, you should use the official version of UnlockGo, which is safe, reliable, and effective. Here are some tips on how to get the most out of UnlockGo:</p>
- <ul>
- <li><strong>Check compatibility</strong>: Before using UnlockGo, make sure your device model and system version are supported by the software. You can check the compatibility list on the official website or contact customer support if you are not sure.</li>
- <li><strong>Back up your data</strong>: Although UnlockGo does not cause data loss in most cases, it is still recommended that you back up your data before using it. You can use iTunes, iCloud, Google Drive, or any other backup tool to save your data.</li>
- <li><strong>Follow the instructions carefully</strong>: When using UnlockGo, make sure to follow the on-screen instructions carefully. Do not disconnect your device or close the software during the unlocking process. If you run into any errors or problems, do not panic, and contact customer support for help.</li>
- <li><strong>Use a coupon code</strong>: If you want to save some money when buying UnlockGo, you can use a coupon code that gives you a discount. You can find coupon codes on various websites or social media platforms that promote UnlockGo. You can also - <strong>Subscribe to the newsletter</strong>: If you want the latest news and updates about UnlockGo, you can subscribe to the newsletter on the official website. You will also receive exclusive offers and discounts from time to time.</li>
- </ul>
- <h2>Conclusion</h2>
-
- <p>However, you should avoid using a crack, as it is risky, illegal, and ineffective. It can damage your device, compromise your privacy, and cause errors and problems. You should always download UnlockGo from the official website and use it according to the instructions.</p>
- <p>If you want to try UnlockGo for yourself, you can download it from here: <a href="">https://itoolab.com/unlock-iphone/</a>. You will be surprised how quickly and easily it can unlock your device.</p>
- <h2>Frequently Asked Questions</h2>
- <h3>Is UnlockGo safe and legitimate?</h3>
- <p>Yes, UnlockGo is safe and legitimate. It does not contain any viruses or malware that could damage your device or data. It also does not access or share your personal information without your permission. It is trusted software that millions of users around the world have relied on.</p>
- <h3>Is UnlockGo compatible with all iOS and Android devices and versions?</h3>
- <p>Yes, UnlockGo is compatible with all iOS and Android devices and versions. It can unlock iPhone, iPad, iPod touch, Samsung, Huawei, LG, Motorola, Sony, HTC, and other devices. It can also unlock iOS 14, iOS 13, iOS 12, iOS 11, Android 11, Android 10, Android 9, and other versions.</p>
- <h3>How long does it take to unlock a device with UnlockGo?</h3>
- <p>The unlocking time depends on the type of lock and the device model. Generally, it takes only a few minutes to unlock a device with UnlockGo. However, some locks may require more time or steps to unlock. For example, removing the iCloud Activation Lock or Google FRP lock may require downloading firmware or entering recovery mode.</p>
- <h3>What if I run into problems while using UnlockGo?</h3>
- <p>If you run into any problems while using UnlockGo, you can contact UnlockGo customer support by email or live chat. They will help you resolve the issues as soon as possible. You can also check the user guide or the FAQ section on the official website for more information.</p>
- <h3>How can I contact UnlockGo customer support?</h3> 64aa2da5cf<br />
- <br />
- <br />
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/packaging/markers.py DELETED
@@ -1,304 +0,0 @@
- # This file is dual licensed under the terms of the Apache License, Version
- # 2.0, and the BSD License. See the LICENSE file in the root of this repository
- # for complete details.
-
- import operator
- import os
- import platform
- import sys
- from typing import Any, Callable, Dict, List, Optional, Tuple, Union
-
- from pip._vendor.pyparsing import (  # noqa: N817
-     Forward,
-     Group,
-     Literal as L,
-     ParseException,
-     ParseResults,
-     QuotedString,
-     ZeroOrMore,
-     stringEnd,
-     stringStart,
- )
-
- from .specifiers import InvalidSpecifier, Specifier
-
- __all__ = [
-     "InvalidMarker",
-     "UndefinedComparison",
-     "UndefinedEnvironmentName",
-     "Marker",
-     "default_environment",
- ]
-
- Operator = Callable[[str, str], bool]
-
-
- class InvalidMarker(ValueError):
-     """
-     An invalid marker was found, users should refer to PEP 508.
-     """
-
-
- class UndefinedComparison(ValueError):
-     """
-     An invalid operation was attempted on a value that doesn't support it.
-     """
-
-
- class UndefinedEnvironmentName(ValueError):
-     """
-     A name was attempted to be used that does not exist inside of the
-     environment.
-     """
-
-
- class Node:
-     def __init__(self, value: Any) -> None:
-         self.value = value
-
-     def __str__(self) -> str:
-         return str(self.value)
-
-     def __repr__(self) -> str:
-         return f"<{self.__class__.__name__}('{self}')>"
-
-     def serialize(self) -> str:
-         raise NotImplementedError
-
-
- class Variable(Node):
-     def serialize(self) -> str:
-         return str(self)
-
-
- class Value(Node):
-     def serialize(self) -> str:
-         return f'"{self}"'
-
-
- class Op(Node):
-     def serialize(self) -> str:
-         return str(self)
-
-
- VARIABLE = (
-     L("implementation_version")
-     | L("platform_python_implementation")
-     | L("implementation_name")
-     | L("python_full_version")
-     | L("platform_release")
-     | L("platform_version")
-     | L("platform_machine")
-     | L("platform_system")
-     | L("python_version")
-     | L("sys_platform")
-     | L("os_name")
-     | L("os.name")  # PEP-345
-     | L("sys.platform")  # PEP-345
-     | L("platform.version")  # PEP-345
-     | L("platform.machine")  # PEP-345
-     | L("platform.python_implementation")  # PEP-345
-     | L("python_implementation")  # undocumented setuptools legacy
-     | L("extra")  # PEP-508
- )
- ALIASES = {
-     "os.name": "os_name",
-     "sys.platform": "sys_platform",
-     "platform.version": "platform_version",
-     "platform.machine": "platform_machine",
-     "platform.python_implementation": "platform_python_implementation",
-     "python_implementation": "platform_python_implementation",
- }
- VARIABLE.setParseAction(lambda s, l, t: Variable(ALIASES.get(t[0], t[0])))
-
- VERSION_CMP = (
-     L("===") | L("==") | L(">=") | L("<=") | L("!=") | L("~=") | L(">") | L("<")
- )
-
- MARKER_OP = VERSION_CMP | L("not in") | L("in")
- MARKER_OP.setParseAction(lambda s, l, t: Op(t[0]))
-
- MARKER_VALUE = QuotedString("'") | QuotedString('"')
- MARKER_VALUE.setParseAction(lambda s, l, t: Value(t[0]))
-
- BOOLOP = L("and") | L("or")
-
- MARKER_VAR = VARIABLE | MARKER_VALUE
-
- MARKER_ITEM = Group(MARKER_VAR + MARKER_OP + MARKER_VAR)
- MARKER_ITEM.setParseAction(lambda s, l, t: tuple(t[0]))
-
- LPAREN = L("(").suppress()
- RPAREN = L(")").suppress()
-
- MARKER_EXPR = Forward()
- MARKER_ATOM = MARKER_ITEM | Group(LPAREN + MARKER_EXPR + RPAREN)
- MARKER_EXPR << MARKER_ATOM + ZeroOrMore(BOOLOP + MARKER_EXPR)
-
- MARKER = stringStart + MARKER_EXPR + stringEnd
-
-
- def _coerce_parse_result(results: Union[ParseResults, List[Any]]) -> List[Any]:
-     if isinstance(results, ParseResults):
-         return [_coerce_parse_result(i) for i in results]
-     else:
-         return results
-
-
- def _format_marker(
-     marker: Union[List[str], Tuple[Node, ...], str], first: Optional[bool] = True
- ) -> str:
-
-     assert isinstance(marker, (list, tuple, str))
-
-     # Sometimes we have a structure like [[...]] which is a single item list
-     # where the single item is itself it's own list. In that case we want skip
-     # the rest of this function so that we don't get extraneous () on the
-     # outside.
-     if (
-         isinstance(marker, list)
-         and len(marker) == 1
-         and isinstance(marker[0], (list, tuple))
-     ):
-         return _format_marker(marker[0])
-
-     if isinstance(marker, list):
-         inner = (_format_marker(m, first=False) for m in marker)
-         if first:
-             return " ".join(inner)
-         else:
-             return "(" + " ".join(inner) + ")"
-     elif isinstance(marker, tuple):
-         return " ".join([m.serialize() for m in marker])
-     else:
-         return marker
-
-
- _operators: Dict[str, Operator] = {
-     "in": lambda lhs, rhs: lhs in rhs,
-     "not in": lambda lhs, rhs: lhs not in rhs,
-     "<": operator.lt,
-     "<=": operator.le,
-     "==": operator.eq,
-     "!=": operator.ne,
-     ">=": operator.ge,
-     ">": operator.gt,
- }
-
-
- def _eval_op(lhs: str, op: Op, rhs: str) -> bool:
-     try:
-         spec = Specifier("".join([op.serialize(), rhs]))
-     except InvalidSpecifier:
-         pass
-     else:
-         return spec.contains(lhs)
-
-     oper: Optional[Operator] = _operators.get(op.serialize())
-     if oper is None:
-         raise UndefinedComparison(f"Undefined {op!r} on {lhs!r} and {rhs!r}.")
-
-     return oper(lhs, rhs)
-
-
- class Undefined:
-     pass
-
-
- _undefined = Undefined()
-
-
- def _get_env(environment: Dict[str, str], name: str) -> str:
-     value: Union[str, Undefined] = environment.get(name, _undefined)
-
-     if isinstance(value, Undefined):
-         raise UndefinedEnvironmentName(
-             f"{name!r} does not exist in evaluation environment."
-         )
-
-     return value
-
-
- def _evaluate_markers(markers: List[Any], environment: Dict[str, str]) -> bool:
-     groups: List[List[bool]] = [[]]
-
-     for marker in markers:
-         assert isinstance(marker, (list, tuple, str))
-
-         if isinstance(marker, list):
-             groups[-1].append(_evaluate_markers(marker, environment))
-         elif isinstance(marker, tuple):
-             lhs, op, rhs = marker
-
-             if isinstance(lhs, Variable):
-                 lhs_value = _get_env(environment, lhs.value)
-                 rhs_value = rhs.value
-             else:
-                 lhs_value = lhs.value
-                 rhs_value = _get_env(environment, rhs.value)
-
-             groups[-1].append(_eval_op(lhs_value, op, rhs_value))
-         else:
-             assert marker in ["and", "or"]
-             if marker == "or":
-                 groups.append([])
-
-     return any(all(item) for item in groups)
-
-
- def format_full_version(info: "sys._version_info") -> str:
-     version = "{0.major}.{0.minor}.{0.micro}".format(info)
-     kind = info.releaselevel
-     if kind != "final":
-         version += kind[0] + str(info.serial)
-     return version
-
-
- def default_environment() -> Dict[str, str]:
-     iver = format_full_version(sys.implementation.version)
-     implementation_name = sys.implementation.name
-     return {
-         "implementation_name": implementation_name,
-         "implementation_version": iver,
-         "os_name": os.name,
-         "platform_machine": platform.machine(),
-         "platform_release": platform.release(),
-         "platform_system": platform.system(),
-         "platform_version": platform.version(),
-         "python_full_version": platform.python_version(),
-         "platform_python_implementation": platform.python_implementation(),
-         "python_version": ".".join(platform.python_version_tuple()[:2]),
-         "sys_platform": sys.platform,
-     }
-
-
- class Marker:
-     def __init__(self, marker: str) -> None:
-         try:
-             self._markers = _coerce_parse_result(MARKER.parseString(marker))
-         except ParseException as e:
-             raise InvalidMarker(
-                 f"Invalid marker: {marker!r}, parse error at "
-                 f"{marker[e.loc : e.loc + 8]!r}"
-             )
-
-     def __str__(self) -> str:
-         return _format_marker(self._markers)
-
-     def __repr__(self) -> str:
-         return f"<Marker('{self}')>"
-
-     def evaluate(self, environment: Optional[Dict[str, str]] = None) -> bool:
-         """Evaluate a marker.
-
-         Return the boolean from evaluating the given marker against the
-         environment. environment is an optional argument to override all or
-         part of the determined environment.
-
-         The environment is determined from the current Python process.
-         """
-         current_environment = default_environment()
-         if environment is not None:
-             current_environment.update(environment)
-
-         return _evaluate_markers(self._markers, current_environment)
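The vendored module above mirrors the public `packaging` API, so its behavior can be exercised with the standalone distribution:

```python
from packaging.markers import Marker

m = Marker('python_version >= "3.8" and sys_platform != "win32"')
print(m.evaluate())                           # against the running interpreter
print(m.evaluate({"python_version": "3.7"}))  # override part of the environment
```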
spaces/BridgeTower/bridgetower-video-search/app.py DELETED
@@ -1,341 +0,0 @@
- import os
- import cv2
- import gradio as gr
- import numpy as np
- import json
- import pickle
- from PIL import Image
- import torch
- from torch.nn.utils.rnn import pad_sequence
- from transformers import BridgeTowerProcessor
- from tqdm import tqdm
-
- from bridgetower_custom import BridgeTowerTextFeatureExtractor, BridgeTowerForITC
-
- import faiss
- import webvtt
-
- from pytube import YouTube
- from youtube_transcript_api import YouTubeTranscriptApi
- from youtube_transcript_api.formatters import WebVTTFormatter
-
- if torch.cuda.is_available():
-     device = 'cuda'
- else:
-     device = 'cpu'
- model_name = 'BridgeTower/bridgetower-large-itm-mlm-itc'
- model = BridgeTowerForITC.from_pretrained(model_name).to(device)
- text_model = BridgeTowerTextFeatureExtractor.from_pretrained(model_name).to(device)
-
- processor = BridgeTowerProcessor.from_pretrained(model_name)
-
-
- def download_video(video_url, path='/tmp/'):
-
-     yt = YouTube(video_url)
-     yt = yt.streams.filter(progressive=True, file_extension='mp4').order_by('resolution').desc().first()
-     if not os.path.exists(path):
-         os.makedirs(path)
-     filepath = os.path.join(path, yt.default_filename)
-     if not os.path.exists(filepath):
-         print('Downloading video from YouTube...')
-         yt.download(path)
-     return filepath
-
-
- # Get transcript in webvtt
- def get_transcript_vtt(video_id, path='/tmp'):
-     filepath = os.path.join(path, 'test_vm.vtt')
-     if os.path.exists(filepath):
-         return filepath
-
-     transcript = YouTubeTranscriptApi.get_transcript(video_id)
-     formatter = WebVTTFormatter()
-     webvtt_formatted = formatter.format_transcript(transcript)
-
-     with open(filepath, 'w', encoding='utf-8') as webvtt_file:
-         webvtt_file.write(webvtt_formatted)
-     webvtt_file.close()
-
-     return filepath
-
-
- # https://stackoverflow.com/a/57781047
- # Resizes a image and maintains aspect ratio
- def maintain_aspect_ratio_resize(image, width=None, height=None, inter=cv2.INTER_AREA):
-     # Grab the image size and initialize dimensions
-     dim = None
-     (h, w) = image.shape[:2]
-
-     # Return original image if no need to resize
-     if width is None and height is None:
-         return image
-
-     # We are resizing height if width is none
-     if width is None:
-         # Calculate the ratio of the height and construct the dimensions
-         r = height / float(h)
-         dim = (int(w * r), height)
-     # We are resizing width if height is none
-     else:
-         # Calculate the ratio of the width and construct the dimensions
-         r = width / float(w)
-         dim = (width, int(h * r))
-
-     # Return the resized image
-     return cv2.resize(image, dim, interpolation=inter)
-
-
- def time_to_frame(time, fps):
-     '''
-     convert time in seconds into frame number
-     '''
-     return int(time * fps - 1)
-
-
- def str2time(strtime):
-     strtime = strtime.strip('"')
-     hrs, mins, seconds = [float(c) for c in strtime.split(':')]
-
-     total_seconds = hrs * 60**2 + mins * 60 + seconds
-
-     return total_seconds
-
-
- def collate_fn(batch_list):
-     batch = {}
-     batch['input_ids'] = pad_sequence([encoding['input_ids'].squeeze(0) for encoding in batch_list], batch_first=True)
-     batch['attention_mask'] = pad_sequence([encoding['attention_mask'].squeeze(0) for encoding in batch_list], batch_first=True)
-     batch['pixel_values'] = torch.cat([encoding['pixel_values'] for encoding in batch_list], dim=0)
-     batch['pixel_mask'] = torch.cat([encoding['pixel_mask'] for encoding in batch_list], dim=0)
-     return batch
-
-
- def extract_images_and_embeds(video_id, video_path, subtitles, output, expanded=False, batch_size=2, progress=gr.Progress()):
-     if os.path.exists(os.path.join(output, 'embeddings.pkl')):
-         return
-
-     os.makedirs(output, exist_ok=True)
-     os.makedirs(os.path.join(output, 'frames'), exist_ok=True)
-     os.makedirs(os.path.join(output, 'frames_thumb'), exist_ok=True)
-
-     count = 0
-
-     vidcap = cv2.VideoCapture(video_path)
-
-     # Get the frames per second
-     fps = vidcap.get(cv2.CAP_PROP_FPS)
-
-     # Get the total numer of frames in the video.
-     frame_count = vidcap.get(cv2.CAP_PROP_FRAME_COUNT)
-
-     # print(fps, frame_count)
-
-     frame_number = 0
-
-     count = 0
-     anno = []
-
-     embeddings = []
-     batch_list = []
-     vtt = webvtt.read(subtitles)
-
-     for idx, caption in enumerate(tqdm(vtt, total=vtt.total_length, desc="Generating embeddings")):
-         st_time = str2time(caption.start)
-         ed_time = str2time(caption.end)
-
-         mid_time = (ed_time + st_time) / 2
-         text = caption.text.replace('\n', ' ')
-
-         if expanded:
-             raise NotImplementedError
-
-         frame_no = time_to_frame(mid_time, fps)
-         mid_time_ms = mid_time * 1000
-         # vidcap.set(1, frame_no)  # added this line
-         vidcap.set(cv2.CAP_PROP_POS_MSEC, mid_time_ms)
-         print('Read a new frame: ', idx, mid_time, frame_no, text)
-         success, frame = vidcap.read()
-         if success:
-             frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
-             frame = Image.fromarray(frame)
-             img_fname = f'{video_id}_{idx:06d}'
-             img_fpath = os.path.join(output, 'frames', img_fname + '.jpg')
-             # image = maintain_aspect_ratio_resize(image, height=350)  # save frame as JPEG file
-             # cv2.imwrite(img_fpath, image)  # save frame as JPEG file
-
-             count += 1
-             anno.append({
-                 'image_id': idx,
-                 'img_fname': img_fname,
-                 'caption': text,
-                 'time': mid_time_ms,
-                 'frame_no': frame_no
-             })
-
-             encoding = processor(frame, text, return_tensors="pt").to(device)
-             encoding['text'] = text
-             encoding['image_filepath'] = img_fpath
-             encoding['start_time'] = caption.start
-             encoding['time'] = mid_time_ms
-
-             batch_list.append(encoding)
-
-         else:
-             break
-
-         if len(batch_list) == batch_size:
-             batch = collate_fn(batch_list)
-             with torch.no_grad():
-                 outputs = model(**batch, output_hidden_states=True)
-
-             for i in range(batch_size):
-                 embeddings.append({
-                     'embeddings': outputs.logits[i, 2, :].detach().cpu().numpy(),
-                     'text': batch_list[i]['text'],
-                     'image_filepath': batch_list[i]['image_filepath'],
-                     'start_time': batch_list[i]['start_time'],
-                     'time': batch_list[i]['time'],
-                 })
-             batch_list = []
-
-     if batch_list:
-         batch = collate_fn(batch_list)
-         with torch.no_grad():
-             outputs = model(**batch, output_hidden_states=True)
-
-         for i in range(len(batch_list)):
-             embeddings.append({
-                 'embeddings': outputs.logits[i, 2, :].detach().cpu().numpy(),
-                 'text': batch_list[i]['text'],
-                 'image_filepath': batch_list[i]['image_filepath'],
-                 'start_time': batch_list[i]['start_time'],
-                 'time': batch_list[i]['time'],
-             })
-
-         batch_list = []
-
-     with open(os.path.join(output, 'annotations.json'), 'w') as fh:
-         json.dump(anno, fh)
-
-     with open(os.path.join(output, 'embeddings.pkl'), 'wb') as fh:
-         pickle.dump(embeddings, fh)
-
-
- def run_query(video_path, text_query, path='/tmp'):
-
-     vidcap = cv2.VideoCapture(video_path)
-
-     embeddings_filepath = os.path.join(path, 'embeddings.pkl')
-     faiss_filepath = os.path.join(path, 'faiss_index.pkl')
-
-     embeddings = pickle.load(open(embeddings_filepath, 'rb'))
-
-     if os.path.exists(faiss_filepath):
-         faiss_index = pickle.load(open(faiss_filepath, 'rb'))
-     else:
-         embs = [emb['embeddings'] for emb in embeddings]
-         vectors = np.stack(embs, axis=0)
-         num_vectors, vector_dim = vectors.shape
-         faiss_index = faiss.IndexFlatIP(vector_dim)
-         faiss_index.add(vectors)
-         pickle.dump(faiss_index, open(faiss_filepath, 'wb'))
-
238
- print('Processing query')
239
- encoding = processor.tokenizer(text_query, return_tensors="pt").to(device)
240
- with torch.no_grad():
241
- outputs = text_model(**encoding)
242
- emb_query = outputs.cpu().numpy()
243
- print('Running FAISS search')
244
- _, I = faiss_index.search(emb_query, 6)
245
-
246
- clip_images = []
247
- transcripts = []
248
- for idx in I[0]:
249
- # frame_no = embeddings[idx]['frame_no']
250
- # vidcap.set(1, frame_no) # added this line
251
- frame_timestamp = embeddings[idx]['time']
252
- vidcap.set(cv2.CAP_PROP_POS_MSEC, frame_timestamp)
253
-
254
- success, frame = vidcap.read()
255
- if success:
256
- frame = maintain_aspect_ratio_resize(frame, height=400)
257
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
258
- frame = Image.fromarray(frame)
259
- clip_images.append(frame)
260
- transcripts.append(f"({embeddings[idx]['start_time']}) {embeddings[idx]['text']}")
261
-
262
- return clip_images, transcripts
263
-
264
-
265
- #https://stackoverflow.com/a/7936523
266
- def get_video_id_from_url(video_url):
267
- """
268
- Examples:
269
- - http://youtu.be/SA2iWivDJiE
270
- - http://www.youtube.com/watch?v=_oPAwA_Udwc&feature=feedu
271
- - http://www.youtube.com/embed/SA2iWivDJiE
272
- - http://www.youtube.com/v/SA2iWivDJiE?version=3&amp;hl=en_US
273
- """
274
- import urllib.parse
275
- url = urllib.parse.urlparse(video_url)
276
- if url.hostname == 'youtu.be':
277
- return url.path[1:]
278
- if url.hostname in ('www.youtube.com', 'youtube.com'):
279
- if url.path == '/watch':
280
- p = urllib.parse.parse_qs(url.query)
281
- return p['v'][0]
282
- if url.path[:7] == '/embed/':
283
- return url.path.split('/')[2]
284
- if url.path[:3] == '/v/':
285
- return url.path.split('/')[2]
286
-
287
- return None
288
-
289
-
290
- def process(video_url, text_query, progress=gr.Progress(track_tqdm=True)):
291
- tmp_dir = os.environ.get('TMPDIR', '/tmp')
292
- video_id = get_video_id_from_url(video_url)
293
- output_dir = os.path.join(tmp_dir, video_id)
294
- video_file = download_video(video_url, path=output_dir)
295
- subtitles = get_transcript_vtt(video_id, path=output_dir)
296
- extract_images_and_embeds(video_id=video_id,
297
- video_path=video_file,
298
- subtitles=subtitles,
299
- output=output_dir,
300
- expanded=False,
301
- batch_size=8,
302
- progress=progress,
303
- )
304
- frame_paths, transcripts = run_query(video_file, text_query, path=output_dir)
305
- return video_file, [(image, caption) for image, caption in zip(frame_paths, transcripts)]
306
-
307
-
308
- description = "This Space lets you run semantic search on a video."
309
-
310
- with gr.Blocks() as demo:
311
- gr.Markdown(description)
312
- with gr.Row():
313
- with gr.Column():
314
- video_url = gr.Text(label="Youtube url")
315
- text_query = gr.Text(label="Text query")
316
- btn = gr.Button("Run query")
317
- video_player = gr.Video(label="Video")
318
-
319
- with gr.Row():
320
- gallery = gr.Gallery(label="Images").style(grid=6)
321
-
322
- gr.Examples(
323
- examples=[
324
- ['https://www.youtube.com/watch?v=CvjoXdC-WkM','wedding'],
325
- ['https://www.youtube.com/watch?v=fWs2dWcNGu0', 'cheesecake'],
326
- ['https://www.youtube.com/watch?v=rmPpNsx4yAk', 'bunny'],
327
- ['https://www.youtube.com/watch?v=KCFYf4TJdN0' ,'sandwich'],
328
- ],
329
- inputs=[video_url, text_query],
330
- )
331
-
332
- btn.click(fn=process,
333
- inputs=[video_url, text_query],
334
- outputs=[video_player, gallery],
335
- )
336
-
337
- try:
338
- demo.queue(concurrency_count=3)
339
- demo.launch(share=True)
340
- except:
341
- demo.launch()
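Note on collate_fn above: it leans entirely on torch.nn.utils.rnn.pad_sequence. A minimal sketch of that padding step, assuming only that torch is installed (the token ids below are made-up stand-ins for tokenizer output):

import torch
from torch.nn.utils.rnn import pad_sequence

a = torch.tensor([101, 7592, 102])         # 3 tokens
b = torch.tensor([101, 7592, 2088, 102])   # 4 tokens
batch = pad_sequence([a, b], batch_first=True)
print(batch.shape)  # torch.Size([2, 4]); the shorter row is right-padded with zeros

This is also why each (1, seq_len) encoding is squeeze(0)-ed before padding: pad_sequence expects a list of 1-D tensors.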
 
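The frame grabs in extract_images_and_embeds and run_query both seek by timestamp rather than by frame index. A sketch of that pattern, assuming opencv-python and a hypothetical local file clip.mp4:

import cv2

vidcap = cv2.VideoCapture('clip.mp4')     # 'clip.mp4' is a placeholder path
vidcap.set(cv2.CAP_PROP_POS_MSEC, 12500)  # jump to t = 12.5 s
ok, frame = vidcap.read()                 # decode the frame at that position
if ok:
    print(frame.shape)                    # (height, width, 3), in BGR order

Seeking by CAP_PROP_POS_MSEC sidesteps the time-to-frame-index conversion that time_to_frame would otherwise have to get exactly right.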
 
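The retrieval core of run_query is plain FAISS inner-product search. A self-contained sketch, assuming faiss-cpu and numpy are installed; the 512-dim random vectors are stand-ins for the BridgeTower embeddings:

import faiss
import numpy as np

dim = 512
corpus = np.random.rand(100, dim).astype('float32')  # one vector per video frame
index = faiss.IndexFlatIP(dim)                       # exact inner-product index
index.add(corpus)

query = np.random.rand(1, dim).astype('float32')     # embedded text query
scores, ids = index.search(query, 6)                 # top-6 hits, as in run_query
print(ids[0], scores[0])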
 
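get_video_id_from_url reduces to urllib.parse; a quick check against one of the example URLs from its docstring:

from urllib.parse import urlparse, parse_qs

url = urlparse('http://www.youtube.com/watch?v=_oPAwA_Udwc&feature=feedu')
print(url.hostname, url.path)       # www.youtube.com /watch
print(parse_qs(url.query)['v'][0])  # _oPAwA_Udwc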
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_anchor_generator.py DELETED
@@ -1,122 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
- import logging
- import unittest
- import torch
-
- from detectron2.config import get_cfg
- from detectron2.layers import ShapeSpec
- from detectron2.modeling.anchor_generator import DefaultAnchorGenerator, RotatedAnchorGenerator
-
- logger = logging.getLogger(__name__)
-
-
- class TestAnchorGenerator(unittest.TestCase):
-     def test_default_anchor_generator(self):
-         cfg = get_cfg()
-         cfg.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64]]
-         cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.25, 1, 4]]
-
-         anchor_generator = DefaultAnchorGenerator(cfg, [ShapeSpec(stride=4)])
-
-         # only the last two dimensions of features matter here
-         num_images = 2
-         features = {"stage3": torch.rand(num_images, 96, 1, 2)}
-         anchors = anchor_generator([features["stage3"]])
-         expected_anchor_tensor = torch.tensor(
-             [
-                 [-32.0, -8.0, 32.0, 8.0],
-                 [-16.0, -16.0, 16.0, 16.0],
-                 [-8.0, -32.0, 8.0, 32.0],
-                 [-64.0, -16.0, 64.0, 16.0],
-                 [-32.0, -32.0, 32.0, 32.0],
-                 [-16.0, -64.0, 16.0, 64.0],
-                 [-28.0, -8.0, 36.0, 8.0],  # -28.0 == -32.0 + STRIDE (4)
-                 [-12.0, -16.0, 20.0, 16.0],
-                 [-4.0, -32.0, 12.0, 32.0],
-                 [-60.0, -16.0, 68.0, 16.0],
-                 [-28.0, -32.0, 36.0, 32.0],
-                 [-12.0, -64.0, 20.0, 64.0],
-             ]
-         )
-
-         for i in range(num_images):
-             assert torch.allclose(anchors[i][0].tensor, expected_anchor_tensor)
-
-     def test_default_anchor_generator_centered(self):
-         cfg = get_cfg()
-         cfg.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64]]
-         cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.25, 1, 4]]
-         cfg.MODEL.ANCHOR_GENERATOR.OFFSET = 0.5
-
-         anchor_generator = DefaultAnchorGenerator(cfg, [ShapeSpec(stride=4)])
-
-         # only the last two dimensions of features matter here
-         num_images = 2
-         features = {"stage3": torch.rand(num_images, 96, 1, 2)}
-         anchors = anchor_generator([features["stage3"]])
-         expected_anchor_tensor = torch.tensor(
-             [
-                 [-30.0, -6.0, 34.0, 10.0],
-                 [-14.0, -14.0, 18.0, 18.0],
-                 [-6.0, -30.0, 10.0, 34.0],
-                 [-62.0, -14.0, 66.0, 18.0],
-                 [-30.0, -30.0, 34.0, 34.0],
-                 [-14.0, -62.0, 18.0, 66.0],
-                 [-26.0, -6.0, 38.0, 10.0],
-                 [-10.0, -14.0, 22.0, 18.0],
-                 [-2.0, -30.0, 14.0, 34.0],
-                 [-58.0, -14.0, 70.0, 18.0],
-                 [-26.0, -30.0, 38.0, 34.0],
-                 [-10.0, -62.0, 22.0, 66.0],
-             ]
-         )
-
-         for i in range(num_images):
-             assert torch.allclose(anchors[i][0].tensor, expected_anchor_tensor)
-
-     def test_rrpn_anchor_generator(self):
-         cfg = get_cfg()
-         cfg.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64]]
-         cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.25, 1, 4]]
-         cfg.MODEL.ANCHOR_GENERATOR.ANGLES = [[0, 45]]
-         anchor_generator = RotatedAnchorGenerator(cfg, [ShapeSpec(stride=4)])
-
-         # only the last two dimensions of features matter here
-         num_images = 2
-         features = {"stage3": torch.rand(num_images, 96, 1, 2)}
-         anchors = anchor_generator([features["stage3"]])
-         expected_anchor_tensor = torch.tensor(
-             [
-                 [0.0, 0.0, 64.0, 16.0, 0.0],
-                 [0.0, 0.0, 64.0, 16.0, 45.0],
-                 [0.0, 0.0, 32.0, 32.0, 0.0],
-                 [0.0, 0.0, 32.0, 32.0, 45.0],
-                 [0.0, 0.0, 16.0, 64.0, 0.0],
-                 [0.0, 0.0, 16.0, 64.0, 45.0],
-                 [0.0, 0.0, 128.0, 32.0, 0.0],
-                 [0.0, 0.0, 128.0, 32.0, 45.0],
-                 [0.0, 0.0, 64.0, 64.0, 0.0],
-                 [0.0, 0.0, 64.0, 64.0, 45.0],
-                 [0.0, 0.0, 32.0, 128.0, 0.0],
-                 [0.0, 0.0, 32.0, 128.0, 45.0],
-                 [4.0, 0.0, 64.0, 16.0, 0.0],  # 4.0 == 0.0 + STRIDE (4)
-                 [4.0, 0.0, 64.0, 16.0, 45.0],
-                 [4.0, 0.0, 32.0, 32.0, 0.0],
-                 [4.0, 0.0, 32.0, 32.0, 45.0],
-                 [4.0, 0.0, 16.0, 64.0, 0.0],
-                 [4.0, 0.0, 16.0, 64.0, 45.0],
-                 [4.0, 0.0, 128.0, 32.0, 0.0],
-                 [4.0, 0.0, 128.0, 32.0, 45.0],
-                 [4.0, 0.0, 64.0, 64.0, 0.0],
-                 [4.0, 0.0, 64.0, 64.0, 45.0],
-                 [4.0, 0.0, 32.0, 128.0, 0.0],
-                 [4.0, 0.0, 32.0, 128.0, 45.0],
-             ]
-         )
-
-         for i in range(num_images):
-             assert torch.allclose(anchors[i][0].tensor, expected_anchor_tensor)
-
-
- if __name__ == "__main__":
-     unittest.main()
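The expected tensors in these tests follow from one rule: an anchor of a given size keeps area size**2 while aspect_ratio == h / w. A sketch of that arithmetic in plain Python (anchor_box is a hypothetical helper, not detectron2 API), reproducing the first expected box above:

import math

def anchor_box(size, aspect_ratio, cx=0.0, cy=0.0):
    w = size / math.sqrt(aspect_ratio)   # area stays size**2
    h = size * math.sqrt(aspect_ratio)   # aspect_ratio == h / w
    return [cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2]

# size=32, aspect_ratio=0.25 -> w=64, h=16, centered at the origin (OFFSET=0)
assert anchor_box(32, 0.25) == [-32.0, -8.0, 32.0, 8.0]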
 
spaces/CVPR/LIVE/thrust/thrust/detail/event_error.h DELETED
@@ -1,166 +0,0 @@
- /*
-  * Copyright 2008-2018 NVIDIA Corporation
-  *
-  * Licensed under the Apache License, Version 2.0 (the "License");
-  * you may not use this file except in compliance with the License.
-  * You may obtain a copy of the License at
-  *
-  *     http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
- /// \file thrust/detail/event_error.h
- /// \brief \c thrust::event and \c thrust::future error handling types and codes.
-
- #pragma once
-
- #include <thrust/detail/config.h>
- #include <thrust/detail/cpp11_required.h>
- #include <thrust/detail/modern_gcc_required.h>
-
- #if THRUST_CPP_DIALECT >= 2011 && !defined(THRUST_LEGACY_GCC)
-
- #include <thrust/detail/type_traits.h>
- #include <thrust/system/error_code.h>
-
- #include <stdexcept>
-
- namespace thrust
- {
-
- enum class event_errc
- {
-   unknown_event_error
- , no_state
- , no_content
- , last_event_error
- };
-
- /// \return <tt>error_code(static_cast<int>(e), event_category())</tt>
- inline error_code make_error_code(event_errc e);
-
- /// \return <tt>error_condition(static_cast<int>(e), event_category())</tt>.
- inline error_condition make_error_condition(event_errc e);
-
- struct event_error_category : error_category
- {
-   event_error_category() = default;
-
-   virtual char const* name() const
-   {
-     return "event";
-   }
-
-   virtual std::string message(int ev) const
-   {
-     switch (static_cast<event_errc>(ev))
-     {
-       case event_errc::no_state:
-       {
-         return "no_state: an operation that requires an event or future to have "
-                "a stream or content has been performed on an event or future "
-                "without either, e.g. a moved-from or default constructed event "
-                "or future (an event or future may have been consumed more than "
-                "once)";
-       }
-       case event_errc::no_content:
-       {
-         return "no_content: an operation that requires a future to have content "
-                "has been performed on a future without any, e.g. a moved-from, "
-                "default constructed, or `thrust::new_stream` constructed future "
-                "(a future may have been consumed more than once)";
-       }
-       default:
-       {
-         return "unknown_event_error: an unknown error with a future "
-                "object has occurred";
-       }
-     };
-   }
-
-   virtual error_condition default_error_condition(int ev) const
-   {
-     if (
-       event_errc::last_event_error
-       >
-       static_cast<event_errc>(ev)
-     )
-       return make_error_condition(static_cast<event_errc>(ev));
-
-     return system_category().default_error_condition(ev);
-   }
- };
-
- /// Obtains a reference to the static error category object for the errors
- /// related to futures and promises. The object is required to override the
- /// virtual function error_category::name() to return a pointer to the string
- /// "event". It is used to identify error codes provided in the
- /// exceptions of type event_error.
- inline error_category const& event_category()
- {
-   static const event_error_category result;
-   return result;
- }
-
- namespace system
- {
- /// Specialization of \p is_error_code_enum for \p event_errc.
- template<> struct is_error_code_enum<event_errc> : true_type {};
- } // end system
-
- /// \return <tt>error_code(static_cast<int>(e), event_category())</tt>
- inline error_code make_error_code(event_errc e)
- {
-   return error_code(static_cast<int>(e), event_category());
- }
-
- /// \return <tt>error_condition(static_cast<int>(e), event_category())</tt>.
- inline error_condition make_error_condition(event_errc e)
- {
-   return error_condition(static_cast<int>(e), event_category());
- }
-
- struct event_error : std::logic_error
- {
-   __host__
-   explicit event_error(error_code ec)
-     : std::logic_error(ec.message()), ec_(ec)
-   {}
-
-   __host__
-   explicit event_error(event_errc e)
-     : event_error(make_error_code(e))
-   {}
-
-   __host__
-   error_code const& code() const noexcept
-   {
-     return ec_;
-   }
-
-   __host__
-   virtual ~event_error() noexcept {}
-
- private:
-   error_code ec_;
- };
-
- inline bool operator==(event_error const& lhs, event_error const& rhs) noexcept
- {
-   return lhs.code() == rhs.code();
- }
-
- inline bool operator<(event_error const& lhs, event_error const& rhs) noexcept
- {
-   return lhs.code() < rhs.code();
- }
-
- } // end namespace thrust
-
- #endif
-
 
spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/scalar/binary_search.h DELETED
@@ -1,85 +0,0 @@
- /*
-  * Copyright 2008-2013 NVIDIA Corporation
-  *
-  * Licensed under the Apache License, Version 2.0 (the "License");
-  * you may not use this file except in compliance with the License.
-  * You may obtain a copy of the License at
-  *
-  *     http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
- #pragma once
-
- #include <thrust/detail/config.h>
- #include <thrust/pair.h>
-
- namespace thrust
- {
-
- namespace system
- {
-
- namespace detail
- {
-
- namespace generic
- {
-
- namespace scalar
- {
-
- template<typename RandomAccessIterator, typename Size, typename T, typename BinaryPredicate>
- __host__ __device__
- RandomAccessIterator lower_bound_n(RandomAccessIterator first,
-                                    Size n,
-                                    const T &val,
-                                    BinaryPredicate comp);
-
- template<typename RandomAccessIterator, typename T, typename BinaryPredicate>
- __host__ __device__
- RandomAccessIterator lower_bound(RandomAccessIterator first, RandomAccessIterator last,
-                                  const T &val,
-                                  BinaryPredicate comp);
-
- template<typename RandomAccessIterator, typename Size, typename T, typename BinaryPredicate>
- __host__ __device__
- RandomAccessIterator upper_bound_n(RandomAccessIterator first,
-                                    Size n,
-                                    const T &val,
-                                    BinaryPredicate comp);
-
- template<typename RandomAccessIterator, typename T, typename BinaryPredicate>
- __host__ __device__
- RandomAccessIterator upper_bound(RandomAccessIterator first, RandomAccessIterator last,
-                                  const T &val,
-                                  BinaryPredicate comp);
-
- template<typename RandomAccessIterator, typename T, typename BinaryPredicate>
- __host__ __device__
- pair<RandomAccessIterator,RandomAccessIterator>
- equal_range(RandomAccessIterator first, RandomAccessIterator last,
-             const T &val,
-             BinaryPredicate comp);
-
- template<typename RandomAccessIterator, typename T, typename Compare>
- __host__ __device__
- bool binary_search(RandomAccessIterator first, RandomAccessIterator last, const T &value, Compare comp);
-
- } // end scalar
-
- } // end generic
-
- } // end detail
-
- } // end system
-
- } // end thrust
-
- #include <thrust/system/detail/generic/scalar/binary_search.inl>
-
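For a quick mental model of these declarations, Python's bisect module implements the same contracts: lower_bound corresponds to bisect_left, upper_bound to bisect_right, and equal_range to the pair of both:

from bisect import bisect_left, bisect_right

xs = [1, 2, 4, 4, 4, 7]
lo, hi = bisect_left(xs, 4), bisect_right(xs, 4)
print(lo, hi)     # 2 5 -> the half-open range of elements equal to 4
print(xs[lo:hi])  # [4, 4, 4]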
 
spaces/Cpp4App/Cpp4App/CDM/detect_compo/lib_ip/ip_preprocessing.py DELETED
@@ -1,69 +0,0 @@
- import cv2
- import numpy as np
- from CDM.config.CONFIG_UIED import Config
- C = Config()
-
-
- def read_img(path, resize_height=None, kernel_size=None):
-
-     def resize_by_height(org):
-         w_h_ratio = org.shape[1] / org.shape[0]
-         resize_w = resize_height * w_h_ratio
-         re = cv2.resize(org, (int(resize_w), int(resize_height)))
-         return re
-
-     try:
-         img = cv2.imread(path)
-         if kernel_size is not None:
-             img = cv2.medianBlur(img, kernel_size)
-         if img is None:
-             print("*** Image does not exist ***")
-             return None, None
-         if resize_height is not None:
-             img = resize_by_height(img)
-         gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
-         return img, gray
-
-     except Exception as e:
-         print(e)
-         print("*** Img Reading Failed ***\n")
-         return None, None
-
-
- def gray_to_gradient(img):
-     if len(img.shape) == 3:
-         img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
-     img_f = np.copy(img)
-     img_f = img_f.astype("float")
-
-     kernel_h = np.array([[0, 0, 0], [0, -1., 1.], [0, 0, 0]])
-     kernel_v = np.array([[0, 0, 0], [0, -1., 0], [0, 1., 0]])
-     dst1 = abs(cv2.filter2D(img_f, -1, kernel_h))
-     dst2 = abs(cv2.filter2D(img_f, -1, kernel_v))
-     gradient = (dst1 + dst2).astype('uint8')
-     return gradient
-
-
- def reverse_binary(bin, show=False):
-     """
-     Reverse the input binary image
-     """
-     r, bin = cv2.threshold(bin, 1, 255, cv2.THRESH_BINARY_INV)
-     if show:
-         cv2.imshow('binary_rev', bin)
-         cv2.waitKey()
-     return bin
-
-
- def binarization(org, grad_min, show=False, write_path=None, wait_key=0):
-     grey = cv2.cvtColor(org, cv2.COLOR_BGR2GRAY)
-     grad = gray_to_gradient(grey)                                        # get RoI with high gradient
-     rec, binary = cv2.threshold(grad, grad_min, 255, cv2.THRESH_BINARY)  # enhance the RoI
-     morph = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, (3, 3))            # remove noises
-     if write_path is not None:
-         cv2.imwrite(write_path, morph)
-     if show:
-         cv2.imshow('binary', morph)
-         if wait_key is not None:
-             cv2.waitKey(wait_key)
-     return morph
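The kernels in gray_to_gradient are one-pixel difference filters: each responds with the absolute difference between a pixel and its right (or bottom) neighbour. A small check, assuming opencv-python and numpy are installed; the 4x4 test image is arbitrary:

import cv2
import numpy as np

img = np.zeros((4, 4), dtype=float)
img[:, 2:] = 255.0  # vertical edge between columns 1 and 2

kernel_h = np.array([[0, 0, 0], [0, -1., 1.], [0, 0, 0]])
dst = np.abs(cv2.filter2D(img, -1, kernel_h))
print(dst)  # 255 along the edge column, 0 in the flat regions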
 
spaces/DEEMOSTECH/ChatAvatar/static/js/main.84e5ce89.js DELETED
The diff for this file is too large to render. See raw diff
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/filelock/_soft.py DELETED
@@ -1,46 +0,0 @@
- from __future__ import annotations
-
- import os
- import sys
- from contextlib import suppress
- from errno import EACCES, EEXIST
- from pathlib import Path
-
- from ._api import BaseFileLock
- from ._util import raise_on_not_writable_file
-
-
- class SoftFileLock(BaseFileLock):
-     """Simply watches the existence of the lock file."""
-
-     def _acquire(self) -> None:
-         raise_on_not_writable_file(self.lock_file)
-         # first check for exists and read-only mode as the open will mask this case as EEXIST
-         flags = (
-             os.O_WRONLY  # open for writing only
-             | os.O_CREAT
-             | os.O_EXCL  # together with above raise EEXIST if the file specified by filename exists
-             | os.O_TRUNC  # truncate the file to zero byte
-         )
-         try:
-             file_handler = os.open(self.lock_file, flags, self._context.mode)
-         except OSError as exception:  # re-raise unless expected exception
-             if not (
-                 exception.errno == EEXIST  # lock already exists
-                 or (exception.errno == EACCES and sys.platform == "win32")  # has no access to this lock
-             ):  # pragma: win32 no cover
-                 raise
-         else:
-             self._context.lock_file_fd = file_handler
-
-     def _release(self) -> None:
-         assert self._context.lock_file_fd is not None  # noqa: S101
-         os.close(self._context.lock_file_fd)  # the lock file is definitely not None
-         self._context.lock_file_fd = None
-         with suppress(OSError):  # the file is already deleted and that's what we want
-             Path(self.lock_file).unlink()
-
-
- __all__ = [
-     "SoftFileLock",
- ]
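Typical usage of the class above goes through the package's public API. A minimal sketch, assuming the filelock package is installed and using placeholder paths under /tmp:

from filelock import SoftFileLock

lock = SoftFileLock("/tmp/demo.txt.lock", timeout=10)  # wait up to 10 s for the lock
with lock:                                             # _acquire() creates the lock file
    with open("/tmp/demo.txt", "a") as fh:             # critical section
        fh.write("exclusive write\n")
# _release() removes the lock file again when the context exits

Because the soft lock only checks that the ".lock" file exists, it also works on filesystems without flock/msvcrt support, at the cost of being less robust against crashed processes that leave the file behind.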
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/colorLib/table_builder.py DELETED
@@ -1,223 +0,0 @@
- """
- colorLib.table_builder: Generic helper for filling in BaseTable derivatives from tuples and maps and such.
-
- """
-
- import collections
- import enum
- from fontTools.ttLib.tables.otBase import (
-     BaseTable,
-     FormatSwitchingBaseTable,
-     UInt8FormatSwitchingBaseTable,
- )
- from fontTools.ttLib.tables.otConverters import (
-     ComputedInt,
-     SimpleValue,
-     Struct,
-     Short,
-     UInt8,
-     UShort,
-     IntValue,
-     FloatValue,
-     OptionalValue,
- )
- from fontTools.misc.roundTools import otRound
-
-
- class BuildCallback(enum.Enum):
-     """Keyed on (BEFORE_BUILD, class[, Format if available]).
-     Receives (dest, source).
-     Should return (dest, source), which can be new objects.
-     """
-
-     BEFORE_BUILD = enum.auto()
-
-     """Keyed on (AFTER_BUILD, class[, Format if available]).
-     Receives (dest).
-     Should return dest, which can be a new object.
-     """
-     AFTER_BUILD = enum.auto()
-
-     """Keyed on (CREATE_DEFAULT, class[, Format if available]).
-     Receives no arguments.
-     Should return a new instance of class.
-     """
-     CREATE_DEFAULT = enum.auto()
-
-
- def _assignable(convertersByName):
-     return {k: v for k, v in convertersByName.items() if not isinstance(v, ComputedInt)}
-
-
- def _isNonStrSequence(value):
-     return isinstance(value, collections.abc.Sequence) and not isinstance(value, str)
-
-
- def _split_format(cls, source):
-     if _isNonStrSequence(source):
-         assert len(source) > 0, f"{cls} needs at least format from {source}"
-         fmt, remainder = source[0], source[1:]
-     elif isinstance(source, collections.abc.Mapping):
-         assert "Format" in source, f"{cls} needs at least Format from {source}"
-         remainder = source.copy()
-         fmt = remainder.pop("Format")
-     else:
-         raise ValueError(f"Not sure how to populate {cls} from {source}")
-
-     assert isinstance(
-         fmt, collections.abc.Hashable
-     ), f"{cls} Format is not hashable: {fmt!r}"
-     assert fmt in cls.convertersByName, f"{cls} invalid Format: {fmt!r}"
-
-     return fmt, remainder
-
-
- class TableBuilder:
-     """
-     Helps to populate things derived from BaseTable from maps, tuples, etc.
-
-     A table of lifecycle callbacks may be provided to add logic beyond what is possible
-     based on otData info for the target class. See BuildCallbacks.
-     """
-
-     def __init__(self, callbackTable=None):
-         if callbackTable is None:
-             callbackTable = {}
-         self._callbackTable = callbackTable
-
-     def _convert(self, dest, field, converter, value):
-         enumClass = getattr(converter, "enumClass", None)
-
-         if enumClass:
-             if isinstance(value, enumClass):
-                 pass
-             elif isinstance(value, str):
-                 try:
-                     value = getattr(enumClass, value.upper())
-                 except AttributeError:
-                     raise ValueError(f"{value} is not a valid {enumClass}")
-             else:
-                 value = enumClass(value)
-
-         elif isinstance(converter, IntValue):
-             value = otRound(value)
-         elif isinstance(converter, FloatValue):
-             value = float(value)
-
-         elif isinstance(converter, Struct):
-             if converter.repeat:
-                 if _isNonStrSequence(value):
-                     value = [self.build(converter.tableClass, v) for v in value]
-                 else:
-                     value = [self.build(converter.tableClass, value)]
-                 setattr(dest, converter.repeat, len(value))
-             else:
-                 value = self.build(converter.tableClass, value)
-         elif callable(converter):
-             value = converter(value)
-
-         setattr(dest, field, value)
-
-     def build(self, cls, source):
-         assert issubclass(cls, BaseTable)
-
-         if isinstance(source, cls):
-             return source
-
-         callbackKey = (cls,)
-         fmt = None
-         if issubclass(cls, FormatSwitchingBaseTable):
-             fmt, source = _split_format(cls, source)
-             callbackKey = (cls, fmt)
-
-         dest = self._callbackTable.get(
-             (BuildCallback.CREATE_DEFAULT,) + callbackKey, lambda: cls()
-         )()
-         assert isinstance(dest, cls)
-
-         convByName = _assignable(cls.convertersByName)
-         skippedFields = set()
-
-         # For format switchers we need to resolve converters based on format
-         if issubclass(cls, FormatSwitchingBaseTable):
-             dest.Format = fmt
-             convByName = _assignable(convByName[dest.Format])
-             skippedFields.add("Format")
-
-         # Convert sequence => mapping so before thunk only has to handle one format
-         if _isNonStrSequence(source):
-             # Sequence (typically list or tuple) assumed to match fields in declaration order
-             assert len(source) <= len(
-                 convByName
-             ), f"Sequence of {len(source)} too long for {cls}; expected <= {len(convByName)} values"
-             source = dict(zip(convByName.keys(), source))
-
-         dest, source = self._callbackTable.get(
-             (BuildCallback.BEFORE_BUILD,) + callbackKey, lambda d, s: (d, s)
-         )(dest, source)
-
-         if isinstance(source, collections.abc.Mapping):
-             for field, value in source.items():
-                 if field in skippedFields:
-                     continue
-                 converter = convByName.get(field, None)
-                 if not converter:
-                     raise ValueError(
-                         f"Unrecognized field {field} for {cls}; expected one of {sorted(convByName.keys())}"
-                     )
-                 self._convert(dest, field, converter, value)
-         else:
-             # let's try as a 1-tuple
-             dest = self.build(cls, (source,))
-
-         for field, conv in convByName.items():
-             if not hasattr(dest, field) and isinstance(conv, OptionalValue):
-                 setattr(dest, field, conv.DEFAULT)
-
-         dest = self._callbackTable.get(
-             (BuildCallback.AFTER_BUILD,) + callbackKey, lambda d: d
-         )(dest)
-
-         return dest
-
-
- class TableUnbuilder:
-     def __init__(self, callbackTable=None):
-         if callbackTable is None:
-             callbackTable = {}
-         self._callbackTable = callbackTable
-
-     def unbuild(self, table):
-         assert isinstance(table, BaseTable)
-
-         source = {}
-
-         callbackKey = (type(table),)
-         if isinstance(table, FormatSwitchingBaseTable):
-             source["Format"] = int(table.Format)
-             callbackKey += (table.Format,)
-
-         for converter in table.getConverters():
-             if isinstance(converter, ComputedInt):
-                 continue
-             value = getattr(table, converter.name)
-
-             enumClass = getattr(converter, "enumClass", None)
-             if enumClass:
-                 source[converter.name] = value.name.lower()
-             elif isinstance(converter, Struct):
-                 if converter.repeat:
-                     source[converter.name] = [self.unbuild(v) for v in value]
-                 else:
-                     source[converter.name] = self.unbuild(value)
-             elif isinstance(converter, SimpleValue):
-                 # "simple" values (e.g. int, float, str) need no further un-building
-                 source[converter.name] = value
-             else:
-                 raise NotImplementedError(
-                     f"Don't know how to unbuild {value!r} with {converter!r}"
-                 )
-
-         source = self._callbackTable.get(callbackKey, lambda s: s)(source)
-
-         return source
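The "Format-first" convention that _split_format implements accepts either a sequence whose first item is the format, or a mapping with an explicit "Format" key. A stand-alone illustration of both branches with made-up field names and values (not fontTools API calls):

seq_source = (1, 0.5, 0.25)  # Format 1, then field values in declaration order
map_source = {"Format": 1, "StartAlpha": 0.5, "EndAlpha": 0.25}

fmt, remainder = seq_source[0], seq_source[1:]  # sequence branch
assert fmt == 1 and remainder == (0.5, 0.25)

remainder = dict(map_source)                    # mapping branch
fmt = remainder.pop("Format")
assert fmt == 1 and "Format" not in remainder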
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/bezierTools.py DELETED
@@ -1,1474 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
- """fontTools.misc.bezierTools.py -- tools for working with Bezier path segments.
3
- """
4
-
5
- from fontTools.misc.arrayTools import calcBounds, sectRect, rectArea
6
- from fontTools.misc.transform import Identity
7
- import math
8
- from collections import namedtuple
9
-
10
- try:
11
- import cython
12
-
13
- COMPILED = cython.compiled
14
- except (AttributeError, ImportError):
15
- # if cython not installed, use mock module with no-op decorators and types
16
- from fontTools.misc import cython
17
-
18
- COMPILED = False
19
-
20
-
21
- Intersection = namedtuple("Intersection", ["pt", "t1", "t2"])
22
-
23
-
24
- __all__ = [
25
- "approximateCubicArcLength",
26
- "approximateCubicArcLengthC",
27
- "approximateQuadraticArcLength",
28
- "approximateQuadraticArcLengthC",
29
- "calcCubicArcLength",
30
- "calcCubicArcLengthC",
31
- "calcQuadraticArcLength",
32
- "calcQuadraticArcLengthC",
33
- "calcCubicBounds",
34
- "calcQuadraticBounds",
35
- "splitLine",
36
- "splitQuadratic",
37
- "splitCubic",
38
- "splitQuadraticAtT",
39
- "splitCubicAtT",
40
- "splitCubicAtTC",
41
- "splitCubicIntoTwoAtTC",
42
- "solveQuadratic",
43
- "solveCubic",
44
- "quadraticPointAtT",
45
- "cubicPointAtT",
46
- "cubicPointAtTC",
47
- "linePointAtT",
48
- "segmentPointAtT",
49
- "lineLineIntersections",
50
- "curveLineIntersections",
51
- "curveCurveIntersections",
52
- "segmentSegmentIntersections",
53
- ]
54
-
55
-
56
- def calcCubicArcLength(pt1, pt2, pt3, pt4, tolerance=0.005):
57
- """Calculates the arc length for a cubic Bezier segment.
58
-
59
- Whereas :func:`approximateCubicArcLength` approximates the length, this
60
- function calculates it by "measuring", recursively dividing the curve
61
- until the divided segments are shorter than ``tolerance``.
62
-
63
- Args:
64
- pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples.
65
- tolerance: Controls the precision of the calcuation.
66
-
67
- Returns:
68
- Arc length value.
69
- """
70
- return calcCubicArcLengthC(
71
- complex(*pt1), complex(*pt2), complex(*pt3), complex(*pt4), tolerance
72
- )
73
-
74
-
75
- def _split_cubic_into_two(p0, p1, p2, p3):
76
- mid = (p0 + 3 * (p1 + p2) + p3) * 0.125
77
- deriv3 = (p3 + p2 - p1 - p0) * 0.125
78
- return (
79
- (p0, (p0 + p1) * 0.5, mid - deriv3, mid),
80
- (mid, mid + deriv3, (p2 + p3) * 0.5, p3),
81
- )
82
-
83
-
84
- @cython.returns(cython.double)
85
- @cython.locals(
86
- p0=cython.complex,
87
- p1=cython.complex,
88
- p2=cython.complex,
89
- p3=cython.complex,
90
- )
91
- @cython.locals(mult=cython.double, arch=cython.double, box=cython.double)
92
- def _calcCubicArcLengthCRecurse(mult, p0, p1, p2, p3):
93
- arch = abs(p0 - p3)
94
- box = abs(p0 - p1) + abs(p1 - p2) + abs(p2 - p3)
95
- if arch * mult >= box:
96
- return (arch + box) * 0.5
97
- else:
98
- one, two = _split_cubic_into_two(p0, p1, p2, p3)
99
- return _calcCubicArcLengthCRecurse(mult, *one) + _calcCubicArcLengthCRecurse(
100
- mult, *two
101
- )
102
-
103
-
104
- @cython.returns(cython.double)
105
- @cython.locals(
106
- pt1=cython.complex,
107
- pt2=cython.complex,
108
- pt3=cython.complex,
109
- pt4=cython.complex,
110
- )
111
- @cython.locals(
112
- tolerance=cython.double,
113
- mult=cython.double,
114
- )
115
- def calcCubicArcLengthC(pt1, pt2, pt3, pt4, tolerance=0.005):
116
- """Calculates the arc length for a cubic Bezier segment.
117
-
118
- Args:
119
- pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers.
120
- tolerance: Controls the precision of the calcuation.
121
-
122
- Returns:
123
- Arc length value.
124
- """
125
- mult = 1.0 + 1.5 * tolerance # The 1.5 is a empirical hack; no math
126
- return _calcCubicArcLengthCRecurse(mult, pt1, pt2, pt3, pt4)
127
-
128
-
129
- epsilonDigits = 6
130
- epsilon = 1e-10
131
-
132
-
133
- @cython.cfunc
134
- @cython.inline
135
- @cython.returns(cython.double)
136
- @cython.locals(v1=cython.complex, v2=cython.complex)
137
- def _dot(v1, v2):
138
- return (v1 * v2.conjugate()).real
139
-
140
-
141
- @cython.cfunc
142
- @cython.inline
143
- @cython.returns(cython.double)
144
- @cython.locals(x=cython.complex)
145
- def _intSecAtan(x):
146
- # In : sympy.integrate(sp.sec(sp.atan(x)))
147
- # Out: x*sqrt(x**2 + 1)/2 + asinh(x)/2
148
- return x * math.sqrt(x**2 + 1) / 2 + math.asinh(x) / 2
149
-
150
-
151
- def calcQuadraticArcLength(pt1, pt2, pt3):
152
- """Calculates the arc length for a quadratic Bezier segment.
153
-
154
- Args:
155
- pt1: Start point of the Bezier as 2D tuple.
156
- pt2: Handle point of the Bezier as 2D tuple.
157
- pt3: End point of the Bezier as 2D tuple.
158
-
159
- Returns:
160
- Arc length value.
161
-
162
- Example::
163
-
164
- >>> calcQuadraticArcLength((0, 0), (0, 0), (0, 0)) # empty segment
165
- 0.0
166
- >>> calcQuadraticArcLength((0, 0), (50, 0), (80, 0)) # collinear points
167
- 80.0
168
- >>> calcQuadraticArcLength((0, 0), (0, 50), (0, 80)) # collinear points vertical
169
- 80.0
170
- >>> calcQuadraticArcLength((0, 0), (50, 20), (100, 40)) # collinear points
171
- 107.70329614269008
172
- >>> calcQuadraticArcLength((0, 0), (0, 100), (100, 0))
173
- 154.02976155645263
174
- >>> calcQuadraticArcLength((0, 0), (0, 50), (100, 0))
175
- 120.21581243984076
176
- >>> calcQuadraticArcLength((0, 0), (50, -10), (80, 50))
177
- 102.53273816445825
178
- >>> calcQuadraticArcLength((0, 0), (40, 0), (-40, 0)) # collinear points, control point outside
179
- 66.66666666666667
180
- >>> calcQuadraticArcLength((0, 0), (40, 0), (0, 0)) # collinear points, looping back
181
- 40.0
182
- """
183
- return calcQuadraticArcLengthC(complex(*pt1), complex(*pt2), complex(*pt3))
184
-
185
-
186
- @cython.returns(cython.double)
187
- @cython.locals(
188
- pt1=cython.complex,
189
- pt2=cython.complex,
190
- pt3=cython.complex,
191
- d0=cython.complex,
192
- d1=cython.complex,
193
- d=cython.complex,
194
- n=cython.complex,
195
- )
196
- @cython.locals(
197
- scale=cython.double,
198
- origDist=cython.double,
199
- a=cython.double,
200
- b=cython.double,
201
- x0=cython.double,
202
- x1=cython.double,
203
- Len=cython.double,
204
- )
205
- def calcQuadraticArcLengthC(pt1, pt2, pt3):
206
- """Calculates the arc length for a quadratic Bezier segment.
207
-
208
- Args:
209
- pt1: Start point of the Bezier as a complex number.
210
- pt2: Handle point of the Bezier as a complex number.
211
- pt3: End point of the Bezier as a complex number.
212
-
213
- Returns:
214
- Arc length value.
215
- """
216
- # Analytical solution to the length of a quadratic bezier.
217
- # Documentation: https://github.com/fonttools/fonttools/issues/3055
218
- d0 = pt2 - pt1
219
- d1 = pt3 - pt2
220
- d = d1 - d0
221
- n = d * 1j
222
- scale = abs(n)
223
- if scale == 0.0:
224
- return abs(pt3 - pt1)
225
- origDist = _dot(n, d0)
226
- if abs(origDist) < epsilon:
227
- if _dot(d0, d1) >= 0:
228
- return abs(pt3 - pt1)
229
- a, b = abs(d0), abs(d1)
230
- return (a * a + b * b) / (a + b)
231
- x0 = _dot(d, d0) / origDist
232
- x1 = _dot(d, d1) / origDist
233
- Len = abs(2 * (_intSecAtan(x1) - _intSecAtan(x0)) * origDist / (scale * (x1 - x0)))
234
- return Len
235
-
236
-
237
- def approximateQuadraticArcLength(pt1, pt2, pt3):
238
- """Calculates the arc length for a quadratic Bezier segment.
239
-
240
- Uses Gauss-Legendre quadrature for a branch-free approximation.
241
- See :func:`calcQuadraticArcLength` for a slower but more accurate result.
242
-
243
- Args:
244
- pt1: Start point of the Bezier as 2D tuple.
245
- pt2: Handle point of the Bezier as 2D tuple.
246
- pt3: End point of the Bezier as 2D tuple.
247
-
248
- Returns:
249
- Approximate arc length value.
250
- """
251
- return approximateQuadraticArcLengthC(complex(*pt1), complex(*pt2), complex(*pt3))
252
-
253
-
254
- @cython.returns(cython.double)
255
- @cython.locals(
256
- pt1=cython.complex,
257
- pt2=cython.complex,
258
- pt3=cython.complex,
259
- )
260
- @cython.locals(
261
- v0=cython.double,
262
- v1=cython.double,
263
- v2=cython.double,
264
- )
265
- def approximateQuadraticArcLengthC(pt1, pt2, pt3):
266
- """Calculates the arc length for a quadratic Bezier segment.
267
-
268
- Uses Gauss-Legendre quadrature for a branch-free approximation.
269
- See :func:`calcQuadraticArcLength` for a slower but more accurate result.
270
-
271
- Args:
272
- pt1: Start point of the Bezier as a complex number.
273
- pt2: Handle point of the Bezier as a complex number.
274
- pt3: End point of the Bezier as a complex number.
275
-
276
- Returns:
277
- Approximate arc length value.
278
- """
279
- # This, essentially, approximates the length-of-derivative function
280
- # to be integrated with the best-matching fifth-degree polynomial
281
- # approximation of it.
282
- #
283
- # https://en.wikipedia.org/wiki/Gaussian_quadrature#Gauss.E2.80.93Legendre_quadrature
284
-
285
- # abs(BezierCurveC[2].diff(t).subs({t:T})) for T in sorted(.5, .5±sqrt(3/5)/2),
286
- # weighted 5/18, 8/18, 5/18 respectively.
287
- v0 = abs(
288
- -0.492943519233745 * pt1 + 0.430331482911935 * pt2 + 0.0626120363218102 * pt3
289
- )
290
- v1 = abs(pt3 - pt1) * 0.4444444444444444
291
- v2 = abs(
292
- -0.0626120363218102 * pt1 - 0.430331482911935 * pt2 + 0.492943519233745 * pt3
293
- )
294
-
295
- return v0 + v1 + v2
296
-
297
-
298
- def calcQuadraticBounds(pt1, pt2, pt3):
299
- """Calculates the bounding rectangle for a quadratic Bezier segment.
300
-
301
- Args:
302
- pt1: Start point of the Bezier as a 2D tuple.
303
- pt2: Handle point of the Bezier as a 2D tuple.
304
- pt3: End point of the Bezier as a 2D tuple.
305
-
306
- Returns:
307
- A four-item tuple representing the bounding rectangle ``(xMin, yMin, xMax, yMax)``.
308
-
309
- Example::
310
-
311
- >>> calcQuadraticBounds((0, 0), (50, 100), (100, 0))
312
- (0, 0, 100, 50.0)
313
- >>> calcQuadraticBounds((0, 0), (100, 0), (100, 100))
314
- (0.0, 0.0, 100, 100)
315
- """
316
- (ax, ay), (bx, by), (cx, cy) = calcQuadraticParameters(pt1, pt2, pt3)
317
- ax2 = ax * 2.0
318
- ay2 = ay * 2.0
319
- roots = []
320
- if ax2 != 0:
321
- roots.append(-bx / ax2)
322
- if ay2 != 0:
323
- roots.append(-by / ay2)
324
- points = [
325
- (ax * t * t + bx * t + cx, ay * t * t + by * t + cy)
326
- for t in roots
327
- if 0 <= t < 1
328
- ] + [pt1, pt3]
329
- return calcBounds(points)
330
-
331
-
332
- def approximateCubicArcLength(pt1, pt2, pt3, pt4):
333
- """Approximates the arc length for a cubic Bezier segment.
334
-
335
- Uses Gauss-Lobatto quadrature with n=5 points to approximate arc length.
336
- See :func:`calcCubicArcLength` for a slower but more accurate result.
337
-
338
- Args:
339
- pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples.
340
-
341
- Returns:
342
- Arc length value.
343
-
344
- Example::
345
-
346
- >>> approximateCubicArcLength((0, 0), (25, 100), (75, 100), (100, 0))
347
- 190.04332968932817
348
- >>> approximateCubicArcLength((0, 0), (50, 0), (100, 50), (100, 100))
349
- 154.8852074945903
350
- >>> approximateCubicArcLength((0, 0), (50, 0), (100, 0), (150, 0)) # line; exact result should be 150.
351
- 149.99999999999991
352
- >>> approximateCubicArcLength((0, 0), (50, 0), (100, 0), (-50, 0)) # cusp; exact result should be 150.
353
- 136.9267662156362
354
- >>> approximateCubicArcLength((0, 0), (50, 0), (100, -50), (-50, 0)) # cusp
355
- 154.80848416537057
356
- """
357
- return approximateCubicArcLengthC(
358
- complex(*pt1), complex(*pt2), complex(*pt3), complex(*pt4)
359
- )
360
-
361
-
362
- @cython.returns(cython.double)
363
- @cython.locals(
364
- pt1=cython.complex,
365
- pt2=cython.complex,
366
- pt3=cython.complex,
367
- pt4=cython.complex,
368
- )
369
- @cython.locals(
370
- v0=cython.double,
371
- v1=cython.double,
372
- v2=cython.double,
373
- v3=cython.double,
374
- v4=cython.double,
375
- )
376
- def approximateCubicArcLengthC(pt1, pt2, pt3, pt4):
377
- """Approximates the arc length for a cubic Bezier segment.
378
-
379
- Args:
380
- pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers.
381
-
382
- Returns:
383
- Arc length value.
384
- """
385
- # This, essentially, approximates the length-of-derivative function
386
- # to be integrated with the best-matching seventh-degree polynomial
387
- # approximation of it.
388
- #
389
- # https://en.wikipedia.org/wiki/Gaussian_quadrature#Gauss.E2.80.93Lobatto_rules
390
-
391
- # abs(BezierCurveC[3].diff(t).subs({t:T})) for T in sorted(0, .5±(3/7)**.5/2, .5, 1),
392
- # weighted 1/20, 49/180, 32/90, 49/180, 1/20 respectively.
393
- v0 = abs(pt2 - pt1) * 0.15
394
- v1 = abs(
395
- -0.558983582205757 * pt1
396
- + 0.325650248872424 * pt2
397
- + 0.208983582205757 * pt3
398
- + 0.024349751127576 * pt4
399
- )
400
- v2 = abs(pt4 - pt1 + pt3 - pt2) * 0.26666666666666666
401
- v3 = abs(
402
- -0.024349751127576 * pt1
403
- - 0.208983582205757 * pt2
404
- - 0.325650248872424 * pt3
405
- + 0.558983582205757 * pt4
406
- )
407
- v4 = abs(pt4 - pt3) * 0.15
408
-
409
- return v0 + v1 + v2 + v3 + v4
410
-
411
-
412
- def calcCubicBounds(pt1, pt2, pt3, pt4):
413
- """Calculates the bounding rectangle for a quadratic Bezier segment.
414
-
415
- Args:
416
- pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples.
417
-
418
- Returns:
419
- A four-item tuple representing the bounding rectangle ``(xMin, yMin, xMax, yMax)``.
420
-
421
- Example::
422
-
423
- >>> calcCubicBounds((0, 0), (25, 100), (75, 100), (100, 0))
424
- (0, 0, 100, 75.0)
425
- >>> calcCubicBounds((0, 0), (50, 0), (100, 50), (100, 100))
426
- (0.0, 0.0, 100, 100)
427
- >>> print("%f %f %f %f" % calcCubicBounds((50, 0), (0, 100), (100, 100), (50, 0)))
428
- 35.566243 0.000000 64.433757 75.000000
429
- """
430
- (ax, ay), (bx, by), (cx, cy), (dx, dy) = calcCubicParameters(pt1, pt2, pt3, pt4)
431
- # calc first derivative
432
- ax3 = ax * 3.0
433
- ay3 = ay * 3.0
434
- bx2 = bx * 2.0
435
- by2 = by * 2.0
436
- xRoots = [t for t in solveQuadratic(ax3, bx2, cx) if 0 <= t < 1]
437
- yRoots = [t for t in solveQuadratic(ay3, by2, cy) if 0 <= t < 1]
438
- roots = xRoots + yRoots
439
-
440
- points = [
441
- (
442
- ax * t * t * t + bx * t * t + cx * t + dx,
443
- ay * t * t * t + by * t * t + cy * t + dy,
444
- )
445
- for t in roots
446
- ] + [pt1, pt4]
447
- return calcBounds(points)
448
-
449
-
450
- def splitLine(pt1, pt2, where, isHorizontal):
451
- """Split a line at a given coordinate.
452
-
453
- Args:
454
- pt1: Start point of line as 2D tuple.
455
- pt2: End point of line as 2D tuple.
456
- where: Position at which to split the line.
457
- isHorizontal: Direction of the ray splitting the line. If true,
458
- ``where`` is interpreted as a Y coordinate; if false, then
459
- ``where`` is interpreted as an X coordinate.
460
-
461
- Returns:
462
- A list of two line segments (each line segment being two 2D tuples)
463
- if the line was successfully split, or a list containing the original
464
- line.
465
-
466
- Example::
467
-
468
- >>> printSegments(splitLine((0, 0), (100, 100), 50, True))
469
- ((0, 0), (50, 50))
470
- ((50, 50), (100, 100))
471
- >>> printSegments(splitLine((0, 0), (100, 100), 100, True))
472
- ((0, 0), (100, 100))
473
- >>> printSegments(splitLine((0, 0), (100, 100), 0, True))
474
- ((0, 0), (0, 0))
475
- ((0, 0), (100, 100))
476
- >>> printSegments(splitLine((0, 0), (100, 100), 0, False))
477
- ((0, 0), (0, 0))
478
- ((0, 0), (100, 100))
479
- >>> printSegments(splitLine((100, 0), (0, 0), 50, False))
480
- ((100, 0), (50, 0))
481
- ((50, 0), (0, 0))
482
- >>> printSegments(splitLine((0, 100), (0, 0), 50, True))
483
- ((0, 100), (0, 50))
484
- ((0, 50), (0, 0))
485
- """
486
- pt1x, pt1y = pt1
487
- pt2x, pt2y = pt2
488
-
489
- ax = pt2x - pt1x
490
- ay = pt2y - pt1y
491
-
492
- bx = pt1x
493
- by = pt1y
494
-
495
- a = (ax, ay)[isHorizontal]
496
-
497
- if a == 0:
498
- return [(pt1, pt2)]
499
- t = (where - (bx, by)[isHorizontal]) / a
500
- if 0 <= t < 1:
501
- midPt = ax * t + bx, ay * t + by
502
- return [(pt1, midPt), (midPt, pt2)]
503
- else:
504
- return [(pt1, pt2)]
505
-
506
-
507
- def splitQuadratic(pt1, pt2, pt3, where, isHorizontal):
508
- """Split a quadratic Bezier curve at a given coordinate.
509
-
510
- Args:
511
- pt1,pt2,pt3: Control points of the Bezier as 2D tuples.
512
- where: Position at which to split the curve.
513
- isHorizontal: Direction of the ray splitting the curve. If true,
514
- ``where`` is interpreted as a Y coordinate; if false, then
515
- ``where`` is interpreted as an X coordinate.
516
-
517
- Returns:
518
- A list of two curve segments (each curve segment being three 2D tuples)
519
- if the curve was successfully split, or a list containing the original
520
- curve.
521
-
522
- Example::
523
-
524
- >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 150, False))
525
- ((0, 0), (50, 100), (100, 0))
526
- >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 50, False))
527
- ((0, 0), (25, 50), (50, 50))
528
- ((50, 50), (75, 50), (100, 0))
529
- >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 25, False))
530
- ((0, 0), (12.5, 25), (25, 37.5))
531
- ((25, 37.5), (62.5, 75), (100, 0))
532
- >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 25, True))
533
- ((0, 0), (7.32233, 14.6447), (14.6447, 25))
534
- ((14.6447, 25), (50, 75), (85.3553, 25))
535
- ((85.3553, 25), (92.6777, 14.6447), (100, -7.10543e-15))
536
- >>> # XXX I'm not at all sure if the following behavior is desirable:
537
- >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 50, True))
538
- ((0, 0), (25, 50), (50, 50))
539
- ((50, 50), (50, 50), (50, 50))
540
- ((50, 50), (75, 50), (100, 0))
541
- """
542
- a, b, c = calcQuadraticParameters(pt1, pt2, pt3)
543
- solutions = solveQuadratic(
544
- a[isHorizontal], b[isHorizontal], c[isHorizontal] - where
545
- )
546
- solutions = sorted(t for t in solutions if 0 <= t < 1)
547
- if not solutions:
548
- return [(pt1, pt2, pt3)]
549
- return _splitQuadraticAtT(a, b, c, *solutions)
550
-
551
-
552
- def splitCubic(pt1, pt2, pt3, pt4, where, isHorizontal):
553
- """Split a cubic Bezier curve at a given coordinate.
554
-
555
- Args:
556
- pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples.
557
- where: Position at which to split the curve.
558
- isHorizontal: Direction of the ray splitting the curve. If true,
559
- ``where`` is interpreted as a Y coordinate; if false, then
560
- ``where`` is interpreted as an X coordinate.
561
-
562
- Returns:
563
- A list of two curve segments (each curve segment being four 2D tuples)
564
- if the curve was successfully split, or a list containing the original
565
- curve.
566
-
567
- Example::
568
-
569
- >>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 150, False))
570
- ((0, 0), (25, 100), (75, 100), (100, 0))
571
- >>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 50, False))
572
- ((0, 0), (12.5, 50), (31.25, 75), (50, 75))
573
- ((50, 75), (68.75, 75), (87.5, 50), (100, 0))
574
- >>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 25, True))
575
- ((0, 0), (2.29379, 9.17517), (4.79804, 17.5085), (7.47414, 25))
576
- ((7.47414, 25), (31.2886, 91.6667), (68.7114, 91.6667), (92.5259, 25))
577
- ((92.5259, 25), (95.202, 17.5085), (97.7062, 9.17517), (100, 1.77636e-15))
578
- """
579
- a, b, c, d = calcCubicParameters(pt1, pt2, pt3, pt4)
580
- solutions = solveCubic(
581
- a[isHorizontal], b[isHorizontal], c[isHorizontal], d[isHorizontal] - where
582
- )
583
- solutions = sorted(t for t in solutions if 0 <= t < 1)
584
- if not solutions:
585
- return [(pt1, pt2, pt3, pt4)]
586
- return _splitCubicAtT(a, b, c, d, *solutions)
587
-
588
-
589
- def splitQuadraticAtT(pt1, pt2, pt3, *ts):
590
- """Split a quadratic Bezier curve at one or more values of t.
591
-
592
- Args:
593
- pt1,pt2,pt3: Control points of the Bezier as 2D tuples.
594
- *ts: Positions at which to split the curve.
595
-
596
- Returns:
597
- A list of curve segments (each curve segment being three 2D tuples).
598
-
599
- Examples::
600
-
601
- >>> printSegments(splitQuadraticAtT((0, 0), (50, 100), (100, 0), 0.5))
602
- ((0, 0), (25, 50), (50, 50))
603
- ((50, 50), (75, 50), (100, 0))
604
- >>> printSegments(splitQuadraticAtT((0, 0), (50, 100), (100, 0), 0.5, 0.75))
605
- ((0, 0), (25, 50), (50, 50))
606
- ((50, 50), (62.5, 50), (75, 37.5))
607
- ((75, 37.5), (87.5, 25), (100, 0))
608
- """
609
- a, b, c = calcQuadraticParameters(pt1, pt2, pt3)
610
- return _splitQuadraticAtT(a, b, c, *ts)
611
-
612
-
613
- def splitCubicAtT(pt1, pt2, pt3, pt4, *ts):
614
- """Split a cubic Bezier curve at one or more values of t.
615
-
616
- Args:
617
- pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples.
618
- *ts: Positions at which to split the curve.
619
-
620
- Returns:
621
- A list of curve segments (each curve segment being four 2D tuples).
622
-
623
- Examples::
624
-
625
- >>> printSegments(splitCubicAtT((0, 0), (25, 100), (75, 100), (100, 0), 0.5))
626
- ((0, 0), (12.5, 50), (31.25, 75), (50, 75))
627
- ((50, 75), (68.75, 75), (87.5, 50), (100, 0))
628
- >>> printSegments(splitCubicAtT((0, 0), (25, 100), (75, 100), (100, 0), 0.5, 0.75))
629
- ((0, 0), (12.5, 50), (31.25, 75), (50, 75))
630
- ((50, 75), (59.375, 75), (68.75, 68.75), (77.3438, 56.25))
631
- ((77.3438, 56.25), (85.9375, 43.75), (93.75, 25), (100, 0))
632
- """
633
- a, b, c, d = calcCubicParameters(pt1, pt2, pt3, pt4)
634
- return _splitCubicAtT(a, b, c, d, *ts)
635
-
636
-
637
- @cython.locals(
638
- pt1=cython.complex,
639
- pt2=cython.complex,
640
- pt3=cython.complex,
641
- pt4=cython.complex,
642
- a=cython.complex,
643
- b=cython.complex,
644
- c=cython.complex,
645
- d=cython.complex,
646
- )
647
- def splitCubicAtTC(pt1, pt2, pt3, pt4, *ts):
648
- """Split a cubic Bezier curve at one or more values of t.
649
-
650
- Args:
651
- pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers..
652
- *ts: Positions at which to split the curve.
653
-
654
- Yields:
655
- Curve segments (each curve segment being four complex numbers).
656
- """
657
- a, b, c, d = calcCubicParametersC(pt1, pt2, pt3, pt4)
658
- yield from _splitCubicAtTC(a, b, c, d, *ts)
659
-
660
-
661
- @cython.returns(cython.complex)
662
- @cython.locals(
663
- t=cython.double,
664
- pt1=cython.complex,
665
- pt2=cython.complex,
666
- pt3=cython.complex,
667
- pt4=cython.complex,
668
- pointAtT=cython.complex,
669
- off1=cython.complex,
670
- off2=cython.complex,
671
- )
672
- @cython.locals(
673
- t2=cython.double, _1_t=cython.double, _1_t_2=cython.double, _2_t_1_t=cython.double
674
- )
675
- def splitCubicIntoTwoAtTC(pt1, pt2, pt3, pt4, t):
676
- """Split a cubic Bezier curve at t.
677
-
678
- Args:
679
- pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers.
680
- t: Position at which to split the curve.
681
-
682
- Returns:
683
- A tuple of two curve segments (each curve segment being four complex numbers).
684
- """
685
- t2 = t * t
686
- _1_t = 1 - t
687
- _1_t_2 = _1_t * _1_t
688
- _2_t_1_t = 2 * t * _1_t
689
- pointAtT = (
690
- _1_t_2 * _1_t * pt1 + 3 * (_1_t_2 * t * pt2 + _1_t * t2 * pt3) + t2 * t * pt4
691
- )
692
- off1 = _1_t_2 * pt1 + _2_t_1_t * pt2 + t2 * pt3
693
- off2 = _1_t_2 * pt2 + _2_t_1_t * pt3 + t2 * pt4
694
-
695
- pt2 = pt1 + (pt2 - pt1) * t
696
- pt3 = pt4 + (pt3 - pt4) * _1_t
697
-
698
- return ((pt1, pt2, off1, pointAtT), (pointAtT, off2, pt3, pt4))
699
-
700
-
701
- def _splitQuadraticAtT(a, b, c, *ts):
702
- ts = list(ts)
703
- segments = []
704
- ts.insert(0, 0.0)
705
- ts.append(1.0)
706
- ax, ay = a
707
- bx, by = b
708
- cx, cy = c
709
- for i in range(len(ts) - 1):
710
- t1 = ts[i]
711
- t2 = ts[i + 1]
712
- delta = t2 - t1
713
- # calc new a, b and c
714
- delta_2 = delta * delta
715
- a1x = ax * delta_2
716
- a1y = ay * delta_2
717
- b1x = (2 * ax * t1 + bx) * delta
718
- b1y = (2 * ay * t1 + by) * delta
719
- t1_2 = t1 * t1
720
- c1x = ax * t1_2 + bx * t1 + cx
721
- c1y = ay * t1_2 + by * t1 + cy
722
-
723
- pt1, pt2, pt3 = calcQuadraticPoints((a1x, a1y), (b1x, b1y), (c1x, c1y))
724
- segments.append((pt1, pt2, pt3))
725
- return segments
726
-
727
-
728
- def _splitCubicAtT(a, b, c, d, *ts):
-     ts = list(ts)
-     ts.insert(0, 0.0)
-     ts.append(1.0)
-     segments = []
-     ax, ay = a
-     bx, by = b
-     cx, cy = c
-     dx, dy = d
-     for i in range(len(ts) - 1):
-         t1 = ts[i]
-         t2 = ts[i + 1]
-         delta = t2 - t1
-
-         delta_2 = delta * delta
-         delta_3 = delta * delta_2
-         t1_2 = t1 * t1
-         t1_3 = t1 * t1_2
-
-         # calc new a, b, c and d
-         a1x = ax * delta_3
-         a1y = ay * delta_3
-         b1x = (3 * ax * t1 + bx) * delta_2
-         b1y = (3 * ay * t1 + by) * delta_2
-         c1x = (2 * bx * t1 + cx + 3 * ax * t1_2) * delta
-         c1y = (2 * by * t1 + cy + 3 * ay * t1_2) * delta
-         d1x = ax * t1_3 + bx * t1_2 + cx * t1 + dx
-         d1y = ay * t1_3 + by * t1_2 + cy * t1 + dy
-         pt1, pt2, pt3, pt4 = calcCubicPoints(
-             (a1x, a1y), (b1x, b1y), (c1x, c1y), (d1x, d1y)
-         )
-         segments.append((pt1, pt2, pt3, pt4))
-     return segments
-
-
- @cython.locals(
-     a=cython.complex,
-     b=cython.complex,
-     c=cython.complex,
-     d=cython.complex,
-     t1=cython.double,
-     t2=cython.double,
-     delta=cython.double,
-     delta_2=cython.double,
-     delta_3=cython.double,
-     a1=cython.complex,
-     b1=cython.complex,
-     c1=cython.complex,
-     d1=cython.complex,
- )
- def _splitCubicAtTC(a, b, c, d, *ts):
-     ts = list(ts)
-     ts.insert(0, 0.0)
-     ts.append(1.0)
-     for i in range(len(ts) - 1):
-         t1 = ts[i]
-         t2 = ts[i + 1]
-         delta = t2 - t1
-
-         delta_2 = delta * delta
-         delta_3 = delta * delta_2
-         t1_2 = t1 * t1
-         t1_3 = t1 * t1_2
-
-         # calc new a, b, c and d
-         a1 = a * delta_3
-         b1 = (3 * a * t1 + b) * delta_2
-         c1 = (2 * b * t1 + c + 3 * a * t1_2) * delta
-         d1 = a * t1_3 + b * t1_2 + c * t1 + d
-         pt1, pt2, pt3, pt4 = calcCubicPointsC(a1, b1, c1, d1)
-         yield (pt1, pt2, pt3, pt4)
-
-
- #
- # Equation solvers.
- #
-
- from math import sqrt, acos, cos, pi
-
-
- def solveQuadratic(a, b, c, sqrt=sqrt):
-     """Solve a quadratic equation.
-
-     Solves *a*x*x + b*x + c = 0* where a, b and c are real.
-
-     Args:
-         a: coefficient of *x²*
-         b: coefficient of *x*
-         c: constant term
-
-     Returns:
-         A list of roots. Note that the returned list is neither guaranteed to
-         be sorted nor to contain unique values!
-     """
-     if abs(a) < epsilon:
-         if abs(b) < epsilon:
-             # We have a non-equation; therefore, we have no valid solution
-             roots = []
-         else:
-             # We have a linear equation with 1 root.
-             roots = [-c / b]
-     else:
-         # We have a true quadratic equation. Apply the quadratic formula to find two roots.
-         DD = b * b - 4.0 * a * c
-         if DD >= 0.0:
-             rDD = sqrt(DD)
-             roots = [(-b + rDD) / 2.0 / a, (-b - rDD) / 2.0 / a]
-         else:
-             # complex roots, ignore
-             roots = []
-     return roots
-
-
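A one-line sanity check (added illustration, not from the file):

    >>> solveQuadratic(1, -3, 2)  # x**2 - 3*x + 2 = 0
    [2.0, 1.0]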
- def solveCubic(a, b, c, d):
-     """Solve a cubic equation.
-
-     Solves *a*x*x*x + b*x*x + c*x + d = 0* where a, b, c and d are real.
-
-     Args:
-         a: coefficient of *x³*
-         b: coefficient of *x²*
-         c: coefficient of *x*
-         d: constant term
-
-     Returns:
-         A list of roots. Note that the returned list is neither guaranteed to
-         be sorted nor to contain unique values!
-
-     Examples::
-
-         >>> solveCubic(1, 1, -6, 0)
-         [-3.0, -0.0, 2.0]
-         >>> solveCubic(-10.0, -9.0, 48.0, -29.0)
-         [-2.9, 1.0, 1.0]
-         >>> solveCubic(-9.875, -9.0, 47.625, -28.75)
-         [-2.911392, 1.0, 1.0]
-         >>> solveCubic(1.0, -4.5, 6.75, -3.375)
-         [1.5, 1.5, 1.5]
-         >>> solveCubic(-12.0, 18.0, -9.0, 1.50023651123)
-         [0.5, 0.5, 0.5]
-         >>> solveCubic(
-         ...     9.0, 0.0, 0.0, -7.62939453125e-05
-         ... ) == [-0.0, -0.0, -0.0]
-         True
-     """
-     #
-     # adapted from:
-     #   CUBIC.C - Solve a cubic polynomial
-     #   public domain by Ross Cottrell
-     # found at: http://www.strangecreations.com/library/snippets/Cubic.C
-     #
-     if abs(a) < epsilon:
-         # don't just test for zero; for very small values of 'a' solveCubic()
-         # returns unreliable results, so we fall back to quad.
-         return solveQuadratic(b, c, d)
-     a = float(a)
-     a1 = b / a
-     a2 = c / a
-     a3 = d / a
-
-     Q = (a1 * a1 - 3.0 * a2) / 9.0
-     R = (2.0 * a1 * a1 * a1 - 9.0 * a1 * a2 + 27.0 * a3) / 54.0
-
-     R2 = R * R
-     Q3 = Q * Q * Q
-     R2 = 0 if R2 < epsilon else R2
-     Q3 = 0 if abs(Q3) < epsilon else Q3
-
-     R2_Q3 = R2 - Q3
-
-     if R2 == 0.0 and Q3 == 0.0:
-         x = round(-a1 / 3.0, epsilonDigits)
-         return [x, x, x]
-     elif R2_Q3 <= epsilon * 0.5:
-         # The epsilon * .5 above ensures that Q3 is not zero.
-         theta = acos(max(min(R / sqrt(Q3), 1.0), -1.0))
-         rQ2 = -2.0 * sqrt(Q)
-         a1_3 = a1 / 3.0
-         x0 = rQ2 * cos(theta / 3.0) - a1_3
-         x1 = rQ2 * cos((theta + 2.0 * pi) / 3.0) - a1_3
-         x2 = rQ2 * cos((theta + 4.0 * pi) / 3.0) - a1_3
-         x0, x1, x2 = sorted([x0, x1, x2])
-         # Merge roots that are close-enough
-         if x1 - x0 < epsilon and x2 - x1 < epsilon:
-             x0 = x1 = x2 = round((x0 + x1 + x2) / 3.0, epsilonDigits)
-         elif x1 - x0 < epsilon:
-             x0 = x1 = round((x0 + x1) / 2.0, epsilonDigits)
-             x2 = round(x2, epsilonDigits)
-         elif x2 - x1 < epsilon:
-             x0 = round(x0, epsilonDigits)
-             x1 = x2 = round((x1 + x2) / 2.0, epsilonDigits)
-         else:
-             x0 = round(x0, epsilonDigits)
-             x1 = round(x1, epsilonDigits)
-             x2 = round(x2, epsilonDigits)
-         return [x0, x1, x2]
-     else:
-         x = pow(sqrt(R2_Q3) + abs(R), 1 / 3.0)
-         x = x + Q / x
-         if R >= 0.0:
-             x = -x
-         x = round(x - a1 / 3.0, epsilonDigits)
-         return [x]
-
-
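Method note (added for context): the three-real-root branch is the trigonometric (Vieta) solution, x_k = -2*sqrt(Q)*cos((theta + 2*pi*k)/3) - a1/3 with cos(theta) = R/sqrt(Q**3), while the single-root branch is the Cardano-style real root; the epsilon clamping and root merging guard against the floating-point noise the doctests above exercise.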
- #
- # Conversion routines for points to parameters and vice versa
- #
-
-
- def calcQuadraticParameters(pt1, pt2, pt3):
-     x2, y2 = pt2
-     x3, y3 = pt3
-     cx, cy = pt1
-     bx = (x2 - cx) * 2.0
-     by = (y2 - cy) * 2.0
-     ax = x3 - cx - bx
-     ay = y3 - cy - by
-     return (ax, ay), (bx, by), (cx, cy)
-
-
- def calcCubicParameters(pt1, pt2, pt3, pt4):
-     x2, y2 = pt2
-     x3, y3 = pt3
-     x4, y4 = pt4
-     dx, dy = pt1
-     cx = (x2 - dx) * 3.0
-     cy = (y2 - dy) * 3.0
-     bx = (x3 - x2) * 3.0 - cx
-     by = (y3 - y2) * 3.0 - cy
-     ax = x4 - dx - cx - bx
-     ay = y4 - dy - cy - by
-     return (ax, ay), (bx, by), (cx, cy), (dx, dy)
-
-
- @cython.cfunc
- @cython.inline
- @cython.locals(
-     pt1=cython.complex,
-     pt2=cython.complex,
-     pt3=cython.complex,
-     pt4=cython.complex,
-     a=cython.complex,
-     b=cython.complex,
-     c=cython.complex,
- )
- def calcCubicParametersC(pt1, pt2, pt3, pt4):
-     c = (pt2 - pt1) * 3.0
-     b = (pt3 - pt2) * 3.0 - c
-     a = pt4 - pt1 - c - b
-     return (a, b, c, pt1)
-
-
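In equation form (for context): these converters map Bernstein control points (p1, p2, p3, p4) to the power-basis coefficients of B(t) = a*t**3 + b*t**2 + c*t + d via d = p1, c = 3*(p2 - p1), b = 3*(p3 - p2) - c and a = p4 - d - c - b; the quadratic versions are the analogous degree-2 reduction.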
- def calcQuadraticPoints(a, b, c):
-     ax, ay = a
-     bx, by = b
-     cx, cy = c
-     x1 = cx
-     y1 = cy
-     x2 = (bx * 0.5) + cx
-     y2 = (by * 0.5) + cy
-     x3 = ax + bx + cx
-     y3 = ay + by + cy
-     return (x1, y1), (x2, y2), (x3, y3)
-
-
- def calcCubicPoints(a, b, c, d):
-     ax, ay = a
-     bx, by = b
-     cx, cy = c
-     dx, dy = d
-     x1 = dx
-     y1 = dy
-     x2 = (cx / 3.0) + dx
-     y2 = (cy / 3.0) + dy
-     x3 = (bx + cx) / 3.0 + x2
-     y3 = (by + cy) / 3.0 + y2
-     x4 = ax + dx + cx + bx
-     y4 = ay + dy + cy + by
-     return (x1, y1), (x2, y2), (x3, y3), (x4, y4)
-
-
- @cython.cfunc
- @cython.inline
- @cython.locals(
-     a=cython.complex,
-     b=cython.complex,
-     c=cython.complex,
-     d=cython.complex,
-     p2=cython.complex,
-     p3=cython.complex,
-     p4=cython.complex,
- )
- def calcCubicPointsC(a, b, c, d):
-     p2 = c * (1 / 3) + d
-     p3 = (b + c) * (1 / 3) + p2
-     p4 = a + b + c + d
-     return (d, p2, p3, p4)
-
-
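A round-trip sketch (added illustration): converting control points to parameters and back reproduces them, up to int/float type.

    >>> params = calcCubicParameters((0, 0), (25, 100), (75, 100), (100, 0))
    >>> calcCubicPoints(*params)
    ((0, 0), (25.0, 100.0), (75.0, 100.0), (100.0, 0.0))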
- #
- # Point at time
- #
-
-
- def linePointAtT(pt1, pt2, t):
-     """Finds the point at time `t` on a line.
-
-     Args:
-         pt1, pt2: Coordinates of the line as 2D tuples.
-         t: The time along the line.
-
-     Returns:
-         A 2D tuple with the coordinates of the point.
-     """
-     return ((pt1[0] * (1 - t) + pt2[0] * t), (pt1[1] * (1 - t) + pt2[1] * t))
-
-
- def quadraticPointAtT(pt1, pt2, pt3, t):
-     """Finds the point at time `t` on a quadratic curve.
-
-     Args:
-         pt1, pt2, pt3: Coordinates of the curve as 2D tuples.
-         t: The time along the curve.
-
-     Returns:
-         A 2D tuple with the coordinates of the point.
-     """
-     x = (1 - t) * (1 - t) * pt1[0] + 2 * (1 - t) * t * pt2[0] + t * t * pt3[0]
-     y = (1 - t) * (1 - t) * pt1[1] + 2 * (1 - t) * t * pt2[1] + t * t * pt3[1]
-     return (x, y)
-
-
- def cubicPointAtT(pt1, pt2, pt3, pt4, t):
-     """Finds the point at time `t` on a cubic curve.
-
-     Args:
-         pt1, pt2, pt3, pt4: Coordinates of the curve as 2D tuples.
-         t: The time along the curve.
-
-     Returns:
-         A 2D tuple with the coordinates of the point.
-     """
-     t2 = t * t
-     _1_t = 1 - t
-     _1_t_2 = _1_t * _1_t
-     x = (
-         _1_t_2 * _1_t * pt1[0]
-         + 3 * (_1_t_2 * t * pt2[0] + _1_t * t2 * pt3[0])
-         + t2 * t * pt4[0]
-     )
-     y = (
-         _1_t_2 * _1_t * pt1[1]
-         + 3 * (_1_t_2 * t * pt2[1] + _1_t * t2 * pt3[1])
-         + t2 * t * pt4[1]
-     )
-     return (x, y)
-
-
- @cython.returns(cython.complex)
- @cython.locals(
-     t=cython.double,
-     pt1=cython.complex,
-     pt2=cython.complex,
-     pt3=cython.complex,
-     pt4=cython.complex,
- )
- @cython.locals(t2=cython.double, _1_t=cython.double, _1_t_2=cython.double)
- def cubicPointAtTC(pt1, pt2, pt3, pt4, t):
-     """Finds the point at time `t` on a cubic curve.
-
-     Args:
-         pt1, pt2, pt3, pt4: Coordinates of the curve as complex numbers.
-         t: The time along the curve.
-
-     Returns:
-         A complex number with the coordinates of the point.
-     """
-     t2 = t * t
-     _1_t = 1 - t
-     _1_t_2 = _1_t * _1_t
-     return _1_t_2 * _1_t * pt1 + 3 * (_1_t_2 * t * pt2 + _1_t * t2 * pt3) + t2 * t * pt4
-
-
- def segmentPointAtT(seg, t):
-     if len(seg) == 2:
-         return linePointAtT(*seg, t)
-     elif len(seg) == 3:
-         return quadraticPointAtT(*seg, t)
-     elif len(seg) == 4:
-         return cubicPointAtT(*seg, t)
-     raise ValueError("Unknown curve degree")
-
-
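`segmentPointAtT` dispatches on the segment's point count (2 = line, 3 = quadratic, 4 = cubic). A minimal sketch (added illustration):

    >>> segmentPointAtT([(0, 0), (100, 0)], 0.25)
    (25.0, 0.0)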
- #
- # Intersection finders
- #
-
-
- def _line_t_of_pt(s, e, pt):
-     sx, sy = s
-     ex, ey = e
-     px, py = pt
-     if abs(sx - ex) < epsilon and abs(sy - ey) < epsilon:
-         # Line is a point!
-         return -1
-     # Use the largest
-     if abs(sx - ex) > abs(sy - ey):
-         return (px - sx) / (ex - sx)
-     else:
-         return (py - sy) / (ey - sy)
-
-
- def _both_points_are_on_same_side_of_origin(a, b, origin):
-     xDiff = (a[0] - origin[0]) * (b[0] - origin[0])
-     yDiff = (a[1] - origin[1]) * (b[1] - origin[1])
-     return not (xDiff <= 0.0 and yDiff <= 0.0)
-
-
- def lineLineIntersections(s1, e1, s2, e2):
-     """Finds intersections between two line segments.
-
-     Args:
-         s1, e1: Coordinates of the first line as 2D tuples.
-         s2, e2: Coordinates of the second line as 2D tuples.
-
-     Returns:
-         A list of ``Intersection`` objects, each object having ``pt``, ``t1``
-         and ``t2`` attributes containing the intersection point, time on first
-         segment and time on second segment respectively.
-
-     Examples::
-
-         >>> a = lineLineIntersections( (310,389), (453, 222), (289, 251), (447, 367))
-         >>> len(a)
-         1
-         >>> intersection = a[0]
-         >>> intersection.pt
-         (374.44882952482897, 313.73458370177315)
-         >>> (intersection.t1, intersection.t2)
-         (0.45069111555824465, 0.5408153767394238)
-     """
-     s1x, s1y = s1
-     e1x, e1y = e1
-     s2x, s2y = s2
-     e2x, e2y = e2
-     if (
-         math.isclose(s2x, e2x) and math.isclose(s1x, e1x) and not math.isclose(s1x, s2x)
-     ):  # Parallel vertical
-         return []
-     if (
-         math.isclose(s2y, e2y) and math.isclose(s1y, e1y) and not math.isclose(s1y, s2y)
-     ):  # Parallel horizontal
-         return []
-     if math.isclose(s2x, e2x) and math.isclose(s2y, e2y):  # Line segment is tiny
-         return []
-     if math.isclose(s1x, e1x) and math.isclose(s1y, e1y):  # Line segment is tiny
-         return []
-     if math.isclose(e1x, s1x):
-         x = s1x
-         slope34 = (e2y - s2y) / (e2x - s2x)
-         y = slope34 * (x - s2x) + s2y
-         pt = (x, y)
-         return [
-             Intersection(
-                 pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt)
-             )
-         ]
-     if math.isclose(s2x, e2x):
-         x = s2x
-         slope12 = (e1y - s1y) / (e1x - s1x)
-         y = slope12 * (x - s1x) + s1y
-         pt = (x, y)
-         return [
-             Intersection(
-                 pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt)
-             )
-         ]
-
-     slope12 = (e1y - s1y) / (e1x - s1x)
-     slope34 = (e2y - s2y) / (e2x - s2x)
-     if math.isclose(slope12, slope34):
-         return []
-     x = (slope12 * s1x - s1y - slope34 * s2x + s2y) / (slope12 - slope34)
-     y = slope12 * (x - s1x) + s1y
-     pt = (x, y)
-     if _both_points_are_on_same_side_of_origin(
-         pt, e1, s1
-     ) and _both_points_are_on_same_side_of_origin(pt, s2, e2):
-         return [
-             Intersection(
-                 pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt)
-             )
-         ]
-     return []
-
-
- def _alignment_transformation(segment):
-     # Returns a transformation which aligns a segment horizontally at the
-     # origin. Apply this transformation to curves and root-find to find
-     # intersections with the segment.
-     start = segment[0]
-     end = segment[-1]
-     angle = math.atan2(end[1] - start[1], end[0] - start[0])
-     return Identity.rotate(-angle).translate(-start[0], -start[1])
-
-
- def _curve_line_intersections_t(curve, line):
-     aligned_curve = _alignment_transformation(line).transformPoints(curve)
-     if len(curve) == 3:
-         a, b, c = calcQuadraticParameters(*aligned_curve)
-         intersections = solveQuadratic(a[1], b[1], c[1])
-     elif len(curve) == 4:
-         a, b, c, d = calcCubicParameters(*aligned_curve)
-         intersections = solveCubic(a[1], b[1], c[1], d[1])
-     else:
-         raise ValueError("Unknown curve degree")
-     return sorted(i for i in intersections if 0.0 <= i <= 1)
-
-
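Aligning the line onto the x-axis reduces curve/line intersection to root-finding on the curve's y-polynomial alone. A sketch (added illustration, reusing the data from the doctest below):

    >>> curve = [(100, 240), (30, 60), (210, 230), (160, 30)]
    >>> line = [(25, 260), (230, 20)]
    >>> len(_curve_line_intersections_t(curve, line))
    3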
- def curveLineIntersections(curve, line):
-     """Finds intersections between a curve and a line.
-
-     Args:
-         curve: List of coordinates of the curve segment as 2D tuples.
-         line: List of coordinates of the line segment as 2D tuples.
-
-     Returns:
-         A list of ``Intersection`` objects, each object having ``pt``, ``t1``
-         and ``t2`` attributes containing the intersection point, time on first
-         segment and time on second segment respectively.
-
-     Examples::
-         >>> curve = [ (100, 240), (30, 60), (210, 230), (160, 30) ]
-         >>> line = [ (25, 260), (230, 20) ]
-         >>> intersections = curveLineIntersections(curve, line)
-         >>> len(intersections)
-         3
-         >>> intersections[0].pt
-         (84.9000930760723, 189.87306176459828)
-     """
-     if len(curve) == 3:
-         pointFinder = quadraticPointAtT
-     elif len(curve) == 4:
-         pointFinder = cubicPointAtT
-     else:
-         raise ValueError("Unknown curve degree")
-     intersections = []
-     for t in _curve_line_intersections_t(curve, line):
-         pt = pointFinder(*curve, t)
-         # Back-project the point onto the line, to avoid problems with
-         # numerical accuracy in the case of vertical and horizontal lines
-         line_t = _line_t_of_pt(*line, pt)
-         pt = linePointAtT(*line, line_t)
-         intersections.append(Intersection(pt=pt, t1=t, t2=line_t))
-     return intersections
-
-
- def _curve_bounds(c):
-     if len(c) == 3:
-         return calcQuadraticBounds(*c)
-     elif len(c) == 4:
-         return calcCubicBounds(*c)
-     raise ValueError("Unknown curve degree")
-
-
- def _split_segment_at_t(c, t):
-     if len(c) == 2:
-         s, e = c
-         midpoint = linePointAtT(s, e, t)
-         return [(s, midpoint), (midpoint, e)]
-     if len(c) == 3:
-         return splitQuadraticAtT(*c, t)
-     elif len(c) == 4:
-         return splitCubicAtT(*c, t)
-     raise ValueError("Unknown curve degree")
-
-
- def _curve_curve_intersections_t(
-     curve1, curve2, precision=1e-3, range1=None, range2=None
- ):
-     bounds1 = _curve_bounds(curve1)
-     bounds2 = _curve_bounds(curve2)
-
-     if not range1:
-         range1 = (0.0, 1.0)
-     if not range2:
-         range2 = (0.0, 1.0)
-
-     # If bounds don't intersect, go home
-     intersects, _ = sectRect(bounds1, bounds2)
-     if not intersects:
-         return []
-
-     def midpoint(r):
-         return 0.5 * (r[0] + r[1])
-
-     # If they do overlap but they're tiny, approximate
-     if rectArea(bounds1) < precision and rectArea(bounds2) < precision:
-         return [(midpoint(range1), midpoint(range2))]
-
-     c11, c12 = _split_segment_at_t(curve1, 0.5)
-     c11_range = (range1[0], midpoint(range1))
-     c12_range = (midpoint(range1), range1[1])
-
-     c21, c22 = _split_segment_at_t(curve2, 0.5)
-     c21_range = (range2[0], midpoint(range2))
-     c22_range = (midpoint(range2), range2[1])
-
-     found = []
-     found.extend(
-         _curve_curve_intersections_t(
-             c11, c21, precision, range1=c11_range, range2=c21_range
-         )
-     )
-     found.extend(
-         _curve_curve_intersections_t(
-             c12, c21, precision, range1=c12_range, range2=c21_range
-         )
-     )
-     found.extend(
-         _curve_curve_intersections_t(
-             c11, c22, precision, range1=c11_range, range2=c22_range
-         )
-     )
-     found.extend(
-         _curve_curve_intersections_t(
-             c12, c22, precision, range1=c12_range, range2=c22_range
-         )
-     )
-
-     unique_key = lambda ts: (int(ts[0] / precision), int(ts[1] / precision))
-     seen = set()
-     unique_values = []
-
-     for ts in found:
-         key = unique_key(ts)
-         if key in seen:
-             continue
-         seen.add(key)
-         unique_values.append(ts)
-
-     return unique_values
-
-
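The recursion above is plain bounding-box divide and conquer: both curves are split at t = 0.5, pairs whose boxes do not overlap are discarded, and the midpoint of the surviving parameter ranges is emitted once both boxes shrink below `precision`; near-duplicate (t1, t2) pairs are then collapsed by bucketing them onto a grid of cell size `precision`.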
- def curveCurveIntersections(curve1, curve2):
-     """Finds intersections between a curve and a curve.
-
-     Args:
-         curve1: List of coordinates of the first curve segment as 2D tuples.
-         curve2: List of coordinates of the second curve segment as 2D tuples.
-
-     Returns:
-         A list of ``Intersection`` objects, each object having ``pt``, ``t1``
-         and ``t2`` attributes containing the intersection point, time on first
-         segment and time on second segment respectively.
-
-     Examples::
-         >>> curve1 = [ (10,100), (90,30), (40,140), (220,220) ]
-         >>> curve2 = [ (5,150), (180,20), (80,250), (210,190) ]
-         >>> intersections = curveCurveIntersections(curve1, curve2)
-         >>> len(intersections)
-         3
-         >>> intersections[0].pt
-         (81.7831487395506, 109.88904552375288)
-     """
-     intersection_ts = _curve_curve_intersections_t(curve1, curve2)
-     return [
-         Intersection(pt=segmentPointAtT(curve1, ts[0]), t1=ts[0], t2=ts[1])
-         for ts in intersection_ts
-     ]
-
-
- def segmentSegmentIntersections(seg1, seg2):
-     """Finds intersections between two segments.
-
-     Args:
-         seg1: List of coordinates of the first segment as 2D tuples.
-         seg2: List of coordinates of the second segment as 2D tuples.
-
-     Returns:
-         A list of ``Intersection`` objects, each object having ``pt``, ``t1``
-         and ``t2`` attributes containing the intersection point, time on first
-         segment and time on second segment respectively.
-
-     Examples::
-         >>> curve1 = [ (10,100), (90,30), (40,140), (220,220) ]
-         >>> curve2 = [ (5,150), (180,20), (80,250), (210,190) ]
-         >>> intersections = segmentSegmentIntersections(curve1, curve2)
-         >>> len(intersections)
-         3
-         >>> intersections[0].pt
-         (81.7831487395506, 109.88904552375288)
-         >>> curve3 = [ (100, 240), (30, 60), (210, 230), (160, 30) ]
-         >>> line = [ (25, 260), (230, 20) ]
-         >>> intersections = segmentSegmentIntersections(curve3, line)
-         >>> len(intersections)
-         3
-         >>> intersections[0].pt
-         (84.9000930760723, 189.87306176459828)
-
-     """
-     # Arrange by degree
-     swapped = False
-     if len(seg2) > len(seg1):
-         seg2, seg1 = seg1, seg2
-         swapped = True
-     if len(seg1) > 2:
-         if len(seg2) > 2:
-             intersections = curveCurveIntersections(seg1, seg2)
-         else:
-             intersections = curveLineIntersections(seg1, seg2)
-     elif len(seg1) == 2 and len(seg2) == 2:
-         intersections = lineLineIntersections(*seg1, *seg2)
-     else:
-         raise ValueError("Couldn't work out which intersection function to use")
-     if not swapped:
-         return intersections
-     return [Intersection(pt=i.pt, t1=i.t2, t2=i.t1) for i in intersections]
-
-
- def _segmentrepr(obj):
-     """
-     >>> _segmentrepr([1, [2, 3], [], [[2, [3, 4], [0.1, 2.2]]]])
-     '(1, (2, 3), (), ((2, (3, 4), (0.1, 2.2))))'
-     """
-     try:
-         it = iter(obj)
-     except TypeError:
-         return "%g" % obj
-     else:
-         return "(%s)" % ", ".join(_segmentrepr(x) for x in it)
-
-
- def printSegments(segments):
-     """Helper for the doctests, displaying each segment in a list of
-     segments on a single line as a tuple.
-     """
-     for segment in segments:
-         print(_segmentrepr(segment))
-
-
- if __name__ == "__main__":
-     import sys
-     import doctest
-
-     sys.exit(doctest.testmod().failed)
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/_version.py DELETED
@@ -1,21 +0,0 @@
-
- # This file was generated by 'versioneer.py' (0.20) from
- # revision-control system data, or from the parent directory name of an
- # unpacked source archive. Distribution tarballs contain a pre-generated copy
- # of this file.
-
- import json
-
- version_json = '''
- {
-  "date": "2023-06-09T13:30:57-0400",
-  "dirty": false,
-  "error": null,
-  "full-revisionid": "9a1e624022f3ad39071de5b17bafa23214b8662b",
-  "version": "2023.6.0"
- }
- '''  # END VERSION_JSON
-
-
- def get_versions():
-     return json.loads(version_json)
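A hedged usage sketch (added illustration, not from the file): the embedded JSON is simply parsed on demand.

    >>> get_versions()["version"]
    '2023.6.0'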