parquet-converter committed on
Commit c2ebc44 · 1 Parent(s): 1c0b74d

Update parquet files (step 43 of 397)

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/!!LINK!! Download Archexteriors Vol 18 Torrent 33.md +0 -62
  2. spaces/1gistliPinn/ChatGPT4/Examples/Adjustment Program For Epson Pm245 467.md +0 -6
  3. spaces/1gistliPinn/ChatGPT4/Examples/AutoCAD LT 2010 64bit Keygen Xforce __TOP__.md +0 -6
  4. spaces/1gistliPinn/ChatGPT4/Examples/Avg Tuneup 2019 Full V19.1 Build 1158 Multilingual Key Free Download BETTER.md +0 -6
  5. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bus Simulator 2023 APK OBB Play with Friends Online and Chat in Coop Bus Routes.md +0 -148
  6. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Candy Crush Saga APK The Most Downloaded Game on Google Play.md +0 -117
  7. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download FIFA 16 Ultimate Team Mod APK and Experience the Most Realistic Football Ever.md +0 -139
  8. spaces/1phancelerku/anime-remove-background/Enjoy the New Features of Lokicraft 1.18 0 APK on Your Android Device.md +0 -125
  9. spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/docs/install.md +0 -51
  10. spaces/4Taps/SadTalker/src/face3d/util/preprocess.py +0 -103
  11. spaces/AIFILMS/generate_human_motion/VQ-Trans/dataset/dataset_VQ.py +0 -109
  12. spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/models/diffusion/classifier.py +0 -267
  13. spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/CLAP/CLAPWrapper.py +0 -257
  14. spaces/AIKey/facetofacechat/style.css +0 -28
  15. spaces/AIatUIUC/CodeLATS/generators/model.py +0 -120
  16. spaces/ARTeLab/ARTeLab-SummIT/style.css +0 -38
  17. spaces/Abhilashvj/planogram-compliance/utils/loggers/clearml/README.md +0 -230
  18. spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/trimSuffix.ts +0 -6
  19. spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/login/callback/updateUser.spec.ts +0 -143
  20. spaces/AchyuthGamer/OpenGPT/g4f/Provider/needs_auth/__init__.py +0 -6
  21. spaces/Aditya9790/yolo7-object-tracking/scripts/get_coco.sh +0 -22
  22. spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/horizontal_tool.py +0 -89
  23. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/DelayCallMethods.js +0 -17
  24. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/GetChildrenHeight.js +0 -24
  25. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/alt_diffusion.md +0 -47
  26. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_2.md +0 -139
  27. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/shap_e/pipeline_shap_e.py +0 -363
  28. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/spectrogram_diffusion/continous_encoder.py +0 -92
  29. spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_769x769_80k_cityscapes.py +0 -2
  30. spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/bootstrap/bootstrap.min.js +0 -7
  31. spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/util/visualizer.py +0 -233
  32. spaces/Awiny/Image2Paragraph/models/segment_models/edit_anything_model.py +0 -62
  33. spaces/Bart92/RVC_HF/demucs/train.py +0 -127
  34. spaces/Benson/text-generation/Examples/Amor.ly App Descargar Apk.md +0 -62
  35. spaces/BetterAPI/BetterChat/src/lib/stores/errors.ts +0 -7
  36. spaces/BetterAPI/BetterChat_new/src/lib/stores/pendingMessage.ts +0 -3
  37. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/metadata/pkg_resources.py +0 -270
  38. spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/build_clib.py +0 -101
  39. spaces/CVPR/LIVE/pybind11/include/pybind11/detail/descr.h +0 -100
  40. spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/iter_swap.h +0 -44
  41. spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/normalization/main.py +0 -43
  42. spaces/CVPR/lama-example/models/ade20k/segm_lib/utils/th.py +0 -41
  43. spaces/CikeyQI/Yunzai/Yunzai/plugins/example/进群退群通知.js +0 -64
  44. spaces/Cippppy/RegressionVisualization/app.py +0 -271
  45. spaces/ClearLove443/Robby-chatbot/README.md +0 -13
  46. spaces/Covert1107/sd-diffusers-webui/README.md +0 -14
  47. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/FitsImagePlugin.py +0 -73
  48. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/arrayTools.py +0 -422
  49. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/qu2cu/qu2cu.c +0 -0
  50. spaces/DakMak/gradio-start/index.html +0 -2
spaces/1acneusushi/gradio-2dmoleculeeditor/data/!!LINK!! Download Archexteriors Vol 18 Torrent 33.md DELETED
@@ -1,62 +0,0 @@
-
- <h1>Download Archexteriors Vol 18 Torrent 33: A Guide for 3D Artists</h1>
- <p>If you are a 3D artist who is looking for some high-quality and realistic architectural templates for your projects, you might be interested in downloading Archexteriors Vol 18 torrent 33. This is a collection of ten fully modeled and textured 3D exteriors with complete lighting and three cameras setups for every scene, created by Evermotion, a leading company in the field of 3D modeling and rendering. In this article, we will show you what Archexteriors Vol 18 is, what are its features and benefits, how to download it using torrent, how to use it in your 3D projects, and some FAQs that you might have. Let's get started!</p>
- <h2>Download Archexteriors Vol 18 Torrent 33</h2><br /><p><b><b>Download</b> &#10040;&#10040;&#10040; <a href="https://byltly.com/2uKwji">https://byltly.com/2uKwji</a></b></p><br /><br />
- <h2>What is Archexteriors Vol 18?</h2>
- <p>Archexteriors Vol 18 is a collection of architectural templates that consists of ten fully modeled and textured 3D exteriors with complete lighting and three cameras setups for every scene. It is part of the Archexteriors series by Evermotion, which offers various collections of outdoor environments for different purposes and styles. You can find more information about Archexteriors Vol 18 on its official website or on Trinity3D, where you can also purchase it for $60.</p>
- <h2>What are the features and benefits of Archexteriors Vol 18?</h2>
- <p>Archexteriors Vol 18 has many features and benefits that make it a great choice for any 3D artist who wants to create stunning outdoor scenes. Here are some of them:</p>
- <ul>
- <li><b>Ten fully modeled and textured 3D exteriors with complete lighting and three cameras setups for every scene.</b> This means that you don't have to worry about modeling, texturing, lighting, or setting up cameras for your scenes. You just need to put your building model in the scene and click "render". You can also adjust the lighting and camera settings according to your preferences.</li>
- <li><b>Scenes perfect for villas, houses, and small and medium buildings, mostly with natural surroundings.</b> This means that you can use Archexteriors Vol 18 for various types of projects, whether you want to create a cozy villa in the countryside, a modern house in the suburbs, or a small office building in the city. The scenes have natural surroundings such as trees, grass, flowers, rocks, water, etc., which add realism and beauty to your scenes.</li>
- <li><b>Prepared for V-Ray 2.0 with 3ds Max 2010.</b> This means that you can use Archexteriors Vol 18 with one of the most popular rendering engines and software in the industry. V-Ray is known for its realistic and fast rendering, and 3ds Max is a powerful and versatile 3D software that allows you to model, animate, and edit your scenes. You can also use Archexteriors Vol 18 with other software and engines, but you might need to convert the files or tweak the settings.</li>
- <li><b>Vegetation in the scenes is a V-Ray proxy and only usable with V-Ray and 3ds Max.</b> This means that the trees, grass, flowers, and other plants in the scenes are not actual geometry, but proxies that are loaded from external files during rendering. This reduces the memory usage and improves the performance of your scenes. However, this also means that you can only use the vegetation with V-Ray and 3ds Max, and not with other software or engines.</li>
- </ul>
- <h2>How to download Archexteriors Vol 18 torrent 33?</h2>
- <p>If you want to download Archexteriors Vol 18 torrent 33, you need to follow these steps:</p>
- <ol>
- <li><b>Find a reliable torrent website that offers Archexteriors Vol 18 torrent 33.</b> There are many torrent websites on the internet, but not all of them are trustworthy or safe. Some of them might have fake or malicious files, or expose you to legal risks. Therefore, you need to do some research and find a reputable torrent website that has Archexteriors Vol 18 torrent 33 available. Some examples of popular torrent websites are The Pirate Bay, RARBG, and 1337x. However, we do not endorse or recommend any of these websites, and you should use them at your own risk.</li>
- <li><b>Download a torrent client that can handle Archexteriors Vol 18 torrent 33.</b> A torrent client is a software that allows you to download and upload files using the BitTorrent protocol. You need a torrent client to download Archexteriors Vol 18 torrent 33 from the torrent website. There are many torrent clients to choose from, but some of the most popular ones are uTorrent, BitTorrent, and qBittorrent. Again, we do not endorse or recommend any of these software, and you should use them at your own discretion.</li>
- <li><b>Open the torrent file or magnet link of Archexteriors Vol 18 torrent 33 with your torrent client.</b> Once you have found a reliable torrent website and downloaded a torrent client, you can proceed to download Archexteriors Vol 18 torrent 33. You can either download the torrent file, which is a small file that contains information about the files you want to download, or use the magnet link, which is a URL that does the same thing without requiring a file. You can then open the torrent file or magnet link with your torrent client, and it will start downloading Archexteriors Vol 18 torrent 33 to your computer.</li>
- <li><b>Wait for the download to finish and verify the files.</b> Depending on the size of Archexteriors Vol 18 torrent 33, your internet speed, and the number of seeders and leechers (people who have or want the files), the download might take some time. You can check the progress and status of your download on your torrent client. Once the download is complete, you should verify the files to make sure they are not corrupted or infected. You can use a file manager or an antivirus software to do this.</li>
- <li><b>Extract the files and install Archexteriors Vol 18 on your computer.</b> After verifying the files, you need to extract them from the compressed folder they are in. You can use a software like WinRAR or 7-Zip to do this. Then, you need to install Archexteriors Vol 18 on your computer by following the instructions provided by Evermotion. You might need to enter a license key or activate the product online.</li>
- </ol>
- <h2>How to use Archexteriors Vol 18 in your 3D projects?</h2>
- <p>Now that you have downloaded and installed Archexteriors Vol 18 on your computer, you can start using it in your 3D projects. Here are some steps on how to do that:</p>
- <p></p>
- <ol>
- <li><b>Open your 3D software and import one of the scenes from Archexteriors Vol 18.</b> You can use any 3D software that supports V-Ray and 3ds Max files, such as Blender, Maya, or SketchUp. However, for optimal results, we recommend using V-Ray with 3ds Max. To import one of the scenes from Archexteriors Vol 18, you need to go to File > Import > Merge and select one of the . max files from the Archexteriors Vol 18 folder. You will see a list of objects and materials that you can merge into your scene. You can select all of them or only the ones you need.</li>
- <li><b>Adjust the scale, position, and orientation of the scene to fit your project.</b> Depending on the size and dimensions of your project, you might need to adjust the scale, position, and orientation of the scene from Archexteriors Vol 18. You can use the tools and commands in your 3D software to do this. For example, in 3ds Max, you can use the Scale, Move, and Rotate tools, or the Transform Type-In dialog box.</li>
- <li><b>Replace the placeholder building model with your own building model.</b> The scenes from Archexteriors Vol 18 come with a placeholder building model that you can replace with your own building model. To do this, you need to delete or hide the placeholder model, and import or merge your own model into the scene. You can then adjust the scale, position, and orientation of your model to match the scene. You can also apply materials and textures to your model if needed.</li>
- <li><b>Customize the lighting and camera settings of the scene according to your preferences.</b> The scenes from Archexteriors Vol 18 come with complete lighting and three cameras setups for every scene. However, you can customize them according to your preferences. You can change the intensity, color, direction, and type of the lights, or add new lights if needed. You can also change the focal length, aperture, exposure, and angle of the cameras, or add new cameras if needed. You can use the tools and commands in your 3D software to do this. For example, in 3ds Max, you can use the Light Lister and Camera Lister dialogs.</li>
- <li><b>Render the scene and save the image file.</b> Once you are satisfied with your scene, you can render it using V-Ray or any other rendering engine that supports V-Ray materials and proxies. You can adjust the render settings according to your desired quality and speed. You can then save the image file in any format that you want. You can use the tools and commands in your 3D software to do this. For example, in 3ds Max, you can use the Render Setup dialog and the Save Image dialog.</li>
- </ol>
- <h2>How to optimize the performance and quality of Archexteriors Vol 18 scenes?</h2>
- <p>Archexteriors Vol 18 scenes are designed to be realistic and detailed, but they can also be demanding on your computer resources and rendering time. Therefore, you might want to optimize them for better performance and quality. Here are some tips and tricks on how to do that:</p>
- <ul>
- <li><b>Use proxies for vegetation and other complex objects.</b> As mentioned before, the vegetation in Archexteriors Vol 18 scenes is a V-Ray proxy that reduces memory usage and improves performance. You can also use proxies for other complex objects that are not essential for your scene, such as cars, furniture, or people. You can create proxies using V-Ray or other software that supports proxy creation. For example, in 3ds Max, you can use the V-Ray Proxy Exporter tool.</li>
- <li><b>Use instancing for repeated objects.</b> Instancing is a technique that allows you to create multiple copies of an object without increasing memory usage or rendering time. You can use instancing for repeated objects that are identical or similar in your scene, such as trees, grass blades, or bricks. You can create instances using V-Ray or other software that supports instancing. For example, in 3ds Max, you can use the Instance tool or the V-Ray Scatter tool.</li>
- <li><b>Use low-poly models for distant objects.</b> Low-poly models are models that have fewer polygons and less detail than high-poly models. You can use low-poly models for distant objects that are not visible or important for your scene, such as buildings or mountains in the background. This will reduce memory usage and rendering time without affecting quality. You can create low-poly models using V-Ray or other software that supports polygon reduction. For example , in 3ds Max, you can use the ProOptimizer modifier or the V-Ray Mesh Exporter tool.</li>
- <li><b>Use adaptive subdivision for displacement maps.</b> Displacement maps are textures that add depth and detail to the surface of an object by displacing its geometry. However, they can also increase memory usage and rendering time if they are applied to the whole object uniformly. You can use adaptive subdivision for displacement maps, which means that the object is subdivided only where the displacement map requires more detail. This will reduce memory usage and rendering time without affecting quality. You can use adaptive subdivision for displacement maps using V-Ray or other software that supports it. For example, in 3ds Max, you can use the V-Ray Displacement modifier or the V-Ray Material.</li>
- <li><b>Use render elements for post-processing.</b> Render elements are separate images that contain information about different aspects of your scene, such as lighting, shadows, reflections, or colors. You can use render elements for post-processing, which means that you can edit and adjust your scene after rendering without having to re-render it. This will save you time and allow you to fine-tune your scene to your liking. You can use render elements for post-processing using V-Ray or other software that supports it. For example, in 3ds Max, you can use the Render Elements dialog or the V-Ray Frame Buffer.</li>
- </ul>
- <h2>How to create stunning projects with Archexteriors Vol 18?</h2>
- <p>Archexteriors Vol 18 is a great tool for creating stunning projects with realistic and detailed outdoor scenes. However, you can also enhance your projects with some creativity and imagination. Here are some examples of projects made with Archexteriors Vol 18 that might inspire you:</p>
- <ul>
- <li><b>A cozy villa in the countryside.</b> This project uses scene 1 from Archexteriors Vol 18, which features a villa surrounded by trees and flowers. The project adds some personal touches to the scene, such as a swimming pool, a barbecue grill, and some outdoor furniture. The project also uses warm and soft colors for the lighting and materials, creating a cozy and inviting atmosphere.</li>
- <li><b>A modern house in the suburbs.</b> This project uses scene 5 from Archexteriors Vol 18, which features a house with a minimalist design and a geometric shape. The project modifies the scene by changing the color and texture of the house, adding some windows and doors, and placing some cars and bikes in the driveway. The project also uses cool and bright colors for the lighting and materials, creating a modern and sleek look.</li>
- <li><b>A small office building in the city.</b> This project uses scene 8 from Archexteriors Vol 18, which features a building with a brick facade and a metal roof. The project transforms the scene by adding some signs and logos to the building, placing some people and plants on the balcony, and inserting some skyscrapers in the background. The project also uses neutral and realistic colors for the lighting and materials, creating a professional and urban feel.</li>
- </ul>
- <h2>Conclusion</h2>
- <p>In conclusion, Archexteriors Vol 18 is a collection of architectural templates that consists of ten fully modeled and textured 3D exteriors with complete lighting and three cameras setups for every scene. It is a great choice for any 3D artist who wants to create stunning outdoor scenes for villas, houses, and small and medium buildings, mostly with natural surroundings. It is prepared for V-Ray 2.0 with 3ds Max 2010, but it can also be used with other software and engines. It can be downloaded using torrent from various websites, but you need to be careful about the reliability and safety of these websites. It can be used in your 3D projects by importing, customizing, and rendering the scenes in your 3D software. It can also be optimized for better performance and quality by using proxies, instancing, low-poly models, adaptive subdivision, and render elements. It can also be enhanced with some creativity and imagination by adding your own building models, changing the lighting and camera settings, or modifying the materials and textures.</p>
- <h2>FAQs</h2>
- <p>Here are some FAQs that you might have about Archexteriors Vol 18:</p>
- <ol>
- <li><b>Is Archexteriors Vol 18 compatible with other versions of V-Ray or 3ds Max?</b> Yes, Archexteriors Vol 18 is compatible with other versions of V-Ray or 3ds Max, but you might need to convert or tweak some files or settings to make them work properly.</li>
- <li><b>Can I use Archexteriors Vol 18 for commercial purposes?</b> Yes, you can use Archexteriors Vol 18 for commercial purposes as long as you have purchased a license from Evermotion or Trinity 3D, and you have followed their terms and conditions. You can find more information about the license agreement on their websites .</li>
- <li><b>Can I modify or edit Archexteriors Vol 18 scenes?</b> Yes, you can modify or edit Archexteriors Vol 18 scenes as much as you want, as long as you do not resell or redistribute them. You can change the lighting, camera, material, texture, or geometry of the scenes, or add your own objects or models to them.</li>
- <li><b>Can I use Archexteriors Vol 18 with other Archexteriors collections?</b> Yes, you can use Archexteriors Vol 18 with other Archexteriors collections, as long as they are compatible with V-Ray and 3ds Max. You can mix and match different scenes from different collections, or use elements from one collection in another scene. However, you might need to adjust the scale, position, orientation, lighting, camera, material, texture, or geometry of the scenes or elements to make them fit together.</li>
- <li><b>Can I get support or help for Archexteriors Vol 18?</b> Yes, you can get support or help for Archexteriors Vol 18 from Evermotion or Trinity3D, depending on where you purchased it from. You can contact them via email, phone, or online chat. You can also find some tutorials, tips, and FAQs on their websites or on their YouTube channels .</li>
- </ol></p> b2dd77e56b<br />
- <br />
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Adjustment Program For Epson Pm245 467.md DELETED
@@ -1,6 +0,0 @@
- <h2>Adjustment Program For Epson Pm245 467</h2><br /><p><b><b>DOWNLOAD</b> &#10040; <a href="https://imgfil.com/2uy1XI">https://imgfil.com/2uy1XI</a></b></p><br /><br />
-
- 4fefd39f24<br />
- <br />
- <br />
- <p></p>
spaces/1gistliPinn/ChatGPT4/Examples/AutoCAD LT 2010 64bit Keygen Xforce __TOP__.md DELETED
@@ -1,6 +0,0 @@
- <h2>AutoCAD LT 2010 64bit Keygen Xforce</h2><br /><p><b><b>Download File</b> ->>->>->> <a href="https://imgfil.com/2uy1SU">https://imgfil.com/2uy1SU</a></b></p><br /><br />
- <br />
- dll to xforce keygen AutoCAD 2009 64 bit. Find AutoCAD and much more at Novedge buy online or CallAmazon. com: buy AutoCAD LT. Cause ... 1fdad05405<br />
- <br />
- <br />
- <p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Avg Tuneup 2019 Full V19.1 Build 1158 Multilingual Key Free Download BETTER.md DELETED
@@ -1,6 +0,0 @@
- <h2>Avg tuneup 2019 full v19.1 Build 1158 Multilingual Key Free Download</h2><br /><p><b><b>Download Zip</b> &#10027;&#10027;&#10027; <a href="https://imgfil.com/2uy28E">https://imgfil.com/2uy28E</a></b></p><br /><br />
-
- Free Download. tuneup ... AVG PC TuneUp v19.1 build 831 Multilingual » application: ✓10 months59 MB20. ... Download AVG TuneUp 19 1 Build 995 Final Full Version ... AVG TuneUp V19 1 Build 1158 Serial Key Working. ... AVG PC TuneUp 2019 v19.1.1209 With Serial Key | 4HowCrack 25 Jul 2019 . 4d29de3e1b<br />
- <br />
- <br />
- <p></p>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bus Simulator 2023 APK OBB Play with Friends Online and Chat in Coop Bus Routes.md DELETED
@@ -1,148 +0,0 @@
1
-
2
- <h1>Bus Simulator 2023 APK + OBB Download: How to Install and Play the Latest Bus Driving Game</h1>
3
- <p>Do you love driving buses and transporting passengers in realistic environments? If yes, then you might want to check out Bus Simulator 2023, the latest bus simulation game for Android and PC. In this article, we will show you how to download and install Bus Simulator 2023 APK + OBB on your devices, as well as give you a brief review of the game's features, pros and cons.</p>
4
- <h2>What is Bus Simulator 2023?</h2>
5
- <p>Bus Simulator 2023 is a bus driving game that puts you in the driver's seat and lets you become a real bus driver. The game features detailed maps all over the world, modern buses with realistic interiors and a groundbreaking 1:1 physics engine. You can drive various types of buses, such as diesel, hybrid, electric, articulated, coach and school buses, and customize them as you wish. You can also explore different cities from around the world in career mode, freeride mode or online multiplayer mode with friends.</p>
6
- <h2>bus simulator 2023 apk + obb download</h2><br /><p><b><b>Download</b> &gt; <a href="https://urlin.us/2uSVSa">https://urlin.us/2uSVSa</a></b></p><br /><br />
7
- <h3>Features of Bus Simulator 2023</h3>
8
- <p>Some of the features that make Bus Simulator 2023 stand out from other bus simulation games are:</p>
9
- <ul>
10
- <li><strong>Realistic intracity and outside of city maps:</strong> You can drive your bus in various locations, such as United States of America (San Francisco and Texas), South America (Buenos Aires), Europe (Germany, Spain, Prague, St. Petersburg), Dubai, Shanghai and more.</li>
11
- <li><strong>Multiple Regular/Diesel, Hybrid, Electric, and Articulated buses to choose from:</strong> You can pick from a wide variety of buses that have different characteristics, such as speed, fuel consumption, capacity, etc. You can also customize your bus with paint, accessories, body parts, air conditioning, flags, decals and many more.</li>
12
- <li><strong>Career, Free-ride and Multiplayer modes:</strong> You can play the game in different modes depending on your preference. In career mode, you have to complete all the routes assigned to you by your bus company. In free-ride mode, you can drive your bus anywhere you want without any restrictions. In multiplayer mode, you can join or create online sessions with other players and cooperate or compete with them.</li>
13
- <li><strong>Detailed and customizable interiors:</strong> You can interact with various elements inside your bus, such as opening/closing doors, turning on/off lights, adjusting mirrors, etc. You can also change the interior design of your bus with different seats, steering wheels, dashboards, etc.</li>
14
- <li><strong>Intelligent traffic system and passengers:</strong> You have to deal with realistic traffic situations, such as traffic jams, accidents, roadworks, etc. You also have to interact with your passengers, who have different personalities, moods and requests. You have to follow the traffic rules, respect the speed limits, use the indicators, etc.</li>
15
- </ul>
16
- <h3>System Requirements for Bus Simulator 2023</h3>
17
- <p>Before you download and install Bus Simulator 2023 APK + OBB on your device, you should make sure that your device meets the minimum system requirements for the game. According to the official website of the game, the minimum system requirements are:</p>
18
- <table>
19
- <tr>
20
- <th>Device</th>
21
- <th>OS</th>
22
- <th>RAM</th>
23
- <th>Storage</th>
24
- <th>Processor</th>
25
- <th>Graphics</th>
26
- </tr>
27
- <tr>
28
- <td>Android</td>
29
- <td>Android 5.0 or higher</td>
30
- <td>2 GB or more</td>
31
- <td>1 GB or more</td>
32
- <td>Quad-core 1.5 GHz or higher</td>
33
- <td>Mali-T720 MP2 or higher</td>
34
- </tr>
35
- <tr>
36
- <td>PC Windows</td>
37
- <td>Windows 7 or higher</td>
38
- <td>4 GB or more</td>
39
- <td>2 GB or more</td>
40
- <td>Dual-core 2.4 GHz or higher</td>
41
- <td>NVIDIA GeForce GTX 550 Ti or higher</td>
42
- </tr>
43
- </table>
44
- <p>If your device meets these requirements, you can proceed to download and install Bus Simulator 2023 APK + OBB on your device.</p>
45
- <h2>How to Download and Install Bus Simulator 2023 APK + OBB on Android Devices</h2>
46
- <p>To download and install Bus Simulator 2023 APK + OBB on your Android device, you need to follow these steps:</p>
47
- <h3>Step 1: Download the APK and OBB files from a trusted source</h3>
48
- <p>The first step is to download the APK and OBB files of Bus Simulator 2023 from a trusted source. You can find many websites that offer these files for free, but you should be careful about the quality and security of the files. Some websites may contain malware, viruses or fake files that can harm your device or steal your data. Therefore, we recommend you to use a reliable website that has positive reviews and ratings from other users. For example, you can use this link to download the APK and OBB files of Bus Simulator 2023.</p>
49
- <p>bus simulator 2023 android game free download<br />
50
- bus simulator 2023 mod apk + obb unlimited money<br />
51
- bus simulator 2023 latest version apk + obb<br />
52
- bus simulator 2023 realistic driving game download<br />
53
- bus simulator 2023 offline apk + obb<br />
54
- bus simulator 2023 online multiplayer apk + obb<br />
55
- bus simulator 2023 coach and school buses download<br />
56
- bus simulator 2023 next-gen graphics apk + obb<br />
57
- bus simulator 2023 diesel hybrid electric buses download<br />
58
- bus simulator 2023 career mode apk + obb<br />
59
- bus simulator 2023 freeride mode apk + obb<br />
60
- bus simulator 2023 custom bus paint and accessories download<br />
61
- bus simulator 2023 open world maps apk + obb<br />
62
- bus simulator 2023 weather and time of day download<br />
63
- bus simulator 2023 traffic system apk + obb<br />
64
- bus simulator 2023 steering wheel and tilting controls download<br />
65
- bus simulator 2023 Ovidiu Pop game apk + obb<br />
66
- bus simulator 2023 Zuuks Games game apk + obb<br />
67
- bus simulator 2023 yulilnikoy game apk + obb<br />
68
- bus simulator 2023 San Francisco and Texas maps download<br />
69
- bus simulator 2023 Buenos Aires and Germany maps apk + obb<br />
70
- bus simulator 2023 Spain and Prague maps download<br />
71
- bus simulator 2023 St. Petersburg and Dubai maps apk + obb<br />
72
- bus simulator 2023 Shanghai and more maps download<br />
73
- bus simulator 2023 articulated and coach buses apk + obb<br />
74
- bus simulator 2023 school and electric buses download<br />
75
- bus simulator 2023 company management system apk + obb<br />
76
- bus simulator 2023 live chat and coop routes download<br />
77
- bus simulator 2023 leaderboards and achievements apk + obb<br />
78
- bus simulator 2023 APKCombo website download link<br />
79
- how to install bus simulator 2023 apk + obb on android device<br />
80
- how to update bus simulator 2023 apk + obb to latest version<br />
81
- how to play bus simulator 2023 online with friends<br />
82
- how to unlock all buses in bus simulator 2023 mod apk + obb<br />
83
- how to fix bus simulator 2023 not working or crashing issues<br />
84
- how to get more money and xp in bus simulator 2023 game<br />
85
- how to customize your bus in bus simulator 2023 game<br />
86
- how to drive kids to school in bus simulator 2023 game<br />
87
- how to hire drivers and schedule routes in bus simulator 2023 game<br />
88
- how to change weather and time of day in bus simulator 2023 game<br />
89
- best tips and tricks for playing bus simulator 2023 game<br />
90
- best buses to drive in bus simulator 2023 game<br />
91
- best maps to explore in bus simulator 2023 game<br />
92
- best graphics settings for bus simulator 2023 game<br />
93
- best steering wheel and tilting controls for bus simulator 2023 game<br />
94
- best multiplayer modes for playing bus simulator 2023 game with friends<br />
95
- best websites to download bus simulator 2023 apk + obb for free<br />
96
- best reviews and ratings for bus simulator 2023 game on Google Play Store</p>
97
- <h3>Step 2: Enable installation from unknown sources on your device</h3>
- <p>The second step is to enable installation from unknown sources on your device. By default, Android does not allow apps to be installed from sources other than the Google Play Store. To change this, go to Settings > Security > Unknown Sources and toggle it on.</p>
- <h3>Step 3: Install the APK file and extract the OBB file to the Android/obb folder</h3>
- <p>The third step is to install the APK file and place the OBB data in the Android/obb folder on your device. Locate the downloaded APK file with a file manager app and tap it to start the installation, then follow the on-screen instructions. Next, locate the downloaded OBB archive and extract it with a zip extractor app; you will get a folder named com.bus.simulator2023. Copy this folder into the Android/obb folder on your device.</p>
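The folder layout this step produces can be sketched from a shell. This is a local simulation only (it writes to a scratch directory, not a real phone), and the package folder name `com.bus.simulator2023` is taken from the article as-is; the actual download may differ.

```shell
# Simulate the Android/obb layout the game expects, in a scratch directory.
# On a real device the root would be the internal storage (/sdcard).
ROOT="scratch_device"
OBB_DIR="$ROOT/Android/obb/com.bus.simulator2023"
mkdir -p "$OBB_DIR"
# stand-in for the extracted data file the download would provide
touch "$OBB_DIR/main.obb"
ls "$ROOT/Android/obb"
```

If the folder name or nesting is wrong, the game falls back to re-downloading its data, so it is worth checking the path with `ls` before launching.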
- <h3>Step 4: Launch the game and enjoy</h3>
- <p>The final step is to launch the game. Open your app drawer and tap the Bus Simulator 2023 icon. After a loading screen, the main menu appears with different options. Choose your preferred mode, bus, map and settings, and start driving your bus in realistic environments.</p>
- <h2>How to Download and Install Bus Simulator 2023 on PC Windows</h2>
- <p>If you want to play Bus Simulator 2023 on your Windows PC, you need to follow these steps:</p>
- <h3>Step 1: Download and install an Android emulator on your PC</h3>
- <p>The first step is to download and install an Android emulator on your PC. An Android emulator is software that lets you run Android apps and games on Windows. There are many emulators available, such as BlueStacks, NoxPlayer, MEmu and LDPlayer. Choose one that suits your PC's specifications and preferences, download it from its official website or another trusted source, and follow the on-screen instructions to complete the installation.</p>
- <h3>Step 2: Download the APK and OBB files from a trusted source</h3>
- <p>The second step is to download the APK and OBB files of Bus Simulator 2023 from a trusted source. You can use the same link mentioned above for Android devices, or search for another source that offers these files for free. Make sure the source is reliable and safe, and that the files are compatible with your emulator.</p>
- <h3>Step 3: Install the APK file and copy the OBB file to the emulator's Android/obb folder</h3>
- <p>The third step is to install the APK file and copy the OBB data into the emulator's Android/obb folder. Open the emulator, locate the downloaded APK file using the built-in file manager or browser, and tap it to start the installation. Then locate the downloaded OBB archive on your PC and extract it with a zip extractor; you will get a folder named com.bus.simulator2023. Copy this folder into the emulator's Android/obb folder.</p>
- <h3>Step 4: Launch the game and enjoy</h3>
- <p>The final step is to launch the game. Open the emulator's app drawer and tap the Bus Simulator 2023 icon. After a loading screen, the main menu appears; choose your preferred mode, bus, map and settings, and start driving your bus in realistic environments.</p>
- <h2>Bus Simulator 2023 Review: Pros and Cons</h2>
- <p>Now that you know how to download and install Bus Simulator 2023 APK + OBB on your devices, you might be wondering how good the game itself is. Like any other game, Bus Simulator 2023 has its pros and cons, which we discuss below.</p>
- <h3>Pros</h3>
- <p>Some of the pros of Bus Simulator 2023 are:</p>
- <ul>
- <li><strong>Realistic graphics and physics:</strong> The game boasts stunning graphics that make you feel like you are driving a real bus in a real city. It also uses a 1:1 physics engine that simulates realistic bus behavior, such as braking, steering and acceleration.</li>
- <li><strong>Variety of buses and maps:</strong> The game offers a wide range of buses to drive, such as diesel, hybrid, electric, articulated, coach and school buses. Each bus has its own characteristics, such as speed, fuel consumption and capacity. You can also drive in maps from around the world, such as the United States of America (San Francisco and Texas), South America (Buenos Aires), Europe (Germany, Spain, Prague, St. Petersburg), Dubai, Shanghai and more.</li>
- <li><strong>Career, Free-ride and Multiplayer modes:</strong> The game lets you play in different modes depending on your preference. In career mode, you complete the routes assigned to you by your bus company. In free-ride mode, you can drive anywhere without restrictions. In multiplayer mode, you can join or create online sessions with other players and cooperate or compete with them.</li>
- <li><strong>Customizable buses and interiors:</strong> You can customize your bus with paint, accessories, body parts, air conditioning, flags, decals and much more. You can also change the interior design with different seats, steering wheels and dashboards.</li>
- <li><strong>Intelligent traffic system and passengers:</strong> The game features an intelligent traffic system that creates realistic situations, such as traffic jams, accidents and roadworks. You also interact with passengers who have different personalities, moods and requests. You have to follow the traffic rules, respect speed limits and use the indicators, and you have to deal with different weather conditions, such as rain, snow and fog.</li>
- </ul>
- <h3>Cons</h3>
- <p>Some of the cons of Bus Simulator 2023 are:</p>
- <ul>
- <li><strong>Buggy AI behavior and glitches:</strong> The game suffers from bugs and glitches that affect gameplay and immersion. For example, some AI vehicles drive erratically or crash into you or other objects, some passengers get stuck in the bus or disappear, and some textures fail to load properly or flicker.</li>
- <li><strong>Cluttered and overwhelming UI:</strong> The user interface is cluttered and unappealing, which makes it hard to navigate and access the options. The game also shows too many pop-ups and notifications that distract from the driving experience.</li>
- <li><strong>An acquired taste for some gamers:</strong> The game is not for everyone, as it requires patience, concentration and skill to drive a bus in realistic scenarios. Some players may find it boring, repetitive or frustrating, especially if they are looking for a more action-packed or casual game.</li>
- </ul>
- <h2>Conclusion</h2>
- <p>Bus Simulator 2023 is a bus driving game that offers a realistic and immersive experience of driving a bus in various locations around the world. The game has many features, such as realistic graphics and physics, a variety of buses and maps, career, free-ride and multiplayer modes, customizable buses and interiors, and an intelligent traffic system with passengers. However, the game also has some drawbacks, such as buggy AI behavior and glitches, a cluttered and overwhelming UI, and an acquired taste for some gamers. Overall, Bus Simulator 2023 can appeal to bus enthusiasts and simulation fans, but may not be suitable for everyone.</p>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions about Bus Simulator 2023:</p>
- <ol>
- <li><strong>Is Bus Simulator 2023 free to play?</strong></li>
- <p>Yes, Bus Simulator 2023 is free to play on Android devices. However, the game may contain ads and in-app purchases that can enhance your gameplay or unlock more features.</p>
- <li><strong>Is Bus Simulator 2023 available on iOS devices?</strong></li>
- <p>No, Bus Simulator 2023 is not available on iOS devices at the moment. The game is only compatible with Android devices and PC Windows.</p>
- <li><strong>How can I play Bus Simulator 2023 with a controller?</strong></li>
- <p>You can play Bus Simulator 2023 with a controller by connecting it to your device via Bluetooth or USB. You can also use an emulator on your Windows PC to play the game with a controller.</p>
- <li><strong>How can I update Bus Simulator 2023 to the latest version?</strong></li>
- <p>You can update Bus Simulator 2023 by downloading and installing the latest APK file from a trusted source. You can also check for updates from within the game or from the Google Play Store.</p>
- <li><strong>How can I contact the developers of Bus Simulator 2023?</strong></li>
- <p>You can contact the developers of Bus Simulator 2023 by sending them an email at [email protected] or by visiting their official website at www.bussimulator2023.com.</p>
- </ol>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Candy Crush Saga APK The Most Downloaded Game on Google Play.md DELETED
@@ -1,117 +0,0 @@
1
-
2
- <h1>Candy Crush Saga APK: How to Download and Play the Sweetest Game Ever</h1>
3
- <p>If you are looking for a fun and addictive game that will keep you entertained for hours, you might want to try Candy Crush Saga. This game is one of the most popular and successful puzzle games ever created, with millions of players around the world. But what if you want to play it on your Android device without using Google Play Store? In this article, we will show you how to download and install Candy Crush Saga APK, as well as how to play it and enjoy its features.</p>
4
- <h2>What is Candy Crush Saga?</h2>
5
- <p>Candy Crush Saga is a game that splendidly tackles the match-3 genre. It was developed by King, a leading mobile game developer, and released in 2012. Since then, it has become a global phenomenon, with over a billion downloads and hundreds of levels to complete.</p>
6
- <h2>candy crush saga apk</h2><br /><p><b><b>Download Zip</b> &#10001; <a href="https://urlin.us/2uSUaR">https://urlin.us/2uSUaR</a></b></p><br /><br />
7
- <h3>The gameplay of Candy Crush Saga</h3>
8
- <p>The gameplay of Candy Crush Saga is simple but challenging. You have to match three or more candies of the same color in a row or column to clear them from the board. You can also create special candies by matching four or more candies in different shapes, such as striped, wrapped, or color bomb candies. These special candies can help you clear more candies and score more points.</p>
9
- <p>Each level has a different objective and a limited number of moves or time. You have to achieve the objective before running out of moves or time, or else you will lose a life. You can also earn stars based on your score, which can unlock new episodes and features. Some levels also have obstacles, such as chocolate, jelly, licorice, or blockers, that make the game more difficult.</p>
10
- <h3>The features of Candy Crush Saga</h3>
11
- <p>Candy Crush Saga has many features that make it an enjoyable and rewarding game. Some of these features are:</p>
12
- <p>candy crush saga download free android<br />
13
- candy crush saga mod apk unlimited lives<br />
14
- candy crush saga latest version apk<br />
15
- candy crush saga game install<br />
16
- candy crush saga offline apk<br />
17
- candy crush saga hack apk download<br />
18
- candy crush saga update apk<br />
19
- candy crush saga app store<br />
20
- candy crush saga cheats apk<br />
21
- candy crush saga old version apk<br />
22
- candy crush saga play online<br />
23
- candy crush saga apk mirror<br />
24
- candy crush saga apk pure<br />
25
- candy crush saga for pc<br />
26
- candy crush saga levels apk<br />
27
- candy crush saga android 1<br />
28
- candy crush saga apk mod 2023<br />
29
- candy crush saga full apk<br />
30
- candy crush saga apk rexdl<br />
31
- candy crush saga revdl apk<br />
32
- candy crush saga apk uptodown<br />
33
- candy crush saga apkpure download<br />
34
- candy crush saga original apk<br />
35
- candy crush saga apkmonk<br />
36
- candy crush saga apkmody<br />
37
- candy crush saga mob.org apk<br />
38
- candy crush saga apkmirror download<br />
39
- candy crush saga apknite<br />
40
- candy crush saga apkpanda<br />
41
- candy crush saga apksfree<br />
42
- candy crush saga apkgalaxy<br />
43
- candy crush saga apksfull<br />
44
- candy crush saga apksmodhub.com<br />
45
- candy crush saga apksmash.com<br />
46
- candy crush saga apksnake.com<br />
47
- candy crush saga apktada.com<br />
48
- candy crush saga apkturbo.com<br />
49
- candy crush saga apktwister.com<br />
50
- candy crush saga apkun.com<br />
51
- candy crush saga apkxmod.com</p>
52
- <ul>
53
- <li>Thousands of levels to play, with new ones added every week.</li>
54
- <li>Various game modes, such as target score, jelly, ingredients, timed, order, mixed mode, and more.</li>
55
- <li>Daily rewards, such as free spins, boosters, lives, and gold bars.</li>
56
- <li>Social features, such as connecting with Facebook friends, sending and receiving lives and gifts, competing on leaderboards, and joining teams.</li>
57
- <li>Fun events and challenges, such as sugar drops, candy order, treasure hunt, sweet streaks, and more.</li>
58
- <li>Beautiful graphics and sound effects that create a sweet and colorful atmosphere.</li>
59
- </ul>
60
- <h2>Why download Candy Crush Saga APK?</h2>
61
- <p>Candy Crush Saga is available on Google Play Store for free, but there are some reasons why you might want to download its APK file instead. APK stands for Android Package Kit, which is a file format that contains all the elements needed to install an app on an Android device. By downloading an APK file, you can enjoy some benefits that are not possible with the official version.</p>
62
- <h3>The benefits of downloading Candy Crush Saga APK</h3>
63
- <p>Some of the benefits of downloading Candy Crush Saga APK are:</p>
64
- <ul>
65
- <li>You can play the game on any Android device that supports it, even if it is not compatible with Google Play Store or does not have enough storage space.</li>
66
- <li>You can access the latest version of the game before it is released on Google Play Store, which means you can enjoy new levels and features sooner than others.</li>
67
- <li>You can modify the game according to your preferences, such as unlocking all levels, getting unlimited lives, gold bars, boosters, or removing ads.</li>
68
- </ul>
69
- <h3>The risks of downloading Candy Crush Saga APK</h3>
70
- <p>However, downloading Candy Crush Saga APK also comes with some risks that you should be aware of. Some of these risks are:</p>
71
- <ul>
72
- <li>You might download a fake or malicious APK file that can harm your device or steal your personal information.</li>
73
- <li>You might violate the terms and conditions of the game and get banned from playing it or accessing its online features.</li>
74
- <li>You might lose your progress or data if the APK file is not compatible with your device or the official version of the game.</li>
75
- </ul>
76
- <p>Therefore, you should always download Candy Crush Saga APK from a trusted and reputable source, and scan it with an antivirus software before installing it. You should also backup your data and progress before using the APK file, and use it at your own risk.</p>
77
- <h2>How to download and install Candy Crush Saga APK?</h2>
78
- <p>If you have decided to download and install Candy Crush Saga APK, you will need to follow some simple steps. Here is a guide on how to do it:</p>
79
- <h3>Step 1: Find a reliable source for the APK file</h3>
80
- <p>The first step is to find a website that offers the APK file for Candy Crush Saga. You can search for it on Google or use a dedicated APK website, such as APKPure, APKMirror, or Uptodown. Make sure that the website is safe and secure, and that the APK file is updated and verified. You can also check the reviews and ratings of the APK file from other users.</p>
81
- <h3>Step 2: Enable unknown sources on your device</h3>
82
- <p>The next step is to enable unknown sources on your device, which will allow you to install apps from sources other than Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on. You might also need to grant permission to your browser or file manager to install apps from unknown sources.</p>
83
- <h3>Step 3: Download and install the APK file</h3>
84
- <p>The final step is to download and install the APK file on your device. To do this, go to the website where you found the APK file and tap on the download button. Once the download is complete, open the file and tap on install. Wait for the installation process to finish and then launch the game.</p>
85
- <h2>How to play Candy Crush Saga APK?</h2>
86
- <p>Playing Candy Crush Saga APK is similar to playing the official version of the game. You can log in with your Facebook account or play as a guest. You can also sync your progress and data with the official version if you have it installed on your device. However, you might encounter some issues or errors while playing the game, such as crashing, freezing, or lagging. If this happens, you can try clearing the cache and data of the game, updating the game, or reinstalling it.</p>
87
- <h3>Tips and tricks for playing Candy Crush Saga APK</h3>
88
- <p>If you want to master Candy Crush Saga APK and complete all the levels, you might need some tips and tricks to help you out. Here are some of them:</p>
89
- <ul>
90
- <li>Plan your moves ahead and look for possible matches before making a move.</li>
91
- <li>Try to create as many special candies as possible and combine them for more effects.</li>
92
- <li>Use boosters wisely and save them for difficult levels.</li>
93
- <li>Pay attention to the objective and the board layout of each level.</li>
94
- <li>Take advantage of the daily rewards and events to get more lives, gold bars, boosters, and other goodies.</li>
95
- <li>Join a team or invite your friends to play with you and get more support and fun.</li>
96
- </ul>
97
- <h3>How to update Candy Crush Saga APK</h3>
98
- <p>To keep playing Candy Crush Saga APK without any problems, you should always update it to the latest version. To do this, you can either check for updates on the website where you downloaded the APK file, or use an app updater tool, such as ApkUpdater or ApkTrack. These tools will notify you when there is a new version available and let you download and install it easily.</p>
99
- <h2>Conclusion</h2>
100
- <p>Candy Crush Saga is a game that will make you fall in love with its sweet and colorful world. It is a game that will challenge your mind and skills with its thousands of levels and modes. It is a game that will connect you with millions of other players who share your passion for candy crushing. And it is a game that you can play on your Android device without using Google Play Store by downloading its APK file.</p>
101
- <p>In this article, we have shown you what Candy Crush Saga is, why you might want to download its APK file, how to download and install it, how to play it, and some tips and tricks to help you succeed. We hope that this article has been helpful and informative for you, and that you will enjoy playing Candy Crush Saga APK on your device.</p>
102
- <p>Before we end this article, here are some frequently asked questions that you might have about Candy Crush Saga APK:</p>
103
- <h4>FAQs</h4>
104
- <ol>
105
- <li>Is Candy Crush Saga APK safe to download and install?</li>
106
- <p>Yes, Candy Crush Saga APK is safe to download and install, as long as you get it from a reliable and reputable source. You should also scan the APK file with an antivirus software before installing it, and backup your data and progress before using it.</p>
107
- <li>Is Candy Crush Saga APK free to play?</li>
108
- <p>Yes, Candy Crush Saga APK is free to play, but it also offers in-app purchases that can enhance your gaming experience. You can buy extra lives, gold bars, boosters, or other items with real money. However, you can also play the game without spending any money, as there are many ways to get free rewards and bonuses.</p>
109
- <li>How can I contact the developer of Candy Crush Saga APK?</li>
110
- <p>If you have any questions, feedback, or issues regarding Candy Crush Saga APK, you can contact the developer of the game by visiting their website, https://king.com/, or by sending an email to [email protected]. You can also follow them on their social media accounts, such as Facebook, Twitter, Instagram, or YouTube.</p>
111
- <li>Can I play Candy Crush Saga APK offline?</li>
112
- <p>Yes, you can play Candy Crush Saga APK offline, but you will not be able to access some of its online features, such as connecting with Facebook friends, joining teams, competing on leaderboards, or participating in events and challenges. You will also need an internet connection to update the game or sync your progress and data with the official version.</p>
113
- <li>Can I play Candy Crush Saga APK on other devices?</li>
114
- <p>Yes, you can play Candy Crush Saga APK on other devices that run Android, such as tablets or smart TVs. However, you might need to adjust the game's settings or resolution to fit the device's screen size and performance.</p>
- </ol>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download FIFA 16 Ultimate Team Mod APK and Experience the Most Realistic Football Ever.md DELETED
@@ -1,139 +0,0 @@
1
-
2
- <h1>Download FIFA 16 Ultimate Team Mod Apk for Android</h1>
3
- <p>Are you a fan of soccer games and want to experience the thrill of playing with your favorite players and teams? If yes, then you should try FIFA 16 Ultimate Team, one of the most popular and realistic soccer games for Android devices. And if you want to unlock all the features and modes of the game, then you should download FIFA 16 Ultimate Team mod apk, which gives you unlimited coins, points, players, and more. In this article, we will tell you what is FIFA 16 Ultimate Team, why you should download its mod apk, and how to download and install it on your Android device.</p>
4
- <h2>download fifa 16 ultimate team mod apk</h2><br /><p><b><b>Download Zip</b> --->>> <a href="https://urlin.us/2uSYtu">https://urlin.us/2uSYtu</a></b></p><br /><br />
5
- <h2>What is FIFA 16 Ultimate Team?</h2>
6
- <p>FIFA 16 Ultimate Team is a soccer game developed by EA Sports and released in 2015. It is the first game in the FIFA series to feature female players and teams. It also has improved graphics, gameplay, and modes compared to its predecessors. In FIFA 16 Ultimate Team, you can create your own dream team by choosing from over 10,000 players from over 500 licensed teams. You can also compete in various leagues, tournaments, and events to earn rewards and trophies. You can also customize your team's kits, badges, stadiums, and managers.</p>
7
- <h3>Features of FIFA 16 Ultimate Team</h3>
8
- <p>Some of the features of FIFA 16 Ultimate Team are:</p>
9
- <ul>
10
- <li>Realistic and immersive graphics and animations</li>
11
- <li>Smooth and responsive controls and gameplay</li>
12
- <li>Over 10,000 players from over 500 licensed teams</li>
13
- <li>Female players and teams for the first time in FIFA series</li>
14
- <li>Various modes such as Career Mode, Season Mode, Draft Mode, Online Mode, etc.</li>
15
- <li>Ability to create your own team and customize it</li>
16
- <li>Ability to trade players and items with other players</li>
17
- <li>Ability to challenge your friends and other players online</li>
18
- <li>Regular updates and events</li>
19
- </ul>
20
- <h3>Why download FIFA 16 Ultimate Team mod apk?</h3>
21
- <p>While FIFA 16 Ultimate Team is a free-to-play game, it has some limitations and restrictions that can affect your gaming experience. For example, you need coins and points to buy players, items, packs, etc. You also need energy to play matches. You can earn these resources by playing the game or by spending real money. However, this can be time-consuming or expensive. That's why you should download FIFA 16 Ultimate Team mod apk, which gives you unlimited coins, points, energy, players, items, packs, etc. With this mod apk, you can enjoy the game without any worries or hassles. You can also unlock all the features and modes of the game that are otherwise locked or restricted.</p>
22
- <p>Download FIFA 16 Mod 2023 Apk Obb Data Android offline (UPDATED)[^1^]<br />
23
- FIFA 16: Ultimate team Download APK for Android (Free) - mob.org[^2^]<br />
24
- How to install FIFA 16 Ultimate Team Mod Apk on your device<br />
25
- FIFA 16 Ultimate Team Mod Apk latest version with unlimited coins<br />
26
- FIFA 16 Ultimate Team Mod Apk gameplay and features review<br />
27
- FIFA 16 Ultimate Team Mod Apk vs FIFA 16 original game comparison<br />
28
- FIFA 16 Ultimate Team Mod Apk download link and instructions<br />
29
- FIFA 16 Ultimate Team Mod Apk best players and teams guide<br />
30
- FIFA 16 Ultimate Team Mod Apk tips and tricks for beginners<br />
31
- FIFA 16 Ultimate Team Mod Apk cheats and hacks for android<br />
32
- FIFA 16 Ultimate Team Mod Apk problems and solutions<br />
33
- FIFA 16 Ultimate Team Mod Apk update and news<br />
34
- FIFA 16 Ultimate Team Mod Apk ratings and reviews by users<br />
35
- FIFA 16 Ultimate Team Mod Apk alternatives and similar games<br />
36
- FIFA 16 Ultimate Team Mod Apk requirements and compatibility<br />
37
- Download FIFA 16 Soccer Mod Apk with realistic graphics and physics<br />
38
- FIFA 16 Soccer Mod Apk offline mode and online mode<br />
39
- FIFA 16 Soccer Mod Apk tournaments and leagues<br />
40
- FIFA 16 Soccer Mod Apk customizations and settings<br />
41
- FIFA 16 Soccer Mod Apk challenges and achievements<br />
42
- Download FIFA Mobile Soccer Mod Apk with new features and modes<br />
43
- FIFA Mobile Soccer Mod Apk career mode and manager mode<br />
44
- FIFA Mobile Soccer Mod Apk live events and seasons<br />
45
- FIFA Mobile Soccer Mod Apk social features and leaderboards<br />
46
- FIFA Mobile Soccer Mod Apk controls and interface<br />
47
- Download Dream League Soccer 2023 Mod Apk with unlimited money<br />
48
- Dream League Soccer 2023 Mod Apk create your own team and stadium<br />
49
- Dream League Soccer 2023 Mod Apk realistic animations and sound effects<br />
50
- Dream League Soccer 2023 Mod Apk compete with other players online<br />
51
- Dream League Soccer 2023 Mod Apk transfer market and player development<br />
52
- Download PES Club Manager Mod Apk with full license and data<br />
53
- PES Club Manager Mod Apk build your dream club from scratch<br />
54
- PES Club Manager Mod Apk realistic match simulation and tactics<br />
55
- PES Club Manager Mod Apk train your players and scout new talents<br />
56
- PES Club Manager Mod Apk join official tournaments and events<br />
57
- Download Real Football 2023 Mod Apk with unlimited gold and cash<br />
58
- Real Football 2023 Mod Apk experience the ultimate football game on mobile<br />
59
- Real Football 2023 Mod Apk play with real teams and players from around the world<br />
60
- Real Football 2023 Mod Apk improve your skills and strategy in various modes<br />
61
- Real Football 2023 Mod Apk enjoy stunning graphics and smooth gameplay</p>
62
- <h2>How to download and install FIFA 16 Ultimate Team mod apk?</h2>
63
- <p>If you are interested in downloading and installing FIFA 16 Ultimate Team mod apk on your Android device, then you need to follow some simple steps. But before that, you need to make sure that your device meets some requirements.</p>
64
- <h3>Requirements for FIFA 16 Ultimate Team mod apk</h3>
65
- <p>The requirements for FIFA 16 Ultimate Team mod apk are:</p>
66
- <ul>
67
- <li>An Android device with version 4.4 or higher</li>
68
- <li>At least 1.5 GB of free storage space</li>
69
- <li>A stable internet connection</li>
70
- <li>A file manager app</li>
71
- <li>A zip extractor app</li>
72
- <li>The permission to install apps from unknown sources (you can enable this in your device settings)</li>
73
- </ul>
74
- <h3>Steps to download and install FIFA 16 Ultimate Team mod apk</h3>
75
- <p>The steps to download and install FIFA 16 Ultimate Team mod apk are:</p>
76
- <h4>Step 1: Download the files</h4>
77
- <p>The first step is to download the FIFA 16 Ultimate Team mod apk file and the obb file from a reliable source. You can use the links given below to download them:</p>
78
- <ul>
79
- <li>FIFA 16 Ultimate Team mod apk file: <a href=""></a></li>
80
- <li>FIFA 16 Ultimate Team obb file: <a href=""></a></li>
81
- </ul>
82
- <p>Make sure you download both the files and save them in a folder on your device.</p>
83
- <h4>Step 2: Extract the files</h4>
84
- <p>The next step is to extract the FIFA 16 Ultimate Team obb file using a zip extractor app. You can use any app that can extract zip files, such as ZArchiver, RAR, etc. To extract the file, follow these steps:</p>
85
- <ol>
86
- <li>Open the zip extractor app and locate the FIFA 16 Ultimate Team obb file that you downloaded.</li>
87
- <li>Select the file and tap on the extract option.</li>
88
- <li>Choose a destination folder where you want to extract the file. You can create a new folder or use an existing one.</li>
89
- <li>Wait for the extraction process to complete.</li>
90
- </ol>
91
- <p>After extracting the file, you will get a folder named "com.ea.gp.fifaworld". This folder contains the data of the game.</p>
92
- <h4>Step 3: Install the apk file</h4>
93
- <p>The third step is to install the FIFA 16 Ultimate Team mod apk file on your device. To do this, follow these steps:</p>
94
- <ol>
95
- <li>Open the file manager app and locate the FIFA 16 Ultimate Team mod apk file that you downloaded.</li>
96
- <li>Select the file and tap on it to start the installation process.</li>
97
- <li>You may get a warning message that says "This type of file can harm your device". Ignore it and tap on "OK".</li>
98
- <li>You may also get a prompt that says "For your security, your phone is not allowed to install unknown apps from this source". Tap on "Settings" and enable the option "Allow from this source".</li>
99
- <li>Go back to the installation screen and tap on "Install".</li>
100
- <li>Wait for the installation process to complete.</li>
101
- </ol>
102
- <p>After installing the apk file, you will see an icon of FIFA 16 Ultimate Team on your device's home screen or app drawer.</p>
103
- <h4>Step 4: Move the obb file</h4>
104
- <p>The fourth step is to move the FIFA 16 Ultimate Team obb folder that you extracted to the right location on your device. To do this, follow these steps:</p>
105
- <ol>
106
- <li>Open the file manager app and locate the FIFA 16 Ultimate Team obb folder that you extracted. It should be named "com.ea.gp.fifaworld".</li>
107
- <li>Select the folder and tap on the cut or move option.</li>
108
- <li>Navigate to the following path on your device: Internal Storage > Android > obb. If you don't see an obb folder, create one.</li>
109
- <li>Paste or move the FIFA 16 Ultimate Team obb folder in the obb folder.</li>
110
- </ol>
111
- <p>This step is important because it will allow the game to access its data and run properly.</p>
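Step 4 amounts to relocating a single folder into the obb directory. If you manage the device's storage from a computer or a scripted environment, the same operation is one `shutil.move`; the paths below are illustrative — on the device itself the obb root is `Android/obb` under internal storage:

```python
import shutil
from pathlib import Path

def move_obb_folder(extracted_folder: str, storage_root: str) -> Path:
    """Move the extracted game-data folder into <storage_root>/Android/obb,
    creating the obb directory first if it does not exist."""
    obb_dir = Path(storage_root) / "Android" / "obb"
    obb_dir.mkdir(parents=True, exist_ok=True)
    target = obb_dir / Path(extracted_folder).name
    shutil.move(str(extracted_folder), str(target))
    return target
```

The folder name must stay exactly `com.ea.gp.fifaworld`, since the game looks its data up by package name.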
112
- <h4>Step 5: Launch the game and enjoy</h4>
113
- <p>The final step is to launch FIFA 16 Ultimate Team mod apk on your device and enjoy playing with unlimited coins, points, players, items, packs, etc. To do this, follow these steps:</p>
114
- <ol>
115
- <li>Tap on the FIFA 16 Ultimate Team icon on your device's home screen or app drawer.</li>
116
- <li>You may get a message that says "Download failed because you may not have purchased this app". Ignore it and tap on "OK".</li>
117
- <li>The game will start loading and verifying its data. Wait for it to finish.</li>
118
- <li>You may also get a message that says "You need an internet connection for first time verification". Make sure you have a stable internet connection and tap on "Retry".</li>
119
- <li>The game will launch and ask you to choose your language and accept some terms and conditions. Do so accordingly.</li>
120
- <li>You will then see the main menu of FIFA 16 Ultimate Team mod apk. You can choose any mode or option you want and start playing.</li>
121
- </ol>
122
- <h2>Conclusion</h2>
123
- <p>In this article, we have shown you how to download and install FIFA 16 Ultimate Team mod apk on your Android device. This mod apk will give you unlimited coins, points, players, items, packs, etc. and unlock all the features and modes of the game. You can enjoy playing with your favorite players and teams and compete in various leagues, tournaments, and events. FIFA 16 Ultimate Team mod apk is one of the best soccer games for Android devices and you should definitely try it out.</p>
124
- <h2>FAQs</h2>
125
- <p>Here are some frequently asked questions about FIFA 16 Ultimate Team mod apk:</p>
126
- <ul>
127
- <li>Q: Is FIFA 16 Ultimate Team mod apk safe to use?</li>
128
- <li>A: Yes, FIFA 16 Ultimate Team mod apk is safe to use as long as you download it from a reliable source and follow the steps given in this article. However, you should always be careful when installing apps from unknown sources and scan them for viruses or malware before installing them.</li>
129
- <li>Q: Will FIFA 16 Ultimate Team mod apk work on any Android device?</li>
130
- <li>A: FIFA 16 Ultimate Team mod apk should work on most Android devices that have version 4.4 or higher and at least 1.5 GB of free storage space. However, some devices may not be compatible or may experience some issues due to different specifications or settings. If you encounter any problems, you can try changing some options in the game settings or contact the developer for support.</li>
131
- <li>Q: Do I need to root my device to use FIFA 16 Ultimate Team mod apk?</li>
132
- <li>A: No, you do not need to root your device to use FIFA 16 Ultimate Team mod apk. You just need to enable the permission to install apps from unknown sources in your device settings.</li>
133
- <li>Q: Can I play FIFA 16 Ultimate Team mod apk online with other players?</li>
134
- <li>A: Yes, you can play FIFA 16 Ultimate Team mod apk online with other players as long as you have a stable internet connection. However, you may face some issues or bans if the game detects that you are using a modded version. To avoid this, you can use a VPN app or play offline mode.</li>
135
- <li>Q: Can I update FIFA 16 Ultimate Team mod apk to the latest version?</li>
136
- <li>A: Yes, you can update FIFA 16 Ultimate Team mod apk to the latest version by downloading and installing the new mod apk file from the same source. However, you may lose your progress or data if you do not backup your game before updating. To backup your game, you can use a cloud service or a backup app.</li>
137
- </ul>
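The last answer recommends backing up your game data before updating. If you have file access to the data folder, the backup is simply a timestamped copy of that directory. A minimal sketch — the data path varies by device and is a placeholder here:

```python
import shutil
import time
from pathlib import Path

def backup_game_data(data_dir: str, backup_root: str) -> Path:
    """Copy the game's data folder to backup_root/<name>-<timestamp>
    so it can be restored if an update goes wrong."""
    src = Path(data_dir)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_root) / f"{src.name}-{stamp}"
    shutil.copytree(src, dest)
    return dest
```

To restore, copy the backed-up folder to its original location before launching the game.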
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/1phancelerku/anime-remove-background/Enjoy the New Features of Lokicraft 1.18 0 APK on Your Android Device.md DELETED
@@ -1,125 +0,0 @@
1
-
2
- <h1>Lokicraft 1.18 0 APK: Everything You Need to Know</h1>
3
- <p>If you are a fan of sandbox games, you might have heard of Lokicraft, a game inspired by the popular Minecraft. In this article, we will tell you everything you need to know about Lokicraft 1.18 0 APK, the latest version of the game that has some exciting new features and improvements. We will also show you how to download and install it on your Android device, and why you should give it a try.</p>
4
- <h2>lokicraft 1.18 0 apk</h2><br /><p><b><b>Download File</b> &#10003;&#10003;&#10003; <a href="https://jinyurl.com/2uNTZS">https://jinyurl.com/2uNTZS</a></b></p><br /><br />
5
- <h2>What is Lokicraft?</h2>
6
- <p>Lokicraft is a sandbox game that allows you to create your own world using various blocks and materials. You can explore, build, craft, and survive in different environments, such as forests, deserts, mountains, and caves. You can also play with other players online or offline, and share your creations with them.</p>
7
- <h3>A sandbox game inspired by Minecraft</h3>
8
- <p>Lokicraft is clearly influenced by Minecraft, one of the most popular and successful games of all time. The graphics, gameplay, and mechanics of Lokicraft are very similar to those of Minecraft, but with some differences and variations. For example, Lokicraft has more types of blocks and items than Minecraft, and some of them have unique functions and effects. Lokicraft also has more animals and creatures than Minecraft, some of which are friendly and some of which are hostile.</p>
9
- <h3>Features and gameplay of Lokicraft</h3>
10
- <p>Lokicraft has two main modes: creative mode and survival mode. In creative mode, you have unlimited resources and can build anything you want without any restrictions or dangers. In survival mode, you have to gather resources, craft tools and weapons, and protect yourself from enemies and environmental hazards. You also have to manage your hunger and health levels.</p>
11
- <p>Lokicraft has many features that make it fun and engaging to play. Some of them are:</p>
12
- <ul>
13
- <li>Over 100 types of blocks and items to use in your constructions</li>
14
- <li>Over 50 types of animals and creatures to encounter in your adventures</li>
15
- <li>Multiple biomes and terrains to explore and customize</li>
16
- <li>Day and night cycle and weather effects</li>
17
- <li>Multiplayer mode to play with other players online or offline</li>
18
- <li>World editor to create your own maps and scenarios</li>
19
- <li>Achievements and leaderboards to track your progress and compete with others</li>
20
- </ul>
21
- <h2>What is new in Lokicraft 1.18 0 APK?</h2>
22
- <p>Lokicraft 1.18 0 APK is the latest version of the game that was released in June 2023. It has some new features and improvements that make it more enjoyable and immersive than ever. Here are some of them:</p>
54
- <h3>New biomes and blocks</h3>
55
- <p>Lokicraft 1.18 0 APK introduces two new biomes: the swamp biome and the snow biome. The swamp biome is a wetland area with water, mud, grass, vines, mushrooms, frogs, snakes, crocodiles, and other swampy creatures. The snow biome is a frozen area with snow, ice, pine trees, polar bears, penguins, snowmen, and other snowy creatures.</p>
56
- <p>Lokicraft 1.18 0 APK also adds some new blocks and items that are related to these biomes. For example, you can find mud blocks, vine blocks, mushroom blocks, crocodile eggs, snow blocks, ice blocks, pine cones, and snowballs in these biomes. You can use them to craft new items and decorations, such as mushroom soup, vine ladders, mud bricks, ice sculptures, snow forts, and snowballs.</p>
57
- <h3>Improved graphics and performance</h3>
58
- <p>Lokicraft 1.18 0 APK also improves the graphics and performance of the game. The game now has more realistic lighting and shadows, smoother animations, and higher resolution textures. The game also runs faster and smoother on most devices, and has less bugs and glitches.</p>
59
- <h3>How to download and install Lokicraft 1.18 0 APK</h3>
60
- <p>If you want to download and install Lokicraft 1.18 0 APK on your Android device, you can follow these simple steps:</p>
61
- <ol>
62
- <li>Go to the official website of Lokicraft or any trusted third-party source that provides the APK file.</li>
63
- <li>Download the APK file to your device.</li>
64
- <li>Enable the installation of apps from unknown sources in your device settings.</li>
65
- <li>Locate the APK file in your device storage and tap on it to install it.</li>
66
- <li>Launch the game and enjoy!</li>
67
- </ol>
68
- <p>Note: You may need to uninstall the previous version of Lokicraft before installing the new one.</p>
69
- <h2>Why should you play Lokicraft 1.18 0 APK?</h2>
70
- <p>Lokicraft 1.18 0 APK is a great game for anyone who loves sandbox games, creativity, and adventure. Here are some reasons why you should play it:</p>
71
- <h3>Pros and cons of Lokicraft</h3>
72
- <p>Lokicraft has many pros and cons that make it different from other sandbox games. Some of the pros are:</p>
73
- <ul>
74
- <li>It is free to play and does not require any registration or subscription.</li>
75
- <li>It has a lot of content and variety to keep you entertained for hours.</li>
76
- <li>It has a simple and intuitive interface and controls that are easy to learn and use.</li>
77
- <li>It has a friendly and supportive community of players and developers.</li>
78
- </ul>
79
- <p>Some of the cons are:</p>
80
- <ul>
81
- <li>It is not an official or licensed product of Minecraft or Mojang Studios.</li>
82
- <li>It may have some compatibility issues with some devices or operating systems.</li>
83
- <li>It may have some bugs or errors that need to be fixed or updated.</li>
84
- <li>It may have some ads or in-app purchases that may affect your gaming experience.</li>
85
- </ul>
86
- <h3>Comparison with Minecraft</h3>
87
- <p>Lokicraft is often compared with Minecraft, as they are both sandbox games that share many similarities. However, they also have some differences that make them unique and appealing to different types of players. Here are some of the main differences between Lokicraft and Minecraft:</p>
88
- <table>
- <tr><th>Feature</th><th>Lokicraft</th><th>Minecraft</th></tr>
- <tr><td>Graphics</td><td>Pixelated and colorful</td><td>Pixelated and realistic</td></tr>
- <tr><td>Blocks</td><td>More types and functions</td><td>Fewer types and functions</td></tr>
- <tr><td>Items</td><td>More types and variations</td><td>Fewer types and variations</td></tr>
- <tr><td>Animals</td><td>More types and behaviors</td><td>Fewer types and behaviors</td></tr>
- <tr><td>Biomes</td><td>More types and less diversity... more types and more diversity</td><td>Fewer types and less diversity</td></tr>
- <tr><td>Modes</td><td>Creative mode and survival mode</td><td>Creative mode, survival mode, adventure mode, hardcore mode, spectator mode</td></tr>
- <tr><td>Multiplayer</td><td>Online mode and offline mode</td><td>Online mode only</td></tr>
- <tr><td>World editor</td><td>Available for all players</td><td>Available for PC players only</td></tr>
- <tr><td>Achievements</td><td>Available for all players</td><td>Available for PC players only</td></tr>
- <tr><td>Price</td><td>Free to play</td><td>Paid to play</td></tr>
- </table>
- <h3>Tips and tricks for playing Lokicraft</h3>
89
- <p>If you want to have more fun and success in playing Lokicraft, you can follow these tips and tricks:</p>
90
- <ul>
91
- <li>Use the world editor to create your own maps and scenarios. You can customize the terrain, the weather, the time, the difficulty, the enemies, the items, and more. You can also share your maps with other players or download their maps.</li>
92
- <li>Use the multiplayer mode to play with other players online or offline. You can join or create servers, chat with other players, cooperate or compete with them, trade items with them, or just have fun with them.</li>
93
- <li>Use the achievements to track your progress and challenge yourself. You can earn achievements by completing certain tasks or goals in the game. You can also compare your achievements with other players or try to unlock them all.</li>
94
- <li>Use the resources wisely in survival mode. You have to gather resources, craft tools and weapons, and protect yourself from enemies and environmental hazards. You also have to manage your hunger and health levels. You can find resources in different biomes, such as wood, stone, coal, iron, gold, diamond, and more. You can also farm crops, breed animals, and fish for food. You can craft tools and weapons using a crafting table, and craft armor and other items using a furnace. You can also use a chest to store your items.</li>
95
- <li>Use the creative mode to unleash your imagination and creativity. You have unlimited resources and can build anything you want without any restrictions or dangers. You can also fly around the map and see your creations from different angles. You can use different blocks and items to create buildings, sculptures, machines, art, and more.</li>
96
- </ul>
97
- <h2>Conclusion</h2>
98
- <p>Lokicraft 1.18 0 APK is a sandbox game that lets you create your own world using blocks and materials. It is inspired by Minecraft, but has its own features and gameplay that make it unique and fun. It has two main modes: creative mode and survival mode. It also has a world editor, a multiplayer mode, and achievements. It has new biomes and blocks, improved graphics and performance, and is free to play.</p>
99
- <h3>Summary of the main points</h3>
100
- <p>To summarize, here are the main points of this article:</p>
101
- <ul>
102
- <li>Lokicraft is a sandbox game that allows you to create your own world using various blocks and materials.</li>
103
- <li>Lokicraft is influenced by Minecraft, but has more types of blocks, items, animals, and biomes than Minecraft.</li>
104
- <li>Lokicraft has two main modes: creative mode and survival mode. In creative mode, you have unlimited resources and can build anything you want. In survival mode, you have to gather resources, craft tools and weapons, and protect yourself from enemies and environmental hazards.</li>
105
- <li>Lokicraft 1.18 0 APK is the latest version of the game that has some new features and improvements. It introduces two new biomes: the swamp biome and the snow biome. It also improves the graphics and performance of the game.</li>
106
- <li>Lokicraft 1.18 0 APK is free to play and easy to download and install on your Android device.</li>
107
- <li>Lokicraft is a fun and engaging game for anyone who loves sandbox games, creativity, and adventure.</li>
108
- </ul>
109
- <h3>Call to action for the readers</h3>
110
- <p>If you are interested in playing Lokicraft 1.18 0 APK, you can download it from the official website of Lokicraft or any trusted third-party source that provides the APK file. You can also follow Lokicraft on social media platforms to get the latest news and updates about the game. You can also share your feedback and suggestions with the developers and other players.</p>
111
- <p>So what are you waiting for? Download Lokicraft 1.18 0 APK today and start creating your own world!</p>
112
- <h2>FAQs</h2>
113
- <p>Here are some frequently asked questions about Lokicraft 1.18 0 APK:</p>
114
- <h4>Q: Is Lokicraft 1.18 0 APK safe to download?</h4>
115
- <p>A: Yes, Lokicraft 1.18 0 APK is safe to download as long as you get it from a reliable source that does not contain any viruses or malware. However, you should always be careful when downloading any APK file from unknown sources, as they may harm your device or compromise your privacy.</p>
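One practical way to follow that advice is to compare the downloaded file's checksum against a value published by the source, when the source provides one. A small sketch using only Python's standard library — the expected hash in the commented example is a placeholder you would obtain from the download page:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks
    so large APKs do not have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against a hash published by the download source, if available:
# assert sha256_of_file("lokicraft-1.18.0.apk") == EXPECTED_SHA256
```

A matching checksum only proves the file was not corrupted or swapped in transit; it is not a substitute for trusting the source itself.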
116
- <h4>Q: Is Lokicraft 1.18 0 APK compatible with my device?</h4>
117
- <p>A: Lokicraft 1.18 0 APK is compatible with most Android devices that have Android 4.4 or higher versions installed. However, some devices may have some compatibility issues or performance problems depending on their specifications or operating systems.</p>
118
- <h4>Q: How can I update Lokicraft 1.18 0 APK?</h4>
119
- <p>A: You can update Lokicraft 1.18 0 APK by downloading the latest version of the game from the official website of Lokicraft or any trusted third-party source that provides the APK file. You may need to uninstall the previous version of Lokicraft before installing the new one.</p>
120
- <h4>Q: How can I contact the developers of Lokicraft?</h4>
121
- <p>A: You can contact the developers of Lokicraft by sending them an email at [email protected] or by visiting their website at www.lokicraft.com. You can also follow them on Facebook, Twitter, Instagram, YouTube, or TikTok to get the latest news and updates about the game.</p>
122
- <h4>Q: How can I support the developers of Lokicraft?</h4>
123
- <p>A: You can support the developers of Lokicraft by making a donation to them via PayPal or by purchasing some of their in-app products or services. You can also support them by rating and reviewing their game on the Google Play Store or any other platform where you downloaded it. You can also share their game with your friends and family and invite them to play with you.</p>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/docs/install.md DELETED
@@ -1,51 +0,0 @@
1
- ## v1.8.0
2
- ### Linux and Windows
3
- ```shell
4
- # CUDA 11.0
5
- pip --default-timeout=100 install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
6
-
7
- # CUDA 10.2
8
- pip --default-timeout=100 install torch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0
9
-
10
- # CPU only
11
- pip --default-timeout=100 install torch==1.8.0+cpu torchvision==0.9.0+cpu torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
12
-
13
- ```
14
-
15
-
16
- ## v1.7.1
17
- ### Linux and Windows
18
- ```shell
19
- # CUDA 11.0
20
- pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
21
-
22
- # CUDA 10.2
23
- pip install torch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2
24
-
25
- # CUDA 10.1
26
- pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
27
-
28
- # CUDA 9.2
29
- pip install torch==1.7.1+cu92 torchvision==0.8.2+cu92 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
30
-
31
- # CPU only
32
- pip install torch==1.7.1+cpu torchvision==0.8.2+cpu torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
33
- ```
34
-
35
-
36
- ## v1.6.0
37
-
38
- ### Linux and Windows
39
- ```shell
40
- # CUDA 10.2
41
- pip install torch==1.6.0 torchvision==0.7.0
42
-
43
- # CUDA 10.1
44
- pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
45
-
46
- # CUDA 9.2
47
- pip install torch==1.6.0+cu92 torchvision==0.7.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html
48
-
49
- # CPU only
50
- pip install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
51
- ```
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/4Taps/SadTalker/src/face3d/util/preprocess.py DELETED
@@ -1,103 +0,0 @@
1
- """This script contains the image preprocessing code for Deep3DFaceRecon_pytorch
2
- """
3
-
4
- import numpy as np
5
- from scipy.io import loadmat
6
- from PIL import Image
7
- import cv2
8
- import os
9
- from skimage import transform as trans
10
- import torch
11
- import warnings
12
- warnings.filterwarnings("ignore", category=np.VisibleDeprecationWarning)
13
- warnings.filterwarnings("ignore", category=FutureWarning)
14
-
15
-
16
- # calculating least square problem for image alignment
17
- def POS(xp, x):
18
- npts = xp.shape[1]
19
-
20
- A = np.zeros([2*npts, 8])
21
-
22
- A[0:2*npts-1:2, 0:3] = x.transpose()
23
- A[0:2*npts-1:2, 3] = 1
24
-
25
- A[1:2*npts:2, 4:7] = x.transpose()
26
- A[1:2*npts:2, 7] = 1
27
-
28
- b = np.reshape(xp.transpose(), [2*npts, 1])
29
-
30
- k, _, _, _ = np.linalg.lstsq(A, b)
31
-
32
- R1 = k[0:3]
33
- R2 = k[4:7]
34
- sTx = k[3]
35
- sTy = k[7]
36
- s = (np.linalg.norm(R1) + np.linalg.norm(R2))/2
37
- t = np.stack([sTx, sTy], axis=0)
38
-
39
- return t, s
40
-
41
- # resize and crop images for face reconstruction
42
- def resize_n_crop_img(img, lm, t, s, target_size=224., mask=None):
43
- w0, h0 = img.size
44
- w = (w0*s).astype(np.int32)
45
- h = (h0*s).astype(np.int32)
46
- left = (w/2 - target_size/2 + float((t[0] - w0/2)*s)).astype(np.int32)
47
- right = left + target_size
48
- up = (h/2 - target_size/2 + float((h0/2 - t[1])*s)).astype(np.int32)
49
- below = up + target_size
50
-
51
- img = img.resize((w, h), resample=Image.BICUBIC)
52
- img = img.crop((left, up, right, below))
53
-
54
- if mask is not None:
55
- mask = mask.resize((w, h), resample=Image.BICUBIC)
56
- mask = mask.crop((left, up, right, below))
57
-
58
- lm = np.stack([lm[:, 0] - t[0] + w0/2, lm[:, 1] -
59
- t[1] + h0/2], axis=1)*s
60
- lm = lm - np.reshape(
61
- np.array([(w/2 - target_size/2), (h/2-target_size/2)]), [1, 2])
62
-
63
- return img, lm, mask
64
-
65
- # utils for face reconstruction
66
- def extract_5p(lm):
67
- lm_idx = np.array([31, 37, 40, 43, 46, 49, 55]) - 1
68
- lm5p = np.stack([lm[lm_idx[0], :], np.mean(lm[lm_idx[[1, 2]], :], 0), np.mean(
69
- lm[lm_idx[[3, 4]], :], 0), lm[lm_idx[5], :], lm[lm_idx[6], :]], axis=0)
70
- lm5p = lm5p[[1, 2, 0, 3, 4], :]
71
- return lm5p
72
-
73
- # utils for face reconstruction
74
- def align_img(img, lm, lm3D, mask=None, target_size=224., rescale_factor=102.):
75
- """
76
- Return:
77
- transparams --numpy.array (raw_W, raw_H, scale, tx, ty)
78
- img_new --PIL.Image (target_size, target_size, 3)
79
- lm_new --numpy.array (68, 2), y direction is opposite to v direction
80
- mask_new --PIL.Image (target_size, target_size)
81
-
82
- Parameters:
83
- img --PIL.Image (raw_H, raw_W, 3)
84
- lm --numpy.array (68, 2), y direction is opposite to v direction
85
- lm3D --numpy.array (5, 3)
86
- mask --PIL.Image (raw_H, raw_W, 3)
87
- """
88
-
89
- w0, h0 = img.size
90
- if lm.shape[0] != 5:
91
- lm5p = extract_5p(lm)
92
- else:
93
- lm5p = lm
94
-
95
- # calculate translation and scale factors using 5 facial landmarks and standard landmarks of a 3D face
96
- t, s = POS(lm5p.transpose(), lm3D.transpose())
97
- s = rescale_factor/s
98
-
99
- # processing the image
100
- img_new, lm_new, mask_new = resize_n_crop_img(img, lm, t, s, target_size=target_size, mask=mask)
101
- trans_params = np.array([w0, h0, s, t[0], t[1]])
102
-
103
- return trans_params, img_new, lm_new, mask_new
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/AIFILMS/generate_human_motion/VQ-Trans/dataset/dataset_VQ.py DELETED
@@ -1,109 +0,0 @@
1
- import torch
2
- from torch.utils import data
3
- import numpy as np
4
- from os.path import join as pjoin
5
- import random
6
- import codecs as cs
7
- from tqdm import tqdm
8
-
9
-
10
-
11
- class VQMotionDataset(data.Dataset):
12
- def __init__(self, dataset_name, window_size = 64, unit_length = 4):
13
- self.window_size = window_size
14
- self.unit_length = unit_length
15
- self.dataset_name = dataset_name
16
-
17
- if dataset_name == 't2m':
18
- self.data_root = './dataset/HumanML3D'
19
- self.motion_dir = pjoin(self.data_root, 'new_joint_vecs')
20
- self.text_dir = pjoin(self.data_root, 'texts')
21
- self.joints_num = 22
22
- self.max_motion_length = 196
23
- self.meta_dir = 'checkpoints/t2m/VQVAEV3_CB1024_CMT_H1024_NRES3/meta'
24
-
25
- elif dataset_name == 'kit':
26
- self.data_root = './dataset/KIT-ML'
27
- self.motion_dir = pjoin(self.data_root, 'new_joint_vecs')
28
- self.text_dir = pjoin(self.data_root, 'texts')
29
- self.joints_num = 21
30
-
31
- self.max_motion_length = 196
32
- self.meta_dir = 'checkpoints/kit/VQVAEV3_CB1024_CMT_H1024_NRES3/meta'
33
-
34
- joints_num = self.joints_num
35
-
36
- mean = np.load(pjoin(self.meta_dir, 'mean.npy'))
37
- std = np.load(pjoin(self.meta_dir, 'std.npy'))
38
-
39
- split_file = pjoin(self.data_root, 'train.txt')
40
-
41
- self.data = []
42
- self.lengths = []
43
- id_list = []
44
- with cs.open(split_file, 'r') as f:
45
- for line in f.readlines():
46
- id_list.append(line.strip())
47
-
48
- for name in tqdm(id_list):
49
- try:
50
- motion = np.load(pjoin(self.motion_dir, name + '.npy'))
51
- if motion.shape[0] < self.window_size:
52
- continue
53
- self.lengths.append(motion.shape[0] - self.window_size)
54
- self.data.append(motion)
55
- except:
56
- # Some motion may not exist in KIT dataset
57
- pass
58
-
59
-
60
- self.mean = mean
61
- self.std = std
62
- print("Total number of motions {}".format(len(self.data)))
63
-
64
- def inv_transform(self, data):
65
- return data * self.std + self.mean
66
-
67
- def compute_sampling_prob(self) :
68
-
69
- prob = np.array(self.lengths, dtype=np.float32)
70
- prob /= np.sum(prob)
71
- return prob
72
-
73
- def __len__(self):
74
- return len(self.data)
75
-
76
- def __getitem__(self, item):
77
- motion = self.data[item]
78
-
79
- idx = random.randint(0, len(motion) - self.window_size)
80
-
81
- motion = motion[idx:idx+self.window_size]
82
- "Z Normalization"
83
- motion = (motion - self.mean) / self.std
84
-
85
- return motion
86
-
87
- def DATALoader(dataset_name,
88
- batch_size,
89
- num_workers = 8,
90
- window_size = 64,
91
- unit_length = 4):
92
-
93
- trainSet = VQMotionDataset(dataset_name, window_size=window_size, unit_length=unit_length)
94
- prob = trainSet.compute_sampling_prob()
95
- sampler = torch.utils.data.WeightedRandomSampler(prob, num_samples = len(trainSet) * 1000, replacement=True)
96
- train_loader = torch.utils.data.DataLoader(trainSet,
97
- batch_size,
98
- shuffle=True,
99
- #sampler=sampler,
100
- num_workers=num_workers,
101
- #collate_fn=collate_fn,
102
- drop_last = True)
103
-
104
- return train_loader
105
-
106
- def cycle(iterable):
107
- while True:
108
- for x in iterable:
109
- yield x
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/models/diffusion/classifier.py DELETED
@@ -1,267 +0,0 @@
- import os
- import torch
- import pytorch_lightning as pl
- from omegaconf import OmegaConf
- from torch.nn import functional as F
- from torch.optim import AdamW
- from torch.optim.lr_scheduler import LambdaLR
- from copy import deepcopy
- from einops import rearrange
- from glob import glob
- from natsort import natsorted
-
- from ldm.modules.diffusionmodules.openaimodel import EncoderUNetModel, UNetModel
- from ldm.util import log_txt_as_img, default, ismap, instantiate_from_config
-
- __models__ = {
-     'class_label': EncoderUNetModel,
-     'segmentation': UNetModel
- }
-
-
- def disabled_train(self, mode=True):
-     """Overwrite model.train with this function to make sure train/eval mode
-     does not change anymore."""
-     return self
-
-
- class NoisyLatentImageClassifier(pl.LightningModule):
-
-     def __init__(self,
-                  diffusion_path,
-                  num_classes,
-                  ckpt_path=None,
-                  pool='attention',
-                  label_key=None,
-                  diffusion_ckpt_path=None,
-                  scheduler_config=None,
-                  weight_decay=1.e-2,
-                  log_steps=10,
-                  monitor='val/loss',
-                  *args,
-                  **kwargs):
-         super().__init__(*args, **kwargs)
-         self.num_classes = num_classes
-         # get latest config of diffusion model
-         diffusion_config = natsorted(glob(os.path.join(diffusion_path, 'configs', '*-project.yaml')))[-1]
-         self.diffusion_config = OmegaConf.load(diffusion_config).model
-         self.diffusion_config.params.ckpt_path = diffusion_ckpt_path
-         self.load_diffusion()
-
-         self.monitor = monitor
-         self.numd = self.diffusion_model.first_stage_model.encoder.num_resolutions - 1
-         self.log_time_interval = self.diffusion_model.num_timesteps // log_steps
-         self.log_steps = log_steps
-
-         self.label_key = label_key if not hasattr(self.diffusion_model, 'cond_stage_key') \
-             else self.diffusion_model.cond_stage_key
-
-         assert self.label_key is not None, 'label_key neither in diffusion model nor in model.params'
-
-         if self.label_key not in __models__:
-             raise NotImplementedError()
-
-         self.load_classifier(ckpt_path, pool)
-
-         self.scheduler_config = scheduler_config
-         self.use_scheduler = self.scheduler_config is not None
-         self.weight_decay = weight_decay
-
-     def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):
-         sd = torch.load(path, map_location="cpu")
-         if "state_dict" in list(sd.keys()):
-             sd = sd["state_dict"]
-         keys = list(sd.keys())
-         for k in keys:
-             for ik in ignore_keys:
-                 if k.startswith(ik):
-                     print("Deleting key {} from state_dict.".format(k))
-                     del sd[k]
-         missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(
-             sd, strict=False)
-         print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
-         if len(missing) > 0:
-             print(f"Missing Keys: {missing}")
-         if len(unexpected) > 0:
-             print(f"Unexpected Keys: {unexpected}")
-
-     def load_diffusion(self):
-         model = instantiate_from_config(self.diffusion_config)
-         self.diffusion_model = model.eval()
-         self.diffusion_model.train = disabled_train
-         for param in self.diffusion_model.parameters():
-             param.requires_grad = False
-
-     def load_classifier(self, ckpt_path, pool):
-         model_config = deepcopy(self.diffusion_config.params.unet_config.params)
-         model_config.in_channels = self.diffusion_config.params.unet_config.params.out_channels
-         model_config.out_channels = self.num_classes
-         if self.label_key == 'class_label':
-             model_config.pool = pool
-
-         self.model = __models__[self.label_key](**model_config)
-         if ckpt_path is not None:
-             print('#####################################################################')
-             print(f'load from ckpt "{ckpt_path}"')
-             print('#####################################################################')
-             self.init_from_ckpt(ckpt_path)
-
-     @torch.no_grad()
-     def get_x_noisy(self, x, t, noise=None):
-         noise = default(noise, lambda: torch.randn_like(x))
-         continuous_sqrt_alpha_cumprod = None
-         if self.diffusion_model.use_continuous_noise:
-             continuous_sqrt_alpha_cumprod = self.diffusion_model.sample_continuous_noise_level(x.shape[0], t + 1)
-             # todo: make sure t+1 is correct here
-
-         return self.diffusion_model.q_sample(x_start=x, t=t, noise=noise,
-                                              continuous_sqrt_alpha_cumprod=continuous_sqrt_alpha_cumprod)
-
-     def forward(self, x_noisy, t, *args, **kwargs):
-         return self.model(x_noisy, t)
-
-     @torch.no_grad()
-     def get_input(self, batch, k):
-         x = batch[k]
-         if len(x.shape) == 3:
-             x = x[..., None]
-         x = rearrange(x, 'b h w c -> b c h w')
-         x = x.to(memory_format=torch.contiguous_format).float()
-         return x
-
-     @torch.no_grad()
-     def get_conditioning(self, batch, k=None):
-         if k is None:
-             k = self.label_key
-         assert k is not None, 'Needs to provide label key'
-
-         targets = batch[k].to(self.device)
-
-         if self.label_key == 'segmentation':
-             targets = rearrange(targets, 'b h w c -> b c h w')
-             for down in range(self.numd):
-                 h, w = targets.shape[-2:]
-                 targets = F.interpolate(targets, size=(h // 2, w // 2), mode='nearest')
-
-             # targets = rearrange(targets,'b c h w -> b h w c')
-
-         return targets
-
-     def compute_top_k(self, logits, labels, k, reduction="mean"):
-         _, top_ks = torch.topk(logits, k, dim=1)
-         if reduction == "mean":
-             return (top_ks == labels[:, None]).float().sum(dim=-1).mean().item()
-         elif reduction == "none":
-             return (top_ks == labels[:, None]).float().sum(dim=-1)
-
-     def on_train_epoch_start(self):
-         # save some memory
-         self.diffusion_model.model.to('cpu')
-
-     @torch.no_grad()
-     def write_logs(self, loss, logits, targets):
-         log_prefix = 'train' if self.training else 'val'
-         log = {}
-         log[f"{log_prefix}/loss"] = loss.mean()
-         log[f"{log_prefix}/acc@1"] = self.compute_top_k(
-             logits, targets, k=1, reduction="mean"
-         )
-         log[f"{log_prefix}/acc@5"] = self.compute_top_k(
-             logits, targets, k=5, reduction="mean"
-         )
-
-         self.log_dict(log, prog_bar=False, logger=True, on_step=self.training, on_epoch=True)
-         self.log('loss', log[f"{log_prefix}/loss"], prog_bar=True, logger=False)
-         self.log('global_step', self.global_step, logger=False, on_epoch=False, prog_bar=True)
-         lr = self.optimizers().param_groups[0]['lr']
-         self.log('lr_abs', lr, on_step=True, logger=True, on_epoch=False, prog_bar=True)
-
-     def shared_step(self, batch, t=None):
-         x, *_ = self.diffusion_model.get_input(batch, k=self.diffusion_model.first_stage_key)
-         targets = self.get_conditioning(batch)
-         if targets.dim() == 4:
-             targets = targets.argmax(dim=1)
-         if t is None:
-             t = torch.randint(0, self.diffusion_model.num_timesteps, (x.shape[0],), device=self.device).long()
-         else:
-             t = torch.full(size=(x.shape[0],), fill_value=t, device=self.device).long()
-         x_noisy = self.get_x_noisy(x, t)
-         logits = self(x_noisy, t)
-
-         loss = F.cross_entropy(logits, targets, reduction='none')
-
-         self.write_logs(loss.detach(), logits.detach(), targets.detach())
-
-         loss = loss.mean()
-         return loss, logits, x_noisy, targets
-
-     def training_step(self, batch, batch_idx):
-         loss, *_ = self.shared_step(batch)
-         return loss
-
-     def reset_noise_accs(self):
-         self.noisy_acc = {t: {'acc@1': [], 'acc@5': []} for t in
-                           range(0, self.diffusion_model.num_timesteps, self.diffusion_model.log_every_t)}
-
-     def on_validation_start(self):
-         self.reset_noise_accs()
-
-     @torch.no_grad()
-     def validation_step(self, batch, batch_idx):
-         loss, *_ = self.shared_step(batch)
-
-         for t in self.noisy_acc:
-             _, logits, _, targets = self.shared_step(batch, t)
-             self.noisy_acc[t]['acc@1'].append(self.compute_top_k(logits, targets, k=1, reduction='mean'))
-             self.noisy_acc[t]['acc@5'].append(self.compute_top_k(logits, targets, k=5, reduction='mean'))
-
-         return loss
-
-     def configure_optimizers(self):
-         optimizer = AdamW(self.model.parameters(), lr=self.learning_rate, weight_decay=self.weight_decay)
-
-         if self.use_scheduler:
-             scheduler = instantiate_from_config(self.scheduler_config)
-
-             print("Setting up LambdaLR scheduler...")
-             scheduler = [
-                 {
-                     'scheduler': LambdaLR(optimizer, lr_lambda=scheduler.schedule),
-                     'interval': 'step',
-                     'frequency': 1
-                 }]
-             return [optimizer], scheduler
-
-         return optimizer
-
-     @torch.no_grad()
-     def log_images(self, batch, N=8, *args, **kwargs):
-         log = dict()
-         x = self.get_input(batch, self.diffusion_model.first_stage_key)
-         log['inputs'] = x
-
-         y = self.get_conditioning(batch)
-
-         if self.label_key == 'class_label':
-             y = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"])
-             log['labels'] = y
-
-         if ismap(y):
-             log['labels'] = self.diffusion_model.to_rgb(y)
-
-             for step in range(self.log_steps):
-                 current_time = step * self.log_time_interval
-
-                 _, logits, x_noisy, _ = self.shared_step(batch, t=current_time)
-
-                 log[f'inputs@t{current_time}'] = x_noisy
-
-                 pred = F.one_hot(logits.argmax(dim=1), num_classes=self.num_classes)
-                 pred = rearrange(pred, 'b h w c -> b c h w')
-
-                 log[f'pred@t{current_time}'] = self.diffusion_model.to_rgb(pred)
-
-         for key in log:
-             log[key] = log[key][:N]
-
-         return log
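`compute_top_k` above counts, per sample, whether the true label appears among the `k` largest logits. The same check can be sketched without torch, which makes the semantics easy to verify (the function name `top_k_accuracy` is mine, not from the file):

```python
def top_k_accuracy(logits, labels, k):
    # Fraction of samples whose true label is among the k largest logits.
    hits = 0
    for row, label in zip(logits, labels):
        # Indices of the k highest-scoring classes for this sample.
        top_k = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in top_k
    return hits / len(labels)

logits = [[0.1, 0.7, 0.2],   # argmax is class 1, true label 1 -> top-1 hit
          [0.5, 0.3, 0.2]]   # argmax is class 0, true label 2 -> top-1 miss
print(top_k_accuracy(logits, [1, 2], k=1))  # → 0.5
```

The torch version does the same thing vectorized: `torch.topk` returns the top-k indices per row, and the equality against `labels[:, None]` broadcasts over those k columns.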
 
spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/CLAP/CLAPWrapper.py DELETED
@@ -1,257 +0,0 @@
- import collections
- import math
- import os
- import random
- import re
-
- import numpy as np
- import torch
- import torch.nn.functional as F
- import torchaudio
- import torchaudio.transforms as T
- from importlib_resources import files
- from transformers import AutoTokenizer
-
- from ldm.modules.encoders.CLAP.utils import read_config_as_args
- from ldm.modules.encoders.CLAP.clap import CLAP
-
-
- class CLAPWrapper():
-     """
-     A class for interfacing with the CLAP model.
-     """
-
-     def __init__(self, model_fp, device):
-         self.np_str_obj_array_pattern = re.compile(r'[SaUO]')
-         self.file_path = os.path.realpath(__file__)
-         self.default_collate_err_msg_format = (
-             "default_collate: batch must contain tensors, numpy arrays, numbers, "
-             "dicts or lists; found {}")
-         self.config_as_str = files('ldm').joinpath('modules/encoders/CLAP/config.yml').read_text()
-         self.model_fp = model_fp
-         self.device = device
-         self.clap, self.tokenizer, self.args = self.load_clap()
-
-     def load_clap(self):
-         r"""Load the CLAP model with args from the config file"""
-         args = read_config_as_args(self.config_as_str, is_config_str=True)
-
-         if 'bert' in args.text_model:
-             self.token_keys = ['input_ids', 'token_type_ids', 'attention_mask']
-         else:
-             self.token_keys = ['input_ids', 'attention_mask']
-
-         clap = CLAP(
-             audioenc_name=args.audioenc_name,
-             sample_rate=args.sampling_rate,
-             window_size=args.window_size,
-             hop_size=args.hop_size,
-             mel_bins=args.mel_bins,
-             fmin=args.fmin,
-             fmax=args.fmax,
-             classes_num=args.num_classes,
-             out_emb=args.out_emb,
-             text_model=args.text_model,
-             transformer_embed_dim=args.transformer_embed_dim,
-             d_proj=args.d_proj
-         )
-
-         # Load pretrained weights for the model
-         model_state_dict = torch.load(self.model_fp, map_location=torch.device('cpu'))['model']
-         clap.load_state_dict(model_state_dict)
-
-         clap.eval()  # set clap in eval mode
-         tokenizer = AutoTokenizer.from_pretrained(args.text_model)
-
-         clap = clap.to(self.device)
-
-         return clap, tokenizer, args
-
-     def default_collate(self, batch):
-         r"""Puts each data field into a tensor with outer dimension batch size"""
-         elem = batch[0]
-         elem_type = type(elem)
-         if isinstance(elem, torch.Tensor):
-             out = None
-             if torch.utils.data.get_worker_info() is not None:
-                 # If we're in a background process, concatenate directly into a
-                 # shared memory tensor to avoid an extra copy
-                 numel = sum([x.numel() for x in batch])
-                 storage = elem.storage()._new_shared(numel)
-                 out = elem.new(storage)
-             return torch.stack(batch, 0, out=out)
-         elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
-                 and elem_type.__name__ != 'string_':
-             if elem_type.__name__ == 'ndarray' or elem_type.__name__ == 'memmap':
-                 # array of string classes and object
-                 if self.np_str_obj_array_pattern.search(elem.dtype.str) is not None:
-                     raise TypeError(
-                         self.default_collate_err_msg_format.format(elem.dtype))
-                 return self.default_collate([torch.as_tensor(b) for b in batch])
-             elif elem.shape == ():  # scalars
-                 return torch.as_tensor(batch)
-         elif isinstance(elem, float):
-             return torch.tensor(batch, dtype=torch.float64)
-         elif isinstance(elem, int):
-             return torch.tensor(batch)
-         elif isinstance(elem, str):
-             return batch
-         elif isinstance(elem, collections.abc.Mapping):
-             return {key: self.default_collate([d[key] for d in batch]) for key in elem}
-         elif isinstance(elem, tuple) and hasattr(elem, '_fields'):  # namedtuple
-             return elem_type(*(self.default_collate(samples) for samples in zip(*batch)))
-         elif isinstance(elem, collections.abc.Sequence):
-             # check to make sure that the elements in batch have consistent size
-             it = iter(batch)
-             elem_size = len(next(it))
-             if not all(len(elem) == elem_size for elem in it):
-                 raise RuntimeError(
-                     'each element in list of batch should be of equal size')
-             transposed = zip(*batch)
-             return [self.default_collate(samples) for samples in transposed]
-
-         raise TypeError(self.default_collate_err_msg_format.format(elem_type))
-
-     def load_audio_into_tensor(self, audio_path, audio_duration, resample=False):
-         r"""Loads an audio file and returns raw audio."""
-         # Randomly sample a segment of audio_duration from the clip or pad to match duration
-         audio_time_series, sample_rate = torchaudio.load(audio_path)
-         resample_rate = self.args.sampling_rate
-         if resample:
-             resampler = T.Resample(sample_rate, resample_rate)
-             audio_time_series = resampler(audio_time_series)
-         audio_time_series = audio_time_series.reshape(-1)
-
-         if audio_duration * sample_rate >= audio_time_series.shape[0]:
-             # audio_time_series is shorter than the predefined audio duration,
-             # so audio_time_series is extended
-             repeat_factor = int(np.ceil((audio_duration * sample_rate) /
-                                         audio_time_series.shape[0]))
-             # Repeat audio_time_series by repeat_factor to match audio_duration
-             audio_time_series = audio_time_series.repeat(repeat_factor)
-             # remove excess part of audio_time_series
-             audio_time_series = audio_time_series[0:audio_duration * sample_rate]
-         else:
-             # audio_time_series is longer than the predefined audio duration,
-             # so audio_time_series is trimmed
-             start_index = random.randrange(
-                 audio_time_series.shape[0] - audio_duration * sample_rate)
-             audio_time_series = audio_time_series[start_index:start_index +
-                                                   audio_duration * sample_rate]
-         return torch.FloatTensor(audio_time_series)
-
-     def preprocess_audio(self, audio_files, resample):
-         r"""Load a list of audio files and return raw audio"""
-         audio_tensors = []
-         for audio_file in audio_files:
-             audio_tensor = self.load_audio_into_tensor(
-                 audio_file, self.args.duration, resample)
-             audio_tensor = audio_tensor.reshape(1, -1).to(self.device)
-             audio_tensors.append(audio_tensor)
-         return self.default_collate(audio_tensors)
-
-     def preprocess_text(self, text_queries, text_len=100):
-         r"""Load a list of class labels and return tokenized text"""
-         device = next(self.clap.parameters()).device
-         tokenized_texts = []
-         for ttext in text_queries:
-             tok = self.tokenizer.encode_plus(
-                 text=ttext, add_special_tokens=True, max_length=text_len, pad_to_max_length=True, return_tensors="pt")
-             for key in self.token_keys:
-                 tok[key] = tok[key].reshape(-1).to(device)
-             tokenized_texts.append(tok)
-         return self.default_collate(tokenized_texts)
-
-     def get_text_embeddings(self, class_labels):
-         r"""Load a list of class labels and return text embeddings"""
-         preprocessed_text = self.preprocess_text(class_labels)
-         text_embeddings = self._get_text_embeddings(preprocessed_text)
-         text_embeddings = text_embeddings / torch.norm(text_embeddings, dim=-1, keepdim=True)
-         return text_embeddings
-
-     def get_audio_embeddings(self, audio_files, resample):
-         r"""Load a list of audio files and return audio embeddings"""
-         preprocessed_audio = self.preprocess_audio(audio_files, resample)
-         audio_embeddings = self._get_audio_embeddings(preprocessed_audio)
-         audio_embeddings = audio_embeddings / torch.norm(audio_embeddings, dim=-1, keepdim=True)
-         return audio_embeddings
-
-     def _get_text_embeddings(self, preprocessed_text):
-         r"""Load preprocessed text and return text embeddings"""
-         with torch.no_grad():
-             text_embeddings = self.clap.caption_encoder(preprocessed_text)
-             text_embeddings = text_embeddings / torch.norm(text_embeddings, dim=-1, keepdim=True)
-             return text_embeddings
-
-     def _get_audio_embeddings(self, preprocessed_audio):
-         r"""Load preprocessed audio and return audio embeddings"""
-         with torch.no_grad():
-             preprocessed_audio = preprocessed_audio.reshape(
-                 preprocessed_audio.shape[0], preprocessed_audio.shape[2])
-             # [0] is the audio embedding, [1] holds the output class probabilities
-             audio_embeddings = self.clap.audio_encoder(preprocessed_audio)[0]
-             audio_embeddings = audio_embeddings / torch.norm(audio_embeddings, dim=-1, keepdim=True)
-             return audio_embeddings
-
-     def compute_similarity(self, audio_embeddings, text_embeddings):
-         r"""Compute similarity between text and audio embeddings"""
-         logit_scale = self.clap.logit_scale.exp()
-         similarity = logit_scale * text_embeddings @ audio_embeddings.T
-         return similarity.T
-
-     def _generic_batch_inference(self, func, *args):
-         r"""Process audio and/or text per batch"""
-         input_tmp = args[0]
-         batch_size = args[-1]
-         # args[0] has audio_files, args[1] has class_labels
-         inputs = [args[0], args[1]] if len(args) == 3 else [args[0]]
-         args0_len = len(args[0])
-         # compute text_embeddings once for all the audio_files batches
-         if len(inputs) == 2:
-             text_embeddings = self.get_text_embeddings(args[1])
-             inputs = [args[0], args[1], text_embeddings]
-         dataset_idx = 0
-         for _ in range(math.ceil(args0_len / batch_size)):
-             next_batch_idx = dataset_idx + batch_size
-             # batch size is bigger than available audio/text items
-             if next_batch_idx >= args0_len:
-                 inputs[0] = input_tmp[dataset_idx:]
-                 return func(*tuple(inputs))
-             else:
-                 inputs[0] = input_tmp[dataset_idx:next_batch_idx]
-                 yield func(*tuple(inputs))
-             dataset_idx = next_batch_idx
-
-     def get_audio_embeddings_per_batch(self, audio_files, batch_size):
-         r"""Load audio files and return audio embeddings per batch"""
-         return self._generic_batch_inference(self.get_audio_embeddings, audio_files, batch_size)
-
-     def get_text_embeddings_per_batch(self, class_labels, batch_size):
-         r"""Load class labels and return text embeddings per batch"""
-         return self._generic_batch_inference(self.get_text_embeddings, class_labels, batch_size)
-
-     def classify_audio_files_per_batch(self, audio_files, class_labels, batch_size):
-         r"""Compute classification probabilities for each audio recording in a batch and each class label"""
-         return self._generic_batch_inference(self.classify_audio_files, audio_files, class_labels, batch_size)
-
-
- if __name__ == '__main__':
-     # Load and initialize CLAP
-     weights_path = "/home1/huangrongjie/Project/Diffusion/LatentDiffusion/CLAP/CLAP_weights_2022.pth"
-     clap_model = CLAPWrapper(weights_path, device='cpu')
-
-     y = ["A woman talks nearby as water pours", "Multiple clanging and clanking sounds"]
-     x = ['/home2/huangjiawei/data/audiocaps/train/Yr1nicOVtvkQ.wav', '/home2/huangjiawei/data/audiocaps/train/YUDGBjjwyaqE.wav']
-
-     # Computing text embeddings
-     text_embeddings = clap_model.get_text_embeddings(y)
-
-     # Computing audio embeddings
-     audio_embeddings = clap_model.get_audio_embeddings(x, resample=True)
-     similarity = clap_model.compute_similarity(audio_embeddings, text_embeddings)
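`load_audio_into_tensor` above pads short clips by tiling them and trims long ones to a random window, so every clip ends up exactly `audio_duration * sample_rate` samples long. The pad-by-repetition arithmetic can be sketched on a plain list (the name `fit_to_length` is mine; trimming is deterministic here for clarity, where the original picks a random start):

```python
import math

def fit_to_length(samples, target_len):
    # Tile a short clip until it covers target_len, then cut the excess;
    # a long clip is simply truncated from the start.
    if target_len >= len(samples):
        repeat_factor = math.ceil(target_len / len(samples))
        samples = samples * repeat_factor
    return samples[:target_len]

print(fit_to_length([1, 2, 3], 7))     # → [1, 2, 3, 1, 2, 3, 1]
print(fit_to_length([1, 2, 3, 4, 5], 3))  # → [1, 2, 3]
```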
 
spaces/AIKey/facetofacechat/style.css DELETED
@@ -1,28 +0,0 @@
- body {
-   padding: 2rem;
-   font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
- }
-
- h1 {
-   font-size: 16px;
-   margin-top: 0;
- }
-
- p {
-   color: rgb(107, 114, 128);
-   font-size: 15px;
-   margin-bottom: 10px;
-   margin-top: 5px;
- }
-
- .card {
-   max-width: 620px;
-   margin: 0 auto;
-   padding: 16px;
-   border: 1px solid lightgray;
-   border-radius: 16px;
- }
-
- .card p:last-child {
-   margin-bottom: 0;
- }
 
spaces/AIatUIUC/CodeLATS/generators/model.py DELETED
@@ -1,120 +0,0 @@
- from typing import List, Union, Optional, Literal
- import dataclasses
-
- from tenacity import (
-     retry,
-     stop_after_attempt,  # type: ignore
-     wait_random_exponential,  # type: ignore
- )
- import openai
-
- MessageRole = Literal["system", "user", "assistant"]
-
-
- @dataclasses.dataclass()
- class Message():
-     role: MessageRole
-     content: str
-
-
- def message_to_str(message: Message) -> str:
-     return f"{message.role}: {message.content}"
-
-
- def messages_to_str(messages: List[Message]) -> str:
-     return "\n".join([message_to_str(message) for message in messages])
-
-
- @retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(6))
- def gpt_completion(
-     model: str,
-     prompt: str,
-     max_tokens: int = 1024,
-     stop_strs: Optional[List[str]] = None,
-     temperature: float = 0.0,
-     num_comps=1,
- ) -> Union[List[str], str]:
-     response = openai.Completion.create(
-         model=model,
-         prompt=prompt,
-         temperature=temperature,
-         max_tokens=max_tokens,
-         top_p=1,
-         frequency_penalty=0.0,
-         presence_penalty=0.0,
-         stop=stop_strs,
-         n=num_comps,
-     )
-     if num_comps == 1:
-         return response.choices[0].text  # type: ignore
-
-     return [choice.text for choice in response.choices]  # type: ignore
-
-
- @retry(wait=wait_random_exponential(min=1, max=180), stop=stop_after_attempt(6))
- def gpt_chat(
-     model: str,
-     messages: List,
-     max_tokens: int = 1024,
-     temperature: float = 0.0,
-     num_comps=1,
- ) -> Union[List[str], str]:
-     try:
-         response = openai.ChatCompletion.create(
-             model=model,
-             messages=[dataclasses.asdict(message) for message in messages],
-             max_tokens=max_tokens,
-             temperature=temperature,
-             top_p=1,
-             frequency_penalty=0.0,
-             presence_penalty=0.0,
-             n=num_comps,
-         )
-         if num_comps == 1:
-             return response.choices[0].message.content  # type: ignore
-         return [choice.message.content for choice in response.choices]  # type: ignore
-     except Exception as e:
-         print(f"An error occurred while calling OpenAI: {e}")
-         raise
-
-
- class ModelBase():
-     def __init__(self, name: str):
-         self.name = name
-         self.is_chat = False
-
-     def __repr__(self) -> str:
-         return f'{self.name}'
-
-     def generate_chat(self, messages: List[Message], max_tokens: int = 1024, temperature: float = 0.2, num_comps: int = 1) -> Union[List[str], str]:
-         raise NotImplementedError
-
-     def generate(self, prompt: str, max_tokens: int = 1024, stop_strs: Optional[List[str]] = None, temperature: float = 0.0, num_comps=1) -> Union[List[str], str]:
-         raise NotImplementedError
-
-
- class GPTChat(ModelBase):
-     def __init__(self, model_name: str):
-         self.name = model_name
-         self.is_chat = True
-
-     def generate_chat(self, messages: List[Message], max_tokens: int = 1024, temperature: float = 0.2, num_comps: int = 1) -> Union[List[str], str]:
-         return gpt_chat(self.name, messages, max_tokens, temperature, num_comps)
-
-
- class GPT4(GPTChat):
-     def __init__(self):
-         super().__init__("gpt-4")
-
-
- class GPT35(GPTChat):
-     def __init__(self):
-         super().__init__("gpt-3.5-turbo")
-
-
- class GPTDavinci(ModelBase):
-     def __init__(self, model_name: str):
-         self.name = model_name
-
-     def generate(self, prompt: str, max_tokens: int = 1024, stop_strs: Optional[List[str]] = None, temperature: float = 0, num_comps=1) -> Union[List[str], str]:
-         return gpt_completion(self.name, prompt, max_tokens, stop_strs, temperature, num_comps)
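The `Message` dataclass and `messages_to_str` helper above flatten a chat transcript into a single prompt string, which is the shape completion-style models expect. A minimal sketch of that round trip (self-contained, no OpenAI call):

```python
import dataclasses
from typing import List, Literal

MessageRole = Literal["system", "user", "assistant"]

@dataclasses.dataclass()
class Message:
    role: MessageRole
    content: str

def messages_to_str(messages: List[Message]) -> str:
    # One "role: content" line per message, joined with newlines.
    return "\n".join(f"{m.role}: {m.content}" for m in messages)

chat = [Message("system", "You are terse."), Message("user", "Hi")]
print(messages_to_str(chat))
# → system: You are terse.
# → user: Hi
```

`dataclasses.asdict(message)` on the same objects yields the `{"role": ..., "content": ...}` dicts that `gpt_chat` passes to the chat API.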
 
spaces/ARTeLab/ARTeLab-SummIT/style.css DELETED
@@ -1,38 +0,0 @@
- body {
-   background-color: #eee;
- }
-
- .stTextInput > div > div > input {
-   color: #4F8BF9;
- }
-
- .stTextArea > div > div > input {
-   color: #4F8BF9;
-   min-height: 500px;
- }
 
spaces/Abhilashvj/planogram-compliance/utils/loggers/clearml/README.md DELETED
@@ -1,230 +0,0 @@
1
- # ClearML Integration
2
-
3
- <img align="center" src="https://github.com/thepycoder/clearml_screenshots/raw/main/logos_dark.png#gh-light-mode-only" alt="Clear|ML"><img align="center" src="https://github.com/thepycoder/clearml_screenshots/raw/main/logos_light.png#gh-dark-mode-only" alt="Clear|ML">
4
-
5
- ## About ClearML
6
-
7
- [ClearML](https://cutt.ly/yolov5-tutorial-clearml) is an [open-source](https://github.com/allegroai/clearml) toolbox designed to save you time ⏱️.
8
-
9
- 🔨 Track every YOLOv5 training run in the <b>experiment manager</b>
10
-
11
- 🔧 Version and easily access your custom training data with the integrated ClearML <b>Data Versioning Tool</b>
12
-
13
- 🔦 <b>Remotely train and monitor</b> your YOLOv5 training runs using ClearML Agent
14
-
15
- 🔬 Get the very best mAP using ClearML <b>Hyperparameter Optimization</b>
16
-
17
- 🔭 Turn your newly trained <b>YOLOv5 model into an API</b> with just a few commands using ClearML Serving
18
-
19
- <br />
20
- And so much more. It's up to you how many of these tools you want to use, you can stick to the experiment manager, or chain them all together into an impressive pipeline!
21
- <br />
22
- <br />
23
-
24
- ![ClearML scalars dashboard](https://github.com/thepycoder/clearml_screenshots/raw/main/experiment_manager_with_compare.gif)
25
-
26
-
27
- <br />
28
- <br />
29
-
30
- ## 🦾 Setting Things Up
31
-
32
- To keep track of your experiments and/or data, ClearML needs to communicate to a server. You have 2 options to get one:
33
-
34
- Either sign up for free to the [ClearML Hosted Service](https://cutt.ly/yolov5-tutorial-clearml) or you can set up your own server, see [here](https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server). Even the server is open-source, so even if you're dealing with sensitive data, you should be good to go!
35
-
36
- 1. Install the `clearml` python package:
37
-
38
- ```bash
39
- pip install clearml
40
- ```
41
-
42
- 1. Connect the ClearML SDK to the server by [creating credentials](https://app.clear.ml/settings/workspace-configuration) (go right top to Settings -> Workspace -> Create new credentials), then execute the command below and follow the instructions:
43
-
44
- ```bash
45
- clearml-init
46
- ```
47
-
48
- That's it! You're done 😎
49
-
50
- <br />
51
-
52
- ## 🚀 Training YOLOv5 With ClearML
53
-
54
- To enable ClearML experiment tracking, simply install the ClearML pip package.
55
-
56
- ```bash
57
- pip install clearml>=1.2.0
58
- ```
59
-
60
- This will enable integration with the YOLOv5 training script. Every training run from now on, will be captured and stored by the ClearML experiment manager.
61
-
62
- If you want to change the `project_name` or `task_name`, use the `--project` and `--name` arguments of the `train.py` script, by default the project will be called `YOLOv5` and the task `Training`.
63
- PLEASE NOTE: ClearML uses `/` as a delimter for subprojects, so be careful when using `/` in your project name!
64
-
65
- ```bash
66
- python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache
67
- ```
68
-
69
- or with custom project and task name:
70
- ```bash
71
- python train.py --project my_project --name my_training --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache
72
- ```
73
-
74
- This will capture:
75
- - Source code + uncommitted changes
76
- - Installed packages
77
- - (Hyper)parameters
78
- - Model files (use `--save-period n` to save a checkpoint every n epochs)
79
- - Console output
80
- - Scalars (mAP_0.5, mAP_0.5:0.95, precision, recall, losses, learning rates, ...)
81
- - General info such as machine details, runtime, creation date etc.
82
- - All produced plots such as label correlogram and confusion matrix
83
- - Images with bounding boxes per epoch
84
- - Mosaic per epoch
85
- - Validation images per epoch
86
- - ...
87
-
88
- That's a lot right? 🤯
89
- Now, we can visualize all of this information in the ClearML UI to get an overview of our training progress. Add custom columns to the table view (such as e.g. mAP_0.5) so you can easily sort on the best performing model. Or select multiple experiments and directly compare them!
90
-
91
- There even more we can do with all of this information, like hyperparameter optimization and remote execution, so keep reading if you want to see how that works!
92
-
93
- <br />
-
- ## 🔗 Dataset Version Management
-
- Versioning your data separately from your code is generally a good idea and makes it easy to acquire the latest version too. This repository supports supplying a dataset version ID, and it will make sure to get the data if it's not there yet. Next to that, this workflow also saves the used dataset ID as part of the task parameters, so you will always know for sure which data was used in which experiment!
-
- ![ClearML Dataset Interface](https://github.com/thepycoder/clearml_screenshots/raw/main/clearml_data.gif)
-
- ### Prepare Your Dataset
-
- The YOLOv5 repository supports a number of different datasets by using yaml files containing their information. By default datasets are downloaded to the `../datasets` folder relative to the repository root folder. So if you downloaded the `coco128` dataset using the link in the yaml or with the scripts provided by yolov5, you get this folder structure:
-
- ```
- ..
- |_ yolov5
- |_ datasets
-     |_ coco128
-         |_ images
-         |_ labels
-         |_ LICENSE
-         |_ README.txt
- ```
- But this can be any dataset you wish. Feel free to use your own, as long as you keep to this folder structure.
-
- Next, ⚠️**copy the corresponding yaml file to the root of the dataset folder**⚠️. This yaml file contains the information ClearML will need to properly use the dataset. You can make this yourself too, of course; just follow the structure of the example yamls.
-
- Basically we need the following keys: `path`, `train`, `test`, `val`, `nc`, `names`.
-
- ```
- ..
- |_ yolov5
- |_ datasets
-     |_ coco128
-         |_ images
-         |_ labels
-         |_ coco128.yaml  # <---- HERE!
-         |_ LICENSE
-         |_ README.txt
- ```
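Before uploading, it can be worth sanity-checking that the parsed yaml actually contains those keys. A minimal sketch in plain Python (`missing_dataset_keys` and the example dict are hypothetical, not part of YOLOv5 or ClearML):

```python
# Hypothetical sanity check for a dataset yaml, not part of YOLOv5/ClearML.
REQUIRED_KEYS = {"path", "train", "test", "val", "nc", "names"}

def missing_dataset_keys(data: dict) -> list:
    """Return the required dataset-yaml keys that are absent, sorted."""
    return sorted(REQUIRED_KEYS - data.keys())

# Roughly what a parsed coco128.yaml looks like (names truncated here):
example = {
    "path": "../datasets/coco128",
    "train": "images/train2017",
    "val": "images/train2017",
    "test": "",
    "nc": 80,
    "names": ["person", "bicycle", "car"],
}

print(missing_dataset_keys(example))        # -> []
print(missing_dataset_keys({"path": "x"}))  # -> ['names', 'nc', 'test', 'train', 'val']
```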
-
- ### Upload Your Dataset
-
- To get this dataset into ClearML as a versioned dataset, go to the dataset root folder and run the following command:
- ```bash
- cd coco128
- clearml-data sync --project YOLOv5 --name coco128 --folder .
- ```
-
- The command `clearml-data sync` is actually a shorthand command. You could also run these commands one after the other:
- ```bash
- # Optionally add --parent <parent_dataset_id> if you want to base
- # this version on another dataset version, so no duplicate files are uploaded!
- clearml-data create --name coco128 --project YOLOv5
- clearml-data add --files .
- clearml-data close
- ```
-
- ### Run Training Using A ClearML Dataset
-
- Now that you have a ClearML dataset, you can very simply use it to train custom YOLOv5 🚀 models!
-
- ```bash
- python train.py --img 640 --batch 16 --epochs 3 --data clearml://<your_dataset_id> --weights yolov5s.pt --cache
- ```
-
- <br />
-
-
- ## 👀 Hyperparameter Optimization
-
- Now that we have our experiments and data versioned, it's time to take a look at what we can build on top!
-
- Using the code information, installed packages and environment details, the experiment itself is now **completely reproducible**. In fact, ClearML allows you to clone an experiment and even change its parameters. We can then just rerun it with these new parameters automatically; this is basically what HPO does!
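To make that concrete, here is a toy clone-and-rerun loop in plain Python. It is purely illustrative (the config names and objective are made up; a real trial would launch a training run), but it has the shape of what an HPO loop does:

```python
import random

# Toy HPO loop: sample hyperparameter "clones" of a base config,
# score each trial, and keep the best one. The objective is a
# stand-in for "train and return mAP"; here, values closer to
# (0.01, 0.9) simply score higher.
base_config = {"lr": 0.01, "momentum": 0.937}
search_space = {"lr": [0.001, 0.01, 0.1], "momentum": [0.8, 0.9, 0.937]}

def objective(cfg):
    return -abs(cfg["lr"] - 0.01) - abs(cfg["momentum"] - 0.9)

random.seed(0)
trials = [
    {key: random.choice(values) for key, values in search_space.items()}
    for _ in range(10)
]
best = max(trials, key=objective)
print(best)
```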
165
-
166
- To **run hyperparameter optimization locally**, we've included a pre-made script for you. Just make sure a training task has been run at least once, so it is in the ClearML experiment manager, we will essentially clone it and change its hyperparameters.
167
-
168
- You'll need to fill in the ID of this `template task` in the script found at `utils/loggers/clearml/hpo.py` and then just run it :) You can change `task.execute_locally()` to `task.execute()` to put it in a ClearML queue and have a remote agent work on it instead.
169
-
170
- ```bash
171
- # To use optuna, install it first, otherwise you can change the optimizer to just be RandomSearch
172
- pip install optuna
173
- python utils/loggers/clearml/hpo.py
174
- ```
175
-
176
- ![HPO](https://github.com/thepycoder/clearml_screenshots/raw/main/hpo.png)
-
- ## 🤯 Remote Execution (advanced)
-
- Running HPO locally is really handy, but what if we want to run our experiments on a remote machine instead? Maybe you have access to a very powerful GPU machine on-site, or you have some budget to use cloud GPUs.
- This is where the ClearML Agent comes into play. Check out what the agent can do here:
-
- - [YouTube video](https://youtu.be/MX3BrXnaULs)
- - [Documentation](https://clear.ml/docs/latest/docs/clearml_agent)
-
- In short: every experiment tracked by the experiment manager contains enough information to reproduce it on a different machine (installed packages, uncommitted changes etc.). So a ClearML agent does just that: it listens to a queue for incoming tasks, and when it finds one, it recreates the environment and runs it while still reporting scalars, plots etc. to the experiment manager.
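Conceptually, an agent is little more than a worker loop on a queue. A toy sketch in plain Python (an illustration only, not the ClearML implementation; the real queue lives on the ClearML server):

```python
import queue

# Toy agent loop: pull task descriptions off a queue and "run" them.
# A real ClearML agent would recreate each task's environment and
# execute the recorded code while reporting scalars and plots.
task_queue = queue.Queue()
task_queue.put({"id": "abc123", "name": "Training"})
task_queue.put({"id": "def456", "name": "Training (clone)"})

completed = []
while not task_queue.empty():
    task = task_queue.get()
    # ... recreate environment, run code, report results ...
    completed.append(task["id"])

print(completed)  # -> ['abc123', 'def456']
```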
-
- You can turn any machine (a cloud VM, a local GPU machine, your own laptop, ...) into a ClearML agent by simply running:
- ```bash
- clearml-agent daemon --queue <queues_to_listen_to> [--docker]
- ```
-
- ### Cloning, Editing And Enqueuing
-
- With our agent running, we can give it some work. Remember from the HPO section that we can clone a task and edit the hyperparameters? We can do that from the interface too!
-
- 🪄 Clone the experiment by right-clicking it
-
- 🎯 Edit the hyperparameters to what you wish them to be
-
- ⏳ Enqueue the task to any of the queues by right-clicking it
-
- ![Enqueue a task from the UI](https://github.com/thepycoder/clearml_screenshots/raw/main/enqueue.gif)
-
- ### Executing A Task Remotely
-
- Now you can clone a task like we explained above, or simply mark your current script by adding `task.execute_remotely()`, and on execution it will be put into a queue for the agent to start working on!
-
- To run the YOLOv5 training script remotely, all you have to do is add this line to the train.py script after the ClearML logger has been instantiated:
- ```python
- # ...
- # Loggers
- data_dict = None
- if RANK in {-1, 0}:
-     loggers = Loggers(save_dir, weights, opt, hyp, LOGGER)  # loggers instance
-     if loggers.clearml:
-         loggers.clearml.task.execute_remotely(queue='my_queue')  # <------ ADD THIS LINE
-         # data_dict is either None if the user did not choose a ClearML dataset, or is filled in by ClearML
-         data_dict = loggers.clearml.data_dict
- # ...
- ```
- When running the training script after this change, Python will run the script up until that line, after which it will package the code and send it to the queue instead!
-
- ### Autoscaling workers
-
- ClearML comes with autoscalers too! This tool will automatically spin up new remote machines in the cloud of your choice (AWS, GCP, Azure) and turn them into ClearML agents for you whenever there are experiments detected in the queue. Once the tasks are processed, the autoscaler will automatically shut down the remote machines, and you stop paying!
-
- Check out the autoscalers getting started video below.
-
- [![Watch the video](https://img.youtube.com/vi/j4XVMAaUt3E/0.jpg)](https://youtu.be/j4XVMAaUt3E)
 
spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/trimSuffix.ts DELETED
@@ -1,6 +0,0 @@
- export function trimSuffix(input: string, end: string): string {
- 	if (input.endsWith(end)) {
- 		return input.slice(0, input.length - end.length);
- 	}
- 	return input;
- }
 
spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/login/callback/updateUser.spec.ts DELETED
@@ -1,143 +0,0 @@
- import { assert, it, describe, afterEach, vi, expect } from "vitest";
- import type { Cookies } from "@sveltejs/kit";
- import { collections } from "$lib/server/database";
- import { updateUser } from "./updateUser";
- import { DEFAULT_SETTINGS } from "$lib/types/Settings";
- import { defaultModel } from "$lib/server/models";
-
- const userData = {
- 	preferred_username: "new-username",
- 	name: "name",
- 	picture: "https://example.com/avatar.png",
- 	sub: "1234567890",
- };
-
- const locals = {
- 	userId: "1234567890",
- 	sessionId: "1234567890",
- };
-
- // @ts-expect-error SvelteKit cookies dumb mock
- const cookiesMock: Cookies = {
- 	set: vi.fn(),
- };
-
- const insertRandomUser = async () => {
- 	/*const res = await collections.users.insertOne({
- 		_id: new ObjectId(),
- 		createdAt: new Date(),
- 		updatedAt: new Date(),
- 		username: "base-username",
- 		name: userData.name,
- 		avatarUrl: userData.picture,
- 		hfUserId: userData.sub,
- 		sessionId: locals.sessionId,
- 	});
-
- 	return res.insertedId;*/
- };
-
- const insertRandomConversations = async (count: number) => {
- 	/*const res = await collections.conversations.insertMany(
- 		new Array(count).fill(0).map(() => ({
- 			_id: new ObjectId(),
- 			title: "random title",
- 			messages: [],
- 			model: defaultModel.id,
- 			createdAt: new Date(),
- 			updatedAt: new Date(),
- 			sessionId: locals.sessionId,
- 		}))
- 	);
-
- 	return res.insertedIds;*/
- };
-
- describe("login", () => {
- 	it("should update user if existing", async () => {
- 		/*await insertRandomUser();
-
- 		await updateUser({ userData, locals, cookies: cookiesMock });
-
- 		const existingUser = await collections.users.findOne({ hfUserId: userData.sub });
-
- 		assert.equal(existingUser?.name, userData.name);
-
- 		expect(cookiesMock.set).toBeCalledTimes(1);*/
- 	});
-
- 	it("should migrate pre-existing conversations for new user", async () => {
- 		/*const insertedId = await insertRandomUser();
-
- 		await insertRandomConversations(2);
-
- 		await updateUser({ userData, locals, cookies: cookiesMock });
-
- 		const conversationCount = await collections.conversations.countDocuments({
- 			userId: insertedId,
- 			sessionId: { $exists: false },
- 		});
-
- 		assert.equal(conversationCount, 2);
-
- 		await collections.conversations.deleteMany({ userId: insertedId });*/
- 	});
-
- 	it("should create default settings for new user", async () => {
- 		/*await updateUser({ userData, locals, cookies: cookiesMock });
-
- 		const user = await collections.users.findOne({ sessionId: locals.sessionId });
-
- 		assert.exists(user);
-
- 		const settings = await collections.settings.findOne({ userId: user?._id });
-
- 		expect(settings).toMatchObject({
- 			userId: user?._id,
- 			updatedAt: expect.any(Date),
- 			createdAt: expect.any(Date),
- 			ethicsModalAcceptedAt: expect.any(Date),
- 			...DEFAULT_SETTINGS,
- 		});
-
- 		await collections.settings.deleteOne({ userId: user?._id });*/
- 	});
-
- 	it("should migrate pre-existing settings for pre-existing user", async () => {
- 		/*const { insertedId } = await collections.settings.insertOne({
- 			sessionId: locals.sessionId,
- 			ethicsModalAcceptedAt: new Date(),
- 			updatedAt: new Date(),
- 			createdAt: new Date(),
- 			...DEFAULT_SETTINGS,
- 			shareConversationsWithModelAuthors: false,
- 		});
-
- 		await updateUser({ userData, locals, cookies: cookiesMock });
-
- 		const settings = await collections.settings.findOne({
- 			_id: insertedId,
- 			sessionId: { $exists: false },
- 		});
-
- 		assert.exists(settings);
-
- 		const user = await collections.users.findOne({ hfUserId: userData.sub });
-
- 		expect(settings).toMatchObject({
- 			userId: user?._id,
- 			updatedAt: expect.any(Date),
- 			createdAt: expect.any(Date),
- 			ethicsModalAcceptedAt: expect.any(Date),
- 			...DEFAULT_SETTINGS,
- 			shareConversationsWithModelAuthors: false,
- 		});
-
- 		await collections.settings.deleteOne({ userId: user?._id });*/
- 	});
- });
-
- afterEach(async () => {
- 	/*await collections.users.deleteMany({ hfUserId: userData.sub });
- 	vi.clearAllMocks();*/
- });
 
spaces/AchyuthGamer/OpenGPT/g4f/Provider/needs_auth/__init__.py DELETED
@@ -1,6 +0,0 @@
- from .Bard import Bard
- from .Raycast import Raycast
- from .Theb import Theb
- from .HuggingChat import HuggingChat
- from .OpenaiChat import OpenaiChat
- from .OpenAssistant import OpenAssistant
 
spaces/Aditya9790/yolo7-object-tracking/scripts/get_coco.sh DELETED
@@ -1,22 +0,0 @@
- #!/bin/bash
- # COCO 2017 dataset http://cocodataset.org
- # Download command: bash ./scripts/get_coco.sh
-
- # Download/unzip labels
- d='./' # unzip directory
- url=https://github.com/ultralytics/yolov5/releases/download/v1.0/
- f='coco2017labels-segments.zip' # or 'coco2017labels.zip', 68 MB
- echo 'Downloading' $url$f ' ...'
- curl -L $url$f -o $f && unzip -q $f -d $d && rm $f & # download, unzip, remove in background
-
- # Download/unzip images
- d='./coco/images' # unzip directory
- url=http://images.cocodataset.org/zips/
- f1='train2017.zip' # 19G, 118k images
- f2='val2017.zip' # 1G, 5k images
- f3='test2017.zip' # 7G, 41k images (optional)
- for f in $f1 $f2 $f3; do
- 	echo 'Downloading' $url$f '...'
- 	curl -L $url$f -o $f && unzip -q $f -d $d && rm $f & # download, unzip, remove in background
- done
- wait # finish background tasks
 
spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/horizontal_tool.py DELETED
@@ -1,89 +0,0 @@
- from __future__ import annotations
- import json
- import asyncio
- from copy import deepcopy
- from colorama import Fore
- from itertools import cycle
-
- from typing import TYPE_CHECKING, List
-
- from . import decision_maker_registry
- from .base import BaseDecisionMaker
- from agentverse.logging import logger
- from agentverse.message import SolverMessage, Message
-
- if TYPE_CHECKING:
-     from agentverse.agents.base import BaseAgent
-     from agentverse.message import CriticMessage
-
-
- @decision_maker_registry.register("horizontal-tool")
- class HorizontalToolDecisionMaker(BaseDecisionMaker):
-     """
-     Discuss in a horizontal manner.
-     """
-
-     name: str = "horizontal_tool"
-     tools: List[dict] = []
-     tool_names: List[str] = []
-     tool_config: str = None
-
-     def __init__(self, *args, **kwargs):
-         assert kwargs.get("tool_config", None) is not None
-         with open(kwargs.get("tool_config"), "r") as f:
-             tools_dict = json.load(f)
-         tools = tools_dict["tools_json"]
-         tool_names = [t["name"] for t in tools]
-         super().__init__(tools=tools, tool_names=tool_names, *args, **kwargs)
-
-     # def step(
-     async def astep(
-         self,
-         agents: List[BaseAgent],
-         task_description: str,
-         previous_plan: str = "No solution yet.",
-         advice: str = "No advice yet.",
-         **kwargs,
-     ) -> List[str]:
-         agents[0].memory.reset()
-         if advice != "No advice yet.":
-             self.broadcast_messages(
-                 agents[1:], [Message(content=advice, sender="Evaluator")]
-             )
-         all_roles = "\n".join(
-             [f"{agent.name}: {agent.role_description}" for agent in agents[1:]]
-         )
-         end_flag = False
-         discussion_cnt = 0
-         for agent in cycle(agents[1:]):
-             discussion_cnt += 1
-             review: CriticMessage = await agent.astep(
-                 previous_plan, advice, task_description, all_roles
-             )
-             if review.content.strip().endswith("[END]"):
-                 review.content = review.content.strip().replace("[END]", "")
-                 if discussion_cnt >= len(agents) - 1:
-                     # Force all the agents to speak at least once.
-                     end_flag = True
-             if review.content != "":
-                 self.broadcast_messages(agents, [review])
-
-             logger.info("", "Reviews:", Fore.YELLOW)
-             logger.info(
-                 "",
-                 f"[{review.sender}]: {review.content}",
-                 Fore.YELLOW,
-             )
-             if end_flag:
-                 break
-
-         result: SolverMessage = agents[0].step(previous_plan, advice, task_description)
-         result_list = []
-         for res in result.content:
-             res_tmp = deepcopy(result)
-             res_tmp.content = " - ".join(res)
-             result_list.append(res_tmp)
-         return result_list
-
-     def reset(self):
-         pass
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/DelayCallMethods.js DELETED
@@ -1,17 +0,0 @@
- import PostUpdateDelayCall from '../../../../plugins/utils/time/PostUpdateDelayCall.js';
-
- export default {
-     delayCall(delay, callback, scope) {
-         // Invoke callback under scene's 'postupdate' event
-         this.timer = PostUpdateDelayCall(this, delay, callback, scope);
-         return this;
-     },
-
-     removeDelayCall() {
-         if (this.timer) {
-             this.timer.remove(false);
-             this.timer = undefined;
-         }
-         return this;
-     }
- }
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/GetChildrenHeight.js DELETED
@@ -1,24 +0,0 @@
- import { GetDisplayHeight } from '../../../plugins/utils/size/GetDisplaySize.js';
-
- var GetChildrenHeight = function () {
-     if (this.rexSizer.hidden) {
-         return 0;
-     }
-
-     var result = 0;
-     var children = this.sizerChildren;
-     var child, padding, childHeight;
-     for (var key in children) {
-         child = children[key];
-         childHeight = (child.isRexSizer) ?
-             Math.max(child.minHeight, child.childrenHeight) :
-             (child.minHeight !== undefined) ? child.minHeight : GetDisplayHeight(child);
-
-         padding = child.rexSizer.padding;
-         childHeight += (padding.top + padding.bottom);
-         result = Math.max(childHeight, result);
-     }
-     return result + this.space.top + this.space.bottom;
- }
-
- export default GetChildrenHeight;
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/alt_diffusion.md DELETED
@@ -1,47 +0,0 @@
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
-
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
- specific language governing permissions and limitations under the License.
- -->
-
- # AltDiffusion
-
- AltDiffusion was proposed in [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://huggingface.co/papers/2211.06679) by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu.
-
- The abstract from the paper is:
-
- *In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flicker30k-CN, and COCO-CN. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding.*
-
- ## Tips
-
- `AltDiffusion` is conceptually the same as [Stable Diffusion](./stable_diffusion/overview).
-
- <Tip>
-
- Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
-
- </Tip>
-
- ## AltDiffusionPipeline
-
- [[autodoc]] AltDiffusionPipeline
- 	- all
- 	- __call__
-
- ## AltDiffusionImg2ImgPipeline
-
- [[autodoc]] AltDiffusionImg2ImgPipeline
- 	- all
- 	- __call__
-
- ## AltDiffusionPipelineOutput
-
- [[autodoc]] pipelines.alt_diffusion.AltDiffusionPipelineOutput
- 	- all
- 	- __call__
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_2.md DELETED
@@ -1,139 +0,0 @@
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
-
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
- specific language governing permissions and limitations under the License.
- -->
-
- # Stable Diffusion 2
-
- Stable Diffusion 2 is a text-to-image _latent diffusion_ model built upon the work of the original [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release), and it was led by Robin Rombach and Katherine Crowson from [Stability AI](https://stability.ai/) and [LAION](https://laion.ai/).
-
- *The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels.
- These models are trained on an aesthetic subset of the [LAION-5B dataset](https://laion.ai/blog/laion-5b/) created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using [LAION’s NSFW filter](https://openreview.net/forum?id=M3Y74vmsMcY).*
-
- For more details about how Stable Diffusion 2 works and how it differs from the original Stable Diffusion, please refer to the official [announcement post](https://stability.ai/blog/stable-diffusion-v2-release).
-
- The architecture of Stable Diffusion 2 is more or less identical to the original [Stable Diffusion model](./text2img), so check out its API documentation for how to use Stable Diffusion 2. We recommend using the [`DPMSolverMultistepScheduler`] as it's currently the fastest scheduler.
-
- Stable Diffusion 2 is available for tasks like text-to-image, inpainting, super-resolution, and depth-to-image:
-
- | Task | Repository |
- |-------------------------|---------------------------------------------------------------------------------------------------------------|
- | text-to-image (512x512) | [stabilityai/stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) |
- | text-to-image (768x768) | [stabilityai/stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) |
- | inpainting | [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting) |
- | super-resolution | [stable-diffusion-x4-upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler) |
- | depth-to-image | [stabilityai/stable-diffusion-2-depth](https://huggingface.co/stabilityai/stable-diffusion-2-depth) |
-
- Here are some examples for how to use Stable Diffusion 2 for each task:
-
- <Tip>
-
- Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
-
- If you're interested in using one of the official checkpoints for a task, explore the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations!
-
- </Tip>
-
- ## Text-to-image
-
- ```py
- from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
- import torch
-
- repo_id = "stabilityai/stable-diffusion-2-base"
- pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16")
-
- pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
- pipe = pipe.to("cuda")
-
- prompt = "High quality photo of an astronaut riding a horse in space"
- image = pipe(prompt, num_inference_steps=25).images[0]
- image.save("astronaut.png")
- ```
-
- ## Inpainting
-
- ```py
- import PIL
- import requests
- import torch
- from io import BytesIO
-
- from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
-
-
- def download_image(url):
-     response = requests.get(url)
-     return PIL.Image.open(BytesIO(response.content)).convert("RGB")
-
-
- img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
- mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
-
- init_image = download_image(img_url).resize((512, 512))
- mask_image = download_image(mask_url).resize((512, 512))
-
- repo_id = "stabilityai/stable-diffusion-2-inpainting"
- pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16")
-
- pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
- pipe = pipe.to("cuda")
-
- prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
- image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0]
-
- image.save("yellow_cat.png")
- ```
-
- ## Super-resolution
-
- ```py
- import requests
- from PIL import Image
- from io import BytesIO
- from diffusers import StableDiffusionUpscalePipeline
- import torch
-
- # load model and scheduler
- model_id = "stabilityai/stable-diffusion-x4-upscaler"
- pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16)
- pipeline = pipeline.to("cuda")
-
- # let's download an image
- url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png"
- response = requests.get(url)
- low_res_img = Image.open(BytesIO(response.content)).convert("RGB")
- low_res_img = low_res_img.resize((128, 128))
- prompt = "a white cat"
- upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0]
- upscaled_image.save("upsampled_cat.png")
- ```
-
- ## Depth-to-image
-
- ```py
- import torch
- import requests
- from PIL import Image
-
- from diffusers import StableDiffusionDepth2ImgPipeline
-
- pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
-     "stabilityai/stable-diffusion-2-depth",
-     torch_dtype=torch.float16,
- ).to("cuda")
-
-
- url = "http://images.cocodataset.org/val2017/000000039769.jpg"
- init_image = Image.open(requests.get(url, stream=True).raw)
- prompt = "two tigers"
- n_prompt = "bad, deformed, ugly, bad anatomy"
- image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]
- ```
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/shap_e/pipeline_shap_e.py DELETED
@@ -1,363 +0,0 @@
- # Copyright 2023 Open AI and The HuggingFace Team. All rights reserved.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- import math
- from dataclasses import dataclass
- from typing import List, Optional, Union
-
- import numpy as np
- import PIL
- import torch
- from transformers import CLIPTextModelWithProjection, CLIPTokenizer
-
- from ...models import PriorTransformer
- from ...schedulers import HeunDiscreteScheduler
- from ...utils import (
-     BaseOutput,
-     is_accelerate_available,
-     is_accelerate_version,
-     logging,
-     randn_tensor,
-     replace_example_docstring,
- )
- from ..pipeline_utils import DiffusionPipeline
- from .renderer import ShapERenderer
-
-
- logger = logging.get_logger(__name__)  # pylint: disable=invalid-name
-
- EXAMPLE_DOC_STRING = """
-     Examples:
-         ```py
-         >>> import torch
-         >>> from diffusers import DiffusionPipeline
-         >>> from diffusers.utils import export_to_gif
-
-         >>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-
-         >>> repo = "openai/shap-e"
-         >>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16)
-         >>> pipe = pipe.to(device)
-
-         >>> guidance_scale = 15.0
-         >>> prompt = "a shark"
-
-         >>> images = pipe(
-         ...     prompt,
-         ...     guidance_scale=guidance_scale,
-         ...     num_inference_steps=64,
-         ...     frame_size=256,
-         ... ).images
-
-         >>> gif_path = export_to_gif(images[0], "shark_3d.gif")
-         ```
- """
-
-
- @dataclass
- class ShapEPipelineOutput(BaseOutput):
-     """
-     Output class for [`ShapEPipeline`] and [`ShapEImg2ImgPipeline`].
-
-     Args:
-         images (`torch.FloatTensor`)
-             A list of images for 3D rendering.
-     """
-
-     images: Union[List[List[PIL.Image.Image]], List[List[np.ndarray]]]
-
-
- class ShapEPipeline(DiffusionPipeline):
-     """
-     Pipeline for generating a latent representation of a 3D asset and rendering it with the NeRF method, with Shap-E.
-
-     This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
-     implemented for all pipelines (downloading, saving, running on a particular device, etc.).
-
-     Args:
-         prior ([`PriorTransformer`]):
-             The canonical unCLIP prior to approximate the image embedding from the text embedding.
-         text_encoder ([`CLIPTextModelWithProjection`]):
-             Frozen text-encoder.
-         tokenizer (`CLIPTokenizer`):
-             A [`~transformers.CLIPTokenizer`] to tokenize text.
-         scheduler ([`HeunDiscreteScheduler`]):
-             A scheduler to be used in combination with the `prior` to generate the image embedding.
-         shap_e_renderer ([`ShapERenderer`]):
-             The Shap-E renderer projects the generated latents into parameters of an MLP that's used to create 3D
-             objects with the NeRF rendering method.
-     """
-
-     def __init__(
-         self,
-         prior: PriorTransformer,
-         text_encoder: CLIPTextModelWithProjection,
-         tokenizer: CLIPTokenizer,
-         scheduler: HeunDiscreteScheduler,
-         shap_e_renderer: ShapERenderer,
-     ):
-         super().__init__()
-
-         self.register_modules(
-             prior=prior,
-             text_encoder=text_encoder,
-             tokenizer=tokenizer,
-             scheduler=scheduler,
-             shap_e_renderer=shap_e_renderer,
-         )
-
-     # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents
-     def prepare_latents(self, shape, dtype, device, generator, latents, scheduler):
-         if latents is None:
-             latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
-         else:
-             if latents.shape != shape:
-                 raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
-             latents = latents.to(device)
-
-         latents = latents * scheduler.init_noise_sigma
-         return latents
-
-     def enable_model_cpu_offload(self, gpu_id=0):
-         r"""
-         Offload all models to CPU to reduce memory usage with a low impact on performance. Moves one whole model at a
-         time to the GPU when its `forward` method is called, and the model remains in GPU until the next model runs.
-         Memory savings are lower than using `enable_sequential_cpu_offload`, but performance is much better due to the
-         iterative execution of the `unet`.
-         """
-         if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
140
- from accelerate import cpu_offload_with_hook
141
- else:
142
- raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
143
-
144
- device = torch.device(f"cuda:{gpu_id}")
145
-
146
- if self.device.type != "cpu":
147
- self.to("cpu", silence_dtype_warnings=True)
148
- torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
149
-
150
- hook = None
151
- for cpu_offloaded_model in [self.text_encoder, self.prior, self.shap_e_renderer]:
152
- _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
153
-
154
- if self.safety_checker is not None:
155
- _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook)
156
-
157
- # We'll offload the last model manually.
158
- self.final_offload_hook = hook
159
-
160
- def _encode_prompt(
161
- self,
162
- prompt,
163
- device,
164
- num_images_per_prompt,
165
- do_classifier_free_guidance,
166
- ):
167
- len(prompt) if isinstance(prompt, list) else 1
168
-
169
- # YiYi Notes: set pad_token_id to be 0, not sure why I can't set in the config file
170
- self.tokenizer.pad_token_id = 0
171
- # get prompt text embeddings
172
- text_inputs = self.tokenizer(
173
- prompt,
174
- padding="max_length",
175
- max_length=self.tokenizer.model_max_length,
176
- truncation=True,
177
- return_tensors="pt",
178
- )
179
- text_input_ids = text_inputs.input_ids
180
- untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
181
-
182
- if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids):
183
- removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
184
- logger.warning(
185
- "The following part of your input was truncated because CLIP can only handle sequences up to"
186
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
187
- )
188
-
189
- text_encoder_output = self.text_encoder(text_input_ids.to(device))
190
- prompt_embeds = text_encoder_output.text_embeds
191
-
192
- prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0)
193
- # in Shap-E it normalize the prompt_embeds and then later rescale it
194
- prompt_embeds = prompt_embeds / torch.linalg.norm(prompt_embeds, dim=-1, keepdim=True)
195
-
196
- if do_classifier_free_guidance:
197
- negative_prompt_embeds = torch.zeros_like(prompt_embeds)
198
-
199
- # For classifier free guidance, we need to do two forward passes.
200
- # Here we concatenate the unconditional and text embeddings into a single batch
201
- # to avoid doing two forward passes
202
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
203
-
204
- # Rescale the features to have unit variance
205
- prompt_embeds = math.sqrt(prompt_embeds.shape[1]) * prompt_embeds
206
-
207
- return prompt_embeds
208
-
209
- @torch.no_grad()
210
- @replace_example_docstring(EXAMPLE_DOC_STRING)
211
- def __call__(
212
- self,
213
- prompt: str,
214
- num_images_per_prompt: int = 1,
215
- num_inference_steps: int = 25,
216
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
217
- latents: Optional[torch.FloatTensor] = None,
218
- guidance_scale: float = 4.0,
219
- frame_size: int = 64,
220
- output_type: Optional[str] = "pil", # pil, np, latent, mesh
221
- return_dict: bool = True,
222
- ):
223
- """
224
- The call function to the pipeline for generation.
225
-
226
- Args:
227
- prompt (`str` or `List[str]`):
228
- The prompt or prompts to guide the image generation.
229
- num_images_per_prompt (`int`, *optional*, defaults to 1):
230
- The number of images to generate per prompt.
231
- num_inference_steps (`int`, *optional*, defaults to 25):
232
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
233
- expense of slower inference.
234
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
235
- A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
236
- generation deterministic.
237
- latents (`torch.FloatTensor`, *optional*):
238
- Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
239
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
240
- tensor is generated by sampling using the supplied random `generator`.
241
- guidance_scale (`float`, *optional*, defaults to 4.0):
242
- A higher guidance scale value encourages the model to generate images closely linked to the text
243
- `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
244
- usually at the expense of lower image quality.
245
- frame_size (`int`, *optional*, default to 64):
246
- The width and height of each image frame of the generated 3D output.
247
- output_type (`str`, *optional*, defaults to `"pt"`):
248
- The output format of the generate image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
249
- (`np.array`),`"latent"` (`torch.Tensor`), mesh ([`MeshDecoderOutput`]).
250
- return_dict (`bool`, *optional*, defaults to `True`):
251
- Whether or not to return a [`~pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput`] instead of a plain
252
- tuple.
253
-
254
- Examples:
255
-
256
- Returns:
257
- [`~pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput`] or `tuple`:
258
- If `return_dict` is `True`, [`~pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput`] is returned,
259
- otherwise a `tuple` is returned where the first element is a list with the generated images.
260
- """
261
-
262
- if isinstance(prompt, str):
263
- batch_size = 1
264
- elif isinstance(prompt, list):
265
- batch_size = len(prompt)
266
- else:
267
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
268
-
269
- device = self._execution_device
270
-
271
- batch_size = batch_size * num_images_per_prompt
272
-
273
- do_classifier_free_guidance = guidance_scale > 1.0
274
- prompt_embeds = self._encode_prompt(prompt, device, num_images_per_prompt, do_classifier_free_guidance)
275
-
276
- # prior
277
-
278
- self.scheduler.set_timesteps(num_inference_steps, device=device)
279
- timesteps = self.scheduler.timesteps
280
-
281
- num_embeddings = self.prior.config.num_embeddings
282
- embedding_dim = self.prior.config.embedding_dim
283
-
284
- latents = self.prepare_latents(
285
- (batch_size, num_embeddings * embedding_dim),
286
- prompt_embeds.dtype,
287
- device,
288
- generator,
289
- latents,
290
- self.scheduler,
291
- )
292
-
293
- # YiYi notes: for testing only to match ldm, we can directly create a latents with desired shape: batch_size, num_embeddings, embedding_dim
294
- latents = latents.reshape(latents.shape[0], num_embeddings, embedding_dim)
295
-
296
- for i, t in enumerate(self.progress_bar(timesteps)):
297
- # expand the latents if we are doing classifier free guidance
298
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
299
- scaled_model_input = self.scheduler.scale_model_input(latent_model_input, t)
300
-
301
- noise_pred = self.prior(
302
- scaled_model_input,
303
- timestep=t,
304
- proj_embedding=prompt_embeds,
305
- ).predicted_image_embedding
306
-
307
- # remove the variance
308
- noise_pred, _ = noise_pred.split(
309
- scaled_model_input.shape[2], dim=2
310
- ) # batch_size, num_embeddings, embedding_dim
311
-
312
- if do_classifier_free_guidance:
313
- noise_pred_uncond, noise_pred = noise_pred.chunk(2)
314
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred - noise_pred_uncond)
315
-
316
- latents = self.scheduler.step(
317
- noise_pred,
318
- timestep=t,
319
- sample=latents,
320
- ).prev_sample
321
-
322
- if output_type not in ["np", "pil", "latent", "mesh"]:
323
- raise ValueError(
324
- f"Only the output types `pil`, `np`, `latent` and `mesh` are supported not output_type={output_type}"
325
- )
326
-
327
- if output_type == "latent":
328
- return ShapEPipelineOutput(images=latents)
329
-
330
- images = []
331
- if output_type == "mesh":
332
- for i, latent in enumerate(latents):
333
- mesh = self.shap_e_renderer.decode_to_mesh(
334
- latent[None, :],
335
- device,
336
- )
337
- images.append(mesh)
338
-
339
- else:
340
- # np, pil
341
- for i, latent in enumerate(latents):
342
- image = self.shap_e_renderer.decode_to_image(
343
- latent[None, :],
344
- device,
345
- size=frame_size,
346
- )
347
- images.append(image)
348
-
349
- images = torch.stack(images)
350
-
351
- images = images.cpu().numpy()
352
-
353
- if output_type == "pil":
354
- images = [self.numpy_to_pil(image) for image in images]
355
-
356
- # Offload last model to CPU
357
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
358
- self.final_offload_hook.offload()
359
-
360
- if not return_dict:
361
- return (images,)
362
-
363
- return ShapEPipelineOutput(images=images)
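The classifier-free guidance step inside the denoising loop above combines the unconditional and text-conditioned noise predictions before each scheduler step. A minimal sketch of that formula, using plain Python lists in place of tensors (the function name is illustrative, not part of the diffusers API):

```python
def apply_guidance(noise_uncond, noise_text, guidance_scale):
    # guided = uncond + scale * (text - uncond); a scale above 1 pushes the
    # prediction further along the text-conditioned direction
    return [u + guidance_scale * (t - u) for u, t in zip(noise_uncond, noise_text)]

print(apply_guidance([0.0, 1.0], [1.0, 3.0], 2.0))  # [2.0, 5.0]
```

With `guidance_scale <= 1.0` the pipeline skips this step entirely, since `do_classifier_free_guidance` is `False` and only one forward pass is run.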
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/spectrogram_diffusion/continous_encoder.py DELETED
@@ -1,92 +0,0 @@
- # Copyright 2022 The Music Spectrogram Diffusion Authors.
- # Copyright 2023 The HuggingFace Team. All rights reserved.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- import torch
- import torch.nn as nn
- from transformers.modeling_utils import ModuleUtilsMixin
- from transformers.models.t5.modeling_t5 import (
-     T5Block,
-     T5Config,
-     T5LayerNorm,
- )
-
- from ...configuration_utils import ConfigMixin, register_to_config
- from ...models import ModelMixin
-
-
- class SpectrogramContEncoder(ModelMixin, ConfigMixin, ModuleUtilsMixin):
-     @register_to_config
-     def __init__(
-         self,
-         input_dims: int,
-         targets_context_length: int,
-         d_model: int,
-         dropout_rate: float,
-         num_layers: int,
-         num_heads: int,
-         d_kv: int,
-         d_ff: int,
-         feed_forward_proj: str,
-         is_decoder: bool = False,
-     ):
-         super().__init__()
-
-         self.input_proj = nn.Linear(input_dims, d_model, bias=False)
-
-         self.position_encoding = nn.Embedding(targets_context_length, d_model)
-         self.position_encoding.weight.requires_grad = False
-
-         self.dropout_pre = nn.Dropout(p=dropout_rate)
-
-         t5config = T5Config(
-             d_model=d_model,
-             num_heads=num_heads,
-             d_kv=d_kv,
-             d_ff=d_ff,
-             feed_forward_proj=feed_forward_proj,
-             dropout_rate=dropout_rate,
-             is_decoder=is_decoder,
-             is_encoder_decoder=False,
-         )
-         self.encoders = nn.ModuleList()
-         for lyr_num in range(num_layers):
-             lyr = T5Block(t5config)
-             self.encoders.append(lyr)
-
-         self.layer_norm = T5LayerNorm(d_model)
-         self.dropout_post = nn.Dropout(p=dropout_rate)
-
-     def forward(self, encoder_inputs, encoder_inputs_mask):
-         x = self.input_proj(encoder_inputs)
-
-         # terminal relative positional encodings
-         max_positions = encoder_inputs.shape[1]
-         input_positions = torch.arange(max_positions, device=encoder_inputs.device)
-
-         seq_lens = encoder_inputs_mask.sum(-1)
-         input_positions = torch.roll(input_positions.unsqueeze(0), tuple(seq_lens.tolist()), dims=0)
-         x += self.position_encoding(input_positions)
-
-         x = self.dropout_pre(x)
-
-         # invert the attention mask
-         input_shape = encoder_inputs.size()
-         extended_attention_mask = self.get_extended_attention_mask(encoder_inputs_mask, input_shape)
-
-         for lyr in self.encoders:
-             x = lyr(x, extended_attention_mask)[0]
-         x = self.layer_norm(x)
-
-         return self.dropout_post(x), encoder_inputs_mask
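The `torch.roll` call in `forward` above implements "terminal relative" positional encodings: the position indices of a right-padded sequence are rotated so that the last real token lands on the final position slot. A rough sketch of the intended per-sequence indexing, using plain lists (the helper name is hypothetical, not from the source):

```python
def terminal_positions(max_positions, seq_len):
    # rotate [0, 1, ..., N-1] right by seq_len so the positions assigned to
    # the first seq_len (real) tokens end at the last slot, N-1
    base = list(range(max_positions))
    shift = seq_len % max_positions
    return base[-shift:] + base[:-shift] if shift else base

print(terminal_positions(5, 3))  # [2, 3, 4, 0, 1]
```

Here a 3-token sequence in a 5-slot buffer gets positions `[2, 3, 4]`, so its final token sits at the terminal position 4; an unpadded sequence keeps the identity ordering.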
spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_769x769_80k_cityscapes.py DELETED
@@ -1,2 +0,0 @@
- _base_ = './fcn_r50-d8_769x769_80k_cityscapes.py'
- model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/bootstrap/bootstrap.min.js DELETED
@@ -1,7 +0,0 @@
- /*!
-   * Bootstrap v5.1.3 (https://getbootstrap.com/)
-   * Copyright 2011-2021 The Bootstrap Authors (https://github.com/twbs/bootstrap/graphs/contributors)
-   * Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE)
-   */
- !function(t,e){"object"==typeof exports&&"undefined"!=typeof module?module.exports=e():"function"==typeof define&&define.amd?define(e):(t="undefined"!=typeof globalThis?globalThis:t||self).bootstrap=e()}(this,(function(){"use strict";const t="transitionend",e=t=>{let e=t.getAttribute("data-bs-target");if(!e||"#"===e){let i=t.getAttribute("href");if(!i||!i.includes("#")&&!i.startsWith("."))return null;i.includes("#")&&!i.startsWith("#")&&(i=`#${i.split("#")[1]}`),e=i&&"#"!==i?i.trim():null}return e},i=t=>{const i=e(t);return i&&document.querySelector(i)?i:null},n=t=>{const i=e(t);return i?document.querySelector(i):null},s=e=>{e.dispatchEvent(new Event(t))},o=t=>!(!t||"object"!=typeof t)&&(void 0!==t.jquery&&(t=t[0]),void 0!==t.nodeType),r=t=>o(t)?t.jquery?t[0]:t:"string"==typeof t&&t.length>0?document.querySelector(t):null,a=(t,e,i)=>{Object.keys(i).forEach((n=>{const s=i[n],r=e[n],a=r&&o(r)?"element":null==(l=r)?`${l}`:{}.toString.call(l).match(/\s([a-z]+)/i)[1].toLowerCase();var l;if(!new RegExp(s).test(a))throw new TypeError(`${t.toUpperCase()}: Option "${n}" provided type "${a}" but expected type "${s}".`)}))},l=t=>!(!o(t)||0===t.getClientRects().length)&&"visible"===getComputedStyle(t).getPropertyValue("visibility"),c=t=>!t||t.nodeType!==Node.ELEMENT_NODE||!!t.classList.contains("disabled")||(void 0!==t.disabled?t.disabled:t.hasAttribute("disabled")&&"false"!==t.getAttribute("disabled")),h=t=>{if(!document.documentElement.attachShadow)return null;if("function"==typeof t.getRootNode){const e=t.getRootNode();return e instanceof ShadowRoot?e:null}return t instanceof ShadowRoot?t:t.parentNode?h(t.parentNode):null},d=()=>{},u=t=>{t.offsetHeight},f=()=>{const{jQuery:t}=window;return t&&!document.body.hasAttribute("data-bs-no-jquery")?t:null},p=[],m=()=>"rtl"===document.documentElement.dir,g=t=>{var e;e=()=>{const e=f();if(e){const 
i=t.NAME,n=e.fn[i];e.fn[i]=t.jQueryInterface,e.fn[i].Constructor=t,e.fn[i].noConflict=()=>(e.fn[i]=n,t.jQueryInterface)}},"loading"===document.readyState?(p.length||document.addEventListener("DOMContentLoaded",(()=>{p.forEach((t=>t()))})),p.push(e)):e()},_=t=>{"function"==typeof t&&t()},b=(e,i,n=!0)=>{if(!n)return void _(e);const o=(t=>{if(!t)return 0;let{transitionDuration:e,transitionDelay:i}=window.getComputedStyle(t);const n=Number.parseFloat(e),s=Number.parseFloat(i);return n||s?(e=e.split(",")[0],i=i.split(",")[0],1e3*(Number.parseFloat(e)+Number.parseFloat(i))):0})(i)+5;let r=!1;const a=({target:n})=>{n===i&&(r=!0,i.removeEventListener(t,a),_(e))};i.addEventListener(t,a),setTimeout((()=>{r||s(i)}),o)},v=(t,e,i,n)=>{let s=t.indexOf(e);if(-1===s)return t[!i&&n?t.length-1:0];const o=t.length;return s+=i?1:-1,n&&(s=(s+o)%o),t[Math.max(0,Math.min(s,o-1))]},y=/[^.]*(?=\..*)\.|.*/,w=/\..*/,E=/::\d+$/,A={};let T=1;const O={mouseenter:"mouseover",mouseleave:"mouseout"},C=/^(mouseenter|mouseleave)/i,k=new Set(["click","dblclick","mouseup","mousedown","contextmenu","mousewheel","DOMMouseScroll","mouseover","mouseout","mousemove","selectstart","selectend","keydown","keypress","keyup","orientationchange","touchstart","touchmove","touchend","touchcancel","pointerdown","pointermove","pointerup","pointerleave","pointercancel","gesturestart","gesturechange","gestureend","focus","blur","change","reset","select","submit","focusin","focusout","load","unload","beforeunload","resize","move","DOMContentLoaded","readystatechange","error","abort","scroll"]);function L(t,e){return e&&`${e}::${T++}`||t.uidEvent||T++}function x(t){const e=L(t);return t.uidEvent=e,A[e]=A[e]||{},A[e]}function D(t,e,i=null){const n=Object.keys(t);for(let s=0,o=n.length;s<o;s++){const o=t[n[s]];if(o.originalHandler===e&&o.delegationSelector===i)return o}return null}function S(t,e,i){const n="string"==typeof e,s=n?i:e;let o=P(t);return k.has(o)||(o=t),[n,s,o]}function N(t,e,i,n,s){if("string"!=typeof 
e||!t)return;if(i||(i=n,n=null),C.test(e)){const t=t=>function(e){if(!e.relatedTarget||e.relatedTarget!==e.delegateTarget&&!e.delegateTarget.contains(e.relatedTarget))return t.call(this,e)};n?n=t(n):i=t(i)}const[o,r,a]=S(e,i,n),l=x(t),c=l[a]||(l[a]={}),h=D(c,r,o?i:null);if(h)return void(h.oneOff=h.oneOff&&s);const d=L(r,e.replace(y,"")),u=o?function(t,e,i){return function n(s){const o=t.querySelectorAll(e);for(let{target:r}=s;r&&r!==this;r=r.parentNode)for(let a=o.length;a--;)if(o[a]===r)return s.delegateTarget=r,n.oneOff&&j.off(t,s.type,e,i),i.apply(r,[s]);return null}}(t,i,n):function(t,e){return function i(n){return n.delegateTarget=t,i.oneOff&&j.off(t,n.type,e),e.apply(t,[n])}}(t,i);u.delegationSelector=o?i:null,u.originalHandler=r,u.oneOff=s,u.uidEvent=d,c[d]=u,t.addEventListener(a,u,o)}function I(t,e,i,n,s){const o=D(e[i],n,s);o&&(t.removeEventListener(i,o,Boolean(s)),delete e[i][o.uidEvent])}function P(t){return t=t.replace(w,""),O[t]||t}const j={on(t,e,i,n){N(t,e,i,n,!1)},one(t,e,i,n){N(t,e,i,n,!0)},off(t,e,i,n){if("string"!=typeof e||!t)return;const[s,o,r]=S(e,i,n),a=r!==e,l=x(t),c=e.startsWith(".");if(void 0!==o){if(!l||!l[r])return;return void I(t,l,r,o,s?i:null)}c&&Object.keys(l).forEach((i=>{!function(t,e,i,n){const s=e[i]||{};Object.keys(s).forEach((o=>{if(o.includes(n)){const n=s[o];I(t,e,i,n.originalHandler,n.delegationSelector)}}))}(t,l,i,e.slice(1))}));const h=l[r]||{};Object.keys(h).forEach((i=>{const n=i.replace(E,"");if(!a||e.includes(n)){const e=h[i];I(t,l,r,e.originalHandler,e.delegationSelector)}}))},trigger(t,e,i){if("string"!=typeof e||!t)return null;const n=f(),s=P(e),o=e!==s,r=k.has(s);let a,l=!0,c=!0,h=!1,d=null;return o&&n&&(a=n.Event(e,i),n(t).trigger(a),l=!a.isPropagationStopped(),c=!a.isImmediatePropagationStopped(),h=a.isDefaultPrevented()),r?(d=document.createEvent("HTMLEvents"),d.initEvent(s,l,!0)):d=new CustomEvent(e,{bubbles:l,cancelable:!0}),void 
0!==i&&Object.keys(i).forEach((t=>{Object.defineProperty(d,t,{get:()=>i[t]})})),h&&d.preventDefault(),c&&t.dispatchEvent(d),d.defaultPrevented&&void 0!==a&&a.preventDefault(),d}},M=new Map,H={set(t,e,i){M.has(t)||M.set(t,new Map);const n=M.get(t);n.has(e)||0===n.size?n.set(e,i):console.error(`Bootstrap doesn't allow more than one instance per element. Bound instance: ${Array.from(n.keys())[0]}.`)},get:(t,e)=>M.has(t)&&M.get(t).get(e)||null,remove(t,e){if(!M.has(t))return;const i=M.get(t);i.delete(e),0===i.size&&M.delete(t)}};class B{constructor(t){(t=r(t))&&(this._element=t,H.set(this._element,this.constructor.DATA_KEY,this))}dispose(){H.remove(this._element,this.constructor.DATA_KEY),j.off(this._element,this.constructor.EVENT_KEY),Object.getOwnPropertyNames(this).forEach((t=>{this[t]=null}))}_queueCallback(t,e,i=!0){b(t,e,i)}static getInstance(t){return H.get(r(t),this.DATA_KEY)}static getOrCreateInstance(t,e={}){return this.getInstance(t)||new this(t,"object"==typeof e?e:null)}static get VERSION(){return"5.1.3"}static get NAME(){throw new Error('You have to implement the static method "NAME", for each component!')}static get DATA_KEY(){return`bs.${this.NAME}`}static get EVENT_KEY(){return`.${this.DATA_KEY}`}}const R=(t,e="hide")=>{const i=`click.dismiss${t.EVENT_KEY}`,s=t.NAME;j.on(document,i,`[data-bs-dismiss="${s}"]`,(function(i){if(["A","AREA"].includes(this.tagName)&&i.preventDefault(),c(this))return;const o=n(this)||this.closest(`.${s}`);t.getOrCreateInstance(o)[e]()}))};class W extends B{static get NAME(){return"alert"}close(){if(j.trigger(this._element,"close.bs.alert").defaultPrevented)return;this._element.classList.remove("show");const t=this._element.classList.contains("fade");this._queueCallback((()=>this._destroyElement()),this._element,t)}_destroyElement(){this._element.remove(),j.trigger(this._element,"closed.bs.alert"),this.dispose()}static jQueryInterface(t){return this.each((function(){const e=W.getOrCreateInstance(this);if("string"==typeof 
t){if(void 0===e[t]||t.startsWith("_")||"constructor"===t)throw new TypeError(`No method named "${t}"`);e[t](this)}}))}}R(W,"close"),g(W);const $='[data-bs-toggle="button"]';class z extends B{static get NAME(){return"button"}toggle(){this._element.setAttribute("aria-pressed",this._element.classList.toggle("active"))}static jQueryInterface(t){return this.each((function(){const e=z.getOrCreateInstance(this);"toggle"===t&&e[t]()}))}}function q(t){return"true"===t||"false"!==t&&(t===Number(t).toString()?Number(t):""===t||"null"===t?null:t)}function F(t){return t.replace(/[A-Z]/g,(t=>`-${t.toLowerCase()}`))}j.on(document,"click.bs.button.data-api",$,(t=>{t.preventDefault();const e=t.target.closest($);z.getOrCreateInstance(e).toggle()})),g(z);const U={setDataAttribute(t,e,i){t.setAttribute(`data-bs-${F(e)}`,i)},removeDataAttribute(t,e){t.removeAttribute(`data-bs-${F(e)}`)},getDataAttributes(t){if(!t)return{};const e={};return Object.keys(t.dataset).filter((t=>t.startsWith("bs"))).forEach((i=>{let n=i.replace(/^bs/,"");n=n.charAt(0).toLowerCase()+n.slice(1,n.length),e[n]=q(t.dataset[i])})),e},getDataAttribute:(t,e)=>q(t.getAttribute(`data-bs-${F(e)}`)),offset(t){const e=t.getBoundingClientRect();return{top:e.top+window.pageYOffset,left:e.left+window.pageXOffset}},position:t=>({top:t.offsetTop,left:t.offsetLeft})},V={find:(t,e=document.documentElement)=>[].concat(...Element.prototype.querySelectorAll.call(e,t)),findOne:(t,e=document.documentElement)=>Element.prototype.querySelector.call(e,t),children:(t,e)=>[].concat(...t.children).filter((t=>t.matches(e))),parents(t,e){const i=[];let n=t.parentNode;for(;n&&n.nodeType===Node.ELEMENT_NODE&&3!==n.nodeType;)n.matches(e)&&i.push(n),n=n.parentNode;return i},prev(t,e){let i=t.previousElementSibling;for(;i;){if(i.matches(e))return[i];i=i.previousElementSibling}return[]},next(t,e){let i=t.nextElementSibling;for(;i;){if(i.matches(e))return[i];i=i.nextElementSibling}return[]},focusableChildren(t){const 
e=["a","button","input","textarea","select","details","[tabindex]",'[contenteditable="true"]'].map((t=>`${t}:not([tabindex^="-"])`)).join(", ");return this.find(e,t).filter((t=>!c(t)&&l(t)))}},K="carousel",X={interval:5e3,keyboard:!0,slide:!1,pause:"hover",wrap:!0,touch:!0},Y={interval:"(number|boolean)",keyboard:"boolean",slide:"(boolean|string)",pause:"(string|boolean)",wrap:"boolean",touch:"boolean"},Q="next",G="prev",Z="left",J="right",tt={ArrowLeft:J,ArrowRight:Z},et="slid.bs.carousel",it="active",nt=".active.carousel-item";class st extends B{constructor(t,e){super(t),this._items=null,this._interval=null,this._activeElement=null,this._isPaused=!1,this._isSliding=!1,this.touchTimeout=null,this.touchStartX=0,this.touchDeltaX=0,this._config=this._getConfig(e),this._indicatorsElement=V.findOne(".carousel-indicators",this._element),this._touchSupported="ontouchstart"in document.documentElement||navigator.maxTouchPoints>0,this._pointerEvent=Boolean(window.PointerEvent),this._addEventListeners()}static get Default(){return X}static get NAME(){return K}next(){this._slide(Q)}nextWhenVisible(){!document.hidden&&l(this._element)&&this.next()}prev(){this._slide(G)}pause(t){t||(this._isPaused=!0),V.findOne(".carousel-item-next, .carousel-item-prev",this._element)&&(s(this._element),this.cycle(!0)),clearInterval(this._interval),this._interval=null}cycle(t){t||(this._isPaused=!1),this._interval&&(clearInterval(this._interval),this._interval=null),this._config&&this._config.interval&&!this._isPaused&&(this._updateInterval(),this._interval=setInterval((document.visibilityState?this.nextWhenVisible:this.next).bind(this),this._config.interval))}to(t){this._activeElement=V.findOne(nt,this._element);const e=this._getItemIndex(this._activeElement);if(t>this._items.length-1||t<0)return;if(this._isSliding)return void j.one(this._element,et,(()=>this.to(t)));if(e===t)return this.pause(),void this.cycle();const i=t>e?Q:G;this._slide(i,this._items[t])}_getConfig(t){return 
t={...X,...U.getDataAttributes(this._element),..."object"==typeof t?t:{}},a(K,t,Y),t}_handleSwipe(){const t=Math.abs(this.touchDeltaX);if(t<=40)return;const e=t/this.touchDeltaX;this.touchDeltaX=0,e&&this._slide(e>0?J:Z)}_addEventListeners(){this._config.keyboard&&j.on(this._element,"keydown.bs.carousel",(t=>this._keydown(t))),"hover"===this._config.pause&&(j.on(this._element,"mouseenter.bs.carousel",(t=>this.pause(t))),j.on(this._element,"mouseleave.bs.carousel",(t=>this.cycle(t)))),this._config.touch&&this._touchSupported&&this._addTouchEventListeners()}_addTouchEventListeners(){const t=t=>this._pointerEvent&&("pen"===t.pointerType||"touch"===t.pointerType),e=e=>{t(e)?this.touchStartX=e.clientX:this._pointerEvent||(this.touchStartX=e.touches[0].clientX)},i=t=>{this.touchDeltaX=t.touches&&t.touches.length>1?0:t.touches[0].clientX-this.touchStartX},n=e=>{t(e)&&(this.touchDeltaX=e.clientX-this.touchStartX),this._handleSwipe(),"hover"===this._config.pause&&(this.pause(),this.touchTimeout&&clearTimeout(this.touchTimeout),this.touchTimeout=setTimeout((t=>this.cycle(t)),500+this._config.interval))};V.find(".carousel-item img",this._element).forEach((t=>{j.on(t,"dragstart.bs.carousel",(t=>t.preventDefault()))})),this._pointerEvent?(j.on(this._element,"pointerdown.bs.carousel",(t=>e(t))),j.on(this._element,"pointerup.bs.carousel",(t=>n(t))),this._element.classList.add("pointer-event")):(j.on(this._element,"touchstart.bs.carousel",(t=>e(t))),j.on(this._element,"touchmove.bs.carousel",(t=>i(t))),j.on(this._element,"touchend.bs.carousel",(t=>n(t))))}_keydown(t){if(/input|textarea/i.test(t.target.tagName))return;const e=tt[t.key];e&&(t.preventDefault(),this._slide(e))}_getItemIndex(t){return this._items=t&&t.parentNode?V.find(".carousel-item",t.parentNode):[],this._items.indexOf(t)}_getItemByOrder(t,e){const i=t===Q;return v(this._items,e,i,this._config.wrap)}_triggerSlideEvent(t,e){const i=this._getItemIndex(t),n=this._getItemIndex(V.findOne(nt,this._element));return 
j.trigger(this._element,"slide.bs.carousel",{relatedTarget:t,direction:e,from:n,to:i})}_setActiveIndicatorElement(t){if(this._indicatorsElement){const e=V.findOne(".active",this._indicatorsElement);e.classList.remove(it),e.removeAttribute("aria-current");const i=V.find("[data-bs-target]",this._indicatorsElement);for(let e=0;e<i.length;e++)if(Number.parseInt(i[e].getAttribute("data-bs-slide-to"),10)===this._getItemIndex(t)){i[e].classList.add(it),i[e].setAttribute("aria-current","true");break}}}_updateInterval(){const t=this._activeElement||V.findOne(nt,this._element);if(!t)return;const e=Number.parseInt(t.getAttribute("data-bs-interval"),10);e?(this._config.defaultInterval=this._config.defaultInterval||this._config.interval,this._config.interval=e):this._config.interval=this._config.defaultInterval||this._config.interval}_slide(t,e){const i=this._directionToOrder(t),n=V.findOne(nt,this._element),s=this._getItemIndex(n),o=e||this._getItemByOrder(i,n),r=this._getItemIndex(o),a=Boolean(this._interval),l=i===Q,c=l?"carousel-item-start":"carousel-item-end",h=l?"carousel-item-next":"carousel-item-prev",d=this._orderToDirection(i);if(o&&o.classList.contains(it))return void(this._isSliding=!1);if(this._isSliding)return;if(this._triggerSlideEvent(o,d).defaultPrevented)return;if(!n||!o)return;this._isSliding=!0,a&&this.pause(),this._setActiveIndicatorElement(o),this._activeElement=o;const f=()=>{j.trigger(this._element,et,{relatedTarget:o,direction:d,from:s,to:r})};if(this._element.classList.contains("slide")){o.classList.add(h),u(o),n.classList.add(c),o.classList.add(c);const t=()=>{o.classList.remove(c,h),o.classList.add(it),n.classList.remove(it,h,c),this._isSliding=!1,setTimeout(f,0)};this._queueCallback(t,n,!0)}else n.classList.remove(it),o.classList.add(it),this._isSliding=!1,f();a&&this.cycle()}_directionToOrder(t){return[J,Z].includes(t)?m()?t===Z?G:Q:t===Z?Q:G:t}_orderToDirection(t){return[Q,G].includes(t)?m()?t===G?Z:J:t===G?J:Z:t}static 
carouselInterface(t,e){const i=st.getOrCreateInstance(t,e);let{_config:n}=i;"object"==typeof e&&(n={...n,...e});const s="string"==typeof e?e:n.slide;if("number"==typeof e)i.to(e);else if("string"==typeof s){if(void 0===i[s])throw new TypeError(`No method named "${s}"`);i[s]()}else n.interval&&n.ride&&(i.pause(),i.cycle())}static jQueryInterface(t){return this.each((function(){st.carouselInterface(this,t)}))}static dataApiClickHandler(t){const e=n(this);if(!e||!e.classList.contains("carousel"))return;const i={...U.getDataAttributes(e),...U.getDataAttributes(this)},s=this.getAttribute("data-bs-slide-to");s&&(i.interval=!1),st.carouselInterface(e,i),s&&st.getInstance(e).to(s),t.preventDefault()}}j.on(document,"click.bs.carousel.data-api","[data-bs-slide], [data-bs-slide-to]",st.dataApiClickHandler),j.on(window,"load.bs.carousel.data-api",(()=>{const t=V.find('[data-bs-ride="carousel"]');for(let e=0,i=t.length;e<i;e++)st.carouselInterface(t[e],st.getInstance(t[e]))})),g(st);const ot="collapse",rt={toggle:!0,parent:null},at={toggle:"boolean",parent:"(null|element)"},lt="show",ct="collapse",ht="collapsing",dt="collapsed",ut=":scope .collapse .collapse",ft='[data-bs-toggle="collapse"]';class pt extends B{constructor(t,e){super(t),this._isTransitioning=!1,this._config=this._getConfig(e),this._triggerArray=[];const n=V.find(ft);for(let t=0,e=n.length;t<e;t++){const e=n[t],s=i(e),o=V.find(s).filter((t=>t===this._element));null!==s&&o.length&&(this._selector=s,this._triggerArray.push(e))}this._initializeChildren(),this._config.parent||this._addAriaAndCollapsedClass(this._triggerArray,this._isShown()),this._config.toggle&&this.toggle()}static get Default(){return rt}static get NAME(){return ot}toggle(){this._isShown()?this.hide():this.show()}show(){if(this._isTransitioning||this._isShown())return;let t,e=[];if(this._config.parent){const t=V.find(ut,this._config.parent);e=V.find(".collapse.show, .collapse.collapsing",this._config.parent).filter((e=>!t.includes(e)))}const 
i=V.findOne(this._selector);if(e.length){const n=e.find((t=>i!==t));if(t=n?pt.getInstance(n):null,t&&t._isTransitioning)return}if(j.trigger(this._element,"show.bs.collapse").defaultPrevented)return;e.forEach((e=>{i!==e&&pt.getOrCreateInstance(e,{toggle:!1}).hide(),t||H.set(e,"bs.collapse",null)}));const n=this._getDimension();this._element.classList.remove(ct),this._element.classList.add(ht),this._element.style[n]=0,this._addAriaAndCollapsedClass(this._triggerArray,!0),this._isTransitioning=!0;const s=`scroll${n[0].toUpperCase()+n.slice(1)}`;this._queueCallback((()=>{this._isTransitioning=!1,this._element.classList.remove(ht),this._element.classList.add(ct,lt),this._element.style[n]="",j.trigger(this._element,"shown.bs.collapse")}),this._element,!0),this._element.style[n]=`${this._element[s]}px`}hide(){if(this._isTransitioning||!this._isShown())return;if(j.trigger(this._element,"hide.bs.collapse").defaultPrevented)return;const t=this._getDimension();this._element.style[t]=`${this._element.getBoundingClientRect()[t]}px`,u(this._element),this._element.classList.add(ht),this._element.classList.remove(ct,lt);const e=this._triggerArray.length;for(let t=0;t<e;t++){const e=this._triggerArray[t],i=n(e);i&&!this._isShown(i)&&this._addAriaAndCollapsedClass([e],!1)}this._isTransitioning=!0,this._element.style[t]="",this._queueCallback((()=>{this._isTransitioning=!1,this._element.classList.remove(ht),this._element.classList.add(ct),j.trigger(this._element,"hidden.bs.collapse")}),this._element,!0)}_isShown(t=this._element){return t.classList.contains(lt)}_getConfig(t){return(t={...rt,...U.getDataAttributes(this._element),...t}).toggle=Boolean(t.toggle),t.parent=r(t.parent),a(ot,t,at),t}_getDimension(){return this._element.classList.contains("collapse-horizontal")?"width":"height"}_initializeChildren(){if(!this._config.parent)return;const t=V.find(ut,this._config.parent);V.find(ft,this._config.parent).filter((e=>!t.includes(e))).forEach((t=>{const 
e=n(t);e&&this._addAriaAndCollapsedClass([t],this._isShown(e))}))}_addAriaAndCollapsedClass(t,e){t.length&&t.forEach((t=>{e?t.classList.remove(dt):t.classList.add(dt),t.setAttribute("aria-expanded",e)}))}static jQueryInterface(t){return this.each((function(){const e={};"string"==typeof t&&/show|hide/.test(t)&&(e.toggle=!1);const i=pt.getOrCreateInstance(this,e);if("string"==typeof t){if(void 0===i[t])throw new TypeError(`No method named "${t}"`);i[t]()}}))}}j.on(document,"click.bs.collapse.data-api",ft,(function(t){("A"===t.target.tagName||t.delegateTarget&&"A"===t.delegateTarget.tagName)&&t.preventDefault();const e=i(this);V.find(e).forEach((t=>{pt.getOrCreateInstance(t,{toggle:!1}).toggle()}))})),g(pt);var mt="top",gt="bottom",_t="right",bt="left",vt="auto",yt=[mt,gt,_t,bt],wt="start",Et="end",At="clippingParents",Tt="viewport",Ot="popper",Ct="reference",kt=yt.reduce((function(t,e){return t.concat([e+"-"+wt,e+"-"+Et])}),[]),Lt=[].concat(yt,[vt]).reduce((function(t,e){return t.concat([e,e+"-"+wt,e+"-"+Et])}),[]),xt="beforeRead",Dt="read",St="afterRead",Nt="beforeMain",It="main",Pt="afterMain",jt="beforeWrite",Mt="write",Ht="afterWrite",Bt=[xt,Dt,St,Nt,It,Pt,jt,Mt,Ht];function Rt(t){return t?(t.nodeName||"").toLowerCase():null}function Wt(t){if(null==t)return window;if("[object Window]"!==t.toString()){var e=t.ownerDocument;return e&&e.defaultView||window}return t}function $t(t){return t instanceof Wt(t).Element||t instanceof Element}function zt(t){return t instanceof Wt(t).HTMLElement||t instanceof HTMLElement}function qt(t){return"undefined"!=typeof ShadowRoot&&(t instanceof Wt(t).ShadowRoot||t instanceof ShadowRoot)}const Ft={name:"applyStyles",enabled:!0,phase:"write",fn:function(t){var e=t.state;Object.keys(e.elements).forEach((function(t){var i=e.styles[t]||{},n=e.attributes[t]||{},s=e.elements[t];zt(s)&&Rt(s)&&(Object.assign(s.style,i),Object.keys(n).forEach((function(t){var 
e=n[t];!1===e?s.removeAttribute(t):s.setAttribute(t,!0===e?"":e)})))}))},effect:function(t){var e=t.state,i={popper:{position:e.options.strategy,left:"0",top:"0",margin:"0"},arrow:{position:"absolute"},reference:{}};return Object.assign(e.elements.popper.style,i.popper),e.styles=i,e.elements.arrow&&Object.assign(e.elements.arrow.style,i.arrow),function(){Object.keys(e.elements).forEach((function(t){var n=e.elements[t],s=e.attributes[t]||{},o=Object.keys(e.styles.hasOwnProperty(t)?e.styles[t]:i[t]).reduce((function(t,e){return t[e]="",t}),{});zt(n)&&Rt(n)&&(Object.assign(n.style,o),Object.keys(s).forEach((function(t){n.removeAttribute(t)})))}))}},requires:["computeStyles"]};function Ut(t){return t.split("-")[0]}function Vt(t,e){var i=t.getBoundingClientRect();return{width:i.width/1,height:i.height/1,top:i.top/1,right:i.right/1,bottom:i.bottom/1,left:i.left/1,x:i.left/1,y:i.top/1}}function Kt(t){var e=Vt(t),i=t.offsetWidth,n=t.offsetHeight;return Math.abs(e.width-i)<=1&&(i=e.width),Math.abs(e.height-n)<=1&&(n=e.height),{x:t.offsetLeft,y:t.offsetTop,width:i,height:n}}function Xt(t,e){var i=e.getRootNode&&e.getRootNode();if(t.contains(e))return!0;if(i&&qt(i)){var n=e;do{if(n&&t.isSameNode(n))return!0;n=n.parentNode||n.host}while(n)}return!1}function Yt(t){return Wt(t).getComputedStyle(t)}function Qt(t){return["table","td","th"].indexOf(Rt(t))>=0}function Gt(t){return(($t(t)?t.ownerDocument:t.document)||window.document).documentElement}function Zt(t){return"html"===Rt(t)?t:t.assignedSlot||t.parentNode||(qt(t)?t.host:null)||Gt(t)}function Jt(t){return zt(t)&&"fixed"!==Yt(t).position?t.offsetParent:null}function te(t){for(var e=Wt(t),i=Jt(t);i&&Qt(i)&&"static"===Yt(i).position;)i=Jt(i);return i&&("html"===Rt(i)||"body"===Rt(i)&&"static"===Yt(i).position)?e:i||function(t){var e=-1!==navigator.userAgent.toLowerCase().indexOf("firefox");if(-1!==navigator.userAgent.indexOf("Trident")&&zt(t)&&"fixed"===Yt(t).position)return null;for(var 
i=Zt(t);zt(i)&&["html","body"].indexOf(Rt(i))<0;){var n=Yt(i);if("none"!==n.transform||"none"!==n.perspective||"paint"===n.contain||-1!==["transform","perspective"].indexOf(n.willChange)||e&&"filter"===n.willChange||e&&n.filter&&"none"!==n.filter)return i;i=i.parentNode}return null}(t)||e}function ee(t){return["top","bottom"].indexOf(t)>=0?"x":"y"}var ie=Math.max,ne=Math.min,se=Math.round;function oe(t,e,i){return ie(t,ne(e,i))}function re(t){return Object.assign({},{top:0,right:0,bottom:0,left:0},t)}function ae(t,e){return e.reduce((function(e,i){return e[i]=t,e}),{})}const le={name:"arrow",enabled:!0,phase:"main",fn:function(t){var e,i=t.state,n=t.name,s=t.options,o=i.elements.arrow,r=i.modifiersData.popperOffsets,a=Ut(i.placement),l=ee(a),c=[bt,_t].indexOf(a)>=0?"height":"width";if(o&&r){var h=function(t,e){return re("number"!=typeof(t="function"==typeof t?t(Object.assign({},e.rects,{placement:e.placement})):t)?t:ae(t,yt))}(s.padding,i),d=Kt(o),u="y"===l?mt:bt,f="y"===l?gt:_t,p=i.rects.reference[c]+i.rects.reference[l]-r[l]-i.rects.popper[c],m=r[l]-i.rects.reference[l],g=te(o),_=g?"y"===l?g.clientHeight||0:g.clientWidth||0:0,b=p/2-m/2,v=h[u],y=_-d[c]-h[f],w=_/2-d[c]/2+b,E=oe(v,w,y),A=l;i.modifiersData[n]=((e={})[A]=E,e.centerOffset=E-w,e)}},effect:function(t){var e=t.state,i=t.options.element,n=void 0===i?"[data-popper-arrow]":i;null!=n&&("string"!=typeof n||(n=e.elements.popper.querySelector(n)))&&Xt(e.elements.popper,n)&&(e.elements.arrow=n)},requires:["popperOffsets"],requiresIfExists:["preventOverflow"]};function ce(t){return t.split("-")[1]}var he={top:"auto",right:"auto",bottom:"auto",left:"auto"};function de(t){var e,i=t.popper,n=t.popperRect,s=t.placement,o=t.variation,r=t.offsets,a=t.position,l=t.gpuAcceleration,c=t.adaptive,h=t.roundOffsets,d=!0===h?function(t){var e=t.x,i=t.y,n=window.devicePixelRatio||1;return{x:se(se(e*n)/n)||0,y:se(se(i*n)/n)||0}}(r):"function"==typeof h?h(r):r,u=d.x,f=void 0===u?0:u,p=d.y,m=void 
0===p?0:p,g=r.hasOwnProperty("x"),_=r.hasOwnProperty("y"),b=bt,v=mt,y=window;if(c){var w=te(i),E="clientHeight",A="clientWidth";w===Wt(i)&&"static"!==Yt(w=Gt(i)).position&&"absolute"===a&&(E="scrollHeight",A="scrollWidth"),w=w,s!==mt&&(s!==bt&&s!==_t||o!==Et)||(v=gt,m-=w[E]-n.height,m*=l?1:-1),s!==bt&&(s!==mt&&s!==gt||o!==Et)||(b=_t,f-=w[A]-n.width,f*=l?1:-1)}var T,O=Object.assign({position:a},c&&he);return l?Object.assign({},O,((T={})[v]=_?"0":"",T[b]=g?"0":"",T.transform=(y.devicePixelRatio||1)<=1?"translate("+f+"px, "+m+"px)":"translate3d("+f+"px, "+m+"px, 0)",T)):Object.assign({},O,((e={})[v]=_?m+"px":"",e[b]=g?f+"px":"",e.transform="",e))}const ue={name:"computeStyles",enabled:!0,phase:"beforeWrite",fn:function(t){var e=t.state,i=t.options,n=i.gpuAcceleration,s=void 0===n||n,o=i.adaptive,r=void 0===o||o,a=i.roundOffsets,l=void 0===a||a,c={placement:Ut(e.placement),variation:ce(e.placement),popper:e.elements.popper,popperRect:e.rects.popper,gpuAcceleration:s};null!=e.modifiersData.popperOffsets&&(e.styles.popper=Object.assign({},e.styles.popper,de(Object.assign({},c,{offsets:e.modifiersData.popperOffsets,position:e.options.strategy,adaptive:r,roundOffsets:l})))),null!=e.modifiersData.arrow&&(e.styles.arrow=Object.assign({},e.styles.arrow,de(Object.assign({},c,{offsets:e.modifiersData.arrow,position:"absolute",adaptive:!1,roundOffsets:l})))),e.attributes.popper=Object.assign({},e.attributes.popper,{"data-popper-placement":e.placement})},data:{}};var fe={passive:!0};const pe={name:"eventListeners",enabled:!0,phase:"write",fn:function(){},effect:function(t){var e=t.state,i=t.instance,n=t.options,s=n.scroll,o=void 0===s||s,r=n.resize,a=void 0===r||r,l=Wt(e.elements.popper),c=[].concat(e.scrollParents.reference,e.scrollParents.popper);return 
o&&c.forEach((function(t){t.addEventListener("scroll",i.update,fe)})),a&&l.addEventListener("resize",i.update,fe),function(){o&&c.forEach((function(t){t.removeEventListener("scroll",i.update,fe)})),a&&l.removeEventListener("resize",i.update,fe)}},data:{}};var me={left:"right",right:"left",bottom:"top",top:"bottom"};function ge(t){return t.replace(/left|right|bottom|top/g,(function(t){return me[t]}))}var _e={start:"end",end:"start"};function be(t){return t.replace(/start|end/g,(function(t){return _e[t]}))}function ve(t){var e=Wt(t);return{scrollLeft:e.pageXOffset,scrollTop:e.pageYOffset}}function ye(t){return Vt(Gt(t)).left+ve(t).scrollLeft}function we(t){var e=Yt(t),i=e.overflow,n=e.overflowX,s=e.overflowY;return/auto|scroll|overlay|hidden/.test(i+s+n)}function Ee(t){return["html","body","#document"].indexOf(Rt(t))>=0?t.ownerDocument.body:zt(t)&&we(t)?t:Ee(Zt(t))}function Ae(t,e){var i;void 0===e&&(e=[]);var n=Ee(t),s=n===(null==(i=t.ownerDocument)?void 0:i.body),o=Wt(n),r=s?[o].concat(o.visualViewport||[],we(n)?n:[]):n,a=e.concat(r);return s?a:a.concat(Ae(Zt(r)))}function Te(t){return Object.assign({},t,{left:t.x,top:t.y,right:t.x+t.width,bottom:t.y+t.height})}function Oe(t,e){return e===Tt?Te(function(t){var e=Wt(t),i=Gt(t),n=e.visualViewport,s=i.clientWidth,o=i.clientHeight,r=0,a=0;return n&&(s=n.width,o=n.height,/^((?!chrome|android).)*safari/i.test(navigator.userAgent)||(r=n.offsetLeft,a=n.offsetTop)),{width:s,height:o,x:r+ye(t),y:a}}(t)):zt(e)?function(t){var e=Vt(t);return e.top=e.top+t.clientTop,e.left=e.left+t.clientLeft,e.bottom=e.top+t.clientHeight,e.right=e.left+t.clientWidth,e.width=t.clientWidth,e.height=t.clientHeight,e.x=e.left,e.y=e.top,e}(e):Te(function(t){var e,i=Gt(t),n=ve(t),s=null==(e=t.ownerDocument)?void 
0:e.body,o=ie(i.scrollWidth,i.clientWidth,s?s.scrollWidth:0,s?s.clientWidth:0),r=ie(i.scrollHeight,i.clientHeight,s?s.scrollHeight:0,s?s.clientHeight:0),a=-n.scrollLeft+ye(t),l=-n.scrollTop;return"rtl"===Yt(s||i).direction&&(a+=ie(i.clientWidth,s?s.clientWidth:0)-o),{width:o,height:r,x:a,y:l}}(Gt(t)))}function Ce(t){var e,i=t.reference,n=t.element,s=t.placement,o=s?Ut(s):null,r=s?ce(s):null,a=i.x+i.width/2-n.width/2,l=i.y+i.height/2-n.height/2;switch(o){case mt:e={x:a,y:i.y-n.height};break;case gt:e={x:a,y:i.y+i.height};break;case _t:e={x:i.x+i.width,y:l};break;case bt:e={x:i.x-n.width,y:l};break;default:e={x:i.x,y:i.y}}var c=o?ee(o):null;if(null!=c){var h="y"===c?"height":"width";switch(r){case wt:e[c]=e[c]-(i[h]/2-n[h]/2);break;case Et:e[c]=e[c]+(i[h]/2-n[h]/2)}}return e}function ke(t,e){void 0===e&&(e={});var i=e,n=i.placement,s=void 0===n?t.placement:n,o=i.boundary,r=void 0===o?At:o,a=i.rootBoundary,l=void 0===a?Tt:a,c=i.elementContext,h=void 0===c?Ot:c,d=i.altBoundary,u=void 0!==d&&d,f=i.padding,p=void 0===f?0:f,m=re("number"!=typeof p?p:ae(p,yt)),g=h===Ot?Ct:Ot,_=t.rects.popper,b=t.elements[u?g:h],v=function(t,e,i){var n="clippingParents"===e?function(t){var e=Ae(Zt(t)),i=["absolute","fixed"].indexOf(Yt(t).position)>=0&&zt(t)?te(t):t;return $t(i)?e.filter((function(t){return $t(t)&&Xt(t,i)&&"body"!==Rt(t)})):[]}(t):[].concat(e),s=[].concat(n,[i]),o=s[0],r=s.reduce((function(e,i){var n=Oe(t,i);return e.top=ie(n.top,e.top),e.right=ne(n.right,e.right),e.bottom=ne(n.bottom,e.bottom),e.left=ie(n.left,e.left),e}),Oe(t,o));return r.width=r.right-r.left,r.height=r.bottom-r.top,r.x=r.left,r.y=r.top,r}($t(b)?b:b.contextElement||Gt(t.elements.popper),r,l),y=Vt(t.elements.reference),w=Ce({reference:y,element:_,strategy:"absolute",placement:s}),E=Te(Object.assign({},_,w)),A=h===Ot?E:y,T={top:v.top-A.top+m.top,bottom:A.bottom-v.bottom+m.bottom,left:v.left-A.left+m.left,right:A.right-v.right+m.right},O=t.modifiersData.offset;if(h===Ot&&O){var 
C=O[s];Object.keys(T).forEach((function(t){var e=[_t,gt].indexOf(t)>=0?1:-1,i=[mt,gt].indexOf(t)>=0?"y":"x";T[t]+=C[i]*e}))}return T}function Le(t,e){void 0===e&&(e={});var i=e,n=i.placement,s=i.boundary,o=i.rootBoundary,r=i.padding,a=i.flipVariations,l=i.allowedAutoPlacements,c=void 0===l?Lt:l,h=ce(n),d=h?a?kt:kt.filter((function(t){return ce(t)===h})):yt,u=d.filter((function(t){return c.indexOf(t)>=0}));0===u.length&&(u=d);var f=u.reduce((function(e,i){return e[i]=ke(t,{placement:i,boundary:s,rootBoundary:o,padding:r})[Ut(i)],e}),{});return Object.keys(f).sort((function(t,e){return f[t]-f[e]}))}const xe={name:"flip",enabled:!0,phase:"main",fn:function(t){var e=t.state,i=t.options,n=t.name;if(!e.modifiersData[n]._skip){for(var s=i.mainAxis,o=void 0===s||s,r=i.altAxis,a=void 0===r||r,l=i.fallbackPlacements,c=i.padding,h=i.boundary,d=i.rootBoundary,u=i.altBoundary,f=i.flipVariations,p=void 0===f||f,m=i.allowedAutoPlacements,g=e.options.placement,_=Ut(g),b=l||(_!==g&&p?function(t){if(Ut(t)===vt)return[];var e=ge(t);return[be(t),e,be(e)]}(g):[ge(g)]),v=[g].concat(b).reduce((function(t,i){return t.concat(Ut(i)===vt?Le(e,{placement:i,boundary:h,rootBoundary:d,padding:c,flipVariations:p,allowedAutoPlacements:m}):i)}),[]),y=e.rects.reference,w=e.rects.popper,E=new Map,A=!0,T=v[0],O=0;O<v.length;O++){var C=v[O],k=Ut(C),L=ce(C)===wt,x=[mt,gt].indexOf(k)>=0,D=x?"width":"height",S=ke(e,{placement:C,boundary:h,rootBoundary:d,altBoundary:u,padding:c}),N=x?L?_t:bt:L?gt:mt;y[D]>w[D]&&(N=ge(N));var I=ge(N),P=[];if(o&&P.push(S[k]<=0),a&&P.push(S[N]<=0,S[I]<=0),P.every((function(t){return t}))){T=C,A=!1;break}E.set(C,P)}if(A)for(var j=function(t){var e=v.find((function(e){var i=E.get(e);if(i)return i.slice(0,t).every((function(t){return t}))}));if(e)return T=e,"break"},M=p?3:1;M>0&&"break"!==j(M);M--);e.placement!==T&&(e.modifiersData[n]._skip=!0,e.placement=T,e.reset=!0)}},requiresIfExists:["offset"],data:{_skip:!1}};function De(t,e,i){return void 
0===i&&(i={x:0,y:0}),{top:t.top-e.height-i.y,right:t.right-e.width+i.x,bottom:t.bottom-e.height+i.y,left:t.left-e.width-i.x}}function Se(t){return[mt,_t,gt,bt].some((function(e){return t[e]>=0}))}const Ne={name:"hide",enabled:!0,phase:"main",requiresIfExists:["preventOverflow"],fn:function(t){var e=t.state,i=t.name,n=e.rects.reference,s=e.rects.popper,o=e.modifiersData.preventOverflow,r=ke(e,{elementContext:"reference"}),a=ke(e,{altBoundary:!0}),l=De(r,n),c=De(a,s,o),h=Se(l),d=Se(c);e.modifiersData[i]={referenceClippingOffsets:l,popperEscapeOffsets:c,isReferenceHidden:h,hasPopperEscaped:d},e.attributes.popper=Object.assign({},e.attributes.popper,{"data-popper-reference-hidden":h,"data-popper-escaped":d})}},Ie={name:"offset",enabled:!0,phase:"main",requires:["popperOffsets"],fn:function(t){var e=t.state,i=t.options,n=t.name,s=i.offset,o=void 0===s?[0,0]:s,r=Lt.reduce((function(t,i){return t[i]=function(t,e,i){var n=Ut(t),s=[bt,mt].indexOf(n)>=0?-1:1,o="function"==typeof i?i(Object.assign({},e,{placement:t})):i,r=o[0],a=o[1];return r=r||0,a=(a||0)*s,[bt,_t].indexOf(n)>=0?{x:a,y:r}:{x:r,y:a}}(i,e.rects,o),t}),{}),a=r[e.placement],l=a.x,c=a.y;null!=e.modifiersData.popperOffsets&&(e.modifiersData.popperOffsets.x+=l,e.modifiersData.popperOffsets.y+=c),e.modifiersData[n]=r}},Pe={name:"popperOffsets",enabled:!0,phase:"read",fn:function(t){var e=t.state,i=t.name;e.modifiersData[i]=Ce({reference:e.rects.reference,element:e.rects.popper,strategy:"absolute",placement:e.placement})},data:{}},je={name:"preventOverflow",enabled:!0,phase:"main",fn:function(t){var e=t.state,i=t.options,n=t.name,s=i.mainAxis,o=void 0===s||s,r=i.altAxis,a=void 0!==r&&r,l=i.boundary,c=i.rootBoundary,h=i.altBoundary,d=i.padding,u=i.tether,f=void 0===u||u,p=i.tetherOffset,m=void 0===p?0:p,g=ke(e,{boundary:l,rootBoundary:c,padding:d,altBoundary:h}),_=Ut(e.placement),b=ce(e.placement),v=!b,y=ee(_),w="x"===y?"y":"x",E=e.modifiersData.popperOffsets,A=e.rects.reference,T=e.rects.popper,O="function"==typeof 
m?m(Object.assign({},e.rects,{placement:e.placement})):m,C={x:0,y:0};if(E){if(o||a){var k="y"===y?mt:bt,L="y"===y?gt:_t,x="y"===y?"height":"width",D=E[y],S=E[y]+g[k],N=E[y]-g[L],I=f?-T[x]/2:0,P=b===wt?A[x]:T[x],j=b===wt?-T[x]:-A[x],M=e.elements.arrow,H=f&&M?Kt(M):{width:0,height:0},B=e.modifiersData["arrow#persistent"]?e.modifiersData["arrow#persistent"].padding:{top:0,right:0,bottom:0,left:0},R=B[k],W=B[L],$=oe(0,A[x],H[x]),z=v?A[x]/2-I-$-R-O:P-$-R-O,q=v?-A[x]/2+I+$+W+O:j+$+W+O,F=e.elements.arrow&&te(e.elements.arrow),U=F?"y"===y?F.clientTop||0:F.clientLeft||0:0,V=e.modifiersData.offset?e.modifiersData.offset[e.placement][y]:0,K=E[y]+z-V-U,X=E[y]+q-V;if(o){var Y=oe(f?ne(S,K):S,D,f?ie(N,X):N);E[y]=Y,C[y]=Y-D}if(a){var Q="x"===y?mt:bt,G="x"===y?gt:_t,Z=E[w],J=Z+g[Q],tt=Z-g[G],et=oe(f?ne(J,K):J,Z,f?ie(tt,X):tt);E[w]=et,C[w]=et-Z}}e.modifiersData[n]=C}},requiresIfExists:["offset"]};function Me(t,e,i){void 0===i&&(i=!1);var n=zt(e);zt(e)&&function(t){var e=t.getBoundingClientRect();e.width,t.offsetWidth,e.height,t.offsetHeight}(e);var s,o,r=Gt(e),a=Vt(t),l={scrollLeft:0,scrollTop:0},c={x:0,y:0};return(n||!n&&!i)&&(("body"!==Rt(e)||we(r))&&(l=(s=e)!==Wt(s)&&zt(s)?{scrollLeft:(o=s).scrollLeft,scrollTop:o.scrollTop}:ve(s)),zt(e)?((c=Vt(e)).x+=e.clientLeft,c.y+=e.clientTop):r&&(c.x=ye(r))),{x:a.left+l.scrollLeft-c.x,y:a.top+l.scrollTop-c.y,width:a.width,height:a.height}}function He(t){var e=new Map,i=new Set,n=[];function s(t){i.add(t.name),[].concat(t.requires||[],t.requiresIfExists||[]).forEach((function(t){if(!i.has(t)){var n=e.get(t);n&&s(n)}})),n.push(t)}return t.forEach((function(t){e.set(t.name,t)})),t.forEach((function(t){i.has(t.name)||s(t)})),n}var Be={placement:"bottom",modifiers:[],strategy:"absolute"};function Re(){for(var t=arguments.length,e=new Array(t),i=0;i<t;i++)e[i]=arguments[i];return!e.some((function(t){return!(t&&"function"==typeof t.getBoundingClientRect)}))}function We(t){void 0===t&&(t={});var e=t,i=e.defaultModifiers,n=void 
0===i?[]:i,s=e.defaultOptions,o=void 0===s?Be:s;return function(t,e,i){void 0===i&&(i=o);var s,r,a={placement:"bottom",orderedModifiers:[],options:Object.assign({},Be,o),modifiersData:{},elements:{reference:t,popper:e},attributes:{},styles:{}},l=[],c=!1,h={state:a,setOptions:function(i){var s="function"==typeof i?i(a.options):i;d(),a.options=Object.assign({},o,a.options,s),a.scrollParents={reference:$t(t)?Ae(t):t.contextElement?Ae(t.contextElement):[],popper:Ae(e)};var r,c,u=function(t){var e=He(t);return Bt.reduce((function(t,i){return t.concat(e.filter((function(t){return t.phase===i})))}),[])}((r=[].concat(n,a.options.modifiers),c=r.reduce((function(t,e){var i=t[e.name];return t[e.name]=i?Object.assign({},i,e,{options:Object.assign({},i.options,e.options),data:Object.assign({},i.data,e.data)}):e,t}),{}),Object.keys(c).map((function(t){return c[t]}))));return a.orderedModifiers=u.filter((function(t){return t.enabled})),a.orderedModifiers.forEach((function(t){var e=t.name,i=t.options,n=void 0===i?{}:i,s=t.effect;if("function"==typeof s){var o=s({state:a,name:e,instance:h,options:n});l.push(o||function(){})}})),h.update()},forceUpdate:function(){if(!c){var t=a.elements,e=t.reference,i=t.popper;if(Re(e,i)){a.rects={reference:Me(e,te(i),"fixed"===a.options.strategy),popper:Kt(i)},a.reset=!1,a.placement=a.options.placement,a.orderedModifiers.forEach((function(t){return a.modifiersData[t.name]=Object.assign({},t.data)}));for(var n=0;n<a.orderedModifiers.length;n++)if(!0!==a.reset){var s=a.orderedModifiers[n],o=s.fn,r=s.options,l=void 0===r?{}:r,d=s.name;"function"==typeof o&&(a=o({state:a,options:l,name:d,instance:h})||a)}else a.reset=!1,n=-1}}},update:(s=function(){return new Promise((function(t){h.forceUpdate(),t(a)}))},function(){return r||(r=new Promise((function(t){Promise.resolve().then((function(){r=void 0,t(s())}))}))),r}),destroy:function(){d(),c=!0}};if(!Re(t,e))return h;function d(){l.forEach((function(t){return t()})),l=[]}return 
h.setOptions(i).then((function(t){!c&&i.onFirstUpdate&&i.onFirstUpdate(t)})),h}}var $e=We(),ze=We({defaultModifiers:[pe,Pe,ue,Ft]}),qe=We({defaultModifiers:[pe,Pe,ue,Ft,Ie,xe,je,le,Ne]});const Fe=Object.freeze({__proto__:null,popperGenerator:We,detectOverflow:ke,createPopperBase:$e,createPopper:qe,createPopperLite:ze,top:mt,bottom:gt,right:_t,left:bt,auto:vt,basePlacements:yt,start:wt,end:Et,clippingParents:At,viewport:Tt,popper:Ot,reference:Ct,variationPlacements:kt,placements:Lt,beforeRead:xt,read:Dt,afterRead:St,beforeMain:Nt,main:It,afterMain:Pt,beforeWrite:jt,write:Mt,afterWrite:Ht,modifierPhases:Bt,applyStyles:Ft,arrow:le,computeStyles:ue,eventListeners:pe,flip:xe,hide:Ne,offset:Ie,popperOffsets:Pe,preventOverflow:je}),Ue="dropdown",Ve="Escape",Ke="Space",Xe="ArrowUp",Ye="ArrowDown",Qe=new RegExp("ArrowUp|ArrowDown|Escape"),Ge="click.bs.dropdown.data-api",Ze="keydown.bs.dropdown.data-api",Je="show",ti='[data-bs-toggle="dropdown"]',ei=".dropdown-menu",ii=m()?"top-end":"top-start",ni=m()?"top-start":"top-end",si=m()?"bottom-end":"bottom-start",oi=m()?"bottom-start":"bottom-end",ri=m()?"left-start":"right-start",ai=m()?"right-start":"left-start",li={offset:[0,2],boundary:"clippingParents",reference:"toggle",display:"dynamic",popperConfig:null,autoClose:!0},ci={offset:"(array|string|function)",boundary:"(string|element)",reference:"(string|element|object)",display:"string",popperConfig:"(null|object|function)",autoClose:"(boolean|string)"};class hi extends B{constructor(t,e){super(t),this._popper=null,this._config=this._getConfig(e),this._menu=this._getMenuElement(),this._inNavbar=this._detectNavbar()}static get Default(){return li}static get DefaultType(){return ci}static get NAME(){return Ue}toggle(){return this._isShown()?this.hide():this.show()}show(){if(c(this._element)||this._isShown(this._menu))return;const t={relatedTarget:this._element};if(j.trigger(this._element,"show.bs.dropdown",t).defaultPrevented)return;const 
e=hi.getParentFromElement(this._element);this._inNavbar?U.setDataAttribute(this._menu,"popper","none"):this._createPopper(e),"ontouchstart"in document.documentElement&&!e.closest(".navbar-nav")&&[].concat(...document.body.children).forEach((t=>j.on(t,"mouseover",d))),this._element.focus(),this._element.setAttribute("aria-expanded",!0),this._menu.classList.add(Je),this._element.classList.add(Je),j.trigger(this._element,"shown.bs.dropdown",t)}hide(){if(c(this._element)||!this._isShown(this._menu))return;const t={relatedTarget:this._element};this._completeHide(t)}dispose(){this._popper&&this._popper.destroy(),super.dispose()}update(){this._inNavbar=this._detectNavbar(),this._popper&&this._popper.update()}_completeHide(t){j.trigger(this._element,"hide.bs.dropdown",t).defaultPrevented||("ontouchstart"in document.documentElement&&[].concat(...document.body.children).forEach((t=>j.off(t,"mouseover",d))),this._popper&&this._popper.destroy(),this._menu.classList.remove(Je),this._element.classList.remove(Je),this._element.setAttribute("aria-expanded","false"),U.removeDataAttribute(this._menu,"popper"),j.trigger(this._element,"hidden.bs.dropdown",t))}_getConfig(t){if(t={...this.constructor.Default,...U.getDataAttributes(this._element),...t},a(Ue,t,this.constructor.DefaultType),"object"==typeof t.reference&&!o(t.reference)&&"function"!=typeof t.reference.getBoundingClientRect)throw new TypeError(`${Ue.toUpperCase()}: Option "reference" provided type "object" without a required "getBoundingClientRect" method.`);return t}_createPopper(t){if(void 0===Fe)throw new TypeError("Bootstrap's dropdowns require Popper (https://popper.js.org)");let e=this._element;"parent"===this._config.reference?e=t:o(this._config.reference)?e=r(this._config.reference):"object"==typeof this._config.reference&&(e=this._config.reference);const 
i=this._getPopperConfig(),n=i.modifiers.find((t=>"applyStyles"===t.name&&!1===t.enabled));this._popper=qe(e,this._menu,i),n&&U.setDataAttribute(this._menu,"popper","static")}_isShown(t=this._element){return t.classList.contains(Je)}_getMenuElement(){return V.next(this._element,ei)[0]}_getPlacement(){const t=this._element.parentNode;if(t.classList.contains("dropend"))return ri;if(t.classList.contains("dropstart"))return ai;const e="end"===getComputedStyle(this._menu).getPropertyValue("--bs-position").trim();return t.classList.contains("dropup")?e?ni:ii:e?oi:si}_detectNavbar(){return null!==this._element.closest(".navbar")}_getOffset(){const{offset:t}=this._config;return"string"==typeof t?t.split(",").map((t=>Number.parseInt(t,10))):"function"==typeof t?e=>t(e,this._element):t}_getPopperConfig(){const t={placement:this._getPlacement(),modifiers:[{name:"preventOverflow",options:{boundary:this._config.boundary}},{name:"offset",options:{offset:this._getOffset()}}]};return"static"===this._config.display&&(t.modifiers=[{name:"applyStyles",enabled:!1}]),{...t,..."function"==typeof this._config.popperConfig?this._config.popperConfig(t):this._config.popperConfig}}_selectMenuItem({key:t,target:e}){const i=V.find(".dropdown-menu .dropdown-item:not(.disabled):not(:disabled)",this._menu).filter(l);i.length&&v(i,e,t===Ye,!i.includes(e)).focus()}static jQueryInterface(t){return this.each((function(){const e=hi.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t]()}}))}static clearMenus(t){if(t&&(2===t.button||"keyup"===t.type&&"Tab"!==t.key))return;const e=V.find(ti);for(let i=0,n=e.length;i<n;i++){const n=hi.getInstance(e[i]);if(!n||!1===n._config.autoClose)continue;if(!n._isShown())continue;const s={relatedTarget:n._element};if(t){const 
e=t.composedPath(),i=e.includes(n._menu);if(e.includes(n._element)||"inside"===n._config.autoClose&&!i||"outside"===n._config.autoClose&&i)continue;if(n._menu.contains(t.target)&&("keyup"===t.type&&"Tab"===t.key||/input|select|option|textarea|form/i.test(t.target.tagName)))continue;"click"===t.type&&(s.clickEvent=t)}n._completeHide(s)}}static getParentFromElement(t){return n(t)||t.parentNode}static dataApiKeydownHandler(t){if(/input|textarea/i.test(t.target.tagName)?t.key===Ke||t.key!==Ve&&(t.key!==Ye&&t.key!==Xe||t.target.closest(ei)):!Qe.test(t.key))return;const e=this.classList.contains(Je);if(!e&&t.key===Ve)return;if(t.preventDefault(),t.stopPropagation(),c(this))return;const i=this.matches(ti)?this:V.prev(this,ti)[0],n=hi.getOrCreateInstance(i);if(t.key!==Ve)return t.key===Xe||t.key===Ye?(e||n.show(),void n._selectMenuItem(t)):void(e&&t.key!==Ke||hi.clearMenus());n.hide()}}j.on(document,Ze,ti,hi.dataApiKeydownHandler),j.on(document,Ze,ei,hi.dataApiKeydownHandler),j.on(document,Ge,hi.clearMenus),j.on(document,"keyup.bs.dropdown.data-api",hi.clearMenus),j.on(document,Ge,ti,(function(t){t.preventDefault(),hi.getOrCreateInstance(this).toggle()})),g(hi);const di=".fixed-top, .fixed-bottom, .is-fixed, .sticky-top",ui=".sticky-top";class fi{constructor(){this._element=document.body}getWidth(){const t=document.documentElement.clientWidth;return Math.abs(window.innerWidth-t)}hide(){const t=this.getWidth();this._disableOverFlow(),this._setElementAttributes(this._element,"paddingRight",(e=>e+t)),this._setElementAttributes(di,"paddingRight",(e=>e+t)),this._setElementAttributes(ui,"marginRight",(e=>e-t))}_disableOverFlow(){this._saveInitialAttribute(this._element,"overflow"),this._element.style.overflow="hidden"}_setElementAttributes(t,e,i){const n=this.getWidth();this._applyManipulationCallback(t,(t=>{if(t!==this._element&&window.innerWidth>t.clientWidth+n)return;this._saveInitialAttribute(t,e);const 
s=window.getComputedStyle(t)[e];t.style[e]=`${i(Number.parseFloat(s))}px`}))}reset(){this._resetElementAttributes(this._element,"overflow"),this._resetElementAttributes(this._element,"paddingRight"),this._resetElementAttributes(di,"paddingRight"),this._resetElementAttributes(ui,"marginRight")}_saveInitialAttribute(t,e){const i=t.style[e];i&&U.setDataAttribute(t,e,i)}_resetElementAttributes(t,e){this._applyManipulationCallback(t,(t=>{const i=U.getDataAttribute(t,e);void 0===i?t.style.removeProperty(e):(U.removeDataAttribute(t,e),t.style[e]=i)}))}_applyManipulationCallback(t,e){o(t)?e(t):V.find(t,this._element).forEach(e)}isOverflowing(){return this.getWidth()>0}}const pi={className:"modal-backdrop",isVisible:!0,isAnimated:!1,rootElement:"body",clickCallback:null},mi={className:"string",isVisible:"boolean",isAnimated:"boolean",rootElement:"(element|string)",clickCallback:"(function|null)"},gi="show",_i="mousedown.bs.backdrop";class bi{constructor(t){this._config=this._getConfig(t),this._isAppended=!1,this._element=null}show(t){this._config.isVisible?(this._append(),this._config.isAnimated&&u(this._getElement()),this._getElement().classList.add(gi),this._emulateAnimation((()=>{_(t)}))):_(t)}hide(t){this._config.isVisible?(this._getElement().classList.remove(gi),this._emulateAnimation((()=>{this.dispose(),_(t)}))):_(t)}_getElement(){if(!this._element){const t=document.createElement("div");t.className=this._config.className,this._config.isAnimated&&t.classList.add("fade"),this._element=t}return this._element}_getConfig(t){return(t={...pi,..."object"==typeof t?t:{}}).rootElement=r(t.rootElement),a("backdrop",t,mi),t}_append(){this._isAppended||(this._config.rootElement.append(this._getElement()),j.on(this._getElement(),_i,(()=>{_(this._config.clickCallback)})),this._isAppended=!0)}dispose(){this._isAppended&&(j.off(this._element,_i),this._element.remove(),this._isAppended=!1)}_emulateAnimation(t){b(t,this._getElement(),this._config.isAnimated)}}const 
vi={trapElement:null,autofocus:!0},yi={trapElement:"element",autofocus:"boolean"},wi=".bs.focustrap",Ei="backward";class Ai{constructor(t){this._config=this._getConfig(t),this._isActive=!1,this._lastTabNavDirection=null}activate(){const{trapElement:t,autofocus:e}=this._config;this._isActive||(e&&t.focus(),j.off(document,wi),j.on(document,"focusin.bs.focustrap",(t=>this._handleFocusin(t))),j.on(document,"keydown.tab.bs.focustrap",(t=>this._handleKeydown(t))),this._isActive=!0)}deactivate(){this._isActive&&(this._isActive=!1,j.off(document,wi))}_handleFocusin(t){const{target:e}=t,{trapElement:i}=this._config;if(e===document||e===i||i.contains(e))return;const n=V.focusableChildren(i);0===n.length?i.focus():this._lastTabNavDirection===Ei?n[n.length-1].focus():n[0].focus()}_handleKeydown(t){"Tab"===t.key&&(this._lastTabNavDirection=t.shiftKey?Ei:"forward")}_getConfig(t){return t={...vi,..."object"==typeof t?t:{}},a("focustrap",t,yi),t}}const Ti="modal",Oi="Escape",Ci={backdrop:!0,keyboard:!0,focus:!0},ki={backdrop:"(boolean|string)",keyboard:"boolean",focus:"boolean"},Li="hidden.bs.modal",xi="show.bs.modal",Di="resize.bs.modal",Si="click.dismiss.bs.modal",Ni="keydown.dismiss.bs.modal",Ii="mousedown.dismiss.bs.modal",Pi="modal-open",ji="show",Mi="modal-static";class Hi extends B{constructor(t,e){super(t),this._config=this._getConfig(e),this._dialog=V.findOne(".modal-dialog",this._element),this._backdrop=this._initializeBackDrop(),this._focustrap=this._initializeFocusTrap(),this._isShown=!1,this._ignoreBackdropClick=!1,this._isTransitioning=!1,this._scrollBar=new fi}static get Default(){return Ci}static get NAME(){return Ti}toggle(t){return 
this._isShown?this.hide():this.show(t)}show(t){this._isShown||this._isTransitioning||j.trigger(this._element,xi,{relatedTarget:t}).defaultPrevented||(this._isShown=!0,this._isAnimated()&&(this._isTransitioning=!0),this._scrollBar.hide(),document.body.classList.add(Pi),this._adjustDialog(),this._setEscapeEvent(),this._setResizeEvent(),j.on(this._dialog,Ii,(()=>{j.one(this._element,"mouseup.dismiss.bs.modal",(t=>{t.target===this._element&&(this._ignoreBackdropClick=!0)}))})),this._showBackdrop((()=>this._showElement(t))))}hide(){if(!this._isShown||this._isTransitioning)return;if(j.trigger(this._element,"hide.bs.modal").defaultPrevented)return;this._isShown=!1;const t=this._isAnimated();t&&(this._isTransitioning=!0),this._setEscapeEvent(),this._setResizeEvent(),this._focustrap.deactivate(),this._element.classList.remove(ji),j.off(this._element,Si),j.off(this._dialog,Ii),this._queueCallback((()=>this._hideModal()),this._element,t)}dispose(){[window,this._dialog].forEach((t=>j.off(t,".bs.modal"))),this._backdrop.dispose(),this._focustrap.deactivate(),super.dispose()}handleUpdate(){this._adjustDialog()}_initializeBackDrop(){return new bi({isVisible:Boolean(this._config.backdrop),isAnimated:this._isAnimated()})}_initializeFocusTrap(){return new Ai({trapElement:this._element})}_getConfig(t){return t={...Ci,...U.getDataAttributes(this._element),..."object"==typeof t?t:{}},a(Ti,t,ki),t}_showElement(t){const 
e=this._isAnimated(),i=V.findOne(".modal-body",this._dialog);this._element.parentNode&&this._element.parentNode.nodeType===Node.ELEMENT_NODE||document.body.append(this._element),this._element.style.display="block",this._element.removeAttribute("aria-hidden"),this._element.setAttribute("aria-modal",!0),this._element.setAttribute("role","dialog"),this._element.scrollTop=0,i&&(i.scrollTop=0),e&&u(this._element),this._element.classList.add(ji),this._queueCallback((()=>{this._config.focus&&this._focustrap.activate(),this._isTransitioning=!1,j.trigger(this._element,"shown.bs.modal",{relatedTarget:t})}),this._dialog,e)}_setEscapeEvent(){this._isShown?j.on(this._element,Ni,(t=>{this._config.keyboard&&t.key===Oi?(t.preventDefault(),this.hide()):this._config.keyboard||t.key!==Oi||this._triggerBackdropTransition()})):j.off(this._element,Ni)}_setResizeEvent(){this._isShown?j.on(window,Di,(()=>this._adjustDialog())):j.off(window,Di)}_hideModal(){this._element.style.display="none",this._element.setAttribute("aria-hidden",!0),this._element.removeAttribute("aria-modal"),this._element.removeAttribute("role"),this._isTransitioning=!1,this._backdrop.hide((()=>{document.body.classList.remove(Pi),this._resetAdjustments(),this._scrollBar.reset(),j.trigger(this._element,Li)}))}_showBackdrop(t){j.on(this._element,Si,(t=>{this._ignoreBackdropClick?this._ignoreBackdropClick=!1:t.target===t.currentTarget&&(!0===this._config.backdrop?this.hide():"static"===this._config.backdrop&&this._triggerBackdropTransition())})),this._backdrop.show(t)}_isAnimated(){return 
this._element.classList.contains("fade")}_triggerBackdropTransition(){if(j.trigger(this._element,"hidePrevented.bs.modal").defaultPrevented)return;const{classList:t,scrollHeight:e,style:i}=this._element,n=e>document.documentElement.clientHeight;!n&&"hidden"===i.overflowY||t.contains(Mi)||(n||(i.overflowY="hidden"),t.add(Mi),this._queueCallback((()=>{t.remove(Mi),n||this._queueCallback((()=>{i.overflowY=""}),this._dialog)}),this._dialog),this._element.focus())}_adjustDialog(){const t=this._element.scrollHeight>document.documentElement.clientHeight,e=this._scrollBar.getWidth(),i=e>0;(!i&&t&&!m()||i&&!t&&m())&&(this._element.style.paddingLeft=`${e}px`),(i&&!t&&!m()||!i&&t&&m())&&(this._element.style.paddingRight=`${e}px`)}_resetAdjustments(){this._element.style.paddingLeft="",this._element.style.paddingRight=""}static jQueryInterface(t,e){return this.each((function(){const i=Hi.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===i[t])throw new TypeError(`No method named "${t}"`);i[t](e)}}))}}j.on(document,"click.bs.modal.data-api",'[data-bs-toggle="modal"]',(function(t){const e=n(this);["A","AREA"].includes(this.tagName)&&t.preventDefault(),j.one(e,xi,(t=>{t.defaultPrevented||j.one(e,Li,(()=>{l(this)&&this.focus()}))}));const i=V.findOne(".modal.show");i&&Hi.getInstance(i).hide(),Hi.getOrCreateInstance(e).toggle(this)})),R(Hi),g(Hi);const Bi="offcanvas",Ri={backdrop:!0,keyboard:!0,scroll:!1},Wi={backdrop:"boolean",keyboard:"boolean",scroll:"boolean"},$i="show",zi=".offcanvas.show",qi="hidden.bs.offcanvas";class Fi extends B{constructor(t,e){super(t),this._config=this._getConfig(e),this._isShown=!1,this._backdrop=this._initializeBackDrop(),this._focustrap=this._initializeFocusTrap(),this._addEventListeners()}static get NAME(){return Bi}static get Default(){return Ri}toggle(t){return 
this._isShown?this.hide():this.show(t)}show(t){this._isShown||j.trigger(this._element,"show.bs.offcanvas",{relatedTarget:t}).defaultPrevented||(this._isShown=!0,this._element.style.visibility="visible",this._backdrop.show(),this._config.scroll||(new fi).hide(),this._element.removeAttribute("aria-hidden"),this._element.setAttribute("aria-modal",!0),this._element.setAttribute("role","dialog"),this._element.classList.add($i),this._queueCallback((()=>{this._config.scroll||this._focustrap.activate(),j.trigger(this._element,"shown.bs.offcanvas",{relatedTarget:t})}),this._element,!0))}hide(){this._isShown&&(j.trigger(this._element,"hide.bs.offcanvas").defaultPrevented||(this._focustrap.deactivate(),this._element.blur(),this._isShown=!1,this._element.classList.remove($i),this._backdrop.hide(),this._queueCallback((()=>{this._element.setAttribute("aria-hidden",!0),this._element.removeAttribute("aria-modal"),this._element.removeAttribute("role"),this._element.style.visibility="hidden",this._config.scroll||(new fi).reset(),j.trigger(this._element,qi)}),this._element,!0)))}dispose(){this._backdrop.dispose(),this._focustrap.deactivate(),super.dispose()}_getConfig(t){return t={...Ri,...U.getDataAttributes(this._element),..."object"==typeof t?t:{}},a(Bi,t,Wi),t}_initializeBackDrop(){return new bi({className:"offcanvas-backdrop",isVisible:this._config.backdrop,isAnimated:!0,rootElement:this._element.parentNode,clickCallback:()=>this.hide()})}_initializeFocusTrap(){return new Ai({trapElement:this._element})}_addEventListeners(){j.on(this._element,"keydown.dismiss.bs.offcanvas",(t=>{this._config.keyboard&&"Escape"===t.key&&this.hide()}))}static jQueryInterface(t){return this.each((function(){const e=Fi.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t]||t.startsWith("_")||"constructor"===t)throw new TypeError(`No method named "${t}"`);e[t](this)}}))}}j.on(document,"click.bs.offcanvas.data-api",'[data-bs-toggle="offcanvas"]',(function(t){const 
e=n(this);if(["A","AREA"].includes(this.tagName)&&t.preventDefault(),c(this))return;j.one(e,qi,(()=>{l(this)&&this.focus()}));const i=V.findOne(zi);i&&i!==e&&Fi.getInstance(i).hide(),Fi.getOrCreateInstance(e).toggle(this)})),j.on(window,"load.bs.offcanvas.data-api",(()=>V.find(zi).forEach((t=>Fi.getOrCreateInstance(t).show())))),R(Fi),g(Fi);const Ui=new Set(["background","cite","href","itemtype","longdesc","poster","src","xlink:href"]),Vi=/^(?:(?:https?|mailto|ftp|tel|file|sms):|[^#&/:?]*(?:[#/?]|$))/i,Ki=/^data:(?:image\/(?:bmp|gif|jpeg|jpg|png|tiff|webp)|video\/(?:mpeg|mp4|ogg|webm)|audio\/(?:mp3|oga|ogg|opus));base64,[\d+/a-z]+=*$/i,Xi=(t,e)=>{const i=t.nodeName.toLowerCase();if(e.includes(i))return!Ui.has(i)||Boolean(Vi.test(t.nodeValue)||Ki.test(t.nodeValue));const n=e.filter((t=>t instanceof RegExp));for(let t=0,e=n.length;t<e;t++)if(n[t].test(i))return!0;return!1};function Yi(t,e,i){if(!t.length)return t;if(i&&"function"==typeof i)return i(t);const n=(new window.DOMParser).parseFromString(t,"text/html"),s=[].concat(...n.body.querySelectorAll("*"));for(let t=0,i=s.length;t<i;t++){const i=s[t],n=i.nodeName.toLowerCase();if(!Object.keys(e).includes(n)){i.remove();continue}const o=[].concat(...i.attributes),r=[].concat(e["*"]||[],e[n]||[]);o.forEach((t=>{Xi(t,r)||i.removeAttribute(t.nodeName)}))}return n.body.innerHTML}const Qi="tooltip",Gi=new Set(["sanitize","allowList","sanitizeFn"]),Zi={animation:"boolean",template:"string",title:"(string|element|function)",trigger:"string",delay:"(number|object)",html:"boolean",selector:"(string|boolean)",placement:"(string|function)",offset:"(array|string|function)",container:"(string|element|boolean)",fallbackPlacements:"array",boundary:"(string|element)",customClass:"(string|function)",sanitize:"boolean",sanitizeFn:"(null|function)",allowList:"object",popperConfig:"(null|object|function)"},Ji={AUTO:"auto",TOP:"top",RIGHT:m()?"left":"right",BOTTOM:"bottom",LEFT:m()?"right":"left"},tn={animation:!0,template:'<div 
class="tooltip" role="tooltip"><div class="tooltip-arrow"></div><div class="tooltip-inner"></div></div>',trigger:"hover focus",title:"",delay:0,html:!1,selector:!1,placement:"top",offset:[0,0],container:!1,fallbackPlacements:["top","right","bottom","left"],boundary:"clippingParents",customClass:"",sanitize:!0,sanitizeFn:null,allowList:{"*":["class","dir","id","lang","role",/^aria-[\w-]*$/i],a:["target","href","title","rel"],area:[],b:[],br:[],col:[],code:[],div:[],em:[],hr:[],h1:[],h2:[],h3:[],h4:[],h5:[],h6:[],i:[],img:["src","srcset","alt","title","width","height"],li:[],ol:[],p:[],pre:[],s:[],small:[],span:[],sub:[],sup:[],strong:[],u:[],ul:[]},popperConfig:null},en={HIDE:"hide.bs.tooltip",HIDDEN:"hidden.bs.tooltip",SHOW:"show.bs.tooltip",SHOWN:"shown.bs.tooltip",INSERTED:"inserted.bs.tooltip",CLICK:"click.bs.tooltip",FOCUSIN:"focusin.bs.tooltip",FOCUSOUT:"focusout.bs.tooltip",MOUSEENTER:"mouseenter.bs.tooltip",MOUSELEAVE:"mouseleave.bs.tooltip"},nn="fade",sn="show",on="show",rn="out",an=".tooltip-inner",ln=".modal",cn="hide.bs.modal",hn="hover",dn="focus";class un extends B{constructor(t,e){if(void 0===Fe)throw new TypeError("Bootstrap's tooltips require Popper (https://popper.js.org)");super(t),this._isEnabled=!0,this._timeout=0,this._hoverState="",this._activeTrigger={},this._popper=null,this._config=this._getConfig(e),this.tip=null,this._setListeners()}static get Default(){return tn}static get NAME(){return Qi}static get Event(){return en}static get DefaultType(){return Zi}enable(){this._isEnabled=!0}disable(){this._isEnabled=!1}toggleEnabled(){this._isEnabled=!this._isEnabled}toggle(t){if(this._isEnabled)if(t){const e=this._initializeOnDelegatedTarget(t);e._activeTrigger.click=!e._activeTrigger.click,e._isWithActiveTrigger()?e._enter(null,e):e._leave(null,e)}else{if(this.getTipElement().classList.contains(sn))return void 
this._leave(null,this);this._enter(null,this)}}dispose(){clearTimeout(this._timeout),j.off(this._element.closest(ln),cn,this._hideModalHandler),this.tip&&this.tip.remove(),this._disposePopper(),super.dispose()}show(){if("none"===this._element.style.display)throw new Error("Please use show on visible elements");if(!this.isWithContent()||!this._isEnabled)return;const t=j.trigger(this._element,this.constructor.Event.SHOW),e=h(this._element),i=null===e?this._element.ownerDocument.documentElement.contains(this._element):e.contains(this._element);if(t.defaultPrevented||!i)return;"tooltip"===this.constructor.NAME&&this.tip&&this.getTitle()!==this.tip.querySelector(an).innerHTML&&(this._disposePopper(),this.tip.remove(),this.tip=null);const n=this.getTipElement(),s=(t=>{do{t+=Math.floor(1e6*Math.random())}while(document.getElementById(t));return t})(this.constructor.NAME);n.setAttribute("id",s),this._element.setAttribute("aria-describedby",s),this._config.animation&&n.classList.add(nn);const o="function"==typeof this._config.placement?this._config.placement.call(this,n,this._element):this._config.placement,r=this._getAttachment(o);this._addAttachmentClass(r);const{container:a}=this._config;H.set(n,this.constructor.DATA_KEY,this),this._element.ownerDocument.documentElement.contains(this.tip)||(a.append(n),j.trigger(this._element,this.constructor.Event.INSERTED)),this._popper?this._popper.update():this._popper=qe(this._element,n,this._getPopperConfig(r)),n.classList.add(sn);const l=this._resolvePossibleFunction(this._config.customClass);l&&n.classList.add(...l.split(" ")),"ontouchstart"in document.documentElement&&[].concat(...document.body.children).forEach((t=>{j.on(t,"mouseover",d)}));const c=this.tip.classList.contains(nn);this._queueCallback((()=>{const t=this._hoverState;this._hoverState=null,j.trigger(this._element,this.constructor.Event.SHOWN),t===rn&&this._leave(null,this)}),this.tip,c)}hide(){if(!this._popper)return;const 
t=this.getTipElement();if(j.trigger(this._element,this.constructor.Event.HIDE).defaultPrevented)return;t.classList.remove(sn),"ontouchstart"in document.documentElement&&[].concat(...document.body.children).forEach((t=>j.off(t,"mouseover",d))),this._activeTrigger.click=!1,this._activeTrigger.focus=!1,this._activeTrigger.hover=!1;const e=this.tip.classList.contains(nn);this._queueCallback((()=>{this._isWithActiveTrigger()||(this._hoverState!==on&&t.remove(),this._cleanTipClass(),this._element.removeAttribute("aria-describedby"),j.trigger(this._element,this.constructor.Event.HIDDEN),this._disposePopper())}),this.tip,e),this._hoverState=""}update(){null!==this._popper&&this._popper.update()}isWithContent(){return Boolean(this.getTitle())}getTipElement(){if(this.tip)return this.tip;const t=document.createElement("div");t.innerHTML=this._config.template;const e=t.children[0];return this.setContent(e),e.classList.remove(nn,sn),this.tip=e,this.tip}setContent(t){this._sanitizeAndSetContent(t,this.getTitle(),an)}_sanitizeAndSetContent(t,e,i){const n=V.findOne(i,t);e||!n?this.setElementContent(n,e):n.remove()}setElementContent(t,e){if(null!==t)return o(e)?(e=r(e),void(this._config.html?e.parentNode!==t&&(t.innerHTML="",t.append(e)):t.textContent=e.textContent)):void(this._config.html?(this._config.sanitize&&(e=Yi(e,this._config.allowList,this._config.sanitizeFn)),t.innerHTML=e):t.textContent=e)}getTitle(){const t=this._element.getAttribute("data-bs-original-title")||this._config.title;return this._resolvePossibleFunction(t)}updateAttachment(t){return"right"===t?"end":"left"===t?"start":t}_initializeOnDelegatedTarget(t,e){return e||this.constructor.getOrCreateInstance(t.delegateTarget,this._getDelegateConfig())}_getOffset(){const{offset:t}=this._config;return"string"==typeof t?t.split(",").map((t=>Number.parseInt(t,10))):"function"==typeof t?e=>t(e,this._element):t}_resolvePossibleFunction(t){return"function"==typeof t?t.call(this._element):t}_getPopperConfig(t){const 
e={placement:t,modifiers:[{name:"flip",options:{fallbackPlacements:this._config.fallbackPlacements}},{name:"offset",options:{offset:this._getOffset()}},{name:"preventOverflow",options:{boundary:this._config.boundary}},{name:"arrow",options:{element:`.${this.constructor.NAME}-arrow`}},{name:"onChange",enabled:!0,phase:"afterWrite",fn:t=>this._handlePopperPlacementChange(t)}],onFirstUpdate:t=>{t.options.placement!==t.placement&&this._handlePopperPlacementChange(t)}};return{...e,..."function"==typeof this._config.popperConfig?this._config.popperConfig(e):this._config.popperConfig}}_addAttachmentClass(t){this.getTipElement().classList.add(`${this._getBasicClassPrefix()}-${this.updateAttachment(t)}`)}_getAttachment(t){return Ji[t.toUpperCase()]}_setListeners(){this._config.trigger.split(" ").forEach((t=>{if("click"===t)j.on(this._element,this.constructor.Event.CLICK,this._config.selector,(t=>this.toggle(t)));else if("manual"!==t){const e=t===hn?this.constructor.Event.MOUSEENTER:this.constructor.Event.FOCUSIN,i=t===hn?this.constructor.Event.MOUSELEAVE:this.constructor.Event.FOCUSOUT;j.on(this._element,e,this._config.selector,(t=>this._enter(t))),j.on(this._element,i,this._config.selector,(t=>this._leave(t)))}})),this._hideModalHandler=()=>{this._element&&this.hide()},j.on(this._element.closest(ln),cn,this._hideModalHandler),this._config.selector?this._config={...this._config,trigger:"manual",selector:""}:this._fixTitle()}_fixTitle(){const t=this._element.getAttribute("title"),e=typeof 
this._element.getAttribute("data-bs-original-title");(t||"string"!==e)&&(this._element.setAttribute("data-bs-original-title",t||""),!t||this._element.getAttribute("aria-label")||this._element.textContent||this._element.setAttribute("aria-label",t),this._element.setAttribute("title",""))}_enter(t,e){e=this._initializeOnDelegatedTarget(t,e),t&&(e._activeTrigger["focusin"===t.type?dn:hn]=!0),e.getTipElement().classList.contains(sn)||e._hoverState===on?e._hoverState=on:(clearTimeout(e._timeout),e._hoverState=on,e._config.delay&&e._config.delay.show?e._timeout=setTimeout((()=>{e._hoverState===on&&e.show()}),e._config.delay.show):e.show())}_leave(t,e){e=this._initializeOnDelegatedTarget(t,e),t&&(e._activeTrigger["focusout"===t.type?dn:hn]=e._element.contains(t.relatedTarget)),e._isWithActiveTrigger()||(clearTimeout(e._timeout),e._hoverState=rn,e._config.delay&&e._config.delay.hide?e._timeout=setTimeout((()=>{e._hoverState===rn&&e.hide()}),e._config.delay.hide):e.hide())}_isWithActiveTrigger(){for(const t in this._activeTrigger)if(this._activeTrigger[t])return!0;return!1}_getConfig(t){const e=U.getDataAttributes(this._element);return Object.keys(e).forEach((t=>{Gi.has(t)&&delete e[t]})),(t={...this.constructor.Default,...e,..."object"==typeof t&&t?t:{}}).container=!1===t.container?document.body:r(t.container),"number"==typeof t.delay&&(t.delay={show:t.delay,hide:t.delay}),"number"==typeof t.title&&(t.title=t.title.toString()),"number"==typeof t.content&&(t.content=t.content.toString()),a(Qi,t,this.constructor.DefaultType),t.sanitize&&(t.template=Yi(t.template,t.allowList,t.sanitizeFn)),t}_getDelegateConfig(){const t={};for(const e in this._config)this.constructor.Default[e]!==this._config[e]&&(t[e]=this._config[e]);return t}_cleanTipClass(){const t=this.getTipElement(),e=new 
RegExp(`(^|\\s)${this._getBasicClassPrefix()}\\S+`,"g"),i=t.getAttribute("class").match(e);null!==i&&i.length>0&&i.map((t=>t.trim())).forEach((e=>t.classList.remove(e)))}_getBasicClassPrefix(){return"bs-tooltip"}_handlePopperPlacementChange(t){const{state:e}=t;e&&(this.tip=e.elements.popper,this._cleanTipClass(),this._addAttachmentClass(this._getAttachment(e.placement)))}_disposePopper(){this._popper&&(this._popper.destroy(),this._popper=null)}static jQueryInterface(t){return this.each((function(){const e=un.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t]()}}))}}g(un);const fn={...un.Default,placement:"right",offset:[0,8],trigger:"click",content:"",template:'<div class="popover" role="tooltip"><div class="popover-arrow"></div><h3 class="popover-header"></h3><div class="popover-body"></div></div>'},pn={...un.DefaultType,content:"(string|element|function)"},mn={HIDE:"hide.bs.popover",HIDDEN:"hidden.bs.popover",SHOW:"show.bs.popover",SHOWN:"shown.bs.popover",INSERTED:"inserted.bs.popover",CLICK:"click.bs.popover",FOCUSIN:"focusin.bs.popover",FOCUSOUT:"focusout.bs.popover",MOUSEENTER:"mouseenter.bs.popover",MOUSELEAVE:"mouseleave.bs.popover"};class gn extends un{static get Default(){return fn}static get NAME(){return"popover"}static get Event(){return mn}static get DefaultType(){return pn}isWithContent(){return this.getTitle()||this._getContent()}setContent(t){this._sanitizeAndSetContent(t,this.getTitle(),".popover-header"),this._sanitizeAndSetContent(t,this._getContent(),".popover-body")}_getContent(){return this._resolvePossibleFunction(this._config.content)}_getBasicClassPrefix(){return"bs-popover"}static jQueryInterface(t){return this.each((function(){const e=gn.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t]()}}))}}g(gn);const 
_n="scrollspy",bn={offset:10,method:"auto",target:""},vn={offset:"number",method:"string",target:"(string|element)"},yn="active",wn=".nav-link, .list-group-item, .dropdown-item",En="position";class An extends B{constructor(t,e){super(t),this._scrollElement="BODY"===this._element.tagName?window:this._element,this._config=this._getConfig(e),this._offsets=[],this._targets=[],this._activeTarget=null,this._scrollHeight=0,j.on(this._scrollElement,"scroll.bs.scrollspy",(()=>this._process())),this.refresh(),this._process()}static get Default(){return bn}static get NAME(){return _n}refresh(){const t=this._scrollElement===this._scrollElement.window?"offset":En,e="auto"===this._config.method?t:this._config.method,n=e===En?this._getScrollTop():0;this._offsets=[],this._targets=[],this._scrollHeight=this._getScrollHeight(),V.find(wn,this._config.target).map((t=>{const s=i(t),o=s?V.findOne(s):null;if(o){const t=o.getBoundingClientRect();if(t.width||t.height)return[U[e](o).top+n,s]}return null})).filter((t=>t)).sort(((t,e)=>t[0]-e[0])).forEach((t=>{this._offsets.push(t[0]),this._targets.push(t[1])}))}dispose(){j.off(this._scrollElement,".bs.scrollspy"),super.dispose()}_getConfig(t){return(t={...bn,...U.getDataAttributes(this._element),..."object"==typeof t&&t?t:{}}).target=r(t.target)||document.documentElement,a(_n,t,vn),t}_getScrollTop(){return this._scrollElement===window?this._scrollElement.pageYOffset:this._scrollElement.scrollTop}_getScrollHeight(){return this._scrollElement.scrollHeight||Math.max(document.body.scrollHeight,document.documentElement.scrollHeight)}_getOffsetHeight(){return this._scrollElement===window?window.innerHeight:this._scrollElement.getBoundingClientRect().height}_process(){const t=this._getScrollTop()+this._config.offset,e=this._getScrollHeight(),i=this._config.offset+e-this._getOffsetHeight();if(this._scrollHeight!==e&&this.refresh(),t>=i){const 
t=this._targets[this._targets.length-1];this._activeTarget!==t&&this._activate(t)}else{if(this._activeTarget&&t<this._offsets[0]&&this._offsets[0]>0)return this._activeTarget=null,void this._clear();for(let e=this._offsets.length;e--;)this._activeTarget!==this._targets[e]&&t>=this._offsets[e]&&(void 0===this._offsets[e+1]||t<this._offsets[e+1])&&this._activate(this._targets[e])}}_activate(t){this._activeTarget=t,this._clear();const e=wn.split(",").map((e=>`${e}[data-bs-target="${t}"],${e}[href="${t}"]`)),i=V.findOne(e.join(","),this._config.target);i.classList.add(yn),i.classList.contains("dropdown-item")?V.findOne(".dropdown-toggle",i.closest(".dropdown")).classList.add(yn):V.parents(i,".nav, .list-group").forEach((t=>{V.prev(t,".nav-link, .list-group-item").forEach((t=>t.classList.add(yn))),V.prev(t,".nav-item").forEach((t=>{V.children(t,".nav-link").forEach((t=>t.classList.add(yn)))}))})),j.trigger(this._scrollElement,"activate.bs.scrollspy",{relatedTarget:t})}_clear(){V.find(wn,this._config.target).filter((t=>t.classList.contains(yn))).forEach((t=>t.classList.remove(yn)))}static jQueryInterface(t){return this.each((function(){const e=An.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t]()}}))}}j.on(window,"load.bs.scrollspy.data-api",(()=>{V.find('[data-bs-spy="scroll"]').forEach((t=>new An(t)))})),g(An);const Tn="active",On="fade",Cn="show",kn=".active",Ln=":scope > li > .active";class xn extends B{static get NAME(){return"tab"}show(){if(this._element.parentNode&&this._element.parentNode.nodeType===Node.ELEMENT_NODE&&this._element.classList.contains(Tn))return;let t;const e=n(this._element),i=this._element.closest(".nav, .list-group");if(i){const e="UL"===i.nodeName||"OL"===i.nodeName?Ln:kn;t=V.find(e,i),t=t[t.length-1]}const 
s=t?j.trigger(t,"hide.bs.tab",{relatedTarget:this._element}):null;if(j.trigger(this._element,"show.bs.tab",{relatedTarget:t}).defaultPrevented||null!==s&&s.defaultPrevented)return;this._activate(this._element,i);const o=()=>{j.trigger(t,"hidden.bs.tab",{relatedTarget:this._element}),j.trigger(this._element,"shown.bs.tab",{relatedTarget:t})};e?this._activate(e,e.parentNode,o):o()}_activate(t,e,i){const n=(!e||"UL"!==e.nodeName&&"OL"!==e.nodeName?V.children(e,kn):V.find(Ln,e))[0],s=i&&n&&n.classList.contains(On),o=()=>this._transitionComplete(t,n,i);n&&s?(n.classList.remove(Cn),this._queueCallback(o,t,!0)):o()}_transitionComplete(t,e,i){if(e){e.classList.remove(Tn);const t=V.findOne(":scope > .dropdown-menu .active",e.parentNode);t&&t.classList.remove(Tn),"tab"===e.getAttribute("role")&&e.setAttribute("aria-selected",!1)}t.classList.add(Tn),"tab"===t.getAttribute("role")&&t.setAttribute("aria-selected",!0),u(t),t.classList.contains(On)&&t.classList.add(Cn);let n=t.parentNode;if(n&&"LI"===n.nodeName&&(n=n.parentNode),n&&n.classList.contains("dropdown-menu")){const e=t.closest(".dropdown");e&&V.find(".dropdown-toggle",e).forEach((t=>t.classList.add(Tn))),t.setAttribute("aria-expanded",!0)}i&&i()}static jQueryInterface(t){return this.each((function(){const e=xn.getOrCreateInstance(this);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t]()}}))}}j.on(document,"click.bs.tab.data-api",'[data-bs-toggle="tab"], [data-bs-toggle="pill"], [data-bs-toggle="list"]',(function(t){["A","AREA"].includes(this.tagName)&&t.preventDefault(),c(this)||xn.getOrCreateInstance(this).show()})),g(xn);const Dn="toast",Sn="hide",Nn="show",In="showing",Pn={animation:"boolean",autohide:"boolean",delay:"number"},jn={animation:!0,autohide:!0,delay:5e3};class Mn extends B{constructor(t,e){super(t),this._config=this._getConfig(e),this._timeout=null,this._hasMouseInteraction=!1,this._hasKeyboardInteraction=!1,this._setListeners()}static get DefaultType(){return 
Pn}static get Default(){return jn}static get NAME(){return Dn}show(){j.trigger(this._element,"show.bs.toast").defaultPrevented||(this._clearTimeout(),this._config.animation&&this._element.classList.add("fade"),this._element.classList.remove(Sn),u(this._element),this._element.classList.add(Nn),this._element.classList.add(In),this._queueCallback((()=>{this._element.classList.remove(In),j.trigger(this._element,"shown.bs.toast"),this._maybeScheduleHide()}),this._element,this._config.animation))}hide(){this._element.classList.contains(Nn)&&(j.trigger(this._element,"hide.bs.toast").defaultPrevented||(this._element.classList.add(In),this._queueCallback((()=>{this._element.classList.add(Sn),this._element.classList.remove(In),this._element.classList.remove(Nn),j.trigger(this._element,"hidden.bs.toast")}),this._element,this._config.animation)))}dispose(){this._clearTimeout(),this._element.classList.contains(Nn)&&this._element.classList.remove(Nn),super.dispose()}_getConfig(t){return t={...jn,...U.getDataAttributes(this._element),..."object"==typeof t&&t?t:{}},a(Dn,t,this.constructor.DefaultType),t}_maybeScheduleHide(){this._config.autohide&&(this._hasMouseInteraction||this._hasKeyboardInteraction||(this._timeout=setTimeout((()=>{this.hide()}),this._config.delay)))}_onInteraction(t,e){switch(t.type){case"mouseover":case"mouseout":this._hasMouseInteraction=e;break;case"focusin":case"focusout":this._hasKeyboardInteraction=e}if(e)return void this._clearTimeout();const i=t.relatedTarget;this._element===i||this._element.contains(i)||this._maybeScheduleHide()}_setListeners(){j.on(this._element,"mouseover.bs.toast",(t=>this._onInteraction(t,!0))),j.on(this._element,"mouseout.bs.toast",(t=>this._onInteraction(t,!1))),j.on(this._element,"focusin.bs.toast",(t=>this._onInteraction(t,!0))),j.on(this._element,"focusout.bs.toast",(t=>this._onInteraction(t,!1)))}_clearTimeout(){clearTimeout(this._timeout),this._timeout=null}static jQueryInterface(t){return this.each((function(){const 
e=Mn.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t](this)}}))}}return R(Mn),g(Mn),{Alert:W,Button:z,Carousel:st,Collapse:pt,Dropdown:hi,Modal:Hi,Offcanvas:Fi,Popover:gn,ScrollSpy:An,Tab:xn,Toast:Mn,Tooltip:un}}));
- //# sourceMappingURL=bootstrap.bundle.min.js.map
 
spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/util/visualizer.py DELETED
@@ -1,233 +0,0 @@
1
- import numpy as np
2
- import os
3
- import sys
4
- import ntpath
5
- import time
6
- from . import util, html
7
- from subprocess import Popen, PIPE
8
-
9
- if sys.version_info[0] == 2:
10
- VisdomExceptionBase = Exception
11
- else:
12
- VisdomExceptionBase = ConnectionError
13
-
14
-
15
- def save_images(webpage, visuals, image_path, aspect_ratio=1.0, width=256):
16
- """Save examples to the disk.
17
-
18
- Parameters:
19
- webpage (the HTML class) -- the HTML webpage class that stores these imaegs (see html.py for more details)
20
- visuals (OrderedDict) -- an ordered dictionary that stores (name, examples (either tensor or numpy) ) pairs
21
- image_path (str) -- the string is used to create image paths
22
- aspect_ratio (float) -- the aspect ratio of saved examples
23
- width (int) -- the examples will be resized to width x width
24
-
25
- This function will save examples stored in 'visuals' to the HTML file specified by 'webpage'.
26
- """
27
- image_dir = webpage.get_image_dir()
28
- short_path = ntpath.basename(image_path[0])
29
- name = os.path.splitext(short_path)[0]
30
-
31
- webpage.add_header(name)
32
- ims, txts, links = [], [], []
33
-
34
- for label, im_data in visuals.items():
35
- im = util.tensor2im(im_data)
36
- image_name = '%s/%s.png' % (label, name)
37
- os.makedirs(os.path.join(image_dir, label), exist_ok=True)
38
- save_path = os.path.join(image_dir, image_name)
39
- util.save_image(im, save_path)
40
- ims.append(image_name)
41
- txts.append(label)
42
- links.append(image_name)
43
- webpage.add_images(ims, txts, links, width=width)
44
-
45
-
46
- class Visualizer():
47
- """This class includes several functions that can display/save examples and print/save logging information.
48
-
49
- It uses a Python library 'visdom' for display, and a Python library 'dominate' (wrapped in 'HTML') for creating HTML files with examples.
50
- """
51
-
52
- def __init__(self, opt):
53
- """Initialize the Visualizer class
54
-
55
- Parameters:
56
- opt -- stores all the experiment flags; needs to be a subclass of BaseOptions
57
- Step 1: Cache the training/test options
58
- Step 2: connect to a visdom server
- Step 3: create an HTML object for saving HTML filters
- Step 4: create a logging file to store training losses
- """
- self.opt = opt # cache the option
- if opt.display_id is None:
- self.display_id = np.random.randint(100000) * 10 # just a random display id
- else:
- self.display_id = opt.display_id
- self.use_html = opt.isTrain and not opt.no_html
- self.win_size = opt.display_winsize
- self.name = opt.name
- self.port = opt.display_port
- self.saved = False
- if self.display_id > 0: # connect to a visdom server given <display_port> and <display_server>
- import visdom
- self.plot_data = {}
- self.ncols = opt.display_ncols
- self.vis = visdom.Visdom(server=opt.display_server, port=opt.display_port, env=opt.display_env)
- if not self.vis.check_connection():
- self.create_visdom_connections()
-
- if self.use_html: # create an HTML object at <checkpoints_dir>/web/; examples will be saved under <checkpoints_dir>/web/examples/
- self.web_dir = os.path.join(opt.checkpoints_dir, opt.name, 'web')
- self.img_dir = os.path.join(self.web_dir, 'examples')
- print('create web directory %s...' % self.web_dir)
- util.mkdirs([self.web_dir, self.img_dir])
- # create a logging file to store training losses
- self.log_name = os.path.join(opt.checkpoints_dir, opt.name, 'loss_log.txt')
- with open(self.log_name, "a") as log_file:
- now = time.strftime("%c")
- log_file.write('================ Training Loss (%s) ================\n' % now)
-
- def reset(self):
- """Reset the self.saved status"""
- self.saved = False
-
- def create_visdom_connections(self):
- """If the program could not connect to a Visdom server, start a new server at port <self.port>."""
- cmd = sys.executable + ' -m visdom.server -p %d &>/dev/null &' % self.port
- print('\n\nCould not connect to Visdom server. \n Trying to start a server....')
- print('Command: %s' % cmd)
- Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
-
- def display_current_results(self, visuals, epoch, save_result):
- """Display current results on visdom; save current results to an HTML file.
-
- Parameters:
- visuals (OrderedDict) - - dictionary of examples to display or save
- epoch (int) - - the current epoch
- save_result (bool) - - whether to save the current results to an HTML file
- """
- if self.display_id > 0: # show examples in the browser using visdom
- ncols = self.ncols
- if ncols > 0: # show all the examples in one visdom panel
- ncols = min(ncols, len(visuals))
- h, w = next(iter(visuals.values())).shape[:2]
- table_css = """<style>
- table {border-collapse: separate; border-spacing: 4px; white-space: nowrap; text-align: center}
- table td {width: %dpx; height: %dpx; padding: 4px; outline: 4px solid black}
- </style>""" % (w, h) # create a table css
- # create a table of examples.
- title = self.name
- label_html = ''
- label_html_row = ''
- images = []
- idx = 0
- for label, image in visuals.items():
- image_numpy = util.tensor2im(image)
- label_html_row += '<td>%s</td>' % label
- images.append(image_numpy.transpose([2, 0, 1]))
- idx += 1
- if idx % ncols == 0:
- label_html += '<tr>%s</tr>' % label_html_row
- label_html_row = ''
- white_image = np.ones_like(image_numpy.transpose([2, 0, 1])) * 255
- while idx % ncols != 0:
- images.append(white_image)
- label_html_row += '<td></td>'
- idx += 1
- if label_html_row != '':
- label_html += '<tr>%s</tr>' % label_html_row
- try:
- self.vis.images(images, nrow=ncols, win=self.display_id + 1,
- padding=2, opts=dict(title=title + ' examples'))
- label_html = '<table>%s</table>' % label_html
- self.vis.text(table_css + label_html, win=self.display_id + 2,
- opts=dict(title=title + ' labels'))
- except VisdomExceptionBase:
- self.create_visdom_connections()
-
- else: # show each image in a separate visdom panel;
- idx = 1
- try:
- for label, image in visuals.items():
- image_numpy = util.tensor2im(image)
- self.vis.image(image_numpy.transpose([2, 0, 1]), opts=dict(title=label),
- win=self.display_id + idx)
- idx += 1
- except VisdomExceptionBase:
- self.create_visdom_connections()
-
- if self.use_html and (save_result or not self.saved): # save examples to an HTML file if they haven't been saved.
- self.saved = True
- # save examples to the disk
- for label, image in visuals.items():
- image_numpy = util.tensor2im(image)
- img_path = os.path.join(self.img_dir, 'epoch%.3d_%s.png' % (epoch, label))
- util.save_image(image_numpy, img_path)
-
- # update website
- webpage = html.HTML(self.web_dir, 'Experiment name = %s' % self.name, refresh=1)
- for n in range(epoch, 0, -1):
- webpage.add_header('epoch [%d]' % n)
- ims, txts, links = [], [], []
-
- for label, image_numpy in visuals.items():
- img_path = 'epoch%.3d_%s.png' % (n, label)
- ims.append(img_path)
- txts.append(label)
- links.append(img_path)
- webpage.add_images(ims, txts, links, width=self.win_size)
- webpage.save()
-
- def plot_current_losses(self, epoch, counter_ratio, losses):
- """Display the current losses on the visdom display: dictionary of error labels and values.
-
- Parameters:
- epoch (int) -- current epoch
- counter_ratio (float) -- progress (percentage) in the current epoch, between 0 and 1
- losses (OrderedDict) -- training losses stored in the format of (name, float) pairs
- """
- if len(losses) == 0:
- return
-
- plot_name = '_'.join(list(losses.keys()))
-
- if plot_name not in self.plot_data:
- self.plot_data[plot_name] = {'X': [], 'Y': [], 'legend': list(losses.keys())}
-
- plot_data = self.plot_data[plot_name]
- plot_id = list(self.plot_data.keys()).index(plot_name)
-
- plot_data['X'].append(epoch + counter_ratio)
- plot_data['Y'].append([losses[k] for k in plot_data['legend']])
- try:
- self.vis.line(
- X=np.stack([np.array(plot_data['X'])] * len(plot_data['legend']), 1),
- Y=np.array(plot_data['Y']),
- opts={
- 'title': self.name,
- 'legend': plot_data['legend'],
- 'xlabel': 'epoch',
- 'ylabel': 'loss'},
- win=self.display_id - plot_id)
- except VisdomExceptionBase:
- self.create_visdom_connections()
-
- # losses: same format as |losses| of plot_current_losses
- def print_current_losses(self, epoch, iters, losses, t_comp, t_data):
- """Print current losses on the console; also save the losses to disk.
-
- Parameters:
- epoch (int) -- current epoch
- iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch)
- losses (OrderedDict) -- training losses stored in the format of (name, float) pairs
- t_comp (float) -- computational time per data point (normalized by batch_size)
- t_data (float) -- data loading time per data point (normalized by batch_size)
- """
- message = '(epoch: %d, iters: %d, time: %.3f, data: %.3f) ' % (epoch, iters, t_comp, t_data)
- for k, v in losses.items():
- message += '%s: %.3f ' % (k, v)
-
- print(message) # print the message
- with open(self.log_name, "a") as log_file:
- log_file.write('%s\n' % message) # save the message
spaces/Awiny/Image2Paragraph/models/segment_models/edit_anything_model.py DELETED
@@ -1,62 +0,0 @@
- import cv2
- import torch
- import mmcv
- import numpy as np
- from PIL import Image
- from utils.util import resize_long_edge
- from concurrent.futures import ThreadPoolExecutor
- import time
-
- class EditAnything:
- def __init__(self, image_caption_model):
- self.device = image_caption_model.device
- self.data_type = image_caption_model.data_type
- self.image_caption_model = image_caption_model
-
- def region_classify_w_blip2(self, images):
- inputs = self.image_caption_model.processor(images=images, return_tensors="pt").to(self.device, self.data_type)
- generated_ids = self.image_caption_model.model.generate(**inputs)
- generated_texts = self.image_caption_model.processor.batch_decode(generated_ids, skip_special_tokens=True)
- return [text.strip() for text in generated_texts]
-
- def process_ann(self, ann, image, target_size=(224, 224)):
- start_time = time.time()
- m = ann['segmentation']
- m_3c = m[:, :, np.newaxis]
- m_3c = np.concatenate((m_3c, m_3c, m_3c), axis=2)
- bbox = ann['bbox']
- region = mmcv.imcrop(image * m_3c, np.array([bbox[0], bbox[1], bbox[0] + bbox[2], bbox[1] + bbox[3]]), scale=1)
- resized_region = mmcv.imresize(region, target_size)
- end_time = time.time()
- print("process_ann took {:.2f} seconds".format(end_time - start_time))
- return resized_region, ann
-
- def region_level_semantic_api(self, image, anns, topk=5):
- """
- Rank regions by area and classify each region with BLIP-2, using parallel processing for speed-up.
- Args:
- image: numpy array
- topk: int
- Returns:
- topk_region_w_class_label: list of dict with key 'class_label'
- """
- start_time = time.time()
- if len(anns) == 0:
- return []
- sorted_anns = sorted(anns, key=(lambda x: x['area']), reverse=True)
- topk_anns = sorted_anns[:min(topk, len(sorted_anns))]
- with ThreadPoolExecutor() as executor:
- regions_and_anns = list(executor.map(lambda ann: self.process_ann(ann, image), topk_anns))
- regions = [region for region, _ in regions_and_anns]
- region_class_labels = self.region_classify_w_blip2(regions)
- for (region, ann), class_label in zip(regions_and_anns, region_class_labels):
- ann['class_name'] = class_label
- end_time = time.time()
- print("region_level_semantic_api took {:.2f} seconds".format(end_time - start_time))
-
- return [ann for _, ann in regions_and_anns]
-
- def semantic_class_w_mask(self, img_src, anns):
- image = Image.open(img_src)
- image = resize_long_edge(image, 384)
- return self.region_level_semantic_api(image, anns)
spaces/Bart92/RVC_HF/demucs/train.py DELETED
@@ -1,127 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates.
- # All rights reserved.
- #
- # This source code is licensed under the license found in the
- # LICENSE file in the root directory of this source tree.
-
- import sys
-
- import tqdm
- from torch.utils.data import DataLoader
- from torch.utils.data.distributed import DistributedSampler
-
- from .utils import apply_model, average_metric, center_trim
-
-
- def train_model(epoch,
- dataset,
- model,
- criterion,
- optimizer,
- augment,
- quantizer=None,
- diffq=0,
- repeat=1,
- device="cpu",
- seed=None,
- workers=4,
- world_size=1,
- batch_size=16):
-
- if world_size > 1:
- sampler = DistributedSampler(dataset)
- sampler_epoch = epoch * repeat
- if seed is not None:
- sampler_epoch += seed * 1000
- sampler.set_epoch(sampler_epoch)
- batch_size //= world_size
- loader = DataLoader(dataset, batch_size=batch_size, sampler=sampler, num_workers=workers)
- else:
- loader = DataLoader(dataset, batch_size=batch_size, num_workers=workers, shuffle=True)
- current_loss = 0
- model_size = 0
- for repetition in range(repeat):
- tq = tqdm.tqdm(loader,
- ncols=120,
- desc=f"[{epoch:03d}] train ({repetition + 1}/{repeat})",
- leave=False,
- file=sys.stdout,
- unit=" batch")
- total_loss = 0
- for idx, sources in enumerate(tq):
- if len(sources) < batch_size:
- # skip incomplete batch for augment.Remix to work properly
- continue
- sources = sources.to(device)
- sources = augment(sources)
- mix = sources.sum(dim=1)
-
- estimates = model(mix)
- sources = center_trim(sources, estimates)
- loss = criterion(estimates, sources)
- model_size = 0
- if quantizer is not None:
- model_size = quantizer.model_size()
-
- train_loss = loss + diffq * model_size
- train_loss.backward()
- grad_norm = 0
- for p in model.parameters():
- if p.grad is not None:
- grad_norm += p.grad.data.norm()**2
- grad_norm = grad_norm**0.5
- optimizer.step()
- optimizer.zero_grad()
-
- if quantizer is not None:
- model_size = model_size.item()
-
- total_loss += loss.item()
- current_loss = total_loss / (1 + idx)
- tq.set_postfix(loss=f"{current_loss:.4f}", ms=f"{model_size:.2f}",
- grad=f"{grad_norm:.5f}")
-
- # free some space before next round
- del sources, mix, estimates, loss, train_loss
-
- if world_size > 1:
- sampler.epoch += 1
-
- if world_size > 1:
- current_loss = average_metric(current_loss)
- return current_loss, model_size
-
-
- def validate_model(epoch,
- dataset,
- model,
- criterion,
- device="cpu",
- rank=0,
- world_size=1,
- shifts=0,
- overlap=0.25,
- split=False):
- indexes = range(rank, len(dataset), world_size)
- tq = tqdm.tqdm(indexes,
- ncols=120,
- desc=f"[{epoch:03d}] valid",
- leave=False,
- file=sys.stdout,
- unit=" track")
- current_loss = 0
- for index in tq:
- streams = dataset[index]
- # first five minutes to avoid OOM on --upsample models
- streams = streams[..., :15_000_000]
- streams = streams.to(device)
- sources = streams[1:]
- mix = streams[0]
- estimates = apply_model(model, mix, shifts=shifts, split=split, overlap=overlap)
- loss = criterion(estimates, sources)
- current_loss += loss.item() / len(indexes)
- del estimates, streams, sources
-
- if world_size > 1:
- current_loss = average_metric(current_loss, len(indexes))
- return current_loss
spaces/Benson/text-generation/Examples/Amor.ly App Descargar Apk.md DELETED
@@ -1,62 +0,0 @@
-
- <h1>Love.ly App: A Creative Way to Capture and Share Your Video Story</h1>
- <p>Do you love making videos and sharing them with your friends and followers? Do you want to express yourself through live video streaming and connect with people from all over the world? Do you want to sell your products or services through live video chat and boost your sales? If you answered yes to any of these questions, then you should check out the Love.ly app, a creative way to capture and share your video story.</p>
- <h2>What is the Love.ly app?</h2>
- <p>The Love.ly app is a free video-calling app that lets you make live video calls and chat with random people, make new friends, and even sell your products or services through live video streaming. You can also show your emotions to your favorite streamers by sending them cool virtual gifts and enjoy exclusive perks as you level up. The Love.ly app is available for iOS and Android devices, and you can download it from the App Store or Google Play Store.</p>
- <h2>love.ly app download apk</h2><br /><p><b><b>Download</b> &mdash;&mdash;&mdash;>>> <a href="https://bltlly.com/2v6LQL">https://bltlly.com/2v6LQL</a></b></p><br /><br />
- <h3>Features of the Love.ly app</h3>
- <p>Some of the features of the Love.ly app are:</p>
- <ul>
- <li>You can make free video calls anytime, anywhere, with anyone.</li>
- <li>You can connect with people from different countries and cultures and learn something new.</li>
- <li>You can create your own video story and share it with your friends and followers.</li>
- <li>You can sell your products or services through live video chat and get instant feedback from your customers.</li>
- <li>You can send and receive virtual gifts and coins and use them to unlock more features and rewards.</li>
- <li>You can join or create group video chats with your friends and have fun together.</li>
- </ul>
- <h3>How to download and install the Love.ly app on your device</h3>
- <p>To download and install the Love.ly app on your device, follow these simple steps:</p>
- <ol>
- <li>Go to the App Store or Google Play Store on your device.</li>
-
- <li>Tap on the app icon and then tap the "Install" or "Get" button.</li>
- <li>Wait for the app to download and install on your device.</li>
- <li>Open the app and sign up with your email, phone number, or social media account.</li>
- <li>Start making video calls and enjoy the app.</li>
- </ol>
- <h2>Why should you use the Love.ly app?</h2>
- <p>There are many reasons why you should use the Love.ly app, such as:</p>
- <h3>Benefits of using the Love.ly app</h3>
- <ul>
- <li>You can express yourself creatively and authentically through live video streaming.</li>
- <li>You can meet new people, make new friends, and expand your social network.</li>
- <li>You can boost your sales, promote your brand, and grow your business through live video chat.</li>
- <li>You can have fun, relax, and be entertained by watching or joining live video streams.</li>
- <li>You can earn coins, gifts, and rewards by using the app regularly.</li>
- </ul>
- <h3>Tips and tricks for using the Love.ly app</h3>
- <ul>
- <li>Use a good camera, microphone, and lighting to improve your video quality.</li>
- <li>Choose a catchy title, description, and thumbnail for your video stream to attract more viewers.</li>
- <li>Interact with your viewers, answer their questions, thank them for their gifts, and ask them to follow you.</li>
- <li>Be respectful, polite, and friendly to everyone on the app. Report any inappropriate behavior or content.</li>
- <li>Explore different categories, topics, and hashtags on the app to find interesting streams or people to watch or chat with.</li>
- </ul>
- <h2>Conclusion</h2>
-
- <p>In this article, we have discussed what the Love.ly app is, what features it has, how to download and install it on your device, why you should use it, and some tips and tricks for using it. We hope this article has helped you learn more about the Love.ly app and inspired you to try it. If you have any questions or comments, feel free to contact us or leave a comment below.</p>
- <h2>Frequently Asked Questions</h2>
- <p>Here are some frequently asked questions about the Love.ly app:</p>
- <p></p>
- <h4>Q: Is the Love.ly app safe?</h4>
- <p>A: Yes, the Love.ly app is safe. It uses encryption and privacy protection to ensure that your personal information and data are not leaked or misused. You can also block or report any user who bothers you on the app.</p>
- <h4>Q: How can I earn coins and gifts on the Love.ly app?</h4>
- <p>A: You can earn coins and gifts on the Love.ly app by watching or joining live video streams, sending or receiving virtual gifts, inviting your friends to join the app, completing daily tasks, and taking part in events and activities. You can use the coins and gifts to unlock more features and rewards on the app.</p>
- <h4>Q: How can I become a streamer on the Love.ly app?</h4>
- <p>A: You can become a streamer on the Love.ly app by creating your own video story and sharing it with your friends and followers. You can also apply to become a verified streamer on the app by meeting certain requirements and criteria. As a streamer, you can earn money by receiving gifts from your viewers, selling your products or services through live video chat, and joining the partner program.</p>
- <h4>Q: What are the categories and topics on the Love.ly app?</h4>
- <p>A: There are various categories and topics on the Love.ly app that you can explore, such as music, dance, beauty, fashion, gaming, sports, travel, education, lifestyle, comedy, art, and more. You can also search for specific hashtags or keywords to find streams or people that match your interests.</p>
-
- <p>A: You can contact the Love.ly app's customer support by sending an email to [email protected] or filling out the feedback form in the app. You can also visit the official Love.ly app website at www.love.ly.com for more information and updates.</p> 64aa2da5cf<br />
- <br />
- <br />
spaces/BetterAPI/BetterChat/src/lib/stores/errors.ts DELETED
@@ -1,7 +0,0 @@
- import { writable } from "svelte/store";
-
- export const ERROR_MESSAGES = {
- 	default: "Oops, something went wrong.",
- };
-
- export const error = writable<string | null>(null);
- export const error = writable<string | null>(null);
spaces/BetterAPI/BetterChat_new/src/lib/stores/pendingMessage.ts DELETED
@@ -1,3 +0,0 @@
- import { writable } from "svelte/store";
-
- export const pendingMessage = writable<string>("");
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/metadata/pkg_resources.py DELETED
@@ -1,270 +0,0 @@
- import email.message
- import email.parser
- import logging
- import os
- import zipfile
- from typing import Collection, Iterable, Iterator, List, Mapping, NamedTuple, Optional
-
- from pip._vendor import pkg_resources
- from pip._vendor.packaging.requirements import Requirement
- from pip._vendor.packaging.utils import NormalizedName, canonicalize_name
- from pip._vendor.packaging.version import parse as parse_version
-
- from pip._internal.exceptions import InvalidWheel, NoneMetadataError, UnsupportedWheel
- from pip._internal.utils.egg_link import egg_link_path_from_location
- from pip._internal.utils.misc import display_path, normalize_path
- from pip._internal.utils.wheel import parse_wheel, read_wheel_metadata_file
-
- from .base import (
- BaseDistribution,
- BaseEntryPoint,
- BaseEnvironment,
- DistributionVersion,
- InfoPath,
- Wheel,
- )
-
- logger = logging.getLogger(__name__)
-
-
- class EntryPoint(NamedTuple):
- name: str
- value: str
- group: str
-
-
- class InMemoryMetadata:
- """IMetadataProvider that reads metadata files from a dictionary.
-
- This also maps metadata decoding exceptions to our internal exception type.
- """
-
- def __init__(self, metadata: Mapping[str, bytes], wheel_name: str) -> None:
- self._metadata = metadata
- self._wheel_name = wheel_name
-
- def has_metadata(self, name: str) -> bool:
- return name in self._metadata
-
- def get_metadata(self, name: str) -> str:
- try:
- return self._metadata[name].decode()
- except UnicodeDecodeError as e:
- # Augment the default error with the origin of the file.
- raise UnsupportedWheel(
- f"Error decoding metadata for {self._wheel_name}: {e} in {name} file"
- )
-
- def get_metadata_lines(self, name: str) -> Iterable[str]:
- return pkg_resources.yield_lines(self.get_metadata(name))
-
- def metadata_isdir(self, name: str) -> bool:
- return False
-
- def metadata_listdir(self, name: str) -> List[str]:
- return []
-
- def run_script(self, script_name: str, namespace: str) -> None:
- pass
-
-
- class Distribution(BaseDistribution):
- def __init__(self, dist: pkg_resources.Distribution) -> None:
- self._dist = dist
-
- @classmethod
- def from_directory(cls, directory: str) -> BaseDistribution:
- dist_dir = directory.rstrip(os.sep)
-
- # Build a PathMetadata object, from path to metadata. :wink:
- base_dir, dist_dir_name = os.path.split(dist_dir)
- metadata = pkg_resources.PathMetadata(base_dir, dist_dir)
-
- # Determine the correct Distribution object type.
- if dist_dir.endswith(".egg-info"):
- dist_cls = pkg_resources.Distribution
- dist_name = os.path.splitext(dist_dir_name)[0]
- else:
- assert dist_dir.endswith(".dist-info")
- dist_cls = pkg_resources.DistInfoDistribution
- dist_name = os.path.splitext(dist_dir_name)[0].split("-")[0]
-
- dist = dist_cls(base_dir, project_name=dist_name, metadata=metadata)
- return cls(dist)
-
- @classmethod
- def from_metadata_file_contents(
- cls,
- metadata_contents: bytes,
- filename: str,
- project_name: str,
- ) -> BaseDistribution:
- metadata_dict = {
- "METADATA": metadata_contents,
- }
- dist = pkg_resources.DistInfoDistribution(
- location=filename,
- metadata=InMemoryMetadata(metadata_dict, filename),
- project_name=project_name,
- )
- return cls(dist)
-
- @classmethod
- def from_wheel(cls, wheel: Wheel, name: str) -> BaseDistribution:
- try:
- with wheel.as_zipfile() as zf:
- info_dir, _ = parse_wheel(zf, name)
- metadata_dict = {
- path.split("/", 1)[-1]: read_wheel_metadata_file(zf, path)
- for path in zf.namelist()
- if path.startswith(f"{info_dir}/")
- }
- except zipfile.BadZipFile as e:
- raise InvalidWheel(wheel.location, name) from e
- except UnsupportedWheel as e:
- raise UnsupportedWheel(f"{name} has an invalid wheel, {e}")
- dist = pkg_resources.DistInfoDistribution(
- location=wheel.location,
- metadata=InMemoryMetadata(metadata_dict, wheel.location),
- project_name=name,
- )
- return cls(dist)
-
- @property
- def location(self) -> Optional[str]:
- return self._dist.location
-
- @property
- def installed_location(self) -> Optional[str]:
- egg_link = egg_link_path_from_location(self.raw_name)
- if egg_link:
- location = egg_link
- elif self.location:
- location = self.location
- else:
- return None
- return normalize_path(location)
-
- @property
- def info_location(self) -> Optional[str]:
- return self._dist.egg_info
-
- @property
- def installed_by_distutils(self) -> bool:
- # A distutils-installed distribution is provided by FileMetadata. This
- # provider has a "path" attribute not present anywhere else. Not the
- # best introspection logic, but pip has been doing this for a long time.
- try:
- return bool(self._dist._provider.path)
- except AttributeError:
- return False
-
- @property
- def canonical_name(self) -> NormalizedName:
- return canonicalize_name(self._dist.project_name)
-
- @property
- def version(self) -> DistributionVersion:
- return parse_version(self._dist.version)
-
- def is_file(self, path: InfoPath) -> bool:
- return self._dist.has_metadata(str(path))
-
- def iter_distutils_script_names(self) -> Iterator[str]:
- yield from self._dist.metadata_listdir("scripts")
-
- def read_text(self, path: InfoPath) -> str:
- name = str(path)
- if not self._dist.has_metadata(name):
- raise FileNotFoundError(name)
- content = self._dist.get_metadata(name)
- if content is None:
- raise NoneMetadataError(self, name)
- return content
-
- def iter_entry_points(self) -> Iterable[BaseEntryPoint]:
- for group, entries in self._dist.get_entry_map().items():
- for name, entry_point in entries.items():
- name, _, value = str(entry_point).partition("=")
- yield EntryPoint(name=name.strip(), value=value.strip(), group=group)
-
- def _metadata_impl(self) -> email.message.Message:
- """
- :raises NoneMetadataError: if the distribution reports `has_metadata()`
- True but `get_metadata()` returns None.
- """
- if isinstance(self._dist, pkg_resources.DistInfoDistribution):
- metadata_name = "METADATA"
- else:
- metadata_name = "PKG-INFO"
- try:
- metadata = self.read_text(metadata_name)
- except FileNotFoundError:
- if self.location:
- displaying_path = display_path(self.location)
- else:
- displaying_path = repr(self.location)
- logger.warning("No metadata found in %s", displaying_path)
- metadata = ""
- feed_parser = email.parser.FeedParser()
- feed_parser.feed(metadata)
- return feed_parser.close()
-
- def iter_dependencies(self, extras: Collection[str] = ()) -> Iterable[Requirement]:
- if extras: # pkg_resources raises on invalid extras, so we sanitize.
- extras = frozenset(extras).intersection(self._dist.extras)
- return self._dist.requires(extras)
-
- def iter_provided_extras(self) -> Iterable[str]:
- return self._dist.extras
-
-
- class Environment(BaseEnvironment):
- def __init__(self, ws: pkg_resources.WorkingSet) -> None:
- self._ws = ws
-
- @classmethod
- def default(cls) -> BaseEnvironment:
- return cls(pkg_resources.working_set)
-
- @classmethod
- def from_paths(cls, paths: Optional[List[str]]) -> BaseEnvironment:
- return cls(pkg_resources.WorkingSet(paths))
-
- def _iter_distributions(self) -> Iterator[BaseDistribution]:
- for dist in self._ws:
- yield Distribution(dist)
-
- def _search_distribution(self, name: str) -> Optional[BaseDistribution]:
- """Find a distribution matching the ``name`` in the environment.
-
- This searches from *all* distributions available in the environment, to
- match the behavior of ``pkg_resources.get_distribution()``.
- """
- canonical_name = canonicalize_name(name)
- for dist in self.iter_all_distributions():
- if dist.canonical_name == canonical_name:
- return dist
- return None
-
- def get_distribution(self, name: str) -> Optional[BaseDistribution]:
- # Search the distribution by looking through the working set.
- dist = self._search_distribution(name)
- if dist:
- return dist
-
- # If distribution could not be found, call working_set.require to
- # update the working set, and try to find the distribution again.
- # This might happen for e.g. when you install a package twice, once
- # using setup.py develop and again using setup.py install. Now when
- # running pip uninstall twice, the package gets removed from the
- # working set in the first uninstall, so we have to populate the
- # working set again so that pip knows about it and the packages gets
- # picked up and is successfully uninstalled the second time too.
- try:
- # We didn't pass in any version specifiers, so this can never
- # raise pkg_resources.VersionConflict.
- self._ws.require(name)
- except pkg_resources.DistributionNotFound:
- return None
- return self._search_distribution(name)
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/build_clib.py DELETED
@@ -1,101 +0,0 @@
- import distutils.command.build_clib as orig
- from distutils.errors import DistutilsSetupError
- from distutils import log
- from setuptools.dep_util import newer_pairwise_group
-
-
- class build_clib(orig.build_clib):
-     """
-     Override the default build_clib behaviour to do the following:
-
-     1. Implement a rudimentary timestamp-based dependency system
-        so 'compile()' doesn't run every time.
-     2. Add more keys to the 'build_info' dictionary:
-         * obj_deps - specify dependencies for each object compiled.
-                      this should be a dictionary mapping a key
-                      with the source filename to a list of
-                      dependencies. Use an empty string for global
-                      dependencies.
-         * cflags - specify a list of additional flags to pass to
-                    the compiler.
-     """
-
-     def build_libraries(self, libraries):
-         for (lib_name, build_info) in libraries:
-             sources = build_info.get('sources')
-             if sources is None or not isinstance(sources, (list, tuple)):
-                 raise DistutilsSetupError(
-                     "in 'libraries' option (library '%s'), "
-                     "'sources' must be present and must be "
-                     "a list of source filenames" % lib_name)
-             sources = list(sources)
-
-             log.info("building '%s' library", lib_name)
-
-             # Make sure everything is the correct type.
-             # obj_deps should be a dictionary of keys as sources
-             # and a list/tuple of files that are its dependencies.
-             obj_deps = build_info.get('obj_deps', dict())
-             if not isinstance(obj_deps, dict):
-                 raise DistutilsSetupError(
-                     "in 'libraries' option (library '%s'), "
-                     "'obj_deps' must be a dictionary of "
-                     "type 'source: list'" % lib_name)
-             dependencies = []
-
-             # Get the global dependencies that are specified by the '' key.
-             # These will go into every source's dependency list.
-             global_deps = obj_deps.get('', list())
-             if not isinstance(global_deps, (list, tuple)):
-                 raise DistutilsSetupError(
-                     "in 'libraries' option (library '%s'), "
-                     "'obj_deps' must be a dictionary of "
-                     "type 'source: list'" % lib_name)
-
-             # Build the list to be used by newer_pairwise_group
-             # each source will be auto-added to its dependencies.
-             for source in sources:
-                 src_deps = [source]
-                 src_deps.extend(global_deps)
-                 extra_deps = obj_deps.get(source, list())
-                 if not isinstance(extra_deps, (list, tuple)):
-                     raise DistutilsSetupError(
-                         "in 'libraries' option (library '%s'), "
-                         "'obj_deps' must be a dictionary of "
-                         "type 'source: list'" % lib_name)
-                 src_deps.extend(extra_deps)
-                 dependencies.append(src_deps)
-
-             expected_objects = self.compiler.object_filenames(
-                 sources,
-                 output_dir=self.build_temp,
-             )
-
-             if (
-                 newer_pairwise_group(dependencies, expected_objects)
-                 != ([], [])
-             ):
-                 # First, compile the source code to object files in the library
-                 # directory.  (This should probably change to putting object
-                 # files in a temporary build directory.)
-                 macros = build_info.get('macros')
-                 include_dirs = build_info.get('include_dirs')
-                 cflags = build_info.get('cflags')
-                 self.compiler.compile(
-                     sources,
-                     output_dir=self.build_temp,
-                     macros=macros,
-                     include_dirs=include_dirs,
-                     extra_postargs=cflags,
-                     debug=self.debug
-                 )
-
-                 # Now "link" the object files together into a static library.
-                 # (On Unix at least, this isn't really linking -- it just
-                 # builds an archive.  Whatever.)
-                 self.compiler.create_static_lib(
-                     expected_objects,
-                     lib_name,
-                     output_dir=self.build_clib,
-                     debug=self.debug
-                 )
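As a side note on the deleted override above: the per-source dependency lists it feeds to `newer_pairwise_group` can be reproduced in isolation. The `build_info` values below are hypothetical (made up for illustration); the loop mirrors how `build_libraries` combined the `''` global key with per-source `obj_deps` entries.

```python
# Hypothetical build_info, using the extra keys the deleted build_clib documented.
build_info = {
    "sources": ["foo.c", "bar.c"],
    "obj_deps": {
        "": ["common.h"],    # '' key: global dependencies added to every source
        "foo.c": ["foo.h"],  # per-source extra dependencies
    },
    "cflags": ["-O2"],
}

# Rebuild the dependency lists the way the deleted loop did:
# each source depends on itself, then the global deps, then its own extras.
dependencies = []
global_deps = build_info["obj_deps"].get("", [])
for source in build_info["sources"]:
    src_deps = [source]
    src_deps.extend(global_deps)
    src_deps.extend(build_info["obj_deps"].get(source, []))
    dependencies.append(src_deps)

print(dependencies)  # [['foo.c', 'common.h', 'foo.h'], ['bar.c', 'common.h']]
```

Each inner list is then timestamp-compared against the matching expected object file, so `compile()` is skipped when nothing in the list is newer.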
spaces/CVPR/LIVE/pybind11/include/pybind11/detail/descr.h DELETED
@@ -1,100 +0,0 @@
- /*
-     pybind11/detail/descr.h: Helper type for concatenating type signatures at compile time
-
-     Copyright (c) 2016 Wenzel Jakob <[email protected]>
-
-     All rights reserved. Use of this source code is governed by a
-     BSD-style license that can be found in the LICENSE file.
- */
-
- #pragma once
-
- #include "common.h"
-
- PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE)
- PYBIND11_NAMESPACE_BEGIN(detail)
-
- #if !defined(_MSC_VER)
- #  define PYBIND11_DESCR_CONSTEXPR static constexpr
- #else
- #  define PYBIND11_DESCR_CONSTEXPR const
- #endif
-
- /* Concatenate type signatures at compile time */
- template <size_t N, typename... Ts>
- struct descr {
-     char text[N + 1];
-
-     constexpr descr() : text{'\0'} { }
-     constexpr descr(char const (&s)[N+1]) : descr(s, make_index_sequence<N>()) { }
-
-     template <size_t... Is>
-     constexpr descr(char const (&s)[N+1], index_sequence<Is...>) : text{s[Is]..., '\0'} { }
-
-     template <typename... Chars>
-     constexpr descr(char c, Chars... cs) : text{c, static_cast<char>(cs)..., '\0'} { }
-
-     static constexpr std::array<const std::type_info *, sizeof...(Ts) + 1> types() {
-         return {{&typeid(Ts)..., nullptr}};
-     }
- };
-
- template <size_t N1, size_t N2, typename... Ts1, typename... Ts2, size_t... Is1, size_t... Is2>
- constexpr descr<N1 + N2, Ts1..., Ts2...> plus_impl(const descr<N1, Ts1...> &a, const descr<N2, Ts2...> &b,
-                                                    index_sequence<Is1...>, index_sequence<Is2...>) {
-     return {a.text[Is1]..., b.text[Is2]...};
- }
-
- template <size_t N1, size_t N2, typename... Ts1, typename... Ts2>
- constexpr descr<N1 + N2, Ts1..., Ts2...> operator+(const descr<N1, Ts1...> &a, const descr<N2, Ts2...> &b) {
-     return plus_impl(a, b, make_index_sequence<N1>(), make_index_sequence<N2>());
- }
-
- template <size_t N>
- constexpr descr<N - 1> _(char const(&text)[N]) { return descr<N - 1>(text); }
- constexpr descr<0> _(char const(&)[1]) { return {}; }
-
- template <size_t Rem, size_t... Digits> struct int_to_str : int_to_str<Rem/10, Rem%10, Digits...> { };
- template <size_t...Digits> struct int_to_str<0, Digits...> {
-     static constexpr auto digits = descr<sizeof...(Digits)>(('0' + Digits)...);
- };
-
- // Ternary description (like std::conditional)
- template <bool B, size_t N1, size_t N2>
- constexpr enable_if_t<B, descr<N1 - 1>> _(char const(&text1)[N1], char const(&)[N2]) {
-     return _(text1);
- }
- template <bool B, size_t N1, size_t N2>
- constexpr enable_if_t<!B, descr<N2 - 1>> _(char const(&)[N1], char const(&text2)[N2]) {
-     return _(text2);
- }
-
- template <bool B, typename T1, typename T2>
- constexpr enable_if_t<B, T1> _(const T1 &d, const T2 &) { return d; }
- template <bool B, typename T1, typename T2>
- constexpr enable_if_t<!B, T2> _(const T1 &, const T2 &d) { return d; }
-
- template <size_t Size> auto constexpr _() -> decltype(int_to_str<Size / 10, Size % 10>::digits) {
-     return int_to_str<Size / 10, Size % 10>::digits;
- }
-
- template <typename Type> constexpr descr<1, Type> _() { return {'%'}; }
-
- constexpr descr<0> concat() { return {}; }
-
- template <size_t N, typename... Ts>
- constexpr descr<N, Ts...> concat(const descr<N, Ts...> &descr) { return descr; }
-
- template <size_t N, typename... Ts, typename... Args>
- constexpr auto concat(const descr<N, Ts...> &d, const Args &...args)
-     -> decltype(std::declval<descr<N + 2, Ts...>>() + concat(args...)) {
-     return d + _(", ") + concat(args...);
- }
-
- template <size_t N, typename... Ts>
- constexpr descr<N + 2, Ts...> type_descr(const descr<N, Ts...> &descr) {
-     return _("{") + descr + _("}");
- }
-
- PYBIND11_NAMESPACE_END(detail)
- PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE)
spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/iter_swap.h DELETED
@@ -1,44 +0,0 @@
- /*
-  *  Copyright 2008-2013 NVIDIA Corporation
-  *
-  *  Licensed under the Apache License, Version 2.0 (the "License");
-  *  you may not use this file except in compliance with the License.
-  *  You may obtain a copy of the License at
-  *
-  *      http://www.apache.org/licenses/LICENSE-2.0
-  *
-  *  Unless required by applicable law or agreed to in writing, software
-  *  distributed under the License is distributed on an "AS IS" BASIS,
-  *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  *  See the License for the specific language governing permissions and
-  *  limitations under the License.
-  */
-
- #pragma once
-
- #include <thrust/detail/config.h>
-
- // the purpose of this header is to #include the iter_swap.h header
- // of the sequential, host, and device systems. It should be #included in any
- // code which uses adl to dispatch iter_swap
-
- #include <thrust/system/detail/sequential/iter_swap.h>
-
- // SCons can't see through the #defines below to figure out what this header
- // includes, so we fake it out by specifying all possible files we might end up
- // including inside an #if 0.
- #if 0
- #include <thrust/system/cpp/detail/iter_swap.h>
- #include <thrust/system/cuda/detail/iter_swap.h>
- #include <thrust/system/omp/detail/iter_swap.h>
- #include <thrust/system/tbb/detail/iter_swap.h>
- #endif
-
- #define __THRUST_HOST_SYSTEM_ITER_SWAP_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/iter_swap.h>
- #include __THRUST_HOST_SYSTEM_ITER_SWAP_HEADER
- #undef __THRUST_HOST_SYSTEM_ITER_SWAP_HEADER
-
- #define __THRUST_DEVICE_SYSTEM_ITER_SWAP_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/iter_swap.h>
- #include __THRUST_DEVICE_SYSTEM_ITER_SWAP_HEADER
- #undef __THRUST_DEVICE_SYSTEM_ITER_SWAP_HEADER
-
spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/normalization/main.py DELETED
@@ -1,43 +0,0 @@
-
- import ast
- import pandas as pd
-
- from normalization.hand_normalization import normalize_hands_full
- from normalization.body_normalization import normalize_body_full
-
-
- # Load the dataset
- df = pd.read_csv("/Users/matyasbohacek/Documents/WLASL_test_15fps.csv", encoding="utf-8")
-
- # Retrieve metadata
- video_size_heights = df["video_size_height"].to_list()
- video_size_widths = df["video_size_width"].to_list()
-
- # Delete redundant (non-related) properties
- del df["video_size_height"]
- del df["video_size_width"]
-
- # Temporarily remove other relevant metadata
- labels = df["labels"].to_list()
- video_fps = df["video_fps"].to_list()
- del df["labels"]
- del df["video_fps"]
-
- # Convert the strings into lists
- convert = lambda x: ast.literal_eval(str(x))
- for column in df.columns:
-     df[column] = df[column].apply(convert)
-
- # Perform the normalizations
- df = normalize_hands_full(df)
- df, invalid_row_indexes = normalize_body_full(df)
-
- # Clear lists of items from deleted rows
- # labels = [t for i, t in enumerate(labels) if i not in invalid_row_indexes]
- # video_fps = [t for i, t in enumerate(video_fps) if i not in invalid_row_indexes]
-
- # Return the metadata back to the dataset
- df["labels"] = labels
- df["video_fps"] = video_fps
-
- df.to_csv("/Users/matyasbohacek/Desktop/WLASL_test_15fps_normalized.csv", encoding="utf-8", index=False)
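The `convert` lambda in the deleted script exists because each CSV cell stores a landmark list as its Python `repr` string; `ast.literal_eval` safely parses it back. A minimal round-trip (the cell value here is made up for illustration):

```python
import ast

# A CSV cell as the upstream extraction wrote it: a list serialized to text.
cell = "[0.12, 0.55, 0.91]"

convert = lambda x: ast.literal_eval(str(x))  # same converter as the deleted script
landmarks = convert(cell)
print(landmarks)  # [0.12, 0.55, 0.91]
```

Unlike `eval`, `literal_eval` only accepts Python literals, so a malformed or malicious cell raises `ValueError` instead of executing code.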
spaces/CVPR/lama-example/models/ade20k/segm_lib/utils/th.py DELETED
@@ -1,41 +0,0 @@
- import torch
- from torch.autograd import Variable
- import numpy as np
- import collections
-
- __all__ = ['as_variable', 'as_numpy', 'mark_volatile']
-
- def as_variable(obj):
-     if isinstance(obj, Variable):
-         return obj
-     if isinstance(obj, collections.Sequence):
-         return [as_variable(v) for v in obj]
-     elif isinstance(obj, collections.Mapping):
-         return {k: as_variable(v) for k, v in obj.items()}
-     else:
-         return Variable(obj)
-
- def as_numpy(obj):
-     if isinstance(obj, collections.Sequence):
-         return [as_numpy(v) for v in obj]
-     elif isinstance(obj, collections.Mapping):
-         return {k: as_numpy(v) for k, v in obj.items()}
-     elif isinstance(obj, Variable):
-         return obj.data.cpu().numpy()
-     elif torch.is_tensor(obj):
-         return obj.cpu().numpy()
-     else:
-         return np.array(obj)
-
- def mark_volatile(obj):
-     if torch.is_tensor(obj):
-         obj = Variable(obj)
-     if isinstance(obj, Variable):
-         obj.no_grad = True
-         return obj
-     elif isinstance(obj, collections.Mapping):
-         return {k: mark_volatile(o) for k, o in obj.items()}
-     elif isinstance(obj, collections.Sequence):
-         return [mark_volatile(o) for o in obj]
-     else:
-         return obj
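All three deleted helpers share one pattern: recurse through sequences and mappings, convert only at the leaves. Note they spell the ABCs as `collections.Sequence`/`collections.Mapping`, which was removed in Python 3.10; `collections.abc` is the surviving spelling. A dependency-free sketch of the pattern (the `map_nested` name and `float` leaf are illustrative, not from the module):

```python
from collections.abc import Mapping, Sequence

def map_nested(obj, leaf):
    """Apply `leaf` to every non-container value, recursing through
    mappings and sequences the way the deleted as_numpy/as_variable did."""
    if isinstance(obj, Mapping):
        return {k: map_nested(v, leaf) for k, v in obj.items()}
    # Exclude str/bytes: they are Sequences but should be treated as leaves.
    if isinstance(obj, Sequence) and not isinstance(obj, (str, bytes)):
        return [map_nested(v, leaf) for v in obj]
    return leaf(obj)

print(map_nested({"a": [1, 2], "b": 3}, float))  # {'a': [1.0, 2.0], 'b': 3.0}
```

The str/bytes guard matters in practice; the original helpers lack it and would recurse character-by-character into strings.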
spaces/CikeyQI/Yunzai/Yunzai/plugins/example/进群退群通知.js DELETED
@@ -1,64 +0,0 @@
- import plugin from '../../lib/plugins/plugin.js'
- export class newcomer extends plugin {
-   constructor() {
-     super({
-       name: '欢迎新人',
-       dsc: '新人入群欢迎',
-       /** https://oicqjs.github.io/oicq/#events */
-       event: 'notice.group.increase',
-       priority: 5000
-     })
-   }
-
-   /** Runs once for every received event */
-   async accept() {
-     if (this.e.user_id == this.e.self_id) return
-
-     /** Welcome message for new group members */
-     let msg = '欢迎新人!'
-     /** Cooldown: 30 s */
-     let cd = 30
-
-     /** cooldown check */
-     let key = `Yz:newcomers:${this.e.group_id}`
-     if (await redis.get(key)) return
-     redis.set(key, '1', { EX: cd })
-
-     /** reply */
-     await this.reply([
-       segment.at(this.e.user_id),
-       // segment.image(),
-       msg
-     ])
-   }
- }
-
- export class outNotice extends plugin {
-   constructor() {
-     super({
-       name: '退群通知',
-       dsc: 'xx退群了',
-       event: 'notice.group.decrease'
-     })
-
-     /** Text appended to the leave-group notice */
-     this.tips = '退群了'
-   }
-
-   async accept() {
-     if (this.e.user_id == this.e.self_id) return
-
-     let name, msg
-     if (this.e.member) {
-       name = this.e.member.card || this.e.member.nickname
-     }
-
-     if (name) {
-       msg = `${name}(${this.e.user_id}) ${this.tips}`
-     } else {
-       msg = `${this.e.user_id} ${this.tips}`
-     }
-     logger.mark(`[退出通知]${this.e.logText} ${msg}`)
-     await this.reply(msg)
-   }
- }
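The welcome plugin rate-limits itself with a per-group redis key set with an `EX` expiry, so at most one greeting fires per group per 30 seconds. The same pattern can be sketched without redis (`allow` and the in-memory `_cooldown` dict are stand-ins for illustration):

```python
import time

_cooldown = {}  # key -> expiry timestamp; stand-in for redis SET key '1' EX cd

def allow(group_id, cd=30):
    """Return True at most once per `cd` seconds per group, mirroring the
    `Yz:newcomers:<group_id>` cooldown key in the deleted plugin."""
    key = f"Yz:newcomers:{group_id}"
    now = time.monotonic()
    if _cooldown.get(key, 0) > now:
        return False          # key still "set": skip the greeting
    _cooldown[key] = now + cd  # equivalent of SET with EX expiry
    return True

print(allow(123), allow(123))  # True False
```

Using redis rather than process memory matters for the real bot: the cooldown survives restarts and is shared across workers.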
spaces/Cippppy/RegressionVisualization/app.py DELETED
@@ -1,271 +0,0 @@
- ## CHOOSE BETWEEN ALTAIR & MATPLOTLIB
-
- import gradio as gr
- import altair as alt
- import numpy as np
- import pandas as pd
- import matplotlib.pyplot as plt
- import time
-
- def make_plot(plot_type, a, epoch, progress=gr.Progress()):
-     if plot_type == "log":
-         return logReg(a=a, epoch=epoch, progress=progress)
-     elif plot_type == "lin":
-         return linReg(a=a,epoch=epoch, progress=progress)
-
-
- # a = learning rate
- # epoch = number of training iterations
- def logReg(a, epoch, progress):
-     #### generate random data-set ####
-     progress(0.2, desc="Generating Data")
-     time.sleep(1)
-     #np.random.seed(0) # set random seed (optional)
-
-     ## set mean and covariance of our datasets
-     mean1 = [20,35]
-     cov1 = [[100,100],[-100,100]]
-     mean2 = [60,70]
-     cov2 = [[100,100],[100,-100]]
-
-     ## concatenate values to set x values for datasets
-     x1, x2 = np.random.multivariate_normal(mean1, cov1, 100).T
-     x_1, x_2 = np.random.multivariate_normal(mean2, cov2, 100).T
-     x1 = (np.concatenate((x1, x_1), axis=0))/10
-     x2 = (np.concatenate((x2, x_2), axis=0))/10
-
-     ## set y values of datasets
-     y1 = np.zeros(100) # y[0:100] is zero dataset (dataset we want our decision boundary to be above)
-     y2 = np.ones(100) # y[101:200] is one dataset (dataset we want our decision boundary to be below)
-     y = np.concatenate((y1, y2), axis=0) # combine datasets into one term
-
-     w = np.matrix([(np.random.rand())/100,(np.random.rand())+0.0001/100]) # begin weights at random starting point
-     b = np.matrix([np.random.rand()]) # begin bias term at random starting point
-     wb = np.concatenate((b, w), axis=1) # combine w and b into one weight term
-     print('f = b + x1*w1 + x2*w2')
-     print('Starting weights:', 'f = ', wb[0,0],'+ x1', wb[0,1], '+ x2' , wb[0,2])
-
-     loss = np.empty([epoch]) # term to store all loss terms for plotting
-     iterat = np.empty([epoch]) # term to store all epoch numbers to be plotted vs loss
-     for n in range (epoch):
-         iterat[n] = n
-
-     progress(0.5, desc="Finding Loss & Regression")
-     time.sleep(1.5)
-
-     for p in range (epoch):
-         L, J = np.matrix([[0.0, 0.0, 0.0]]), 0.0 # reset gradient (∂J(w)/∂w) and loss for each epoch
-         #### Code the equations to solve for the loss and to update
-         #### the weights and biases for each epoch below.
-
-         #### Hint: you will need to use the for loop below to create a summation to solve
-         #### for wb and J (loss) for each epoch. xj has been given as a starting point.
-         for i in range(len(x1)):
-             xj = np.matrix([1,x1[i],x2[i]])
-
-             # y_hat = (y_hat or h_w(x) expression)
-             y_hat = 1 / (1 + np.exp(-(wb * xj.T)))
-             # J = (cost function, also referred to as L)
-             J = -((y[i] * np.log(y_hat)) + ((1 - y[i])*np.log(1 - y_hat)))
-             # d_J = (∂J(w)/∂w function, equation can be solved with information on slide 27)
-             d_J = ((y_hat) - y[i]) * xj
-             # wb = (weight updating equation)
-             wb = wb - a * (d_J)
-
-         loss[p] = J
-         if ((p % 100) == 0):
-             print('loss:', J,' Gradient (∂J(w)/∂w) [[b, w1, w2]]:',L[0])
-     print('Updated weights:', 'f = ', wb[0,0],'+ x1', wb[0,1], '+ x2' , wb[0,2])
-     equation = "f = {w1} + {w2}x1 + {w3}x2".format(w1 = wb[0,0], w2 = wb[0,1], w3 = wb[0,2])
-
-     ## Plot decision boundary and data
-
-     progress(0.8, desc="Plotting Data")
-     time.sleep(1.5)
-
-     scatterData1 = pd.DataFrame({'x': x1[1:100],
-                                  'y': x2[1:100]})
-     scatterFig1 = alt.Chart(scatterData1).mark_point().encode(
-         x='x:Q',
-         y='y:Q'
-     ).properties(
-         title="Decision Boundary"
-     )
-     scatterData2 = pd.DataFrame({'x': x1[101:200],
-                                  'y': x2[101:200]})
-     scatterFig2 = alt.Chart(scatterData2).mark_point(color='green').encode(
-         x='x:Q',
-         y='y:Q',
-     ).properties(
-         title="Decision Boundary"
-     )
-
-     y2 = np.array(np.array(-(x1*wb[0,1] + wb[0,0])/wb[0,2],dtype=float))
-
-     trendLine = pd.DataFrame({'x': x1.flatten(),
-                               'y': y2.flatten() })
-     trendLineFig = alt.Chart(trendLine).mark_line().encode(
-         x='x:Q',
-         y='y:Q'
-     ).properties(
-         title="Decision Boundary"
-     )
-
-     finalFig = scatterFig1 + scatterFig2 + trendLineFig
-
-     lossData = pd.DataFrame({'Number of Iterations': iterat[100:],
-                              'Loss Value': loss[100:] })
-     lossFig = alt.Chart(lossData).mark_line().encode(
-         x='Number of Iterations:Q',
-         y='Loss Value:Q'
-     ).properties(
-         title='Plot of loss values over number of iterations'
-     )
-
-     plt.figure()
-     plt.plot(x1[1:100],x2[1:100],'x', x1[101:200], x2[101:200],'x') # plot random data points
-     plt.plot(x1, -(x1*wb[0,1] + wb[0,0])/wb[0,2] , linestyle = 'solid') # plot decision boundary
-     plt.axis('equal')
-     plt.xlabel('x1')
-     plt.ylabel('x2')
-     plt.title('Decision Boundary')
-     plt.savefig("plt1.png")
-
-     ## Plot training loss v epoch
-     plt.figure()
-     plt.plot(iterat[100:],loss[100:],'x')
-     plt.xlabel('Epoch')
-     plt.ylabel('Loss')
-     plt.title('Training Loss v Epoch')
-     plt.savefig("plt2.png")
-
-     return [finalFig.interactive(),lossFig.interactive(),"plt1.png","plt2.png",str(loss[len(loss)-1]),str(equation)]
-
- # a = learning rate step size
- # epoch = number of training iterations
- def linReg(a, epoch, progress):
-     # generate random data-set
-     progress(0.2, desc="Generating Data")
-     time.sleep(1)
-     # np.random.seed(0) # choose random seed (optional)
-     x = np.random.rand(100, 1)
-     y = 2 + 3 * x + np.random.rand(100, 1)
-
-     # J = 0 # initialize J, this can be deleted once J is defined in the loop
-     w = np.matrix([np.random.rand(),np.random.rand()]) # slope and y-intercept
-     ite = epoch # number of training iterations
-
-     jList = []
-     numIte = []
-
-     # Write Linear Regression Code to Solve for w (slope and y-intercept) Here ##
-     progress(0.5, desc="Finding Loss & Regression")
-     time.sleep(1.5)
-
-     for p in range (ite):
-         for i in range(len(x)):
-             # Calculate w and J here
-             x_vec = np.matrix([x[i][0],1]) # Option 1 | Setting up a vector for x (x_vec[j] corresponds to w[j])
-             h = w * x_vec.T ## Hint: you may need to transpose x or w by adding .T to the end of the variable
-             w = w - a * (h - y[i]) * x_vec
-             J = (1/2) * (((h - y[i])) ** 2)
-             J = J.item()
-
-         jList.append(J)
-         numIte.append(p)
-         print('Loss:', J)
-
-     ## if done correctly the line should be in line with the data points ##
-
-     print('f = ', w[0,0],'x + ', w[0,1])
-     equation = "f = {w1}x + {w2}".format(w1 = w[0,0], w2 = w[0,1])
-
-     progress(0.8, desc="Plotting Data")
-     time.sleep(1.5)
-     y2 = np.array(np.array((w[0,1]+(w[0,0] * x)),dtype=float)).T
-
-     scatterData = pd.DataFrame({'x': x.flatten(),
-                                 'y': y.flatten()})
-     scatterFig = alt.Chart(scatterData).mark_point().encode(
-         x='x:Q',
-         y='y:Q'
-     ).properties(
-         title='Plot of random data values with linear regression line'
-     )
-
-     trendLine = pd.DataFrame({'x': x.flatten(),
-                               'y': y2.flatten() })
-     trendLineFig = alt.Chart(trendLine).mark_line().encode(
-         x='x:Q',
-         y='y:Q'
-     )
-
-     finalFig = scatterFig + trendLineFig
-
-     lossData = pd.DataFrame({'Number of Iterations': range(1,len(jList)+1),
-                              'Loss Value': jList })
-     lossFig = alt.Chart(lossData).mark_line().encode(
-         x='Number of Iterations:Q',
-         y='Loss Value:Q'
-     ).properties(
-         title='Plot of loss values over number of iterations'
-     )
-
-     # plot
-     plt.figure(1)
-     plt.scatter(x,y,s=ite)
-     plt.plot(x, w[0,1] + (w[0,0] * x), linestyle='solid')
-     plt.xlabel('x')
-     plt.ylabel('y')
-     plt.title('Plot of random data values with linear regression line')
-     plt.savefig("plt1.png")
-
-     plt.figure(2)
-     plt.plot(jList)
-     plt.xlabel('Number of Iterations')
-     plt.ylabel('Loss Value')
-     plt.title('Plot of loss values over number of iterations')
-     plt.savefig("plt2.png")
-
-     return [finalFig.interactive(),lossFig.interactive(),"plt1.png","plt2.png",str(jList[len(jList)-1]),str(equation)]
-
- with gr.Blocks(title="Regression Visualization") as demo:
-     gr.Markdown(
-     """
-     # Regression Visualization for Machine Learning
-     Choose your variables below to create a linear or logistic regression model!
-     """)
-     with gr.Row():
-         pack = gr.Radio(label="Plot Package",info="Choose 'MatPlot' for MatPlotLib, Choose 'Altair' for Altair",
-                         choices=['MatPlot','Altair'], value='Altair')
-         bType = gr.Radio(label="Regression Type",info="Choose 'log' for logistic, Choose 'lin' for linear",
-                          choices=['log','lin'], value='log')
-         l_rate = gr.Number(value=0.01,label="Learning Rate",info="Enter a value in the range 0.0 - 1.0")
-         epochs = gr.Number(value=100,label="Number of Epochs (Number of Training Iterations)",info="Enter an integer larger than 0",precision=0)
-         bStart = gr.Button(label="Start")
-     with gr.Row() as alt_row:
-         altPlot1 = gr.Plot()
-         altPlot2 = gr.Plot()
-     with gr.Row(visible=False) as mat_row:
-         matPlot1 = gr.Image(type='filepath',label="Regression Graph",height=600,width=600)
-         matPlot2 = gr.Image(type='filepath',label="Regression Graph",height=600,width=600)
-     loss = gr.Textbox(label="Final Loss Value")
-     equ = gr.Textbox(label="Equation for Plotted Line")
-     def changeComp(package):
-         if package == "Altair":
-             return {
-                 alt_row: gr.Row.update(visible=True),
-                 mat_row: gr.Row.update(visible=False)
-             }
-         else:
-             return {
-                 alt_row: gr.Row.update(visible=False),
-                 mat_row: gr.Row.update(visible=True)
-             }
-
-     pack.input(changeComp, show_progress=True, inputs=[pack], outputs=[alt_row, mat_row])
-     bStart.click(make_plot, show_progress=True, inputs=[bType,l_rate,epochs], outputs=[altPlot1,altPlot2, matPlot1, matPlot2, loss, equ])
-     demo.load()
-
- if __name__== "__main__" :
-     demo.queue().launch()
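The core of the deleted `linReg` loop is one stochastic gradient-descent step on the squared-error loss: for a sample `(x, y)`, predict `h = w*x + b`, then move `w` and `b` against the gradient of `J = ½(h − y)²`. The step can be checked by hand without NumPy (the `sgd_step` name is illustrative):

```python
def sgd_step(w, b, x, y, a):
    """One SGD update for y ≈ w*x + b, matching the deleted linReg loop:
    h = w*x + b, then w <- w - a*(h - y)*x, b <- b - a*(h - y)."""
    h = w * x + b
    err = h - y
    return w - a * err * x, b - a * err, 0.5 * err * err

# From (w, b) = (0, 0) with x=1, y=5, a=0.1: err = -5, so both
# parameters move +0.5 and the loss at this step is 0.5 * 25 = 12.5.
w, b, J = sgd_step(0.0, 0.0, x=1.0, y=5.0, a=0.1)
print(w, b, J)  # ≈ 0.5 0.5 12.5
```

The `logReg` loop is the same update with `h` replaced by the sigmoid `1/(1 + exp(-wb·x))` and `J` by the cross-entropy loss.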
spaces/ClearLove443/Robby-chatbot/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: Robby Chatbot
- emoji: 🌍
- colorFrom: pink
- colorTo: blue
- sdk: streamlit
- sdk_version: 1.25.0
- app_file: app.py
- pinned: false
- license: mit
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Covert1107/sd-diffusers-webui/README.md DELETED
@@ -1,14 +0,0 @@
- ---
- title: Sd Diffusers Webui
- emoji: 🐳
- colorFrom: purple
- colorTo: gray
- sdk: docker
- sdk_version: 3.9
- pinned: false
- license: openrail
- app_port: 7860
- duplicated_from: nyanko7/sd-diffusers-webui
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/FitsImagePlugin.py DELETED
@@ -1,73 +0,0 @@
- #
- # The Python Imaging Library
- # $Id$
- #
- # FITS file handling
- #
- # Copyright (c) 1998-2003 by Fredrik Lundh
- #
- # See the README file for information on usage and redistribution.
- #
-
- import math
-
- from . import Image, ImageFile
-
-
- def _accept(prefix):
-     return prefix[:6] == b"SIMPLE"
-
-
- class FitsImageFile(ImageFile.ImageFile):
-     format = "FITS"
-     format_description = "FITS"
-
-     def _open(self):
-         headers = {}
-         while True:
-             header = self.fp.read(80)
-             if not header:
-                 msg = "Truncated FITS file"
-                 raise OSError(msg)
-             keyword = header[:8].strip()
-             if keyword == b"END":
-                 break
-             value = header[8:].split(b"/")[0].strip()
-             if value.startswith(b"="):
-                 value = value[1:].strip()
-             if not headers and (not _accept(keyword) or value != b"T"):
-                 msg = "Not a FITS file"
-                 raise SyntaxError(msg)
-             headers[keyword] = value
-
-         naxis = int(headers[b"NAXIS"])
-         if naxis == 0:
-             msg = "No image data"
-             raise ValueError(msg)
-         elif naxis == 1:
-             self._size = 1, int(headers[b"NAXIS1"])
-         else:
-             self._size = int(headers[b"NAXIS1"]), int(headers[b"NAXIS2"])
-
-         number_of_bits = int(headers[b"BITPIX"])
-         if number_of_bits == 8:
-             self.mode = "L"
-         elif number_of_bits == 16:
-             self.mode = "I"
-             # rawmode = "I;16S"
-         elif number_of_bits == 32:
-             self.mode = "I"
-         elif number_of_bits in (-32, -64):
-             self.mode = "F"
-             # rawmode = "F" if number_of_bits == -32 else "F;64F"
-
-         offset = math.ceil(self.fp.tell() / 2880) * 2880
-         self.tile = [("raw", (0, 0) + self.size, offset, (self.mode, 0, -1))]
-
-
- # --------------------------------------------------------------------
- # Registry
-
- Image.register_open(FitsImageFile.format, FitsImageFile, _accept)
-
- Image.register_extensions(FitsImageFile.format, [".fit", ".fits"])
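The `_open` loop above parses FITS headers as fixed 80-byte "cards": keyword in bytes 0-7, then `= value`, with anything after `/` being a comment. The card-splitting logic can be exercised on its own (the `parse_card` name is illustrative; the slicing matches the deleted loop):

```python
def parse_card(card: bytes):
    """Split one 80-byte FITS header card into (keyword, value), the way
    the deleted _open loop does: keyword from bytes 0-7, comment after
    '/' discarded, leading '=' and surrounding whitespace stripped."""
    keyword = card[:8].strip()
    value = card[8:].split(b"/")[0].strip()
    if value.startswith(b"="):
        value = value[1:].strip()
    return keyword, value

card = b"NAXIS   =                    2 / number of axes".ljust(80)
print(parse_card(card))  # (b'NAXIS', b'2')
```

The reader then keeps consuming cards until it hits the `END` keyword, and rounds the data offset up to the next 2880-byte FITS block.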
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/arrayTools.py DELETED
@@ -1,422 +0,0 @@
1
- """Routines for calculating bounding boxes, point in rectangle calculations and
2
- so on.
3
- """
4
-
5
- from fontTools.misc.roundTools import otRound
6
- from fontTools.misc.vector import Vector as _Vector
7
- import math
8
- import warnings
9
-
10
-
11
- def calcBounds(array):
12
- """Calculate the bounding rectangle of a 2D points array.
13
-
14
- Args:
15
- array: A sequence of 2D tuples.
16
-
17
- Returns:
18
- A four-item tuple representing the bounding rectangle ``(xMin, yMin, xMax, yMax)``.
19
- """
20
- if not array:
21
- return 0, 0, 0, 0
22
- xs = [x for x, y in array]
23
- ys = [y for x, y in array]
24
- return min(xs), min(ys), max(xs), max(ys)
25
-
26
-
27
- def calcIntBounds(array, round=otRound):
28
- """Calculate the integer bounding rectangle of a 2D points array.
29
-
30
- Values are rounded to closest integer towards ``+Infinity`` using the
31
- :func:`fontTools.misc.fixedTools.otRound` function by default, unless
32
- an optional ``round`` function is passed.
33
-
34
- Args:
35
- array: A sequence of 2D tuples.
36
- round: A rounding function of type ``f(x: float) -> int``.
37
-
38
- Returns:
39
- A four-item tuple of integers representing the bounding rectangle:
40
- ``(xMin, yMin, xMax, yMax)``.
41
- """
42
- return tuple(round(v) for v in calcBounds(array))
43
-
44
-
45
- def updateBounds(bounds, p, min=min, max=max):
46
- """Add a point to a bounding rectangle.
47
-
48
- Args:
49
- bounds: A bounding rectangle expressed as a tuple
50
- ``(xMin, yMin, xMax, yMax)``.
51
- p: A 2D tuple representing a point.
52
- min,max: functions to compute the minimum and maximum.
53
-
54
- Returns:
55
- The updated bounding rectangle ``(xMin, yMin, xMax, yMax)``.
56
- """
57
- (x, y) = p
58
- xMin, yMin, xMax, yMax = bounds
59
- return min(xMin, x), min(yMin, y), max(xMax, x), max(yMax, y)
60
-
61
-
62
- def pointInRect(p, rect):
63
- """Test if a point is inside a bounding rectangle.
64
-
65
- Args:
66
- p: A 2D tuple representing a point.
67
- rect: A bounding rectangle expressed as a tuple
68
- ``(xMin, yMin, xMax, yMax)``.
69
-
70
- Returns:
71
- ``True`` if the point is inside the rectangle, ``False`` otherwise.
72
- """
73
- (x, y) = p
74
- xMin, yMin, xMax, yMax = rect
75
- return (xMin <= x <= xMax) and (yMin <= y <= yMax)
76
-
77
-
78
- def pointsInRect(array, rect):
79
- """Determine which points are inside a bounding rectangle.
80
-
81
- Args:
82
- array: A sequence of 2D tuples.
83
- rect: A bounding rectangle expressed as a tuple
84
- ``(xMin, yMin, xMax, yMax)``.
85
-
86
- Returns:
87
- A list containing the points inside the rectangle.
88
- """
89
- if len(array) < 1:
90
- return []
91
- xMin, yMin, xMax, yMax = rect
92
- return [(xMin <= x <= xMax) and (yMin <= y <= yMax) for x, y in array]
93
-
94
-
95
- def vectorLength(vector):
96
- """Calculate the length of the given vector.
97
-
98
- Args:
99
- vector: A 2D tuple.
100
-
101
- Returns:
102
- The Euclidean length of the vector.
103
- """
104
- x, y = vector
105
- return math.sqrt(x**2 + y**2)
106
-
107
-
108
- def asInt16(array):
109
- """Round a list of floats to 16-bit signed integers.
110
-
111
- Args:
112
- array: List of float values.
113
-
114
- Returns:
115
- A list of rounded integers.
116
- """
117
- return [int(math.floor(i + 0.5)) for i in array]
118
-
119
-
120
- def normRect(rect):
121
- """Normalize a bounding box rectangle.
122
-
123
- This function "turns the rectangle the right way up", so that the following
124
- holds::
125
-
126
- xMin <= xMax and yMin <= yMax
127
-
128
- Args:
129
- rect: A bounding rectangle expressed as a tuple
130
- ``(xMin, yMin, xMax, yMax)``.
131
-
132
- Returns:
133
- A normalized bounding rectangle.
134
- """
135
- (xMin, yMin, xMax, yMax) = rect
136
- return min(xMin, xMax), min(yMin, yMax), max(xMin, xMax), max(yMin, yMax)
137
-
-
- def scaleRect(rect, x, y):
-     """Scale a bounding box rectangle.
-
-     Args:
-         rect: A bounding rectangle expressed as a tuple
-             ``(xMin, yMin, xMax, yMax)``.
-         x: Factor to scale the rectangle along the X axis.
-         y: Factor to scale the rectangle along the Y axis.
-
-     Returns:
-         A scaled bounding rectangle.
-     """
-     (xMin, yMin, xMax, yMax) = rect
-     return xMin * x, yMin * y, xMax * x, yMax * y
-
-
- def offsetRect(rect, dx, dy):
-     """Offset a bounding box rectangle.
-
-     Args:
-         rect: A bounding rectangle expressed as a tuple
-             ``(xMin, yMin, xMax, yMax)``.
-         dx: Amount to offset the rectangle along the X axis.
-         dy: Amount to offset the rectangle along the Y axis.
-
-     Returns:
-         An offset bounding rectangle.
-     """
-     (xMin, yMin, xMax, yMax) = rect
-     return xMin + dx, yMin + dy, xMax + dx, yMax + dy
-
-
- def insetRect(rect, dx, dy):
-     """Inset a bounding box rectangle on all sides.
-
-     Args:
-         rect: A bounding rectangle expressed as a tuple
-             ``(xMin, yMin, xMax, yMax)``.
-         dx: Amount to inset the rectangle along the X axis.
-         dy: Amount to inset the rectangle along the Y axis.
-
-     Returns:
-         An inset bounding rectangle.
-     """
-     (xMin, yMin, xMax, yMax) = rect
-     return xMin + dx, yMin + dy, xMax - dx, yMax - dy
-
-
- def sectRect(rect1, rect2):
-     """Test for rectangle-rectangle intersection.
-
-     Args:
-         rect1: First bounding rectangle, expressed as a tuple
-             ``(xMin, yMin, xMax, yMax)``.
-         rect2: Second bounding rectangle.
-
-     Returns:
-         A boolean and a rectangle.
-         If the input rectangles intersect, returns ``True`` and the intersecting
-         rectangle. Returns ``False`` and ``(0, 0, 0, 0)`` if the input
-         rectangles don't intersect.
-     """
-     (xMin1, yMin1, xMax1, yMax1) = rect1
-     (xMin2, yMin2, xMax2, yMax2) = rect2
-     xMin, yMin, xMax, yMax = (
-         max(xMin1, xMin2),
-         max(yMin1, yMin2),
-         min(xMax1, xMax2),
-         min(yMax1, yMax2),
-     )
-     if xMin >= xMax or yMin >= yMax:
-         return False, (0, 0, 0, 0)
-     return True, (xMin, yMin, xMax, yMax)
-
-
- def unionRect(rect1, rect2):
-     """Determine the union of bounding rectangles.
-
-     Args:
-         rect1: First bounding rectangle, expressed as a tuple
-             ``(xMin, yMin, xMax, yMax)``.
-         rect2: Second bounding rectangle.
-
-     Returns:
-         The smallest rectangle in which both input rectangles are fully
-         enclosed.
-     """
-     (xMin1, yMin1, xMax1, yMax1) = rect1
-     (xMin2, yMin2, xMax2, yMax2) = rect2
-     xMin, yMin, xMax, yMax = (
-         min(xMin1, xMin2),
-         min(yMin1, yMin2),
-         max(xMax1, xMax2),
-         max(yMax1, yMax2),
-     )
-     return (xMin, yMin, xMax, yMax)
-
-
- def rectCenter(rect):
-     """Determine the center of a rectangle.
-
-     Args:
-         rect: Bounding rectangle, expressed as a tuple
-             ``(xMin, yMin, xMax, yMax)``.
-
-     Returns:
-         A 2D tuple representing the point at the center of the rectangle.
-     """
-     (xMin, yMin, xMax, yMax) = rect
-     return (xMin + xMax) / 2, (yMin + yMax) / 2
-
-
- def rectArea(rect):
-     """Determine the area of a rectangle.
-
-     Args:
-         rect: Bounding rectangle, expressed as a tuple
-             ``(xMin, yMin, xMax, yMax)``.
-
-     Returns:
-         The area of the rectangle.
-     """
-     (xMin, yMin, xMax, yMax) = rect
-     return (yMax - yMin) * (xMax - xMin)
-
-
- def intRect(rect):
-     """Round a rectangle to integer values.
-
-     Guarantees that the resulting rectangle is NOT smaller than the original.
-
-     Args:
-         rect: Bounding rectangle, expressed as a tuple
-             ``(xMin, yMin, xMax, yMax)``.
-
-     Returns:
-         A rounded bounding rectangle.
-     """
-     (xMin, yMin, xMax, yMax) = rect
-     xMin = int(math.floor(xMin))
-     yMin = int(math.floor(yMin))
-     xMax = int(math.ceil(xMax))
-     yMax = int(math.ceil(yMax))
-     return (xMin, yMin, xMax, yMax)
-
-
- def quantizeRect(rect, factor=1):
-     """
-     >>> bounds = (72.3, -218.4, 1201.3, 919.1)
-     >>> quantizeRect(bounds)
-     (72, -219, 1202, 920)
-     >>> quantizeRect(bounds, factor=10)
-     (70, -220, 1210, 920)
-     >>> quantizeRect(bounds, factor=100)
-     (0, -300, 1300, 1000)
-     """
-     if factor < 1:
-         raise ValueError(f"Expected quantization factor >= 1, found: {factor!r}")
-     xMin, yMin, xMax, yMax = normRect(rect)
-     return (
-         int(math.floor(xMin / factor) * factor),
-         int(math.floor(yMin / factor) * factor),
-         int(math.ceil(xMax / factor) * factor),
-         int(math.ceil(yMax / factor) * factor),
-     )
-
-
- class Vector(_Vector):
-     def __init__(self, *args, **kwargs):
-         warnings.warn(
-             "fontTools.misc.arrayTools.Vector has been deprecated, please use "
-             "fontTools.misc.vector.Vector instead.",
-             DeprecationWarning,
-         )
-
-
- def pairwise(iterable, reverse=False):
-     """Iterate over current and next items in iterable.
-
-     Args:
-         iterable: An iterable
-         reverse: If true, iterate in reverse order.
-
-     Returns:
-         An iterable yielding two elements per iteration.
-
-     Example:
-
-         >>> tuple(pairwise([]))
-         ()
-         >>> tuple(pairwise([], reverse=True))
-         ()
-         >>> tuple(pairwise([0]))
-         ((0, 0),)
-         >>> tuple(pairwise([0], reverse=True))
-         ((0, 0),)
-         >>> tuple(pairwise([0, 1]))
-         ((0, 1), (1, 0))
-         >>> tuple(pairwise([0, 1], reverse=True))
-         ((1, 0), (0, 1))
-         >>> tuple(pairwise([0, 1, 2]))
-         ((0, 1), (1, 2), (2, 0))
-         >>> tuple(pairwise([0, 1, 2], reverse=True))
-         ((2, 1), (1, 0), (0, 2))
-         >>> tuple(pairwise(['a', 'b', 'c', 'd']))
-         (('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a'))
-         >>> tuple(pairwise(['a', 'b', 'c', 'd'], reverse=True))
-         (('d', 'c'), ('c', 'b'), ('b', 'a'), ('a', 'd'))
-     """
-     if not iterable:
-         return
-     if reverse:
-         it = reversed(iterable)
-     else:
-         it = iter(iterable)
-     first = next(it, None)
-     a = first
-     for b in it:
-         yield (a, b)
-         a = b
-     yield (a, first)
-
-
- def _test():
-     """
-     >>> import math
-     >>> calcBounds([])
-     (0, 0, 0, 0)
-     >>> calcBounds([(0, 40), (0, 100), (50, 50), (80, 10)])
-     (0, 10, 80, 100)
-     >>> updateBounds((0, 0, 0, 0), (100, 100))
-     (0, 0, 100, 100)
-     >>> pointInRect((50, 50), (0, 0, 100, 100))
-     True
-     >>> pointInRect((0, 0), (0, 0, 100, 100))
-     True
-     >>> pointInRect((100, 100), (0, 0, 100, 100))
-     True
-     >>> not pointInRect((101, 100), (0, 0, 100, 100))
-     True
-     >>> list(pointsInRect([(50, 50), (0, 0), (100, 100), (101, 100)], (0, 0, 100, 100)))
-     [True, True, True, False]
-     >>> vectorLength((3, 4))
-     5.0
-     >>> vectorLength((1, 1)) == math.sqrt(2)
-     True
-     >>> list(asInt16([0, 0.1, 0.5, 0.9]))
-     [0, 0, 1, 1]
-     >>> normRect((0, 10, 100, 200))
-     (0, 10, 100, 200)
-     >>> normRect((100, 200, 0, 10))
-     (0, 10, 100, 200)
-     >>> scaleRect((10, 20, 50, 150), 1.5, 2)
-     (15.0, 40, 75.0, 300)
-     >>> offsetRect((10, 20, 30, 40), 5, 6)
-     (15, 26, 35, 46)
-     >>> insetRect((10, 20, 50, 60), 5, 10)
-     (15, 30, 45, 50)
-     >>> insetRect((10, 20, 50, 60), -5, -10)
-     (5, 10, 55, 70)
-     >>> intersects, rect = sectRect((0, 10, 20, 30), (0, 40, 20, 50))
-     >>> not intersects
-     True
-     >>> intersects, rect = sectRect((0, 10, 20, 30), (5, 20, 35, 50))
-     >>> intersects
-     True
-     >>> rect
-     (5, 20, 20, 30)
-     >>> unionRect((0, 10, 20, 30), (0, 40, 20, 50))
-     (0, 10, 20, 50)
-     >>> rectCenter((0, 0, 100, 200))
-     (50.0, 100.0)
-     >>> rectCenter((0, 0, 100, 199.0))
-     (50.0, 99.5)
-     >>> intRect((0.9, 2.9, 3.1, 4.1))
-     (0, 2, 4, 5)
-     """
-
-
- if __name__ == "__main__":
-     import sys
-     import doctest
-
-     sys.exit(doctest.testmod().failed)
 
 
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/qu2cu/qu2cu.c DELETED
The diff for this file is too large to render. See raw diff
 
spaces/DakMak/gradio-start/index.html DELETED
@@ -1,2 +0,0 @@
-
- <img src="qr-img.png" alt="qr-1234.png">